单片机论文外文文献和中文翻译(有出处)

计算机发展简史(Progress in Computers)

2004年2月5日在剑桥向IEE所作的荣誉演讲(Prestige Lecture)

莫里斯·威尔克斯(Maurice Wilkes)

计算机实验室

剑桥大学

第一批存储程序式计算机在1950年前后开始投入运行。我们在剑桥建造的那一台,即EDSAC(电子延迟存储自动计算机),于1949年夏天首次投入使用。

这些早期的实验性计算机是由像我这样背景各异的人建造的。我们都有丰富的电子工程经验,并且相信这些经验会对我们大有裨益。事实证明确实如此,尽管我们还得学习一些新东西。其中最重要的一点是必须正确处理瞬态信号:在电视机屏幕上只会引起一次无害闪光的东西,在计算机里却可能导致严重的错误。

就计算电路而言,我们面临的选择多得令人目不暇接。举例来说,我们可以像在EDSAC中那样用真空二极管做门电路,也可以采用两个栅极上都加控制信号的五极管,后一种方案在其他地方被广泛使用。这类选择一直存在,"逻辑系列"这个说法也随之流行起来。在计算机领域工作过的人都会记得TTL、ECL和CMOS,其中CMOS如今已占据主导地位。

在最初的那些年里,IEE(英国电气工程师学会)仍由电力工程主导。为了让无线电工程以及迅速发展的电子学(在IEE内部被称为"弱电工程")被正式承认为一门独立的学科,我们不得不打几场硬仗。我还记得,由于电力工程师的做事方式与我们不同,我们在组织会议时遇到过一些困难。一个让人略感恼火的地方是,IEE发表的所有论文都被要求以一段冗长的"先前实践"综述开头,而当根本不存在先前实践时,这是很难做到的。

60年代的巩固阶段

到20世纪50年代末、60年代初,英雄式的开拓阶段结束了,计算机领域开始真正发展起来。世界上计算机的数量大大增加,而且比最早期的机器可靠得多。高级语言的最初起步和第一批操作系统都可以归于那些年份。实验性的分时系统开始出现,计算机图形学最终也随之而来。

最重要的是,晶体管开始取代真空管。这一变化对当时的工程师是个严峻的挑战,他们必须忘掉自己熟悉的电路知识,从头再来。只能说,他们出色地经受住了这一挑战,这场转变进行得再顺利不过了。

小规模集成电路和小型机

很快,人们发现可以在同一块硅片上放置不止一个晶体管,集成电路由此诞生。随着时间的推移,集成度提高到了一块芯片足以容纳若干个门电路或触发器所需晶体管的水平,由此出现了我们熟知的7400系列芯片。芯片上的各个门电路和触发器彼此独立,各有自己的引脚,可以通过片外连线把它们连接起来,做成一台计算机或任何别的东西。

这些芯片使一种新型计算机成为可能,它被称为小型机。小型机比大型机逊色一些,但仍然非常强大,而且便宜得多。企业或大学不必为整个机构购置一台昂贵的大型机,而是可以为每个主要部门配备一台小型机。

不久,小型机开始普及,功能也日益强大。世界急需计算能力,而此前产业界一直无法以所需的规模和合理的价格供应这种能力,这曾让人非常沮丧。小型机的出现改变了这一局面。

计算成本的下降并非始于小型机,事实上它一直都是如此。这正是我在摘要中所说的计算机工业中的"通货膨胀""反其道而行"的含义:随着时间推移,人们花同样的钱得到的东西越来越多,而不是越来越少。

硬件的研究

我所描述的那个年代,对从事计算机硬件研究的人来说是一段美好的时光。7400系列的使用者可以在门电路和触发器层面上工作,而整体集成度又足以提供远高于分立晶体管的可靠性。无论在大学还是在其他地方,研究者都可以构造出丰富的想象力所能构想出的任何数字设备。在计算机实验室里,我们建造了Cambridge CAP,一台带有精巧的权能(capability)逻辑的完整小型机。

到20世纪70年代中期,7400系列仍然方兴未艾,并被用于建造Cambridge Ring(剑桥环网),一个开创性的宽带局域网。环网设计研究报告的发表恰好早于以太网的公布。在这两种系统出现之前,用户大多满足于基于电传打字机的本地网络。

环网需要很高的可靠性,因为脉冲要在环中反复传递,必须不断地被放大和再生。正是7400系列芯片提供的高可靠性,给了我们着手Cambridge Ring项目所需的勇气。

RISC运动及其影响

早期的计算机指令集都很简单。随着时间的推移,商用计算机的设计者不断加入他们认为能提高性能的新特性。人们很少进行对比测量,总的来说,特性的取舍在很大程度上取决于设计者的直觉。

1980年,即将改变这一切的RISC运动横空出世。这场运动始于Patterson和Ditzel发表的一篇题为《The Case for the Reduced Instruction Set Computer(精简指令集计算机的理由)》的论文。

除了造就一个引人注目的缩略词之外,这个标题几乎没有传达出伴随RISC运动而来的那些关于指令集设计的深刻见解,尤其是它如何促进了流水线技术,即让多条指令在处理器内同时处于不同执行阶段的一种机制。流水线并不是新事物,但对小型计算机而言却是第一次。
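为了直观说明"多条指令同时处于不同执行阶段"是什么意思,下面给出一段极简的示意代码。这里假设的是教科书式的取指、译码、执行、访存、写回五级流水线,仅用于演示概念,并不对应文中提到的任何具体处理器:

```python
# 示意:五级流水线中多条指令的重叠执行(假设每条指令在每级各停留一个时钟周期)
STAGES = ["取指", "译码", "执行", "访存", "写回"]

def pipeline_schedule(num_instructions: int):
    """返回每个时钟周期内各条指令所处的流水线阶段。"""
    schedule = {}
    for cycle in range(num_instructions + len(STAGES) - 1):
        active = []
        for i in range(num_instructions):
            stage = cycle - i              # 第 i 条指令比第 0 条晚 i 个周期进入流水线
            if 0 <= stage < len(STAGES):
                active.append(f"指令{i}:{STAGES[stage]}")
        schedule[cycle] = active
    return schedule

if __name__ == "__main__":
    for cycle, active in pipeline_schedule(4).items():
        print(f"周期 {cycle}: {'  '.join(active)}")
```

运行后可以看到,从周期4起每个周期都有一条指令完成写回,这正是流水线提高吞吐率的原因。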

RISC运动大大受益于当时刚刚出现的一类方法,这类方法可以在不真正实现一个计算机设计的情况下估计其性能,我指的是用现有的高性能计算机去模拟新设计。借助模拟,RISC的倡导者能够相当有把握地预言:在采用相同电路工艺的条件下,一个好的RISC设计能够胜过最好的传统计算机。这一预言最终在实践中得到了证实。

仿真技术进展迅速,很快被计算机设计者普遍采用。其结果是,计算机设计多了一些科学的成分,少了一些艺术的成分。今天,设计者们期望有满满一屋子计算机供他们做仿真,而不只是一台。他们给这样的一屋子计算机起了个动听的名字:计算机农场(computer farm)。

x86指令集

如今,RISC之前的那些指令集已经很少有人提起,只有一个重要的例外,那就是Intel 8086及其后代,它们被统称为x86。x86已成为占主导地位的指令集,而当初曾取得相当成功的那些RISC指令集,如今不得不为生存苦苦挣扎。

x86的统治地位让像我这样来自计算机领域研究一翼(无论学术界还是工业界)的人感到失望。毫无疑问,商业上的考虑与x86的存活有很大关系,但除此之外还有别的原因。无论我们这些以研究为本的人多么希望事实并非如此,高级语言至今仍没有完全消除机器代码的使用。我们需要不断提醒自己:在能够做到的情况下,与先前的用法保持严格的二进制兼容是大有好处的。尽管如此,如果Intel那次打造优秀RISC芯片的主要尝试当初更成功一些,情况也许会有所不同。我指的是i860(不是i960,那是另一回事)。从许多方面看,i860都是一款卓越的芯片,但它的软件接口使它不适合用在工作站上。

在x86指令集这场看似轻松的胜利背后,还有一个颇具讽刺意味的结尾。事实证明,像过去那样直接实现x86指令集,已经无法跟上RISC处理器稳步增长的速度。于是,设计者们借鉴了RISC的做法:虽然从表面上看不出来,但一块现代的x86处理器芯片内部隐藏着一个RISC风格的处理器,它有自己的内部RISC编码。送入的x86代码经过适当的加工后被转换成这种内部编码,再交给这个RISC处理器去完成关键的执行工作。
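下面用几行Python勾勒"把送入的指令加工成内部微操作"这一思路。其中的指令写法和拆分规则纯属为说明概念而虚构的简化示例,并非Intel实际采用的内部编码:

```python
# 示意:把一条"寄存器-内存"型指令拆成只做一件事的微操作序列
# 拆分规则纯属举例,真实处理器的内部编码远比这复杂
def decode_to_micro_ops(instruction: str):
    op, dst, src = instruction.replace(",", " ").split()
    micro_ops = []
    if src.startswith("["):                      # 源操作数在内存:先装载到临时寄存器
        micro_ops.append(f"load  tmp0, {src}")
        src = "tmp0"
    micro_ops.append(f"{op}   {dst}, {src}")     # 剩下的是纯寄存器运算
    return micro_ops

if __name__ == "__main__":
    for uop in decode_to_micro_ops("add eax, [ebx]"):
        print(uop)
```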

在对RISC运动作以上总结时,我主要依据的是Hennessy和Patterson所著计算机设计书籍的最新版本,特别请参阅《Computer Architecture》第三版(2003年)第146页及第151-154、157-158页。

IA-64指令集

前些时候,Intel和Hewlett-Packard推出了IA-64指令集。它的主要目的是满足人们公认的对64位地址空间的需求。在这一点上,它追随了MIPS R4000和Alpha设计者的脚步。然而,人们本以为Intel会强调与x86的兼容性,令人费解的是,他们做的恰恰相反。

此外,IA-64的设计中内置了一种称为"谓词执行"(predication)的特性,这使它与所有其他指令集都存在重大的不兼容。特别是,每条指令都需要额外的6个比特。这打破了指令字长与信息含量之间的传统平衡,也大大改变了编译器作者的任务。

尽管IA-64是个全新的指令集,但Intel发表了一个令人困惑的声明:基于IA-64的芯片将与早期的x86芯片保持兼容。很难弄懂它所指的是什么。

最新的IA-64处理器(即Itanium,安腾)的芯片看来配有专门用于兼容的硬件。即便如此,x86代码在它上面运行得也相当慢。

由于上述复杂因素,实现IA-64所需的芯片要比实现较传统的指令集更大,这也就意味着更高的成本。至少这是公认的看法,而且Gordon Moore最近到剑桥为Betty and Gordon Moore图书馆揭幕时,也把它作为一条一般性原则重申了一遍。不过我听说,从Intel内部看,情况似乎并非如此。这一点我不理解,但我完全承认,在半导体工业的经济学方面,我实在是个外行。

AMD已经定义了一种与x86兼容性更好的64位指令集,而且看来他们进展顺利。这种芯片并不算特别大。很多人认为这才是Intel当初应该做的。(在本次演讲发表之后,Intel已宣布将销售一系列与AMD所提供的芯片基本兼容的产品。)

不懈地追求更小的晶体管

集成度在不断提高,这是通过缩小原有的晶体管、从而在一块芯片上放入更多晶体管来实现的。而且,物理定律站在了制造商一边:晶体管仅仅因为变小,速度就随之变快。因此,高密度和高速度可以兼得。

此外还有一个好处。芯片是在称为晶圆(wafer)的硅圆片上制造的,每块晶圆上有大量相互独立的芯片,它们被一起加工,然后再切割分离。由于尺寸缩小使每块晶圆能容纳更多芯片,每块芯片的成本随之下降。

单位成本的下降对这个行业十分重要,因为如果最新的芯片既更快、制造成本又更低,就没有理由继续提供老产品,至少不会无限期地提供。这样一来,整个市场只需要一种产品。

然而,详细的成本核算表明,当尺寸缩小超过一定程度之后,要想继续保持这一优势,就必须转向更大的晶圆。晶圆尺寸的增大绝非小事:最初晶圆直径只有一两英寸,到2000年已达到12英寸。起初我不太明白,既然尺寸缩小本身已经带来那么多其他难题,业界为何还要自找麻烦去采用更大的晶圆。现在我明白了,降低单位成本对这个行业来说与增加单个芯片上的晶体管数量同等重要,而这一点足以证明为晶圆厂追加投资并承担更大风险是合理的。
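下面用一个粗略的估算说明"晶圆越大,单块芯片分摊的成本越低"这一论点。其中晶圆加工成本和芯片面积的数字只是为演示而假设的,并非文中给出的数据;每片晶圆可切出的芯片数采用的是计算机体系结构教材中常见的近似公式:

```python
import math

# 粗略估算:同样大小的芯片,放在更大的晶圆上时,单块分摊的加工成本更低
def chips_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    # 第二项近似扣除晶圆边缘无法利用的部分
    return int(wafer_area / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# 直径约2英寸与12英寸的晶圆;加工成本为假设值,仅作对比之用
for diameter_mm, wafer_cost in [(50, 1000.0), (300, 8000.0)]:
    n = chips_per_wafer(diameter_mm, die_area_mm2=100.0)
    print(f"直径 {diameter_mm} mm: 约可切出 {n} 块芯片, 每块分摊加工成本约 {wafer_cost / n:.1f}")
```

即便假设大晶圆的加工成本高出数倍,由于可切出的芯片数量增加得更多,单块芯片分摊的成本仍明显下降。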

集成度用"特征尺寸"来衡量:对一种给定的工艺而言,特征尺寸最好定义为用该工艺制造的最高密度芯片上导线间距的一半。目前,90纳米芯片的生产仍在逐步扩大之中。

Murphy定律的暂时失效

1997年3月,在Cavendish实验室举行的电子发现一百周年庆典上,Gordon Moore应邀担任演讲嘉宾。正是在他的演讲中,我第一次听到有人把"硅芯片可以既快又便宜"这一事实,说成是对Murphy定律(在英国通常称为Sod定律)的违背。Moore说,其他领域的经验会让你以为必须在速度和成本之间作出取舍或折中,而事实上,对硅芯片来说,两者是可以兼得的。

在网上能查到的一本参考书里,Murphy被认定为1949年在美国空军从事人体加速度试验的一名工程师。不过,早在我的学生时代,我们就对这条定律非常熟悉,当时我们给它起的名字远比上面提到的两个平淡得多,叫作"普遍不如意定律"(Law of General Cussedness)。它甚至出现在我们的一道模拟考题里。那类题目的第一部分要求给出某条定律或原理的定义,第二部分则是一道要借助它来求解的题目。我们这道题的第一部分是给出"普遍不如意定律"的定义,第二部分是:一位骑自行车的人出发作环形骑行,试推导一个给出任一时刻风向的方程。

单芯片计算机

每一次尺寸缩小,所需芯片的数量都会减少,芯片之间的连线也随之减少。这使整体速度又提高了一截,因为信号在芯片之间的传输是很费时间的。

最终,尺寸缩小到了这样的程度:除高速缓存之外的整个处理器都可以放在一块芯片上。这使人们得以造出性能超过当时最快小型机的工作站,其结果是把小型机彻底送进了坟墓。众所周知,这给计算机工业以及业内的从业人员带来了严重的后果。

从那时起,高密度CMOS硅芯片便独占鳌头。尺寸的缩小一直持续,直到数百万个晶体管可以放到一块芯片上,速度也随之成比例地提高。

处理器设计者开始试验旨在获得额外速度的新体系结构特性。其中一项非常成功的试验,是预测程序分支走向的方法。它的成功程度令我吃惊:它显著加快了程序的执行速度,随后又出现了其他形式的预测技术。

同样令人惊讶的是,人们发现可以把多么高级的特性放进单芯片计算机里。例如,当年为IBM Model 91(System/360系列顶端的那台巨型机)开发的特性,如今已经出现在微型计算机上。

Murphy定律仍然处于失效状态。用小规模集成芯片(例如7400系列)来搭建实验用计算机已不再有意义。想在电路级做硬件研究的人别无选择,只能自己设计芯片,并设法找到把它们制造出来的途径。在一段时期内,这是可行的,虽然并不容易。

不幸的是,此后制造芯片的费用急剧上涨,主要原因是光刻(芯片制造中使用的一种照相工艺)掩模的制作成本不断增加。其结果是,为研究用芯片的制造筹措经费再次变得非常困难,这是当前令人担忧的一个问题。

半导体路线图

支撑上述种种进展的大量研究与开发工作,之所以成为可能,靠的是国际半导体业界一次非凡的合作。

在过去,美国的反垄断法很可能会使美国公司参与这类合作成为非法行为。然而在1980年前后,这些法律发生了意义深远的重大变化,"预竞争研究"的概念被引入。如今各公司可以在预竞争阶段开展合作,随后再以常规的竞争方式各自开发自己的产品。

在半导体工业中,负责管理预竞争研究的机构是半导体工业协会(SIA)。它自1992年起作为美国国内组织开展这项工作,并于1998年成为国际性组织。任何能为这项研究工作作出贡献的组织都可以加入。

SIA每两年发布一版名为《国际半导体技术路线图》(ITRS)的文件,并在间隔年份发布更新。第一卷以"路线图"为名的文件发布于1994年,但写于1992年、于1993年分发的两份报告被视为这一系列的真正开端。

历次路线图的目标,是就产业应当如何向前发展提供所能得到的最佳产业共识。它们以15年为跨度,极其详细地列出了若要使单个芯片上的元件数每18个月翻一番(也就是维持Moore定律),并使每块芯片的成本不断下降,就必须达到的各项指标。
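把"每18个月(即1.5年)翻一番"写成公式,只是对上述经验说法的直接改写,并非路线图原文给出的表达式:设 $N_0$ 为起始时刻单个芯片上的元件数,$t$ 以年计,则

$$N(t) = N_0 \cdot 2^{t/1.5}$$

按这一速率,15年的规划跨度对应 $2^{15/1.5} = 2^{10} = 1024$ 倍的增长。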

就某些项目而言,前面的道路是清楚的。另一些项目则预见到了制造上的问题,解决办法也已知晓,只是尚未完全落实,这类领域在表格中用黄色标出。那些预见到问题、却还没有任何可制造的解决方案的领域,则标为红色。红色区域常被称为"红砖墙"。

事实证明,路线图所设定的目标既现实又富有挑战性,而整个产业的进展一直紧紧跟随着路线图。这是一项了不起的成就,可以说它以令人钦佩的方式把合作与竞争的优点结合在了一起。

值得注意的是,影响这个产业进程的各项重大战略决策,都是在预竞争层面以相对公开的方式作出的,而不是关起门来决定的。其中就包括向更大尺寸晶圆过渡的决策。

到1995年,我开始琢磨:当不可避免地到达无法把晶体管做得更小的那一点时,究竟会发生什么。带着这个疑问,我访问了位于华盛顿特区的ARPA(美国国防部高级研究计划局)总部,在那里我得到了一份刚刚出炉的1994年版路线图。它清楚地表明,当特征尺寸达到100纳米时将出现严重问题,预计这会发生在2007年,而70纳米则在2010年到来。在后来的路线图中,100纳米(确切地说是90纳米)到来的年份被提前到了2004年,而实际上产业界到达这一步还要更早一点。

我把来自1994年版路线图的上述信息,连同我所能收集到的其他资料,写进了1996年2月8日在伦敦向IEE所作的一次演讲,题为《The CMOS end-point and related topics in Computing(CMOS的终点及计算领域的相关话题)》。

我当时的想法是,终点将直接源于可用来表示一个"1"的电子数从数千个减少到几百个。到那时,统计涨落会成为麻烦,此后电路要么无法工作,要么即使能工作,速度也不会再提高。而事实上,如今开始显现的物理限制并非来自电子的短缺,而是因为芯片上的绝缘层已经薄到让量子力学隧穿效应造成的漏电成了麻烦。

芯片制造商面临的问题远不止来自基础物理的那些,光刻方面的难题尤甚。在2002年发布的2001年版路线图更新中写道:照目前的速度,随着2005年的临近,进展的延续将面临风险;路线图预计,如果多数技术领域没有取得研究上的突破,进展将陷入停滞。这是迄今SIA就"红砖墙"所作的最明确的表述,而且措辞相当强硬。2003年版路线图进一步强化了这一说法,在许多地方标上了红色,表明这些领域仍存在尚无可制造解决方案的问题。

令人欣慰的是,到目前为止,所遇到的各种问题都及时找到了解决办法。路线图是一份非凡的文件,尽管它对迫在眉睫的问题直言不讳,却散发着巨大的信心。主流观点也反映出这种信心,人们普遍预期,通过这样或那样的办法,尺寸的缩小还会继续,也许会一直缩小到45纳米甚至更小。

然而,成本将急剧上升,而且涨速越来越快。最终被视为叫停理由的,将正是成本。产业界究竟在哪个时点达成"不断攀升的成本已无法承受"这一共识,将取决于总体经济环境以及半导体工业自身的财力。

最先进芯片的绝缘层厚度正在逼近仅相当于5个原子的程度。除了寻找更好的绝缘材料(而这也走不了多远)之外,我们对此无能为力。随着导线截面越来越小,我们还将面临片上布线的问题,涉及散热和原子迁移。这些问题都是非常基础性的:如果做不出导线和绝缘层,我们就造不出计算机,无论CMOS工艺或半导体材料取得怎样的改进都无济于事。指望某种新工艺或新材料能让"晶体管密度每18个月翻一番"的好戏重新开场,是不现实的。

我在上文说过,人们普遍预期尺寸的缩小会通过这样或那样的办法继续到45纳米甚至更小。我想表达的是:到某个时候,我们所熟知的CMOS将无法再继续按比例缩小,产业界将不得不把目光投向它之外的东西。

自2001年起,路线图中专门有一节题为"新兴研究器件",讨论非传统形式的CMOS之类的器件。对这些可能性进行大力而善抓时机的开发,无疑能让我们在这条路上再多走出有益的一程;但路线图正确地把这类进步与我们习以为常的传统CMOS按比例缩小区分了开来。

内存技术的进步

非传统CMOS有可能给存储器技术带来变革。迄今为止,我们一直依靠DRAM作为主存。不幸的是,随着尺寸继续缩小,DRAM的速度只有微小的提升,而处理器芯片及其配套高速缓存的速度仍然每两年翻一番。其结果是处理器与主存之间的速度差距越来越大,这就是"存储器鸿沟",也是当前令人焦虑的一个根源。如果存储器技术出现突破(也许借助某种形式的非传统CMOS),那么对于存储需求很大、装不进高速缓存的那类问题,计算机的整体性能将会有重大的进步。
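为直观感受这一差距扩大的速度,可以做一个粗略的推算。下面假设处理器速度每两年翻一番、DRAM速度每年仅提高约7%;其中7%只是文献中常被引用的量级假设,并非本文给出的数据:

```python
# 粗略推算处理器与DRAM主存之间的速度差距如何随时间拉大
def speed_gap(years: float) -> float:
    cpu_speed = 2 ** (years / 2)      # 假设:处理器速度每两年翻一番
    dram_speed = 1.07 ** years        # 假设:DRAM速度每年仅提高约7%
    return cpu_speed / dram_speed

for years in (2, 6, 10):
    print(f"{years} 年后, 处理器与DRAM的速度差距扩大到约 {speed_gap(years):.1f} 倍")
```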

也许,非传统CMOS的最终角色正在于此,而不是让基本的处理器速度再略微提高一点。

电子的不足

尽管到目前为止,电子的不足尚未表现为一个明显的限制,但从长远看,它终将成为限制。也许这正是非传统CMOS的开发最终要把我们引向的方向。不过,这方面已经有了一些有意义的工作,尤其是Cavendish实验室Haroon Amed及其团队的工作:他们直接研制一种结构,其中一个电子的有无差不多就决定了0和1的区别。然而,在通向能够用来构造计算机的实用器件方面,进展还微乎其微。即使运气格外好,也必然要再过好几十年,才谈得上造出一台基于单电子效应的可用计算机。

附录2

Progress in Computers

Prestige Lecture delivered to IEE, Cambridge, on 5 February 2004

Maurice Wilkes

Computer Laboratory

University of Cambridge

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates as we did in the EDSAC or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term families of logic came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful. The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry "going the other way". As time goes on people get more for their money, not less.

Research in Computer Hardware.

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and

yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer.

Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to

out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of computer farm.

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious, on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.

In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 instruction set.

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set.

This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.

Chips for the latest IA-64 processor, namely, the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as

wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law, or Sod's law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949. However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of

some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem: A cyclist sets out on a circular cycling tour. Derive an equation giving the direction of the wind at any time.

The single-chip computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor except for the caches could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91 (the giant computer at the top of the System 360 range) are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However about 1980 significant and far reaching changes took place in the laws. The concept of pre-competitive research was introduced. Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semi-conductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years SIA produces a new version of a document known as the International Technological Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title "Roadmap" was issued in 1994 but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15 year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall.

In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.

It is to be noted that the major strategic decisions affecting the progress of the

industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004 and in the event the industry got there a little sooner.

I presented the above information from the 1994 Roadmap, along with such other information that I could obtain, in a lecture to the IEE in London, entitled The CMOS end-point and related topics in Computing and delivered on 8 February 1996.

The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster. In fact the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that "the continuation of progress at present rate will be at risk as we approach 2005 when the roadmap projects that progress will stall without research break-throughs in most technical areas". This was the most specific statement about the Red Brick Wall that had so far come from the SIA and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming above, it radiates immense confidence. Prevailing opinion reflects that confidence and there is a general expectation that, by

one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.

Insulating layers in the most advanced chips are now approaching a thickness equal to that of 5 atoms. Beyond finding better insulating materials, and that cannot take us very far, there is nothing we can do about this. We may also expect to face problems with on-chip wiring as wire cross sections get smaller. These will concern heat dissipation and atom migration. The above problems are very fundamental. If we cannot make wires and insulators, we cannot make a computer, whatever improvements there may be in the CMOS process or improvements in semiconductor materials. It is no good hoping that some new process or material might restart the merry-go-round of the density of transistors doubling every eighteen months.

I said above that there is a general expectation that shrinkage would continue by one means or another to 45 nm or even less. What I had in mind was that at some point further scaling of CMOS as we know it will become impracticable, and the industry will need to look beyond it.

Since 2001 the Roadmap has had a section entitled emerging research devices on non-conventional forms of CMOS and the like. Vigorous and opportunist exploitation of these possibilities will undoubtedly take us a useful way further along the road, but the Roadmap rightly distinguishes such progress from the traditional scaling of conventional CMOS that we have been used to.

Advances in Memory Technology

Unconventional CMOS could revolutionise memory technology. Up to now, we have relied on DRAMs for main memory. Unfortunately, these are only increasing in speed marginally as shrinkage continues, whereas processor chips and their associated cache memory continue to double in speed every two years. The result is a growing gap in speed between the processor and the main memory. This is the memory gap and is a current source of anxiety. A breakthrough in memory technology, possibly using some form of unconventional CMOS, could lead to a major advance in overall performance on problems with large memory requirements, that is, problems which fail to fit into the cache.

Perhaps this, rather than attaining marginally higher basic processor speed, will be the ultimate role for non-conventional CMOS.

Shortage of Electrons

Although shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of

non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Amed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single electron effects can be contemplated.
