
Operational Amplifiers: Graduation Thesis Foreign Literature Translation and Original Text


Graduation Design (Thesis)

Foreign Literature Translation

Chinese title: 运算放大器 (Operational Amplifiers)

English title: Operational Amplifiers

Source:

Publication date:

School (Department):

Major:

Class:

Name:

Student ID:

Supervisor:

Translation date: 2017.02.14


Operational Amplifiers

In 1943 Harry Black commuted from his home in New York City to Bell Labs in New Jersey by way of a ferry. The ferry ride relaxed Harry, enabling him to do some conceptual thinking. Harry had a tough problem to solve: when phone lines were extended long distance, they needed amplifiers, and undependable amplifiers limited phone service. First, initial tolerances on the gain were poor, but that problem was quickly solved with an adjustment. Second, even when an amplifier was adjusted correctly at the factory, the gain drifted so much during field operation that the volume was too low or the incoming speech was distorted.

Many attempts had been made to make a stable amplifier, but the temperature changes and power supply voltage extremes experienced on phone lines caused uncontrollable gain drift. Passive components had much better drift characteristics than active components, thus if an amplifier's gain could be made dependent on passive components, the problem would be solved. During one of his ferry trips, Harry's fertile brain conceived a novel solution for the amplifier problem, and he documented the solution while riding on the ferry.

The solution was to first build an amplifier that had more gain than the application required. Then some of the amplifier output signal was fed back to the input in a manner that made the circuit gain (the circuit being the amplifier and feedback components) dependent on the feedback circuit rather than the amplifier gain. Now the circuit gain is dependent on the passive feedback components rather than the active amplifier. This is called negative feedback, and it is the underlying operating principle for all modern-day op amps. Harry had documented the first intentional feedback circuit; feedback circuits had no doubt been built unintentionally prior to that time, but their designers ignored the effect.
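In standard feedback notation (the article states the result only in words), an amplifier with open-loop gain A closed around a feedback network with transfer fraction β gives a closed-loop gain of

\[ A_{cl} = \frac{A}{1 + A\beta} \approx \frac{1}{\beta} \quad \text{for } A\beta \gg 1. \]

Because β is set by passive components such as a resistor divider, drift in A barely matters: with A = 10^5 and β = 0.01, A_cl ≈ 99.9; if A drifts all the way down to 5 × 10^4, A_cl only falls to about 99.8. A 50% change in amplifier gain becomes a 0.1% change in circuit gain, which is exactly the stabilization Harry was after. (These numbers are illustrative, not from the article.)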

I can hear the squeals of anguish coming from the manager and amplifier designers. I imagine that they said something like this: "It is hard enough to achieve 30 kHz gain-bandwidth (GBW), and now this fool wants me to design an amplifier with 3 MHz GBW. But he is still going to get a circuit gain GBW of 30 kHz." Well, time has proven Harry right, but there is a minor problem. It seems that circuits designed with large open loop gains sometimes oscillate when the loop is closed. A lot of people investigated the instability effect, and it was pretty well understood in the 1940s, but solving stability problems involved long, tedious, and intricate calculations. Years passed without anybody making the problem solution simpler or more understandable.
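The arithmetic behind the quoted complaint is the gain-bandwidth tradeoff: the closed-loop bandwidth is roughly the amplifier GBW divided by the closed-loop gain. Assuming a circuit gain of 100 (a value inferred from the 3 MHz / 30 kHz ratio in the quote, not stated in the article):

\[ f_{cl} \approx \frac{GBW}{A_{cl}} = \frac{3\ \text{MHz}}{100} = 30\ \text{kHz}, \]

so the designer must build 100 times more GBW into the open-loop amplifier than the closed-loop circuit ultimately delivers.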

In 1945 H. W. Bode presented a system for analyzing the stability of feedback systems by using graphical methods. Until this time, feedback analysis was done by multiplication and division, so calculation of transfer functions was a time-consuming and laborious task. Remember, engineers did not have calculators or computers until the '70s. Bode presented a log technique that transformed the intensely mathematical process of calculating a feedback system's stability into a graphical analysis that was simple and perceptive. Feedback system design was still complicated, but it no longer was an art dominated by a few electrical engineers kept in a small dark room. Any electrical engineer could use Bode's methods to find the stability of a feedback circuit, so the application of feedback to machines began to grow. There really wasn't much call for electrical feedback design until computers and transducers became of age.
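As a rough modern rendering of Bode's log technique, the sketch below computes, rather than plots, the quantities an engineer would read off Bode's graphs: gain in dB and phase versus log frequency, and the phase margin at the 0 dB crossover. The three-pole open-loop gain and its pole frequencies are invented for illustration; they are not a circuit from the article.

import numpy as np

A0 = 1e5                     # dc open-loop gain (assumed for illustration)
poles_hz = [10.0, 1e5, 1e6]  # pole frequencies in Hz (assumed)

f = np.logspace(0, 7, 2000)  # 1 Hz to 10 MHz on a log axis, as Bode plotted
s = 2j * np.pi * f
H = A0 / np.prod([1 + s / (2 * np.pi * p) for p in poles_hz], axis=0)

gain_db = 20 * np.log10(np.abs(H))          # log magnitude turns products into sums
phase = np.degrees(np.unwrap(np.angle(H)))  # loop phase in degrees

i0 = np.argmin(np.abs(gain_db))             # sample nearest the 0 dB crossover
print(f"crossover ~ {f[i0]:.3g} Hz, phase margin ~ {180 + phase[i0]:.1f} deg")

With these example numbers the loop crosses 0 dB with nearly zero phase margin, which is precisely the closed-loop oscillation hazard the paragraph describes; on the graphs, that conclusion takes seconds instead of pages of complex arithmetic.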

The first real-time computer was the analog computer! This computer used preprogrammed equations and input data to calculate control actions. The programming was hard-wired with a series of circuits that performed math operations on the data, and the hard-wiring limitation eventually caused the declining popularity of the analog computer. The heart of the analog computer was a device called an operational amplifier because it could be configured to perform many mathematical operations, such as multiplication, addition, subtraction, division, integration, and differentiation, on the input signals. The name was shortened to the familiar op amp, as we have come to know and love them. The op amp used an amplifier with a large open loop gain, and when the loop was closed, the amplifier performed the mathematical operations dictated by the external passive components. This amplifier was very large because it was built with vacuum tubes, and it required a high-voltage power supply, but it was the heart of the analog computer, thus its large size and huge power requirements were accepted. Many early op amps were designed for analog computers, and it was soon found that op amps had other uses and were handy to have around the physics lab.
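The article lists the operations without formulas. For an ideal op amp with negative feedback, the classic inverting configurations realize them; two standard textbook examples (not worked in the article) are the summing amplifier for scaled addition and the RC integrator for integration:

\[ V_{out} = -R_f\left(\frac{V_1}{R_1} + \frac{V_2}{R_2}\right), \qquad V_{out} = -\frac{1}{RC}\int V_{in}\,dt. \]

In both cases the external passive components, not the amplifier's own gain, dictate the mathematical operation, which is the closed-loop principle described above.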

At this time general-purpose analog computers were found in universities and large company laboratories because they were critical to the research work done there. There was a parallel requirement for transducer signal conditioning in lab experiments, and op amps found their way into signal conditioning applications. As the signal conditioning applications expanded, the demand for op amps grew beyond the analog computer requirements, and even when the analog computers lost favor to digital computers, the op amp survived because of its importance in universal analog applications. Eventually digital computers replaced the analog computers, but the demand for op amps increased as measurement applications increased.

The first signal conditioning op amps were constructed with vacuum tubes prior to the introduction of transistors, so they were large and bulky. During the '50s, miniature vacuum tubes that worked from lower-voltage power supplies enabled the manufacture of op amps that shrunk to the size of a brick used in house construction, so the op amp modules were nicknamed bricks. Vacuum tube size and component size decreased until an op amp was shrunk to the size of a single octal vacuum tube. Transistors were commercially developed in the '60s, and they further reduced op amp size to several cubic inches. Most of these early op amps were made for specific applications, so they were not necessarily general purpose. The early op amps served a specific purpose, but each manufacturer had different specifications and packages; hence, there was little second sourcing among the early op amps.

ICs were developed during the late 1950s and early 1960s, but it wasn't until the middle 1960s that Fairchild released the μA709. This was the first commercially successful IC op amp. The μA709 had its share of problems, but any competent analog engineer could use it, and it served in many different analog applications. The major drawback of the μA709 was stability; it required external compensation and a competent analog engineer to apply it. Also, the μA709 was quite sensitive because it had a habit of self-destruction under any adverse condition. The μA741 followed the μA709, and it is an internally compensated op amp that does not require external compensation if operated under data sheet conditions. There has been a never-ending series of new op amps released each year since then, and their performance and reliability have improved to the point where present-day op amps can be used for analog applications by anybody.

The IC op amp is here to stay; the latest generation op amps cover the frequency spectrum from 5 kHz GBW to beyond 1 GHz GBW. The supply voltage ranges from guaranteed operation at 0.9 V to absolute maximum voltage ratings of 1000 V. The input current and input offset voltage have fallen so low that customers have problems verifying the specifications during incoming inspection. The op amp has truly become the universal analog IC because it performs all analog tasks. It can function as a line driver, comparator (one-bit A/D), amplifier, level shifter, oscillator, filter, signal conditioner, actuator driver, current source, voltage source, and so on. The designer's problem is how to rapidly select the correct circuit/op amp combination and then how to calculate the passive component values that yield the desired transfer function in the circuit.
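As a minimal sketch of that component-value calculation, the function below sizes the two resistors of an ideal inverting stage for a desired gain and checks the resulting closed-loop bandwidth against the op amp's GBW. The target gain of -10, the 10 kΩ input resistance, and the 1 MHz GBW are assumed example values, not figures from the article.

def inverting_design(gain, r_in_ohms, gbw_hz):
    """Size an ideal inverting op amp stage: returns (R1, Rf, bandwidth in Hz)."""
    r1 = r_in_ohms              # input resistor also sets the stage's input resistance
    rf = abs(gain) * r1         # |gain| = Rf / R1 for the inverting configuration
    noise_gain = 1 + rf / r1    # the gain the op amp itself must supply
    bandwidth_hz = gbw_hz / noise_gain
    return r1, rf, bandwidth_hz

r1, rf, bw = inverting_design(gain=-10, r_in_ohms=10e3, gbw_hz=1e6)
print(f"R1 = {r1:.0f} ohm, Rf = {rf:.0f} ohm, bandwidth ~ {bw/1e3:.0f} kHz")

For these numbers R1 = 10 kΩ, Rf = 100 kΩ, and the stage is good to roughly 91 kHz; a faster circuit means selecting an op amp with more GBW, which is the selection half of the designer's problem.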

The op amp will continue to be a vital component of analog design because it is such a fundamental component. Each generation of electronic equipment integrates more functions on silicon and takes more of the analog circuit inside the IC. As digital applications increase, analog applications also increase because the predominant supply of data and interface applications is in the real world, and the real world is an analog world. Thus, each new generation of electronic equipment creates requirements for new analog circuits; hence, new generations of op amps are required to fulfill these requirements. Analog design, and op amp design, is a fundamental skill that will be required far into the future.

