Sensor-Actuator-Comparison as a Basis for Collision Detection for a Quadruped Robot

Jan Hoffmann and Daniel Göhring

Institut für Informatik, LFG Künstliche Intelligenz, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany


Abstract. Collision detection in a quadruped robot based on the comparison of sensor readings (actual motion) to actuator commands (intended motion) is described. Ways of detecting such incidents using just the sensor readings from the servo motors of the robot's legs are shown; dedicated range sensors or collision detectors are not used. It was found that comparing motor commands and actual movement (as sensed by the servos' position sensors) allowed the robot to reliably detect collisions and obstructions. Minor modifications to make the system more robust enabled us to use it in the RoboCup domain, letting the system cope with the arbitrary movements and accelerations apparent in this highly dynamic environment. A sample behavior that utilizes the collision information is outlined. Further emphasis was put on keeping the process of calibration for different robot gaits simple and manageable.

1 Introduction

Many research efforts in mobile robotics aim at enabling the robot to safely and robustly navigate and move about both known and unknown environments (e.g. the rescue scenarios of the RoboCup Rescue League [1], planetary surfaces [13]). While wheeled robots are widely used in environments where the robot can move on flat, even surfaces (such as office environments or environments that are accessible to wheelchairs [5]), legged robots are generally believed to be able to deal with a wider range of environments and surfaces. Legged robot designs vary in the number of legs used, ranging from insectoid or arachnoid robots with 6, 8, or more legs (e.g. [2]), over 4-legged robots such as the Sony Aibo [3], to 2-legged humanoids (e.g. [8]).

Obstacle avoidance is often realized using a dedicated (360°) range sensor [12]. Utilizing vision rather than a dedicated sensor is generally a much harder task since a degree of image understanding is necessary. For the special case of color-coded environments, straightforward solutions exist that make use of knowledge about the robot's environment (such as the color of the surface or the color of obstacles [6]). If, however, obstacle avoidance fails, robots often are unable to detect collisions since many designs lack touch sensors or bumpers. The robot is unaware of the failure of its intended action and ends up in a situation it is unable to resolve; it is, quite literally, "running into a wall" without noticing it.

Fig. 1. a) A collision of two robots. Neither robot can move in the desired direction. Even worse, robots often interlock their legs, which further prevents them from resolving the situation. b) Illustration of the DOFs of the Aibo. Each robot leg has three joints: two degrees of freedom (DOF) in the shoulder joint and one DOF in the knee joint, denoted Φ1, Φ2, and Φ3. Joints are labeled in the following way: F(ront) or H(ind) + L(eft) or R(ight) + number of joint (1, 2, 3). Using this nomenclature, the knee joint of the highlighted leg in the above image is FR3.

Apart from the current action failing, collisions (and subsequently being stuck) have a severe impact on the robot's localization if odometry is used to any degree in the localization process (as is the case in [11, 4]). To be robust against collisions, these approaches tend not to put much trust in odometry data.

This work investigates the possibilities of detecting collisions of a Sony Aibo 4-legged robot using the walking engine and software framework described in [10]. The robot does not have touch sensors that could be used to detect collisions with the environment. As we will show, the servo motors' direction sensors can be used for this task. Work by [9] shows that it is possible to learn servo direction measurements for different kinds of (unhindered) motions and to use these to detect slippage of the robot's legs as well as collisions of the robot with its environment.

The approach to collision detection using the Aibo presented by [9] stores a large number of reference sensor readings and uses these to detect unusual sensor readings caused by collision and slip. Our approach differs in that we make assumptions about the robot's motion that allow the robot to detect collisions by comparing the actuator command (intended motion) to the sensor readings (actual motion). The set of reference values can be much smaller using this approach. We will show that the method is robust and quickly adjustable to different walking gaits, robots, and surfaces. Section 4 compares the two approaches in detail.

Fig. 2. Sensor and actuator data of a joint during unhindered walking at a speed of 75 mm/s. Sensor and actuator curves are almost congruent except for a slight phase shift.

2 Method

2.1 Comparison of the Actuator Signals and the Direction Sensors of the Robot's Servos

The presented collision detection method is based on the comparison of actuator commands to direction sensor readings. Fig. 2 shows typical sensor measurements alongside actuator signals.

It can be seen that for an unhindered period of movement T, the sensor and actuator curves of a joint are congruent, i.e. they are of the same shape but shifted by a phase Δφ. If this shift is compensated by Δφ, the area between the two curves becomes minimal:

$$0 \le \int_{t_0}^{t_0+T} \big( a(t) - s(t+\Delta\varphi) \big)^2 \, dt \qquad (1)$$

Tests using discrete time showed that collisions cause a discrepancy between actuator and sensor data which can be recognized by calculating the area between the sensor and actuator data. It was found that it was not necessary to sum over one complete period of the motion to detect collisions; shorter intervals yield faster response times. Trading off response time against sensitivity to sensor noise, we found that 12 frames¹ were sufficient. The last 12 frames are used to calculate the Total Squared Difference (TSD):

$$TSD_{a,s}(\Delta\varphi) = \sum_{i=t_1}^{t_2} \big( a_i - s_{i+\Delta\varphi} \big)^2 \qquad (2)$$

Diagram 3 shows the TSD of the FL1 joint (left shoulder) for a robot colliding with the field boundary. The peaks in the TSD when the robot's leg hits the boundary are clearly distinguishable.

¹ A frame is an atomic step of the motion module; there are 125 frames per second, and one frame is 8 ms long.
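Stated as code, (2) is just a squared-difference sum over a short sliding window. The sketch below is illustrative rather than the authors' implementation: the 12-frame window and the frame timing are from the text, while the function and variable names are assumptions.

```python
def tsd(actuator, sensor, t, dphi, window=12):
    # Total Squared Difference, eq. (2), over the last `window` frames.
    # The sensor lags the actuator command by `dphi` frames, so command
    # a_i is compared with sensor reading s_{i+dphi}; the newest usable
    # actuator frame is t - dphi (requires t >= dphi + window - 1).
    total = 0.0
    for i in range(t - dphi - window + 1, t - dphi + 1):
        d = actuator[i] - sensor[i + dphi]
        total += d * d
    return total
```

For the run shown in Fig. 3, this would be evaluated every frame with Δφ = 8 and compared against the threshold from the lookup table.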

Fig. 3. Sensor and actuator data of a collision with the field boundary, walking forward at 150 mm/s. (Curves shown: TSD for the last 12 frames; actuator FL1; sensor FL1, phase shift = 8 frames.) In the TSD the collisions can be seen as peaks in the curve. They occur briefly after the actual collision and can easily be distinguished from unhindered movements.

For classification of collisions the TSD is compared to a threshold. If the TSD is larger than this threshold, it is assumed that a collision has occurred. The thresholds for every motion component (i.e. walking forward/backward, walking sideways, rotation) are saved in a lookup table. For combined motions (e.g. walking forward and walking sideways at the same time) the different thresholds for each motion component are summed, as described in section 3.3.

2.2 Aligning the Actuator and Sensor Curves

Fig. 4 shows the impulse response of one of the robot's servo motors. It can be seen that the joint doesn't move for about 5 frames (40 ms). Reasons for this are momentum and calculation time; the step height and the load that the joints have to work against also influence the observed phase difference. After 5 frames the joint slowly starts moving and accelerates until it reaches its maximum speed after 8 frames. Just before reaching its destination, the joint angle changes decrease. This is due to the joint's P.I.D. controller smoothing the robot's motions.

Fig. 4. Sensor and actuator data for a rectangular actuator impulse (actuator FL1, sensor FL1). The actuator function jumps to its new value; the corresponding servo's direction sensor readings are shown.

Fig. 5. Left: Sensor and actuator data for walking freely at 150 mm/s, with actuator and sensor curves out of phase, and the corresponding TSD. Right: As on the left, but phase shifted; the sensor function is shifted by 8 frames. The corresponding TSD now clearly shows collisions (peaks in the curve).

In figure 5, left, the TSD is shown for a sample motion. The smallest values of the TSD are found at the intersections of the two curves; collision effects have little influence on the difference level. On the right, actuator and sensor curves are aligned by shifting the sensor data curve left by 8 frames. The calculated TSD now shows a strong response to collisions.

Since phase shifts of varying length were observed, the 12-frame-wide window of the TSD is calculated for several phase shifts Δφ ranging from 6 to 15 frames. The smallest TSD is used to detect collisions. This approach eliminates phase shifts which are not caused by collisions and reduces the risk of wrongly recognized collisions (false positives). Due to the small number of possible values of Δφ, real collisions still produce a strong signal.
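A sketch of this minimum search, reusing the hypothetical tsd helper from section 2.1 (the 6-15 frame range comes from the text; everything else is assumed):

```python
def min_tsd(actuator, sensor, t, shifts=range(6, 16), window=12):
    # Evaluate the windowed TSD for every candidate phase shift and keep
    # the smallest value: a phase shift not caused by a collision matches
    # one of the candidates and yields a small TSD, while a real collision
    # raises the TSD for all candidates, so the minimum stays large.
    return min(tsd(actuator, sensor, t, dphi, window) for dphi in shifts)

def collision_detected(actuator, sensor, t, threshold):
    return min_tsd(actuator, sensor, t) > threshold
```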

2.3 Filtering of Actuator Input

The presented approach to collision detection works well under laboratory conditions, i.e. when applied to homogeneous motions with small, well-defined motion changes (see the sample application described in section 4). In real-world applications, motion commands may change rapidly over time as the robot interacts with the environment. In the dynamic, highly competitive RoboCup domain, the robot changes its walking speed and direction quite frequently, as determined by the behavior layer of the agent. Figure 6 shows the actuator commands for a robot playing soccer. Most of these changes are relatively small and unproblematic, but some are too extreme to be executed by the servos, e.g. when the robot suddenly sees the ball and moves towards it at the highest possible speed. This is compensated for by increasing the TSD threshold whenever the joint acceleration exceeds a certain value. The increased threshold is used only for some tenths of a second and then falls back to its initial level.
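One way such a compensation could look in code is sketched below; the paper gives neither the acceleration limit nor the size of the temporary increase, so those numbers and all names are assumptions.

```python
class ThresholdBooster:
    # Temporarily raises the collision threshold after a commanded
    # acceleration that the servos cannot follow (section 2.3).
    def __init__(self, base_threshold, accel_limit, boost=3.0, hold_frames=50):
        self.base = base_threshold
        self.accel_limit = accel_limit  # assumed units: mrad per frame^2
        self.boost = boost              # assumed multiplier
        self.hold = hold_frames         # ~0.4 s at 125 frames/s
        self.frames_left = 0

    def current_threshold(self, commanded_accel):
        # Call once per 8 ms frame with the commanded joint acceleration.
        if abs(commanded_accel) > self.accel_limit:
            self.frames_left = self.hold
        elif self.frames_left > 0:
            self.frames_left -= 1
        return self.base * self.boost if self.frames_left > 0 else self.base
```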

2.4 Threshold Calibration

The values of the thresholds are calibrated manually. They are measured for each of the elementary motions (forward/backward, sideways, rotation) in steps of 30 mm/s and 0.5 rad/s respectively. This adds up to a total of 40 measurements needed for operation.

Fig. 6. Actuator commands and sensor measurements during an actual RoboCup game. The robot is changing directions frequently. It can be seen that the servo is unable to perform the requested motions.

A threshold is determined by letting the robot walk freely on the field, without collision or slip, for about three seconds while monitoring both motor commands and sensor readings. The TSD is calculated and the maximum TSD that occurred is used to derive the threshold value: it is tripled, meaning that for the robot to detect a collision, the TSD must be 3 times greater than the maximum TSD measured during calibration.

In our experiments the calibration was done by hand, since robot gaits do not undergo frequent change and the calibration process is performed quickly. We therefore did not see the need for automating the calibration process (given that an external supervisor has to make sure that no collisions occur during calibration anyway).
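The calibration rule condenses to a few lines per elementary motion and speed step. A minimal sketch, again reusing the hypothetical tsd helper; the three-second recording and the factor of 3 are from the text, the rest is assumed:

```python
def calibrate_threshold(actuator_log, sensor_log, dphi=8, window=12):
    # actuator_log / sensor_log: joint angles recorded during roughly
    # three seconds (~375 frames) of collision- and slip-free walking
    # for one elementary motion at one speed.
    first = dphi + window  # earliest frame with a complete TSD window
    peak = max(tsd(actuator_log, sensor_log, t, dphi, window)
               for t in range(first, len(sensor_log) - dphi))
    return 3.0 * peak      # collision iff TSD > 3 x calibration maximum
```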

3 Detectability of Collisions During Directed Robot Locomotion

For different walking directions, collisions have different effects on the robot's joints, depending on how the joints are hindered in their motion. Therefore, the following cases were investigated. In our experiments, only the legs' servos were used for collision detection. However, the robot's head motors could also be used to directly detect whether the robot hits an obstacle with its head (or whether its head's freedom of motion is otherwise impaired by an obstacle).

In the detection of collisions, a trade-off has to be made between sensitivity to collisions and robustness against false positives (i.e. the detection of a collision where in reality the robot was moving freely). Since we wanted to avoid false positives, the threshold value for detecting collisions was raised at the cost of sensitivity. Furthermore, false positives can be suppressed by integrating the information gathered over a short period of time; this, however, makes the robot less reactive.

3.1 Elementary Motions

Walking Forward or Backward. Collisions are easily detected in the front left or right shoulder joints FL1 and FR1 of the robot, depending on which of the legs hits the obstacle (see Fig. 1). This way, collisions with the field boundary can be detected. Collisions with other robots can also be detected, but not as reliably, because this sort of collision is of a much more complex type (the other robot may be moving, etc.). Collisions when walking backwards are slightly harder to recognize because of the particular position of the joints of the hind legs. This is due to the robot's body being tilted forward and the backward motion not being symmetric to the forward motion.

The rate of detection of collisions during forward movement was about 90%; for backward movement it was about 70%. Sometimes collisions would not be detected because the robot would push itself away from obstacles rather than being hindered in its joints' motions. No false positives were observed.

Walking Sideways. Collisions occurring while the robot is walking sideways are best recognized in the sideways shoulder joint Φ2 (e.g. FL2) on the side where the robot hits the obstacle. This is not quite as reliable as in the case of forward motions because, for the gait that was used, the Aibo loses traction more quickly when walking sideways. About 70% of the actual collisions were detected; phantom collisions were detected at a rate of about 1-2 per minute.

Turning. The same joints that are used to recognize collisions while moving sideways can be used to recognize collisions when the robot is turning. This way, a common type of collision can also be detected: the legs of two robots attempting to turn interlock and prevent the rotation from being performed successfully. How well this can be recognized depends on how much grip the robots have and on the individual turning (or moving) speeds.

The detection rate of collisions and the rate of false positives are of the same order as when the robot is moving sideways. When raising the detection threshold to completely eliminate false positives for a robot rotating at 1.5 rad/s, the rate of detection drops to about 50%.

3.2 Leg Lock

The aforementioned "leg lock" also occurs in situations where two robots are close to each other (e.g. when chasing the ball). Leg lock is detected in the same way collisions are; it is therefore detected but cannot be distinguished from other collisions.

3.3 Superposition of Elementary Motions

Fig. 7. Simple behavior option graph denoted in XABSL [7]. The robot walks forward until it hits an obstacle. It then turns away from it and continues walking in the new direction.

While it is easy for the robot to recognize the above motions separately, it is harder to recognize collisions when motions are combined, e.g. when the robot walks forward and sideways at the same time. For lower speeds, the resulting motion can be viewed as a superposition of the three elementary motions, and the resulting threshold is approximated by the sum of the three individual thresholds:

$$T(v,s,r) = T(v,0,0) + T(0,s,0) + T(0,0,r) \qquad (3)$$

where v is the forward, s the sideways, and r the rotation component of the motion. For high speeds, the requested motions exceed the servos' performance. To compensate for this, the collision thresholds are increased by multiplication with a scale factor f which is a function of v and s:

$$f = f(v,s) = \begin{cases} 1 & \text{if } v < 50\ \mathrm{mm/s} \text{ and } s < 50\ \mathrm{mm/s} \\ \dfrac{v+s}{100} & \text{otherwise} \end{cases} \qquad (4)$$

With this extension, the method can be applied to practically all kinds of robot motions and speeds that we observed in a RoboCup game.
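Equations (3) and (4) combine into a single threshold function. The sketch below assumes an accessor lookup_T(v, s, r) over the 40 calibrated values from section 2.4; that name and the absolute values are our additions:

```python
def combined_threshold(v, s, r, lookup_T):
    # Eq. (3): superposition of the elementary-motion thresholds.
    T = lookup_T(v, 0, 0) + lookup_T(0, s, 0) + lookup_T(0, 0, r)
    # Eq. (4): scale up when the requested speeds (v, s in mm/s)
    # exceed what the servos can execute.
    f = 1.0 if (abs(v) < 50 and abs(s) < 50) else (abs(v) + abs(s)) / 100.0
    return f * T
```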

4 Application and Performance

Sample Application. A simple behavior was implemented in the XABSL behavior markup language [7]: The robot walks straight ahead; if it touches an obstacle with one of its front legs, it stops and turns left or right depending on the leg the collision was detected with. The robot turns away from where the collision occurred, then continues to walk straight.
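The option graph of Fig. 7 amounts to a two-state loop. The behavior itself was written in XABSL; the sketch below only mirrors its logic in plain Python, and the robot interface methods are invented for illustration.

```python
def avoidance_step(robot):
    # Walk forward until a collision is detected with one of the front
    # legs, then turn away from that side and resume walking.
    leg = robot.colliding_front_leg()  # assumed API: None, "FL1" or "FR1"
    if leg is None:
        robot.walk_forward()
    else:
        robot.stop()
        robot.turn_away("right" if leg == "FL1" else "left")
```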

This simple behavior was tested on the RoboCup field in our laboratory and was found to work reliably regardless of the type of collision (e.g. static obstacles or other robots). Collisions were detected with high accuracy. In some rare cases, collisions would not be detected immediately because of slippage of the robot's legs; in these cases, the robot would recognize the collision after a brief period of time (on the order of tenths of a second).

RoboCup. As pointed out in [9], collision detection can be used to have the robot "realize" that an intended action was not successful and to have it act accordingly. It did, however, prove to be a difficult task to find the right action in a situation where two robots run into each other. This usually happens when they pursue the same goal, in our case when both are chasing the ball. Backing off gives the opponent robot an advantage; pushing it makes the situation worse. Current work investigates possible actions.

Other Approaches. A similar approach aimed at traction monitoring and collision detection was presented by another RoboCup team, the "NUbots", in 2003 [9]. The method compares the current sensor data to reference sensor data; it does not use actuator commands for collision detection. The reference data consists of sensor data value and variance for a given motion type and is measured prior to the run. This training is done by measuring the sensor data of possible combinations of elementary motions. A four-dimensional lookup table is used to store the reference data. The four dimensions of the table are: forward/backward motion (backStrideLength), sideward motion (strafe), rotation (turn), and a time parameter which stores information about the relative position of the paw in its periodic movement. Using this approach, the "NUbots" were able to detect collisions and slip. However, the four-dimensional lookup table requires a considerable amount of memory and training time (according to [9], 20x12x20x20 entries are used to fully describe a gait), and during the training it is important that no collisions or slip occur. Using the lookup table, no assumptions are made about similarities between actuator command and sensor readings.

In contrast, our approach makes the assumptions that there is a similarity between intended and actual motion and that the variance of the sensor signal is constant over the entire period of the motion. Making these (fair) assumptions, very little memory is needed (40 parameters describe all possible motions) while still achieving good results in detecting obstacles. The parameter table needed for a given gait is generated quickly and easily.

5 Conclusion

With the presented method, a 4-legged robot is able to reliably detect collisions with obstacles on even surfaces (e.g. the RoboCup field). Comparing the requested motor command to the measured direction of the servo motors of the robot's legs was found to be an efficient way of detecting whether the robot's freedom of motion was impaired. In a sample behavior, the robot turns away from obstacles after having detected the collision. The method was extended for use in RoboCup games. Here it is used to detect collisions (with players and the field boundaries) and to let the robot act accordingly, and also to improve localization by providing additional information about the quality (validity) of current odometry data. Further work will focus on finding appropriate reactions in competitive situations.

6 Acknowledgments

The project is funded by the Deutsche Forschungsgemeinschaft, Schwerpunktprogramm "Kooperierende Teams mobiler Roboter in dynamischen Umgebungen" ("Cooperative Teams of Mobile Robots in Dynamic Environments").

Program code used was developed by the GermanTeam, a joint effort of the Humboldt University of Berlin, University of Bremen, University of Dortmund, and the Technical University of Darmstadt. Source code is available for download at http://www.robocup.de/germanteam.

References

1. RoboCup Rescue web site, 2003.
2. J. E. Clark, J. G. Cham, S. A. Bailey, E. M. Froehlich, P. K. Nahata, R. J. Full, and M. R. Cutkosky. Biomimetic Design and Fabrication of a Hexapedal Running Robot. In Intl. Conf. on Robotics and Automation (ICRA 2001), 2001.
3. M. Fujita and H. Kitano. Development of an Autonomous Quadruped Robot for Robot Entertainment. Autonomous Robots, 5(1):7-18, 1998.
4. J.-S. Gutmann, W. Burgard, D. Fox, and K. Konolige. An Experimental Comparison of Localization Methods. In Proceedings of the 1998 IEEE/RSJ Intl. Conference on Intelligent Robots and Systems (IROS '98), 1998.
5. A. Lankenau, T. Röfer, and B. Krieg-Brückner. Self-Localization in Large-Scale Environments for the Bremen Autonomous Wheelchair. In Spatial Cognition III, Lecture Notes in Artificial Intelligence. Springer, 2002.
6. S. Lenser and M. Veloso. Visual Sonar: Fast Obstacle Avoidance Using Monocular Vision. In Proceedings of IROS '03, 2003.
7. M. Lötzsch, J. Bach, H.-D. Burkhard, and M. Jüngel. Designing Agent Behavior with the Extensible Agent Behavior Specification Language XABSL. In 7th International Workshop on RoboCup 2003 (Robot World Cup Soccer Games and Conferences), Lecture Notes in Artificial Intelligence. Springer, 2004. To appear.
8. P. Dario, E. Guglielmelli, and C. Laschi. Humanoids and Personal Robots: Design and Experiments. Journal of Robotic Systems, 18(2), 2001.
9. M. J. Quinlan, C. L. Murch, R. H. Middleton, and S. K. Chalup. Traction Monitoring for Collision Detection with Legged Robots. In RoboCup 2003 Symposium, Lecture Notes in Artificial Intelligence. Springer, 2004. To appear.
10. T. Röfer, I. Dahm, U. Düffert, J. Hoffmann, M. Jüngel, M. Kallnik, M. Lötzsch, M. Risler, M. Stelzer, and J. Ziegler. GermanTeam 2003. In 7th International Workshop on RoboCup 2003 (Robot World Cup Soccer Games and Conferences), Lecture Notes in Artificial Intelligence. Springer, 2004. To appear. More detail in http://www.robocup.de/germanteam/GT2003.pdf.
11. T. Röfer and M. Jüngel. Vision-Based Fast and Reactive Monte-Carlo Localization. In IEEE International Conference on Robotics and Automation, 2003.
12. T. Weigel, A. Kleiner, F. Diesch, M. Dietl, J.-S. Gutmann, B. Nebel, P. Stiegeler, and B. Szerbakowski. CS Freiburg 2001. In RoboCup 2001 International Symposium, Lecture Notes in Artificial Intelligence. Springer, 2003.
13. K. Yoshida, H. Hamano, and T. Watanabe. Slip-Based Traction Control of a Planetary Rover. In Experimental Robotics VIII, Advanced Robotics Series. Springer, 2002.
