On Satellite Vision-aided Robotics Experiment

Maarten Vergauwen, Marc Pollefeys, Tinne Tuytelaars and Luc Van Gool

ESAT-PSI, K.U.Leuven,

Kard. Mercierlaan 94, B-3001 Heverlee, Belgium

Phone: +32-16-32.10.64, Fax: +32-16-32.17.23

vergauwe|pollefey|tuytelaa|vangool@esat.kuleuven.ac.be

Abstract

This contribution describes the vision-based robotic control (VBRC) experiments executed on the Japanese research satellite ETS-VII. The VBRC experiments were designed to enhance image quality, refine calibration of different system components, facilitate robot operation by automatically refining the robot pose and provide data for robot calibration.

1 Introduction

The vision-based robotic control (VBRC) experiments were executed on the Japanese research satellite ETS-VII [5] in conjunction with the Visual Interactive Autonomy Bi-Lateral Experiments (VIABLE) between ESA and NASDA.

The VIABLE¹ project is the first collaboration between ESA and NASDA with the aim to test the Interactive Autonomy (IA) concept for space robotics and to investigate advanced vision-based techniques for robot control and calibration.

The ETS-VII satellite is equipped with a 6-DOF robot manipulator and two sets of cameras. The VIABLE experiments had access to a taskboard that allows several tasks to be executed by the manipulator. The taskboard contains a set of 3-point calibration markers with known 3D positions in the taskboard reference frame.

The ETS-VII onboard vision system consists of two sets of cameras. The arm monitor camera (AMC) is mounted on the first joint and the arm hand camera (AHC) is mounted on the end effector of the robot arm. Each set contains two cameras, one primary and one redundant unit. Both can be utilized as a stereo head with 60 mm baseline. Each camera records a grey level image with 668x480 pixel resolution with fixed focal length. The images are compressed with JPEG by a factor of 8.6 to yield a frame rate of 4 images per second on the video downlink. Two NTSC video channels allow access of two camera images simultaneously.

¹The VIABLE consortium consisted of the Belgian companies TRASYS-SPACE and SAS and the K.U.Leuven departments PMA and ESAT-PSI.

2 VBRC Experiments

The VBRC experiments were designed to

- enhance the image quality to allow better visual control,

- refine calibration of the system components (intrinsic camera parameters, eye-hand calibration) based on the calibration markers,

- perform on-line pose estimation procedures and guide the robot by automatically refining the robot pose,

- aid the operator during the experiments with visual clues using augmented reality techniques,

- provide material for post-mission robot calibration and testing of advanced methods for uncalibrated vision experiments.

3 VIABLE station setup

An important part of the VBRC experiments is the capability of the Vision Tools to allow operator intervention during execution of a vision task. Image processing and computer vision is a process with possibly many sources of errors that cannot all be modeled beforehand. Therefore a user-friendly interface was developed to assist the VBRC tasks. The interface allowed the operator to interact with the vision system to guide and help the automatic processing. While the human operator is very good at interpreting the scene and recognizing qualitative information, the vision system is good at precise quantitative measurements when given the appropriate input data.

To simulate and verify the VIABLE experiments a photo-realistic 3D model of the taskboard and the robot was constructed. This model served as reference for the IA path planning (in ROBCAD) and the VBRC visual simulator (in OpenInventor). The model allowed the realistic visual simulation of all aspects of the experiments.

Verification of this simulation was performed with a mockup taskboard of scale 1:1. It contained all visually significant parts and served as a realistic testbed for the VBRC experiments.

4 Enhancing the image quality

A first set of experiments evaluated the impact of the imaging conditions in space (degradation of the images due to noise, image compression, direct sunlight, etc.) and derived parameters for image preprocessing. Analysis of the images that were taken for this purpose yielded a set of parameters for image-enhancement filters. Evaluation led to the following filter sequence:

1. A median filter. This non-linear filter effectively removes spikes and noise in the image but preserves the edges. It was chosen for its capacity to remove the ringing that typically occurs around the edges of an image when JPEG compression is used. Because the JPEG ringing was quite severe, a window size of 5 was used for most images.

2. A binomial filter. This low-pass filter smoothes the image to remove noise. It has the advantage over standard mean filtering that its frequency response has no ripples.

3. A sharpening filter. This unsharp-masking filter cancels the smoothing of the edges caused by the previous filter.

4. Radial distortion compensation. This filter undoes the quite severe radial distortion of the images.

5. Aspect ratio compensation. This procedure restores the original aspect ratio of the image which was changed due to the conversion to NTSC.

These preprocessing filters were applied to all incoming images before further processing.
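As a rough illustration of such a chain (not the flight software), the following Python sketch strings the five steps together with OpenCV; the kernel sizes, sharpening weights, intrinsic parameters and the aspect_scale factor are placeholder assumptions rather than the values used on ETS-VII.

```python
import cv2
import numpy as np

def preprocess(image, camera_matrix, dist_coeffs, aspect_scale=1.125):
    """Illustrative preprocessing chain: median -> binomial -> unsharp masking
    -> radial undistortion -> aspect ratio restoration.
    Kernel sizes, weights, intrinsics and aspect_scale are assumptions."""
    # 1. 5x5 median filter: removes spikes and JPEG ringing, preserves edges
    img = cv2.medianBlur(image, 5)

    # 2. binomial low-pass filter (separable 1-4-6-4-1 kernel, no ripples)
    k = np.array([1, 4, 6, 4, 1], dtype=np.float32)
    img = cv2.filter2D(img, -1, np.outer(k, k) / 256.0)

    # 3. unsharp masking: restore edge contrast removed by the smoothing
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    img = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)

    # 4. radial distortion compensation from the camera intrinsics
    img = cv2.undistort(img, camera_matrix, dist_coeffs)

    # 5. aspect ratio compensation (undo the NTSC pixel aspect change)
    h, w = img.shape[:2]
    return cv2.resize(img, (int(round(w * aspect_scale)), h))
```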

5 Invariants

Several of the experiments on calibration, pose estimation and robot calibration rely on automatic extraction and identification of certain objects in the scene. These procedures are based on the concept of invariants and are discussed in this section.

5.1 General concept

Invariance is an important concept in computer vision applications. By definition, an invariant feature is a feature that remains unchanged under a certain type of transformation. If for example a feature is projectively invariant, it is invariant under changing viewpoints of the camera. In the VBRC experiments this concept is applied to the 3-point calibration markers. The 3D position of these markers on the taskboard is given. The markers are robustly extracted in the image and the corresponding 3D features are found automatically using viewpoint invariant relations. These 2D-3D relations can then be used to compute the position and orientation of the camera in the taskboard frame. Two strategies are developed.

5.2 Invariants based on cross-ratio

If enough markers are visible, ellipses are found in the image and marker points are extracted as ellipse centers. Collinear points are easily found in the image because collinearity is projectively invariant: if 3 points are collinear in 3D, they will also be collinear in every image. For 4 collinear points the cross ratio is invariant under projective transformations [1]:

\[
\frac{x_3 - x_1}{x_2 - x_3} \Big/ \frac{x_4 - x_1}{x_2 - x_4}
\;=\;
\frac{y_3 - y_1}{y_2 - y_3} \Big/ \frac{y_4 - y_1}{y_2 - y_4}
\]

This allows us to find automatically which collinear points correspond to markers and which do not. Now we only have to identify the markers: which marker in the image corresponds to which 3D marker? This is done by computing the cross ratio of the points on the line with the intersection point of two lines. Figure 1 shows the marker points and lines found by this approach.
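A minimal sketch of how the cross-ratio test can accept or reject a group of four collinear candidate points, assuming the points are already parameterized by 1D coordinates along the fitted line; the tolerance value is an illustrative assumption.

```python
import numpy as np

def cross_ratio(x1, x2, x3, x4):
    """Cross ratio of four collinear points, given by 1D coordinates along
    the line; this quantity is invariant under projective transformations."""
    return ((x3 - x1) / (x2 - x3)) / ((x4 - x1) / (x2 - x4))

def matches_marker(image_coords, model_coords, tol=0.05):
    """Accept four collinear image points as a marker line if their cross
    ratio agrees with the cross ratio of the known 3D model points."""
    cr_img = cross_ratio(*image_coords)
    cr_model = cross_ratio(*model_coords)
    return abs(cr_img - cr_model) < tol * abs(cr_model)

# Example: a projective 1D map x -> (2x + 1) / (0.1x + 1) leaves the cross ratio unchanged
pts = np.array([0.0, 1.0, 2.5, 4.0])
mapped = (2 * pts + 1) / (0.1 * pts + 1)
assert np.isclose(cross_ratio(*pts), cross_ratio(*mapped))
```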

Figure 1: Marker points (found as ellipse centers) are grouped into lines using viewpoint invariant relations. The lines for each marker block are superimposed (in white) over the image for visual confirmation. Correspondences between the lines and marker blocks are also computed which yields 2D-3D relations.

5.3 Invariants based on common tangents

If the camera is closer to the taskboard and only one 3-point marker is visible, a different approach is used. Because the ellipses can be extracted more reliably in this case, we can use them (and not only their center points) to find enough 2D-3D correspondences. The fact is exploited that tangent points of two ellipses with a common tangent are invariant under projective transformations. In figure 2 the common tangent points and lines are superimposed over one of the 3-point markers.

Figure 2: Common tangent lines of ellipses yield tangent points that are invariant under projective transformations. The ellipses and their common tangent lines are superimposed (in white) over the image for visual confirmation. This yields 2D-3D relations.

6 Calibration

Online calibration is one of the crucial needs in the VIABLE project because no a priori calibration of the intrinsic camera parameters, the eye-hand transformation, or the robot pose is available. Only approximate calibration parameters could be obtained from the specification documents and from a limited set of images taken while the system was still on the ground. We therefore designed a set of calibration experiments that verified and refined the approximate calibration from images during the flight segment. These experiments are explained in paragraphs 6.1 and 6.2.

6.1 Camera intrinsic calibration

Calibrating the intrinsic parameters of the camera is an important task in every application where measurements in the image are used to compute 3D spatial information like camera poses or 3D reconstructions. Based on two images of a calibration pattern that were taken by the cameras before the satellite was launched, the intrinsic parameters of the cameras were computed.

During the flight segment images of the 3-point markers were taken by the AHC. These markers served as a calibration pattern. The result of the processing of these images was consistent with the precomputed values of both intrinsic parameters and radial distortion.
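As a sketch of such a refinement step, the following uses OpenCV's standard calibration routine as a stand-in for the estimator actually employed; the per-image 2D-3D marker correspondences are assumed to be available from the invariant-based identification described above.

```python
import cv2
import numpy as np

def calibrate_intrinsics(object_points_per_image, image_points_per_image,
                         image_size=(668, 480)):
    """Refine focal length, principal point and radial distortion from several
    images of markers with known 3D coordinates in the taskboard frame."""
    obj = [np.asarray(p, dtype=np.float32) for p in object_points_per_image]
    img = [np.asarray(p, dtype=np.float32) for p in image_points_per_image]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj, img, image_size,
                                                     None, None)
    # K: 3x3 intrinsic matrix, dist: distortion coefficients, rms: fit error
    return K, dist, rms
```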

6.2 Eye-hand calibration

For robot guidance from images the relative transformation between the cameras and the robot tip frame (the eye-hand calibration) has to be known. A procedure was developed especially targeted towards the ETS-VII robot. When the robot executes the procedure to grasp the grapple-fixture (GPF), it comes into contact with the taskboard in a predefined position and orientation. In this specific pose, the cameras are approximately aligned with 3-point markers. These markers are exploited to compute the camera poses with the second technique explained in paragraph 5.3. Based on these computed camera poses and the fixed robot pose, the eye-hand calibration can be calculated. Figure 3 shows the setup of this calibration experiment.
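A minimal sketch of the single-pose computation suggested here, assuming the fixed robot tip pose and the marker-derived camera pose are both available as 4x4 homogeneous transforms in the taskboard frame; general multi-motion (AX = XB) eye-hand calibration is not covered.

```python
import numpy as np

def invert_se3(T):
    """Invert a 4x4 rigid-body transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def eye_hand_from_single_pose(T_board_tip, T_board_cam):
    """Camera pose relative to the robot tip frame, given the (fixed) tip pose
    and the camera pose measured from the 3-point marker, both expressed in
    the taskboard frame: T_tip_cam = inv(T_board_tip) @ T_board_cam."""
    return invert_se3(T_board_tip) @ T_board_cam
```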

Figure 3: Setup for the eye-hand calibration experiment. The robot is touching the GPF and the AHC is above a 3-point marker. The common tangent points are found and based on these 2D-3D relations the camera pose is computed. This yields the eye-hand calibration of the camera.

7 Pose estimation and on-line robot guidance

Several experiments concerning pose estimation and on-line robot guidance were performed during the flight segments.

7.1 Calculating pose from known markers

A first experiment consisted of calculating the robot pose from the known 3-point markers. The robot moved to a position where different markers were visible. Using the invariant relations described in paragraph 5.2, 2D-3D relations were found. These relations were the input for a robust camera pose estimation algorithm. An immediate verification of the current calibration status and the accuracy of the computed position could be supplied to the operator by superimposing parts of the given CAD-model with the actual images, using the calculated position. An example of this superimposition can be seen in figure 4.
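The paper does not specify the robust estimator; as an illustration, the sketch below uses OpenCV's RANSAC PnP solver on the 2D-3D marker correspondences, with the reprojection threshold as an assumption.

```python
import cv2
import numpy as np

def estimate_camera_pose(marker_points_3d, marker_points_2d, camera_matrix,
                         dist_coeffs=None):
    """Camera pose in the taskboard frame from 2D-3D marker correspondences.
    Returns a 4x4 transform mapping taskboard coordinates to camera coordinates."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(marker_points_3d, dtype=np.float64),
        np.asarray(marker_points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs,
        reprojectionError=2.0)          # pixel threshold (assumption)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T, inliers

# Reprojecting the CAD model with the estimated pose (e.g. cv2.projectPoints)
# gives the operator the kind of visual verification described above.
```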

A second step in this experiment consisted in moving the camera to a position much closer to one of the 3-point markers. The robot was intentionally positioned in a pose not perfectly above the marker. The second invariant method of paragraph 5.3 was used to calculate the camera (and using the eye-hand transformation also the robot) pose. Parts of the model were reprojected into the actual image to verify the calculation (figure 5). Using the computed pose, a relative translation- and orientation-change was computed to position the robot perfectly above the 3-point marker (figure 6).

Figure 4: From automatically found 2D-3D relations the camera pose is computed. Parts of the model are superimposed over the real image and give a very good and intuitive verification of the calibration accuracy.

Figure 5: From automatically found 2D-3D relations the camera pose is computed. Verification of the result is possible by reprojection of parts of the CAD-model in the image.

7.2 Insertion of GPF into a hole

The ETS-VII robot can attach a grapple-fixture (GPF) to its end effector and insert it into different holes and a slider on the taskboard. Usually, positioning of the robot is done manually by the operator who uses the artificial markers as a visual clue. During the VIABLE experiments we showed that positioning could be done automatically using the image of the hole or slider only. This is especially important for the case of the slider because its exact position is unknown due to possible previous motions.

Figure 6: After a relative motion from the current position (left image), automatically computed by the vision-tools, the AHC ends up perfectly above the 3-point marker (right image). This is verified visually by the fact that the central marker tip is centered perfectly with the outer marker ring.

Using an ellipse-fitting algorithm the hole or slider was extracted and the center point was found. This allowed the algorithm to compute a relative update of the current pose to position the GPF perfectly above the hole or slider. The image was augmented with the current impact point of the GPF (the point where the GPF would hit the taskboard if it were lowered from its current position) and the estimated impact point after relative motion. During operations the robot was deliberately mispositioned above both hole and slider. The algorithm managed to automatically update the pose to allow insertion. Figure 7 shows both current (misplaced) and estimated impact point.
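A rough sketch of the center extraction and relative update, assuming OpenCV; the threshold, the choice of the largest contour and the pixel-to-millimetre conversion are illustrative assumptions.

```python
import cv2
import numpy as np

def find_hole_center(gray_image):
    """Fit an ellipse to the largest dark, roughly elliptical blob and return
    its center in pixel coordinates (illustrative threshold)."""
    _, mask = cv2.threshold(gray_image, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    (cx, cy), axes, angle = cv2.fitEllipse(contour)
    return np.array([cx, cy])

def relative_update(hole_center_px, impact_point_px, pixels_per_mm):
    """Image-plane offset between the predicted GPF impact point and the hole
    center, converted to a relative end-effector translation (assuming the
    camera looks roughly straight down at a known scale)."""
    return (hole_center_px - impact_point_px) / pixels_per_mm
```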

Figure 7: The vision system computes the current impact point of the GPF. The center point of the slider is extracted automatically and the relative movement is computed to position the GPF above the slider. The predicted impact point is shown to fit into the hole.

8 Taskboard calibration and reconstruction

8.1 Calibration of 3-point markers

The 3-point markers on the taskboard are important for vision-based algorithms. The calculation of the camera pose from 2D-3D relations, found by the algorithm, needs the exact 3D coordinates of these markers. Experiments were designed which could retrieve this information.

Because a good estimate of the 3D coordinates of the markers was supplied to us by NASDA, a quick and easy check on the consistency of this data could be performed. We moved the robot over the taskboard to different positions for which different markers were visible in the images. We computed the camera pose based on the markers and reprojected the given 3-point markers in the original image. The estimated mean reprojection error was below a pixel which confirmed the consistency of the marker positions.

The coordinates of the 3-point markers can also be explicitly retrieved from images. This is what was done in another experiment. Three different images, taken from three different poses, showed the same 3-point markers. Based on the given pose of the robot and the eye-hand calibration the camera poses were computed. Based on the identification of the markers given by the invariants, multiple-view matches were found. The markers could then be reconstructed in 3D by triangulation. The resulting data was consistent with the given 3D information (up to the accuracy of the reconstruction of 2.23 mm in x, 1.45 mm in y and 0.84 mm in z).
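A minimal linear (DLT) triangulation sketch for one marker point seen in several calibrated views; the 3x4 projection matrices would be assembled from the known robot poses, the eye-hand calibration and the intrinsics, and the linear method is shown as an illustration rather than the exact estimator used.

```python
import numpy as np

def triangulate(projection_matrices, image_points):
    """Linear (DLT) triangulation of one 3D point observed in several views.
    projection_matrices: list of 3x4 matrices P = K [R | t]
    image_points: list of (u, v) pixel coordinates, one per view."""
    rows = []
    for P, (u, v) in zip(projection_matrices, image_points):
        # each observation contributes two linear constraints on the point
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize to metric coordinates
```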

8.2 Taskboard reconstruction

In an advanced experiment we investigated novel techniques for calibration based on image data alone, without the need to know precise 3D calibration markers. Based on a sequence of images taken from different viewpoints, one can obtain a metric calibration (up to a constant scale factor) of the cameras and the scene (see [2,3]). This technique allows the handling of a priori unknown objects with little calibration information. For these experiments we recorded predefined image sequences during the flight segment and evaluated these techniques in the post-processing phase.

Figure 8 shows some results of the reconstruction of the slider area of the taskboard. The figure shows views of the reconstruction without any manual refinement. In a post-processing step it is easy to obtain reconstructions of parts of the taskboard by human interaction in the image only, using the computed depth data.

Figure 8: Different artificial views from the reconstruction of the slider area of the taskboard.

9 Robot calibration

Robot calibration is a procedure which aims at improvement of the robot accuracy by modifying the robot positioning software, rather than changing or altering the design of the robot or its control system [4].

The procedure that is followed to obtain this goal is:

- position the robot in different poses, trying to excite all possible modes,

- measure these poses with a measurement system,

- compute the difference between these measured poses and the pose computed from the joint telemetry by the forward kinematics model of the robot.

If all modes are excited sufficiently, the updates to be made to the current model can be identified.
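As a rough illustration of this identification step, the following fits constant joint offsets by nonlinear least squares to the differences between measured and model-predicted poses; the planar forward-kinematics stub, the use of scipy and the restriction to joint offsets (rather than full link-length identification) are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fk_positions(joint_angles, offsets, link_lengths):
    """Toy forward kinematics used only for illustration: tip position of a
    planar serial chain whose joint angles carry constant offsets."""
    q = joint_angles + offsets
    angles = np.cumsum(q)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def residuals(offsets, joint_telemetry, measured_positions, link_lengths):
    """Stacked differences between vision-measured poses and the poses the
    forward kinematics model predicts from the joint telemetry."""
    errs = [fk_positions(q, offsets, link_lengths) - p
            for q, p in zip(joint_telemetry, measured_positions)]
    return np.concatenate(errs)

def identify_joint_offsets(joint_telemetry, measured_positions, link_lengths):
    n_joints = joint_telemetry[0].shape[0]
    result = least_squares(residuals, x0=np.zeros(n_joints),
                           args=(joint_telemetry, measured_positions,
                                 link_lengths))
    return result.x
```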

Standard robot calibration procedures obtain pose measurements from external measuring systems. In the case of the ETS-VII robot no such system is available. Instead we computed the robot poses from the 3-point markers as explained in paragraph 7.1.

Since the taskboard on which all VBRC experiments were conducted is placed in one corner of the ETS-VII satellite, we could excite only a limited range of values in joint space. Ongoing evaluation will show if all joint offsets and link lengths can be identified or if computing a subset will yield better results.

10 Conclusion

In this paper the vision-based robotic control experiments executed on the Japanese satellite ETS-VII were described. The on-board vision system has successfully been used to perform several on-line and off-line calibration procedures, robot guidance experiments and tests of advanced uncalibrated computer-vision algorithms.

Acknowledgments

We acknowledge support from the Belgian IUAP 4/24 'IMechS' project.

References

[1] J. Mundy and A. Zisserman, "Machine Vision", in J.L. Mundy, A. Zisserman, and D. Forsyth (eds.), Applications of Invariance in Computer Vision, Lecture Notes in Computer Science, Vol. 825, Springer-Verlag, 1994.

[2] M. Pollefeys, R. Koch, M. Vergauwen and L. Van Gool, "Metric 3D Surface Reconstruction from Uncalibrated Image Sequences", in Proceedings SMILE Workshop (post-ECCV'98), LNCS 1506, pp. 138-153, Springer-Verlag, 1998.

[3] M. Pollefeys, R. Koch, M. Vergauwen and L. Van Gool, "Flexible 3D Acquisition with a Monocular Camera", in Proceedings IEEE International Conference on Robotics and Automation (ICRA '98), Vol. 4, pp. 2771-2776, Leuven, 1998.

[4] K. Schröer, Handbook on Robot Performance Testing and Calibration (IRIS-project), ISBN 3-8167-5200-4, Fraunhofer IRB Verlag, 1998.

[5] Y. Ohkami and M. Oda, "NASDA's activities in space robotics", in Proceedings Fifth International Symposium on Artificial Intelligence, Robotics and Automation in Space (iSAIRAS '99), ISBN 92-9092-760-60, pp. 11-18, ESA, 1999.
