Rough Terrain Visual Odometry
Motilal Agrawal
Artificial Intelligence Center, SRI International, Menlo Park, CA, USA
Kurt Konolige
Artificial Intelligence Center, SRI International, Menlo Park, CA, USA
Abstract—We present an integrated system to localize a mobile robot in rough outdoor terrain using visual odometry. Our previous work [1] presented a visual odometry solution that estimates frame-to-frame motion from stereo cameras and integrates this incremental motion with a low-cost GPS. We extend that work through the use of bundle adjustment over multiple frames. Bundle adjustment helps to reduce the error significantly, thereby making our system more robust and accurate while still operating in real time. Our new system can keep the robot well localized over several hundred meters to within 1% error. We present experimental results for our system over a 300 meter run in a challenging environment and compare it with ground truth Real Time Kinematic (RTK) GPS.
I. INTRODUCTION
Visual Odometry (VO) [1], [2] is a relatively new technology for localizing mobile robots. VO works by finding interest points in the images and matching them between successive images. Robust methods are then used to estimate the camera motion from these matched interest points. Since cameras are inexpensive and provide high information bandwidth, they can serve as a cheap and precise localization sensor. In addition, processing power has increased significantly, to the point that such visual motion estimation can now be performed in real time on standard processors.
VO for outdoor terrain is in some ways more challenging than for indoor environments. Firstly, outdoor environments are unstructured, and simpler features such as corners, planes and lines that are abundant in indoor environments rarely exist in natural environments. Secondly, outdoor terrain can be rough and undulating, so the planar motion assumptions that are so common in indoor environments are not applicable. Successful navigation over such rough terrain requires a Euclidean motion model that utilizes the full six degrees of freedom. Another consequence of the rough terrain is that bumps in the ground can cause abrupt motion, making visual registration harder. Lighting changes and shadows also make it harder to match images.

Although motion estimation from video has been a widely researched topic in computer vision, real-time systems utilizing vision for localization of robots have been very few. One of the first of these systems was presented by Nister [3] for a monocular camera. Davison [4] also presented a real-time monocular system using the extended Kalman filter. However, their approach is best suited for indoor environments because of its algorithmic complexity and growing uncertainty in feature locations. When compared to monocular video, motion estimation from stereo images is relatively easy and tends to be more stable and well behaved. Another major advantage of using stereo cameras is that one need not worry about the scale ambiguity present in the monocular case. VO for stereo cameras was presented by Nister et al. [2] and Agrawal et al. [5], [1].
In our earlier work [5], [1], we presented a real-time VO system and integrated it with an inexpensive GPS system to provide localization with ≈5% error over 100 meters or so. However, the motion estimation algorithm was suboptimal since it estimated the incremental motion between two consecutive frames only. This work enhances our previous VO system through the use of bundle adjustment over multiple frames. In this work, features are tracked for as many frames as possible, and a sliding window of frames and features (the 'bundle') is adjusted in a nonlinear optimization to give the best motion estimate.
Bundle adjustment [6] is a process that iteratively adjusts the camera poses and the 3D locations of the interest points in order to minimize the reprojection error of the interest points in all the camera frames. Global bundle adjustment is a nonlinear, compute-intensive process and is usually used in its sparse form [7], [8] and invoked whenever a new frame is added to the system. We approximate each iteration of the bundle adjustment by alternating between estimation of the camera motion and 3D scene reconstruction. We initialize our approximate bundle adjustment with the camera pose obtained from our incremental VO algorithm [1]. Since we have stereo images, we initialize the 3D locations of the interest points through stereo triangulation. Bundle adjustment helps to reduce the error significantly, thereby making our system more robust and accurate while still operating in real time.
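As a concrete illustration of the stereo initialization step just described, the following is a minimal sketch (not the authors' code) of recovering a feature's 3D location from a rectified stereo observation; the pinhole parameters f, cx, cy, the baseline B, and all names are illustrative assumptions.

```python
def stereo_triangulate(u, v, disparity, f, cx, cy, B):
    """Left-image pixel (u, v) with disparity d = u_left - u_right -> (X, Y, Z)
    in the left camera frame, for a rectified stereo pair."""
    Z = f * B / disparity        # depth is inversely proportional to disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z
```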
Our system is most similar to the recent work of Mouragnon et al. [9], with a few major differences. Firstly, they use a monocular camera while ours is a stereo algorithm. Secondly, our bundle adjustment process for stereo alternates between pose computation and 3D structure computation, leading to faster convergence and lower run time. Therefore we can use all the frames of the sequence without discarding the non-key frames. We integrate our VO system with a cheap IMU/GPS system to maintain global pose consistency. Finally, we present results of our robot navigating over rough terrain, whereas [9] presents results over smooth roads.
Figure 1 shows our robot in action on typical terrain. A pair of stereo cameras is located on the sensor mast of the robot, looking outward. The cameras are pointed forward at a slight angle. The baseline is 12 cm and the height above the ground is about 0.5 m. This arrangement presents a challenging situation: the short baseline and small offset from the ground plane make it difficult to track points over longer distances. A pair of wheel encoders and an Xsens IMU are used to complement the visual pose system. A Garmin GPS sensor is located on top of the sensor mast.
Fig. 1. Our robot in typical terrain. The robot is part of a DARPA project, Learning Applied to Ground Robots (LAGR). Two stereo systems are on the upper crossbar.
The rest of the paper is organized as follows. First we present our incremental visual odometry algorithm in Section II. Our bundle adjustment process is explained in Section III. Fusion with GPS and IMU sensors is described in Section IV. Results on long robot runs are presented and compared to ground truth RTK GPS in Section V, and finally Section VI concludes this presentation and discusses ongoing and future work.
II. INCREMENTAL VO
Our visual odometry system [5], [1] uses feature tracks to estimate the relative incremental motion between two frames that are close in time. Interest points (Harris corners) are detected in the left image of each stereo pair and tracked across consecutive frames. These interest points are then triangulated at each frame based on stereo correspondences. Three of these points are used to estimate the motion using absolute orientation. This motion is then scored using the pixel reprojection errors in both cameras. We use the disparity space homography [10] to evaluate the inliers for the motion. In the end, the hypothesis with the best score (maximum number of inliers) is used as the starting point for a nonlinear minimization problem that minimizes the pixel reprojection errors in both cameras simultaneously, resulting in a relative motion estimate between the two frames. The IMU and the wheel encoders are also used to fill in the relative poses when visual odometry fails. This happens due to sudden lighting changes, fast turns of the robot, or a lack of good features in the scene (e.g., a blank wall); thus they complement the visual pose system.
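To make this hypothesize-and-score loop concrete, the following is a minimal sketch under assumed interfaces: the projection functions project_left/project_right, the inlier threshold, and the hypothesis count are illustrative, and the disparity-space homography test of [10] is replaced here by a plain reprojection check. This is a sketch, not the authors' implementation.

```python
import numpy as np

def absolute_orientation(A, B):
    """Closed-form rigid transform (R, t) mapping point set A onto point set B
    (Horn/Kabsch, no scale). A and B are Nx3 arrays of matched 3D points."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def estimate_motion(X_prev, X_cur, uv_left, uv_right, project_left, project_right,
                    n_hyp=500, thresh=1.0, rng=np.random.default_rng()):
    """X_prev, X_cur: matched 3D points triangulated in the previous and current
    frames; uv_left, uv_right: their observed pixels in the current stereo pair;
    project_left, project_right: assumed calibrated projection functions."""
    best_Rt, best_inliers = None, -1
    for _ in range(n_hyp):
        idx = rng.choice(len(X_prev), size=3, replace=False)   # minimal 3-point sample
        R, t = absolute_orientation(X_prev[idx], X_cur[idx])
        Xc = X_prev @ R.T + t                  # previous points moved into the current frame
        err_l = np.linalg.norm(project_left(Xc) - uv_left, axis=1)
        err_r = np.linalg.norm(project_right(Xc) - uv_right, axis=1)
        inliers = int(np.sum((err_l < thresh) & (err_r < thresh)))
        if inliers > best_inliers:
            best_Rt, best_inliers = (R, t), inliers
    # The winning hypothesis then seeds a nonlinear minimization of the same
    # reprojection errors (not shown).
    return best_Rt
```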
We have found that the approach outlined above is very efficient and works remarkably well, even for stereo rigs with a small baseline. The fact that we triangulate the feature points anew at each frame builds a firewall against error propagation. However, this also means that there will be drift when the rig is stationary. In order to avoid this drift, we update the reference frame (the frame with respect to which the motion of the next frame is computed) only when the robot has moved some minimum distance (taken to be 5 cm in our implementation). The fundamental reason that our approach gives reliable motion estimates, even in small-baseline situations, is our use of image-based quantities, and treating both the left and right images symmetrically.
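The reference-frame update rule above fits in a few lines; the names and data layout below are illustrative assumptions, not the authors' code.

```python
import numpy as np

MIN_MOTION = 0.05   # meters (5 cm, as in the text)

def maybe_update_reference(ref_position, cur_position):
    """Advance the reference frame only after the robot has moved a minimum
    distance, so the pose does not drift while the rig is stationary."""
    if np.linalg.norm(cur_position - ref_position) >= MIN_MOTION:
        return cur_position    # robot moved enough: advance the reference frame
    return ref_position        # otherwise keep estimating motion w.r.t. the old reference
```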
III. BUNDLE ADJUSTED VO
Bundle adjustment [6] iteratively adjusts the camera poses and the 3D locations of the interest points in order to minimize the reprojection error of the interest points in all the camera frames. For bundle adjustment to be effective, interest points need to be tracked over multiple frames. Our incremental VO was computed frame to frame; that is, feature points were extracted for frame 1 and frame 2, the motion between the frames was computed, and then the features were discarded. In the bundle adjusted VO, features are tracked for as many frames as possible, and a sliding window of frames and features is adjusted in a nonlinear optimization to give the best motion estimate. This optimization step reduces the error, resulting in very accurate poses.
Our two-frame matching algorithm [1] can be used to link the interest points over multiple frames. In addition, since we get the motion from our incremental VO algorithm and we have stereo, we can predict the location of each interest point in the current frame. This drastically cuts down the search space for feature matching and helps find matches for additional interest points which did not match previously because of the ambiguity caused by the large regions in which to search for a match. Furthermore, we use only those interest points which are inliers to the motion computed by the two-frame algorithm and which can be tracked over at least m frames. This ensures that only good tracks are used for bundle adjustment.

Bundle adjustment is a nonlinear, compute-intensive process and is usually used in its sparse incremental form [7], [8], [9] and invoked whenever a new frame is added to the system.
Each bundle adjustment is performed over a sliding window of N frames, n of whose motion is already known (1 ≤ n < N). The steps are:

• Initialize the pose of the most recent frame from the results of incremental VO
• Initialize the poses of all the other frames from the previous bundle adjustment
• Iterate till convergence:
  1) Structure computation: for each inlier feature, compute the 3D location from the image tracks and the current poses of the frames
  2) Motion computation: for each of the N − n frames, compute its pose from the 3D locations of the feature points and the corresponding image locations
  3) Compute the reprojection errors for the convergence test

Our bundle adjustment interleaves structure and motion computation in the main loop. Motion of the most recent frame is initialized from the results of incremental VO. Motion of all the other frames is initialized from the results of the previous bundle adjustment. Given the motion of each stereo frame, we can reconstruct each feature point and get its 3D location. This structure computation is then followed by refining the motion, using the structure to bootstrap. These two steps are described briefly next; more details can be found in [11].

A. Structure Computation

Denote by x_ij the coordinates of the j-th point as seen by the i-th camera. Let P_i denote the projection matrix of the i-th camera and let X_j be the 3D location of the point. Therefore we have x_ij = P_i X_j. Given x_ij and P_i, X_j can be recovered easily through the Direct Linear Transform (DLT).

B. Motion Computation

Recovering P_i from x_ij and X_j is a nonlinear process and can be accomplished using Levenberg-Marquardt minimization. The quantity to be minimized in this case is the geometric distance between the measured and the projected points:

    \min_{P_i} \sum_j d(P_i X_j, x_{ij})        (1)

where d(x, y) is the geometric image distance between x and y.

The above interleaving of structure and motion computations minimizes the same cost function as bundle adjustment, and it should result in the same solution as full-blown bundle adjustment, provided it converges. Since we have a very good initial estimate of the motion from the incremental VO, it only takes a few iterations for the interleaving to converge. Interleaving has the added advantage that it is computationally very efficient, since at each step we are dealing with either motion or structure computations only.
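A compact sketch of this interleaving, using generic numerical tools (numpy/scipy) under a calibrated pinhole model; the data layout, helper names, and iteration count are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def pose_matrix(p):
    """6-vector (axis-angle rotation, translation) -> 3x4 pose [R|t]."""
    P = np.empty((3, 4))
    P[:, :3] = Rotation.from_rotvec(p[:3]).as_matrix()
    P[:, 3] = p[3:]
    return P

def triangulate_dlt(poses, K, obs):
    """Structure step: linear (DLT) triangulation of one point from its track.
    poses: list of 3x4 [R|t]; obs: matching 2D pixel observations."""
    A = []
    for P, (u, v) in zip(poses, obs):
        M = K @ P
        A.append(u * M[2] - M[0])
        A.append(v * M[2] - M[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]

def refine_pose(p0, K, pts3d, uv):
    """Motion step: refine one camera pose with Levenberg-Marquardt on the
    reprojection error of Eq. (1)."""
    def residuals(p):
        R = Rotation.from_rotvec(p[:3]).as_matrix()
        cam = pts3d @ R.T + p[3:]
        proj = cam @ K.T
        return (proj[:, :2] / proj[:, 2:3] - uv).ravel()
    return least_squares(residuals, p0, method="lm").x

def interleave(poses, K, tracks, n_known, n_iters=5):
    """Alternate structure and motion over a sliding window. poses: list of
    6-vectors, the first n_known held fixed; tracks: list of
    (frame_indices, observations) feature tracks."""
    for _ in range(n_iters):
        # 1) structure: re-triangulate every tracked feature from the current poses
        pts = [triangulate_dlt([pose_matrix(poses[i]) for i in idx], K, uv)
               for idx, uv in tracks]
        # 2) motion: re-estimate every pose whose motion is not yet fixed
        for f in range(n_known, len(poses)):
            X, uv_f = [], []
            for (idx, uv), Xj in zip(tracks, pts):
                if f in idx:
                    X.append(Xj)
                    uv_f.append(uv[list(idx).index(f)])
            poses[f] = refine_pose(poses[f], K, np.asarray(X), np.asarray(uv_f))
    return poses
```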
IV. SENSOR FUSION

In order to maintain an accurate global pose over the long term, we performed two types of filtering on the bundle adjusted VO output. These filters provide the necessary corrections to keep errors from growing without bound.

1) Gravity Normal: The IMU records tilt and roll based on the gravity normal, calculated from the three accelerometers. This measurement is corrupted by robot motion, and is moderately noisy.
2) GPS Yaw: The IMU yaw data is very bad and cannot be used for filtering (for example, over the 150 m run, it can be off by 60 degrees). Instead, we used the yaw estimate available from the LAGR GPS. These yaw estimates are comparable to those of a good-quality IMU. Over a very long run, the GPS yaw does not have an unbounded error, as an IMU would, since it is globally corrected.

Our filters for the gravity normal and yaw are very simple linear filters that essentially nudge the robot's pose towards global consistency through a very small gain factor. GPS yaw filtering is done only when the GPS receiver has at least a 3D position fix and the vehicle is travelling 0.5 m/s or faster, to limit the effect of velocity noise from GPS on the heading estimate. In addition, filtering is performed only if the robot has travelled a certain distance since the last filtering. This limits the effect of short-term noise on the fused pose and also makes sure that the robot's pose does not change when the vehicle is stationary.

Though the filter is very simple, it is effective and improves the accuracy of the computed motion over the long term. A better way would perhaps be to incorporate the yaw and gravity-normal estimates directly into the bundle adjustment, using their covariances. Unlike our previous work [1], we have turned off any position filtering based on GPS, i.e., we completely ignore position estimates from the GPS. The bundle adjusted VO does a very good job of estimating the distance travelled; as long as the yaw is accurate, the robot will stay well localized.
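The following is a minimal sketch of the kind of small-gain nudging filter described in this section; the gain values, the distance threshold between filter updates, and the state layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

YAW_GAIN = 0.05      # small gain pulling VO yaw toward GPS yaw (assumed value)
TILT_GAIN = 0.02     # small gain pulling VO tilt/roll toward the IMU gravity normal (assumed)
MIN_SPEED = 0.5      # m/s; GPS yaw is used only at or above this speed (from the text)
MIN_TRAVEL = 1.0     # m; assumed minimum travel between filter updates

def wrap(angle):
    """Wrap an angle to (-pi, pi]."""
    return (angle + np.pi) % (2.0 * np.pi) - np.pi

def nudge_pose(pose, gps_yaw, imu_roll, imu_pitch, speed, dist_since_last, has_3d_fix):
    """pose: dict with 'roll', 'pitch', 'yaw' in radians, from bundle adjusted VO.
    Applies the small corrections only when the gating conditions hold."""
    if dist_since_last < MIN_TRAVEL:
        return pose                                   # avoid filtering while (nearly) stationary
    if has_3d_fix and speed >= MIN_SPEED:
        pose['yaw'] += YAW_GAIN * wrap(gps_yaw - pose['yaw'])
    pose['roll'] += TILT_GAIN * wrap(imu_roll - pose['roll'])
    pose['pitch'] += TILT_GAIN * wrap(imu_pitch - pose['pitch'])
    return pose
```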
V. RESULTS

We surveyed a course using an accurate RTK GPS receiver. The 'Canopy Course' was under tree cover, but the RTK GPS and the LAGR robot GPS functioned well. Sixteen waypoints were surveyed, all of which were within 10 cm error according to the RTK readout. The total length of the course was about 150 meters. Subsequently, our robot was joysticked over the course, stopping at the surveyed points. The robot was run forward over the course, and then turned around and sent backwards to the original starting position. The course itself was flat, with many small bushes, cacti, downed tree branches, and other small obstacles.

Fig. 2. XY localization error for the canopy sequence: (a) bundle adjusted VO, (b) bundle adjusted VO with gravity normal and yaw filtering, (c) RMS error, (d) loop closure error.

Fig. 3. Z error for the canopy sequence: (a) Z waypoint RMS error, (b) Z loop closure error, (c) Z error along the route, (d) Z error magnified.

Notable for VO was the sun angle, which was low and somewhat direct into the cameras on several portions of the course. Figure 4 shows a good scene from the LAGR robot in the shadow of the trees, and a poor image where the sun washes out a large percentage of the scene. (The lines in the images are horizon lines taken from VO and from ground plane analysis.) The uneven image quality makes it a good test of the ability of VO under challenging, realistic conditions. The joystick control was moderate, with no sharp turns or quick accelerations, and the average speed of the robot was about 1 m/s. Our VO algorithm was able to match interest points along the whole route, even though the average data rate was only about 5 Hz. Although there were some big jumps in the images (one of over 0.5 meters), the feature tracker was able to track enough features to successfully match the frames. Bundle adjustment was performed over five stereo frames by matching all the features that could be tracked over those frames. Results for varying the number of frames are presented in Section V-C.

Fig. 4. Sample images from the LAGR robot for the canopy course: (a) a good scene as viewed from the left camera, (b) a scene washed out by the sun.

Since we do not know the exact initial heading of the robot in UTM coordinates, we used an alignment strategy that assumes there is an initial alignment error, and corrects it by rotating the VO path rigidly to align the endpoint as best as possible. This strategy minimizes VO errors on the outward path, and may underestimate them. However, for the return path, the errors will be caused only by VO, and can be taken as a more accurate estimate of the error. Since the robot was moved in a loop, the difference between the initial and final pose can also be used as a measure of error, and this measure is not corrupted by the initial heading problem. We analyzed the following errors:

1) XY distance error between the VO poses and the RTK poses.
2) Z distance error between the VO poses and the RTK poses. Z error is less accurate for the RTK poses, and is also globally correctable by the gravity normal for VO.
3) Initial vs. final pose for VO, in XY and Z.

A. XY Error

Figure 2 compares the XY locations of the waypoints with ground truth RTK GPS. Figure 2(a) shows the best result, obtained using our bundle-adjusted VO with gravity normal and GPS yaw filtering. As shown, the error between waypoints is very small, amounting to less than 1% of distance traveled. Without filtering, the results are worse (Figure 2(b)), amounting to about 3% of distance traveled. At some points in the middle of the return trip, the VO angle starts to drift, and at the end of the backward trip there is about a 10 m gap. Note that this effect is almost entirely caused by the error in the yaw angle, which could be corrected by a good IMU.

Figure 2(c) shows the RMS error between VO (with different filters) and the RTK waypoints, on the return path. As noted above, the forward VO path of the robot has been aligned with the RTK path. Without yaw filtering, the bundle adjusted VO does much better than the non-bundle adjusted VO, and it is marginally better with yaw filtering. Both versions of VO beat the LAGR GPS over the 150 meters of the return path.

We can also examine the loop closure error (Figure 2(d)), looking at the difference between start and end positions. As noted, this measure is not influenced by the initial angle. The loop errors, without filtering, are quite large for non-bundled VO. The bundled VO does a respectable job of keeping the error low without filtering, amounting to about 3% of distance traveled (300 m). With yaw filtering, the errors are comparable to the waypoint errors, and less than 1% of distance traveled. Again, both filtered VO algorithms beat the LAGR GPS.

B. Z Error

Figure 3(a) shows the RMS error between VO (with different filters) and the RTK waypoints, on the return path. For the Z direction, the forward-path alignment also aligns the IMU gravity normal to the camera frame. Normally, the IMU orientation with respect to the camera frame is indirect: camera to vehicle, and then vehicle to IMU. The latter is affected by things like tire pressure. For our gravity-normal filtering, we do a direct transformation from the IMU gravity normal to camera coordinates, based on alignment from the outward path.
Without gravity-normal filtering, the errors are modest, less than the XY errors. Bundle adjustment gives a two-times improvement, both with and without filtering. Both versions of VO beat the notoriously unreliable LAGR GPS over the 150 meters of the return path. We also examine the loop closure error in Figure 3(b). The loop errors, without filtering, are modest for non-bundled VO. The bundled VO does a great job of keeping the error low without filtering, amounting to <0.3% of distance traveled (300 m). With gravity-normal filtering, the errors are much less than the waypoint errors, and almost vanish for bundled VO.

Given the low error for loop closure, it is interesting to ask if the waypoint errors are caused by error in the RTK readings. Figures 3(c) and 3(d) plot RTK vs. VO readings along the course. The first plot uses equal axis scales, and shows a gradual declination of about a 1.5% grade along the route. We used the outward trip to align the IMU and the camera frames; the backward trip then shows that this alignment works very well to adjust the VO readings. Most of the Z error comes from two readings in the middle of the course; we expanded the vertical scale in the next figure. The course was relatively uniform; the robot did not traverse any major ditches or hills. So the RTK readings in the middle of the course are probably anomalous. The RTK Z error is nominally 40 cm, and may have been worse under the tree canopy. So in this case, we can take the bundle-adjusted VO as the ground truth, and measure the RTK Z errors.

C. Effect of number of frames

It is obvious that as we increase the number of frames, N, over which we perform bundle adjustment, the visual odometry results will become more accurate. Figure 5 plots the loop closure error for different numbers of frames. In the absence of any bundle adjustment (the first bar), the loop closure error is at its maximum, about 23 m. Performing bundle adjustment even over two frames brings the error down to 21 m. The error keeps decreasing with the number of frames until it reaches a minimum of 11 m for 5 frames. Further increasing the number of frames does not change the error much. This is due to the fact that most features can only be tracked for about five frames.

Fig. 5. Effect of the number of frames for bundle adjustment on the loop closure error.

VI. CONCLUSION

We have presented a complete system for localization of a robot in unstructured rough terrain, using stereo vision as the primary sensor. The system presented here enhances our previous system through the use of bundle adjustment over multiple frames. This helps to keep the drift error down. Bundle-adjusted VO has the potential to be an accurate substitute for RTK GPS over distances on the order of a kilometer. With good IMU readings for yaw, and noisy gravity normal readings, it is possible to get <1% error over 300 m.

Our localization system has been tested in varied outdoor terrain, including under tree cover, where GPS does not work very well, and sandy terrain, which causes a lot of wheel slippage and makes wheel-based odometry fail. Our system is very robust: we can typically give it a goal position several hundred meters away and expect it to get there within a meter or two. We are currently in the process of porting our system to a larger robot that can travel up to 5 m/s. Finally, we would also like to augment our VO system with visual landmarks to further reduce the drift error and to recognize places seen before, thereby allowing us to do loop closures.
ACKNOWLEDGEMENT

The work reported in this paper was supported in part by a contract with DARPA under the Learning Applied to Ground Robots program.

REFERENCES

[1] M. Agrawal and K. Konolige, "Real-time localization in outdoor environments using stereo vision and inexpensive GPS," in Proc. International Conference on Pattern Recognition (ICPR), August 2006.
[2] D. Nister, O. Naroditsky, and J. Bergen, "Visual odometry," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2004.
[3] D. Nister, "An efficient solution to the five-point relative pose problem," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 26, no. 6, pp. 756-770, June 2004.
[4] A. Davison, "Real-time simultaneous localisation and mapping with a single camera," in Proc. International Conference on Computer Vision (ICCV), 2003, pp. 1403-1410.
[5] M. Agrawal, K. Konolige, and R. Bolles, "Localization and mapping for autonomous navigation in outdoor terrains: A stereo vision approach," in Proc. IEEE Workshop on Applied Computer Vision (WACV), February 2007, to appear.
[6] B. Triggs, P. F. McLauchlan, R. I. Hartley, and A. W. Fitzgibbon, "Bundle adjustment - a modern synthesis," in Vision Algorithms: Theory and Practice, ser. LNCS. Springer Verlag, 2000, pp. 298-375.
[7] K. Konolige and M. Agrawal, "Frame-frame matching for realtime consistent visual mapping," in Proc. International Conference on Robotics and Automation (ICRA), 2007, to appear.
[8] C. Engels, H. Stewenius, and D. Nister, "Bundle adjustment rules," in Photogrammetric Computer Vision, September 2006.
[9] E. Mouragnon, M. Lhuillier, M. Dhome, F. Dekeyser, and P. Sayd, "Real time localization and 3D reconstruction," in Proc. Computer Vision and Pattern Recognition (CVPR), vol. 1, June 2006, pp. 363-370.
[10] M. Agrawal, K. Konolige, and L. Iocchi, "Real-time detection of independent motion using stereo," in IEEE Workshop on Motion (WACV/MOTION), January 2005.
[11] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.