Feature Planning for Robust Execution of General Robot Tasks Using Visual Servoing

Mads Paulin
The Maersk Mc-Kinney Moller Institute for Production Technology
University of Southern Denmark
Campusvej 55, 5230 Odense M, Denmark
paulin@mip.sdu.dk

Abstract

In this paper we present a new method for automatic feature planning for visual tracking systems employed in visual servoing control of robot manipulators. Such planning of optimal feature sets is of utmost importance in order to ensure accurate and robust execution of general robot tasks using visual servoing. First we introduce a novel platform for simulation and preparation of visual servoing systems. Subsequently we demonstrate how this platform, together with combinatorial optimization techniques and fitness measures which consider several aspects related to the robustness of the tracking system, can be used to plan reliable and information-rich feature sets. Finally, we present experiments which compare the performance of a visual servoing system employing the proposed feature planning technique to that of a servoing system based on features selected using traditional methods. These experiments demonstrate that our technique not only improves the robustness of the visual tracking system but also significantly increases the accuracy of the visual servoing control loop.

1. Introduction

As a result of the tremendous research and development efforts invested by the international robotics community, modern robotized manufacturing systems are capable of both planning and executing extremely complex tasks involving diverse operations such as object handling, welding, spray-painting and assembly with impressive precision. An unfortunate characteristic of such platforms is, however, a very low robustness with respect to workcell uncertainties. Consequently, meticulous engineering of the workcell, including customized tools and fixtures, is necessary in order to ensure successful task completion.
Such tailoring of the workcell severely limits the flexibility of the manufacturing platform.

Fortunately, recent progress in visual servoing has matured this technology to a state where it is ready to be incorporated in operational systems. Closed-loop visual control of the robot end-effectors relative to the target objects can ensure successful tasking even in the case of significant uncertainties and hence effectively eliminate the need for accurate modeling of the workcell. The result is a robotic system with greatly improved flexibility.

Despite the massive attention visual servoing has received, little effort has been devoted to the integrational issues which must be addressed in order to apply the technology in real applications. Most contributions in the field are focused on lower-level control issues and do not address the entire problem of executing general tasks using visual servoing. According to many researchers [14, 5], one of the high-priority research areas related to visual servoing based tasking is automatic planning of the features used for real-time visual object tracking. The main reason for this is that the performance of visual servoing systems is highly affected by the object features used in the control loop. For example, during servoing of the robot, some features may become occluded, move out of the field of view or provide deficient information that will lead to task failure. On the other hand, computational limitations prevent large feature sets from being used in the servo loop. Consequently, the features used for tracking during execution of the task should be dynamically replaced to ensure that suitably sized sets of reliable and information-rich features are used.
As the aim of this work is to enable execution of robotic tasks relative to a target object, such feature switches can, as demonstrated in the following, be planned prior to task execution in a way which ensures optimal performance of the visual control system.

As the majority of contributions in visual servoing reduce the visual tracking problem to that of tracking high-contrast dots on a homogeneous background, only few approaches to automatic feature planning for control relative to real objects exist. Feddema et al. [3] and Papanikolopoulos et al. [11] have previously presented feature selection approaches for image-based servoing schemes, while Janabi-Sharifi and Wilson [8] have presented a scheme which relates to position-based servoing systems. This method extracts a set of corners and holes from a CAD model of the target object and uses several quality measures, such as feature resolution, field of view and expected durability of the individual features as the camera moves along a planned relative trajectory, to select sets of features which are suitable for robot control in intervals along this relative trajectory. The main shortcoming of this approach is that its heuristic feature selection strategy is designed especially for corner and hole features and hence is only suitable for trackers based on these kinds of features.

Proceedings of the Second Canadian Conference on Computer and Robot Vision (CRV'05), 0-7695-2319-6/05 $20.00 IEEE

The feature planning methodology proposed in this paper is applicable to a wide variety of visual tracking systems and can be used to plan tasks for both image-based and position-based control systems. However, to exemplify the principles, we use a classical position-based eye-in-hand visual servoing system and a model-based object tracker based on the RAPiD tracker proposed by Harris [7] to track Cartesian trajectories expressing the motion of the robot end-effector relative to a given target object.
The RAPiD tracker calculates the pose of the target object in a recursive manner. For each video frame, an initial estimate of the pose (e.g. the pose estimated in the last frame or a more accurate prediction based on a motion model) is used to project the visible parts of the edges of the model onto the image. During this process, the image positions of a set of control points located on these edge segments are also determined. A 1-dimensional edge detection is carried out along the normal to the projected edge at the projection of each control point. By taking advantage of the aperture problem, which states that the motion of a homogeneous contour is locally ambiguous, the edge detection can be reduced from 2D to 1D searches.

The detected image distances are subsequently used to calculate a correction, i.e. a small change in translation and orientation, to the expected pose. This correction is calculated in such a way that the projection of the model using the updated pose coincides with the actual imaged object. The amount of motion in the six directions is computed by solving a linear least-squares system constructed from the coordinates of the control points, both in the model frame and in the image, as well as the detected image distances.

An example of the pose correction process is shown in Figure 1. During tracking of a cube, the predicted pose is used to project the visible model edges onto the image (shown in red in Figure 1(a)). Based on this projection, a series of image measurements (shown in yellow) are made, and a correction to the predicted pose is calculated. The projection of the model using the updated pose is shown in green in Figure 1(b).

Figure 1. Object tracking using the RAPiD tracker. (a) Projection using predicted pose (red) and detected image distances (yellow). (b) Projection using corrected pose (green).
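The least-squares pose-correction step described above can be sketched as follows. This is a simplified sketch, not the paper's exact formulation: it assumes a unit-focal-length pinhole camera, control points already expressed in the camera frame, and small motions, and all function and variable names are illustrative.

```python
import numpy as np

def rapid_correction(points, normals, distances, focal=1.0):
    """Estimate a small pose correction q = (tx, ty, tz, wx, wy, wz)
    from perpendicular image distances measured at the control points.

    points    -- (N, 3) control points expressed in the camera frame
    normals   -- (N, 2) unit normals to the projected edges in the image
    distances -- (N,)   1-D edge-search results along those normals
    """
    A = np.zeros((len(points), 6))
    for i, (p, n) in enumerate(zip(points, normals)):
        x, y, z = p
        # Jacobian of the projection u = (f*x/z, f*y/z) with respect to
        # a small 3-D displacement of the point.
        J = (focal / z) * np.array([[1.0, 0.0, -x / z],
                                    [0.0, 1.0, -y / z]])
        # A small motion (t, w) displaces the point by t + w x p,
        # i.e. delta = [I | -[p]x] (t, w)^T, with [p]x the skew matrix.
        px = np.array([[0.0,  -z,   y],
                       [  z, 0.0,  -x],
                       [ -y,   x, 0.0]])
        # Predicted perpendicular image motion of control point i.
        A[i] = n @ J @ np.hstack([np.eye(3), -px])
    # The predicted motion should match the measured distances:
    # solve A q = distances in the least-squares sense.
    q, *_ = np.linalg.lstsq(A, np.asarray(distances), rcond=None)
    return q
```

Solving one small six-parameter least-squares system per frame is what makes this class of trackers fast enough for real-time servoing.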
To define the basis of our approach, we first introduce a novel CAD-based visual servoing simulation and preparation framework which is used to analyze the target object model and extract salient features. Subsequently, the automatic feature planning method, as well as the fitness measures which ensure optimal performance of the visual tracking system, are formulated in Section 3. Experimental results which demonstrate the effectiveness of the approach are presented in Section 4, while Section 5 contains the conclusion and directions for future research.

2. Task simulation and geometry analysis

To facilitate automatic generation of visual servoing tasks from higher-level task descriptions, we have developed the visual servoing simulation and preparation framework called the VirtualWorkcell [13]. Like other robotic simulators, the VirtualWorkcell supports modeling of kinematics and visualization of the workcell, but contrary to these systems, our framework is intended for simulation of vision systems and associated robot control methods. The VirtualWorkcell includes workcell geometry as well as proper rendering of the scene. By modeling the cameras in the VirtualWorkcell by the intrinsic and extrinsic parameters of their real equivalents, images practically identical to those acquired using the real cameras can be synthesized. These images can be used for many purposes, including offline performance evaluation of visual servoing systems and automatic sensor placement planning.

Although the VirtualWorkcell can be used for simulation of visual control techniques, its main purpose is, as shown in Figure 2, to function as a link between classical robot motion planners and visual servoing control of robot end-effectors relative to the target objects in question.
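The image synthesis described above rests on the standard calibrated pinhole model: a world point p is mapped to K(Rp + t) followed by perspective division, where K holds the intrinsic parameters and (R, t) the extrinsic pose. A minimal sketch (illustrative names; lens distortion is omitted as a simplifying assumption):

```python
import numpy as np

def project_points(K, R, t, points_world):
    """Pinhole projection of 3-D world points into pixel coordinates.

    K -- 3x3 intrinsic matrix; (R, t) -- world-to-camera extrinsics.
    Matching these parameters to a calibrated real camera is what lets
    a simulator render images aligned with the real views.
    """
    p_cam = np.asarray(points_world) @ R.T + t   # world -> camera frame
    uvw = p_cam @ K.T                            # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]              # perspective division
```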
Figure 2. Outline of the complete visual control system (offline: motion planning from the workcell setup; online: visual servoing using the planned robot trajectories and servo strategy, linked by the VirtualWorkcell).

In the offline planning phase, a workcell setup consisting of 3D geometrical models and robot kinematics is utilized by a motion planner to plan collision-free joint trajectories for the robots in the workcell. However, to enable the robot system to cope with uncertainties in the workcell, it is essential that motions are specified with respect to the target object at hand and not in joint space as the planned trajectories. To convert the joint trajectories into image or Cartesian space for image-based or position-based servoing, they are passed along with the workcell setup to the VirtualWorkcell. As shown in Figure 3, the VirtualWorkcell samples the planned joint trajectories, and at each sample, shown as coordinate frames along the end-effector trajectory, the appearance of the target object when viewed from a given camera is determined. The resulting trajectory of the end-effector (shown in green) is converted into a Cartesian trajectory of the target with respect to the end-effector frame.

When determining the appearance of the target object, the corresponding CAD model is analyzed and several geometrical and topological features are extracted. This ensures that the VirtualWorkcell can be used to plan tasks both for tracking systems tracking distinct image features, such as those proposed by e.g. Hager and Toyama [6], and for systems based on geometrical object models (e.g. perspective-n-point or edge fitting algorithms). Figure 4 shows the various kinds of information obtained by analyzing the model. Figure 4(a) shows the visible edges; edges on the contour of the object are green, while edges off the contour are orange. In addition to the contour status, information about the image projection and about the visible 3D edge segments is stored.
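The conversion from sampled end-effector poses to a target-relative Cartesian trajectory is a frame change: at each sample, the pose of the target in the end-effector frame follows by composing the two base-frame transforms. A minimal sketch using 4x4 homogeneous transforms (function names are illustrative):

```python
import numpy as np

def target_in_end_effector(T_base_ee, T_base_target):
    """Pose of the target object expressed in the end-effector frame:
    T_ee_target = inv(T_base_ee) @ T_base_target."""
    return np.linalg.inv(T_base_ee) @ T_base_target

def relative_trajectory(T_base_ee_samples, T_base_target):
    """Convert a sampled end-effector trajectory (given in the robot base
    frame) into the Cartesian trajectory of the target relative to the
    end-effector, one transform per trajectory sample."""
    return [target_in_end_effector(T, T_base_target)
            for T in T_base_ee_samples]
```

This relative trajectory is what a position-based servo loop regulates, which is why uncertainty in the absolute placement of the target drops out.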
Similar data is stored for model vertices (Figure 4(b)).

Figure 3. Sampling of a planned joint trajectory (coordinate frames along the end-effector trajectory between the initial and final robot configurations, with the target object in view).

Also, RAPiD-like tracking systems, which model the target object as a set of 3D control points located on visible edge segments, are supported. The first time an edge is classified as visible, its geometry is analyzed to determine proper control point positions. Currently, the control points are distributed along an edge such that the curve length of the edge between two subsequent control points is equal to a user-specified value. Figure 4(c) shows the visible control points assigned to the edges of the metal fitting used as an example target object throughout this paper.

Besides these geometric model features, a set of non-topological features is collected. Included in this set are 2D and 3D curves which define the contour of the object in the image plane and in space, respectively. Figure 4(d) shows an example of an extracted image-plane contour. Note that a silhouette edge is a visualization cue that does not necessarily correspond to a model edge. This feature type is hence of little use for model-based trackers. Systems based on e.g. active contours can, on the other hand, benefit from the expected contours to increase robustness and eliminate outlying measurements.

Finally, image templates can be extracted from the rendered images. Such templates are used by correlation-based trackers such as, for instance, the SSD tracker proposed by Nickels and Hutchinson [10].

The extracted features are finally stored as candidates for visual tracking of the object in the time frame between the current and the subsequent sample.

3. Automatic feature planning

When the simulation and geometry analysis phase is completed, a large database of features suitable for object tracking during tasking has been created.
Depending on the nature of the tracking system utilized in a given application, various combinations of these features should be used.

Figure 4. Features collected during target analysis: (a) visible edges; (b) visible vertices; (c) visible control points; (d) object contour (not necessarily topological model edges).

Consider for example a sample along the simulated trajectory of a robot for which the feature database stores N suitable features on the target object. Depending on the computational resources available to the visual tracking system, a subset of M < N of these features must be selected for tracking. We treat this selection as a combinatorial optimization problem and solve it using simulated annealing: in each iteration, a neighboring solution is generated from the current one and is always accepted if it improves the objective function, while an inferior solution is accepted with probability exp(-ΔC/c_k), where ΔC > 0 is the resulting deterioration of the objective function and c_k > 0. Here, c_k is the "temperature" in the k-th iteration. Using this acceptance criterion leaves a chance, albeit very small, that a new solution with a lower quality than the existing solution is accepted as a new optimum. Therefore, there is a corresponding chance for the system to get out of a local energy minimum and into a more global one.

By formulating the feature planning problem as a combinatorial optimization problem and employing the VirtualWorkcell in combination with simulated annealing, feature planning is reduced to the task of defining a proper generation mechanism and objective function. These functions will of course depend on the nature of the tracking system employed in a given application but, as demonstrated in the following section, both functions are easily defined.

3.1. Feature planning for RAPiD-like trackers

As the RAPiD tracker used in this paper utilizes the control points assigned to the edges of the target model to maintain an estimate of the pose of the object with respect to the camera frame, the aim of the feature planner is, for each time frame along the planned relative trajectory, to select a subset of the visible control points which ensures optimal object tracking.
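The annealing loop with this temperature-dependent acceptance criterion can be sketched as follows. The iteration count, initial temperature, and geometric cooling schedule are illustrative placeholders, and the cost function is anything to be minimized (e.g. one minus the combined fitness of a candidate set); the swap-based neighbor move anticipates the generation mechanism defined below.

```python
import math
import random

def plan_feature_set(features, m, cost, n_iter=2000, c0=1.0,
                     alpha=0.995, seed=0):
    """Select m of the candidate features by simulated annealing.

    features -- list of candidate feature identifiers (the N candidates)
    m        -- number of features the tracker can afford (M < N)
    cost     -- function mapping a feature set to a value to MINIMIZE
    """
    rng = random.Random(seed)
    active = set(rng.sample(features, m))
    inactive = set(features) - active
    cur_cost = cost(active)
    best, best_cost = set(active), cur_cost
    c = c0
    for _ in range(n_iter):
        # Neighbor: exchange a random active feature with an inactive one.
        a = rng.choice(sorted(active))
        b = rng.choice(sorted(inactive))
        candidate = (active - {a}) | {b}
        delta = cost(candidate) - cur_cost
        # Metropolis criterion: always accept improvements; accept
        # deteriorations with probability exp(-delta / c).
        if delta < 0 or rng.random() < math.exp(-delta / c):
            active, inactive = candidate, (inactive - {b}) | {a}
            cur_cost += delta
            if cur_cost < best_cost:
                best, best_cost = set(active), cur_cost
        c *= alpha  # cooling schedule
    return best
```

With a swap move, only the terms of the objective involving the two exchanged features change, which is what makes recalculation of the objective inexpensive.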
To start feature planning for a given sample along the planned trajectory, the visible control points are initially divided into two disjoint sets of M active control points and N - M inactive control points. The M active control points constitute the current solution, or guess, as to which control points should be used for object tracking. The mechanism for generating neighboring solutions is simply to exchange a random active control point with a random inactive one. This generation mechanism ensures good ergodicity of the possible solutions and, as argued later, inexpensive recalculation of the objective function during solution transitions.

The fitness of each possible solution is defined by seven measures which ensure that all aspects related to the robustness of the tracking system are considered. Each of these measures, of which the first five are referred to as simple and the last two as complex, returns a number in the range [0; 1] depending on the fitness of the solution with respect to the given aspect. In the following, these measures are studied and an objective function is defined.

3.1.1. Contour status

To avoid erroneous image measurements during object tracking, control points with good radiometric properties should be favored. Radiometric conditions are determined by the illumination and the contrast of the features with their surroundings. Good radiometric properties are hence a prerequisite for reliable detection of image edges. It is assumed here that the contrast of edges between a visible and an invisible face is higher than the contrast of edges dividing two visible faces. This assumption is valid for most objects with little texture or changing surface coloring. Control points projecting to the image contour of the target object are hence to be favored over off-contour control points.
This is achieved by defining the following fitness measure, which results in a minimum of off-contour members in the set of active control points:

c1(S) = |{s ∈ S : contour(s)}| / M    (1)

Here, s is a single control point in the active set S, and contour(s) is a Boolean function which determines whether s is on the contour of the target object or not.

3.1.2. Edge curvature

As described by Harris [7], the control points used in the pose refinement process should ideally be located on straight model edges. Alternatively, model edges of low curvature can be used. To ensure optimal pose refinements, control points located on edge segments of low curvature should hence be selected. This is done by defining the edge curvature fitness measure:

c2(s) = (κ(s) - κ_min) / (κ_max - κ_min)    (2)
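The two measures above translate directly into code. This is a sketch under stated assumptions: the contour predicate and curvature values would come from the geometry analysis of Section 2, the normalization bounds κ_min and κ_max are taken over the candidate control points, and how c2 enters the overall objective (rewarding or penalising high curvature) lies beyond this excerpt.

```python
def contour_fitness(active, on_contour):
    """c1: fraction of the M active control points lying on the object
    contour (Eq. 1); on_contour(s) is the Boolean contour predicate."""
    return sum(1 for s in active if on_contour(s)) / len(active)

def curvature_fitness(kappa, kappa_min, kappa_max):
    """c2: curvature of the edge at a control point, normalised to the
    range [0, 1] over the candidate set (Eq. 2)."""
    return (kappa - kappa_min) / (kappa_max - kappa_min)
```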