
A Tutorial on Visual Servo Control

This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines, our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.

The vast majority of today's growing robot population operates in factories, where the environment can be contrived to suit the robot. Robots have had far less impact in applications where the work environment and object placement cannot be accurately controlled. This limitation is largely due to the inherent lack of sensory capability in contemporary commercial robot systems. It has long been recognized that sensor integration is fundamental to increasing the versatility and application domain of robots, but to date this has not proven cost effective for the bulk of robotic applications, which are in manufacturing. The frontier of robotics, which is operation in the everyday world, provides new impetus for this research. Unlike the manufacturing application, it will not be cost effective to re-engineer our world to suit the robot.

Vision is a useful robotic sensor since it mimics the human sense of vision and allows for noncontact measurement of the environment. Since the early work of Shirai and Inoue (who describe how a visual feedback loop can be used to correct the position of a robot to increase task accuracy), considerable effort has been devoted to the visual control of robot manipulators. Robot controllers with fully integrated vision systems are now available from a number of vendors. Typically, visual sensing and manipulation are combined in an open-loop fashion, "looking" then "moving." The accuracy of the resulting operation depends directly on the accuracy of the visual sensor and the robot end-effector.

An alternative to increasing the accuracy of these subsystems is to use a visual-feedback control loop, which will increase the overall accuracy of the system, a principal concern in most applications. Taken to the extreme, machine vision can provide closed-loop position control for a robot end-effector; this is referred to as visual servoing. This term appears to have been first introduced by Hill and Park [2] in 1979 to distinguish their approach from earlier "blocks world" experiments in which the system alternated between picture taking and moving. Prior to the introduction of this term, the less specific term visual feedback was generally used. For the purposes of this article, the task in visual servoing is to use visual information to control the pose of the robot's end-effector relative to a target object or a set of target features. The task can also be defined for mobile robots, where it becomes the control of the vehicle's pose with respect to some landmarks.

Since the first visual servoing systems were reported in the early 1980s, progress in visual control of robots has been fairly slow, but the last few years have seen a marked increase in published research. This has been fueled by personal computing power crossing the threshold that allows analysis of scenes at a sufficient rate to servo a robot manipulator. Prior to this, researchers required specialized and expensive pipelined pixel-processing hardware. Applications that have been proposed or prototyped span manufacturing (grasping objects on conveyor belts and part mating), teleoperation, missile tracking cameras, and fruit picking, as well as robotic ping-pong, juggling, balancing, car steering, and even aircraft landing. A comprehensive review of the literature in this field, as well as the history and applications reported to date, is given by Corke and includes a large bibliography.

Visual servoing is the fusion of results from many elemental areas including high-speed image processing, kinematics, dynamics, control theory, and real-time computing. It has much in common with research into active vision and structure from motion, but is quite different from the often-described use of vision in hierarchical task-level robot control systems. Many of the control and vision problems are similar to those encountered by active vision researchers who are building "robotic heads." However, the task in visual servoing is to control a robot to manipulate its environment using vision, as opposed to just observing the environment.

Given the current interest in visual servoing, it seems both appropriate and timely to provide a tutorial introduction to this topic. Our aim is to assist others in creating visually servoed systems by providing a consistent terminology and nomenclature, and an appreciation of possible applications. To assist newcomers, we describe techniques that require only simple vision hardware (just a digitizer), freely available vision software, and few assumptions about the robot and its control system; this suffices for many applications that do not demand high-performance vision and/or control. One difficulty in writing such an article is that the topic spans many disciplines that cannot be adequately addressed in a single article: for example, the relevant control problem is fundamentally nonlinear, and visual recognition, tracking, and reconstruction are fields unto themselves. We therefore concentrate on certain fundamental aspects of each discipline and provide an extensive bibliography to assist the reader who seeks greater detail. Our preference is always to present those ideas and techniques that we have found to function well in practice and that have some generic applicability. Another difficulty is the current rapid growth in the vision-based motion control literature, which contains solutions and promising approaches to many of the theoretical and technical problems involved. Again we have presented what we consider to be the most fundamental concepts, and again refer the reader to the bibliography.

The remainder of this article is structured as follows. Section II reviews the relevant fundamentals of coordinate transformations, pose representation, and image formation. In Section III, we present a taxonomy of visual servo control systems. The two major classes of systems, position-based visual servo systems and image-based visual servo systems, are then discussed in Sections IV and V, respectively. Since any visual servo system must be capable of tracking image features in a sequence of images, Section VI describes some approaches to visual tracking that have found wide applicability and can be implemented using a minimum of special-purpose hardware. Finally, Section VII presents a number of observations regarding the current directions of the research field of visual servo control.

In this paper, the task space of the robot, represented by T, is the set of positions and orientations that the robot tool can attain. Since the task space is merely the configuration space of the robot tool, the task space is a smooth m-manifold. If the tool is a single rigid body moving arbitrarily in a three-dimensional workspace, then T = SE(3) = R^3 × SO(3), and m = 6. In some applications, the task space may be restricted to a subspace of SE(3). For example, for pick and place we may consider pure translations, while for tracking an object and keeping it in view we might consider only rotations.
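The task-space notion above can be made concrete with a small sketch: an element of T = SE(3) represented as a 4x4 homogeneous transformation. This is an illustrative example of ours, not code from the tutorial; it assumes NumPy is available, and the helper names (`pose`, `rot_z`) are our own.

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about the z-axis by theta radians (an element of SO(3))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# An element of T = SE(3): the tool rotated 90 degrees about z and
# translated to (1, 2, 0.5).
T_tool = pose(rot_z(np.pi / 2), [1.0, 2.0, 0.5])

# Poses compose by matrix multiplication. A pure translation (identity
# rotation) stays in the R^3 subspace, as in the pick-and-place restriction.
T_place = pose(np.eye(3), [0.0, 0.0, -0.1]) @ T_tool
```

The restriction to a subspace of SE(3) then simply means holding the rotation (or translation) block fixed.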

To control the robot using information provided by a computer vision system, it is necessary to understand the geometric aspects of the imaging process. Each camera contains a lens that forms a 2D projection of the scene on the image plane where the sensor is located. This projection causes direct depth information to be lost, so that each point on the image plane corresponds to a ray in 3D space. Therefore, some additional information is needed to determine the 3D coordinates corresponding to an image plane point. This information may come from multiple cameras, multiple views with a single camera, or knowledge of the geometric relationship between several feature points on the target. In this section, we describe three projection models that have been widely used to model the image formation process: perspective projection, scaled orthographic projection, and affine projection. Although we briefly describe each of these projection models, throughout the remainder of the tutorial we will assume the use of perspective projection.
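The depth loss described above can be illustrated with a minimal pinhole perspective-projection sketch. This is our own example, not code from the tutorial; the function name `project` and the unit focal length are assumptions for illustration.

```python
def project(point, f=1.0):
    """Perspective (pinhole) projection of a 3D point, given in the camera
    frame, onto the image plane at focal length f: (x, y) = (f*X/Z, f*Y/Z)."""
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (f * X / Z, f * Y / Z)

# Depth is lost: every point on the ray through the optical center and
# (2, 1, 4) projects to the same image coordinates.
p1 = project((2.0, 1.0, 4.0))   # (0.5, 0.25)
p2 = project((4.0, 2.0, 8.0))   # same ray, twice as far: also (0.5, 0.25)
```

This is why a single image point determines only a ray, and extra information (a second camera, a second view, or target geometry) is needed to recover 3D coordinates.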

In the computer vision literature, an image feature is any structural feature that can be extracted from an image. Typically, an image feature will correspond to the projection of a physical feature of some object onto the camera image plane. A good feature point is one that can be located unambiguously in different views of the scene, such as a hole in a gasket or a contrived pattern. We define an image feature parameter to be any real-valued quantity that can be calculated from one or more image features. Some of the feature parameters that have been used for visual servo control include the image plane coordinates of points in the image; the distance between two points in the image plane and the orientation of the line connecting those two points; perceived edge length; the area of a projected surface and the relative areas of two projected surfaces; the centroid and higher-order moments of a projected surface; the parameters of lines in the image plane; and the parameters of an ellipse in the image plane. In this tutorial we will restrict our attention to point features whose parameters are their image plane coordinates.
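A few of the feature parameters listed above can be sketched for point features. These helpers (`distance`, `orientation`, `centroid`) are our own illustrations, not code from the tutorial.

```python
import math

def distance(p, q):
    """Image-plane distance between two point features."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def orientation(p, q):
    """Orientation (radians) of the line connecting two point features."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def centroid(points):
    """Centroid of a set of point features, e.g. sampled from the boundary
    of a projected surface."""
    n = len(points)
    return (sum(u for u, _ in points) / n, sum(v for _, v in points) / n)

# Four point features (image-plane coordinates, in pixels).
corners = [(10.0, 10.0), (30.0, 10.0), (30.0, 20.0), (10.0, 20.0)]
d = distance(corners[0], corners[1])     # 20.0
a = orientation(corners[0], corners[2])  # angle of the diagonal
c = centroid(corners)                    # (20.0, 15.0)
```

Each of these is a real-valued quantity computed from one or more image features, i.e. an image feature parameter in the sense defined above.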

