
A New Method of Fusing Laser Data and Line Scanning Imagery

Ma Hong(1), Guo Jiao(2)
(1. Chongqing Survey Institute, Chongqing 400020; 2. Beijing Digsur Science & Technology Development Co., Ltd., Beijing 100012)

Abstract: The vehicle-borne laser scanning system (VLSS) is equipped with a laser scanner, a linear CCD camera and a GPS/INS/odometer-based navigation system, all tightly fixed to a stable elevating platform in a vehicle. It is a direct, high-precision way to acquire 3D information of the world and shows great promise in 3D city reconstruction. Data captured by the laser scanner are colorless, however, so it is necessary to obtain color information from another sensor, such as a camera. This paper first reviews existing methods of generating colorful point clouds with a VLSS and then proposes a method to fuse laser data and line scanning imagery, focusing especially on RGB color, by parallel-binding the linear camera and the laser according to the configuration of our VLSS. The work takes two steps: rigidly fixing the laser and the linear CCD camera in parallel, and forming a color look-up table from the image-forming principle of the sensors, so that colorful point clouds can be generated conveniently and quickly. We conduct a series of experiments on data captured in an urban environment with our VLSS and analyze the results. Through this method it is possible to realize automatic fusion of laser data (point clouds) and line scanning imagery, avoiding the complex process of camera calibration and finally offering a fast and effective way of texture matching in realistic 3D city reconstruction.

Keywords: vehicle-borne laser scanning system; laser point cloud; 3D city

一种基于线阵相机的激光彩色点云生成方法 (A Laser Colorful Point Cloud Generation Method Based on a Line-Scan Camera)
(1. Chongqing Survey Institute, Chongqing 400020; 2. Beijing Digsur Science & Technology Development Co., Ltd., Beijing 100012)

Chinese abstract (translated): The vehicle-borne laser scanning system (VLSS) is a multi-sensor integrated system, consisting mainly of a laser scanner, a linear CCD camera and a GPS/IMU navigation unit fixed on a stable, elevatable vehicle-mounted platform. It can quickly acquire high-precision 3D information and has broad application prospects in 3D digital city construction. Laser data carry no color information of their own, so a camera must acquire data synchronously, and how to fuse the two kinds of data quickly is a key problem. Based on the principle that both the laser scanner and the linear CCD acquire data in push-broom fashion, this paper uses careful design to achieve spatial and temporal synchronization of the laser and the CCD camera, thereby completing automatic mapping between the point cloud and RGB color values. First, a pre-calibration step ensures that the two instruments are mechanically mounted rigidly parallel; then target data are acquired synchronously and a color look-up table relating the point cloud to the imagery is built, so that a colorful point cloud of the target is obtained quickly and conveniently. A series of experiments and analyses on urban data acquired with our VLSS verifies the feasibility of the scheme. The method avoids the complex camera calibration process, realizes automatic fusion of laser point clouds and line-scan imagery, and makes efficient and fast construction of 3D digital cities possible.

Keywords (Chinese): vehicle-borne laser scanning system; laser point cloud; 3D city

1. Introduction

Recently the need for 3D GIS data has been increasing for various applications such as navigation, virtual reality, urban planning and management, disaster management and entertainment (cinema, computer games). The vehicle-borne laser scanning system (VLSS) is a good way to obtain 3D information directly with high precision, so it is under extensive study nowadays. It is equipped with a LiDAR (light detection and ranging) scanner, a linear CCD camera and a GPS/INS/odometer-based navigation system, all tightly fixed to a stable elevating platform in a vehicle. Since data captured by the laser scanner are colorless and not good enough for object interpretation, it is necessary to get color information from another sensor, such as a camera, and many researchers are exploring ways to accomplish the
fusion of point cloud data with imagery data, especially focusing on RGB color. There are two types of camera to consider as an image source: the frame camera and the line scanning camera. If we choose a frame camera, there is much work to do, and it is problematic in part because of the complex processing required and the difficulty of accurately assigning a color pixel from the image to the correct point in the cloud. Take calibration, an inevitable procedure, for example. The purpose of calibration is basically to integrate all the sensors and positioning devices into a single coordinate system so that the data captured by the sensors can be combined; it also includes computation of parameters such as the focal length of the camera lens, the image center and the lens distortion. In a vehicle-borne laser system, points generated by the laser sensor are projected with the collinearity equation, which uses the (X, Y, Z) of each point and nine orientation parameters of the camera, namely six exterior orientation parameters (X, Y, Z, omega, phi, kappa) and three interior orientation parameters (x0, y0, f), to find the corresponding point in the image plane and hence its color information. Obviously it is necessary to calibrate the frame camera first and to measure the attitude of the camera by GPS/IMU while scanning.

Described this way, every part of the work seems clear, but in practice every part is problematic and causes different errors. First, to get an accurate point position (X, Y, Z), the GPS/IMU must provide an accurate location and attitude of the laser scanner so that the absolute positions of the captured points can be computed. Then the camera must be calibrated in a calibration field to determine its interior orientation parameters, while the GPS/IMU must provide an accurate attitude of the camera. Every step is uncertain, leading to complex systematic errors, and the amount of calculation is massive. Considering all the shortcomings above and our purpose of generating colorful point clouds quickly and conveniently, it seems better to choose a line scanning camera, so that each color scan can be aligned with a laser scan and each laser point receives the correct color attribute. Moreover, a linear CCD is similar to a laser scanner in its manner of data acquisition: both can be fixed together and scan over objects while the vehicle moves, which helps the subsequent combination of the two data sources. What is more, in this way the color attribute can be associated with each point in the cloud even in real time.

As for the fusion of laser data and line scanning imagery, various approaches have been proposed. Huijing Zhao and Ryosuke Shibasaki at the University of Tokyo [1] use single-row laser range scanners and line cameras to capture an urban environment. Their processing can be described in two steps: a geometric model is first generated from the georeferenced laser range data, with feature classes extracted in a hierarchical way; different urban features are represented by different geometric primitives, such as a planar face, a TIN and a triangle. Texture of the urban features is then generated by projecting and resampling the line images onto the geometric model. Because their aim is to reconstruct a textured CAD model of the urban environment, they mix the steps of data fusion and feature extraction. They also need to calibrate the linear CCD camera first, which is hard and complex because the image forms only when either the camera or the object (target) moves. Their method does not actually make good use of the linear CCD camera.

In this paper, we present a novel method for generating colorful point clouds with a VLSS in which a 360° laser scanner and a linear scan camera are mounted on an elevating platform, with which we can get colorful point clouds with absolute coordinates.
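Since every laser line and camera line carries a shared GPS time mark, the color assignment reduces to a nearest-timestamp match between scan lines; below is a minimal sketch under that assumption. The function names, the data layout, and the direct mapping from scan-angle index to pixel index are illustrative, not the system's actual interface:

```python
import bisect

def nearest_scan_line(cam_times, t):
    """Index of the camera scan line whose GPS timestamp is closest to t.
    cam_times must be sorted ascending."""
    i = bisect.bisect_left(cam_times, t)
    if i == 0:
        return 0
    if i == len(cam_times):
        return len(cam_times) - 1
    # pick the closer of the two neighbouring scan lines
    return i if cam_times[i] - t < t - cam_times[i - 1] else i - 1

def colorize(points, cam_lines, cam_times):
    """Attach an RGB triple to each laser return.

    points    : iterable of (gps_time, pixel_index, x, y, z) laser returns
    cam_lines : list of scan lines, each a list of (r, g, b) pixels,
                ordered the same as cam_times
    cam_times : sorted GPS timestamps, one per camera scan line
    """
    colored = []
    for t, pix, x, y, z in points:
        line = cam_lines[nearest_scan_line(cam_times, t)]
        r, g, b = line[min(pix, len(line) - 1)]  # clamp at the swath edge
        colored.append((x, y, z, r, g, b))
    return colored
```

With both sensors hard-synchronized to GPS time and mounted rigidly parallel, such a table look-up replaces the collinearity-equation projection entirely, which is where the speed advantage of the approach comes from.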

It takes two steps to accomplish this work, namely fixing the laser and the line scanning camera in parallel and forming a color look-up table from the image-forming principle of the sensors. Colorful point clouds thus become convenient and fast to generate. We conduct a series of experiments to obtain data in an urban environment with our VLSS and give an analysis of the results obtained by this method. Through this method it is possible to realize automatic fusion of laser data (point clouds) and line scanning imagery, avoiding the complex process of camera calibration and finally offering a fast and effective way of texture matching in realistic 3D city reconstruction.

2. System configuration

The VLSS integrates GPS, IMU, laser scanner, CCD camera and other sensors. All sensors are tightly fixed to an elevating platform on the vehicle and are synchronized with the movement of the platform. GPS measures the position at every moment along the trajectory of the platform; the IMU determines the position and attitude of the platform and can be combined with GPS for integrated navigation; the laser scanner records the distance and angle from the object point to the platform [4]; the linear CCD camera obtains image information along the street, which serves as texture information for realistic 3D modeling. Fig. 2.1 shows the sensors of the system. Data can be acquired while the vehicle moves at normal traffic speeds.

Figure 2.1 Sensors of the system

All the sensors are carefully chosen to satisfy the need of obtaining realistic 3D information of a city with high precision. As for the camera optics, each scanning line of the linear CCD camera covers a field of view slightly wider than the nominal swath width of the scanner it is mounted on. Because the camera and the laser are similar in scanning type, the subsequent combination of the two data sources is easier. In the VLSS, a microcontroller-controlled network was designed and built to control the various sensors and configurations as well as to interface with the logging computer and the GPS/INS systems. As a result
, the laser and the linear CCD camera are both synchronized to GPS time, enabling us to uniquely reference the RGB attribute from the camera to the laser points in the cloud.

Tab. 1 gives the specifications of the sensors in our VLSS. The VLSS uses an inertial measuring unit (IMU) at 200 Hz, a GNSS receiver at 20 Hz, the laser scanner RA360 and the linear CCD camera XIIMUS 4096 CT. The types and specifications of the sensors are shown in Tab. 1.

Table 1. Specifications of sensors
- GPS, NovAtel DL-4: post-processing differential positioning precision 5 mm + 1 ppm (CEP); speed difference precision 0.03 m/s
- IMU, home product: random drift of gyroscope 0.01 degree per hour; zero offset of accelerometer 100 ug
- Laser scanner, RA360: angle resolution 3 mrad; range resolution 10 mm; scanning frequency 100k
- Linear camera, XIIMUS 4096 CT: pixel size 10 um x 10 um; pixel depth 12 bits; focal length 28 mm
- Elevating platform: self-built lift

While operating the experiments, we fixed the laser scanner and the linear scan camera in the way shown in Fig. 2.2. A computer controls these sensors, making them start working at the same time. All the sensors are mounted on an elevator lift, which goes up and down at a constant speed of 0.778 cm/s (0.00373 cm). In the experiment we use the elevator lift as a guide rail to simulate the motion of the moving vehicle, so both the laser and the linear camera work as the mounting bracket moves upward. Besides, when working in the city, the way the fixed laser-camera pair is mounted on the elevator lift is arbitrary.

Figure 2.2 Mounting bracket and elevator lift

3. Mathematic principle

From general knowledge we know that there are six exterior orientation parameters of the camera, or of the laser, with respect to the IMU, namely three linear offsets (X, Y, Z) and three rotations (omega, phi, kappa). But if we want to decide the relative position of the laser and the camera, there is only one set of relative orientation parameters. Once the relative position between the two main sensors is fixed, it is only necessary to record the attitude of the laser, from which the attitude of the linear scan camera follows naturally. Our work deals with the former in a novel way. In our system, we fixed the laser's scanning center and the camera's perspective center on a line parallel to the moving direction and defined two local Cartesian coordinate systems: the laser coordinate system and the line scanning camera coordinate system.

Figure 3.1 Coordinate systems of scanning

Fig. 3.1 shows the ideal situation of the two coordinate systems. The origin of the laser coordinate system is at the laser's center of scanning (L) and the origin of the camera coordinate system is at the camera's perspective center (C). The Lx and Cx axes follow the moving direction of the vehicle, perpendicular to the scanning plane; the Ly and Cy axes are the scanning direction, coincident with the camera's optical axis; the Lz and Cz axes are vertical. Each set of XYZ axes forms a Cartesian triplet that reflects the attitude of its sensor. It is easy to see that in the ideal situation the scanning planes of the two sensors are parallel.

Figure 3.2 Views representing the relationship between the camera and laser coordinate systems

But when the linear CCD camera is fixed to the laser, their relationship is arbitrary, so we need to adjust it somehow. In this paper a novel way to solve this problem is presented: adjusting the two sensors to be rigidly parallel by adding feeler leaves between the nickel-clad plates. As shown in Fig. 3.2, we need to adjust two angles between the two coordinate systems, namely the field angle and the intersection angle. In Fig. 3.2 (plan view), when the extended lines of the Ly-axis and the Cy-axis intersect, the field angle is not zero, which affects the result of finding the interval of homologous scanning lines. As shown in Fig. 3.2 (front view), if the intersection angle is not zero, the scanning planes of the laser and the linear camera intersect. Besides, the side
view shows that even when the scanning plane of the laser is parallel to that of the linear CCD camera, we still have to figure out which pixel in the corresponding scanning line is the one we want.

Figure 3.3 Intersection angle

After the VLSS scans over a series of targets on the wall, we get the original data of the camera and the laser. Because the camera and the laser, lifted by the elevating platform, work together with GPS, they share the same time mark for every chip of data. We can then measure corresponding target points in the laser point cloud and in the imagery respectively and compare their time marks. If the two sensors were in the ideal relative position, the time intervals of adjacent points in the laser data and in the camera data would be the same. If not, there is an intersection angle α, as shown in Fig. 3.3. To eliminate α, we calculate the differences between the points and find out how much distance should be added between the three rating nuts, using a simple formula:

    x1 = t1 * v,  x2 = t2 * v
    tan α = (x1 - x2) / d
    Δd = tan α * s                    (1)

In the formula, s is the distance interval between two rating nuts; t1 and t2 are the time intervals of the two points as measured in the laser data and the camera data respectively; d is the real distance between the two points measured by total station; v is the lifting speed of the elevator; and Δd is the thickness to be added. From these formulas we know how wide the rating nuts should be, and by adding a series of tiny rating nuts we can make α zero.

The other angle between the scanning planes of the two sensors is the field angle, which can be eliminated by measuring the distance between adjacent points after the intersection angle is fixed, which means another similar experiment is to be carried out. But it turns out to be simpler, for we just need to measure several pairs of corresponding points, get their distances in the computer, and compare them to the distance
between the origins of the two coordinate systems measured by ruler. Then we can use similar methods to adjust the field angle.

With the field angle and the intersection angle between the scanning planes of the two sensors eliminated, only a side (x) parallax remains to be removed by software, which makes the RGB referencing algorithm simpler and faster. Besides, if we want a more accurate answer, we should measure several pairs of points and average their interval distances, so the formula becomes:

    x1i = t1i * v,  x2i = t2i * v
    tan α = Σ(x1i - x2i) / (n * d)
    Δd = tan α * s                    (2)

4. Contents of experiments

As presented in the beginning, the aim of this paper is to solve the problem of fusing LiDAR data with linear scan imagery so as to form a fast and correct correspondence between 3D position information and color information, which makes for richer information extraction from the point cloud and reduces the amount of calculation needed to texture targets compared with customary methods, or even realizes automatic texture matching. Besides, this method can largely avoid external parameter calibration of the camera, so that a colorful 3D model is available quickly and correctly while its calculation stays simple.

Following the principle of section 3, we carry out a series of experiments to finish the adjusting work. As shown in Fig. 4.2, we can easily make the x-axes roughly parallel by means of the precise nickel-clad plate, in which there are three rating nuts arranged like a tripod base (Fig. 4.1). This gives a good start to our work, and the next two steps of data fusion processing are detailed below: paralleling the laser and the line camera, and then forming the color look-up table.

Figure 4.1 Linear scan camera and supporting shelf
Figure 4.2 Installation site of line scan camera and laser

4.1 Parallel laser and line camera

The line scan color camera is attached to the laser and then synchronized and aligned with respect to the laser so that each laser scan line is covered by a corresponding color scan line. To accomplish this, the camera mounting location was chosen to be in the laser scanning plane and as close as possible to the LiDAR coordinate reference center.

Figure 4.3 Vertical location of the mounting bracket in the experiment

To conduct the experiment, we combine all the sensors in the way shown in Fig. 4.3. The two red arrows show the directions of moving and scanning. The elevator lift is used as a moving guide to move the sensors upward at constant speed. First we attach the linear scan camera next to the laser, making sure that their scanning directions are the same, and it is best to make their centers nearly collinear. After that, the scanning planes of the laser and the linear scan camera are almost parallel, and the remaining work is to adjust their relative position slightly, by appropriately adding feeler leaves between the double-deck supporting splints as shown in Fig. 4.1 and Fig. 4.2, until the scanning planes of the two scanners are exactly parallel. To satisfy the later demand of measuring, the error should be controlled within 1 cm at first. By experimenting with the system and adjusting it, namely adding a series of feeler leaves
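The feeler-leaf (shim) thickness implied by the measured time offsets follows formula (2) of section 3 directly; a small sketch, in which the lift speed v, the time intervals, the target spacing d and the rating-nut spacing s are all illustrative values, not measurements from the paper:

```python
def shim_thickness(t_laser, t_camera, v, d, s):
    """Formula (2): average the apparent offsets (t1i - t2i) * v over all
    measured point pairs, divide by the true spacing d to get tan(alpha),
    and scale by the rating-nut spacing s to get the shim thickness."""
    offsets = [(t1 - t2) * v for t1, t2 in zip(t_laser, t_camera)]
    tan_alpha = (sum(offsets) / len(offsets)) / d
    return tan_alpha * s

# illustrative numbers: lift speed 0.778 cm/s (from section 2),
# hypothetical time intervals in s, spacings d and s in cm
delta = shim_thickness([1.00, 2.00], [0.90, 1.90], v=0.778, d=10.0, s=5.0)
```

When the averaged offset is zero, the intersection angle is already eliminated and no shim is needed; the sign of the result tells on which side of the nut the feeler leaves should go.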
