An Accurate and Robust Strip-Edge-Based Structured Light Method for High-Precision, High-Stability 3-D Micromeasurement of Shiny Surfaces (translation)

Zhan Song, Ronald Chung, Senior Member, IEEE, and Xiao-Ting Zhang

Keywords: edge detection, shiny surface, structured light system (SLS), 3-D reconstruction

Introduction

By their working principles, 3-D measuring systems can be classified into coordinate measuring machines (CMMs), time-of-flight systems [3], stereo vision [4], shape from shading [5], laser scanning [6], and structured light systems (SLSs). Each approach has its own limitations, advantages, and cost. Compared with CMMs, optical methods are noncontact and fast. However, the powder coating commonly applied to shiny surfaces (about 3–5 μm thick and unevenly distributed over the surface) introduces a distinct error into the final measurement. Laser scanners operate on almost all surfaces, but their physical scanning nature limits the operation speed, and the measuring accuracy is also affected by laser speckle [14]. Representative commercial systems include the Konica Minolta VIVID 9i and the FARO Laser ScanArm; there are also systems designed specifically for micrometer-level precision. One laser sensor, mounted 30 mm above the stage, measures the distance to the target point illuminated by the laser beam at a rate of 300 Hz, with a reported depth resolution of 0.01 mm. In [16], a laser line scanning system was studied, and a percentage error of about 9% was reported. Instead of a laser line, light strips cast by a projection device can also be used [12], [18].

In spatial coding schemes, the codeword of a pattern element is defined by the pattern values in the element's neighborhood. Temporal coding lets SLSs achieve higher data density: a sequence of binary strip patterns encodes the scene, so that the illuminated region is divided into 2^n subregions, and the pixel centers or the edges of the coded patterns [24] are usually encoded. To reach higher measurement accuracy, methods such as the phase shifting of Fig. 1 [9]–[11], [25], [26] and various phase-unwrapping algorithms [28], [29] have been proposed. In practice, a series of Gray code patterns is usually used to improve the stability of the unwrapping process; by combining the local phase value with the global Gray code value, each image point obtains a unique codeword. In an experiment on a 150-mm sphere, an average error of 0.05 mm was reported [25]. In [26], a sinusoidal shifting method was proposed for the 3-D measurement of flip-chip solder bumps; on a standard 1-mm gauge block, an average accuracy of 2 μm was obtained, although strong reflections from the bump surfaces degraded the reconstructed models. In the line-shifting method [27], the sinusoidal periodic pattern is replaced by a parallel-line pattern, as shown in Fig. 1(b). By shifting the line pattern six times in the x- and y-directions, respectively, six images for the x-coordinate and six for the y-coordinate are obtained. Since the line pattern is also periodic, it has inherent ambiguity, but Gray codes are brought in to reduce it. In a flatness-measurement experiment, a standard deviation of 0.028 mm was reported over a 200 × 200 mm planar region. In [30], a passive and active stereo vision system was used, with structured-light-based active vision performing the 3-D reconstruction; in a test on an 11.65 × 7.35 cm rectangular plate, the errors were within 1 mm.

Shiny surfaces remain a challenge. Because of strong reflections, the sinusoidal structure of the projected patterns [9]–[11], [25], [26] is destroyed in the image data. To remove the periodic ambiguity of strip patterns, traditional Gray code patterns are introduced, and to detect the strips more exactly, a strip-edge-based coding strategy is used.

Strip-Edge-Based Coding Strategy

P = m·G + S,  G ∈ {0, 1, 2, …, (2^n − 1)},  S ∈ {0, 1, 2, …, (m − 1)}    (1)
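The coding strategy combines the global Gray codeword with the local shift index into one unique code. A minimal sketch of the decoding, with n = 9 and m = 4 as in Fig. 2 (the helper names are mine, not the paper's):

```python
def gray_decode(g: int) -> int:
    """Recover the plain binary index from a reflected Gray code value."""
    i = g
    while g > 0:
        g >>= 1
        i ^= g
    return i

def unique_codeword(gray_bits, shift_index: int, m: int = 4) -> int:
    """P = m*G + S: combine the decoded Gray subregion index G with the
    local strip-shift code S in {0, ..., m-1} into a unique position code."""
    g = 0
    for b in gray_bits:          # MSB first, one bit per Gray-code frame
        g = (g << 1) | b
    return m * gray_decode(g) + shift_index
```

For example, the 9-bit Gray word for subregion G = 5 is 000000111, so with S = 3 the code is P = 4·5 + 3 = 23.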

where S is the local codeword generated by the strip edges, G is the global Gray codeword, and P is the final unique codeword.

Fig. 2. Coding strategy of Gray code combined with strip-pattern shifting. Top: a series of Gray code patterns (with n = 9) is used to construct the subregions, each with a unique codeword.

Subpixel-Accuracy Strip-Edge Detection

Because of optical limitations (modulation transfer function, depth of field, etc.), the projected strip edges are blurred in the captured image, as shown in Fig. 3(a), and the blur is aggravated by surface reflection. This makes the crossing of f_P and f_N indistinguishable, so the zero crossings cannot be used directly for edge localization. Instead, the difference image

f_D = f_P − f_N

is formed. With the zero-crossing positions of f_D+ and f_D− denoted x_{∇²f_D+ = 0} and x_{∇²f_D− = 0}, respectively, the subpixel strip-edge position is obtained as

x_edge = ( x_{∇²f_D+ = 0} + x_{∇²f_D− = 0} ) / 2.

System Calibration and 3-D Depth Computation

Fig. 4. Geometric relationship among the camera, the projector, and the world coordinate system. Once the geometry is calibrated, the depth of any surface point P can be computed by triangulation from the two corresponding points in the camera and the projector. Superscripts C and P indicate camera or projector parameters, and m and M denote a particular image point and scene point.

The intrinsic parameters are collectively represented as a 3 × 3 matrix A_c or A_p, whose entries are the scale factors along the u- and v-axes of the sensor array, the skew of the sensor axes, and the origin (u0, v0) of the sensor plane [32]. R and T denote the rotation and translation between the camera and projector coordinate systems; the camera and the projector each have six extrinsic parameters, collectively represented as a 4 × 4 matrix.

In the calibration procedure, a calibration board as shown in Fig. 5(a) is used to calibrate the camera. The printed pattern uses the gray levels 128 and 255, and the board takes the place of the object surface during calibration so that the projector can also be calibrated. Details of the calibration can be found in [33]. Calibration yields the intrinsic matrices A_c and A_p with respect to the same world coordinate system, the distortion parameters of the camera and the projector, and their extrinsic parameters R_c, T_c, R_p, T_p. From R_c, T_c, R_p, and T_p, the transformation matrix E between the camera and the projector follows from (6). Let a strip-edge point m extracted on the camera image plane and a strip-edge point m′ extracted on the projector pattern plane be associated with the same scene point, with the 2-D positions coded in the x-dimension. Once the two points are matched through their codewords, the depth z_c of the associated scene point is obtained by the classical triangulation algorithm [8].

Experiments

The experimental system is configured with an off-the-shelf pico-projector (3M, LCoS) and a camera, mounted at an angle of about 30° to each other. Although a larger angle can improve the measurement accuracy, it also brings more occlusion and scattering. The 3-D computation runs on a standard PC platform (Core 2 Duo, 3.3 GHz, 4-GB RAM).

Accuracy evaluation: As shown in Fig. 7(a), a plane is positioned head-on toward the projector and translated toward it 15 times, 0.1 mm per step; after each move the plane is reconstructed with the system, for 16 reconstructed planes in total. The orientation of the plane relative to the projector coordinate system is [0.027, 0.055, 0.998] (a unit normal, i.e., almost frontal); the results are shown in Fig. 7(b). Fig. 9(a) and (b) show the 3-D reconstruction of a shiny coin.

Comparisons

In the four-step phase-shifting method, each position (x, y) records four intensity values I1, I2, I3, I4, and the phase is solved as

φ(x, y) = arctan( (I4 − I2) / (I1 − I3) ).

The peak centers of the projected lines are obtained by convolving the image with a template signal and linearly interpolating the positions where the response changes [27]. The same Gray code strategy is used to remove the periodic ambiguity of both the sinusoidal and the line patterns, with the same procedure introduced in Section III-C. Fig. 14 shows an experiment on a ball-grid-array (BGA) sample: the bumps are reconstructed by the phase-shifting method, the line-shifting method, and the proposed method, with the results in Fig. 14(b)–(d), respectively. Visual measurement of shiny surfaces remains a challenging problem owing to the bumps' tiny size and uneven shininess. For a quantitative evaluation against the reference results and the original shiny surface (Fig. 11), the Geomagic software was used. As shown in Fig. 14, a standard BGA sample with a nominal bump height of 0.4 mm was reconstructed; with the proposed method, as in Fig. 14(d), the average height over all bumps is 0.396 mm, with a standard deviation of only 0.012 mm.

Conclusion and Future Work

Experiments on shiny coins, metal workpieces, and BGA bumps demonstrate the system's strong robustness and high measurement accuracy. Regions occluded from the projector or the camera cannot be illuminated and reconstructed; an additional camera mounted on the other side of the projector could alleviate this. The current system needs 3 s to complete a full scan; in the future, a pico-projector with a faster response time will be adopted.

References

[1] A. N. Belbachir, M. Hofstätter, M. Litzenberger, and P. Schön, "High speed embedded-object analysis using a dual-line timed-address-event temporal-contrast vision sensor," IEEE Trans. Ind. Electron., vol. 58, no. 3, pp. 770–783, Mar.
[2] IEEE Trans. Ind. Electron., vol. 55, no. 1, pp. 348–363, Jan.
[3] H. Cho and S. W. Kim, "Mobile robot localization using biased chirp spread spectrum ranging," IEEE Trans. Ind. Electron., vol. 57, no. 8, pp. 2826–2835, Aug. 2010.
[4] E. Grosso and M. Tistarelli, "Active dynamic stereo vision," IEEE Trans. Pattern Anal. Mach. Intell., vol. 17, no. 9, pp. 868–879, Sep.
[5] S. Y. Cho and W. S. Chow, "A neural-learning-based reflectance model for 3-D shape reconstruction," IEEE Trans. Ind. Electron., vol. 47, no. 6, pp. 1346–1350, Dec. 2000.
[6] F. Marino, P. De Ruvo, G. De Ruvo, M. Nitti, and E. Stella, "HiPER 3-D: An omnidirectional sensor for high precision environmental 3-D reconstruction," IEEE Trans. Ind. Electron., vol. 59, no. 1, pp. 579–591, Jan. 2012.
[7] B. Curless, "From range scans to 3D models," ACM SIGGRAPH Comput. Graph., vol. 33, no. 4, pp. 38–41, Nov.
[8] R. Hartley and P. Sturm, "Triangulation," Comput. Vis. Image Understanding, vol. 68, no. 2, pp. 146–157, Nov.
[9] D. Bergmann, "New approach for automatic surface reconstruction with coded light," in Proc. SPIE—Remote Sensing Reconstruction Three Dimensional Objects Scenes, 1995, vol. 2572.
[10] S. Zhang, "Phase unwrap error reduction framework for a multiple wavelength phase-shifting algorithm," Opt. Eng., vol. 48, no. 10, p. 105601, Oct. 2009.
[11] L. Zhang, B. Curless, and S. M. Seitz, "Rapid shape acquisition using color structured light and multi-pass dynamic programming," in Proc. Int. Symp. 3D Data Process., Visual., Transm., 2002, pp. 24–36.
[12] J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, "A state of the art in structured light patterns for surface profilometry," Pattern Recognit., vol. 43, no. 8, pp. 2666–2680, Aug. 2010.
[13] A. Weckenmann, G. Peggs, and J. Hoffmann, "Probing systems for dimensional micro- and nano-metrology," Meas. Sci. Technol., vol. 17, no. 3, pp. 504–509, Mar. 2006.
[14] S. Jecic and N. Drvar, "The assessment of structured light and laser scanning methods in 3D shape measurements," in Proc. 4th Int. Congr. Croatian Soc. Mech., 2003, pp. 237–244.
[15] M. Yao and B. G. Xu, "Evaluating wrinkles on laminated plastic sheets using 3D laser scanning," Meas. Sci. Technol., vol. 18, no. 12, pp. 3724–3730, Dec. 2007.
[16] G. Saeed and Y. M. Zhang, "Weld pool surface depth measurement using a calibrated camera and structured-light," Meas. Sci. Technol., vol. 18, no. 8, pp. 2570–2578, Aug. 2007.
[17] L. Yao, L. Z. Ma, and D. Wu, "Low cost 3D shape acquisition system using strip shifting pattern," Digit. Human Model., ser. Lecture Notes in Computer Science, vol. 4561, pp. 276–285.
[18] J. Salvi, J. Pagès, and J. Batlle, "Pattern codification strategies in structured light systems," Pattern Recognit., vol. 37, no. 4, pp. 827–849, Apr. 2004.
[19] K. C. Wong, P. Y. Niu, and X. He, "Fast acquisition of dense depth data by a new structured-light scheme," Comput. Vis. Image Understanding, vol. 98, no. 3, pp. 398–422, Jun.
[20] Z. Song and R. Chung, "Determining both surface position and orientation in structured-light based sensing," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 10, pp. 1770–1780, Oct.
[21] I. C. Albitar, P. Graebling, and C. Doignon, "Robust structured light coding for 3D reconstruction computer vision," in Proc. Int. Conf. Comput. Vis., 2007, pp. 1–6.
[22] J. Salvi, J. Batlle, and E. Mouaddib, "A robust-coded pattern projection for dynamic 3D scene measurement," Pattern Recognit. Lett., vol. 19, pp. 1055–1065, Sep. 1998.
[23] Y. C. Hsieh, "Decoding structured-light patterns for three-dimensional imaging systems," Pattern Recognit., vol. 34, no. 2, pp. 343–349, Feb. 2001.
[24] H. B. Wu, Y. Chen, M. Y. Wu, C. R. Guan, and X. Y. Yu, "3D measurement technology by structured light using stripe-edge-based Gray code," J. Phys., Conf. Ser., vol. 48, pp. 537–541, 2006.
[25] F. Sadlo and T. Weyrich, "A practical structured-light acquisition system for point-based geometry and texture," in Proc. Eur. Symp. Point-Based Graph., 2005, pp. 89–98.
[26] H. N. Yen, D. M. Tsai, and S. K. Feng, "Full-field 3D flip-chip solder bumps measurement using DLP-based phase shifting technique," IEEE Trans. Adv. Packag., vol. 31, no. 4, pp. 830–840, Nov. 2008.
[27] J. Gühring, "Dense 3D surface acquisition by structured light using off-the-shelf components," in Proc. SPIE, 2000, vol. 4309.
[28] T. Pribanic, H. Dzapo, and J. Salvi, "Efficient and low-cost 3D structured light system based on a modified number-theoretic approach," EURASIP J. Adv. Signal Process., vol. 2010, pp. 474389-1–474389-11, 2010.
[29] V. I. Gushov and Y. N. Solodkin, "Automatic processing of fringe patterns in integer interferometers," Opt. Lasers Eng., vol. 14, no. 4/5, pp. 311–324, 1991.
[30] C. Y. Chen and Y. F. Zheng, "Passive and active stereo vision for smooth surface detection of deformed plates," IEEE Trans. Ind. Electron., vol. 42, no. 3, pp. 300–306, Jun. 1995.
[31] D. Ziou and S. Tabbone, "Edge detection techniques: An overview," Int. J. Pattern Recognit., vol. 8, no. 4, pp. 537–559.
[32] IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330–1334, Nov.
[33] Z. Song and R. Chung, "Use of LCD panel for calibrating structured-light based range sensing system," IEEE Trans. Instrum. Meas., vol. 57, no. 11, pp. 2623–2630, Nov. 2008.
[34] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision. Cambridge, U.K.: Cambridge Univ. Press, 2004.

3-D Scene Reconstruction via Multiple Structured-Light-Based Commercial Depth Cameras (translation)

Depth cameras are now used for tasks such as tracking facial expressions [2], often in combination with passive vision [12], [13]. Fig. 1 shows two depth cameras capturing the same scene; note that when the two cameras are switched on simultaneously, the quality of the depth information degrades markedly (Fig. 1(c)). The interference that arises when multiple depth cameras are set up must therefore be resolved. To operate multiple depth cameras in the same scene, time multiplexing has been used with structured-light depth cameras (SLDCs) together with a correlation algorithm [6]; that approach is very effective for real-time depth reconstruction. When several SLDCs illuminate overlapping regions of the same environment, however, the depth obtained by each camera may be wrong. Hence, 3-D reconstruction with multiple SLDCs faces two major challenges: first, how to correctly distinguish the patterns; second, the numbers of projectors and cameras are not necessarily equal (e.g., when an extra infrared camera is added). We name the projectors P1, P2, …, PM and the cameras C1, …, CN. The projectors emit patterns that are random but constant over time. Let X = (x, y, z) denote a scene point.
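The notation above, in which each projector and camera maps the scene point X to a 2-D position, amounts to a pinhole projection. A minimal sketch, assuming a generic 3 × 4 projection matrix per device (intrinsics times [R | T]; the matrix here is illustrative, not from the paper):

```python
import numpy as np

def project(P: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Project a 3-D point X = (x, y, z) through a 3x4 projection matrix P
    and dehomogenize to 2-D pixel coordinates."""
    x = P @ np.append(X, 1.0)   # homogeneous image point
    return x[:2] / x[2]
```

With one such matrix per projector P_m and camera C_n, x^{P_m} = project(P_{P_m}, X) and x^{C_n} = project(P_{C_n}, X).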

Let x^{P_m} = P_{P_m}(X) denote the projection of a scene point X onto the pattern plane of projector P_m, and x^{C_n} = P_{C_n}(X) its image in the n-th camera C_n.    (1)

The intensity observed at a surface point is modeled as a linear combination of the emitted patterns:

I = Σ_m α_m · I_0^{P_m},    (3)

where I_0^{P_m} is the pattern emitted by projector P_m and α_m (α_m ≥ 0) is the reflectance weight of the corresponding pattern.

Likelihood from the camera-observation constraint. When only the camera-observation constraint is considered, the 3-D reconstruction problem is a very typical multi-view stereo (MVS) problem. One of the biggest problems of MVS is surfaces with sparse features. On the other hand, when the number of depth cameras is small (e.g., 2 or 3 depth cameras), MVS may still lose information to self- and mutual occlusions. In our implementation, we compute the mean-removed cross correlation (MRCC) between corresponding patches; the patch observed by camera C_n, taken through the projections in (1), is denoted I_{C_n}. The MRCC is computed as

MRCC(I_{C_i}, I_{C_j}) = Σ_k (I_{C_i}(k) − Ī_{C_i})(I_{C_j}(k) − Ī_{C_j}) / ( ‖I_{C_i} − Ī_{C_i}‖ · ‖I_{C_j} − Ī_{C_j}‖ ),

where Ī_{C_i} and Ī_{C_j} are the mean intensities of the corresponding patches in cameras C_i and C_j. We take the maximum MRCC over the different camera pairs as the camera-observation likelihood of the whole system:

L_C = max_{i ≠ j} MRCC(I_{C_i}, I_{C_j}).

Likelihood from the projector-camera constraint. The projector-camera constraint requires a more complex exploration because the linear weights α_m in (3) are unknown. In practice, the value of α_m depends at least on the distance between the surface point and each projector and on the surface orientation. The weights are estimated by

α̂_m = argmin_{α_m ≥ 0} ‖ Σ_m α_m I_0^{P_m} − I_C ‖,

and the projector-camera likelihood L_P is then measured by the MRCC between the synthesized pattern Σ_m α̂_m I_0^{P_m} and the camera observation I_C. The overall likelihood combines the camera-observation term L_C and the projector-camera term L_P.
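A minimal NumPy sketch of the MRCC and of the two likelihood ingredients (the function names are mine, and the clipped least-squares solve is a crude stand-in for a true nonnegative least-squares estimate of the α weights):

```python
import numpy as np

def mrcc(a: np.ndarray, b: np.ndarray) -> float:
    """Mean-removed cross correlation between two equally sized patches."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def camera_likelihood(patches) -> float:
    """L_C: maximum MRCC over all distinct camera pairs."""
    n = len(patches)
    return max(mrcc(patches[i], patches[j])
               for i in range(n) for j in range(i + 1, n))

def estimate_alphas(patterns, observed) -> np.ndarray:
    """Least-squares estimate of the pattern weights alpha_m,
    clipped to alpha_m >= 0 (a rough substitute for NNLS)."""
    A = np.stack([p.ravel() for p in patterns], axis=1).astype(float)
    alphas, *_ = np.linalg.lstsq(A, observed.ravel().astype(float), rcond=None)
    return np.clip(alphas, 0.0, None)
```

An MRCC of 1 means the two camera patches agree perfectly after removing their mean brightness, which is what makes the measure robust to per-camera exposure differences.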
The concrete implementation is a very challenging problem. On the other hand, multiple depth cameras interfere with one another through their random patterns, as in Fig. 1(c).

Four synthetic scenes, such as a table or a barrel, were rendered with the popular ray-tracing software POV-Ray [11]. In each scene, two depth cameras observe the target. Panels (a) and (b) show the depth maps estimated by each camera independently with the cross-correlation method (MCC). Panel (d), MS, reconstructs the depth map of the center view by maximizing the likelihood for every ray during a plane sweep. Fig. 5(e) shows the result of the proposed method; measured by PSNR, the proposed method performs far better than direct depth merging or MVS.

References

[1] J. Batlle, E. Mouaddib, and J. Salvi, "Recent progress in coded structured light as a technique to solve the correspondence problem: A survey," Pattern Recognition, vol. 31, no. 7, pp. 963–982, 1998.
[2] Q. Cai, D. Gallup, C. Zhang, and Z. Zhang, "3D deformable face tracking with a commodity depth camera," in ECCV, 2010.
[3] R. Crabb, C. Tracey, A. Puranik, and J. Davis, "Real-time foreground segmentation via range and color imaging," in CVPR Workshop on ToF-Camera Based Computer Vision, 2008.
[4] Y. Cui, S. Schuon, D. Chan, S. Thrun, and C. Theobalt, "3D shape scanning with a time-of-flight camera," in CVPR, 2010.
[5] O. Faugeras et al., "Real-time correlation-based stereo: Algorithm, implementations and applications," INRIA Technical Report 2013, 1993.
[6] Y.-S. Kang and Y.-S. Ho, "High-quality multi-view depth generation using multiple color and depth cameras," in ICME, 2010.
[7] V. Kolmogorov and R. Zabih, "Multi-camera scene reconstruction via graph cuts," in ECCV.
[8] K. Kutulakos and S. Seitz, "A theory of shape by space carving," IJCV, vol. 38, no. 3, pp. 199–218, 2000.
[9] J. Zhu, L. Wang, R. Yang, and J. Davis, "Fusion of time-of-flight depth and stereo for high accuracy depth maps," in CVPR, 2008.
[10] Q. Yang, K.-H. Tan, B. Culbertson, and J. Apostolopoulos, "Fusion of active and passive sensors for fast 3D capture," in MMSP, 2010.
[11] J. Sun, N.-N. Zheng, and H.-Y. Shum, "Stereo matching using belief propagation," IEEE Trans. PAMI, vol. 25, no. 7, pp. 787–800, 2003.

IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 60, NO. 3, MARCH
An Accurate and Robust Strip-Edge-Based Structured Light Means for Shiny Surface Micromeasurement in 3-D

Zhan Song, Ronald Chung, Senior Member, IEEE, and Xiao-Ting Zhang

Abstract—Three-dimensional measurement of shiny or reflective surface is a challenging issue for optical-based instrumentations. In this paper, we present a novel structured light approach for direct measurement of shiny targets so as to skip the coating preprocedure. In comparison with traditional image-intensity-based structured light coding strategies like sinusoidal patterns, the coding information is embedded in the strip edges in the illuminated patterns. With strip edges generally better preserved than individual image intensity in the image data in the presence of surface reflections, such a coding strategy is more robust. To remove the periodic ambiguity within strip patterns, traditional Gray code patterns are adopted. To localize the strip edges more precisely, both positive and negative strip patterns are used, and a zero-crossing-based edge detector of subpixel accuracy is proposed for strip-edge localization. The experimental setup is configured with merely an off-the-shelf pico-projector and a camera. Extensive experiments including accuracy evaluation, comparison with previous structured light algorithms, and the reconstruction of some real shiny objects are shown to demonstrate the system's accuracy and endurance against the reflective nature of the measured surfaces.

Index Terms—Edge detection, shiny surface, structured light system (SLS), 3-D reconstruction.

Manuscript received May 21, 2011; revised August 12, 2011, November 2011, December 12, 2011, and January 11, 2012; accepted February 13, 2012. Date of publication February 24, 2012; date of current version October 16, 2012. This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61002040, in part by NSFC-GuangDong under Grant 10171782619-2000007, and in part by the Introduced Innovative R&D Team of Guangdong Province-Robot and Intelligent Information Technology Team. Z. Song and X. T. Zhang are with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China. R. Chung is with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong. Color versions of one or more of the figures in this paper are available online.

I. Introduction

WITH THE development of microfabrication and electronic packaging technology, there is in industry an increasing need of demanding three-dimensional (3-D) information in micrometer-level precision for surface inspection and quality control purposes [1], [2]. Even in mundane tasks like coin anticounterfeiting and signature recognition, it has been recognized that 3-D measurements that are necessarily at micrometer level constitute another level of protection in addition to the existing 2-D methods. However, instruments and means for micrometer-level 3-D measurement in adequate accuracy and economy are still lacking. The fact that many of the target objects are shiny or reflective poses additional challenge to the measurement process.

By the working principles, 3-D measuring systems can be classified into coordinate measuring machines (CMMs), time of flight systems [3], stereo vision [4], shape from shading [5], laser scanning [6], and structured light systems (SLSs). Each approach comes with its own limitations, advantages, and cost. Compared with CMMs, optical-based methods have the advantages of being noncontact and fast and have been widely adopted in the commercial sector [7]. In these methods, laser beams or structured light patterns are projected onto the object surface, and the reflections are captured by imaging devices. Depth information at these illuminated areas can then be calculated via triangulation.

However, existing optical methods generally encounter difficulties with shiny or specular objects. Surfaces that are shiny generally have most of the incoming light beams reflected off the surfaces to directions other than that of the imaging device, and that greatly compromises the quality of the captured images. Subject to the low image quality, existing structured light means which are usually intensity based [9]–[11] are difficult to operate. The often-used practice is to coat the shiny surfaces with a thin layer of white powder to make them diffuse; covering surfaces with a thin opaque lacquer just for measuring purpose is a common practice. However, for applications where high accuracy is demanded, the coating layer (in a thickness of 3–5 μm, which could be unevenly distributed over the surface) will induce distinct effect to the final measurement accuracy. More importantly, such a treatment complicates the whole scanning procedure and could cause corrosions to the target surface. All these make existing structured light scanning techniques impractical in many applications [12].

In this paper, a novel strip-edge-based structured light coding strategy is presented. Information encoded in the pattern is not individual image intensity but strip-edge position. Compared with raw image intensities, strip edges have precise locations in the image data that are more distinguishable despite the presence of influence from the specular nature of the imaged scene to the overall image-intensity distribution. That makes its coding information more robust. To remove the periodic ambiguity with strip patterns, the traditional Gray code patterns are adopted. By the use of both positive and negative strip patterns and a zero-crossing-based feature detector that we developed, strip edges can be extracted accurately. We show experimental results from a system setup that is configured with merely an off-the-shelf pico-projector and an industrial camera. Both devices are consumer grade to make the system economical and applicable for wide industrial applications.

0278-0046/$31.00 © 2012 IEEE

This paper is organized as follows. Section II gives a brief review of the previous work on optical 3-D measuring technologies. The proposed structured light coding strategy, strip-edge detector, system calibration, and 3-D reconstruction are described in Section III. Experiments on accuracy evaluation, micrometer-level measurement of some real shiny objects, and comparisons with some traditional methods are presented in Section IV. A conclusion and possible future work are offered in Section V.

II. Previous Work

This work focuses on how high-accuracy 3-D measurement can be conducted over shiny micro-objects. Traditional mechanical probing means adopted in CMMs can achieve high precision, but at the expense of measuring speed [13]. Noncontact techniques, including laser scanning and SLS, have been advancing quickly in the last decade. Such techniques are triangulation-based optical measuring technologies. Laser scanner is operable on almost all surfaces, but because of its physical scanning-based nature, the operation speed is limited. The measuring accuracy is also affected by laser speckle effect [14]. SLSs consist of a projection device and cameras. By projecting some specially designed light patterns onto the target surface and imaging the illuminated scene, image points under the illuminations can each be labeled with a unique codeword. Such codewords are preserved in the image data. Since codewords on the projection side are known a priori, point-to-point correspondences between the image plane and the projection plane can be established readily. Three-dimensional information at such positions can then be determined via triangulation [8]. SLSs have the benefits of demanding only a simple setup, low cost, and fast operation speed, although its performance has certain dependence upon the surface condition of the inspected object and upon the working environment.

In the laser scanning approach, a beam of laser light is passed over the object while a camera mounted aside is to record the position of the projected laser profiles. There are an increasing number of off-the-shelf 3-D laser scanners available in the market. Differences of the various systems lie mainly on the laser strength, wavelength, working distance, and scanning speed. Representative systems in the market include Konica Minolta VIVID 9i and FARO Laser ScanArm. There are also systems specifically for measuring in micrometer level, with resolution of 0.1 × 0.5 × 0.05 mm in a measuring field of 150 × 25 mm. In [15], 3-D laser scanning is used to evaluate wrinkles on laminated plastic sheets. The sensor uses spot laser illumination and performs position detection through subpixel peak detection. The laser displacement equipment is slightly tilted to minimize the effect of specular reflection from plastic surfaces. The sensor is set at a height of 30 mm above the plane of the mechanical stage and measures the distance to the target spot illuminated by the laser beam at a rate of 300 Hz. In the experiment, a depth resolution of 0.01 mm was reported. In [16], a laser line scanning system is studied. In the implementation, the camera and the laser are first calibrated. Having found the top and bottom boundaries of the laser line in the image, the center of the laser line between the top and bottom boundaries is found along its length. These center points are used for ray tracing and depth calculation. When measuring a weld pool of depth about 0.38 mm, a percentage error of about 9% was reported in the experiment. Instead of laser line, a light strip illuminated by a projection device can also be used with the same scanning strategy [17].

Fig. 1. (a) Sinusoidal pattern is shifted four times to let the phase value at each image point be determined [25]. (b) Line pattern is shifted six times to encode each image point on the line [27].

In the use of structured light pattern, the coding strategies can be categorized to spatial coding and temporal coding schemes [12], [18]. In the spatial coding scheme, codeword at a pattern element is defined by the pattern values in the neighborhood, which can be about various gray levels [19], colors [20], or some geometrical primitives [21] in the pattern. De Bruijn sequences [22] and M-array [23] are the main coding schemes often employed. Since unique label at any pattern position comes with only a certain "spread" of pattern values in the vicinity of the position, the pattern positions that can be uniquely coded, and in turn their depth values subsequently recovered, cannot be too dense. In the temporal coding scheme, coding is achieved not in the spatial domain but in the time domain. A series of patterns is projected at different instants onto the target surface. The codeword for any given pixel on the camera's image plane is decided by the illuminations at that pixel over time. SLSs using this encoding scheme can result in stronger data density and higher accuracy in the measurement. In particular, Gray code is a widely used coding scheme because of its simplicity and robustness. If a binary Gray code pattern of codeword length n is to be used, an image sequence consisting of n + 1 binary strip patterns need be projected sequentially to encode the scene. With that, the scene in the image can be separated into 2^n subregions, and the pixel centers or the Gray code pattern's edges [24] are usually encoded.

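The temporal Gray coding just described can be sketched as follows (a minimal illustration under my own naming, not the authors' code; the bit count and pattern width are example values):

```python
import numpy as np

def gray_code_patterns(n_bits: int, width: int) -> np.ndarray:
    """Return a (n_bits, width) stack of binary strip patterns.

    Column x of the stack spells out the reflected Gray code of the
    subregion that pixel column x falls in, one bit per projected frame."""
    cols = np.arange(width)
    region = cols * (2 ** n_bits) // width          # subregion index per column
    gray = region ^ (region >> 1)                   # reflected Gray code
    bits = (gray[None, :] >> np.arange(n_bits - 1, -1, -1)[:, None]) & 1
    return bits.astype(np.uint8)
```

Adjacent subregions differ in exactly one pattern frame, which is what makes Gray coding robust: misclassifying a pixel near a strip boundary shifts the decoded position by at most one subregion.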
To achieve higher measurement accuracy, methods like phase shifting [9]–[11], [25], [26] and line shifting [27] are usually used as shown in Fig. 1. To solve the periodic ambiguity between sinusoidal and line patterns, various phase unwrapping algorithms have been proposed [28], [29]. In real applications, to improve the robustness of the unwrap procedures, a series of Gray code patterns are usually used. By combining the local phase value and the global Gray code value, a unique codeword for each image point can be obtained. In experimenting with a sphere of 150-mm diameter, an average deviation of 0.05 mm was reported [25]. In [26], a sinusoidal shifting SLS is proposed for the 3-D measurement of flip-chip solder bumps. In the measurement of a standard 1-mm gauge block, an average accuracy of 2 μm was obtained. However, subject to strong reflections of bump surfaces, the reconstructed 3-D models of the bumps have obvious discrepancy with the actual ones. In the line-shifting method [27], the sinusoidal periodic profile is substituted by a parallel line pattern as shown in Fig. 1(b). By consecutively shifting the line pattern six times in the x- and y-directions, respectively, six images for the x-coordinate and six images for the y-coordinate can be obtained. Since the line pattern is also periodic, it has inherent ambiguity, but the Gray code patterns can be brought in to reduce it. In a flatness-measurement experiment, a standard deviation of 0.028 mm is reported in measuring a planar region of 200 × 200 mm. In [30], a passive and active stereo vision system was used for reconstructing the surface of the deformed plate. In the system, passive stereo vision is used to detect the surface boundary, while active stereo vision with a projection of structured light is used for 3-D reconstruction. In an experiment with a rectangular plate of size 11.65 × 7.35 cm, at a working distance of about 40 cm, the measuring error in depth was within 2 mm, and the error in the x- and y-directions was within 1 mm.

Shiny surface with strong reflective nature is still a challenge to optical-based instrumentations including structured light means. Sinusoidal structure in the projected pattern [9]–[11], [25], [26] is destroyed in the image data owing to the strong reflections, and the projected lines are also usually flooded and undistinguishable in the captured image [27]. To improve the structured light pattern's antireflection capability, a strip-edge-based coding strategy is proposed in this paper. Compared with the image-intensity-based sinusoidal pattern and the thin-line-based line-shifting pattern, the strips can be better preserved in the image data in the presence of surface reflections, and that makes the coding procedure more robust. To remove the periodic ambiguity within strip patterns, traditional Gray code patterns are brought in. To determine the strip edges more precisely, both the positive and negative patterns are used. In particular, an improved zero-crossing edge detector is proposed for strip-edge localization in subpixel accuracy.

III. Strip-Edge-Based Structured Light Pattern Design and Strip-Edge Detection

The major processes involved in the devised system are thus the following. In the structured light pattern, a unique codeword is assigned to each strip-edge point, which is a combination of a local strip-edge code value and a global Gray code value. The illuminated object surface is then imaged, and the strip edges in the image data are precisely located. Once the point-to-point correspondences between the illuminated pattern and the image plane are established, spatial depth at the strip-edge points can be determined.

Fig. 2. Coding strategy of Gray code combined with strip shifting pattern. Top: Series of Gray code patterns (with n = 9) is used to construct 256 subregions each with a unique codeword. Bottom: Strip pattern of width 4 pixels is shifted three times to encode positions within each subregion. Strip edges of the shifting pattern will be detected and encoded in finer accuracy.

Compared with the raw image intensities and thin image lines that are vulnerable to the reflective nature of shiny surfaces, edges of binary strips in the illuminated pattern are more localizable and better preserved in the image data despite the specular nature of the object surfaces. Higher localizability of the edges comes with more accurate reconstruction and higher robustness of the measurement.

In the implementation, a series of (n + 1) Gray code patterns is first projected in order to divide the target surface into 2^n subregions (it is not 2^(n+1) regions because the all-zero codeword is not distinguishable in the image data and generally not used), each with a unique n-bit-long Gray codeword. Suppose that each of such subregions is m pixels wide on the projector's pattern generation plane. Then, a strip pattern in half of the finest strip width of the Gray code sequence is shifted m − 1 times, in steps of 1 pixel.

Such a periodic pattern, by itself, has a periodic ambiguity of m pixels (i.e., the width of the strip pattern on the projector's pattern generation plane) in distinguishing different points of the target surface. However, by combining the two codewords together, one from the Gray code and the other from the strip pattern, the m subdivisions can be introduced to each of the 2^n subregions to achieve finer 3-D reconstruction. The procedure can be expressed as

P = m·G + S,  G ∈ {0, 1, 2, …, (2^n − 1)},  S ∈ {0, 1, 2, …, (m − 1)}    (1)

where S is the local codeword generated by the strip pattern, G is the global Gray codeword used to remove periodic ambiguity among strip patterns, and P is the final unique codeword. For a pattern generation plane of 1024-pixel width in the projector, Gray code of 9-bit length (i.e., n = 9) can separate it into 256 subregions. Then, a strip pattern of width 4 pixels is shifted three times in step length of 1 pixel as illustrated by Fig. 2. Upon the shifting, strip edges visit every pixel position within each subregion.

Fig. 3. (a) Zero-crossing edge detector. (b) Without surface reflection…
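The use of the positive and the negative strip images can be sketched as below. This is a simplified 1-D illustration under my own naming: it locates the zero crossings of the difference profile f_D = f_P − f_N by linear interpolation, whereas the paper's detector additionally works on the second derivative of f_D.

```python
import numpy as np

def strip_edges_subpixel(f_pos: np.ndarray, f_neg: np.ndarray) -> np.ndarray:
    """Locate strip edges at subpixel accuracy along one image row.

    f_pos / f_neg: intensities of the same row under the positive and the
    negative (inverted) strip patterns. Edges sit where the difference
    profile f_D = f_P - f_N crosses zero; each crossing is refined by
    linear interpolation between the two bracketing samples."""
    fd = f_pos.astype(float) - f_neg.astype(float)
    s = np.sign(fd)
    idx = np.where(s[:-1] * s[1:] < 0)[0]      # sign change between i and i+1
    # linear interpolation: x = i + fd[i] / (fd[i] - fd[i+1])
    return idx + fd[idx] / (fd[idx] - fd[idx + 1])
```

Because the reflections affect f_P and f_N in nearly the same way, their difference cancels much of the reflection-induced bias, which is the point of projecting both patterns.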

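The depth computation described in the calibration section, in which two codeword-matched points in the camera and the projector are triangulated, can be sketched with the standard linear (DLT) method (a generic sketch, not the paper's exact algorithm; the projector is treated as an inverse camera with its own 3 × 4 matrix):

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Linear (DLT) triangulation: recover the 3-D point X from its 2-D
    positions x1 (camera) and x2 (projector) under 3x4 matrices P1, P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the homogeneous X
    X = vt[-1]
    return X[:3] / X[3]
```

The third component of the returned point is the depth z_c of the scene point associated with the matched strip-edge pair.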