附录 I  外文文献翻译

(1) 原文:

A Robust Vision-based Moving Target Detection and Tracking System

Abstract

In this paper we present a new algorithm for real-time detection and tracking of moving targets in terrestrial scenes using a mobile camera. Our algorithm consists of two modes: detection and tracking. In the detection mode, background motion is estimated and compensated using an affine transformation. The resulting motion-rectified image is used to detect the target location with a split-and-merge algorithm. We also check additional features for precise detection of the target location. When the target is identified, the algorithm switches to the tracking mode. A modified Moravec operator is applied to the target to identify feature points. The feature points are matched with points in the region of interest in the current frame. The corresponding points are further refined using disparity vectors, and the refined points define the new position of the target in the current frame. The tracking system is capable of target shape recovery and therefore can successfully track targets whose distance from the camera varies, or while the camera is zooming. Local and regional computations make the algorithm suitable for real-time applications. Experimental results show that the algorithm is reliable and can successfully detect and track targets in most cases.

Keywords: real-time moving target tracking and detection, feature matching, affine transformation, vehicle tracking, mobile camera images.

1 Introduction

Visual detection and tracking is one of the most challenging issues in computer vision. Its applications are numerous and span a wide range, including surveillance systems, vehicle tracking, and aerospace applications, to name a few. Detection and tracking of abstract targets (e.g. vehicles in general) is a very complex problem and demands sophisticated solutions using conventional pattern recognition and motion estimation methods. Motion-based segmentation is one of the most powerful tools for detection and tracking of moving targets. While it is simple to detect moving objects in image sequences obtained by a stationary camera [1], [2], conventional difference-based methods fail to detect moving targets when the camera is also moving. In the case of a mobile camera, all of
the objects in the image sequence have an apparent motion, which is related to the camera motion. A number of methods have been proposed for detection of moving targets with a mobile camera, including direct estimation of the camera motion parameters [3], optical flow [4], [5], and geometric transformation [6], [7]. Direct measurement of the camera motion parameters is the best method for cancelling the apparent background motion, but in some applications it is not possible to measure these parameters directly. Geometric transformation methods have low computational cost and are suitable for real-time purposes. In these methods a uniform background motion is assumed, and an affine motion model can be used to model it. Once the apparent motion of the background is estimated, it can be exploited to locate moving objects.

In this paper we propose a new method for detection and tracking of moving targets using a mobile monocular camera. Our algorithm has two modes: detection and tracking. This paper is organized as follows. In Section 2 the detection procedure is discussed. Section 3 describes the tracking method. Experimental results are shown in Section 4 and conclusions appear in Section 5.

2 Target detection

In the detection mode we use an affine transformation and the LMedS (least median of squares) method for robust estimation of the apparent background motion. After compensation of the background motion, we apply a split-and-merge algorithm to the difference of the current frame and the transformed previous frame to obtain an estimate of the target positions. If no target is found, it means either that there is no moving target in the scene or that the relative motion of the target is too small to be detected. In the latter case it is possible to detect the target by adjusting the frame rate of the camera. The algorithm accomplishes this automatically by analyzing the succeeding frames until a major difference is detected. We designed a voting method to verify the targets based on a priori knowledge of them. For the case of vehicle detection we used vertical and horizontal gradients to locate interesting features, as well as a constraint on the area of the target, as discussed in this section.

2.1 Background motion estimation

An affine transformation [8] has been used to model the motion of the camera. This model includes rotation, scaling, and translation. The 2-D affine transformation is described as follows:

    X_i = a_1 x_i + a_2 y_i + a_5,   Y_i = a_3 x_i + a_4 y_i + a_6        (1)

where (x_i, y_i) are locations of points in the previous frame, (X_i, Y_i) are locations of points in the current frame, and a_1 ... a_6 are motion parameters. This transformation has six parameters; therefore three matching pairs are required to fully recover the motion. It is necessary to select the three points from the stationary background to assure an accurate model of the camera motion. We used the Moravec operator [9] to find distinguished feature points and so ensure a precise match. The Moravec operator selects pixels with the maximum directional gradient in the min-max sense. If the moving targets constitute a small area of the image (i.e. less than 50%), then the LMedS algorithm can be applied to determine the affine transformation parameters of the apparent background motion between two consecutive frames according to the following procedure.

1. Select N random feature points from the previous frame, and use the standard normalized cross-correlation method to locate the corresponding points in the current frame. The normalized correlation is given by:

    r = Σ (I_1(u,v) − Ī_1)(I_2(u,v) − Ī_2) / sqrt( Σ (I_1(u,v) − Ī_1)^2 · Σ (I_2(u,v) − Ī_2)^2 )        (2)

where Ī_1 and Ī_2 are the average intensities of the pixels in the two regions being compared, and the summations are carried out over all pixels within small windows centered on the feature points. The value r measures the similarity between the two regions and lies between 1 and −1. Since it is assumed that moving objects occupy less than 50% of the whole image, most of the N points will belong to the stationary background.

2. Select M random sets of three feature points (x_i, y_i, X_i, Y_i), for i = 1, 2, 3, from the N feature points obtained in step 1. (x_i, y_i) are coordinates of the feature points in the previous frame, and (X_i, Y_i) are their correspondents in the current frame.

3. For each set, calculate the affine transformation parameters.

4. Transform the N feature points of step 1 using the M affine transformations obtained in step 3, and calculate the M medians of the squared differences between the corresponding points and the transformed points. Then select the affine parameters for which the median of squared differences is minimum.

According to the above procedure, the probability p that at least one data set lies entirely in the background with its correct corresponding points is derived from the following equation [7]:

    p = 1 − (1 − ((1 − ε) q)^3)^M        (3)

where ε (< 0.5) is the ratio of the moving-object regions to the whole image and q is the probability that corresponding points are correctly found. In [7] it has been shown that the above method gives an accurate and reliable model.

2.2 Moving target detection using background-motion-compensated frames

Once the affine parameters are estimated, they can be used to cancel the apparent background motion by transforming the previous frame. The difference of the current frame and the transformed previous frame then reveals the true moving targets. We then apply a threshold to produce a binary image. The results of the transformation and segmentation are shown in Figures 1-a and 1-b. Some parts are segmented as moving targets due to noise; the connected-component property can be applied to reduce such errors. We use a split-and-merge algorithm to find target bounding boxes. If no target is found, it means either that there is no moving target in the scene or that the relative motion of the target is too small to be detected. In the latter case it is possible to detect the target by adjusting the frame rate of the camera. The algorithm accomplishes this automatically by analyzing the succeeding frames until a target is detected. Our special interest is detection and tracking of moving vehicles, so we used the aspect ratio and horizontal and vertical lines as constraints to verify vehicles. Our experiments show that comparing the length of horizontal and vertical lines in the target area with the perimeter of the target gives a good clue about the nature of the target.

3 Target tracking

After a target is verified, the algorithm switches into the tracking mode. A modified Moravec operator is applied to the target to identify feature points. These feature points are matched with points in the region of interest in the current frame. Disparity vectors are computed for the matched pairs of points, and we use these vectors to refine the matched points. The refined points define the new position of the target in the current frame. The algorithm switches back to the detection mode whenever the target is missed. Although the detection algorithm described above could be used for tracking too, the tracking algorithm described in this section has a very low computational cost in contrast with it. On the other hand, once the target is detected it is not restricted to keep moving in the tracking mode. The target can also be larger than 50% of the scene in the tracking mode, which means the camera can zoom in for a larger view of the target while tracking.

Figure 1: Two consecutive frames and their difference after background motion compensation. The calculated affine parameters are: a1 = 0.9973, a2 = −0.004, a3 = 0.008, a4 = 1.0022, a5 = 1.23, a6 = −2.51.

When the size of the target is fixed, the normalized cross-correlation or SSD (sum of squared differences) method can be applied directly for target tracking. But we do not restrict the target to a fixed size; our tracking algorithm is capable of updating the target shape and size. To achieve this goal, the algorithm is based on dynamic feature-point tracking: we select feature points from the target area and track them in the next frame. Horizontal and vertical lines are important features for vehicle tracking, so we used an optimized Moravec operator which selects feature points considering only horizontal and vertical gradients. This improves the selection of interesting feature points located on geometrically well-defined structures such as vehicles, and is very useful when dealing with occlusion. Our tracking algorithm consists of four steps, described as follows.

1. Apply the modified Moravec algorithm to select feature points in the target area in the previous frame.
2. Find correspondents of the feature points in the ROI (region of interest) of the current frame using normalized cross-correlation.
3. Calculate the disparity vectors and, based on these vectors, refine the feature points. Refining is defined as omitting features with inconsistent vectors; this helps remove non-target feature points.
4. Based on the locations of the refined corresponding points and the previous size of the target, determine the location and size of the target in the current frame.

To refine the feature points in step 3, we calculate the mean and variance of the disparity vectors and, based on these values, remove points whose disparity vectors are far from the mean disparity. In figure 2 the refined disparity vectors are shown for a target. When there are no feature points to track, or corresponding points are not found properly, we assume that the target is lost, so the algorithm switches to the detection mode to find a target.

4 Results

The algorithm has been implemented on a Pentium III 500 MHz using a Visual C++ program. We have tested the algorithm with both simulated and actual image sequences of vehicles in different landscapes. The system can detect and track targets in real time. We achieved a frame rate of 4 frames/second for detection and 15 frames/second for tracking of a 100*90-pixel target in 352*288-pixel video frames. We tested the proposed algorithm with a wide variety of image sequences. Figure 3 shows some results for detection of various objects against arbitrary backgrounds; as shown, the algorithm has successfully detected the targets. In figure 4 the tracking results for a vehicle are shown. In this example the tracked vehicle turns off the road and its shape and size change; as shown in the pictures, both the size and the shape of the vehicle vary, but our tracking algorithm successfully tracks it. The results demonstrate the accuracy of the method in detecting and tracking moving objects. Comparison of the results generated by the proposed method with those of other methods showed that more reliable results can be obtained using the proposed method in real time.

Figure 2: The refined disparity vectors used for tracking. (a) Previous frame, (b) current frame, and (c) current frame with disparity vectors.

Figure 3: Various targets detected using the detection algorithm.

Figure 4: Tracking a vehicle while it turns and changes its size and shape.

5 Conclusion

In this paper we proposed a new method for detection and tracking of moving objects using a mobile camera. We used two different methods for detection and tracking. For detection, we used the affine transform and the LMedS method for estimation of the apparent background motion. When the apparent motion of the background is cancelled, the difference of two consecutive frames is used for detection of the moving target; we also check the target for certain features. For tracking, we used a modified Moravec operator and feature matching with disparity-vector refining. We tested our algorithms with a wide variety of image sequences. The proposed methods successfully detect and track moving vehicles and objects in arbitrary scenes obtained from a mobile video camera. The tracking system is capable of target shape recovery and therefore can successfully track targets whose distance from the camera varies, or while the camera is zooming. Local and regional computations make the algorithm suitable for real-time applications. Moreover, it is implemented in an understandable and structured way. Experimental results have shown that the algorithm is reliable and can successfully detect and track targets in most cases. To add more robustness to the tracking system, it is possible to combine edge/texture-based tracking methods with the current approach.
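The LMedS procedure of Section 2.1 can be illustrated with a short sketch. This is not the authors' implementation: feature matching is replaced by already-paired point lists, and the trial count and seed are assumed values.

```python
import random

def solve_affine(src, dst):
    """Solve the six affine parameters a1..a6 of equation (1) from exactly
    three point correspondences: X = a1*x + a2*y + a5, Y = a3*x + a4*y + a6."""
    def gauss(A, b):
        # Gaussian elimination with partial pivoting on a 3x3 system.
        n = len(A)
        M = [A[i][:] + [b[i]] for i in range(n)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            if abs(M[piv][col]) < 1e-12:
                raise ValueError("degenerate (collinear) sample")
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x
    A = [[x, y, 1.0] for x, y in src]
    a1, a2, a5 = gauss(A, [X for X, _ in dst])
    a3, a4, a6 = gauss(A, [Y for _, Y in dst])
    return (a1, a2, a3, a4, a5, a6)

def lmeds_affine(prev_pts, curr_pts, trials=200, seed=1):
    """Least median of squares: among random 3-point models, keep the one
    whose median squared residual over all correspondences is smallest."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(trials):
        sample = rng.sample(range(len(prev_pts)), 3)
        try:
            a = solve_affine([prev_pts[i] for i in sample],
                             [curr_pts[i] for i in sample])
        except ValueError:
            continue  # collinear sample, try another
        a1, a2, a3, a4, a5, a6 = a
        res = sorted(
            (a1 * x + a2 * y + a5 - X) ** 2 + (a3 * x + a4 * y + a6 - Y) ** 2
            for (x, y), (X, Y) in zip(prev_pts, curr_pts))
        med = res[len(res) // 2]
        if med < best_med:
            best, best_med = a, med
    return best
```

Because the paper assumes moving objects cover less than 50% of the image, the median residual of a correct background model stays small, so correspondences on moving targets cannot pull the estimate away.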

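The normalized cross-correlation of equation (2) admits a minimal sketch; the windows here are toy 2-D lists of intensities, whereas a real matcher would slide the window over the region of interest.

```python
import math

def ncc(w1, w2):
    """Normalized cross-correlation (equation (2)) between two equal-sized
    intensity windows; returns r in [-1, 1]."""
    p1 = [v for row in w1 for v in row]
    p2 = [v for row in w2 for v in row]
    m1 = sum(p1) / len(p1)          # average intensity of window 1
    m2 = sum(p2) / len(p2)          # average intensity of window 2
    num = sum((a - m1) * (b - m2) for a, b in zip(p1, p2))
    den = math.sqrt(sum((a - m1) ** 2 for a in p1) *
                    sum((b - m2) ** 2 for b in p2))
    return num / den if den else 0.0
```

Subtracting the means and dividing by the energies makes r invariant to affine intensity changes, which is what lets window matching tolerate brightness and contrast differences between frames.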
(2) 译文

一个鲁棒的基于机器视觉的运动目标检测与跟踪系统

摘要

本文提出了一种使用移动相机对地面场景中的运动目标进行实时检测与跟踪的新算法。我们的算法包括两种模式:检测和跟踪。在检测模式下,使用仿射变换对背景运动进行估计和补偿,然后对运动校正后的图像应用分裂合并算法来检测目标位置;为了精确定位目标,我们还检查了其他特征。当目标被确认后,算法切换到跟踪模式。将改进的 Moravec 算子应用于目标以提取特征点,并将这些特征点与当前帧感兴趣区域中的点进行匹配,再利用视差向量对匹配点进行筛选,筛选后的点确定了目标在当前帧中的新位置。该跟踪系统具有恢复目标形状的能力,因此能够成功跟踪与相机距离变化的目标,或在相机变焦时继续跟踪。局部和区域化的计算使该算法适用于实时应用。实验结果表明该算法是可靠的,在大多数情况下都能成功地检测和跟踪目标。

关键词:实时运动目标跟踪与检测,特征匹配,仿射变换,车辆跟踪,移动相机图像

1 简介

视觉检测与跟踪是计算机视觉中最具挑战性的问题之一。其应用十分广泛,包括监控系统、车辆跟踪和航空航天应用等,不一而足。抽象目标(例如一般意义上的车辆)的检测和跟踪是一个非常复杂的问题,需要借助传统的模式识别和运动估计方法给出复杂的解决方案。基于运动的分割是检测和跟踪运动目标的有力工具之一。对于固定相机获得的图像序列,检测运动物体比较简单[1][2];但当相机也在运动时,传统的基于差分的方法就无法检测运动目标。在移动相机的情况下,图像序列中的所有物体都具有与相机运动相关的表观运动。针对移动相机下的运动目标检测,已经提出了许多方法,包括直接估计相机运动参数[3]、光流法[4][5]以及几何变换法[6][7]。直接测量相机运动参数是消除表观背景运动的最佳方法,但在某些应用中无法直接测量这些参数。几何变换方法计算量小,适合实时应用。这类方法假设背景运动是统一的,可以用仿射运动模型来建模。一旦估计出背景的表观运动,就可以利用它来定位运动物体。本文提出了一种使用移动单目相机检测和跟踪运动目标的新方法。我们的算法有两种模式:检测和跟踪。本文安排如下:第2节讨论检测过程;第3节介绍跟踪方法;第4节给出实验结果;第5节给出结论。

2 目标检测

在检测模式下,我们使用仿射变换和 LMedS(最小中值平方)方法对表观背景运动进行鲁棒估计。补偿背景运动之后,对当前帧与变换后的前一帧之差应用分裂合并算法,得到目标位置的估计。如果没有发现目标,则意味着场景中没有运动目标,或者目标的相对运动太小而无法检测。对于后一种情况,可以通过调整相机的帧率来检测目标:算法通过分析后续帧直到检测到显著差异来自动实现这一点。我们设计了一种投票方法,基于目标的先验知识来验证目标。对于车辆检测,我们使用垂直和水平梯度来寻找感兴趣的特征,并对目标面积加以约束,详见本节讨论。

2.1 背景运动估计

仿射变换[8]被用来对相机的运动建模,该模型包括旋转、缩放和平移。二维仿射变换描述如下:

    X_i = a_1 x_i + a_2 y_i + a_5,   Y_i = a_3 x_i + a_4 y_i + a_6        (1)

其中 (x_i, y_i) 是点在前一帧中的位置,(X_i, Y_i) 是点在当前帧中的位置,a_1~a_6 是运动参数。该变换共有六个参数,因此需要三对匹配点才能完全恢复运动。必须从静止背景中选取这三个点,才能保证相机运动模型的准确性。我们使用 Moravec 算子[9]寻找显著的特征点,以确保精确匹配。Moravec 算子在最小-最大意义下选取方向梯度最大的像素。如果运动目标只占图像的一小部分(即低于 50%),就可以按照下列步骤应用 LMedS 算法来确定相邻两帧之间表观背景运动的仿射变换参数。

1. 从前一帧中随机选择 N 个特征点,并使用标准的归一化互相关方法在当前帧中定位对应点。归一化相关公式为:

    r = Σ (I_1 − Ī_1)(I_2 − Ī_2) / sqrt( Σ (I_1 − Ī_1)^2 · Σ (I_2 − Ī_2)^2 )        (2)

其中 Ī_1 和 Ī_2 是被比较的两个区域内像素的平均灰度,求和在以特征点为中心的小窗口内的所有像素上进行。上式中的 r 值衡量两个区域之间的相似度,取值介于 −1 和 1 之间。由于假设运动物体少于整幅图像的 50%,因此 N 个点中的大多数将属于静止背景。

2. 从步骤 1 得到的 N 个特征点中,随机选择 M 组三点对 (x_i, y_i, X_i, Y_i),i = 1, 2, 3。其中 (x_i, y_i) 是特征点在前一帧中的坐标,(X_i, Y_i) 是它们在当前帧中的对应点。

3. 对每一组计算仿射变换参数。

4. 使用步骤 3 得到的 M 个仿射变换分别变换步骤 1 中的 N 个特征点,并计算对应点与变换点之间平方差的 M 个中值,然后选择平方差中值最小的那组仿射参数。

根据上述过程,至少有一组数据全部位于背景中且其对应点均正确的概率 p 可由下式导出[7]:

    p = 1 − (1 − ((1 − ε) q)^3)^M        (3)

其中 ε(< 0.5)是运动物体区域占整幅图像的比例,q 是对应点被正确找到的概率。文献[7]已经证明,上述方法可以给出准确可靠的模型。

2.2 利用背景运动补偿帧检测运动目标

估计出仿射参数后,就可以通过变换前一帧来消除表观背景运动,此时当前帧与变换后的前一帧之差就显示出真正的运动目标。然后我们施加一个阈值得到二值图像。变换和分割的结果如图 1-a 和 1-b 所示。由于噪声,某些部分也被分割为运动目标,可以利用连通分量特性来减少这类错误。我们使用分裂合并算法找到目标的包围盒。如果没有找到目标,则意味着场景中没有运动目标,或者目标的相对运动太小而无法检测。在后一种情况下,可以通过调整相机的帧率来检测目标,算法通过分析后续帧直到发现目标来自动完成这一点。我们特别关注的是运动车辆的检测和跟踪,因此使用长宽比以及水平线和垂直线作为验证车辆的约束条件。实验表明,将目标区域内水平线和垂直线的长度与目标周长进行比较,可以很好地提示目标的性质。

3 目标跟踪

目标通过验证后,算法切换到跟踪模式。将改进的 Moravec 算子应用于目标以提取特征点,并将这些特征点与当前帧感兴趣区域中的点进行匹配。对匹配点对计算视差向量,再利用视差向量筛选匹配点,筛选后的点确定了目标在当前帧中的新位置。一旦目标丢失,算法就切换回检测模式。虽然上述检测算法也可以用于跟踪,但与之相比,本节描述的跟踪算法计算量非常小。另一方面,目标被检测到之后,在跟踪模式下并不要求它保持运动。
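第 3 节中提到的"剔除视差向量不一致的匹配点"可以用如下示意代码表示。这只是一个草图:原文只说明了使用视差向量的均值和方差,这里采用"偏离均值超过 k 倍标准差"作为剔除准则,k 的取值是假设的,原文并未给出。

```python
import math

def refine_by_disparity(prev_pts, curr_pts, k=2.0):
    """按视差向量与均值视差的偏离程度筛选匹配点,返回保留点的下标。
    k 倍标准差的阈值是假设值,原文仅说明使用均值和方差。"""
    disp = [(X - x, Y - y) for (x, y), (X, Y) in zip(prev_pts, curr_pts)]
    n = len(disp)
    mx = sum(d[0] for d in disp) / n          # 平均视差(x 分量)
    my = sum(d[1] for d in disp) / n          # 平均视差(y 分量)
    var = sum((dx - mx) ** 2 + (dy - my) ** 2 for dx, dy in disp) / n
    std = math.sqrt(var)
    return [i for i, (dx, dy) in enumerate(disp)
            if math.hypot(dx - mx, dy - my) <= k * std]
```

目标上的特征点视差彼此一致,而落在背景或其他物体上的误匹配点视差明显偏离均值,因此会被剔除,这正是原文中去除非目标特征点的思路。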

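式 (3) 可以用几行代码直观地验证随机采样组数 M 对成功概率的影响。这里 ε、q 的取值只是为说明而假设的,并非原文实验数据。

```python
def success_prob(eps, q, M):
    """式 (3):p = 1 - (1 - ((1 - eps) * q) ** 3) ** M,
    即 M 组随机三点对中至少有一组全部位于背景且匹配正确的概率。"""
    good_set = ((1.0 - eps) * q) ** 3   # 单组三点全部为背景且匹配正确的概率
    return 1.0 - (1.0 - good_set) ** M
```

例如假设 ε = 0.3、q = 0.8,单组采样成功的概率只有约 0.18,但取 M = 50 组后,至少一组成功的概率已非常接近 1,这解释了为什么该方法在运动物体不超过图像一半时是可靠的。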
在跟踪模式下,目标也可以大于场景的 50%,这意味着相机在跟踪时可以变焦以获得目标更大的视野。

图 1:相邻两帧及背景运动补偿后两帧之差。计算出的仿射参数为:a1 = 0.9973,a2 = −0.004,a3 = 0.008,a4 = 1.0022,a5 = 1.23,a6 = −2.51。

当目标大小固定时,可以直接使用归一化互相关或 SSD(平方差之和)方法进行目标跟踪。但我们不限制目标具有固定的大小,我们的跟踪算法能够更新目标的形状和尺寸。为此,该算法基于动态特征点跟踪:从目标区域选择特征点,并在下一帧中跟踪它们。水平线和垂直线是车辆跟踪的重要特征,所以我们使用了优化的 Moravec 算子,只考虑水平和垂直梯度来选择特征点。这改进了对位于几何结构明确的物体(如车辆)上的感兴趣特征点的选择,在处理遮挡时非常有用。我们的跟踪算法包括四个步骤,描述如下。

1. 在前一帧的目标区域内应用改进的 Moravec 算法选择特征点。
2. 使用归一化互相关,在当前帧的 ROI(感兴趣区域)中寻找这些特征点的对应点。
3. 计算视差向量,并基于这些向量筛选特征点。筛选即剔除视差向量不一致的特征点,这有助于去除不属于目标的特征点。
4. 基于筛选后对应点的位置和目标先前的尺寸,确定目标在当前帧中的位置和尺寸。

为了完成步骤 3 中的筛选,我们计算视差向量的均值和方差,并据此剔除视差向量远离均值视差的点。图 2 给出了某个目标筛选后的视差向量。当没有可跟踪的特征点,或无法正确找到对应点时,我们认为目标已丢失,算法随即切换到检测模式重新寻找目标。

4 实验结果

该算法用 Visual C++ 在 Pentium III 500 MHz 上实现。我们用不同地形下车辆的仿真图像序列和真实图像序列对算法进行了测试,系统可以实时检测和跟踪目标。对于 352*288 像素的视频帧中 100*90 像素的目标,检测帧率达到 4 帧/秒,跟踪帧率达到 15 帧/秒。我们用各种各样的图像序列测试了所提出的算法。图 3 给出了在任意背景下检测各种物体的一些结果,可以看到算法成功地检测到了目标。图 4 给出了对一辆车的跟踪结果:在这个例子中,被跟踪的车辆转弯驶离道路,其形状和尺寸都发生了变化,但我们的跟踪算法仍能成功跟踪。结果表明了该方法在运动目标检测和跟踪方面的准确性。与其他方法的结果比较表明,所提出的方法能够实时地给出更可靠的结果。

图 2:用于跟踪的筛选后视差向量。(a) 前一帧,(b) 当前帧,(c) 叠加视差向量的当前帧。

图 3:使用检测算法检测到的各种目标。

图 4:跟踪一辆转弯并改变尺寸和形状的车辆。

5 结论

本文提出了一种使用移动相机检测和跟踪运动目标的新方法,检测和跟踪分别采用不同的方法。检测时,我们使用仿射变换和 LMedS 方法估计表观背景运动;消除背景的表观运动后,利用相邻两帧之差检测运动目标,并对目标检查若干特征。跟踪时,我们使用改进的 Moravec 算子和带视差向量筛选的特征匹配。我们用各种各样的图像序列测试了这些算法,所提出的方法能够在移动摄像机拍摄的任意场景中成功检测和跟踪运动的车辆和物体。该跟踪系统具有恢复目标形状的能力,因此能够成功跟踪与相机距离变化的目标,或在相机变焦时继续跟踪。局部和区域化的计算使该算法适用于实时应用,而且其实现方式清晰、结构化。实验结果表明该算法是可靠的,在大多数情况下都能成功检测和跟踪目标。为了提高跟踪系统的鲁棒性,可以将基于边缘/纹理的跟踪方法与当前方法相结合。
