arXiv:2207.04429v1 [cs.RO] 10 Jul 2022
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
Dhruv Shah+β, Błażej Osiński+βω, Brian Ichterγ, Sergey Levineβγ (βUC Berkeley, ωUniversity of Warsaw, γRobotics at Google)
Abstract: Goal-conditioned policies for robotic navigation can be trained on large, unannotated datasets, providing for good generalization to real-world settings. However, particularly in vision-based settings where specifying goals requires an image, this makes for an unnatural interface. Language provides a more convenient modality for communication with robots, but contemporary methods typically require expensive supervision, in the form of trajectories annotated with language descriptions. We present a system, LM-Nav, for robotic navigation that enjoys the benefits of training on unannotated large datasets of trajectories, while still providing a high-level interface to the user. Instead of utilizing a labeled instruction following dataset, we show that such a system can be constructed entirely out of pre-trained models for navigation (ViNG), image-language association (CLIP), and language modeling (GPT-3), without requiring any fine-tuning or language-annotated robot data. We instantiate LM-Nav on a real-world mobile robot and demonstrate long-horizon navigation through complex, outdoor environments from natural language instructions.
Keywords: instruction following, language models, vision-based navigation
1 Introduction
One of the central challenges in robotic learning is to enable robots to perform a wide variety of tasks on command, following high-level instructions from humans. This requires robots that can understand human instructions, and are equipped with a large repertoire of diverse behaviors to execute such instructions in the real world. Prior work on instruction following in navigation has largely focused on learning from trajectories annotated with textual instructions [1–5]. This enables understanding of textual instructions, but the cost of data annotation impedes wide adoption. On the other hand, recent work has shown that learning robust navigation is possible through goal-conditioned policies trained with self-supervision. These utilize large, unlabeled datasets to train vision-based controllers via hindsight relabeling [6–11]. They provide scalability, generalizability, and robustness, but usually involve a clunky mechanism for goal specification, using locations or images. In this work, we aim to combine the strengths of both approaches, enabling a self-supervised system for robotic navigation to execute natural language instructions by leveraging the capabilities of pre-trained models without any user-annotated navigational data. Our method uses these models to construct an "interface" that humans can use to communicate desired tasks to robots. This system enjoys the impressive generalization capabilities of the pre-trained language and vision-language models, enabling the robotic system to accept complex high-level instructions.
Our main observation is that we can utilize off-the-shelf pre-trained models trained on large corpora of visual and language datasets (which are widely available and show great few-shot generalization capabilities) to create this interface for embodied instruction following. To achieve this, we combine the strengths of two such robot-agnostic pre-trained models with a pre-trained navigation model. We use a visual navigation model (VNM: ViNG [11]) to create a topological "mental map" of the environment using the robot's observations.
+These authors contributed equally, order decided by a coin flip. Check out the project page for experiment videos, code, and a user-friendly Colab notebook that runs in your browser: /view/lmnav
Figure 1: Embodied instruction following with LM-Nav: Our system takes as input a set of raw observations from the target environment and free-form textual instructions (left), deriving an actionable plan using three pre-trained models: a large language model (LLM) for extracting landmarks, a vision-and-language model (VLM) for grounding, and a visual navigation model (VNM) for execution. This enables LM-Nav to follow textual instructions in complex environments purely from visual observations (right) without any fine-tuning.
Given free-form textual instructions, we use a pre-trained large language model (LLM: GPT-3 [12]) to decode the instructions into a sequence of textual landmarks. We then use a vision-language model (VLM: CLIP [13]) for grounding these textual landmarks in the topological map, by inferring a joint likelihood over the landmarks and nodes. A novel search algorithm is then used to maximize a probabilistic objective, and find a plan for the robot, which is then executed by VNM.
Our primary contribution is Large Model Navigation, or LM-Nav, an embodied instruction following system that combines three large, independently pre-trained models: a self-supervised robotic control model that utilizes visual observations and physical actions (VNM), a vision-language model that grounds images in text but has no context of embodiment (VLM), and a large language model that can parse and translate text but has no sense of visual grounding or embodiment (LLM). Together, these enable long-horizon instruction following in complex, real-world environments. We present the first instantiation of a robotic system that combines the confluence of pre-trained vision-and-language models with a goal-conditioned controller, to derive actionable plans without any fine-tuning in the target environment. Notably, all three models are trained on large-scale datasets, with self-supervised objectives, and used off-the-shelf with no fine-tuning: no human annotations of the robot navigation data are necessary to train LM-Nav. We show that LM-Nav is able to successfully follow natural language instructions in new environments over the course of 100s of meters of complex, suburban navigation, while disambiguating paths with fine-grained commands.
2 Related Work
Early works in augmenting navigation policies with natural language commands use statistical machine translation [14] to discover data-driven patterns to map free-form commands to a formal language defined by a grammar [15–19]. However, these approaches tend to operate on structured state spaces. Our work is closely inspired by methods that instead reduce this task to a sequence prediction problem [1, 20, 21]. Notably, our goal is similar to the task of VLN: leveraging fine-grained instructions to control a mobile robot solely from visual observations [1, 2].
However, most recent approaches to VLN use a large dataset of simulated trajectories (over 1M demonstrations) annotated with fine-grained language labels in indoor [1, 3–5, 22] and driving scenarios [23–28], and rely on sim-to-real transfer for deployment in simple indoor environments [29, 30]. However, this necessitates building a photo-realistic simulator resembling the target environment, which can be challenging for unstructured environments, especially for the task of outdoor navigation. Instead, LM-Nav leverages free-form textual instructions to navigate a robot in complex, outdoor environments without access to any simulation or any trajectory-level annotations.

Recent progress in using large-scale models of natural language and images trained on diverse data has enabled applications in a wide variety of textual [31–33], visual [13, 34–38], and embodied domains [39–44]. In the latter category, Shridhar et al. [39], Khandelwal et al. [44] and Jang et al. [40] fine-tune embeddings from pre-trained models on robot data with language labels, Huang et al. [41] assume that the low-level agent can execute textual instructions (without addressing control),
and Ahn et al. [42] assume that the robot has a set of text-conditioned skills that can follow atomic textual commands. All of these approaches require access to low-level skills that can follow rudimentary textual commands, which in turn requires language annotations for robotic experience and a strong assumption on the robot's capabilities. In contrast, we combine these pre-trained vision and language models with pre-trained visual policies that do not use any language annotations [11, 45], without fine-tuning these models in the target environment or for the task of VLN.
Data-driven approaches to vision-based mobile robot navigation often use photorealistic simulators [46–49] or supervised data collection [50] to learn goal-reaching policies directly from raw observations. Self-supervised methods for navigation [6–11, 51] instead can use unlabeled datasets of trajectories by automatically generating labels using onboard sensors and hindsight relabeling. Notably, such a policy can be trained on large, diverse datasets and generalize to previously unseen environments [45, 52]. Being self-supervised, such policies are adept at navigating to desired goals specified by GPS locations or images, but are unable to parse high-level instructions such as free-form text. LM-Nav uses self-supervised policies trained in a large number of prior environments, augmented with pre-trained vision and language models for parsing natural language instructions, and deploys them in novel real-world environments without any fine-tuning.
3 Preliminaries
LM-Nav consists of three large, pre-trained models for processing language, associating images with language, and visual navigation.
Large language models are generative models based on the Transformer architecture [53], trained on large corpora of internet text. LM-Nav uses the GPT-3 LLM [12] to parse textual instructions into a sequence of landmarks.
Vision-and-language models refer to models that can associate images and text, e.g. image captioning, visual question-answering, etc. [54–56]. We use the CLIP VLM [13], a model that jointly encodes images and text into an embedding space that allows it to determine how likely some string is to be associated with a given image. We can jointly encode a set of landmark descriptions t obtained from the LLM and a set of images i_k to obtain their VLM embeddings {T, I_k} (see Fig. 2). Computing the cosine similarity between these embeddings, followed by a softmax operation, results in probabilities P(i_k | t), corresponding to the likelihood that image i_k corresponds to the string t. LM-Nav uses this probability to align landmark descriptions with images.

Figure 2: LM-Nav uses the VLM (a: CLIP) to infer a joint probability distribution over textual landmarks and image observations. The VNM (b: ViNG) constitutes an image-conditioned distance function and policy that can control the robot.
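To make this concrete, here is a minimal sketch of the cosine-similarity-and-softmax computation described above, assuming the CLIP text and image embeddings have already been computed; the function name and array shapes are our own illustration rather than details from the paper's implementation.

```python
import numpy as np

def landmark_image_probabilities(text_embs: np.ndarray, image_embs: np.ndarray) -> np.ndarray:
    """Compute P(i_k | t) for every landmark description t and image i_k.

    text_embs:  (n_landmarks, d) CLIP text embeddings, one per landmark description.
    image_embs: (n_images, d) CLIP image embeddings, one per graph node.
    Returns:    (n_landmarks, n_images) matrix; row j is a softmax over images of
                the cosine similarity to landmark description j.
    """
    # Normalize so that dot products equal cosine similarities.
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)

    sims = text_embs @ image_embs.T                   # cosine similarities
    sims = sims - sims.max(axis=1, keepdims=True)     # numerical stability
    exp = np.exp(sims)
    return exp / exp.sum(axis=1, keepdims=True)       # softmax over images
```

Note that CLIP itself scales the similarities by a learned temperature before the softmax; that factor is omitted here for brevity.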
Visual navigation models learn navigation behavior and navigational affordances directly from visual observations [11, 51, 57–59], associating images and actions through time. We use the ViNG VNM [11], a goal-conditioned model that predicts temporal distances between pairs of images and the corresponding actions to execute (see Fig. 2). This provides an interface between images and embodiment. The VNM serves two purposes: (i) given a set of observations in the target environment, the distance predictions from the VNM can be used to construct a topological graph G(V, E) that represents a "mental map" of the environment; (ii) given a "walk", comprising a sequence of connected subgoals to a goal node, the VNM can navigate the robot along this plan. The topological graph G is an important abstraction that allows a simple interface for planning over past experience in the environment and has been successfully used in prior work to perform long-horizon navigation [52, 60, 61]. To deduce connectivity in G, we use a combination of learned distance estimates, temporal proximity (during data collection), and spatial proximity (using GPS measurements). For every connected pair of vertices {v_i, v_j}, we assign this distance estimate to the corresponding edge weight D(v_i, v_j). For more details on the construction of this graph, see Appendix B.
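As a rough illustration of this graph-building step, the sketch below combines a learned distance estimate with temporal and spatial proximity checks to decide connectivity. The `predict_distance` function, the thresholds, and the node attribute names are hypothetical placeholders, not values taken from the paper or from ViNG's released code.

```python
import networkx as nx

def build_topological_graph(nodes, predict_distance,
                            max_distance=20.0,    # hypothetical cutoff on predicted timesteps
                            max_gps_gap=30.0,     # hypothetical cutoff in meters
                            temporal_window=1):
    """Build a directed graph over observation nodes.

    nodes: list of dicts with keys 'image', 'gps' (x, y in meters), and 'time_index'.
    predict_distance(img_a, img_b): learned estimate of timesteps to travel from a to b.
    """
    g = nx.DiGraph()
    for i, node in enumerate(nodes):
        g.add_node(i, **node)

    for i, a in enumerate(nodes):
        for j, b in enumerate(nodes):
            if i == j:
                continue
            d = predict_distance(a["image"], b["image"])
            consecutive = abs(a["time_index"] - b["time_index"]) <= temporal_window
            gps_gap = ((a["gps"][0] - b["gps"][0]) ** 2 +
                       (a["gps"][1] - b["gps"][1]) ** 2) ** 0.5
            # Connect nodes that the model believes are reachable, or that were
            # adjacent during data collection, as long as GPS does not rule it out.
            if (d <= max_distance or consecutive) and gps_gap <= max_gps_gap:
                g.add_edge(i, j, weight=d)
    return g
```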
Figure 3: System overview: (a) VNM uses a goal-conditioned distance function to infer connectivity between the set of raw observations and constructs a topological graph. (b) LLM translates natural language instructions into a sequence of textual landmarks. (c) VLM infers a joint probability distribution over the landmark descriptions and nodes in the graph, which is used by (d) a graph search algorithm to derive the optimal walk through the graph. (e) The robot drives following the walk in the real world using the VNM policy.
4 LM-Nav: Instruction Following with Pre-Trained Models
LM-Nav combines the components discussed earlier to follow textual instructions in the real world. The LLM parses free-form instructions into a list of landmarks (Sec. 4.2), the VLM associates these landmarks with nodes in the graph by estimating the probability that each node corresponds to each landmark description (Sec. 4.3), and the VNM is then used to infer how effectively the robot can navigate between each pair of nodes in the graph, which we convert into a probability P(v_i, v_j) derived from the estimated temporal distances. To find the optimal "walk" on the graph that both (i) adheres to the provided instructions and (ii) minimizes traversal cost, we derive a probabilistic objective (Sec. 4.1) and show how it can be optimized using a graph search algorithm (Sec. 4.4). This optimal walk is then executed in the real world by using the actions produced by the VNM model.
4.1 Problem Formulation
Algorithm 1: Graph Search
1: Input: Landmarks (l_1, l_2, ..., l_n).
2: Input: Graph G(V, E).
3: Input: Starting node S.
4: ∀ i = 0, ..., n, ∀ v ∈ V: Q[i, v] = −∞
5: Q[0, S] = 0
6: Dijkstra_algorithm(G, Q[0, ∗])
7: for i in 1, 2, ..., n do
8:   ∀ v ∈ V: Q[i, v] = Q[i−1, v] + CLIP(l_i, v)
9:   Dijkstra_algorithm(G, Q[i, ∗])
10: end for
11: destination = argmax(Q[n, ∗])
12: return backtrack(destination, Q[n, ∗])
We formulate the task of instruction following on the graph as that of maximizing the probability of successfully executing a walk that matches the instruction. As we will discuss in Section 4.2, we first parse the instruction into a list of landmarks ℓ = (l_1, l_2, ..., l_n) that should be visited in order. Recall that the VNM is used to build a topological graph that represents the connectivity of the environment from previously seen observations, with nodes {v_i} corresponding to previously seen images. For a walk w = (v_1, v_2, ..., v_T), we factorize the probability that it corresponds to the given instruction into: (i) P_ℓ, the probability that the walk visits all landmarks from the description; (ii) P_t, the probability that the walk can be executed successfully. Let P(l_i | v_j) denote the probability that node v_j corresponds to the landmark description l_i. Then we have:

P_\ell(w \mid \ell) = \max_{1 \le t_1 \le t_2 \le \dots \le t_n \le T} \; \prod_{1 \le k \le n} P(l_k \mid v_{t_k}),    (1)

where (t_1, t_2, ..., t_n) is an assignment of a subsequence of the walk's nodes to landmark descriptions.
To obtain the probability P_t(w), we must convert the distance estimates provided by the VNM model into probabilities. This has been studied in the literature on goal-conditioned policies [62, 63]. A simple model based on a discounted MDP formulation is to model the probability of successfully reaching the goal as γ raised to the power of the number of timesteps, which corresponds to a probability of termination of 1 − γ at each timestep. We then have

P_t(w) = \prod_{1 \le j < T} P(v_j, v_{j+1}) = \prod_{1 \le j < T} \gamma^{D(v_j, v_{j+1})},    (2)

where D(v_j, v_{j+1}) refers to the length (in the number of timesteps) of the edge between nodes v_j and v_{j+1}, which is provided by the VNM model. The final probabilistic objective that our system needs to maximize becomes:

P_M(w) = P_t(w)\, P_\ell(w \mid \ell) = \prod_{1 \le j < T} \gamma^{D(v_j, v_{j+1})} \; \max_{1 \le t_1 \le \dots \le t_n \le T} \prod_{1 \le k \le n} P(l_k \mid v_{t_k}).    (3)
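As a small worked example of this conversion (with made-up numbers rather than measurements from the paper), the snippet below turns a few VNM edge-distance estimates into the traversal probability P_t(w) of Eq. 2 and its log, which is the additive form that the search in Sec. 4.4 optimizes.

```python
import math

gamma = 0.95                         # hypothetical discount; the paper does not report a value here
edge_distances = [12.0, 7.5, 20.0]   # D(v_j, v_{j+1}) in timesteps for a toy 4-node walk

# Eq. 2: probability of executing the walk end-to-end.
p_t = math.prod(gamma ** d for d in edge_distances)

# Equivalent log form: log P_t = -alpha * sum_j D(v_j, v_{j+1}), with alpha = -log(gamma).
alpha = -math.log(gamma)
log_p_t = -alpha * sum(edge_distances)

assert abs(math.log(p_t) - log_p_t) < 1e-9
print(f"P_t = {p_t:.4f}, log P_t = {log_p_t:.4f}")
```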
4.2 Parsing Free-Form Textual Instructions
The user specifies the route they want the robot to take using natural language, while the objective above is defined in terms of a sequence of desired landmarks. To extract this sequence from the user's natural language instruction we employ a standard large language model, which in our prototype is GPT-3 [12]. We used a prompt with 3 examples of correct landmark extractions, followed by the description to be translated by the LLM. Such an approach worked for the instructions that we tested it on. Examples of instructions together with landmarks extracted by the model can be found in Fig. 4. The appropriate selection of the prompt, including those 3 examples, was required for more nuanced cases. For details of the "prompt engineering" please see Appendix A.
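A rough sketch of this few-shot parsing step is shown below. The in-context examples and the engine name are illustrative placeholders (the paper's actual prompt is given in its Appendix A), and the call assumes the legacy OpenAI completions interface that was available for GPT-3 at the time.

```python
import re
import openai  # legacy (<1.0) openai-python completions interface, used here for illustration

FEW_SHOT_PROMPT = """\
Extract landmarks from the navigation instruction, in order.

Instruction: Go to the picnic table, then take a right at the fire hydrant.
Landmarks: 1. a picnic table 2. a fire hydrant

Instruction: Walk past the blue dumpster and stop at the glass building.
Landmarks: 1. a blue dumpster 2. a glass building

Instruction: Head toward the stop sign, then continue until you see a white truck.
Landmarks: 1. a stop sign 2. a white truck

Instruction: {instruction}
Landmarks:"""

def extract_landmarks(instruction: str) -> list[str]:
    """Ask the LLM to rewrite an instruction as an ordered list of landmark phrases."""
    response = openai.Completion.create(
        model="text-davinci-002",          # placeholder GPT-3 engine name
        prompt=FEW_SHOT_PROMPT.format(instruction=instruction),
        max_tokens=64,
        temperature=0.0,                   # deterministic parsing
        stop=["\n\n"],
    )
    completion = response["choices"][0]["text"]
    # The completion mirrors the numbered format of the in-context examples.
    return [p.strip() for p in re.split(r"\d+\.\s*", completion) if p.strip()]
```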
4.3 Visually Grounding Landmark Descriptions
As discussed in Sec. 4.1, a crucial element of selecting the walk through the graph is computing P(l_i | v_j), the probability that landmark description l_i refers to node v_j (see Equation 1). With each node containing an image taken during initial data collection, the probability can be computed using CLIP [13] in the way described in Sec. 3, as the retrieval task. As presented in Fig. 2, to employ CLIP to compute P(l_i | v_j), we use the image at node v_j and caption prompts in the form of "This is a photo of a [l_i]". The resulting probability P(l_i | v_j), together with the inferred edges' distances, will be used to select the optimal walk in the graph.
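The sketch below shows one way to realize this grounding step with the publicly released CLIP package, scoring every node image against every landmark prompt. The ViT-L/14 checkpoint, the batching, and normalizing over nodes for each landmark are choices made for this illustration; they are consistent with the description above but not confirmed implementation details.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

def ground_landmarks(landmarks: list[str], node_image_paths: list[str]) -> torch.Tensor:
    """Return a (n_landmarks, n_nodes) matrix of P(l_i | v_j)-style scores,
    normalized across nodes for each landmark description."""
    prompts = clip.tokenize([f"This is a photo of a {l}" for l in landmarks]).to(device)
    images = torch.stack([
        preprocess(Image.open(p).convert("RGB")) for p in node_image_paths
    ]).to(device)

    with torch.no_grad():
        image_features = model.encode_image(images)
        text_features = model.encode_text(prompts)
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        # Cosine similarity between every landmark prompt and every node image.
        sims = text_features @ image_features.T
    # Softmax over nodes: for each landmark, a distribution over which node depicts it.
    return sims.softmax(dim=-1)
```

In the full system these scores are not used to pick a single best node per landmark; they enter the graph search of Sec. 4.4 together with the VNM edge distances.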
4.4 Graph Search for the Optimal Walk
As described in Sec. 4.1, LM-Nav aims at finding a walk w = (v_1, v_2, ..., v_T) that maximizes the probability of successful execution that adheres to the given instructions. We formalized this probability as P_M, defined by Eqn. 3. We can define a function R(w, τ) for a monotonically increasing sequence of indices τ = (t_1, t_2, ..., t_n):

R(w, \tau) := \sum_{i=1}^{n} \log P(l_i \mid v_{t_i}) \;-\; \alpha \sum_{j=1}^{T-1} D(v_j, v_{j+1}), \quad \text{where } \alpha = -\log\gamma,    (4)

which has the property that w maximizes P_M if and only if there exists τ such that (w, τ) maximizes R. In order to find such (w, τ), we employ dynamic programming. In particular, we define a helper function Q(i, v) for i ∈ {0, 1, ..., n}, v ∈ V:

Q(i, v) = \max_{\substack{w = (v_1, v_2, \dots, v_j),\, v_j = v \\ \tau = (t_1, t_2, \dots, t_i)}} R(w, \tau).    (5)

Q(i, v) represents the maximal value of R for a walk ending in v that visited the landmarks up to index i. The base case Q(0, v) visits none of the landmarks, and its value of R is simply equal to minus the length of the shortest path from node S. For i > 0 we have:

Q(i, v) = \max\Big( Q(i-1, v) + \log P(l_i \mid v), \;\; \max_{w \in \mathrm{nbrs}(v)} \big[ Q(i, w) - \alpha\, D(v, w) \big] \Big).    (6)
Figure 4: Qualitative examples of LM-Nav in real-world environments executing textual instructions (left). The landmarks extracted by LLM (highlighted in text) are grounded into visual observations by VLM (center; overhead image not available to the robot). The resulting walk of the graph is executed by VNM (right).
The base case for DP is to compute Q(0, ·). Then, in each step of DP, i = 1, 2, ..., n, we compute Q(i, v). This computation resembles the Dijkstra algorithm [64]. In each iteration, we pick the node v with the largest value of Q(i, v) and update its neighbors based on Eqn. 6. Algorithm 1 summarizes this search process. The result of this algorithm is a walk w = (v_1, v_2, ..., v_T) that maximizes the probability of successfully carrying out the instruction. Given such a walk, VNM can execute the path by using its action estimates to sequentially navigate to these nodes.
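To make the search concrete, the following is a minimal sketch of Algorithm 1 under the assumptions spelled out above: nodes are integer ids, `neighbors` and `edge_len` come from the VNM graph, `log_p[i][v]` is the log of the VLM-derived P(l_i | v_j), and a max-heap Dijkstra pass propagates each Q[i, ·] table. The variable names and backtracking bookkeeping are our own, not taken from the released code.

```python
import heapq
import math

def lm_nav_graph_search(n_nodes, neighbors, edge_len, log_p, start, alpha):
    """Dynamic-programming search over (landmark index, node) pairs (cf. Algorithm 1).

    neighbors[v]     -> iterable of nodes reachable from v
    edge_len[(u, v)] -> VNM distance estimate D(u, v) in timesteps
    log_p[i][v]      -> log P(l_i | v) from the VLM, for landmarks i = 0..n_landmarks-1
    Returns the walk (list of node ids) maximizing R from Eq. 4.
    """
    n_landmarks = len(log_p)
    NEG_INF = -math.inf

    Q = [[NEG_INF] * n_nodes for _ in range(n_landmarks + 1)]
    parent = [[None] * n_nodes for _ in range(n_landmarks + 1)]
    Q[0][start] = 0.0

    def dijkstra_pass(i):
        """Propagate Q[i, *] along graph edges, maximizing Q[i][v] - alpha * D."""
        heap = [(-Q[i][v], v) for v in range(n_nodes) if Q[i][v] > NEG_INF]
        heapq.heapify(heap)
        while heap:
            neg_q, u = heapq.heappop(heap)
            if -neg_q < Q[i][u]:          # stale heap entry
                continue
            for v in neighbors[u]:
                cand = Q[i][u] - alpha * edge_len[(u, v)]
                if cand > Q[i][v]:
                    Q[i][v] = cand
                    parent[i][v] = ("move", u)
                    heapq.heappush(heap, (-cand, v))

    dijkstra_pass(0)
    for i in range(1, n_landmarks + 1):
        for v in range(n_nodes):
            if Q[i - 1][v] > NEG_INF:
                Q[i][v] = Q[i - 1][v] + log_p[i - 1][v]
                parent[i][v] = ("visit", v)
        dijkstra_pass(i)

    # Backtrack from the best final node to recover the walk.
    dest = max(range(n_nodes), key=lambda v: Q[n_landmarks][v])
    walk, i, v = [dest], n_landmarks, dest
    while not (i == 0 and v == start):
        kind, u = parent[i][v]
        if kind == "move":
            walk.append(u)
            v = u
        else:                             # "visit": landmark i was matched at v
            i -= 1
    return list(reversed(walk))
```

The returned walk is then handed to the VNM policy, which navigates between consecutive nodes using its action estimates.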
5 System Evaluation
We now describe our experiments deploying LM-Nav in a variety of outdoor settings to follow high-level natural language instructions with a small ground robot. For all experiments, the weights of the LLM, VLM, and VNM are frozen: there is no fine-tuning or annotation in the target environment. We evaluate the complete system, as well as the individual components of LM-Nav, to understand its strengths and limitations. Our experiments demonstrate the ability of LM-Nav to follow high-level instructions, disambiguate paths, and reach goals that are up to 800 m away.
5.1 Mobile Robot Platform
We implement LM-Nav on a Clearpath Jackal UGV platform (see Fig. 1, right). The sensor suite consists of a 6-DoF IMU, a GPS unit for approximate localization, wheel encoders for local odometry, and front- and rear-facing RGB cameras with a 170° field of view for capturing visual observations and localization in the topological graph. The LLM and VLM queries are pre-computed on a remote workstation and the computed path is commanded to the robot wirelessly. The VNM runs on-board and only uses forward RGB images and unfiltered GPS measurements.
5.2 Following Instructions with LM-Nav
In each evaluation environment, we first construct the graph by manually driving the robot and collecting image and GPS observations. The graph is constructed automatically using the VNM from this data, and in principle such data could also be obtained from past traversals, or even with autonomous exploration methods [45]. Once the graph is constructed, the robot can carry out instructions in that environment. We tested our system on 20 queries, in environments of varying difficulty, corresponding to a total combined length of over 6 km. Instructions include a set of prominent landmarks in the environment that can be identified from the robot's observations, e.g. traffic cones, buildings, stop signs, etc.
Fig. 4 shows qualitative examples of the path taken by the robot. Note that the overhead image and spatial localization of the landmarks is not available to the robot and is shown for visualization only. In Fig. 4(a), LM-Nav is able to successfully localize the simple landmarks from its prior traversal and find a short path to the goal. While there are multiple stop signs in the environment, the objective in Eqn. 3 causes the robot to pick the correct stop sign in context, so as to minimize overall travel distance. Fig. 4(b) highlights LM-Nav's ability to parse complex instructions with multiple landmarks specifying the route: despite the possibility of a shorter route directly to the final landmark that ignores instructions, the robot finds a path that visits all of the landmarks in the correct order.

[Instruction text from the adjacent figure: "Go straight toward the white building. Continue straight passing by a white truck until you reach a stop sign."]
Disambiguation with instructions. Since the objective of LM-Nav is to follow instructions, and not merely to reach the final goal, different instructions may lead to different traversals. Fig. 5 shows an example where modifying the instruction can disambiguate multiple paths to the goal. Given the shorter prompt (blue), LM-Nav prefers the more direct path. On specifying a more fine-grained route (magenta), LM-Nav takes an alternate path that passes a different set of landmarks.
Missing landmarks. While LM-Nav is effective at parsing landmarks from instructions, localizing them on the graph, and finding a path to the goal, it relies on the assumption that the landmarks (i) exist in the environment, and (ii) can be identified by the VLM. Fig. 4(c) illustrates a case where the executed path fails to