arXiv:2307.04721v1 [cs.AI] 10 Jul 2023
Large Language Models as General Pattern Machines

Suvir Mirchandani1, Fei Xia2, Pete Florence2, Brian Ichter2, Danny Driess2,3, Montserrat Gonzalez Arenas2, Kanishka Rao2, Dorsa Sadigh1,2, Andy Zeng2
1Stanford University, 2Google DeepMind, 3TU Berlin
https://general-pattern-machines.github.io
Abstract: We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences – from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to richer spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics – from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.
Keywords: large language models, in-context learning, language for robotics
1 Introduction
Large language models (LLMs) are trained to absorb the myriad of patterns that are woven into the structure of language. They not only exhibit various out-of-the-box capabilities such as generating chains of reasoning [1, 2], solving logic problems [3, 4], and completing math puzzles [5], but also have been applied in robotics, where they can serve as high-level planners for instruction following tasks [6, 7, 8, 9, 10, 11, 12], synthesize programs representing robot policies [13, 14], design reward functions [15, 16], and generalize user preferences [17]. These settings rely on few-shot in-context examples in text prompts that specify the domain and input-output format of their tasks [18, 19], and remain highly semantic in their inputs and outputs.
A key observation of our work – and perhaps contrary to the predominant intuition – is that an LLM's ability to represent, manipulate, and extrapolate more abstract, nonlinguistic patterns may allow them to serve as basic versions of general pattern machines. To illustrate this idea, consider the Abstraction and Reasoning Corpus [20], a general AI benchmark that contains collections of 2D grids with patterns that evoke abstract concepts (e.g., infilling, counting, and rotating shapes). Each problem provides a small number of input-output examples, followed by test input(s) for which the objective is to predict the corresponding output. Most methods (based on program synthesis) are manually engineered with domain-specific languages [21, 22, 23, 24] or evaluated on simplified extensions or subsets of the benchmark [25, 26, 27]. End-to-end machine learning methods only solve a handful of test problems [28]; however, our experiments indicate that LLMs prompted in-context in the style of ASCII art (see Fig. 1) can correctly predict solutions for up to 85 (out of 800) problems – exceeding some of the best performing methods to date [21, 22, 24], without additional model training or fine-tuning. Surprisingly, we find this extends beyond ASCII numbers, and that when they are replaced with a mapping to randomly sampled tokens in the vocabulary, LLMs can still generate valid solutions.

Fig. 1: LLMs out-of-the-box can complete (highlighted) complex ARC patterns [20] expressed in arbitrary tokens.
Fig. 2: Pre-trained LLMs out-of-the-box may serve as basic versions of general pattern machines that can recognize and complete sequences of numeric or arbitrary (symbolic) tokens expressing abstract problems in robotics and sequential decision-making. Experiments show that, to an extent, LLMs can in-context learn (i) sequence transformations (e.g., to reason over spatial rearrangements of symbols, for dynamics modeling and next-state prediction on downsampled images), (ii) completion of simple functions (e.g., to extrapolate kinesthetic demonstrations), or (iii) meta-patterns to improve return-conditioned policies (e.g., to discover oscillatory behaviors to stabilize a CartPole).
These results suggest an intriguing insight: that LLMs may exhibit more general capabilities of representing and extrapolating symbolic patterns, invariant to the specific tokens involved. This is in line with – and complementary to – recent observations that using random or abstract label mappings for in-context classification retains some performance compared to ground-truth labels [29, 30]. We hypothesize that the capabilities that drive pattern reasoning on the ARC may allow general pattern manipulation at various levels of abstraction useful for robotics and sequential decision making [31, 32], wherein a diverse array of problems involve patterns that may be difficult to reason about precisely in words. For example, a procedure for spatially rearranging tabletop objects could be represented using arbitrary tokens (see Fig. 2). As another example, optimizing a trajectory with respect to a reward function can be framed as extrapolating a sequence consisting of state and action tokens with increasing returns.
Orthogonal and complementary to efforts that develop multi-task policies by pre-training on large amounts of robot data [33], or robotics foundation models [34] that can be fine-tuned for downstream tasks [35, 36, 37], our goal is instead to (i) assess the zero-shot capabilities that LLMs may already contain to perform some degree of general pattern manipulation, and (ii) investigate how these abilities can be used in robotics. These capabilities are certainly not sufficient to replace specialized algorithms; nonetheless, they are useful to characterize, and doing so may help inform priorities for training generalist models in robotics.
We assess LLMs as pattern machines categorized into three areas: sequence transformation, sequence completion, and sequence improvement (see Fig. 2). First, we show that LLMs are capable of generalizing certain sequence transformations of increasing complexity with a degree of token invariance, and posit that this can carry over to spatial reasoning capabilities in robotic tasks. Next, we assess LLMs' ability to complete patterns from simple functions (e.g., sinusoids) and show this can be applied to robotic tasks like extending a wiping motion from kinesthetic demonstrations, or drawing patterns on a whiteboard. The combination of in-context sequence transformation and extrapolation further enables LLMs to do basic forms of sequence improvement. We show that providing reward-labeled trajectories as context, coupled with online interaction, can enable an LLM-based agent to learn to navigate through a small grid, discover a stabilizing CartPole controller, and optimize simple trajectories via human-in-the-loop "clicker" reward training. Code, benchmarks, and videos will be made available at https://general-pattern-machines.github.io.
2 Related Work
Pattern reasoning by prompting pre-trained LLMs with few-shot input-output examples is driven by in-context learning [38, 39]. The examples serve as a form of task specification, where the model is expected to complete further instances of the task by simply predicting what comes next. In-context learning extends the concept of "task prefixes" (predefined task-specific token sequences, e.g., [40]), but with actual task examples swapped in instead. Brown et al. [39] observe that in-context learning improves (in particular, in out-of-distribution generalization) with model scale. This is in contrast to scaling models for pre-training + fine-tuning, which has been shown to not necessarily improve OOD generalization on language tasks [41]. Nonetheless, despite compelling OOD generalization abilities, in-context learning still comes at a cost, as it continues to lag behind task-specific fine-tuning in absolute performance on benchmarks [38].
In-context learning is explicitly trained for by packing examples from the same task and dataset into the same context buffer that is fed as input to an LLM with an unsupervised autoregressive objective [39], sometimes referred to as meta-training. However, it can also emerge implicitly from training on unsupervised datasets where tokens exhibit a Zipfian distribution [42] with Transformer architectures, but not necessarily with recurrent architectures (e.g., vanilla RNNs or LSTMs) [42]. Other works have shown that in-context learning with Transformers can learn simple function classes on par with least squares [43, 44], and can generalize to a seemingly unbounded number of tasks (when trained on tasks from the same task family) better than multitask MLPs [45], with Bayesian interpretations of this phenomenon [46, 47].
In-context learning occurs during inference without gradient updates to the weights of the model, and can be differentiated from in-weights learning, which relies on information stored in the weights of the model during LLM training [48] (and can be useful for completion tasks such as "Abraham Lincoln was born in"). Chan et al. [48] observe that generalization of in-context learning can be characterized as more "exemplar-based" (on the basis of similarity to in-context examples [49]), as opposed to generalization of in-weights learning, which tends to be more "rule-based" (on the basis of minimal features that support category boundaries in the training data [50]). The vast capabilities of LLMs [39, 51, 52, 53, 54] have been driven by a combination of both forms of learning. In this work, we are particularly interested in in-context learning, and (depending on the task) using the semantic priors of numeric tokens (e.g., "0" to "100") to drive new capabilities such as in-context sequence completion (Section 5) and improvement (Section 6).
LLMs have been applied across a number of areas in robotics – most recently in decomposing high-level task domain descriptions in natural language to mid-level step-by-step plans [6, 7, 55, 56, 57, 58], robot code [13, 17, 14, 59], and planning domain definition languages [10]. These methods leverage the semantic priors stored in LLMs to compose new plans or parameterize primitive APIs, but whether LLMs can directly influence control (e.g., at the level of trajectories) in a zero-shot manner remains an open problem. As a reaction to this, we investigate how the pattern reasoning capabilities of LLMs may drive various control tasks, to extend or optimize low-level action sequences. While it is possible to explicitly train models for these capabilities [60, 61, 62, 63], this work instead focuses on the inherent abilities of LLMs out-of-the-box, which may have downstream implications for the role of language pre-training for building generalist embodied AI systems. Our findings may also benefit domains where data collection is expensive or difficult to scale. Closely related to our work is Brooks et al. [64], which uses an LLM to represent a rollout policy and world model in-context, and then uses model-based Q-learning to drive policy improvement across a collection of toy environments with linguistic representations. Our use of LLMs for sequence improvement can be seen as a simplification of in-context policy iteration that supports both learning from demonstrations and in-context RL, driven by the generality of LLMs as pattern machines.
3 Language Models as General Pattern Machines
The capacity of LLMs to act as general pattern machines is driven by their ability to perform in-context learning on sequences of numeric or arbitrary tokens. An LLM typically represents sequence modeling autoregressively, with a decoder-only Transformer [65], by factorizing the probability of a sequence $x$, which is a sequence of symbols $(s_1, \ldots, s_n)$, into the product of conditional probabilities $p(x) = \prod_{i=1}^{n} p(s_i \mid s_1, \ldots, s_{i-1})$. To perform in-context learning, the model can be conditioned with a prompt that provides the initial tokens in the sequence $s_{1:k} = (s_1, \ldots, s_k)$ and uses the model to complete $s_{k+1:n}$.
The adaptability of in-context learning lies in the amount of flexibility that can be packed into $s_{1:k}$ – this prompt sequence can itself contain many sequences, each an input-output pair, and perhaps additional task conditioning [38, 29]. Specifically, a model can in-context learn to complete a prompt which is a set of $N$ examples $s_{1:k} = (x^1, x^2, \ldots, x^N)$, where each $x^i$ is a variable-length sequence $(s^i_1, s^i_2, \ldots, s^i_{m_i})$.
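For concreteness, the sketch below shows one way such a prompt of examples can be serialized for an autoregressive model; the separator choices and the llm_complete stand-in are illustrative assumptions, not a specific API.

```python
# Hedged sketch: serializing N in-context examples s_{1:k} = (x^1, ..., x^N)
# into a single prompt for an autoregressive LLM. Separators are assumptions.
def make_prompt(examples: list[list[str]], query: list[str]) -> str:
    # Examples are delineated with ';' and tokens separated by spaces.
    return "; ".join(" ".join(x) for x in examples) + "; " + " ".join(query)

prompt = make_prompt([["1", "2", "3"], ["4", "5", "6"]], ["7", "8"])
print(prompt)  # "1 2 3; 4 5 6; 7 8"
# completion = llm_complete(prompt)  # hypothetical LLM call returning s_{k+1:n}
```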
Rather than investigating in-context learning with natural language tasks [39], in this work we are interested in investigating more abstract notions of non-linguistic patterns. The following sections evaluate these capabilities across LLMs, and show how they can be used in robotics. By varying the notion of what each $x^i$ should be, we can characterize in-context pattern learning capabilities into the following 3 categories.
• Sequence Transformation (Section 4): each $x^1, \ldots, x^{N-1}$ is a sequence-to-sequence input-output pair; i.e., $x^i = (x^i_{\text{input}}, x^i_{\text{output}})$, each a subsequence of variable length, and $x^N$ is the query input $(x^N_{\text{input}})$.
• Sequence Completion (Section 5): rather than containing input-output pairs, and rather than containing many examples of different sequences, the prompt $x = (s_1, \ldots, s_k)$ corresponds to discrete samples from a single function, e.g., of the form $s_i = a \cdot \sin(bi)$, which can be extrapolated (see the sketch after this list).
• Sequence Improvement (Section 6): each $x^1, \ldots, x^{N-1}$ is a collection of trajectories (potentially labeled with corresponding total rewards), and $x^N$ prompts the model to "improve" the sequences by inferring a better one, e.g., with least-to-most prompting [66] – this process can be iterative and applied to a variety of formulations, e.g., offline trajectory optimization or online in-context reinforcement learning.
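As one concrete illustration of sequence completion, a sinusoid can be discretized into integer tokens and handed to the model to extrapolate; the scaling and rounding choices below are illustrative assumptions, not the exact recipe of Section 5.

```python
import math

# Hedged sketch: serialize discrete samples of s_i = a * sin(b * i) as
# comma-separated integers, a prompt an LLM could extrapolate in-context.
a, b, k = 50.0, 0.3, 40  # amplitude, frequency, number of prompt samples
samples = [round(a * math.sin(b * i)) for i in range(k)]
prompt = ", ".join(str(s) for s in samples) + ","
print(prompt)
# A competent pattern machine should continue the oscillation, i.e.,
# predict values near round(a * math.sin(b * k)) next.
```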
4 Sequence Transformation
LLMs are capable of in-context learning the distribution of functions that represent sequence transformations by completing abstract patterns observed among examples of input-output sequences $x^i = (x^i_{\text{input}}, x^i_{\text{output}})$ of arbitrary tokens, each drawn from a fixed alphabet $A$. For example, suppose that we are given a string of input-output examples such as "5 3 0, 3 5; 7 6 1, 6 7; 9 2 3, 2 9; 4 8 5,". Here $A$ consists of tokens that represent space-prefixed digits 0-9, a comma token to separate inputs from outputs, and a semicolon token to delineate examples from each other. A general pattern machine should infer the completion "8 4" by recognizing that the pattern is to swap the first 2 tokens, then remove the 3rd.
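The running example's transformation is simple enough to write down directly; a minimal sketch (ours, for illustration) reproduces the prompt and the expected completion:

```python
# Swap the first two tokens, then drop the third, as in the running example.
def transform(tokens: list[str]) -> list[str]:
    assert len(tokens) == 3
    return [tokens[1], tokens[0]]

examples = [["5", "3", "0"], ["7", "6", "1"], ["9", "2", "3"]]
query = ["4", "8", "5"]
prompt = "; ".join(
    " ".join(x) + ", " + " ".join(transform(x)) for x in examples
) + "; " + " ".join(query) + ","
print(prompt)            # 5 3 0, 3 5; 7 6 1, 6 7; 9 2 3, 2 9; 4 8 5,
print(transform(query))  # ['8', '4'], the completion a pattern machine should infer
```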
We use the ARC benchmark [20] to evaluate LLMs on such sequence transformations, whereby token patterns are substantially more complex, covering a wide range of abstract spatial tasks: infilling, counting, translating and rotating shapes, etc. Each task comes with several input-output examples (3.3 on average), and 1-3 test inputs which can be represented as 2D grids. Sizes between inputs and outputs may differ and are not provided beforehand, thereby adding to the difficulty of applying standard machine learning algorithms, which typically assume fixed size. Autoregressive LLMs can be used for the ARC by flattening the grids and predicting each new output grid item in row-major order, which naturally supports variable-length outputs. While LLMs are not originally trained for rasterizing spatial outputs in this way, we hypothesize that a general pattern machine would be capable of implicitly recognizing the long-range dependencies between rows (using positional encoding as a bias [67]) to pick up patterns that extend across the 2nd dimension.
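Since LLMs consume flat token streams, a grid must first be rasterized; the sketch below shows one such row-major serialization (the separator choices are our assumptions, not the paper's exact format):

```python
# Hedged sketch: flatten an ARC-style 2D grid into a row-major token string.
def grid_to_tokens(grid: list[list[int]]) -> str:
    # One space-separated digit per cell; rows separated by newlines.
    return "\n".join(" ".join(str(c) for c in row) for row in grid)

prompt = (
    "input:\n" + grid_to_tokens([[8, 6], [6, 8]]) + "\n"
    "output:\n" + grid_to_tokens([[6, 8], [8, 6]]) + "\n"
    "input:\n" + grid_to_tokens([[7, 9], [9, 7]]) + "\n"
    "output:\n"
)
# An autoregressive LLM completes the output grid cell by cell in row-major
# order, which naturally supports variable-length outputs.
print(prompt)
```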
Method                       | Total (of 800)
-----------------------------|---------------
(d3) text-davinci-003        | 85
(d3) w/ random A             | 44 ± 6 †
(d2) text-davinci-002 [51]   | 64
(p) PaLM [53, 54]            | 42
(d1) text-davinci-001 [39]   | 11
(d1) finetuned               | 9
Ainooson et al., 2023 [23]   | 130 **
Kaggle 1st Place, 2022       | 64 *
Xu et al., 2022 [22]         | 57 *
Alford et al., 2021 [24]     | 35
Ferré et al., 2021 [21]      | 32

* Reported from [22] out of 160 object-oriented problems.
† Numbers averaged across 5 randomly sampled alphabets.
** Based on brute-force search over a rich hand-designed DSL.

Tab. 1: LLMs out-of-the-box can solve a non-trivial number of problems on the ARC, competitive with the best existing methods using hand-crafted domain-specific languages [21, 24, 22].
Result: ARC benchmark. Our experiments in Table 1 show that LLMs (PaLM, InstructGPT series, in acronyms d1-d3) prompted with input grids represented as tokens drawn from an alphabet of digits can correctly infer solutions for up to 85 problems. Surprisingly, this outperforms a number of recent systems [21, 24, 22] based on program synthesis that use manually engineered domain-specific languages (DSLs).
While LLMs have yet to surpass brute-force search [23] to compose functions from a handcrafted API of grid operators, LLMs are perhaps the best-performing generalist method that exists today. (We address the important caveat that parts of the ARC may be present in the training data of LLMs later in this section.)
Observation: consistent tokenization matters. The ARC can be found among the suite of tasks in BIG-Bench [68], but has often been overlooked since many language models appear to perform poorly (near or at zero performance). We observe this occurs due to the formatting of the benchmark, where grid elements are represented as neighboring characters in a string, i.e., "8686" (instead of "8 6 8 6"). While subtle, this difference is enough for certain Byte-Pair Encoding (or SentencePiece) tokenizers [69, 70] (that do not tokenize per digit) to group together multiple grid elements ("8" and "6") into a single token ("86"), which maps to a different token embedding altogether in the vocabulary. This causes inconsistencies with how the patterns are expressed at the token level. For example, given a task expressed in a string "8686,6868;7979,", if the LLM tokenizer groups together pairs of digits 86, 68, 79, respectively, then the sequential inductive patterns of the task (to swap and repeat individual digits) are lost. A simple work-around is to directly pass token indices or embeddings to the language model, or use token alphabets unlikely to be grouped by the tokenizer. This work-around generalizes to other pattern manipulation tasks beyond the ARC; in general, it is important to tokenize in a manner that is consistent with the pattern being represented.
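The effect is easy to check with an off-the-shelf BPE tokenizer; the sketch below uses the tiktoken library (exact token groupings depend on the tokenizer, so the comments describe typical rather than guaranteed behavior):

```python
import tiktoken  # assumes the tiktoken library is installed

enc = tiktoken.get_encoding("gpt2")
dense = enc.encode("8686,6868;7979,")              # digits may merge, e.g., "86"
spaced = enc.encode("8 6 8 6, 6 8 6 8; 7 9 7 9,")  # roughly one token per digit
print(len(dense), len(spaced))
# Space-separating digits (or choosing alphabets the tokenizer will not
# merge) keeps one grid element per token, so the swap-and-repeat pattern
# stays visible at the token level.
```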
Observation: token mapping invariance. The hypothesis that LLMs can serve as general pattern machines stems from the observation that they can surprisingly still solve a non-trivial number of ARC problems using alphabets $A$ sampled randomly from the LLM's token vocabulary. For instance, given a particular alphabet {8 ↦ falls, 6 ↦ +#, 7 ↦ Ul, 9 ↦ Chev, 3 ↦ 慶, 2 ↦ 2010}, a pattern machine at sufficient proficiency can be expected to complete the prompt "falls +# falls +#, +# falls +# falls; Ul Chev Ul Chev, Chev Ul Chev Ul; 慶 2010 慶 2010," by predicting "2010 慶 2010 慶". For example, text-davinci-003 [51, 39] with the following mapping $A$ = {0 ↦ offence, 1 ↦ Subject, 2 ↦ Lub, 3 ↦ Fail, 4 ↦ Chev, 5 ↦ symb, 6 ↦ swung, 7 ↦ Ul, 8 ↦ escalate, 9 ↦ Chromebook} solves 52 ARC problems, and across 5 different random alphabets solves an average of 43.6 problems. Interestingly, we find that token mapping invariance holds to an extent on simple pattern transformations for randomly sampled embeddings as well (i.e., such that embeddings are not associated with any token in the vocabulary; see Appendix).
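A random alphabet of this kind is straightforward to construct; in the sketch below the candidate token list is a stand-in, whereas the paper samples from the LLM's actual vocabulary:

```python
import random

# Hedged sketch: remap digit tokens to randomly chosen vocabulary strings
# before prompting. VOCAB is a stand-in list, not the LLM's real vocabulary.
VOCAB = ["falls", "+#", "Ul", "Chev", "慶", "2010", "offence", "Subject",
         "Lub", "Fail", "symb", "swung", "escalate", "Chromebook"]

def random_alphabet(digits: str = "0123456789") -> dict[str, str]:
    return dict(zip(digits, random.sample(VOCAB, len(digits))))

def remap(prompt: str, alphabet: dict[str, str]) -> str:
    # Punctuation tokens (',' and ';') pass through unchanged.
    return " ".join(alphabet.get(tok, tok) for tok in prompt.split())

A = random_alphabet()
print(remap("8 6 8 6 , 6 8 6 8 ; 7 9 7 9 ,", A))
```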
The implications of token mapping invariance are two-fold. First, note that it is possible that parts of the ARC (and other static examples of pattern transformations) are present in the training data of an LLM (i.e., due to contamination). Therefore, measuring the performance of LLMs under random alphabets may provide a closer estimate of their true underlying in-context sequence transformation capabilities. (As additional evidence that LLMs' sequence transformation ability is not simply due to memorization, we also provide a new procedurally-generated pattern transformation benchmark, which we describe below.)
Second, we hypothesize that the pattern manipulation capabilities which token invariance implies could help to drive positive transfer from patterns learned across Internet-scale language data to new modalities or symbolic representations for robot reasoning. As an example of this idea, (i) Fig. 3 (top) shows a grasp (Skittles) detector which outputs target coordinates within a downsampled image (with 6 in-context examples), and (ii) Fig. 3 (bottom) shows spatial rearrangement via predicting simple forward dynamics where the red bowl moves to the green plate (with 9 in-context examples of downsampled images as inputs and outputs).
Fig. 3: Example LLM prediction as an in-context grasp detector (top) and a simple forward dynamics model (bottom).
The generality of what the arbitrary tokens could represent may allow pattern transformation capabilities – especially as LLMs improve – to be leveraged at various levels of abstraction in robotics (including at the level of pixels or robot joint positions). Incorporating more semantic priors into representations may also boost performance and enable further LLM-driven reasoning (e.g., reducing visual data into more semantic spatial representations).
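To make the Fig. 3 setup concrete, the sketch below (an illustrative assumption, not the paper's released code) downsamples an image into a small grid of intensity tokens so that next-state prediction becomes a sequence transformation:

```python
import numpy as np

# Hedged sketch: represent a downsampled image as a grid of integer tokens
# so an LLM can in-context predict target coordinates or the next state.
# The grid size and intensity range are assumptions for illustration.
def image_to_tokens(img: np.ndarray, size: int = 8) -> str:
    h, w = img.shape
    small = img[: h - h % size, : w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return "\n".join(" ".join(str(int(v)) for v in row) for row in small)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64))  # stand-in grayscale image
prompt = "input:\n" + image_to_tokens(frame) + "\noutput:\n"
# With several input/output frame pairs prepended, the LLM's completion is
# the predicted next-state grid (or target coordinates), also in tokens.
```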
Function                                        | Example Inputs     | Example Outputs
------------------------------------------------|--------------------|-------------------
remove_second(swap(s1, s2), s3)                 | 5 3 0              | 3 5
                                                | 7 6 1              | 6 7
echo(copy(swap(swap(prepend(remove_second(      | 6 7 7 8 1 5 9 8 9  | 1 5 9 8 9 7 7 6 6
swap(echo(s1 s2)), s3 s4), s5 s6 s7 s8 s9 s10)  | 4 3 0 3 5 0 2 3 8  | 5 0 2 3 8 3 3 4 4

Tab. 2: Illustrations of transformations in our PCFG benchmark. Row 1 shows a transformation composed of k = 2 operations over w = 3 tokens, and row 2 shows a transformation composed of k = 8 operations over w = 10 tokens. For each transformation function, we show two example inputs and the corresponding outputs.
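Reading remove_second(a, b) as dropping its second argument (a semantics inferred from the examples above, not necessarily the benchmark's exact DSL), row 1 of Tab. 2 can be reproduced with a short sketch:

```python
# Hedged sketch of the row-1 PCFG transformation; operator semantics are
# inferred from the table's examples and may differ from the benchmark's DSL.
def swap(a: list[str], b: list[str]) -> list[str]:
    return b + a

def remove_second(a: list[str], b: list[str]) -> list[str]:
    return a  # drop the second argument entirely

def f(tokens: list[str]) -> list[str]:
    s1, s2, s3 = [[t] for t in tokens]
    return remove_second(swap(s1, s2), s3)

print(f(["5", "3", "0"]))  # ['3', '5']
print(f(["7", "6", "1"]))  # ['6', '7']
```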