Navigating the Path to Autonomous Mobility
Prof. Amnon Shashua, CEO
Prof. Shai Shalev-Shwartz, CTO
How to Solve Autonomy
- Reaching a real "full self driving" system (eyes-off)*
- While maintaining a sustainable business
* Subject to defined Operational Design Domain and product specifications
How to Solve Autonomy

Company   | Sensors        | AI Approach | Cost / Modularity / Geographic Scalability / MTBF
Waymo     | Lidar-centric  | CAIS        | ?
Tesla     | Camera only    | End-to-end  | ?
Mobileye  | Camera-centric | CAIS        | ?
How to Solve Autonomy (recap of the comparison table above)

Which is more likely to succeed?
End-to-End Approach: Premise vs. Reality
- Premise: no glue code. Reality: glue code shifted to offline
  - Rare & correct vs. common & incorrect: the "AV alignment" problem
- Premise: unsupervised data alone can reach sufficient MTBF. Reality: really?
  - Calculator
  - Shortcut learning problem
  - Long tail problem
"No Glue Code": The AV Alignment Problem
End-to-end aims to maximize P[y|x], where y is the future trajectory a human would take, given the previous video, denoted x.
This learning objective prefers 'common & incorrect' over 'rare & correct'.
Examples:
1. Most drivers slow down at a stop sign but do not come to a full stop
   - Rolling stop = common & incorrect
   - Full stop = rare & correct
2. "Rude drivers" that cut in line
3. Reckless drivers
This is why RLHF is used in LLMs: the reward mechanism differentiates between 'correct' and 'incorrect'.
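The objective's preference for 'common & incorrect' can be seen in a toy maximum-likelihood estimate. The 90/10 split below is a hypothetical illustration, not measured data:

```python
from collections import Counter

# Hypothetical demonstrations: 90% of observed drivers roll through a stop
# sign, 10% come to a full stop. A model fit to maximize P[y|x] on human
# trajectories puts most probability mass on the common-but-incorrect
# behavior, so its most likely output is the rolling stop.
demos = ["rolling_stop"] * 90 + ["full_stop"] * 10
p = {y: c / len(demos) for y, c in Counter(demos).items()}
mode = max(p, key=p.get)
print(mode, p[mode])  # rolling_stop 0.9
```

A reward signal (as in RLHF) that penalizes the rolling stop is what breaks this tie in favor of the rare-but-correct behavior.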
Glue code shifted to offline
Can Unsupervised Data Alone Reach High MTBF? Calculators
End-to-end learning from data often misses important abstractions and therefore doesn't generalize well.
Example: learning to multiply 2 numbers, a task where even the largest LLMs struggle.
/yuntiandeng/status/1836114401213989366
Can Unsupervised Data Alone Reach High MTBF? Calculators (cont.)
What can be done?
ChatGPT: call a tool (calculator)
- Provide tools to LLMs
- -> Compound AI Systems (CAIS)
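A minimal sketch of the tool-calling idea: rather than letting a learned model approximate arithmetic, a dispatcher routes the request to an exact tool. The names `answer` and `model_fallback` are hypothetical, and Python's integer arithmetic stands in for the calculator:

```python
import re

def answer(query: str) -> str:
    # If the query is a multiplication, route it to the exact tool
    # (the "calculator") instead of the learned model.
    m = re.fullmatch(r"\s*(\d+)\s*\*\s*(\d+)\s*", query)
    if m:
        return str(int(m.group(1)) * int(m.group(2)))  # exact for any operand size
    return model_fallback(query)  # hypothetical: the neural model handles the rest

def model_fallback(query: str) -> str:
    # Stand-in for the learned model's answer.
    return "I need to think about that."

print(answer("123456 * 789012"))
```

This is the essence of a compound system: each request goes to the component that handles it reliably, rather than forcing one network to do everything.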
Can Unsupervised Data Alone Reach High MTBF? The Shortcut Learning Problem
Relying on different sensor modalities is a well-established methodology for increasing MTBF. The question: how to fuse the different sensors?
The "end-to-end approach": just feed all sensors into one big network and train it.
"The Shortcut Learning Problem": when different input modalities have different sample complexities, end-to-end Stochastic Gradient Descent struggles to leverage the advantages of all modalities.
Can Unsupervised Data Alone Reach High MTBF? The Shortcut Learning Problem
Consider 3 types of sensors: radar, lidar, camera.
Suppose that each system has inherent limitations that cause a failure probability of ε, where ε is small (e.g., one in 1,000 hours). Additionally, assume that the failures of the different sensors are independent.
We compare two options:
- Low-level, end-to-end fusion (train a system based on the combined input)
- CAIS: decomposable training of a system per modality, followed by high-level fusion
Which option is better?
Shortcut Learning Problem: A Simple Synthetic Example
Distribution: all variables are over {+1, -1}, and data is created by the following simple generative model:
y ~ B(1/2),  r1, r2, r3 ~ i.i.d. B(ε) (i.e., ri = -1 with probability ε),  x1 = y·r1,  x2 = y·r2,  x4, x5 ~ i.i.d. B(1/2),  x3 = y·r3·x4·x5
This is a simple model of fusion between lidar, radar, and camera systems with the following properties:
- The 3 systems have uncorrelated errors (modeled by r1, r2, r3) of level ε
- x1 and x2 are "simple" systems (modeling radar and lidar), while the product x3·x4·x5 equals y·r3, and is therefore a "complicated to learn" system (modeling the camera)
Theorem:
- Decomposable training of a 1-hidden-layer FCN per system, followed by majority, easily reaches an error of O(ε²)
- End-to-end SGD training will be "stuck" at an error of ε for T/ε iterations, where T is the time complexity of learning the complicated system (the camera) individually
What happened? Isn't end-to-end always better?
The shortcut learning problem: end-to-end SGD struggles to leverage systems with different sample complexities.
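The O(ε²) claim for decomposable training plus majority can be checked by Monte Carlo on the generative model above. This is a sketch that assumes each per-system predictor has already learned its sign function, so only the fusion step is simulated:

```python
import random
random.seed(0)

# Monte-Carlo check of the synthetic fusion model: three "systems" each
# recover y up to an independent error of level e, and a majority vote
# over them should fail only when at least two err: ~3*e^2.
e, n = 0.05, 200_000
err = 0
for _ in range(n):
    y = random.choice([+1, -1])
    r = [(-1 if random.random() < e else +1) for _ in range(3)]
    x4, x5 = random.choice([+1, -1]), random.choice([+1, -1])
    x1, x2, x3 = y * r[0], y * r[1], y * r[2] * x4 * x5
    s1, s2, s3 = x1, x2, x3 * x4 * x5        # per-system predictions of y
    vote = 1 if (s1 + s2 + s3) > 0 else -1   # high-level majority fusion
    err += (vote != y)
print(err / n)  # close to 3*e^2 = 0.0075, versus e = 0.05 for a single system
```

The end-to-end side of the theorem (SGD stuck at error ε) cannot be shown this cheaply; it concerns the training dynamics, not the fused predictor.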
Can Unsupervised Data Alone Reach High MTBF? The Long Tail Problem
In the optimistic scenario, a few rare events reduce the probability mass considerably. In the pessimistic scenario, each rare event has minimal impact on the probability mass.
[Chart: P(event) vs. events, contrasting the optimistic scenario with the pessimistic one, in which there are too many rare events and each does not reduce P(event) noticeably]
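A hypothetical numeric contrast between the two scenarios: the residual failure mass after the k most frequent rare events have been handled by the data pipeline. The two distributions are illustrative, not FSD data:

```python
# Optimistic: mass concentrated in a few events (geometric decay).
# Pessimistic: mass spread over a huge number of equally tiny events.
def residual(masses, k):
    # Fraction of failure mass remaining after fixing the k largest events.
    fixed = sorted(masses, reverse=True)[:k]
    return 1 - sum(fixed) / sum(masses)

optimistic = [0.5 ** i for i in range(1, 1000)]  # a few events dominate
pessimistic = [1.0] * 100_000                    # too many equal tiny events
print(residual(optimistic, 10))   # ~0.001: fixing 10 events removes almost all mass
print(residual(pessimistic, 10))  # 0.9999: fixing 10 barely moves the needle
```

Which regime real driving occupies is exactly the open question behind the "MTBF from data alone" debate.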
Long Tail of Tesla FSD
- The Tesla FSD tracker indicates that reducing variance solely through a data pipeline results in incremental progress
/news/735038/tesla-fsd-occasionally-dangerously-inept-independent-test/
* Public data on Tesla's recent 12.5.x
How to Solve Autonomy (recap of the comparison table above)
The Bias-Variance Tradeoff in Machine Learning
Bias ('approximation error'): the learning system cannot reflect the full richness of reality.
Variance ('generalization error'): the learning system overfits to the observed data, and fails to generalize to unseen examples.
[Chart: error ε vs. model capacity; bias decreases, variance increases, and the total error curve has a minimum]
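A minimal polynomial-regression sketch of the tradeoff on synthetic data: the low-degree fit is bias-dominated (it cannot reflect the curve), while the high-degree fit is variance-dominated (it chases the noise). All numbers are illustrative:

```python
import numpy as np
np.random.seed(0)

# Illustrative data: noisy samples of a smooth curve on [-1, 1].
f = lambda t: np.sin(np.pi * t)
x = np.sort(np.random.uniform(-1, 1, 20))
y = f(x) + 0.3 * np.random.randn(20)
xt = np.linspace(-0.9, 0.9, 200)  # held-out grid with noise-free targets
yt = f(xt)

def errs(deg):
    c = np.polyfit(x, y, deg)  # least-squares polynomial fit
    return (np.mean((np.polyval(c, x) - y) ** 2),    # train error
            np.mean((np.polyval(c, xt) - yt) ** 2))  # test error

tr1, te1 = errs(1)     # high bias: a line cannot capture the curve
tr15, te15 = errs(15)  # low bias, high variance: fits the noise
print(f"deg 1:  train {tr1:.3f}, test {te1:.3f}")
print(f"deg 15: train {tr15:.3f}, test {te15:.3f}")
```

Since the degree-15 model contains the degree-1 model, its training error is guaranteed to be no larger; it is the test error that exposes the variance.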
Abstraction Injections
Mobileye Compound AI System (CAIS)
- AV Alignment: RSS separates correct from incorrect
- Reaching Sufficient MTBF:
  - Abstractions: Sense/Plan/Act; analytic calculations: RSS, time-to-contact, ...
  - Redundancies: sensors, algo, high-level fusion
Mobileye Compound AI System (CAIS), recap, adding:
- Extremely Efficient AI (Shai will cover)
- PGF
High-Level Fusion: How to Perform
Consider a simple case: we are following a lead vehicle, and we have 3 sensors: camera, radar, lidar.
If there are contradictions between the sensors, where some dictate strong braking while others do not, what should we do?
Majority: 2 out of 3 (2oo3). Property of majority: if each modality has an error probability of at most ε, and the errors are independent, then the majority vote has an error probability of O(ε²).
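The O(ε²) property of 2oo3 voting follows directly from the binomial formula: the vote fails only when at least two sensors fail simultaneously. A quick check:

```python
# Exact failure probability of 2-out-of-3 majority voting with independent
# per-sensor error probability eps: exactly two fail, or all three fail.
def majority_error(eps):
    return 3 * eps**2 * (1 - eps) + eps**3

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, majority_error(eps))  # roughly 3*eps^2, i.e. O(eps^2)
```

For ε = 0.001 the fused error is about 3e-6, three orders of magnitude better than any single sensor.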
Majority is Not Always Applicable
Now consider 3 systems, each one predicting where our lane is.
Majority is not defined for non-binary decisions, so what can be done?
The Primary-Guardian-Fallback (PGF) Fusion
We propose a general approach for generalizing the majority rule to non-binary decisions. We build 3 systems:
- Primary (P): predicts where the lane is
- Guardian (G): checks whether the prediction of the primary system is valid
- Fallback (F): predicts where the lane is
Fusion:
- If the Guardian dictates that the Primary is valid, choose the Primary
- Otherwise, choose the Fallback
Theorem: PGF has the same property as the majority rule. If the failure probability of each system is at most ε and these failures are independent, then the fused system has an error of O(ε²).
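A minimal sketch of the PGF rule and its error bound. The function and variable names are hypothetical; the bound is the union of the two double-failure events, each a product of two independent probabilities at most ε:

```python
# Hypothetical sketch of Primary-Guardian-Fallback fusion for a non-binary
# output (here a lane estimate); the three subsystems are stand-ins.
def pgf_fuse(primary_out, guardian_says_valid, fallback_out):
    # If the Guardian validates the Primary, use it; otherwise fall back.
    return primary_out if guardian_says_valid else fallback_out

# The fused output is wrong only if:
#   (Primary wrong AND Guardian wrongly validates it), or
#   (Guardian wrongly rejects a correct Primary AND Fallback is wrong).
# Each conjunction has probability <= eps*eps, so a union bound gives:
def pgf_error_bound(eps):
    return 2 * eps**2

print(pgf_fuse("lane_A", True, "lane_B"))   # lane_A
print(pgf_fuse("lane_A", False, "lane_B"))  # lane_B
print(pgf_error_bound(0.001))
```

Like 2oo3 majority, the fused error is O(ε²), but the rule works for arbitrary (non-binary) outputs such as lane geometries.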
Mobileye Compound AI System (CAIS), recap, adding:
- Extremely Efficient AI
Extremely Efficient AI
- Transformers for sensing and planning at x100 efficiency
- Inference chip (EyeQ6H): designed for efficiency
- Efficient labeling by Auto Ground Truth
- Efficient modularity by teacher-student architecture
Prologue: 6 AI Revolutions
Machine Learning -> Deep Learning -> Generative AI -> Universal Learning -> Transformers -> Sim2Real -> Reasoning
Pre-Transformers: Object Detection Pipeline
[Pipeline diagram: clustering and max suppression; 2D to 3D]
Three Revolutions of Generative Pretrained Transformers (GPTs)
1. Tokenize everything
2. Generative, auto-regressive
3. Transformer architecture: 'Attention is all you need'
Three Revolutions of Generative Pretrained Transformers: Tokenize Everything
Input: transcribe each input modality (e.g., text, images) into a sequence of tokens.
Output: transcribe each output modality as a sequence of tokens and employ generative, auto-regressive models with a suitable loss function.
Accommodates: complex input and output structures (e.g., sets, sequences, trees).
Object detection pipeline example:
- Input: single image
- 'Tokenized' input: sequence of image patches
- 'Tokenized' output: sequence of 4 coordinates determining the location of the objects in the image
Three Revolutions of Generative Pretrained Transformers: Generative, Auto-Regressive
Previous approach: classification or regression with fixed, small-size outputs (e.g., ImageNet).
Current approach: learn probabilities for sequences of arbitrary length (e.g., sentence generation).
Key features:
- Chain rule: models sequence dependencies
- Generative: fits data using maximum likelihood
Enables:
- Self-supervision (e.g., future words in a document)
- Handling uncertainty (multiple valid outputs, by learning P[y|x])
Three Revolutions of Generative Pretrained Transformers
Example: consider a 1000x1000-pixel image containing 4 vehicles, with the image divided into 10x10-pixel patches. What is the dimensionality of the output distribution when using the chain rule compared to when not using it?
Output: x_{1,1}, y_{1,1}, x_{1,2}, y_{1,2}, ..., x_{4,1}, y_{4,1}, x_{4,2}, y_{4,2} (a list of 4 coordinates per vehicle)
Using the chain rule:
P[Vehicles] = P[x_{1,1}] · P[y_{1,1} | x_{1,1}] · ... · P[y_{4,2} | x_{1,1}, ..., x_{4,2}]
Each factor is a distribution of Dim = 100 (the 100 patch positions per axis).
Without using the chain rule:
P[Vehicles] = P[x_{1,1}, y_{1,1}, x_{1,2}, y_{1,2}, ..., x_{4,1}, y_{4,1}, x_{4,2}, y_{4,2}]
Dim = 100^16 = 10^32
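The two dimensionalities in the example can be checked directly:

```python
# A 1000x1000 image in 10x10 patches gives a 100x100 grid, so each
# coordinate ranges over 100 values; 4 vehicles x 2 corners x 2 coordinates
# gives 16 output coordinates in total.
grid = 1000 // 10          # 100 positions per axis
coords = 4 * 2 * 2         # 16 output coordinates
with_chain_rule = grid               # each conditional factor is over 100 values
without_chain_rule = grid ** coords  # one joint table over all 16 coordinates
print(with_chain_rule)               # 100
print(without_chain_rule == 10**32)  # True
```

The chain rule turns one astronomically large joint table into 16 small conditional distributions, which is what makes the auto-regressive factorization learnable.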
Three Revolutions of Generative Pretrained Transformers: Transformer Architecture, 'Attention Is All You Need'
Tailored for the problem of predicting P[token_{n+1} | token_n, token_{n-1}, ..., token_0]
[Diagram: transformer layer n, alternating self-attention and self-reflection (FCN per token) blocks over the token sequence]
Transformer Layer: Group Thinking Analogy
Imagine a team discussing a project:
- Each person has their own area of expertise
- They all contribute to the overall outcome
- Everyone is working simultaneously rather than one after another
Self-attention: each member listens to others and responds in real time, adjusting their input based on important points raised.
(Example exchange: "Something is fully blocking my view, maybe a truck" / "Does anyone see a close truck on our left side?" / "I partially saw a very big wheel" / "I have no idea" / "No")
Self-reflection: each participant takes time alone to process ideas and organize their thoughts.
Transformer Layer: Self-Reflection
- Each token individually processes its 'knowledge' using a multi-layer perceptron, without interacting with the other tokens
[Diagram: n input tokens of dimension d, each passed through its own FCN; connections: d²·n]
Transformer Layer: Self-Attention
- Each token sends a 'query' to the other tokens, which respond with values if their 'key' matches the 'query'
- The querying token then averages the received values, facilitating inter-token connectivity
Relevancy: a_{i,j} = query_i · key_j (an n x n score matrix; cost n²·d)
Example from the group thinking analogy:
- Person i asks: "Does anyone know something about x?"
- Person j responds: "Yes, I have something to say about it"
- Person j' responds: "No, I don't know anything about it"
Transformer Layer: Self-Attention (SoftMax)
- Normalizes scores: converts the raw attention scores into normalized probabilities
- Probability distribution: each set of attention scores is transformed so that its probabilities sum to 1
- Focus mechanism: this allows the model to weigh different parts of the input differently, focusing more on relevant parts based on the probabilities
[Diagram: normalize each row of the score matrix a_{i,j} by SoftMax to obtain α_{i,j}, which indicates how much token i wants to pay attention to token j; the message i gets from the group is Σ_j α_{i,j}·v_j]
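The query/key/value mechanics above can be sketched in a few lines of numpy. This is a minimal single-head version, without the multi-head, masking, and projection details of production transformers:

```python
import numpy as np
np.random.seed(0)

def self_attention(X, Wq, Wk, Wv):
    # X: (n, d) token embeddings; Wq, Wk, Wv: (d, d) projection matrices.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])            # raw relevancy a_{i,j}
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)              # SoftMax: each row sums to 1
    return A @ V, A                                   # message i: sum_j alpha_{i,j} * v_j

n, d = 4, 8
X, Wq, Wk, Wv = (np.random.randn(*s) for s in [(n, d), (d, d), (d, d), (d, d)])
out, A = self_attention(X, Wq, Wk, Wv)
print(out.shape, np.allclose(A.sum(axis=1), 1))  # (4, 8) True
```

Note the two matrix products of sizes (n, n) and (n, d): the n²·d cost quoted on the slide is visible directly in `Q @ K.T` and `A @ V`.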
Transformers: Complexity
Total cost: L·(n·d² + n²·d), where L is the number of layers, n·d² is the self-reflection term, and n²·d is the self-attention term.
Cost per layer for alternative architectures:
- Fully Connected Network (FCN): flattens the n·d values; connections: d²·n²
- Recurrent Neural Network (RNN): 'talks' only with the previous token; connections: n·d²
'Effective Sparsity' of Transformers
- Fully Connected Network (FCN): d²·n² connections; any modality
- Convolutional Neural Networks (CNNs): sparsity specific to images
- Transformers: sparser, d²·n + n²·d; any modality; denser than recurrent models, but effectively select only a few past tokens for communication
- Recurrent Neural Networks (RNN) / Long-Short-Term-Memory (LSTM): Markov sparsity, with the context represented by a state vector
The 3 Revolutions Enable a Universal Solution
- Handles all types of inputs
- Deals with uncertainty (by learning probabilities)
- Enables all types of outputs
The ultimate learning machine?
A Transformer End-to-End Object Detection Network
Input: images
Output: all objects
A Transformer End-to-End Object Detection Network: The 5 "Multi" Problems
- Multi-camera: surround
- Multi-frame: from multiple timestamps
- Multi-object: needs to output all objects (vehicles, pedestrians, hazards, ...)
- Multi-scale: needs to detect far and close objects at different resolutions
- Multi-lane: needs to assign objects to relevant lanes/crosswalks
Universality of Transformers:
- Encode image patches (from different cameras, different frames, and different resolutions) as tokens
- Encode objects as a sequence of tokens (for each object: position, velocity, dimensions, type)
- Apply a Transformer to generate the probability of output tokens given input tokens in an auto-regressive manner
Network Architecture: Vanilla Transformer
- CNN backbone for creating image tokens:
  - C = 32 high-resolution images are converted to 32 feature maps of resolution 20x15, yielding Np = 300 "pixels" per image, with d = 256 channels
- Encoder:
  - We have N = C·Np = 9600 image tokens, each of dimension d = 256
  - A vanilla transformer network with L layers requires O(L·(N²·d + d²·N))
  - The encoder alone requires around 100 TOPS (assuming 10 Hz, L = 32)
- Decoder:
  - Predicts a sequence of tokens representing all the objects (hundreds of tokens)
  - Vanilla auto-regressive decoding is sequential; with a KV cache, each iteration involves compute of at least O(L·N·d) per token prediction (but the real issue here is the IO of L·N·d)
  - Around 100 MB per token prediction!
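A back-of-envelope check of the decoder's per-token IO with the numbers above. The exact megabyte figure depends on numeric precision, which the slide does not state: at 1 byte per value it is about 79 MB, at 2 bytes about 157 MB, consistent with the ~100 MB order of magnitude quoted:

```python
# KV-cache IO per auto-regressive token prediction: each decoding step
# touches the cached keys/values of all L layers x N encoder tokens x d channels.
L, N, d = 32, 9600, 256
values = L * N * d
print(values)  # 78643200 cached values per token step
print(f"{values / 1e6:.0f} MB at 1 byte/value, {2 * values / 1e6:.0f} MB at 2 bytes/value")
```

Moving on the order of 10⁸ values per token, for hundreds of output tokens per frame at 10 Hz, is why the slide flags IO rather than FLOPs as the real bottleneck.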
Vanilla Transformers Are Not Efficient
Transformers are a brute-force approach with limited means of utilizing prior knowledge. This is the "dark side" of universality.
Self-connectivity: n·d². Inter-connectivity: n²·d.
- GPT-3: d = 12288, n = 2048, so n·d² ≈ 317B
- In AV, n ≈ 10⁴, so the n²·d term becomes a bottleneck
We pay both:
- Sample complexity (d is large, as it needs to handle all the information in each token)
- Computational complexity of inference (n and d are large)
- (Both issues are known in the literature, and general mitigations such as "mixture-of-experts" and "state-space models" have been proposed)
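Plugging in the numbers shows how the bottleneck flips between regimes: for GPT-3 the n·d² self-reflection term dominates, while at AV-scale token counts (n ≈ 10⁴, with a modest d such as the 256 used earlier) the n²·d attention term takes over:

```python
# Per-layer cost split of a vanilla transformer: n*d^2 (self-reflection)
# versus n^2*d (self-attention), in two different regimes.
def per_layer_terms(n, d):
    return n * d**2, n**2 * d

gpt3 = per_layer_terms(2048, 12288)  # ~3.1e11 vs ~5.2e10: the d^2 term dominates
av = per_layer_terms(10_000, 256)    # ~6.6e8 vs ~2.6e10: the n^2 term dominates
print(gpt3)
print(av)
```

This is why a method that sparsifies attention (the n²·d term) pays off in the AV setting even though it would matter far less for an LLM.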
What About End-to-End From Pixels to Control Commands?
Weaknesses of transformers:
- Brute force
- The learning objective (learning P[y|x]) prefers a 'common & incorrect' y over a 'rare & correct' y
- Questionable whether it can reach a sufficiently high MTBF
- Misses important abstractions and therefore doesn't generalize well
- The shortcut learning problem
(As part of CAIS, our e2e architecture has an additional head that outputs control commands directly as well, which is fine as a low-MTBF redundant component)
Mobileye Compound AI System (CAIS), recap
Implications:
- Must output a Sensing State
- Each subsystem must be super-efficient, because we don't have a single system
Extremely Efficient AI
- Transformers for sensing and planning at x100 efficiency
- Efficient labeling by Auto Ground Truth
STAT: Sparse Typed Attention
Vanilla transformer: n²·d + d²·n
STAT:
- Token types: each token has a "type"
- Dimensionality: the dimensions of embeddings and self-reflection matrices may vary based on the token type
- Token connectivity: the connectivity between tokens is sparse and depends on their types
- Link tokens: we add "link" tokens for controlling the connectivity
- Inference efficiency: for our end-to-end object detection task, STAT is x100 faster at inference time while slightly improving performance
STAT: Sparse Typed Attention (cont.)
STAT encoder for object detection:
- Token types:
  - Image tokens: recall, we have C = 32 images, each with Np = 300 "pixels", yielding 9600 image tokens
  - We add NL = 32 "link" tokens per image
- STAT block:
  - Within each image, cross-attention between the 300 image tokens and the 32 link tokens: C·Np·NL·d
  - Across images, full self-attention between all link tokens: (C·NL)²·d
- Compared to (C·Np)²·d in vanilla transformers, we get a factor improvement of (C·Np)²·d / (C·Np·NL·d + (C·NL)²·d), which is approximately x100 faster in our case
- Performance: for our end-to-end object detection task, STAT is not only x100 faster, but also improves performance (we enlarge the expressivity of the network while making it much faster at inference time)
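The claimed factor can be checked with the slide's numbers, assuming the cross-image stage attends over all C·NL link tokens jointly:

```python
# Per-layer attention cost, vanilla vs. STAT, with the slide's numbers.
C, Np, NL, d = 32, 300, 32, 256
vanilla = (C * Np) ** 2 * d                 # full attention over all 9600 tokens
stat = C * Np * NL * d + (C * NL) ** 2 * d  # per-image cross-attn + link-token attn
print(vanilla / stat)  # roughly 68x fewer attention connections, i.e. order x100
```

Image tokens never attend to each other directly; all cross-image communication is funneled through the much smaller set of link tokens, which is where the saving comes from.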
[Diagram: C = 32 images, each contributing 300 image tokens and 32 link tokens; cross-attention within each image between its image tokens and link tokens; cross-image attention among link tokens only]
Parallel Auto-Regressive (PAR)
We need to detect all objects in the scene: what is the order? Auto-regressive: it doesn't matter, due to the chain rule!
Price of sequential decoding:
- Sequential decoding is costly on all modern deep-learning chips (due to IO)
- We added unneeded "fake uncertainty" (what is the order?)
- The "truck and trailer" problem
DETR (DEtection TRansformer, Facebook AI, May 2020):
- Outputs all objects in parallel
- Hungarian matching to determine the relative order between the network's predictions and the order of the ground truth
- Problem: doesn't deal well with true uncertainty
  - The "truck and trailer" problem
  - Streets which can be 1 or 2 lanes, etc.
[Image: Paris streets]
Parallel Auto-Regressive (PAR)
- The decoder contains query heads, which perform cross-attention with the encoder's link tokens entirely in parallel
- Each query head outputs, auto-regressively, 0/1/2 objects (independently of, and in parallel to, the other query heads)
- -> dealing only with "true uncertainties" and not with "fake uncertainties"
[Pipeline: input images -> CNN tokenization -> STAT encoder -> query heads -> output tokens]
Intermediate Summary
Transformers revolutionized AI.
- The good:
  - Universal, generative AI
- The bad:
  - Can't separate "correct & rare" from "wrong & common"
  - Misses important abstractions
  - Questionable when very high accuracy is required
- The ugly:
  - Brute-force approach, unnecessarily expensive
Working smarter with transformers:
- STAT: x100 faster & better accuracy
- PAR: x10 faster & embraces uncertainty only when it is needed
Extremely Efficient AI
- Transformers for sensing and planning at x100 efficiency
- Inference chip (EyeQ6H): designed for efficiency
- Efficient labeling by Auto Ground Truth
Hardware Architectures Tradeoff: Flexibility vs. Efficiency
[Spectrum: fixed-function (special purpose, high efficiency) <-> GPU <-> CPU (general purpose, low efficiency)]
EyeQ6 High: 5 Distinct Architectures
[Spectrum from special purpose / high efficiency to general purpose / high flexibility: XNN, PMA, VMP, MPC, MIPS]
- Address Mobileye's high efficiency and flexibility needs
- Enable accelerating a range of parallel compute paradigms
5 Distinct Architectures: Enhanced Parallel Processing
- MIPS: a general-purpose CPU
- MPC: a CPU specialized for thread-level parallelism
- VMP: Very-Long-Instruction-Word (VLIW), Single-Instruction-Multiple-Data (SIMD); designed for data-level parallelism with fixed-point arithmetic (e.g., converting the 12-bit raw image into a set of 8-bit images of different resolutions and tone maps); basically performs operations on vectors of integers
- PMA: Coarse-Grain Reconfigurable Array (CGRA); designed for data-level parallelism including floating-point arithmetic; basically performs operations on vectors of floats
- XNN: dedicated to fixed functions for deep learning: convolutions, matrix multiplication / fully-connected, and related activation post-processing computations; excels at CNNs, FCNs, and Transformers
EyeQ6H vs. EyeQ5H: 2x in TOPS, but 10x in FPS!
- EyeQ5H: 16 TOPS (int8), 27 W (max)
- EyeQ6H: 34 TOPS (int8), 33 W (max)
[Bar chart: frames per second by neural network (weighted average, pixel labeling, road, multi-object detection); the EyeQ6H bars read 1151, 1062, 975, and 252 FPS, against 126, 91, 82, and 25 FPS for EyeQ5H]
EyeQ6H vs. Orin: It's Not All About TOPS
Theoretical TOPS: 34 TOPS (int8)