Advances and Challenges in Meta-Learning: A Technical Review

Anna Vettoruzzo, Mohamed-Rafik Bouguelia, Joaquin Vanschoren, Thorsteinn Rögnvaldsson, and KC Santosh

© This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

Abstract

Meta-learning empowers learning systems with the ability to acquire knowledge from multiple tasks, enabling faster adaptation and generalization to new tasks. This review provides a comprehensive technical overview of meta-learning, emphasizing its importance in real-world applications where data may be scarce or expensive to obtain. The paper covers the state-of-the-art meta-learning approaches and explores the relationship between meta-learning and multi-task learning, transfer learning, domain adaptation and generalization, self-supervised learning, personalized federated learning, and continual learning. By highlighting the synergies between these topics and the field of meta-learning, the paper demonstrates how advancements in one area can benefit the field as a whole, while avoiding unnecessary duplication of efforts. Additionally, the paper delves into advanced meta-learning topics such as learning from complex multimodal task distributions, unsupervised meta-learning, learning to efficiently adapt to data distribution shifts, and continual meta-learning. Lastly, the paper highlights open problems and challenges for future research in the field. By synthesizing the latest research developments, this paper provides a thorough understanding of meta-learning and its potential impact on various machine learning applications. We believe that this technical overview will contribute to the advancement of meta-learning and its practical implications in addressing real-world problems.

Keywords: Meta-learning, transfer learning, few-shot learning, representation learning, deep neural networks

1 Introduction

Context and motivation

Deep representation learning has revolutionized the field of machine learning by enabling models to learn effective features from data. However, it often requires large amounts of data for solving a specific task, making it impractical in scenarios where data is scarce or costly to obtain. Most existing approaches rely on either supervised learning of a representation tailored to a single task, or unsupervised learning of a representation that captures general features that may not be well-suited to new tasks. Furthermore, learning from scratch for each task is often not feasible, especially in domains such as medicine, robotics, and rare language translation where data availability is limited. To overcome these challenges, meta-learning has emerged as a promising approach. Meta-learning enables models to quickly adapt to new tasks, even with few examples, and generalize across them. While meta-learning shares similarities with transfer learning and multitask learning, it goes beyond these approaches by enabling a learning system to learn how to learn. This capability is particularly valuable in settings where data is scarce, costly to obtain, or where the environment is constantly changing. While humans can rapidly acquire new skills by leveraging prior experience and are therefore considered generalists, most deep learning models are still specialists and are limited to performing well on specific tasks. Meta-learning bridges this gap by enabling models to efficiently adapt to new tasks.

Contribution

This review paper primarily discusses the use of meta-learning techniques in deep neural networks to learn reusable representations, with an emphasis on few-shot learning; it does not cover topics such as AutoML and Neural Architecture Search [1], which are out of scope. Distinct from existing surveys on meta-learning, such as [2, 3, 4, 5], this review paper highlights several key differentiating factors:
• Inclusion of advanced meta-learning topics. In addition to covering fundamental aspects of meta-learning, this review paper delves into advanced topics such as learning from multimodal task distributions, meta-learning without explicit task information, learning without data sharing among clients, adapting to distribution shifts, and continual learning from a stream of tasks. By including these advanced topics, our paper provides a comprehensive understanding of the current state-of-the-art and highlights the challenges and opportunities in these areas.

• Detailed exploration of the relationship with other topics. We not only examine meta-learning techniques but also establish clear connections between meta-learning and related areas, including transfer learning, multitask learning, self-supervised learning, personalized federated learning, and continual learning. This exploration of the relationships and synergies between meta-learning and these important topics provides valuable insights into how meta-learning can be efficiently integrated into broader machine learning frameworks.

• Clear and concise exposition. Recognizing the complexity of meta-learning, this review paper provides a clear and concise explanation of the concepts, techniques, and applications of meta-learning. It is written with the intention of being accessible to a wide range of readers, including both researchers and practitioners. Through intuitive explanations, illustrative examples, and references to seminal works, we facilitate readers' understanding of the foundation of meta-learning and its practical implications.

• Consolidation of key information. As a fast-growing field, meta-learning has information scattered across various sources. This review paper consolidates the most important and relevant information about meta-learning, presenting a comprehensive overview in a single resource. By synthesizing the latest research developments, this survey becomes an indispensable guide for researchers and practitioners seeking a thorough understanding of meta-learning and its potential impact on various machine learning applications.

By highlighting these contributions, this paper complements existing surveys and offers unique insights into the current state and future directions of meta-learning.

Organization

In this paper, we provide the foundations of modern deep learning methods for learning across tasks. To do so, we first define the key concepts and introduce relevant notations used throughout the paper in section 2. Then, we cover the basics of multitask learning and transfer learning and their relation to meta-learning in section 3. In section 4, we present an overview of the current state of meta-learning methods and provide a unified view that allows us to categorize them into three types: black-box meta-learning methods, optimization-based meta-learning methods, and meta-learning methods that are based on distance metric learning [6]. In section 5, we delve into advanced meta-learning topics, explaining the relationship between meta-learning and other important machine learning topics, and addressing issues such as learning from multimodal task distributions, performing meta-learning without provided tasks, learning without sharing data across clients, learning to adapt to distribution shifts, and continual learning from a stream of tasks. Finally, the paper explores the application of meta-learning to real-world problems and provides an overview of the landscape of promising frontiers and yet-to-be-conquered challenges that lie ahead. Section 6 focuses on these challenges, shedding light on the most pressing questions and future research opportunities.

2 Preliminaries

In this section, we introduce some simple notations which will be used throughout the paper and provide a formal definition of the term "task" within the scope of this paper. We use θ (and sometimes also ϕ) to represent the set of parameters (weights) of a deep neural network model. D = {(xj, yj)}_{j=1}^{n} denotes a dataset, where inputs xj are sampled from the distribution p(x) and outputs yj are sampled from p(y|x). The function L(·, ·) denotes a loss function; for example, L(θ, D) represents the loss achieved by the model's parameters θ on the dataset D. The symbol T refers to a task, which is primarily defined by the data-generating distributions p(x) and p(y|x) that define the problem.

In a standard supervised learning scenario, the objective is to optimize the parameters θ by minimizing the loss L(θ, D), where the dataset D is derived from a single task T, and the loss function L depends on that task. Formally, in this setting, a task Ti is a triplet Ti ≜ {pi(x), pi(y|x), Li} that includes task-specific data-generating distributions pi(x) and pi(y|x), as well as a task-specific loss function Li. The goal is to learn a model that performs well on data sampled from task Ti.
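To make this notation concrete, the following toy sketch (our own illustrative example; the Gaussian-cluster construction, dimensions, and helper names are assumptions rather than anything specified in the paper) represents a task Ti by its data-generating distributions together with a loss Li, and samples a small dataset Di from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(num_classes=5, dim=16):
    """A toy task Ti = {pi(x), pi(y|x), Li}: Gaussian class clusters with a cross-entropy loss."""
    means = rng.normal(size=(num_classes, dim))  # fixes the data-generating distributions of Ti

    def sample_dataset(n):
        # Draw (xj, yj) pairs from the task's joint distribution.
        y = rng.integers(0, num_classes, size=n)
        x = means[y] + 0.1 * rng.normal(size=(n, dim))
        return x, y

    def loss(logits, y):
        # Task-specific loss Li: average cross-entropy of the predicted logits.
        logp = logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)
        return -logp[np.arange(len(y)), y].mean()

    return sample_dataset, loss

sample_dataset, loss_i = make_task()
x, y = sample_dataset(n=20)  # a small dataset Di = {(xj, yj)} drawn from task Ti
```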
In a more challenging setting, we consider learning from multiple tasks {Ti}, which involves (a dataset of) multiple datasets {Di}. In this scenario, a set of training tasks is used to learn a model that performs well on test tasks. Depending on the specific setting, a test task can either be sampled from the training tasks or be completely new, never encountered during the training phase.

In general, tasks can differ in various ways depending on the application. For example, in image recognition, different tasks can involve recognizing handwritten digits or alphabets from different languages [7, 8], while in natural language processing, tasks can include sentiment analysis [9, 10], machine translation [11], and chatbot response generation [12, 13, 14]. Tasks in robotics can involve training robots to achieve different goals [15], while in automated feedback generation, tasks can include providing feedback to students on different exams [16]. It is worth noting that tasks can share structures, even if they appear unrelated. For example, the laws of physics underlying real data, the language rules underlying text data, and the intentions of people all share common structures that enable models to transfer knowledge across seemingly unrelated tasks.

3 From multitask and transfer to meta-learning

Meta-learning, multitask learning, and transfer learning encompass different approaches aimed at learning across multiple tasks. Multitask learning aims to improve performance on a set of tasks by learning them simultaneously. Transfer learning fine-tunes a pre-trained model on a new task with limited data. In contrast, meta-learning acquires useful knowledge from past tasks and leverages it to learn new tasks more efficiently. In this section, we transition from discussing "multitask learning" and "transfer learning" to introducing the topic of "meta-learning".

Figure 1: Multitask learning vs transfer learning vs meta-learning.

3.1 Multitask learning problem

As illustrated in Figure 1(A), multitask learning (MTL) trains a model to perform multiple related tasks simultaneously, leveraging shared structure across tasks and improving performance compared to learning each task individually. In this setting, there is no distinction between training and test tasks, and we refer to them as {Ti}. One common approach in MTL is hard parameter sharing, where the model parameters θ are split into shared parameters θsh and task-specific parameters θi. These parameters are learned simultaneously through an objective function that takes the form

min_{θsh, θ1, …, θT} ∑_{i=1}^{T} wi Li({θsh, θi}, Di),

where the weights wi can weigh tasks differently. This approach is often implemented using a multi-headed neural network architecture, where a shared encoder (parameterized by θsh) is responsible for feature extraction. This shared encoder subsequently branches out into task-specific decoding heads (parameterized by θi) dedicated to individual tasks Ti [17, 18, 19].
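As an illustration of hard parameter sharing, the sketch below (a minimal PyTorch example of our own; the layer sizes, number of tasks, and task weights are arbitrary assumptions) uses a shared encoder with one task-specific head per task and optimizes the weighted sum of task losses from the objective above.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared encoder (theta_sh) with one task-specific head (theta_i) per task."""
    def __init__(self, in_dim=16, hidden=64, out_dims=(5, 5, 3)):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, d) for d in out_dims])

    def forward(self, x, task_idx):
        return self.heads[task_idx](self.encoder(x))

model = HardSharingMTL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
task_weights = [1.0, 1.0, 0.5]            # the wi in the objective
loss_fns = [nn.CrossEntropyLoss()] * 3    # task losses Li (here the same loss for each task)

# One optimization step on the weighted multitask objective
# sum_i wi * Li({theta_sh, theta_i}, Di), with toy batches standing in for the Di.
datasets = [(torch.randn(32, 16), torch.randint(0, d, (32,))) for d in (5, 5, 3)]
total = sum(w * loss_fns[i](model(x, i), y)
            for i, (w, (x, y)) in enumerate(zip(task_weights, datasets)))
opt.zero_grad(); total.backward(); opt.step()
```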
Soft parameter sharing is another approach in MTL that encourages parameter similarity across task-specific models using regularization penalties [20, 21, 22]. In this approach, each task typically has its own model with its own set of parameters θi, while the shared parameter set θsh can be empty. The objective function is similar to that of hard parameter sharing, but with an additional regularization term that controls the strength of parameter sharing across tasks. The strength of regularization is determined by the hyperparameter λ. In the case of L2 regularization, the objective function is given by

min_{θsh, θ1, …, θT} ∑_{i=1}^{T} wi Li({θsh, θi}, Di) + λ ∑_{i=1}^{T} ∑_{i'=1}^{T} ||θi − θi'||^2.

However, soft parameter sharing can be more memory-intensive as separate sets of parameters are stored for each task, and it requires additional design decisions and hyperparameters.

Another approach to sharing parameters is to condition a single model on a task descriptor zi that contains task-specific information used to modulate the network's computation. The task descriptor zi can be a simple one-hot encoding of the task index or a more complex task specification, such as a language description or user attributes. When a task descriptor is provided, it is used to modulate the weights of the shared network with respect to the task at hand. Through this modulation mechanism, the significance of the shared features is determined based on the particular task, enabling the learning of both shared and task-specific features in a flexible manner. Such an approach grants fine-grained control over the adjustment of the network's representation, tailoring it to each individual task. Various methods for conditioning the model on the task descriptor are described in [23]. More complex methods are also provided in [24, 25, 26].

Choosing the appropriate approach for parameter sharing, determining the level of the network architecture at which to share parameters, and deciding on the degree of parameter sharing across tasks are all design decisions that depend on the problem at hand. Currently, these decisions rely on intuition and knowledge of the problem, making them more of an art than a science, similar to the process of tuning neural network architectures. Moreover, multitask learning presents several challenges, such as determining which tasks are complementary, particularly in scenarios with a large number of tasks, as in [27]. Interested readers can find a more comprehensive discussion of multitask learning in [28, 29].

In summary, multitask learning aims to learn a set of T tasks {Ti} at once. Even though the model can generalize to new data from these T tasks, it might not be able to handle a completely new task that it has not been trained on. This is where transfer learning and meta-learning become more relevant.

3.2 Transfer learning via fine-tuning

Transfer learning is a valuable technique that allows a model to leverage representations learned from one or more source tasks to solve a target task. As illustrated in Figure 1(B), the main goal is to use the knowledge learned from the source task(s) Ta to improve the performance of the model on a new task, usually referred to as the target task Tb, especially when the target task dataset Db is limited. In practice, the source task data Da is often inaccessible, either because it is too expensive to obtain or too large to store.

One common approach for transfer learning is fine-tuning, which involves starting with a model that has been pre-trained on the source task dataset Da. The parameters of the pre-trained model, denoted as θ, are then fine-tuned on the training data Db from the target task Tb using gradient descent or any other optimizer for several optimization steps. An example of the fine-tuning process for one gradient descent step is expressed as follows:

ϕ ← θ − α ∇θ L(θ, Db),

where ϕ denotes the parameters fine-tuned for task Tb, and α is the learning rate. Models with pre-trained parameters θ are often available online, including models pre-trained on large datasets such as ImageNet for image classification [30] and language models like BERT [31], PaLM [32], LLaMA [33], and GPT-4 [34], trained on large text corpora. Models pre-trained on other large and diverse datasets or using unsupervised learning techniques, as discussed in section 5.3, can also be used as a starting point for fine-tuning.

However, as discussed in [35], it is crucial to avoid destroying initialized features when fine-tuning. Some design choices, such as using a smaller learning rate for earlier layers, freezing earlier layers and gradually unfreezing, or re-initializing the last layer, can help to prevent this issue. Recent studies such as [36] show that fine-tuning the first or middle layers can sometimes work better than fine-tuning the last layers, while others recommend a two-step process of training the last layer first and then fine-tuning the entire network [35].
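These design choices can be made concrete with a short fine-tuning sketch (our own minimal PyTorch example, assuming torchvision is available; the ResNet-18 backbone, the 10-class target task, and the learning rates are illustrative assumptions). It re-initializes the last layer for the target task Tb and uses a smaller learning rate for the pre-trained layers so that the initialized features are not destroyed.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from parameters theta pre-trained on a large source dataset (here ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Re-initialize the last layer for the target task Tb (assume 10 target classes).
model.fc = nn.Linear(model.fc.in_features, 10)

# Smaller learning rate for earlier (pre-trained) layers, larger for the new head.
backbone_params = [p for name, p in model.named_parameters() if not name.startswith("fc")]
optimizer = torch.optim.SGD([
    {"params": backbone_params, "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-2},
], momentum=0.9)

loss_fn = nn.CrossEntropyLoss()

def finetune_step(x_b, y_b):
    """One fine-tuning step on target-task data Db: phi <- theta - alpha * grad L(theta, Db)."""
    optimizer.zero_grad()
    loss = loss_fn(model(x_b), y_b)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch standing in for Db.
finetune_step(torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,)))
```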
More advanced approaches, such as STILTs [37], propose an intermediate step of further training the model on a labeled task with abundant data to mitigate the potential degradation of pre-trained features. In [38], it was demonstrated that transfer learning via fine-tuning may not always be effective, particularly when the target task dataset is very small or very different from the source tasks. To investigate this, the authors fine-tuned a pre-trained universal language model on specific text corpora corresponding to new tasks using varying numbers of training examples. Their results showed that starting with a pre-trained model outperformed training from scratch on the new task. However, when the size of the new task dataset was very small, fine-tuning on such a limited number of examples led to poor generalization performance. To address this issue, meta-learning can be used to learn a model that can effectively adapt to new tasks with limited data by leveraging prior knowledge from other tasks. In fact, meta-learning is particularly useful for learning new tasks from very few examples, and we will discuss it in more detail in the remainder of this paper.

3.3 Meta-learning problem

Meta-learning (or learning to learn) is a field that aims to surpass the limitations of traditional transfer learning by adopting a more sophisticated approach that explicitly optimizes for transferability. As discussed in section 3.2, traditional transfer learning involves pre-training a model on source tasks and fine-tuning it for a new task. In contrast, meta-learning trains a network to efficiently learn or adapt to new tasks with only a few examples. Figure 1(C) illustrates this approach, where at meta-training time we learn to learn tasks, and at meta-test time we learn a new task efficiently.

During the meta-training phase, prior knowledge enabling efficient learning of new tasks is extracted from a set of training tasks {Ti}. This is achieved by using a meta-dataset consisting of multiple datasets {Di}, each corresponding to a different training task. At meta-test time, a small training dataset Dnew is observed from a completely new task Tnew and used in conjunction with the prior knowledge to infer the most likely posterior parameters. As in transfer learning, accessing prior tasks at meta-test time is impractical. Although the datasets {Di} come from different data distributions (since they come from different tasks {Ti}), it is assumed that the tasks themselves (both for training and testing) are drawn i.i.d. from an underlying task distribution p(T), implying some similarities in the task structure. This assumption ensures the effectiveness of meta-learning frameworks even when faced with limited labeled data. Moreover, the more tasks that are available for meta-training, the better the model can learn to adapt to new tasks, just as having more data improves performance in traditional machine learning.

In the next section, we provide a more formal definition of meta-learning and various approaches to it.

4 Meta-learning methods

To gain a unified understanding of the meta-learning problem, we can draw an analogy to the standard supervised learning setting. In the latter, the goal is to learn a set of parameters ϕ for a base model hϕ (e.g., a neural network parametrized by ϕ), which maps input data x ∈ X to the corresponding output y ∈ Y as follows:

hϕ : X → Y,  x ↦ y = hϕ(x).    (1)

To accomplish this, a typically large training dataset D = {(xj, yj)}_{j=1}^{n} specific to a particular task T is used to learn ϕ.

In the meta-learning setting, the objective is to learn prior knowledge, which consists of a set of meta-parameters θ, for a procedure Fθ(D^tr, x^ts). This procedure uses θ to efficiently learn from (or adapt to) a small training dataset D^tr = {(xk, yk)}_{k=1}^{K} from a task Ti, and then make accurate predictions on unlabeled test data x^ts from the same task Ti. As we will see in the following sections, Fθ is typically composed of two functions: (1) a meta-learner fθ(·) that produces task-specific parameters ϕi ∈ Φ from D^tr ∈ X^K, and (2) a base model hϕi(·) that predicts outputs corresponding to the data in x^ts:

fθ : X^K → Φ,  D^tr ↦ ϕi = fθ(D^tr);   hϕi : X → Y,  x ↦ y = hϕi(x).    (2)
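The decomposition of Fθ into a meta-learner fθ and a base model hϕ can also be read as an interface. The sketch below (an illustrative Python skeleton of our own; the class and method names are not taken from the paper) separates adaptation, which maps a small support set D^tr to task-specific parameters ϕi, from prediction, which applies hϕi to test inputs x^ts.

```python
import torch.nn as nn

class MetaLearner(nn.Module):
    """F_theta(D_tr, x_ts): adapt on a small support set, then predict on query inputs."""
    def __init__(self, meta_learner: nn.Module, base_model):
        super().__init__()
        self.f_theta = meta_learner   # meta-learner f_theta, holding the meta-parameters theta
        self.base_model = base_model  # functional base model h: (phi_i, x) -> y

    def adapt(self, x_tr, y_tr):
        # phi_i = f_theta(D_tr): produce task-specific parameters from the support set.
        return self.f_theta(x_tr, y_tr)

    def forward(self, x_tr, y_tr, x_ts):
        phi_i = self.adapt(x_tr, y_tr)
        return self.base_model(phi_i, x_ts)  # y_ts = h_{phi_i}(x_ts)
```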
Note that the process of obtaining task-specific parameters ϕi = fθ(D^tr) is often referred to as "adaptation" in the literature, as it adapts to the task Ti using a small amount of data while leveraging the prior knowledge summarized in θ. The objective of meta-training is to learn the set of meta-parameters θ. This is accomplished by using a meta-dataset {Di}, which consists of a dataset of datasets, where each dataset Di = {(xj, yj)}_{j=1}^{n} is specific to a task Ti.

The unified view of meta-learning presented here is beneficial because it simplifies the meta-learning problem by reducing it to the design and optimization of Fθ. Moreover, it facilitates the categorization of the various meta-learning approaches into three categories: black-box meta-learning methods, optimization-based meta-learning methods, and distance metric-based meta-learning methods (as discussed in [6]). An overview of these categories is provided in the subsequent sections.

4.1 Black-box meta-learning methods

Black-box meta-learning methods represent fθ as a black-box neural network that takes the entire training dataset D^tr and predicts task-specific parameters ϕi. These parameters are then used to parameterize the base network hϕ and make predictions for test data-points, y^ts = hϕ(x^ts). The architecture of this approach is shown in Figure 2. The meta-parameters θ are optimized as shown in Equation 3, and a general algorithm for these kinds of black-box methods is outlined in Algorithm 1.

Figure 2: Black-box meta-learning.

min_θ ∑_{Ti} L(ϕi, Di^ts),  where ϕi = fθ(Di^tr).    (3)

Algorithm 1: Black-box meta-learning
1: Randomly initialize θ
2: while not done do
3:   Sample a task Ti ∼ p(T) (or a mini-batch of tasks)
4:   Sample disjoint datasets Di^tr, Di^ts from Ti
5:   Compute ϕi = fθ(Di^tr)
6:   Update θ using ∇θ L(ϕi, Di^ts)
7: end while
8: return θ

However, this approach faces a major challenge: outputting all the parameters ϕi of the base network hϕ is not scalable and is impractical for large-scale models. To overcome this issue, black-box meta-learning methods, such as MANN [39] and SNAIL [40], only output sufficient statistics instead of the complete set of parameters of the base network. These methods allow fθ to output a low-dimensional vector zi that encodes contextual task information, rather than a full set of parameters ϕi. In this case, ϕi consists of {zi, θh}, where θh denotes the trainable parameters of the network hϕ. The base network hϕ is modulated with task descriptors by using the various techniques for conditioning on task descriptors discussed in section 3.1.

Several black-box meta-learning methods adopt different neural network architectures to represent fθ. For instance, methods described in [39] use LSTMs or architectures with augmented memory capacities, such as Neural Turing Machines, while others, like Meta Networks [41], employ external memory mechanisms. SNAIL [40] defines meta-learner architectures that leverage temporal convolutions to aggregate information from past experience and attention mechanisms to pinpoint specific pieces of information. Alternatively, some methods, such as the one proposed in [42], use a feedforward-plus-averaging strategy. The latter feeds each data-point in D^tr = {(xj, yj)} through a neural network to produce a representation rj for each data-point, and then averages these representations to create a task representation zi = (1/K) ∑_j rj. This strategy may be more effective than using a recurrent model such as an LSTM, as it does not rely on the assumption of temporal relationships between data-points in D^tr.

Black-box meta-learning methods are expressive, versatile, and easy to combine with various learning problems, including classification, regression, and reinforcement learning. However, they require complex architectures for the meta-learner fθ, making them computationally demanding and data-inefficient. As an alternative, one can represent ϕi = fθ(D^tr) as an optimization procedure instead of a neural network. The next section discusses these optimization-based meta-learning methods.

Figure 3: Optimization-based meta-learning with gradient-based optimization.
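Before moving on to optimization-based methods, the black-box recipe can be made concrete with a toy example. The sketch below (our own PyTorch illustration in the spirit of the feedforward-plus-averaging strategy; the architecture sizes, conditioning by concatenation, and training details are assumptions, not the exact setup of [42]) encodes the support set into a task vector zi, conditions the base network on zi, and performs one meta-training update following Algorithm 1.

```python
import torch
import torch.nn as nn

class BlackBoxMetaLearner(nn.Module):
    """f_theta: encode each (x, y) in D_tr and average into z_i; h_phi: predict conditioned on z_i."""
    def __init__(self, x_dim=16, n_classes=5, z_dim=32, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim + n_classes, hidden), nn.ReLU(),
                                     nn.Linear(hidden, z_dim))
        self.base = nn.Sequential(nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_classes))
        self.n_classes = n_classes

    def task_embedding(self, x_tr, y_tr):
        y_onehot = nn.functional.one_hot(y_tr, self.n_classes).float()
        r = self.encoder(torch.cat([x_tr, y_onehot], dim=-1))  # r_j for each support point
        return r.mean(dim=0)                                    # z_i = (1/K) sum_j r_j

    def forward(self, x_tr, y_tr, x_ts):
        z_i = self.task_embedding(x_tr, y_tr)
        z = z_i.expand(x_ts.size(0), -1)
        return self.base(torch.cat([x_ts, z], dim=-1))          # y_ts = h_{z_i, theta_h}(x_ts)

model = BlackBoxMetaLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One meta-training step (Algorithm 1, steps 3-6) on a toy task:
x_tr, y_tr = torch.randn(5, 16), torch.randint(0, 5, (5,))    # D_i^tr
x_ts, y_ts = torch.randn(15, 16), torch.randint(0, 5, (15,))  # D_i^ts
loss = loss_fn(model(x_tr, y_tr, x_ts), y_ts)                 # L(phi_i, D_i^ts)
opt.zero_grad(); loss.backward(); opt.step()                  # update theta
```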