深度学习之模型优化 (Deep Learning Model Optimization): Extended Reference Materials

Chapter 3

[1] Wang Z J, Turko R, Shaikh O, et al. CNN Explainer: Learning convolutional neural networks with interactive visualization[J]. IEEE Transactions on Visualization and Computer Graphics, 2020, 27(2): 1396-1406.
[2] Bau D, Zhou B, Khosla A, et al. Network dissection: Quantifying interpretability of deep visual representations[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 6541-6549.
[3] Smilkov D, Thorat N, Kim B, et al. SmoothGrad: Removing noise by adding noise[J]. arXiv preprint arXiv:1706.03825, 2017.
[4] Sundararajan M, Taly A, Yan Q. Axiomatic attribution for deep networks[C]//International Conference on Machine Learning. PMLR, 2017: 3319-3328.
[5] Springenberg J T, Dosovitskiy A, Brox T, et al. Striving for simplicity: The all convolutional net[J]. arXiv preprint arXiv:1412.6806, 2014.
[6] Zhou B, Khosla A, Lapedriza A, et al. Learning deep features for discriminative localization[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2921-2929.
[7] Selvaraju R R, Cogswell M, Das A, et al. Grad-CAM: Visual explanations from deep networks via gradient-based localization[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 618-626.
[8] Erhan D, Bengio Y, Courville A, et al. Visualizing higher-layer features of a deep network[J]. University of Montreal, 2009, 1341(3): 1.
[9] Simonyan K, Vedaldi A, Zisserman A. Deep inside convolutional networks: Visualising image classification models and saliency maps[J]. arXiv preprint arXiv:1312.6034, 2013.
[10] Mahendran A, Vedaldi A. Understanding deep image representations by inverting them[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 5188-5196.
[11] Yosinski J, Clune J, Nguyen A, et al. Understanding neural networks through deep visualization[J]. arXiv preprint arXiv:1506.06579, 2015.
[12] Mordvintsev A, Olah C, Tyka M. Inceptionism: Going deeper into neural networks[J/OL]. /2015/06/inceptionism-going-deeper-into-neural.html, 2015.
[13] Wei D, et al. Understanding intra-class knowledge inside CNN[J]. arXiv preprint arXiv:1507.02379, 2015.
[14] Nguyen A, Dosovitskiy A, Yosinski J, et al. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks[J]. Advances in Neural Information Processing Systems, 2016, 29.
[15] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[C]//European Conference on Computer Vision. Springer, Cham, 2014: 818-833.
[16] Dosovitskiy A, Brox T. Inverting visual representations with convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 4829-4837.
[17] Qin Z, Yu F, Liu C, et al. How convolutional neural networks see the world - A survey of convolutional neural network visualization methods[J]. arXiv preprint arXiv:1804.11191, 2018.
[18] /utkuozbulak/pytorch-cnn-visualizations

Chapter 4

[1] Lin M, Chen Q, Yan S. Network in network[J]. arXiv preprint arXiv:1312.4400, 2013.
[2] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks[C]//European Conference on Computer Vision. Springer, Cham, 2014: 818-833.
[3] Iandola F N, Han S, Moskewicz M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size[J]. arXiv preprint arXiv:1602.07360, 2016.
[4] Jin J, Dundar A, Culurciello E. Flattened convolutional neural networks for feedforward acceleration[J]. arXiv preprint arXiv:1412.5474, 2014.
[5] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2818-2826.
[6] Sifre L, Mallat S. Rigid-motion scattering for texture classification[J]. arXiv preprint arXiv:1403.1687, 2014.
[7] Chollet F. Xception: Deep learning with depthwise separable convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1251-1258.
[8] Howard A G, Zhu M, Chen B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[J]. arXiv preprint arXiv:1704.04861, 2017.
[9] Sandler M, Howard A, Zhu M, et al. MobileNetV2: Inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 4510-4520.
[10] Zhang X, Zhou X, Lin M, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 6848-6856.
[11] Ma N, Zhang X, Zheng H T, et al. ShuffleNet V2: Practical guidelines for efficient CNN architecture design[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 116-131.
[12] Zhang T, Qi G J, Xiao B, et al. Interleaved group convolutions[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 4373-4382.
[13] Xie G, Wang J, Zhang T, et al. Interleaved structured sparse convolutional neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8847-8856.
[14] Sun K, Li M, Liu D, et al. IGCV3: Interleaved low-rank group convolutions for efficient deep neural networks[J]. arXiv preprint arXiv:1806.00178, 2018.
[15] Tan M, Le Q V. MixNet: Mixed depthwise convolutional kernels[J]. arXiv preprint arXiv:1907.09595, 2019.
[16] Chen C F, Fan Q, Mallinar N, et al. Big-Little Net: An efficient multi-scale feature representation for visual and speech recognition[J]. arXiv preprint arXiv:1807.03848, 2018.
[17] Chen Y, Fang H, Xu B, et al. Drop an octave: Reducing spatial redundancy in convolutional neural networks with octave convolution[J]. arXiv preprint arXiv:1904.05049, 2019.
[18] Gennari M, Fawcett R, Prisacariu V A. DSConv: Efficient convolution operator[J]. arXiv preprint arXiv:1901.01928, 2019.
[19] Shang W, Sohn K, Almeida D, et al. Understanding and improving convolutional neural networks via concatenated rectified linear units[C]//International Conference on Machine Learning. PMLR, 2016: 2217-2225.
[20] Han K, Wang Y, Tian Q, et al. GhostNet: More features from cheap operations[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 1580-1589.
[21] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4700-4708.
[22] Jin X, Yang Y, Xu N, et al. WSNet: Compact and efficient networks through weight sampling[C]//International Conference on Machine Learning. PMLR, 2018: 2352-2361.
[23] Zhou D, Jin X, Wang K, et al. Deep model compression via filter auto-sampling[J]. arXiv preprint arXiv:1907.05642, 2019.
[24] Huang G, Sun Y, Liu Z, et al. Deep networks with stochastic depth[C]//European Conference on Computer Vision. Springer, Cham, 2016: 646-661.
[25] Veit A, Wilber M J, Belongie S. Residual networks behave like ensembles of relatively shallow networks[C]//Advances in Neural Information Processing Systems. 2016: 550-558.
[26] Teerapittayanon S, McDanel B, Kung H T. BranchyNet: Fast inference via early exiting from deep neural networks[C]//2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016: 2464-2469.
[27] Figurnov M, Collins M D, Zhu Y, et al. Spatially adaptive computation time for residual networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1039-1048.
[28] Wu Z, Nagarajan T, Kumar A, et al. BlockDrop: Dynamic inference paths in residual networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8817-8826.
[29] Wang X, Yu F, Dou Z Y, et al. SkipNet: Learning dynamic routing in convolutional networks[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 409-424.
[30] Almahairi A, Ballas N, Cooijmans T, et al. Dynamic capacity networks[C]//International Conference on Machine Learning. PMLR, 2016: 2549-2558.
[31] Lin J, Rao Y, Lu J, et al. Runtime neural pruning[C]//Advances in Neural Information Processing Systems. 2017: 2181-2191.
[32] Wu B, Wan A, Yue X, et al. Shift: A zero FLOP, zero parameter alternative to spatial convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9127-9135.
[33] Chen W, Xie D, Zhang Y, et al. All you need is a few shifts: Designing efficient convolutional neural networks for image classification[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 7241-7250.
[34] He Y, Liu X, Zhong H, et al. AddressNet: Shift-based primitives for efficient convolutional neural networks[C]//2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019: 1213-1222.
[35] Chen H, Wang Y, Xu C, et al. AdderNet: Do we really need multiplications in deep learning?[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 1468-1477.
[36] You H, Chen X, Zhang Y, et al. ShiftAddNet: A hardware-inspired deep network[J]. Advances in Neural Information Processing Systems, 2020, 33: 2771-2783.
[37] Li D, Wang X, Kong D. DeepRebirth: Accelerating deep neural network execution on mobile devices[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2018, 32(1).
[38] Ding X, Zhang X, Ma N, et al. RepVGG: Making VGG-style ConvNets great again[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 13733-13742.
[39] Li D, Hu J, Wang C, et al. Involution: Inverting the inherence of convolution for visual recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 12321-12330.
[40] Denton E L, Zaremba W, Bruna J, et al. Exploiting linear structure within convolutional networks for efficient evaluation[C]//Advances in Neural Information Processing Systems. 2014: 1269-1277.

Chapter 5

[1] Wen W, Wu C, Wang Y, et al. Learning structured sparsity in deep neural networks[C]//Advances in Neural Information Processing Systems. 2016: 2082-2090.
[2] Liu Z, Li J, Shen Z, et al. Learning efficient convolutional networks through network slimming[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 2736-2744.
[3] Luo J H, Wu J. AutoPruner: An end-to-end trainable filter pruning method for efficient deep model inference[J]. Pattern Recognition, 2020, 107: 107461.
[4] Huang Z, Wang N. Data-driven sparse structure selection for deep neural networks[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2018: 304-320.
[5] LeCun Y, Denker J S, Solla S A. Optimal brain damage[C]//Advances in Neural Information Processing Systems. 1990: 598-605.
[6] Lee N, Ajanthan T, Torr P H S. SNIP: Single-shot network pruning based on connection sensitivity[J]. arXiv preprint arXiv:1810.02340, 2018.
[7] Han S, Pool J, Tran J, et al. Learning both weights and connections for efficient neural network[C]//Advances in Neural Information Processing Systems. 2015: 1135-1143.
[8] Guo Y, Yao A, Chen Y. Dynamic network surgery for efficient DNNs[C]//Advances in Neural Information Processing Systems. 2016: 1379-1387.
[9] Anwar S, Hwang K, Sung W. Structured pruning of deep convolutional neural networks[J]. ACM Journal on Emerging Technologies in Computing Systems (JETC), 2017, 13(3): 1-18.
[10] Li H, Kadav A, Durdanovic I, et al. Pruning filters for efficient ConvNets[J]. arXiv preprint arXiv:1608.08710, 2016.
[11] Hu H, Peng R, Tai Y W, et al. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures[J]. arXiv preprint arXiv:1607.03250, 2016.
[12] He Y, Liu P, Wang Z, et al. Filter pruning via geometric median for deep convolutional neural networks acceleration[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 4340-4349.
[13] He Y, Zhang X, Sun J. Channel pruning for accelerating very deep neural networks[C]//Proceedings of the IEEE International Conference on Computer Vision. 2017: 1389-1397.
[14] Luo J H, Zhang H, Zhou H Y, et al. ThiNet: Pruning CNN filters for a thinner net[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(10): 2525-2538.
[15] Molchanov P, Tyree S, Karras T, et al. Pruning convolutional neural networks for resource efficient inference[J]. arXiv preprint arXiv:1611.06440, 2016.
[16] Zhuang Z, Tan M, Zhuang B, et al. Discrimination-aware channel pruning for deep neural networks[C]//Advances in Neural Information Processing Systems. 2018: 883-894.
[17] Liu Z, Sun M, Zhou T, et al. Rethinking the value of network pruning[J]. arXiv preprint arXiv:1810.05270, 2018.
[18] Zhu M, Gupta S. To prune, or not to prune: Exploring the efficacy of pruning for model compression[J]. arXiv preprint arXiv:1710.01878, 2017.
[19] Yu R, Li A, Chen C F, et al. NISP: Pruning networks using neuron importance score propagation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9194-9203.
[20] Lin J, Rao Y, Lu J, et al. Runtime neural pruning[C]//Advances in Neural Information Processing Systems. 2017: 2181-2191.

Chapter 6

[1] Courbariaux M, Bengio Y, David J, et al. BinaryConnect: Training deep neural networks with binary weights during propagations[C]//Advances in Neural Information Processing Systems. 2015: 3123-3131.
[2] Courbariaux M, Hubara I, Soudry D, et al. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1[J]. arXiv preprint arXiv:1602.02830, 2016.
[3] Liu Z, Wu B, Luo W, et al. Bi-Real Net: Enhancing the performance of 1-bit CNNs with improved representational capability and advanced training algorithm[C]//European Conference on Computer Vision. 2018: 747-763.
[4] Rastegari M, Ordonez V, Redmon J, et al. XNOR-Net: ImageNet classification using binary convolutional neural networks[C]//European Conference on Computer Vision. Springer, Cham, 2016: 525-542.
[5] Bulat A, Tzimiropoulos G. XNOR-Net++: Improved binary neural networks[J]. arXiv preprint arXiv:1909.13863, 2019.
[6] Li F, Zhang B, Liu B. Ternary weight networks[J]. arXiv preprint arXiv:1605.04711, 2016.
[7] Zhu C, Han S, Mao H, et al. Trained ternary quantization[J]. arXiv preprint arXiv:1612.01064, 2016.
[8] Ding R, Chin T, Liu Z, et al. Regularizing activation distribution for training binarized deep networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 11408-11417.
[9] Darabi S, Belbahri M, Courbariaux M, et al. Regularized binary network training[J]. arXiv preprint arXiv:1812.11800, 2018.
[10] Bulat A, Tzimiropoulos G, Kossaifi J, et al. Improved training of binary networks for human pose estimation and image recognition[J]. arXiv preprint arXiv:1904.05868, 2019.
[11] Martinez B, Yang J, Bulat A, et al. Training binary neural networks with real-to-binary convolutions[J]. arXiv preprint arXiv:2003.11535, 2020.
[12] Liu Z, Shen Z, Savvides M, et al. ReActNet: Towards precise binary neural network with generalized activation functions[C]//Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XIV 16. Springer International Publishing, 2020: 143-159.
[13] Zhang Y, Pan J, Liu X, et al. FracBNN: Accurate and FPGA-efficient binary neural networks with fractional activations[C]//The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. 2021: 171-182.
[14] Zhang Y, Zhang Z, Lew L. PokeBNN: A binary pursuit of lightweight accuracy[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 12475-12485.
[15] Guo N, Bethge J, Meinel C, et al. Join the high accuracy club on ImageNet with a binary neural network ticket[J]. arXiv preprint arXiv:2211.12933, 2022.
[16] Jacob B, Kligys S, Chen B, et al. Quantization and training of neural networks for efficient integer-arithmetic-only inference[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 2704-2713.
[17] Hubara I, Courbariaux M, Soudry D, et al. Quantized neural networks: Training neural networks with low precision weights and activations[J]. The Journal of Machine Learning Research, 2017, 18(1): 6869-6898.
[18] Zhou S, Wu Y, Ni Z, et al. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients[J]. arXiv: Neural and Evolutionary Computing, 2016.
[19] Wang K, Liu Z, Lin Y, et al. HAQ: Hardware-aware automated quantization with mixed precision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 8612-8620.
[20] Micikevicius P, Narang S, Alben J, et al. Mixed precision training[J]. arXiv preprint arXiv:1710.03740, 2017.
[21] Han S, Mao H, Dally W J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding[J]. arXiv preprint arXiv:1510.00149, 2015.
[22] Zhang D, Yang J, Ye D, et al. LQ-Nets: Learned quantization for highly accurate and compact deep neural networks[C]//European Conference on Computer Vision. 2018: 373-390.
[23] Choi J, Wang Z, Venkataramani S, et al. PACT: Parameterized clipping activation for quantized neural networks[J]. arXiv preprint arXiv:1805.06085, 2018.
[24] Zhou A, Yao A, Guo Y, et al. Incremental network quantization: Towards lossless CNNs with low-precision weights[J]. arXiv preprint arXiv:1702.03044, 2017.
[25] Zhu F, Gong R, Yu F, et al. Towards unified INT8 training for convolutional neural network[J]. arXiv: Learning, 2019.

Chapter 7

[1] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network[J]. arXiv preprint arXiv:1503.02531, 2015, 2(7).
[2] Xu Z, Hsu Y, Huang J, et al. Training shallow and thin networks for acceleration via knowledge distillation with conditional adversarial networks[J]. arXiv: Learning, 2017.
[3] Ravi S. ProjectionNet: Learning efficient on-device deep networks using neural projections[J]. arXiv preprint arXiv:1708.00630, 2017.
[4] Romero A, Ballas N, Kahou S E, et al. FitNets: Hints for thin deep nets[J]. arXiv preprint arXiv:1412.6550, 2014.

Chapter 8

[1] Cubuk E D, Zoph B, Mane D, et al. AutoAugment: Learning augmentation policies from data[J]. arXiv: Computer Vision and Pattern Recognition, 2018.
[2] Zoph B, Cubuk E D, Ghiasi G, et al. Learning data augmentation strategies for object detection[J]. arXiv preprint arXiv:1906.11172, 2019.
[3] Ho D, Liang E, Stoica I, et al. Population based augmentation: Efficient learning of augmentation policy schedules[J]. arXiv preprint arXiv:1905.05393, 2019.
[4] Eger S, Youssef P, Gurevych I. Is it time to swish? Comparing deep learning activation functions across NLP tasks[J]. arXiv preprint arXiv:1901.02671, 2019.
[5] Luo P, Ren J, Peng Z, et al. Differentiable learning-to-normalize via switchable normalization[J]. arXiv preprint arXiv:1806.10779, 2018.
[6] Bello I, Zoph B, Vasudevan V, et al. Neural optimizer search with reinforcement learning[C]//International Conference on Machine Learning. PMLR, 2017: 459-468.
[7] Tan M, Le Q V. EfficientNet: Rethinking model scaling for convolutional neural networks[C]//International Conference on Machine Learning. 2019: 6105-6114.
[8] Tan M, Le Q V. MixNet: Mixed depthwise convolutional kernels[J]. arXiv preprint arXiv:1907.09595, 2019.
[9] Zoph B, Le Q V. Neural architecture search with reinforcement learning[J]. International Conference on Learning Representations, 2017.
[10] Zoph B, Vasudevan V, Shlens J, et al. Learning transferable architectures for scalable image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 8697-8710.
[11] Tan M, Che
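The Chapter 4 entries, [6] through [9] in particular, revolve around depthwise separable convolutions. As a hedged illustration (this sketch is not from the book; the layer sizes are made up), the standard parameter-count comparison behind MobileNets can be checked with a few lines of arithmetic:

```python
# Parameter counts for a standard vs. a depthwise separable convolution.
# k: square kernel size, c_in / c_out: channel counts (illustrative values only).

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # A standard conv layer holds one k x k filter per (input, output) channel pair.
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution mixing c_in channels into c_out.
    return k * k * c_in + c_in * c_out

if __name__ == "__main__":
    k, c_in, c_out = 3, 128, 256
    std = standard_conv_params(k, c_in, c_out)
    sep = depthwise_separable_params(k, c_in, c_out)
    print(std, sep, round(std / sep, 1))  # → 294912 33920 8.7
```

With a 3x3 kernel the separable factorization is roughly 8-9x cheaper, which matches the 1/c_out + 1/k^2 reduction factor usually quoted for these architectures.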
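The Chapter 5 list centers on pruning; entry [7] (Han et al.) popularized magnitude-based weight pruning. A minimal sketch of the thresholding rule only, with made-up weights and none of the authors' train-prune-retrain pipeline:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` fraction are zero."""
    n_prune = int(len(weights) * sparsity)
    # Rank positions by absolute weight value; the n_prune smallest are removed.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    keep = set(order[n_prune:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

if __name__ == "__main__":
    w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
    print(magnitude_prune(w, 0.5))  # → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In practice this is applied per layer (or globally) to full weight tensors, and the surviving weights are fine-tuned to recover accuracy; the list-based version here only shows the selection criterion.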
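Chapter 6 collects quantization work; entry [16] (Jacob et al.) describes the affine 8-bit scheme behind integer-arithmetic-only inference. A hedged sketch of the basic quantize/dequantize round trip (asymmetric, per-tensor; the input range and test value are illustrative, not from the paper):

```python
def quantize_params(x_min, x_max, n_bits=8):
    """Scale and zero point for affine quantization of the real range [x_min, x_max]."""
    qmax = 2 ** n_bits - 1
    scale = (x_max - x_min) / qmax
    zero_point = round(-x_min / scale)  # integer that represents real 0.0
    return scale, zero_point

def quantize(x, scale, zero_point, n_bits=8):
    qmax = 2 ** n_bits - 1
    q = round(x / scale) + zero_point
    return max(0, min(qmax, q))  # clamp onto the integer grid

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

if __name__ == "__main__":
    scale, zp = quantize_params(-1.0, 1.0)
    q = quantize(0.5, scale, zp)
    print(q, dequantize(q, scale, zp))  # reconstruction error is at most scale / 2
```

The round trip maps 0.5 to an integer near the top quarter of the uint8 range and back to a value within half a quantization step of the original, which is the error bound this scheme guarantees for in-range inputs.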
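Chapter 7 lists knowledge distillation work; entry [1] (Hinton et al.) defines soft targets via a temperature-scaled softmax. A minimal sketch of the softening step and the KL term of the distillation loss, with illustrative logits and a temperature chosen here for demonstration:

```python
import math

def softmax_t(logits, temperature):
    """Temperature-scaled softmax: higher temperature spreads probability mass."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): distance from the teacher's soft targets p to the student's q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

if __name__ == "__main__":
    teacher_logits = [4.0, 1.0, 0.2]
    student_logits = [3.0, 1.5, 0.1]
    temperature = 4.0  # a common choice in practice, not prescribed by the text
    p = softmax_t(teacher_logits, temperature)
    q = softmax_t(student_logits, temperature)
    print(kl_divergence(p, q))  # the soft-target term of the distillation loss
```

The full training objective combines this soft-target term (usually weighted by the squared temperature) with the ordinary cross-entropy against the hard labels; only the soft-target term is shown here.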
