SHANGHAI JIAO TONG UNIVERSITY
Project Title: Playing the Game of Flappy Bird with Deep Reinforcement Learning
Group Number: G-07
Group Members: Wang Wenqing, Gao Xiaoning

…
        end for
        Every C steps reset Q̂ := Q
    end for
end for

Experiments

This section describes our algorithm's parameter settings and the analysis of the experimental results.

Parameters Settings

Figure 6 illustrates our CNN's layer settings. The neural network has 3 convolutional hidden layers followed by 2 fully connected hidden layers, and Table 1 gives the detailed parameters of every layer. We use max pooling only after the first convolutional layer, and the ReLU activation function to produce the neural output.

Figure 6: The layer setting of the CNN: this CNN has 3 convolutional layers followed by 2 fully connected layers.

As for training, we use the Adam optimizer to update the CNN's parameters.

Table 1: The detailed layer settings of the CNN

Layer      Input       Filter size   Stride   Num filters/units   Activation   Output
conv1      80×80×4     8×8           4        32                  ReLU         20×20×32
max_pool   20×20×32    2×2           2        –                   –            10×10×32
conv2      10×10×32    4×4           2        64                  ReLU         5×5×64
conv3      5×5×64      3×3           1        64                  ReLU         5×5×64
fc4        5×5×64      –             –        512                 ReLU         512
fc5        512         –             –        2                   Linear       2

Table 2 lists all the parameter settings of the DQN. We use a decayed ε ranging from 0.1 to 0.001 to balance exploration and exploitation. Moreover, Table 2 shows that the batch stochastic gradient descent optimizer is Adam with a batch size of 32. Finally, we also allocate a large replay memory.

Table 2: The training parameters of the DQN

Parameter                Value
Observe steps            100000
Explore steps            3000000
Initial_epsilon          0.1
Final_epsilon            0.001
Replay_memory            50000
Batch size               32
Learning rate            0.000001
FPS                      30
Optimization algorithm   Adam
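For concreteness, the layer stack in Table 1 can be written out directly as code. The report does not name a deep learning framework, so the following is only a minimal PyTorch sketch of the architecture; the class name FlappyBirdDQN and the padding values are our own choices, picked so that the layer output sizes match the table.

import torch
import torch.nn as nn

class FlappyBirdDQN(nn.Module):
    """CNN from Table 1: 3 convolutional layers followed by 2 fully
    connected layers. The input is a stack of 4 preprocessed 80x80 game
    frames; the output is one Q-value per action (flap / do nothing)."""

    def __init__(self, num_actions=2):
        super().__init__()
        self.features = nn.Sequential(
            # conv1: 8x8 kernel, stride 4, 32 filters -> 20x20x32
            nn.Conv2d(4, 32, kernel_size=8, stride=4, padding=2),
            nn.ReLU(),
            # max pooling only after the first conv layer -> 10x10x32
            nn.MaxPool2d(kernel_size=2, stride=2),
            # conv2: 4x4 kernel, stride 2, 64 filters -> 5x5x64
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            # conv3: 3x3 kernel, stride 1, 64 filters -> 5x5x64
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),                   # 5*5*64 = 1600 features
            nn.Linear(5 * 5 * 64, 512),     # fc4
            nn.ReLU(),
            nn.Linear(512, num_actions),    # fc5, linear output = Q-values
        )

    def forward(self, x):
        return self.head(self.features(x))

# Quick shape check: one stacked state of 4 frames of 80x80 pixels.
if __name__ == "__main__":
    q = FlappyBirdDQN()(torch.zeros(1, 4, 80, 80))
    print(q.shape)  # torch.Size([1, 2])

Under these padding choices the spatial sizes reproduce the Output column of Table 1 exactly (80 → 20 → 10 → 5 → 5).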
Results Analysis

We trained our model for about 4 million epochs. Figure 7 shows the weights and biases of the CNN's first hidden layer. The weights and biases finally centralize around 0 with low variance, which directly stabilizes the CNN's output Q-values and reduces the probability of random actions. The stability of the CNN's parameters leads to obtaining the optimal policy.

Figure 7: The left (right) figure is the histogram of the weights (biases) of the CNN's first hidden layer.

Figure 8 shows the cost value of the DQN during training. The cost has a slow downward trend and is close to 0 after 3.5 million epochs. This means that the DQN has learned the most common state subspace and will perform the optimal action when it comes across a known state; in a word, the DQN has obtained its best action policy.

Figure 8: The DQN's cost function: the plot shows the training progress of the DQN. We trained our model for about 4 million epochs.

When playing Flappy Bird, if the bird gets through a pipe we give a reward of 1; if it dies, -1; otherwise, 0.1. Figure 9 shows the average reward returned from the environment. Its stability in the final training stage means that the agent can automatically choose the best action and the environment gives the best reward in turn, so the agent and environment have entered into a friendly interaction, guaranteeing the maximal total reward.

Figure 9: The average reward returned from the environment. We average the returned reward every 1000 epochs.

From Figure 10, the predicted max Q-value from the CNN converges and stabilizes around a fixed value after about 100000 epochs. This means that the CNN can accurately predict the quality of the actions in a specific state, and we can steadily perform the action with the max Q-value. The convergence of the max Q-values shows that the CNN has explored the state space widely and approximated the environment well.

Figure 10: The average max Q-value obtained from the CNN's output. We average the max Q-value every 1000 epochs.

Figure 11 illustrates the DQN's action strategy. If the predicted max Q-value is high, we are confident of getting through the gap when performing the action with the max Q-value, as at frames A and C. If the max Q-value is relatively low and we perform the action, we might hit the pipe, as at frame B. In the final stage of training the max Q-value is dramatically high, meaning that we are confident of getting through the gaps when performing the actions with the max Q-value. The policy and reward details behind these curves are spelled out in the sketch below.

Figure 11: The leftmost plot shows the CNN's predicted max Q-value for a 100-frame segment of the game Flappy Bird. The three screenshots correspond to the frames labeled A, B, and C respectively.
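The policy just described, together with the reward scheme above and the ε schedule from Table 2, can be made concrete as follows. This is a hedged sketch only: the report does not spell out the annealing rule or any helper functions, so the linear decay and the names epsilon_at, reward, and select_action are illustrative assumptions rather than details taken from the report.

import random

# Schedule values taken from Table 2.
OBSERVE_STEPS = 100000      # steps before annealing starts ("Observe steps")
EXPLORE_STEPS = 3000000     # steps over which epsilon is annealed ("Explore steps")
INITIAL_EPSILON = 0.1
FINAL_EPSILON = 0.001

def epsilon_at(step):
    # Anneal epsilon from 0.1 to 0.001 during the explore phase.
    # (The report only says epsilon "decays" over this range; the
    # linear schedule here is an assumption.)
    if step < OBSERVE_STEPS:
        return INITIAL_EPSILON
    progress = min(1.0, (step - OBSERVE_STEPS) / EXPLORE_STEPS)
    return INITIAL_EPSILON + progress * (FINAL_EPSILON - INITIAL_EPSILON)

def reward(passed_pipe, died):
    # Reward scheme from the Results Analysis: +1 for clearing a pipe,
    # -1 for dying, 0.1 for every other surviving frame.
    if died:
        return -1.0
    return 1.0 if passed_pipe else 0.1

def select_action(q_values, step):
    # Epsilon-greedy choice over the two actions (flap / do nothing):
    # explore with probability epsilon, otherwise take the max-Q action.
    if random.random() < epsilon_at(step):
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])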

Conclusion

We successfully used a DQN to play Flappy Bird, and it can outperform human beings. The DQN automatically learns from the environment using only raw images, playing the game without prior knowledge. This feature gives the DQN the power to play almost any simple game. Moreover, the use of a CNN as a function approximator allows the DQN to deal with large environments that have an almost infinite state space. Last but not least, the CNN represents the feature space well without handcrafted feature extraction, reducing massive manual work.

References

C. Clark and A. Storkey. Teaching deep convolutional neural networks to play Go. arXiv preprint arXiv:1412.3409, 2014.
Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114, 2012.
George E. Dahl, Dong Yu, Li Deng, and Alex Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30–42, 2012.
Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
Brian Sallans and Geoffrey E. Hinton. Reinforcement learning with factored states and actions. Journal of Machine Learning Research, 5:1063–1088, 2004.
Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
Hamid Maei, Csaba Szepesvári, Shalabh Bhatnagar, and Richard S. Sutton. Toward off-policy learning control with function approximation. In Proceedings of the 27th I…
