SHANGHAI JIAO TONG UNIVERSITY

Project Title: Playing the Game of Flappy Bird with Deep Reinforcement Learning
Group Number: G-07
Group Members: Wang Wenqing, Gao Xiaoning

Contents

1 Introduction

        end for
        Every C steps reset Q̂ = Q
    end for
end for

Experiments

This section describes our algorithm's parameter settings and analyzes the experimental results.

Parameters Settings

Figure 6 illustrates our CNN's layer settings. The neural network has 3 convolutional hidden layers followed by 2 fully connected hidden layers. Table 1 shows the detailed parameters of every layer. We use max pooling only after the first convolutional hidden layer, and we use the ReLU activation function to produce the neural outputs.

Figure 6: The layer setting of the CNN: this CNN has 3 convolutional layers followed by 2 fully connected layers.

As for training, we use the Adam optimizer to update the CNN's parameters.

Table 1: The detailed layer settings of the CNN

Layer     Input      Filter size  Stride  Num filters  Activation  Output
conv1     80×80×4    8×8          4       32           ReLU        20×20×32
max_pool  20×20×32   2×2          2       -            -           10×10×32
conv2     10×10×32   4×4          2       64           ReLU        5×5×64
conv3     5×5×64     3×3          1       64           ReLU        5×5×64
fc4       5×5×64     -            -       512          ReLU        512
fc5       512        -            -       2            Linear      2

Table 2 lists all the training parameter settings of the DQN. We use an epsilon decayed from 0.1 to 0.001 to balance exploration and exploitation. Moreover, the batch stochastic gradient descent optimizer is Adam with a batch size of 32. Finally, we also allocate a large replay memory.

Table 2: The training parameters of the DQN

Parameter               Value
Observe steps           100,000
Explore steps           3,000,000
Initial epsilon         0.1
Final epsilon           0.001
Replay memory           50,000
Batch size              32
Learning rate           0.000001
FPS                     30
Optimization algorithm  Adam
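Table 1 specifies an 80×80×4 network input, but the report does not spell out how that input is built. A common construction for this shape, assumed here, is to convert each game frame to grayscale, resize it to 80×80, and stack the four most recent frames; the following is a minimal sketch of that idea using OpenCV and NumPy (both tooling assumptions on our part):

```python
import cv2
import numpy as np

def preprocess_frame(frame):
    """Convert one raw game frame to a single 80x80 grayscale channel.

    Assumption: the raw frame is a BGR screen capture; the report only
    specifies the final 80x80x4 input shape.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (80, 80))
    return small.astype(np.float32) / 255.0  # scale pixels to [0, 1]

def stack_state(prev_state, new_frame):
    """Append the newest preprocessed frame and drop the oldest one,
    keeping the 80x80x4 state described in Table 1."""
    channel = preprocess_frame(new_frame)[..., np.newaxis]  # 80x80x1
    return np.concatenate([prev_state[..., 1:], channel], axis=-1)
```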
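The layer settings in Table 1 map directly onto a convolutional Q-network. The report does not name its framework, so the following is a minimal sketch in tf.keras; "same" padding is assumed because it reproduces the output sizes listed in Table 1, and the learning rate comes from Table 2:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_q_network():
    """CNN from Table 1: 3 conv layers followed by 2 fully connected layers.
    The 2 linear outputs are the Q-values of the two actions
    (flap / do nothing)."""
    return tf.keras.Sequential([
        # conv1: 8x8, stride 4, 32 filters, ReLU -> 20x20x32
        layers.Conv2D(32, 8, strides=4, padding="same", activation="relu",
                      input_shape=(80, 80, 4)),
        # max_pool: 2x2, stride 2 -> 10x10x32
        layers.MaxPool2D(2, strides=2, padding="same"),
        # conv2: 4x4, stride 2, 64 filters, ReLU -> 5x5x64
        layers.Conv2D(64, 4, strides=2, padding="same", activation="relu"),
        # conv3: 3x3, stride 1, 64 filters, ReLU -> 5x5x64
        layers.Conv2D(64, 3, strides=1, padding="same", activation="relu"),
        layers.Flatten(),
        # fc4: 512 units, ReLU
        layers.Dense(512, activation="relu"),
        # fc5: one linear Q-value per action
        layers.Dense(2),
    ])

model = build_q_network()
# Adam with the learning rate from Table 2; MSE is the usual DQN loss.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-6),
              loss="mse")
```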
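Table 2's observe/explore steps and epsilon range suggest the standard annealed epsilon-greedy policy: act with fixed epsilon during the 100,000 observe steps while the replay memory fills, then decay epsilon linearly from 0.1 to 0.001 over the 3,000,000 explore steps. A sketch of that schedule, which is our reading of the table rather than code from the report:

```python
import numpy as np

OBSERVE_STEPS = 100_000
EXPLORE_STEPS = 3_000_000
INITIAL_EPSILON = 0.1
FINAL_EPSILON = 0.001

def epsilon_at(step):
    """Linearly anneal epsilon over the explore phase (Table 2)."""
    if step < OBSERVE_STEPS:
        return INITIAL_EPSILON  # observation phase: fill the replay memory
    progress = min(1.0, (step - OBSERVE_STEPS) / EXPLORE_STEPS)
    return INITIAL_EPSILON + progress * (FINAL_EPSILON - INITIAL_EPSILON)

def select_action(model, state, step, rng=np.random.default_rng()):
    """Epsilon-greedy over the CNN's two Q-values: with probability epsilon
    act randomly, otherwise take the action with the max predicted Q-value
    (the strategy discussed around Figure 11 below)."""
    if rng.random() < epsilon_at(step):
        return int(rng.integers(2))  # explore: random action
    q_values = model.predict(state[np.newaxis], verbose=0)[0]
    return int(np.argmax(q_values))  # exploit: greedy action
```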
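The algorithm fragment above ends with the target-network reset "every C steps reset Q̂ = Q". Combined with Table 2's replay memory of 50,000 and batch size of 32, one training step looks roughly like the sketch below, reusing `model` and `build_q_network` from the architecture sketch. The discount factor gamma and the sync interval C are not given in the report and are assumed here; transitions use the reward scheme described in the Results Analysis below (+1 for passing a pipe, -1 on death, 0.1 otherwise).

```python
import random
from collections import deque

import numpy as np

REPLAY_MEMORY = 50_000
BATCH_SIZE = 32
GAMMA = 0.99           # assumption: the report does not state its discount factor
SYNC_EVERY_C = 10_000  # assumption: C is not given in the report

# During play, append (state, action, reward, next_state, done) tuples here.
replay = deque(maxlen=REPLAY_MEMORY)

target_model = build_q_network()  # Q-hat, same architecture as `model`
target_model.set_weights(model.get_weights())

def train_step(step):
    """One minibatch update of the DQN loss (r + gamma * max Q-hat(s') - Q(s, a))^2."""
    batch = random.sample(replay, BATCH_SIZE)
    states = np.array([b[0] for b in batch])
    actions = np.array([b[1] for b in batch])
    rewards = np.array([b[2] for b in batch], dtype=np.float32)
    next_states = np.array([b[3] for b in batch])
    dones = np.array([b[4] for b in batch], dtype=np.float32)

    # Bellman targets from the frozen target network Q-hat.
    next_q = target_model.predict(next_states, verbose=0).max(axis=1)
    targets = model.predict(states, verbose=0)
    targets[np.arange(BATCH_SIZE), actions] = rewards + (1.0 - dones) * GAMMA * next_q

    model.train_on_batch(states, targets)

    # "Every C steps reset Q-hat = Q" (the algorithm fragment above).
    if step % SYNC_EVERY_C == 0:
        target_model.set_weights(model.get_weights())
```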
Results Analysis

We train our model for about 4 million epochs. Figure 7 shows the weights and biases of the CNN's first hidden layer. The weights and biases finally concentrate around 0 with low variance, which directly stabilizes the CNN's output Q-values and reduces the probability of random actions. The stability of the CNN's parameters leads to an optimal policy.

Figure 7: The left (right) figure is the histogram of the weights (biases) of the CNN's first hidden layer.

Figure 8 shows the cost value of the DQN during training. The cost function has a slow downward trend and is close to 0 after 3.5 million epochs. This means that the DQN has learned the most common state subspace and will perform the optimal action when coming across a known state. In a word, the DQN has obtained its best action policy.

Figure 8: The DQN's cost function: the plot shows the training progress of the DQN. We trained our model for about 4 million epochs.

When playing Flappy Bird, if the bird gets through a pipe we give a reward of 1; if it dies, we give -1; otherwise, we give 0.1. Figure 9 shows the average reward returned from the environment. The stability in the final training stage means that the agent can automatically choose the best action, and the environment returns the best reward in turn. The agent and the environment have entered into a friendly interaction, guaranteeing the maximal total reward.

Figure 9: The average reward returned from the environment. We average the returned reward every 1000 epochs.

As Figure 10 shows, the max Q-value predicted by the CNN converges and stabilizes at a fixed value after about 100,000 epochs. This means that the CNN can accurately predict the quality of the actions in a specific state, and we can steadily perform the action with the max Q-value. The convergence of the max Q-values indicates that the CNN has explored the state space widely and approximates the environment well.

Figure 10: The average max Q-value obtained from the CNN's output. We average the max Q-value every 1000 epochs.

Figure 11 illustrates the DQN's action strategy. If the predicted max Q-value is high, we are confident that the bird will get through the gap when we perform the action with the max Q-value, as at frames A and C. If the max Q-value is relatively low and we perform the action, the bird might hit the pipe, as at frame B. In the final stage of training, the max Q-value is dramatically high, meaning that we are confident of getting through the gaps when performing the actions with the max Q-value.

Figure 11: The leftmost plot shows the CNN's predicted max Q-value for a 100-frame segment of the game Flappy Bird. The three screenshots correspond to the frames labeled A, B, and C respectively.

Conclusion

We successfully use a DQN to play Flappy Bird, and the trained agent can outperform human players. The DQN automatically learns from the environment using only raw images, without any prior knowledge of the game. This feature gives the DQN the power to play almost any simple game. Moreover, the use of a CNN as a function approximator allows the DQN to deal with large environments that have an almost infinite state space. Last but not least, the CNN can represent the feature space well without handcrafted feature extraction, greatly reducing manual work.