AI-Powered Bug Hunting - Evolution and benchmarking
Alfredo Ortega - ortegaalfredo - X: @ortegaalfredo
Neuroengine.ai
June 27, 2024
While AI holds promise for assisting with bug hunting, its actual impact remains unclear. This presentation addresses this doubt by introducing CrashBench, a standardized evaluation framework for AI-driven static analysis tools. We'll share results from a simple bug-hunting AI agent, AutoKaker, and discuss the implications for optimizing AI-based bug hunting in C/C++ codebases.
1 Introduction
Opinion on automatic bug finding is controversial. At the date of this article's publication, there is no consensus about whether it is possible, or to what extent. This is partly due to the rapid advancement of LLM models; up until a few months ago, open-source models were not advanced enough to be effective at bug finding. There exists a threshold in the complexity of LLMs beyond which bug finding becomes possible, and in this article we benchmark various models and find that this threshold has been reached for some vulnerabilities.
2 CrashBench
Crashbench [1] is a simple automatic test-case-based benchmark tool. It connects to several LLM services offering open-source and private AI models, sends a test case, and then measures the results. If the LLM finds the bug at the correct line, the score is increased by one.
2.1 Design
Most of the test cases for the v1 version are based on Gera's Insecure Programming exercises [2], plus 3 real-vulnerability examples. The LLM is assigned a score based on the number of vulnerabilities that were reported, with real vulnerabilities being worth 10 times the score.
The configuration of Crashbench is a single .ini file containing the prompt, the test-case files and the expected line where each bug is found.
[SETTINGS]
SystemPrompt="You are an expert security researcher, programmer and bug finder."
Prompt="Check this code for any out-of-bounds or
integer-overflow vulnerability, explain it and point at the line with the problem,
and nothing more, in this way:\n'Bug line=X' where X is the line number of the bug,
and then print that line number.
If the code has no bugs, then print 'Bug line=0'."
[Basic]
file1=stack1.c,6
file2=stack2.c,6
file3=stack3.c,6
file4=stack4.c,6
file5=stack5.c,6
[ABOs]
file1=abo1.c,4
...
In this way, the test not only measures bug-finding capability, but also accuracy in reporting. Many models are good at finding vulnerabilities, but they fail at accurately pointing out exactly where the bug is located in the code. To create negative tests (tests where no vulnerability should be detected), simply set the expected vulnerable line number to zero.
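A minimal sketch of how such a test harness could work is shown below, assuming a hypothetical query_llm() helper that sends the prompts plus the test-case source to a model and returns its reply; the scoring and .ini handling are illustrative, not Crashbench's actual implementation.

    import configparser
    import re

    def parse_bug_line(reply: str) -> int:
        """Extract the line number from a 'Bug line=X' style answer, or -1."""
        m = re.search(r"Bug\s*line\s*=\s*(\d+)", reply)
        return int(m.group(1)) if m else -1

    def run_benchmark(config_path: str, query_llm) -> int:
        """Run every test case listed in the .ini file and return the total score."""
        # interpolation=None avoids configparser choking on special characters in prompts
        cfg = configparser.ConfigParser(interpolation=None)
        cfg.read(config_path)
        system_prompt = cfg["SETTINGS"]["SystemPrompt"].strip('"')
        prompt = cfg["SETTINGS"]["Prompt"].strip('"')
        score = 0
        for section in cfg.sections():
            if section == "SETTINGS":
                continue
            for _, value in cfg[section].items():
                filename, expected_line = value.split(",")
                code = open(filename).read()
                reply = query_llm(system_prompt, prompt, code)
                if parse_bug_line(reply) == int(expected_line):
                    score += 1  # one point for an exact line match
        return score

Negative tests fall out naturally in this scheme: a model that correctly prints 'Bug line=0' on a clean file scores the point, while a hallucinated bug line does not.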
2.2 Parameters
Software used was vllm v0.5.0.post1 [3] for AWQ quantization and aphrodite-engine v0.5.3 [4] for EXL2 quantization. Parameters used for inference with vllm were:
• temperature = 1.2
• top_p = 1.0
• frequency_penalty = 0.6
• presence_penalty = 0.8
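These sampling settings map directly onto the OpenAI-compatible API that vLLM exposes; the sketch below assumes a local vLLM server on http://localhost:8000 and a placeholder model name, and is not necessarily the exact client used for the benchmark.

    from openai import OpenAI

    # vLLM serves an OpenAI-compatible endpoint; host, port and model name
    # are placeholders for whatever server is actually running.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

    reply = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3-70B-Instruct",  # assumed model name
        messages=[
            {"role": "system", "content": "You are an expert security researcher, programmer and bug finder."},
            {"role": "user", "content": "Check this code for any out-of-bounds or integer-overflow vulnerability ...\n" + open("stack1.c").read()},
        ],
        temperature=1.2,           # sampling parameters listed above
        top_p=1.0,
        frequency_penalty=0.6,
        presence_penalty=0.8,
    )
    print(reply.choices[0].message.content)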
2.3 Results
The benchmark ran against 16 LLMs, most of them being the latest versions, but also some older models based on Llama-2 to compare them. Additionally, several quantizations of the same model were tested to measure the effect of quantization on LLM bug-reporting accuracy.
Figure 1: Crashbench score
As shown in Figure 1, older models are not competitive at code understanding and bug finding, with newer models being significantly better. Even closed models like ChatGPT are surpassed by these newer models in terms of performance. Additionally, the relatively small effect of quantization on results is evident, as a strong quantization of Llama-3-70B (2.25 bpw) did not have a significant impact on the model's score.
2.4 Quantization effects
In Figure 2, we focus on the effects of quantization on the score. Quantization is a technique that compresses models by representing weights using fewer bits, losing some quality but reducing the amount of memory needed. This results in increased speed and efficiency. Since current GPUs are mostly limited by memory bandwidth, the efficiency of inference decreases nearly linearly with size.
We set the y-axis to 0 so that it is easier to see how small an effect quantization had on the scores. We can also see the rapid increase in size with the increase in bits per weight, without any corresponding increase in score.
Figure 2: Quantization effects on score. Model: Meta-Llama-3-70B-Instruct.
We can plot a second graph in Fig. 3, showing the efficiency of the different models, meaning the score per size in gigabytes. With decreased size, the speed and power required for inference also decrease linearly, increasing the efficiency of operation.
We can see that the current most efficient models are highly quantized versions of Llama-3 70B. At around 25 GB, those models are still out of reach for most personal home computers. The next best option would be to use a highly quantized version of Mistral-8x7B, which can run on CPU on most modern computers at an acceptable speed.
2.5 Crashbench vs LMSys ELO
The LMsys leaderboard [5] has become the industry standard for model benchmarking. We can compare how our bug-finding benchmark correlates with the overall model score.
Intuitively, we would assume that overall ELO and Crashbench scores should be somewhat related. But in Figure 4 we can see some inconsistencies, especially with modern OpenAI models. These models have much better ELO scores than Crashbench scores. This means that these models are much better as generic assistants and at code generation than at bug finding. We suspect that super-alignment might cause these models to refuse to show bugs, as an analysis of gpt-4 and gpt-4o shows that they do not report many wrong bugs or lines on the test cases; instead, their low scores are mostly due to denying that there is a bug at all. Low scores might also indicate problems with the benchmark, as we discuss in the following section.
Figure 3: Total model efficiency. This graphic shows how many points each model scores for every GB in size.
2.6 Problems
Problems that may affect the accuracy of this benchmark are:
Incorrect parameters and/or prompt format: Instruct models have a specific format that must be used in the prompts to maximize their understanding of the requests. Many LLMs are quite flexible about this format, while some are not. It is important to respect the prompt format of each LLM to maximize its code-understanding capacity.
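As an illustration of respecting each model's prompt format, the sketch below applies a model's own chat template through the Hugging Face tokenizer before sending the request; the model name is a placeholder and this is not necessarily how Crashbench formats its prompts.

    from transformers import AutoTokenizer

    # Placeholder model; every instruct model ships its own chat template.
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")

    messages = [
        {"role": "system", "content": "You are an expert security researcher, programmer and bug finder."},
        {"role": "user", "content": "Check this code for any out-of-bounds or integer-overflow vulnerability ..."},
    ]

    # apply_chat_template inserts the special tokens the model was trained with,
    # so the request arrives in the exact format the instruct tuning expects.
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    print(prompt)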
Model trained on the solutions of the benchmark: As most models are trained on terabytes of tokens, it is very likely that the test cases, both artificial and real, were part of their training, along with the solutions. This might introduce a bias where models are very good at passing the benchmark, but not so good in real-world applications. The solution to this problem is to create more unpublished test cases that the LLM didn't see during training. However, this is a short-lived solution, as it is very likely that newer versions of the LLMs will contain these new test cases, so they must be discarded in every new version of the benchmark.
Bugs in inference software / quantization quality: Inference software is evolving rapidly, and it contains bugs that affect quality and reasoning. A solution to this problem for benchmarking is to always use the same inference software. In our case, we use either vLLM or the Aphrodite engine, which internally uses vLLM.
Figure 4: Crashbench score vs overall model ELO score. We can see a general correlation except for closed models.
Refusals due to alignment: Some models refuse to discover bugs because they reason that the bugs can be used for malicious purposes. This can be bypassed with several techniques such as prompt jailbreaking or abliteration, but both techniques might affect the code-understanding capacity of the model. However, the abliterated version of Llama-3-70B was compared against the original version and showed a minimal effect on the results.
3 AutoKaker: Automatic vulnerability discovery
Using the same technique as the benchmark, we can easily construct a tool [6] that processes source code and annotates every vulnerability found. The algorithm described in fig 5 is simple (a minimal sketch follows the list):
1. Separate the source code into individual chunks that contain one or more functions
2. Assemble a prompt asking the LLM to analyze the code
3. Annotate the results
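A minimal sketch of this loop is shown below; split_into_functions() and query_llm() are hypothetical helpers (the first yields function-sized chunks of source, the second returns the model's reply), and the comment format is illustrative rather than AutoKaker's actual output.

    PROMPT = ("Check this code for any out-of-bounds or integer-overflow "
              "vulnerability and describe it in one line, or reply 'NONE'.")

    def annotate_file(path: str, split_into_functions, query_llm) -> None:
        """Annotate every function of a C source file with possible vulnerabilities."""
        source = open(path).read()
        annotated = []
        for chunk in split_into_functions(source):      # step 1: chunk the source
            reply = query_llm(PROMPT + "\n\n" + chunk)   # step 2: ask the LLM
            if "NONE" not in reply:
                annotated.append("/* VULN: " + reply.strip() + " */\n")  # step 3: annotate
            annotated.append(chunk)
        open(path + ".annotated.c", "w").write("".join(annotated))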
This tool (see fig 6) can be launched on complete codebases and will annotate every function with possible vulnerabilities, ready for triage and exploitation by a human operator. Unlike other approaches, this tool does not attempt to verify or exploit the vulnerabilities found, as this is a much more complex task. We propose in the next section that it is unnecessary. The tool currently supports only C code, but this is a limitation of the current code parser due to its inability to separate functions. The tool can run on C++/Rust code with a modified code parser.

Figure 5: Autokaker main loop
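To illustrate the parser limitation, a naive brace-counting chunker for C might look like the sketch below; it is an assumption about how chunking could be done, not AutoKaker's actual parser, and it is exactly the kind of heuristic that breaks on C++ or Rust syntax.

    def split_into_functions(source: str, max_chunk: int = 4000):
        """Naively split C source into chunks that end on a top-level closing brace."""
        chunks, current, depth = [], [], 0
        for line in source.splitlines(keepends=True):
            current.append(line)
            depth += line.count("{") - line.count("}")
            at_function_end = depth == 0 and line.strip() == "}"
            too_big = sum(len(l) for l in current) > max_chunk
            if at_function_end or (too_big and depth == 0):
                chunks.append("".join(current))   # flush a complete chunk
                current = []
        if current:
            chunks.append("".join(current))       # trailing declarations, comments, etc.
        return chunks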
3.1 Problems with automated AI exploitation
We can see a simplified diagram of the stages of vulnerability discovery in fig 7. Once we have found a possible vulnerability, we have two paths: either confirm it via exploitation, or fix it via a patch. We can make two important observations:
• It is not necessary to confirm a possible vulnerability in order to patch it. This follows the philosophy of defensive programming.
• Patching a vulnerability requires much less skill than exploiting it, or even than finding it.
Similar tools/benchmarks such as Meta's CybersecEval 2 [7] and Google Project Zero's Naptime [8] aim to find and verify vulnerabilities, and due to the high-skill and high-complexity nature of this task, current AI systems perform poorly at it. They can only succeed in basic examples without any software protections or exploit countermeasures. While offensive AI will eventually become advanced enough to succeed at this task, due to the observation that it is often easier to fix a vulnerability than to create an exploit for it, we can assume that the asymmetry between defense and attack will cause offensive AI-generated exploits to almost never succeed. This is because less complex defensive AI will discover and patch the bugs first.

Figure 6: AutoKaker GUI
Another conclusion is that since current LLMs are advanced enough to discover some vulnerabilities, they also have the capacity to automatically patch them, as shown in the next section.
4 Auto-patching
Vulnerability discovery/annotation and vulnerability patching have a similar workflow, but instead of adding a comment describing the vulnerability, we ask the LLM to generate and add code that fixes it. The autokaker tool can already perform this task using the --patch command-line argument, displaying a simple GUI (see fig 8).
4.1 Iterative patching
Most SOTA LLMs like Llama-3, Mistral-Large, GPT-4, Gemini or Claude are already capable of generating patches, but they do not have a 100% success rate, meaning that the generated fixes will sometimes either not compile or introduce additional bugs.
We solve this problem using a closed-loop approach (see fig 9), in which after every patch generation the autokaker agent checks whether the code compiles and passes all tests. If the LLM's code fails these tests, we can retry multiple times until the generated code passes all of them. Notably, most SOTA LLMs generate correct patches on the first try.
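A minimal sketch of this closed loop is shown below, assuming hypothetical generate_patch() and apply_patch() helpers and a shell command such as "make && ./tests" that succeeds only when the build and tests pass; it illustrates the retry logic rather than autokaker's exact code.

    import subprocess

    def patch_until_green(chunk, source_path, build_cmd, generate_patch, apply_patch, max_tries=5):
        """Regenerate a patch until the project compiles and its tests pass."""
        original = open(source_path).read()
        for _ in range(max_tries):
            patched = generate_patch(chunk)                 # ask the LLM for a fixed chunk
            apply_patch(source_path, chunk, patched)        # write it into the tree
            result = subprocess.run(build_cmd, shell=True)  # e.g. "make && ./tests"
            if result.returncode == 0:
                return True                                 # patch compiles and passes tests
            open(source_path, "w").write(original)          # roll back and try again
        return False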
Figure 7: Simplified vuln discovery stages
4.2 Example: zlib-hardcored
Zlib [9] is a small compression library that includes example utilities that compress/decompress binary data, which can be used as a test for the correct working of the several algorithms implemented. The autopatcher utility was run on this code using this command line:
cd zlib; python autok.py --patch --make "make && example64"
This will run the autopatcher recursively on all .c files and run the command 'make && example64' after each modification to check the correctness and validity of every patch.
This generated a compatible refactor of the original zlib library with over 200 applied security patches. The hardened zlib code can be downloaded at [10]. Notably, the modification of this project to add additional checks was done 100% automatically, with no human intervention. While not all patches fix exploitable vulnerabilities, they add defensive programming that protects the zlib functions from many future unknown vulnerabilities, with the added benefit of randomizing the implementation itself, making ROP attacks much harder.

Figure 8: Autopatcher GUI
4.3 Example: OpenBSD-hardcored
The second example is the OpenBSD kernel. OpenBSD [12] is an operating system known for its security and correctness. However, the Autokaker tool discovered many vulnerabilities, making it a candidate for autopatching.
At this time, the autopatcher was run on the complete netinet/netinet6 subsystem using GPT-4 as the model, generating around 2000 security checks [11]. Note that most patches will result in unused code, and most checks are not really needed, following the same philosophy as defensive programming.
As OpenBSD does not have tests that check the correctness of the IPv4/IPv6 stack, patching was 'blind' in the sense that it may generate errors. Therefore, the patches had to be reviewed manually. However, out of thousands of modifications, only 2 patches needed manual correction.
It is not recommended to use this 'hardened' code in production, as it still might contain bugs introduced by the autopatcher and not yet detected. Also, as we discuss later, the patches can be easily regenerated with a newer, more powerful LLM.
4.4 Cost
Currently, the complete refactor of the netinet/netinet6 subsystem of OpenBSD 7.5 is the biggest project that has been autopatched. We can cite some numbers for the associated cost:
Figure 9: Autopatcher design
Subsystem   API req   Context Tok.   Generated Tok.   Total Tok.   Cost (GPT-4o)
netinet     301       175241         124913           300154       $2.75
netinet6    565       260905         187643           458548       $4.27
In this test run, the cost was under 10 USD for the complete netinet/netinet6 processing, using one of the most expensive models available (GPT-4o). This cost is very small compared to the cost of a developer, but most of the cost of hardening software will be the cost of patch review. The performance of different models on autopatching was not measured in this article. The total time spent patching the netinet/netinet6 subsystem was about 12 hours.
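For a rough sense of where these figures come from, the sketch below estimates the API cost from the token counts in the table, assuming GPT-4o list pricing of about $5 per million input tokens and $15 per million output tokens at the time of writing; actual billing may differ.

    def estimate_cost(context_tokens, generated_tokens,
                      input_price_per_m=5.0, output_price_per_m=15.0):
        """Estimate API cost in USD from token counts (assumed GPT-4o list prices)."""
        return (context_tokens * input_price_per_m +
                generated_tokens * output_price_per_m) / 1_000_000

    print(round(estimate_cost(175241, 124913), 2))  # netinet row: roughly 2.75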
4.5 Recommended usage
The autopatcher can generate code with additional checks that may prevent many unknown bugs from being exploited. However, as we can assume that LLMs will continue to improve at a fast rate, it is not recommended to commit the generated checks to the code permanently, as they can be easily regenerated when needed with more advanced LLMs, generating better checks. In this way, we can see the autopatcher as a pre-compilation stage for most projects.

Figure 10: OpenBSD 7.5 with AI-hardened IP stack patches booting.
5 Conclusion
This article shows that current state-of-the-art LLMs can discover some classes of vulnerabilities in real C/C++ projects, specifically memory corruption bugs. And while they are not advanced enough to verify/exploit them, the AI can easily generate and integrate patches that prevent them. We argue that the risk of auto-exploitation of