
This paper is the fifth installment in a series on "AI safety," an area of machine learning research that aims to identify causes of unintended behavior in machine learning systems and develop tools to ensure these systems work safely and reliably. Other papers in the series describe three categories of AI safety issues—problems of robustness, assurance, and specification. This paper introduces the idea of uncertainty quantification, i.e., training machine learning systems that "know what they don't know."

Introduction

The last decade of progress in machine learning research has given rise to systems that are surprisingly capable but also notoriously unreliable. The chatbot ChatGPT, developed by OpenAI, provides a good illustration of this tension. Users interacting with the system after its release in November 2022 quickly found that while it could adeptly find bugs in programming code and author Seinfeld scenes, it could also be confounded by simple tasks. For example, one dialogue showed the bot claiming that the fastest marine mammal was the peregrine falcon, then changing its mind to the sailfish, then back to the falcon—despite the obvious fact that neither of these choices is a mammal. This kind of uneven performance is characteristic of deep learning systems—the type of AI systems that have seen most progress in recent years—and presents a significant challenge to their deployment in real-world contexts.

An intuitive way to handle this problem is to build machine learning systems that "know what they don't know"—that is, systems that can recognize and account for situations where they are more likely to make mistakes. For instance, a chatbot could display a confidence score next to its answers, or an autonomous vehicle could sound an alarm when it finds itself in a scenario it cannot handle. That way, the system could be useful in situations where it performs well, and harmless in situations where it does not. This could be especially useful for AI systems that are used in a wide range of settings, such as large language models (the technology that powers chatbots like ChatGPT), since these systems are very likely to encounter scenarios that diverge from what they were trained and tested for.

Unfortunately, designing machine learning systems that can recognize their limits is more challenging than it may appear at first glance. In fact, enabling machine learning systems to "know what they don't know"—known in technical circles as "uncertainty quantification"—is an open and widely studied research problem within machine learning. This paper gives an introduction to how uncertainty quantification works, why it is difficult, and what the prospects are for the future.


The Challenge of Reliably Quantifying Uncertainty

In principle, the kind of system we would like to build sounds simple: a machine learning model that generally makes correct predictions, but that can indicate when its predictions are more likely to be incorrect. Ideally, such a model would indicate high levels of uncertainty neither too often nor too seldom. A system that constantly expresses under-confidence in situations that it could actually handle well is not very useful, but if the system sometimes does not indicate uncertainty when in fact it is about to fail, then this defeats the purpose of trying to quantify uncertainty in the first place. Experts use the idea of "calibration" to describe the desired behavior here: the level of uncertainty that a machine learning model assigns to a given prediction—its "predictive uncertainty"—should be calibrated to the probability that the prediction is in fact incorrect.

Figure 1: Calibration Curves Depicting Under-Confidence, Near-Perfect Calibration, and Over-Confidence

The figures show under-confident (left), well-calibrated (center), and over-confident (right) calibration curves. Ideally, the confidence expressed by the model (on the x-axis) should correspond to the chance that the prediction is correct (on the y-axis). A model is under-confident if its predictions are more often correct than its confidence levels would imply (per the chart on the left), while the inverse is true for an over-confident model (on the right).

Source: CSET.

For example, imagine a medical machine learning classification system that uses a scan of a patient's eye to predict whether the patient has a retinal disease.1 If the system is calibrated, then its predictions—typically expressed as percentages—should correspond to the true proportion of diseased retinas. That is, it should be the case that of the retina images predicted to be exhibiting signs of disease with a 50% chance, half are in fact diseased, or that eight out of ten retina images predicted to have an 80% probability of exhibiting signs of disease in fact do, and so on. The closer the assigned probabilities are to the real proportion in the evaluation data, the better calibrated the system is. A well-calibrated system is useful because it allows users to account for how likely the prediction is to be correct. For example, a doctor would likely make different decisions about further testing and treatment for a patient whose scan indicated a 0.1% chance of disease versus one whose scan indicated a 30% chance—even though neither scan would be classified as likely diseased.
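
Calibration can be checked empirically by grouping a model's predictions into confidence bins and comparing each bin's average confidence with the fraction of predictions that turn out to be correct. The sketch below is a minimal illustration of this check (often summarized as the expected calibration error), not any particular tool's implementation; the probability and label arrays are hypothetical stand-ins for the retinal-disease example.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Compare average confidence with observed accuracy in each bin.

    probs:  predicted probability of the positive class, shape (n,)
    labels: true binary labels (0 or 1), shape (n,)
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if not in_bin.any():
            continue
        avg_confidence = probs[in_bin].mean()
        observed_freq = labels[in_bin].mean()  # fraction actually diseased
        ece += in_bin.mean() * abs(avg_confidence - observed_freq)
    return ece

# Hypothetical predictions: a calibrated model's 80% predictions
# should correspond to diseased retinas roughly 80% of the time.
probs = np.array([0.8, 0.8, 0.8, 0.8, 0.8, 0.5, 0.5, 0.5, 0.5, 0.5])
labels = np.array([1, 1, 1, 1, 0, 1, 1, 0, 0, 0])
print(f"ECE: {expected_calibration_error(probs, labels):.3f}")
```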

Understanding Distribution Shift

Building a system that can express well-calibrated predictive uncertainty in the laboratory—while not straightforward—is achievable. The challenge lies in creating machine learning models that can reliably quantify uncertainty when subjected to the messiness of the real world in which they are deployed.

At the root of this challenge lies an idea called "distribution shift." This refers to the ways in which the types of data that a machine learning system encounters (the "data distribution") change from one setting to another. For instance, a self-driving car trained using data from San Francisco's roads is unlikely to encounter snow, so if the same car were deployed in Boston during the winter, it would encounter a different data distribution (one that includes snow on the roads), making it more likely to fail.

Distribution shift is easy to describe informally, but very difficult to detect, measure, or define precisely. This is because it is especially difficult to foresee and account for all the possible types of distribution shifts that a system might encounter in practice.

When a particular shift can be anticipated—for instance, if the engineers that trained the self-driving car in San Francisco were planning a Boston deployment and considering weather differences—then it is relatively straightforward to manage. In most cases, however, it is impossible to know in advance what kinds of unexpected situations—what unknown unknowns—a system deployed in the messy real world may encounter.

The need to deal with distribution shifts makes quantifying uncertainty difficult, similarly to the broader problem of generalization in modern machine learning systems. While it is possible to evaluate a model's accuracy on a limited set of data points in the lab, there are no mathematical guarantees that ensure that a model will perform as well when deployed (i.e., that what the system learned will "generalize" beyond its training data). Likewise, for uncertainty quantification, there is no guarantee that a seemingly well-calibrated model will remain calibrated on data points that are meaningfully different from the training data. But while there is a vast amount of empirical and theoretical literature on how well models generalize to unseen examples, there is relatively little work on models' ability to reliably identify situations where their uncertainty should be high, making "uncertainty generalization" one of the most important and yet relatively underexplored areas of machine learning research.

Accurately Characterizing Uncertainty

In the medical imaging example above, we described how machine learning models used for classification produce probabilities for each class (e.g., diseased versus not diseased), but such probabilities may not be sufficient for reliable uncertainty quantification. These probability scores indicate how strongly a model predicts that a given input corresponds to a given output. For instance, an image classifier for reading zip codes takes in an image of a handwritten digit, then assigns a score to each of the ten possible outputs (corresponding to the digit in the image being a "0," "1," "2," etc.). The output with the highest score indicates the digit that the classifier thinks is most likely to be in the image.

Unfortunately, these scores are generally not useful indicators of the model's uncertainty, for two reasons. First, they are the result of a training process that was optimizing for the model to produce accurate outputs, not calibrated probabilities;2 thus, there is no particular reason to believe that a score of 99.9% reliably corresponds to a higher chance that the output is correct than a score of 95%. Second, systems designed this way have no way to express "none of the above"—say, if the zip code reader encountered a bug splattered across the page. The model is mathematically forced to assign probability scores to the available outputs, and to ensure that those scores sum to one.3
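
The "sum to one" constraint comes from the softmax function that standard classifiers apply to their raw output scores. The minimal sketch below is illustrative rather than drawn from the paper: even for a nonsense input, the scores are forced to form a probability distribution over the ten digits, leaving no way to answer "none of the above."

```python
import numpy as np

def softmax(logits):
    """Convert raw scores (logits) into probabilities that sum to one."""
    shifted = logits - np.max(logits)      # subtract the max for numerical stability
    exp_scores = np.exp(shifted)
    return exp_scores / exp_scores.sum()

# Hypothetical raw scores for the ten digits 0-9 on a garbage input
# (e.g., a bug splattered across the page): no score stands out.
logits = np.array([0.1, -0.3, 0.2, 0.0, -0.1, 0.3, 0.1, -0.2, 0.0, 0.2])
probs = softmax(logits)

print(probs.round(3))                      # roughly uniform scores over the digits
print(f"sum of scores: {probs.sum():.6f}") # always 1: "none of the above" is not expressible
print(f"predicted digit: {probs.argmax()} with score {probs.max():.2f}")
```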

This naturally raises the question of why adding a "none of the above" option is not possible. The reason is simple: models learn from data and, due to the challenges of distribution shift described above, AI developers typically do not have data that represents the broad range of possibilities that could fit into a "none of the above" option. This makes it infeasible to train a model that can consistently recognize inputs as being meaningfully different.

To summarize, the core problem making uncertainty quantification difficult is that in many real-world settings, we cannot cleanly articulate and prepare for every type of situation a model may need to be able to handle. The aim is to find a way for the system to identify situations when it is likely to fail—but because it is impossible to expose the system to every kind of scenario in which it might perform poorly, it is impossible to verify in advance that the system will appropriately estimate its chances of performing well under novel, untested conditions. In the next section, we discuss several approaches that try to navigate this difficulty.

Existing Approaches to Uncertainty Quantification

The key challenge of uncertainty quantification is to develop models that can accurately and reliably express how likely their predictions are to be correct. A wide range of approaches have been developed that aim to achieve this goal. Some approaches primarily treat uncertainty quantification as an engineering challenge that can be addressed with tailored algorithms and more training data. Others seek to use more mathematically grounded techniques that could, in theory, provide watertight guarantees that a model can quantify its own uncertainty well. Unfortunately, it is not currently possible to produce such mathematical guarantees without using unrealistic assumptions. Instead, the best we can do is develop models that quantify uncertainty well on carefully designed empirical tests.

Approaches to uncertainty quantification in modern machine learning fall into four different categories:

1. Deterministic Methods
2. Model Ensembling
3. Conformal Prediction
4. Bayesian Inference

Each of these approaches has distinct benefits and drawbacks, with some providing mathematical guarantees and others performing particularly well on empirical tests. We elaborate on each technique in the remainder of this section. Readers are welcome to skip to the next section if the somewhat more technical material below is not of interest.

Deterministic Methods

Deterministic methods work by explicitly encouraging the model to exhibit high uncertainty on certain input examples during training. For example, researchers might start by training a model on one dataset, then introduce a different dataset with the expectation that the model should express high uncertainty on examples from the dataset it was not trained on. Using this approach results in models that are very accurate on data similar to what they were trained on, and that indicate high uncertainty for other data.4
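
One common way to implement this idea is to add a term to the training loss that pushes the model toward maximally uncertain, near-uniform predictions on examples drawn from an auxiliary "out-of-scope" dataset. The PyTorch-style sketch below is a minimal illustration under that assumption, not the specific method used in the work cited above; the model, the two batches, and the weighting factor are all placeholders.

```python
import torch
import torch.nn.functional as F

def training_step(model, in_dist_batch, out_dist_batch, ood_weight=0.5):
    """One hypothetical training step combining two objectives:
    accurate predictions in-distribution, high uncertainty out-of-distribution."""
    x_in, y_in = in_dist_batch          # inputs and labels from the training set
    x_out = out_dist_batch              # unlabeled inputs from an auxiliary dataset

    # Standard objective: predict the correct class on in-distribution data.
    logits_in = model(x_in)
    loss_in = F.cross_entropy(logits_in, y_in)

    # Uncertainty objective: on out-of-scope data, penalize any deviation
    # from a uniform predictive distribution (i.e., maximum uncertainty).
    logits_out = model(x_out)
    log_probs_out = F.log_softmax(logits_out, dim=1)
    uniform = torch.full_like(log_probs_out, 1.0 / log_probs_out.size(1))
    loss_out = F.kl_div(log_probs_out, uniform, reduction="batchmean")

    return loss_in + ood_weight * loss_out
```

In practice, the choice of auxiliary dataset and the weight on the second term matter a great deal, which is part of why this kind of training offers no guarantee about inputs unlike anything seen during training.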


However, it is not clear how much we can rely on these research results in practice. Models trained this way are optimized to recognize that some types of input are outside the scope of what they can handle. But because the real world is complex and unpredictable, it is impossible for this training to cover all possible ways in which an input could be out of scope. For example, even if we trained the medical imaging classifier described above to have high predictive uncertainty on images that exhibit commonly known image corruptions, it may still fail at deployment if the model was trained on images obtained in one hospital with a certain type of equipment, and deployed in another hospital with a different type of equipment. As a result, this approach is prone to failure when the model is deployed, and there is no known way to guarantee that the predictive uncertainty estimates will in fact be reliable.

Model Ensembling

Model ensembling is a simple method that combines multiple trained models and averages their predictions. This approach often improves predictive accuracy compared to just using a single model. An ensemble's predictive uncertainty is expressed as the standard deviation of the different predictions, meaning that if all of the models in the ensemble make similar predictions, then uncertainty is low; if they make very different predictions, uncertainty is high. Ensemble methods are often successful at providing good predictive uncertainty estimates in practice, and are therefore a popular approach—though they can be expensive, given that multiple models must be trained. The underlying mechanism of using ensembling for uncertainty quantification is that different models in an ensemble will be likely to agree on input examples similar to the training data, but may disagree on input examples meaningfully different from the training data. As such, when the predictions of the ensemble components differ, this can be used as a stand-in for uncertainty.5
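
The sketch below illustrates this mechanism; the list of already-trained models and the inputs are hypothetical placeholders, and using the standard deviation across members as the uncertainty signal follows the description above rather than any specific library.

```python
import numpy as np

def ensemble_predict(models, x):
    """Combine the predictions of several independently trained models.

    models: a list of trained models, each exposing a .predict(x) method
    x:      an input example
    Returns the ensemble mean (the prediction) and the standard deviation
    across members (the stand-in for predictive uncertainty).
    """
    predictions = np.array([m.predict(x) for m in models])
    return predictions.mean(axis=0), predictions.std(axis=0)

# Hypothetical behavior on two inputs: near the training data the
# members agree, far from it they diverge.
class _StubModel:
    def __init__(self, outputs):
        self._outputs = outputs
    def predict(self, x):
        return self._outputs[x]

members = [_StubModel({"familiar": 0.71, "novel": 0.20}),
           _StubModel({"familiar": 0.69, "novel": 0.85}),
           _StubModel({"familiar": 0.70, "novel": 0.55})]

for example in ("familiar", "novel"):
    mean, std = ensemble_predict(members, example)
    print(f"{example}: prediction={mean:.2f}, uncertainty={std:.2f}")
```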

However, there is no way to verify that this mechanism works for any given ensemble and input example. In particular, it is possible that for some input examples, multiple models in the ensemble may all give the same incorrect answer, which would give a false impression of confidence, and it is impossible to ensure that a given ensemble will provide reliable, well-calibrated predictive uncertainty estimates across the board. For some use cases, the fact that ensembling typically provides fairly good uncertainty estimates may be sufficient to make it worth using. But in cases where the user needs to be able to trust that the system will reliably identify situations where it is likely to fail, ensembling should not be considered a reliable method.


Conformal Prediction

Conformal prediction, in contrast with deterministic methods and ensembling, is a statistically well-founded approach that provides mathematical reliability guarantees, but relies on a key assumption: that the data the model will encounter once deployed is generated by the same underlying data-generating process as the training data (i.e., that there is no distribution shift). Using this assumption, conformal prediction can provide mathematical guarantees of the probability that a given prediction range includes the correct prediction. For instance, in a weather forecasting setting, conformal prediction could guarantee a 95% chance that the day's maximum temperature will fall within a certain range. (That is, it could provide a mathematical guarantee that 95 out of 100 similar predictions would fall within the range.)6 A predicted range of, say, 82ºF-88ºF would imply more uncertainty than a range of 83ºF-85ºF.
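
The simplest variant, split conformal prediction, builds such ranges by measuring how far off the model was on a held-out calibration set and widening every new prediction by the corresponding error quantile. The sketch below is a minimal illustration under the no-distribution-shift assumption described above; the model and the data arrays are hypothetical placeholders.

```python
import numpy as np

def split_conformal_interval(model, x_calib, y_calib, x_new, alpha=0.05):
    """Build a prediction interval with roughly (1 - alpha) coverage, assuming
    the new data comes from the same distribution as the calibration data."""
    # Absolute errors of the model on held-out calibration examples.
    residuals = np.abs(y_calib - model.predict(x_calib))

    # Conformal quantile: slightly larger than the plain (1 - alpha) quantile
    # to account for the finite size of the calibration set.
    n = len(residuals)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(residuals, min(q_level, 1.0))

    # Every new prediction is widened by the same margin q.
    point = model.predict(x_new)
    return point - q, point + q

# Usage with a hypothetical temperature model: a wider interval, e.g.
# 82-88 ºF rather than 83-85 ºF, signals more uncertainty for that forecast.
# lower, upper = split_conformal_interval(model, x_calib, y_calib, x_new)
```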

Conformalprediction’smajoradvantageisthatitispossibletomathematicallyguaranteethatitspredictiveuncertaintyestimatesarecorrectundercertain

assumptions.Itsmajordisadvantageisthatthoseassumptions—primarilythatthe

modelwillencountersimilardatawhiledeployedtothedataitwastrainedon—oftendonothold.Worse,itisoftenimpossibletodetectwhentheseassumptionsare

violated,meaningthatthesamekindofchangesininputsthatmaytripup

deterministicmethodsarealsolikelytocauseconformalpredictiontofail.Infact,inalloftheexampleapplicationproblemswheremachinelearningmodelsarepronetofailandforwhichwewouldliketofindapproachestoimprovinguncertaintyquantification,standardassumptionsofconformalpredictionwouldbeviolated.

Bayesian Inference

Lastly, Bayesian uncertainty quantification uses Bayesian inference, which provides a mathematically principled framework for updating the probability of a hypothesis as more evidence or information becomes available.7 Bayesian inference can be used to train a neural network that represents each parameter in the network as a random variable, rather than a single fixed value (as is typically the case). While this approach is guaranteed to provide an accurate representation of a model's predictive uncertainty, it is computationally infeasible to carry out exact Bayesian inference on modern machine learning models such as neural networks. Instead, the best researchers can do is to use approximations, meaning that any guarantee that the model's uncertainty will be accurately represented is lost.
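
One widely used family of approximations keeps dropout active at prediction time and treats repeated stochastic forward passes as samples from an approximate distribution over the network's parameters (often called Monte Carlo dropout; it is named here only as an example, not prescribed by this paper). A minimal PyTorch-style sketch, assuming the model contains dropout layers:

```python
import torch

def mc_dropout_predict(model, x, n_samples=50):
    """Approximate Bayesian predictive uncertainty via Monte Carlo dropout.

    Keeps dropout active at prediction time and treats the spread of
    repeated stochastic forward passes as a proxy for uncertainty.
    """
    # Note: model.train() enables all training-mode behavior; a more careful
    # implementation would switch only the dropout layers into training mode.
    model.train()
    with torch.no_grad():
        samples = torch.stack([torch.softmax(model(x), dim=-1)
                               for _ in range(n_samples)])
    mean_probs = samples.mean(dim=0)   # approximate predictive distribution
    std_probs = samples.std(dim=0)     # spread across samples = uncertainty proxy
    return mean_probs, std_probs
```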


Practical Considerations in Using Uncertainty Quantification

Uncertainty quantification methods for machine learning are a powerful tool for making modern machine learning systems more reliable. While no existing approach is a silver bullet and each approach has distinct practical shortcomings, research has shown that methods specifically designed to improve the ability of modern machine learning systems to quantify their uncertainty—such as the approaches described above—succeed at doing so in most settings. These methods therefore often serve as "add-ons" to standard training routines. They can be custom-designed to meet the specific challenges of a given prediction task or deployment setting and can add an additional safety layer to deployed systems.

Considering human-computer interaction is crucial for making effective use of uncertainty quantification methods. For example, being able to interpret a model's uncertainty estimates, determining the level of uncertainty in machine learning systems that human operators are comfortable with, and understanding when and why a system's uncertainty estimates may be unreliable are extremely important for safety-critical application settings. Choices around the design of user interfaces, data visualizations, and user training can make a big difference in how useful uncertainty estimates are in practice.8

Given the limitations of existing approaches to uncertainty quantification, it is essential that the use of uncertainty estimates does not create a false sense of confidence. Systems must be designed to account for the fact that a model displaying high confidence could still be wrong if it has encountered an unknown unknown that goes beyond what it was trained and tested for.


Outlook

There is increasing interest in how uncertainty quantification could be used to mitigate the weaknesses of large language models, such as their tendency to hallucinate. While much past work in the space has focused on image classification or simple tabular datasets, some researchers are beginning to explore what it would look like for chatbots or other language-based systems to "know what they don't know."9 This research needs to grapple with challenges specific to language generation, such as the fact that there is often no single correct answer. (For instance, correct answers to the question "What is the capital of France?" could include "Paris," "It's Paris," or "The capital of France is Paris," each of which requires the language model to make different predictions about which word should come next.)

Due to the fundamental challenges of reliably quantifying uncertainty, we should not expect a perfect solution to be developed for language generation or any other type of machine learning. Just as with the broader challenge of building machine learning systems that can generalize to new contexts, the possibility of distribution shift means that we may never be able to build AI systems that "know what they don't know" with complete certainty.

Nonetheless, research into reliable uncertainty quantification in challenging domains—such as computer vision or reinforcement learning—has made great strides in improving the reliability and robustness of modern machine learning systems over the past few years and will play a crucial role in improving the safety, reliability, and interpretability of large language models in the near future. Over time, uncertainty quantification in machine learning systems is likely to move from being an area of basic research to a practical engineering challenge that can be approached with the different paradigms and methods described in this paper.


Authors

Tim G. J. Rudner is a non-resident AI/ML fellow with CSET and a faculty fellow at New York University.

Helen Toner is the director of strategy and foundational research grants at CSET.

Acknowledgments

For feedback and assistance, we are grateful to Alex Engler, Heather Frase, Margarita Konaev, Larry Lewis, Emelia Probasco, and Thomas Woodside.

© 2024 by the Center for Security and Emerging Technology. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

To view a copy of this license, visit https://creativecommons.org/licenses/by-nc/4.0/.

Document Identifier: doi:10.51593/20220013


Endnotes

1 Neil Band et al., "Benchmarking Bayesian Deep Learning on Diabetic Retinopathy Detection Tasks," Advances in Neural Information Processing Systems, 2021, /forum?id=jyd4Lyjr2iB.

2 We note that, technically, models are trained to minimize the cross-entropy between the data labels and the predicted probabilities. This metric does discourage the model—to some extent—from being confident and wrong on the training data but does not necessarily lead to well-calibrated predictions.

3 For simplicity, we only discuss classification problems, where a model predicts class probabilities that make it easy to compute the calibration of a predictive model.

4 See, for example, this paper on predicting retinal disease (including Table 4 on expected calibration error): Joost van Amersfoort, Lewis Smith, Yee Whye Teh, and Yarin Gal, "Uncertainty Estimat
