
Contents

1. Principles for AI safety governance
2. Framework for AI safety governance
3. Classification of AI safety risks
3.1 AI's inherent safety risks
3.2 Safety risks in AI applications
4. Technological measures to address risks
4.1 Addressing AI's inherent safety risks
4.2 Addressing safety risks in AI applications
5. Comprehensive governance measures
6. Safety guidelines for AI development and application
6.1 Safety guidelines for model algorithm developers
6.2 Safety guidelines for AI service providers
6.3 Safety guidelines for users in key areas
6.4 Safety guidelines for general users

AI Safety Governance Framework

(V1.0)

Artificial Intelligence (AI), a new area of human development, presents significant opportunities to the world while posing various risks and challenges. Upholding a people-centered approach and adhering to the principle of developing AI for good, this framework has been formulated to implement the Global AI Governance Initiative and promote consensus and coordinated efforts on AI safety governance among governments, international organizations, companies, research institutes, civil organizations, and individuals, aiming to effectively prevent and defuse AI safety risks.

1. Principles for AI safety governance

- Commit to a vision of common, comprehensive, cooperative, and sustainable security while putting equal emphasis on development and security

- Prioritize the innovative development of AI

- Take effectively preventing and defusing AI safety risks as the starting point and ultimate goal


- Establish governance mechanisms that engage all stakeholders, integrate technology and management, and ensure coordinated efforts and collaboration among them

- Ensure that all parties involved fully shoulder their responsibilities for AI safety

- Create a whole-process, all-element governance chain

- Foster safe, reliable, equitable, and transparent AI technical research, development, and application

- Promote the healthy development and regulated application of AI

- Effectively safeguard national sovereignty, security, and development interests

- Protect the legitimate rights and interests of citizens, legal persons, and other organizations

- Guarantee that AI technology benefits humanity

1.1 Be inclusive and prudent to ensure safety

We encourage development and innovation and take an inclusive approach to AI research, development, and application. We make every effort to ensure AI safety, and will take timely measures to address any risks that threaten national security, harm the public interest, or infringe upon the legitimate rights and interests of individuals.


1.2 Identify risks with agile governance

By closely tracking trends in AI research, development, and application, we identify AI safety risks from two perspectives: the technology itself and its application. We propose tailored preventive measures to mitigate these risks. We follow the evolution of safety risks, swiftly adjusting our governance measures as needed. We are committed to improving the governance mechanisms and methods while promptly responding to issues warranting government oversight.

1.3 Integrate technology and management for coordinated response

We adopt a comprehensive safety governance approach that integrates technology and management to prevent and address various safety risks throughout the entire process of AI research, development, and application. Within the AI research, development, and application chain, it is essential to ensure that all relevant parties, including model and algorithm researchers and developers, service providers, and users, assume their respective responsibilities for AI safety. This approach fully leverages the roles of governance mechanisms involving government oversight, industry self-regulation, and public scrutiny.

1.4 Promote openness and cooperation for joint governance and shared benefits

We promote international cooperation on AI safety governance, with the best practices shared worldwide. We advocate establishing open platforms and advance efforts to build broad consensus on a global AI governance system through dialogue and cooperation across various disciplines, fields, regions, and nations.

2. Framework for AI safety governance

Based on the notion of risk management, this framework outlines control measures to address different types of AI safety risks through technological and managerial strategies. As AI research, development, and application rapidly evolve, leading to changes in the forms, impacts, and our perception of safety risks, it is necessary to continuously update control measures and invite all stakeholders to refine the governance framework.

2.1 Safety and security risks

By examining the characteristics of AI technology and its application scenarios across various industries and fields, we pinpoint safety and security risks and potential dangers that are inherently linked to the technology itself and its application.

2.2 Technical countermeasures

Regarding models and algorithms, training data, computing facilities, products and services, and application scenarios, we propose targeted technical measures to improve the safety, fairness, reliability, and robustness of AI products and applications. These measures include secure software development, data quality improvement, construction and operations security enhancement, and conducting evaluation, monitoring, and reinforcement activities.

2.3 Comprehensive governance measures

In accordance with the principle of coordinated efforts and joint governance, we clarify the measures that all stakeholders, including technology research institutions, product and service providers, users, government agencies, industry associations, and social organizations, should take to identify, prevent, and respond to AI safety risks.

2.4 Safety guidelines for AI development and application

We propose several safety guidelines for AI model and algorithm developers, AI service providers, users in key areas, and general users to develop and apply AI technology.

3. Classification of AI safety risks

Safety risks exist at every stage throughout the AI chain, from system design to research and development (R&D), training, testing, deployment, utilization, and maintenance. These risks stem from inherent technical flaws as well as misuse, abuse, and malicious use of AI.

3.1 AI's inherent safety risks

3.1.1 Risks from models and algorithms

(a) Risks of explainability

AI algorithms, represented by deep learning, have complex internal workings. Their black-box or grey-box inference process results in unpredictable and untraceable outputs, making it challenging to quickly rectify them or trace their origins for accountability should any anomalies arise.

(b) Risks of bias and discrimination

During the algorithm design and training process, personal biases may be introduced, either intentionally or unintentionally. Additionally, poor-quality datasets can lead to biased or discriminatory outcomes in the algorithm's design and outputs, including discriminatory content regarding ethnicity, religion, nationality, and region.

(c) Risks of robustness

As deep neural networks are normally non-linear and large in size, AI systems are susceptible to complex and changing operational environments or malicious interference and inducements, possibly leading to various problems like reduced performance and decision-making errors.

(d) Risks of stealing and tampering

Core algorithm information, including parameters, structures, and functions, faces risks of inversion attacks, stealing, modification, and even backdoor injection, which can lead to infringement of intellectual property rights (IPR) and leakage of business secrets. It can also lead to unreliable inference, wrong decision outputs, and even operational failures.

(e) Risks of unreliable output

Generative AI can produce hallucinations, meaning that an AI model generates untruthful or unreasonable content but presents it as if it were fact, leading to biased and misleading information.

(f) Risks of adversarial attack

Attackers can craft well-designed adversarial examples to subtly mislead, influence, and even manipulate AI models, causing incorrect outputs and potentially leading to operational failures.
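As a minimal, hypothetical sketch of the mechanism described above, consider a toy linear classifier: because its gradient with respect to the input is simply its weight vector, a small signed perturbation can flip the decision. All weights, inputs, and the perturbation budget below are illustrative assumptions, not part of the Framework.

```python
# Toy linear classifier: class 1 if the weighted sum is positive, else 0.
# All values are hypothetical, chosen only to illustrate the attack idea.
w = [1.0, -2.0, 0.5, 1.5]       # model weights (assumed known to the attacker)
x = [0.2, 0.1, -0.3, 0.4]       # a benign input

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

def predict(v):
    return 1 if score(v) > 0 else 0

def sign(t):
    return (t > 0) - (t < 0)

# FGSM-style step: for a linear model the input gradient is w itself, so a
# small step of eps * sign(w) against the current class maximally shifts the score.
eps = 0.5
direction = -1 if predict(x) == 1 else 1
x_adv = [xi + direction * eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # 1: the benign input is classified as positive
print(predict(x_adv))  # 0: a small crafted perturbation flips the decision
```

Each coordinate moves by at most 0.5, yet the score drops from 0.45 to about -2.05, which is why robustness against such well-designed inputs is treated as a distinct risk.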

3.1.2 Risks from data

(a) Risks of illegal collection and use of data

The collection of AI training data and the interaction with users during service provision pose security risks, including collecting data without consent and improper use of data and personal information.

(b) Risks of improper content and poisoning in training data

If the training data includes illegal or harmful information like false, biased, and IPR-infringing content, or lacks diversity in its sources, the output may include harmful content like illegal, malicious, or extreme information. Training data is also at risk of being poisoned through tampering, error injection, or misleading actions by attackers. This can interfere with the model's probability distribution, reducing its accuracy and reliability.
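A deliberately tiny, hypothetical illustration of the poisoning effect described above: flipping a fraction of training labels shifts even a trivial majority-class model and degrades its accuracy. The data and the "model" are invented for this sketch only.

```python
# Hypothetical training labels: the true majority class is 1.
clean_labels = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]

# An attacker tampers with the first six examples, flipping their labels to 0.
poisoned_labels = [0, 0, 0, 0, 0, 0] + clean_labels[6:]

def majority(labels):
    """A trivial 'model': always predict the most frequent training label."""
    return max(set(labels), key=labels.count)

def accuracy(predicted_label, true_labels):
    return sum(lbl == predicted_label for lbl in true_labels) / len(true_labels)

print(accuracy(majority(clean_labels), clean_labels))     # 0.8 before poisoning
print(accuracy(majority(poisoned_labels), clean_labels))  # 0.2 after poisoning
```

The same shift happens, less visibly, in a real model's learned probability distribution, which is why the text stresses both source diversity and tamper detection.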

(c) Risks of unregulated training data annotation

Issues with training data annotation, such as incomplete annotation guidelines, incapable annotators, and errors in annotation, can affect the accuracy, reliability, and effectiveness of models and algorithms. Moreover, they can introduce training biases, amplify discrimination, reduce generalization abilities, and result in incorrect outputs.

(d) Risks of data leakage

In AI research, development, and applications, issues such as improper data processing, unauthorized access, malicious attacks, and deceptive interactions can lead to data and personal information leaks.

3.1.3 Risks from AI systems

(a) Risks of exploitation through defects and backdoors

The standardized APIs, feature libraries, and toolkits used in the design, training, and verification stages of AI algorithms and models, as well as development interfaces and execution platforms, may contain logical flaws and vulnerabilities. These weaknesses can be exploited, and in some cases, backdoors can be intentionally embedded, posing significant risks of being triggered and used for attacks.

(b) Risks of computing infrastructure security

The computing infrastructure underpinning AI training and operations, which relies on diverse and ubiquitous computing nodes and various types of computing resources, faces risks such as malicious consumption of computing resources and cross-boundary transmission of security threats at the computing infrastructure layer.

(c) Risks of supply chain security

The AI industry relies on a highly globalized supply chain. However, certain countries may use unilateral coercive measures, such as technology barriers and export restrictions, to create development obstacles and maliciously disrupt the global AI supply chain. This can lead to significant risks of supply disruptions for chips, software, and tools.

3.2 Safety risks in AI applications

3.2.1 Cyberspace risks

(a) Risks of information and content safety

AI-generated or synthesized content can lead to the spread of false information, discrimination and bias, privacy leakage, and infringement issues, threatening the safety of citizens' lives and property, national security, and ideological security, and causing ethical risks. If users' inputs contain harmful content, the model may output illegal or damaging information without robust security mechanisms.

(b) Risks of confusing facts, misleading users, and bypassing authentication

AI systems and their outputs, if not clearly labeled, can make it difficult for users to discern whether they are interacting with AI and to identify the source of generated content. This can impede users' ability to determine the authenticity of information, leading to misjudgment and misunderstanding. Additionally, highly realistic AI-generated images, audio, and videos may circumvent existing identity verification mechanisms, such as facial recognition and voice recognition, rendering these authentication processes ineffective.

(c) Risks of information leakage due to improper usage

Staff of government agencies and enterprises, if failing to use the AI service in a regulated and proper manner, may input internal data and industrial information into the AI model, leading to leakage of work secrets, business secrets, and other sensitive business data.

(d) Risks of abuse for cyberattacks

AI can be used to launch automatic cyberattacks or increase attack efficiency, including exploring and exploiting vulnerabilities, cracking passwords, generating malicious code, sending phishing emails, network scanning, and social engineering attacks. All these lower the threshold for cyberattacks and increase the difficulty of security protection.

(e) Risks of security flaw transmission caused by model reuse

Re-engineering or fine-tuning based on foundation models is commonly used in AI applications. If security flaws occur in foundation models, the risks will be transmitted to downstream models.

3.2.2 Real-world risks

(a) Inducing traditional economic and social security risks


AI is used in finance, energy, telecommunications, traffic, and people's livelihoods, such as self-driving and smart diagnosis and treatment. Hallucinations and erroneous decisions of models and algorithms, along with issues such as system performance degradation, interruption, and loss of control caused by improper use or external attacks, will pose security threats to users' personal safety, property, and socioeconomic security and stability.

(b) Risks of using AI in illegal and criminal activities

AI can be used in traditional illegal or criminal activities related to terrorism, violence, gambling, and drugs, such as teaching criminal techniques, concealing illicit acts, and creating tools for illegal and criminal activities.

(c) Risks of misuse of dual-use items and technologies

Due to improper use or abuse, AI can pose serious risks to national security, economic security, and public health security, such as greatly reducing the capability requirements for non-experts to design, synthesize, acquire, and use nuclear, biological, and chemical weapons and missiles, or designing cyber weapons that launch network attacks on a wide range of potential targets through methods like automatic vulnerability discovery and exploitation.

3.2.3 Cognitive risks

(a) Risks of amplifying the effects of "information cocoons"

AI can be extensively utilized for customized information services, collecting user information, and analyzing types of users, their needs, intentions, preferences, habits, and even mainstream public awareness over a certain period. It can then be used to offer formulaic and tailored information and services, aggravating the effects of "information cocoons."

(b) Risks of usage in launching cognitive warfare

AI can be used to make and spread fake news, images, audio, and videos, propagate content of terrorism, extremism, and organized crime, interfere in the internal affairs, social systems, and social order of other countries, and jeopardize the sovereignty of other countries. AI can shape public values and cognitive thinking with social media bots gaining discourse power and agenda-setting power in cyberspace.

3.2.4 Ethical risks

(a) Risks of exacerbating social discrimination and prejudice, and widening the intelligence divide

AI can be used to collect and analyze human behaviors, social status, economic status, and individual personalities, labeling and categorizing groups of people to treat them discriminatorily, thus causing systematic and structural social discrimination and prejudice. At the same time, the intelligence divide would widen among regions.

(b) Risks of challenging traditional social order

The development and application of AI may lead to tremendous changes in production tools and relations, accelerating the reconstruction of traditional industry modes, transforming traditional views on employment, fertility, and education, and bringing challenges to the stable functioning of traditional social order.


(c) Risks of AI becoming uncontrollable in the future

With the fast development of AI technologies, there is a risk of AI autonomously acquiring external resources, conducting self-replication, becoming self-aware, seeking external power, and attempting to seize control from humans.

4. Technological measures to address risks

In response to the above risks, AI developers, service providers, and system users should prevent risks by taking technological measures in the fields of training data, computing infrastructure, models and algorithms, product services, and application scenarios.

4.1 Addressing AI's inherent safety risks

4.1.1 Addressing risks from models and algorithms

(a) Explainability and predictability of AI should be constantly improved to provide clear explanations for the internal structure, reasoning logic, technical interfaces, and output results of AI systems, accurately reflecting the process by which AI systems produce outcomes.

(b) Secure development standards should be established and implemented in the design, R&D, deployment, and maintenance processes to eliminate as many security flaws and discrimination tendencies in models and algorithms as possible and enhance robustness.


4.1.2 Addressing risks from data

(a) Security rules on data collection and usage, and on processing personal information, should be abided by in all procedures of training data and user interaction data, including data collection, storage, usage, processing, transmission, provision, publication, and deletion. This aims to fully ensure users' legitimate rights stipulated by laws and regulations, such as their rights to control, to be informed, and to choose.

(b) Protection of IPR should be strengthened to prevent infringement on IPR in stages such as selecting training data and outputting results.

(c) Training data should be strictly selected to ensure exclusion of sensitive data in high-risk fields such as nuclear, biological, and chemical weapons and missiles.

(d) Data security management should be strengthened to comply with data security and personal information protection standards and regulations if training data contains sensitive personal information and important data.

(e) To use truthful, precise, objective, and diverse training data from legitimate sources, and filter ineffective, wrong, and biased data in a timely manner.

(f) The cross-border provision of AI services should comply with the regulations on cross-border data flow. The external provision of AI models and algorithms should comply with export control requirements.

4.1.3 Addressing risks from AI systems

(a) To properly disclose the principles, capacities, application scenarios, and safety risks of AI technologies and products, to clearly label outputs, and to constantly make AI systems more transparent.

(b) To enhance the risk identification, detection, and mitigation of platforms where multiple AI models or systems congregate, so as to prevent malicious acts or attacks and invasions that target the platforms from impacting the AI models or systems they support.

(c) To strengthen the capacity of constructing, managing, and operating AI computing platforms and AI system services safely, with an aim to ensure uninterrupted infrastructure operation and service provision.

(d) To fully consider the supply chain security of the chips, software, tools, computing infrastructure, and data sources adopted for AI systems. To track the vulnerabilities and flaws of both software and hardware products and make timely repairs and reinforcement to ensure system security.

4.2 Addressing safety risks in AI applications

4.2.1 Addressing cyberspace risks

(a) A security protection mechanism should be established to prevent models from being interfered with or tampered with during operation, to ensure reliable outputs.

(b) A data safeguard should be set up to make sure that AI systems comply with applicable laws and regulations when outputting sensitive personal information and important data.

4.2.2 Addressing real-world risks

(a) To establish service limitations according to users' actual application scenarios and cut AI systems' features that might be abused. AI systems should not provide services that go beyond the preset scope.

(b) To improve the ability to trace the end use of AI systems to prevent high-risk application scenarios such as the manufacturing of weapons of mass destruction, like nuclear, biological, and chemical weapons and missiles.

4.2.3 Addressing cognitive risks

(a) To identify unexpected, untruthful, and inaccurate outputs via technological means, and regulate them in accordance with laws and regulations.

(b) Strict measures should be taken to prevent abuse of AI systems that collect, connect, gather, analyze, and dig into users' inquiries to profile their identity, preferences, and personal mindset.

(c) To intensify R&D of AI-generated content (AIGC) testing technologies, aiming to better prevent, detect, and counter cognitive warfare.

4.2.4 Addressing ethical risks

(a) Training data should be filtered and outputs should be verified during algorithm design, model training and optimization, service provision, and other processes, in an effort to prevent discrimination based on ethnicity, belief, nationality, region, gender, age, occupation, and health factors, among others.

(b) AI systems applied in key sectors, such as government departments, critical information infrastructure, and areas directly affecting public safety and people's health and safety, should be equipped with highly efficient emergency management and control measures.

5. Comprehensive governance measures

While adopting technological controls, we should formulate and refine comprehensive AI safety and security risk governance mechanisms and regulations that engage multi-stakeholder participation, including technology R&D institutions, service providers, users, government authorities, industry associations, and social organizations.

5.1 To implement tiered and category-based management for AI applications. We should classify and grade AI systems based on their features, functions, and application scenarios, and set up a testing and assessment system based on AI risk levels. We should bolster end-use management of AI, and impose requirements on the adoption of AI technologies by specific users and in specific scenarios, thereby preventing AI system abuse. We should register AI systems whose computing and reasoning capacities have reached a certain threshold or that are applied in specific industries and sectors, and demand that such systems possess safety protection capacities throughout the lifecycle, including design, R&D, testing, deployment, utilization, and maintenance.
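The tiered, category-based idea above can be sketched as a simple classification rule. The compute threshold, sector names, and tier labels below are illustrative assumptions; the Framework itself does not specify any of them.

```python
# Illustrative only: the sectors, threshold, and tier names are assumptions,
# not values taken from the Framework.
HIGH_RISK_SECTORS = {"critical_infrastructure", "public_health", "finance"}
REGISTRATION_THRESHOLD_FLOPS = 1e25  # hypothetical compute threshold

def risk_tier(training_flops: float, sector: str) -> str:
    """Assign a governance tier from compute scale and deployment sector."""
    if training_flops >= REGISTRATION_THRESHOLD_FLOPS or sector in HIGH_RISK_SECTORS:
        # Registered systems would face lifecycle safety requirements
        # (design, R&D, testing, deployment, utilization, maintenance).
        return "registered"
    return "general"

print(risk_tier(1e26, "retail"))         # registered: exceeds compute threshold
print(risk_tier(1e22, "public_health"))  # registered: high-risk sector
print(risk_tier(1e22, "retail"))         # general
```

The design point is that either trigger, capability scale or deployment context, is sufficient to move a system into the stricter tier.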

5.2 To develop a traceability management system for AI services. We should use digital certificates to label the AI systems serving the public. We should formulate and introduce standards and regulations on AI output labeling, and clarify requirements for explicit and implicit labels throughout key stages including creation sources, transmission paths, and distribution channels, with a view to enabling users to identify and judge information sources and credibility.
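As a rough sketch of the explicit/implicit labeling distinction in 5.2: an explicit label is visible to the user, while an implicit label travels as machine-readable metadata. The field names and the hash-based provenance tag below are assumptions for illustration; real schemes follow dedicated labeling standards and certificate infrastructure.

```python
import hashlib

def label_output(text: str, provider: str) -> dict:
    """Attach an explicit (visible) and an implicit (metadata) label to AI output.
    Field names and the hash tag are illustrative assumptions, not a standard."""
    explicit = f"[AI-generated] {text}"  # visible notice shown with the content
    implicit = {
        "provider": provider,  # creation source
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    return {"display": explicit, "metadata": implicit}

record = label_output("Sample answer.", "example-provider")
print(record["display"])               # [AI-generated] Sample answer.
print(record["metadata"]["provider"])  # example-provider
```

The explicit label serves the reader directly; the implicit metadata is what a traceability system would check along transmission paths and distribution channels.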

5.3 To improve AI data security and personal information protection regulations. We should explicate the requirements for data security and personal information protection in various stages such as AI training, labeling, utilization, and output, based on the features of AI technologies and applications.

5.4 To create a responsible AI R&D and application system. We should propose pragmatic instructions and best practices to uphold the people-centered approach and adhere to the principle of developing AI for good in AI R&D and application, and continuously align AI's design, R&D, and application processes with such values and ethics. We should explore copyright protection, development, and utilization systems that adapt to the AI era and continuously advance the construction of high-quality foundational corpora and datasets to provide premium resources for the safe development of AI. We should establish AI-related ethical review standards, norms, and guidelines to improve the ethical review system.

5.5 To strengthen AI supply chain security. We should promote knowledge sharing in AI, make AI technologies available to the public under open-source terms, and jointly develop AI chips, frameworks, and software. We should guide the industry to build an open ecosystem, enhance the diversity of supply chain sources, and ensure the security and stability of the AI supply chain.

5.6 To advance research on AI explainability. We should organize and conduct research on the transparency, trustworthiness, and error-correction mechanisms in AI decision-making from the perspectives of machine learning theory, training methods, and human-computer interaction. Continuous efforts should be made to enhance the explainability and predictability of AI to prevent malicious consequences resulting from unintended decisions made by AI systems.

5.7 To share information on, and improve emergency response to, AI safety risks and threats. We should continuously track and analyze security vulnerabilities, defects, risks, threats, and safety incidents related to AI technologies, software and hardware products, services, and other aspects. We should coordinate with relevant developers and service providers to establish a mechanism for reporting and sharing information on risks and threats. We should establish an emergency response mechanism for AI safety and security incidents, formulate emergency plans, conduct emergency drills, and handle AI safety hazards, AI security threats, and incidents in a timely, rapid, and effective manner.

5.8 To enhance the training of AI safety talents. We should promote the development of AI safety education in parallel with AI disciplines. We should leverage schools and research institutions to strengthen talent cultivation in the fields of design, development
