
EUROPEAN UNION AGENCY FOR CYBERSECURITY (ENISA)

CYBERSECURITY OF AI AND STANDARDISATION

MARCH 2023

ABBREVIATIONS

AI: Artificial Intelligence
CEN-CENELEC: European Committee for Standardisation - European Committee for Electrotechnical Standardisation
CIA: Confidentiality, Integrity and Availability
EN: European Standard
ESO: European Standardisation Organisation
ETSI: European Telecommunications Standards Institute
GR: Group Report
ICT: Information and Communications Technology
ISG: Industry Specification Group
ISO: International Organization for Standardization
IT: Information Technology
JTC: Joint Technical Committee
ML: Machine Learning
NIST: National Institute of Standards and Technology
R&D: Research and Development
SAI: Security of Artificial Intelligence
SC: Subcommittee
SDO: Standards-Developing Organisation
TR: Technical Report
TS: Technical Specification
WI: Work Item

ABOUT ENISA

The European Union Agency for Cybersecurity, ENISA, is the Union's agency dedicated to achieving a high common level of cybersecurity across Europe. Established in 2004 and strengthened by the EU Cybersecurity Act, the European Union Agency for Cybersecurity contributes to EU cyber policy, enhances the trustworthiness of ICT products, services and processes with cybersecurity certification schemes, cooperates with Member States and EU bodies, and helps Europe prepare for the cyber challenges of tomorrow. Through knowledge sharing, capacity building and awareness raising, the Agency works together with its key stakeholders to strengthen trust in the connected economy, to boost resilience of the Union's infrastructure, and, ultimately, to keep Europe's society and citizens digitally secure. More information about ENISA and its work can be found here: www.enisa.europa.eu.

CONTACT

For contacting the authors please use team@enisa.europa.eu.
For media enquiries about this paper, please use press@enisa.europa.eu.

AUTHORS

P. Bezombes, S. Brunessaux, S. Cadzow

EDITOR(S)

ENISA: E. Magonara, S. Gorniak, P. Magnabosco, E. Tsekmezoglou

ACKNOWLEDGEMENTS

We would like to thank the Joint Research Centre and the European Commission for their active contribution and comments during the drafting stage. Also, we would like to thank the ENISA Ad Hoc Expert Group on Artificial Intelligence (AI) cybersecurity for the valuable feedback and comments in validating this report.

LEGAL NOTICE

This publication represents the views and interpretations of ENISA, unless stated otherwise. It does not endorse a regulatory obligation of ENISA or of ENISA bodies pursuant to the Regulation (EU) No 2019/881.

ENISA has the right to alter, update or remove the publication or any of its contents. It is intended for information purposes only and it must be accessible free of charge. All references to it or its use as a whole or partially must contain ENISA as its source.

Third-party sources are quoted as appropriate. ENISA is not responsible or liable for the content of the external sources including external websites referenced in this publication.

Neither ENISA nor any person acting on its behalf is responsible for the use that might be made of the information contained in this publication.

ENISA maintains its intellectual property rights in relation to this publication.

COPYRIGHT NOTICE

© European Union Agency for Cybersecurity (ENISA), 2023

This publication is licensed under CC-BY 4.0. "Unless otherwise noted, the reuse of this document is authorised under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence (/licenses/by/4.0/). This means that reuse is allowed, provided that appropriate credit is given and any changes are indicated".

Cover image ©.

For any use or reproduction of photos or other material that is not under the ENISA copyright, permission must be sought directly from the copyright holders.

ISBN 978-92-9204-616-3, DOI 10.2824/277479, TP-03-23-011-EN-C

TABLE OF CONTENTS

1. INTRODUCTION
1.1 DOCUMENT PURPOSE AND OBJECTIVES
1.2 TARGET AUDIENCE AND PREREQUISITES
1.3 STRUCTURE OF THE STUDY
2. SCOPE OF THE REPORT: DEFINITION OF AI AND CYBERSECURITY OF AI
2.1 ARTIFICIAL INTELLIGENCE
2.2 CYBERSECURITY OF AI
3. STANDARDISATION IN SUPPORT OF CYBERSECURITY OF AI
3.1 RELEVANT ACTIVITIES BY THE MAIN STANDARDS-DEVELOPING ORGANISATIONS
3.1.1 CEN-CENELEC
3.1.2 ETSI
3.1.3 ISO-IEC
3.1.4 Others
4. ANALYSIS OF COVERAGE
4.1 STANDARDISATION IN SUPPORT OF CYBERSECURITY OF AI - NARROW SENSE
4.2 STANDARDISATION IN SUPPORT OF THE CYBERSECURITY OF AI - TRUSTWORTHINESS
4.3 CYBERSECURITY AND STANDARDISATION IN THE CONTEXT OF THE DRAFT AI ACT
5. CONCLUSIONS
5.1 WRAP-UP
5.2 RECOMMENDATIONS
5.2.1 Recommendations to all organisations
5.2.2 Recommendations to standards-developing organisations
5.2.3 Recommendations in preparation for the implementation of the draft AI Act
5.3 FINAL OBSERVATIONS
ANNEX A
A.1 SELECTION OF ISO 27000 SERIES STANDARDS RELEVANT TO THE CYBERSECURITY OF AI
A.2 RELEVANT ISO/IEC STANDARDS PUBLISHED OR PLANNED/UNDER DEVELOPMENT
A.3 CEN-CENELEC JOINT TECHNICAL COMMITTEE 21 AND DRAFT AI ACT REQUIREMENTS
A.4 ETSI ACTIVITIES AND DRAFT AI ACT REQUIREMENTS

EXECUTIVE SUMMARY

The overall objective of the present document is to provide an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of artificial intelligence (AI), assess their coverage and identify gaps in standardisation. It does so by considering the specificities of AI, and in particular machine learning, and by adopting a broad view of cybersecurity, encompassing both the 'traditional' confidentiality-integrity-availability paradigm and the broader concept of AI trustworthiness. Finally, the report examines how standardisation can support the implementation of the cybersecurity aspects embedded in the proposed EU regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final) (draft AI Act).

The report describes the standardisation landscape covering AI, by depicting the activities of the main standards-developing organisations (SDOs) that seem to be guided by concern about insufficient knowledge of the application of existing techniques to counter threats and vulnerabilities arising from AI. This results in the ongoing development of ad hoc reports and guidance, and of ad hoc standards.

The report argues that existing general-purpose technical and organisational standards (such as ISO-IEC 27001 and ISO-IEC 9001) can contribute to mitigating some of the risks faced by AI with the help of specific guidance on how they can be applied in an AI context. This consideration stems from the fact that, in essence, AI is software and therefore software security measures can be transposed to the AI domain.

The report also specifies that this approach is not exhaustive and that it has some limitations. For example, while the report focuses on software aspects, the notion of AI can include both technical and organisational elements beyond software, such as hardware or infrastructure. Other examples include the fact that determining appropriate security measures relies on a system-specific analysis, and the fact that some aspects of cybersecurity are still the subject of research and development, and therefore might not be mature enough to be exhaustively standardised. In addition, existing standards seem not to address specific aspects such as the traceability and lineage of both data and AI components, or metrics on, for example, robustness.

The report also looks beyond the mere protection of assets, as cybersecurity can be considered as instrumental to the correct implementation of trustworthiness features of AI and, conversely, the correct implementation of trustworthiness features is key to ensuring cybersecurity. In this context, it is noted that there is a risk that trustworthiness is handled separately within AI-specific and cybersecurity-specific standardisation initiatives. One example of an area where this might happen is conformity assessment.

Last but not least, the report complements the observations above by extending the analysis to the draft AI Act. Firstly, the report stresses the importance of the inclusion of cybersecurity aspects in the risk assessment of high-risk systems in order to determine the cybersecurity risks that are specific to the intended use of each system. Secondly, the report highlights the lack of standards covering the competences and tools of the actors performing conformity assessments. Thirdly, it notes that the governance systems drawn up by the draft AI Act and the Cybersecurity Act (CSA)1 should work in harmony to avoid duplication of efforts at national level.

Finally, the report concludes that some standardisation gaps might become apparent only as the AI technologies advance and with further study of how standardisation can support cybersecurity.

1 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) (https://eur-lex.europa.eu/eli/reg/2019/881/oj).

1. INTRODUCTION

1.1 DOCUMENT PURPOSE AND OBJECTIVES

The overall objective of the present document is to provide an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of artificial intelligence (AI), assess their coverage and identify gaps in standardisation. The report is intended to contribute to the activities preparatory to the implementation of the proposed EU regulation laying down harmonised rules on artificial intelligence (COM(2021) 206 final) (the draft AI Act) on aspects relevant to cybersecurity.

1.2 TARGET AUDIENCE AND PREREQUISITES

The target audience of this report includes a number of different stakeholders that are concerned by the cybersecurity of AI and standardisation.

The primary addressees of this report are standards-developing organisations (SDOs) and public sector/government bodies dealing with the regulation of AI technologies.

The ambition of the report is to be a useful tool that can inform a broader set of stakeholders of the role of standards in helping to address cybersecurity issues, in particular:

• academia and the research community;
• the AI technical community, AI cybersecurity experts and AI experts (designers, developers, machine learning (ML) experts, data scientists, etc.) with an interest in developing secure solutions and in integrating security and privacy by design in their solutions;
• businesses (including small and medium-sized enterprises) that make use of AI solutions and/or are engaged in cybersecurity, including operators of essential services.

The reader is expected to have a degree of familiarity with software development and with the confidentiality, integrity and availability (CIA) security model, and with the techniques of both vulnerability analysis and risk analysis.

1.3 STRUCTURE OF THE STUDY

The report is structured as follows:

• definition of the perimeter of the analysis (Chapter 2): introduction to the concepts of AI and cybersecurity of AI;
• inventory of standardisation activities relevant to the cybersecurity of AI (Chapter 3): overview of standardisation activities (both AI-specific and non-AI-specific) supporting the cybersecurity of AI;
• analysis of coverage (Chapter 4): analysis of the coverage of the most relevant standards identified in Chapter 3 with respect to the CIA security model and to trustworthiness characteristics supporting cybersecurity;
• wrap-up and conclusions (Chapter 5): building on the previous sections, recommendations on actions to ensure standardisation support to the cybersecurity of AI, and on preparation for the implementation of the draft AI Act.

2. SCOPE OF THE REPORT: DEFINITION OF AI AND CYBERSECURITY OF AI

2.1 ARTIFICIAL INTELLIGENCE

Understanding AI and its scope seems to be the very first step towards defining the cybersecurity of AI. Still, a clear definition and scope of AI have proven to be elusive. The concept of AI is evolving and the debate over what it is, and what it is not, is still largely unresolved, partly due to the influence of marketing behind the term 'AI'. Even at the scientific level, the exact scope of AI remains very controversial. In this context, numerous forums have adopted/proposed definitions of AI.2

Box 1: Example - Definition of AI, as included in the draft AI Act

In its draft version, the AI Act proposes a definition in Article 3(1): 'artificial intelligence system' (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. The techniques and approaches referred to in Annex I are:

• machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
• logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
• statistical approaches, Bayesian estimation, search and optimisation methods.

In line with previous ENISA work, which considers it the driving force in terms of AI technologies, the report mainly focuses on ML. This choice is further supported by the fact that there seems to be a general consensus that ML techniques are predominant in current AI applications. Last but not least, it is considered that the specificities of ML result in vulnerabilities that affect the cybersecurity of AI in a distinctive manner. It is to be noted that the report considers AI from a lifecycle perspective3. Considerations concerning ML only have been flagged.

2 For example, the United Nations Educational, Scientific and Cultural Organization (UNESCO) in the 'First draft of the recommendation on the ethics of artificial intelligence', and the European Commission's High-Level Expert Group on Artificial Intelligence.
3 See the lifecycle approach portrayed in the ENISA report Securing Machine Learning Algorithms (https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms).

Box 2: Specificities of machine learning - examples from a supervised learning model4

ML systems cannot achieve 100 % in both precision and recall. Depending on the situation, ML needs to trade off precision for recall and vice versa. This means that AI systems will, once in a while, make wrong predictions. This is all the more important because it is still difficult to understand when the AI system will fail, but it eventually will.

This is one of the reasons for the need for explainability of AI systems. In essence, algorithms are deemed to be explainable if the decisions they make can be understood by a human (e.g., a developer or an auditor) and then explained to an end user (ENISA, Securing Machine Learning Algorithms).

A major specific characteristic of ML is that it relies on the use of large amounts of data to develop ML models. Manually controlling the quality of the data can then become impossible. Specific traceability or data quality procedures need to be put in place to ensure that, to the greatest extent possible, the data being used do not contain biases (e.g. forgetting to include faces of people with specific traits), have not been deliberately poisoned (e.g. adding data to modify the outcome of the model) and have not been deliberately or unintentionally mislabelled (e.g. a picture of a dog labelled as a wolf).
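The precision-recall trade-off described in the box can be sketched in a few lines of code. This is an illustrative example only (the scores, labels and function below are not from the report): raising the decision threshold of a binary classifier typically makes its positive predictions more precise while letting more true positives slip through.

```python
# Illustrative sketch of the precision-recall trade-off in a
# binary classifier. Scores and labels are made-up example data.

def precision_recall(scores, labels, threshold):
    """Return (precision, recall) when predicting positive for score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]  # model confidence scores
labels = [1,    1,   0,   1,   0,   1,   0,   0]     # ground-truth classes

for t in (0.5, 0.75):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

With these made-up data, moving the threshold from 0.5 to 0.75 increases precision but lowers recall; no threshold achieves 100 % on both, so the operating point must be chosen for the application at hand.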

2.2 CYBERSECURITY OF AI

AI and cybersecurity have been widely addressed by the literature both separately and in combination. The ENISA report Securing Machine Learning Algorithms5 describes the multidimensional relationship between AI and cybersecurity, and identifies three dimensions:

• cybersecurity of AI: lack of robustness and the vulnerabilities of AI models and algorithms;
• AI to support cybersecurity: AI used as a tool/means to create advanced cybersecurity (e.g., by developing more effective security controls) and to facilitate the efforts of law enforcement and other public authorities to better respond to cybercrime;
• malicious use of AI: malicious/adversarial use of AI to create more sophisticated types of attacks.

The current report focuses on the first of these dimensions, namely the cybersecurity of AI. Still, there are different interpretations of the cybersecurity of AI that could be envisaged:

• a narrow and traditional scope, intended as protection against attacks on the confidentiality, integrity and availability of assets (AI components, and associated data and processes) across the lifecycle of an AI system;
• a broad and extended scope, supporting and complementing the narrow scope with trustworthiness features such as data quality, oversight, robustness, accuracy, explainability, transparency and traceability.

The report adopts a narrow interpretation of cybersecurity, but it also includes considerations about the cybersecurity of AI from a broader and extended perspective. The reason is that links between cybersecurity and trustworthiness are complex and cannot be ignored: the requirements of trustworthiness complement and sometimes overlap with those of AI cybersecurity in ensuring proper functioning. As an example, oversight is necessary not only for the general monitoring of an AI system in a complex environment, but also to detect abnormal behaviours due to cyberattacks. In the same way, a data quality process (including data traceability) is an added value alongside pure data protection from cyberattack. Hence, trustworthiness features such as robustness, oversight, accuracy, traceability, explainability and transparency inherently support and complement cybersecurity.

4 Besides the ones mentioned in the box, the 'false negative rate', the 'false positive rate' and the 'F-measure' are examples of other relevant metrics.
5 https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms

3. STANDARDISATION IN SUPPORT OF CYBERSECURITY OF AI

3.1 RELEVANT ACTIVITIES BY THE MAIN STANDARDS-DEVELOPING ORGANISATIONS

It is recognised that many SDOs are looking at AI and preparing guides and standardisation deliverables to address AI. The rationale for much of this work is that whenever something new (in this instance AI) is developed there is a broad requirement to identify if existing provisions apply to the new domain and how. Such studies may help to understand the nature of the new and to determine if the new is sufficiently divergent from what has gone before to justify, or require, the development and application of new techniques. They could also give detailed guidance on the application of existing techniques to the new, or define additional techniques to fill the gaps.

Still, in the scope of this report, the focus is mainly on standards that can be harmonised. This limits the scope of analysis to those of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), the European Committee for Standardization (CEN) and European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI). CEN and CENELEC may transpose standards from ISO and IEC, respectively, to EU standards under the auspices of, respectively, the Vienna and Frankfurt agreements.

3.1.1 CEN-CENELEC

CEN-CENELEC addresses AI and cybersecurity mainly within two joint technical committees (JTCs).

• JTC 13 'Cybersecurity and data protection' has as its primary objective to transpose relevant international standards (especially from ISO/IEC JTC 1 subcommittee (SC) 27) as European standards (ENs) in the information technology (IT) domain. It also develops 'homegrown' ENs, where gaps exist, in support of EU directives and regulations.
• JTC 21 'Artificial intelligence' is responsible for the development and adoption of standards for AI and related data (especially from ISO/IEC JTC 1 SC 42), and providing guidance to other technical committees concerned with AI.

JTC 13 addresses what is described as the narrow scope of cybersecurity (see Section 2.2). The committee has identified a list of standards from ISO-IEC that are of interest for AI cybersecurity and might be adopted/adapted by CEN-CENELEC based on their technical cooperation agreement. The most prominent identified standards belong to the ISO 27000 series on information security management systems, which may be complemented by the ISO 15408 series for the development, evaluation and/or procurement of IT products with security functionality, as well as sector-specific guidance, e.g. ISO/IEC 27019:2017 Information technology - Security techniques - Information security controls for the energy utility industry (see Annex A.1 for the full list of relevant ISO 27000 series standards that have been identified by CEN-CENELEC).

In addition, the following guidance and use case documents are drafts under development (some at a very early stage) and explore AI more specifically. It is premature to evaluate the impacts of these standards.

• ISO/IEC AWI 27090, Cybersecurity - Artificial intelligence - Guidance for addressing security threats and failures in artificial intelligence systems: the document aims to provide information to organisations to help them better understand the consequences of security threats to AI systems, throughout their lifecycles, and describes how to detect and mitigate such threats. The document is at the preparatory stage.
• ISO/IEC CD TR 27563, Cybersecurity - Artificial intelligence - Impact of security and privacy in artificial intelligence use cases: the document is at the committee stage.

By design, JTC 21 is addressing the extended scope of cybersecurity (see Section 4.2), which includes trustworthiness characteristics, data quality, AI governance, AI management systems, etc. Given this, a first list of ISO-IEC/SC 42 standards has been identified as having direct applicability to the draft AI Act and is being considered for adoption/adaptation by JTC 21:

• ISO/IEC 22989:2022, Artificial intelligence concepts and terminology (published);
• ISO/IEC 23053:2022, Framework for artificial intelligence (AI) systems using machine learning (ML) (published);
• ISO/IEC DIS 42001, AI management system (under development);
• ISO/IEC 23894, Guidance on AI risk management (publication pending);
• ISO/IEC TS 4213, Assessment of machine learning classification performance (published);
• ISO/IEC FDIS 24029-2, Methodology for the use of formal methods (under development);
• ISO/IEC CD 5259 series, Data quality for analytics and ML (under development).

In addition, JTC 21 has identified two gaps and has accordingly launched two ad hoc groups with the ambition of preparing new work item proposals (NWIPs) supporting the draft AI Act. The potential future standards are:

• AI systems risk catalogue and risk management;
• AI trustworthiness characterisation (e.g., robustness, accuracy, safety, explainability, transparency and traceability).

Finally, it has been determined that ISO-IEC 42001 on AI management systems and ISO-IEC 27001 on cybersecurity management systems may be complemented by ISO 9001 on quality management systems in order to have proper coverage of AI and data quality management.

3.1.2 ETSI

ETSI has set up a dedicated Operational Co-ordination Group on Artificial Intelligence, which coordinates the standardisation activities related to AI that are handled in the technical bodies, committees and industry specification groups (ISGs) of ETSI. In addition, ETSI has a specific group on the security of AI (SAI) that has been active since 2019 in developing reports that give a more detailed understanding of the problems that A
