Securing Critical Infrastructure in the Age of AI

This workshop and the production of the final report were made possible by a generous contribution from the Microsoft Corporation. The views in this document are strictly the authors' and do not necessarily represent the views of the U.S. government, the Microsoft Corporation, or of any institution, organization, or entity with which the authors may be affiliated.

Reference to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply an endorsement, recommendation, or favoring by the U.S. government, including the U.S. Department of the Treasury, the U.S. Department of Homeland Security, and the Cybersecurity and Infrastructure Security Agency, or any other institution, organization, or entity with which the authors may be affiliated.


Executive Summary

As artificial intelligence capabilities continue to improve, critical infrastructure (CI) operators and providers seek to integrate new AI systems across their enterprises; however, these capabilities come with attendant risks and benefits. AI adoption may lead to more capable systems, improvements in business operations, and better tools to detect and respond to cyber threats. At the same time, AI systems will also introduce new cyber threats that CI providers must contend with. Last year's AI executive order directed the various Sector Risk Management Agencies (SRMAs) to "evaluate and provide … an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber-attacks."

Despite the executive order's recent direction, AI use in critical infrastructure is not new. AI tools that excel in prediction and anomaly detection have been used for cyber defense and other business activities for many years. For example, providers have long relied on commercial information technology solutions that are powered by AI to detect malicious activity. What has changed is that new generative AI techniques have become more capable and offer novel opportunities for CI operators. Potential uses include more capable chatbots for customer interaction, enhanced threat intelligence synthesis and prioritization, faster code production processes, and, more recently, AI agents that can perform actions based on user prompts.

CI operators and sectors are attempting to navigate this rapidly changing and uncertain landscape. Fortunately, there are analogues from cybersecurity that we can draw on. Years ago, innovations in network connectivity provided CI operators with a way to remotely monitor and operate many systems. However, this also created new attack vectors for malicious actors. Past lessons can help inform how organizations approach the integration of AI systems. Today, risk may arise in two ways: from AI vulnerabilities or failures in systems deployed within CI and from the malicious use of AI systems against CI sectors.

This workshop report provides technical mitigations and policy recommendations for managing the use of AI in critical infrastructure. Several findings and recommendations emerged from this discussion.

● Resource disparities between CI providers within and across sectors have a major impact on the prospects of AI adoption and management of AI-related risks. Further programs are needed to support less well-resourced providers with AI-related assistance, including financial resources, data for training models, requisite talent and staff, forums for communication, and a voice in the broader AI discourse. Expanding formal and informal means of mutual assistance could help close the disparity gap. These initiatives share resources, talent, and knowledge across organizations to improve the security and resiliency of the sector as a whole. They include formal programs, such as sharing personnel in response to incidents or emergencies, and informal efforts such as developing best practices or vetting products and services.

● There is a recognized need to integrate AI risk management into existing enterprise risk management practices; however, ownership of AI risk can be ambiguous within current corporate structures. This risk was referred to by one participant as the AI "hot potato" being tossed around the C-suite. A clear designation of responsibility for AI risk within the corporate structure is needed.

● Ambiguity between AI safety and AI security also poses substantial challenges to operationalizing AI risk management. Organizations are often unsure how to apply guidance from the National Institute of Standards and Technology's recently published AI risk management framework alongside the cybersecurity framework. Further guidance on how to implement a unified approach to AI risk is needed. Tailoring and prioritizing this guidance would help make it more accessible to less well-resourced providers and those with specific, often bespoke, needs.

● While there are well-established channels for cybersecurity information sharing, there is no analogue in the context of AI. SRMAs should leverage existing venues, such as the Information Sharing and Analysis Centers, for AI security information sharing. Sharing AI safety issues, mitigations, and best practices is also critical, but the channels to do so are unclear. Clarity on what constitutes an AI incident, which incidents should be reported, the thresholds for reporting, and whether existing cyber-incident reporting channels are sufficient would be valuable. To promote cross-sector visibility and analysis that spans both AI safety and security, the sectors should consider establishing a centralized analysis center for AI safety and security.

● Skills to manage cyber and AI risks are similar but not identical. The implementation of AI systems will require expertise that many CI providers do not currently have. As such, providers and operators should actively upskill their current workforces and seek opportunities to cross-train staff with relevant cybersecurity skills to effectively address the range of AI- and cyber-related risks.

● Generative AI introduces new issues that can be more difficult to manage and that warrant close examination. CI providers should remain cautious and informed before adopting newer AI technologies, particularly for sensitive or mission-critical tasks. Assessing whether an organization is even ready to adopt these systems is a critical first step.


Table of Contents

Executive Summary
Introduction
Background
Research Methodology
The Current and Future Use of AI in Critical Infrastructure
Figure 1. Examples of AI Use Cases in Critical Infrastructure by Sector
Risks, Opportunities, and Barriers Associated with AI
Risks
Opportunities
Barriers to Adoption
Observations
Disparities Between and Within Sectors
Unclear Boundary Between AI and Cybersecurity
Challenges in AI Risk Management
Fractured Guidance and Regulation
Recommendations
Cross-Cutting Recommendations
Responsible Government Departments and Agencies
Sectors
Organizations
Critical Infrastructure Operators
AI Developers
Authors
Appendix A: Background Research Sources
Government/Intergovernmental
Science/Academia/Nongovernmental Organizations/Federally Funded Research and Development Centers/Industry
Documents Mentioned During Workshop
Endnotes


Introduction

In October 2023, the White House released an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Section 4.3 of the order specifically focuses on the management of AI in critical infrastructure and cybersecurity.1 While regulators debate strategies for governing AI at the state, federal, and international levels, protecting CI remains a top priority for many stakeholders. However, there are numerous outstanding questions on how best to address AI-related risks to CI, given the fractured regulatory landscape and the diversity among the 16 CI sectors.

To address some of these questions, the Center for Security and Emerging Technology (CSET) hosted an in-person workshop in June 2024 that brought together representatives from the U.S. federal government, think tanks, industry, academia, and five CI sectors (communications, information technology, water, energy, and financial services). The discussion was framed around the issue of security in CI, including the risk from both AI-enabled cyber threats and potential vulnerabilities or failures in deployed AI systems. The intention of the workshop was to foster a candid conversation about the current state of AI in critical infrastructure, identify opportunities and risks—particularly related to cybersecurity—presented by AI adoption, and recommend technical mitigations and policy options for managing the use of AI and machine learning in critical systems.

The discussion focused on CI in the United States, with some limited conversation on the global regulatory landscape. This report summarizes the workshop's findings in four primary sections. The Background section contains CSET research on the current and potential future use of AI technologies in various CI sectors. The Risks, Opportunities, and Barriers section addresses these issues associated with AI that participants raised over the course of the workshop. The third section, Observations, categorizes various themes from the discussion, and the report concludes with Recommendations, which are organized by target audience (government, CI sectors, and individual organizations within both the sectors and the AI industry).


Background

In preparation for this workshop, CSET researchers examined the reports submitted by various federal departments and agencies in response to the White House AI executive order, section 4.3. These reports provided insight into how some CI owners and operators are already using AI within their sector, but it was sometimes unclear what types of AI systems CI providers were employing or considering. For example, the U.S. Department of Energy (DOE) summary report overviewed the potential for using AI-directed or AI-assisted systems to support the control of energy infrastructure, but it did not specify whether these were generative AI or traditional models. This was the case for many of the sources and use cases assessed for the background research, spanning information technology (IT), operational technology (OT), and sector-specific use cases. This ambiguity reduces visibility into the current state of AI adoption across the CI sectors, limiting the effectiveness of ecosystem monitoring and risk assessment.

This section summarizes CSET's preliminary research for the workshop and provides examples of many of the current and potential future AI use cases in three sectors—financial services, water, and energy—based on federal agency reporting.

Research Methodology

The U.S. Department of Homeland Security (DHS) recently released guidelines for CI owners and operators that categorize over 150 individual AI use cases into 10 categories.2 While the report encompassed all 16 CI sectors, the use cases were not specified. To identify AI use cases for the sectors that participated in the workshop, we assessed reports from the U.S. Department of the Treasury (financial services), DOE (energy), and the U.S. Environmental Protection Agency (EPA, water). We also examined the AI inventories for each department and agency, but they only included use cases internal to those organizations, not the sectors generally.

The Treasury and DOE reports were written following the AI executive order, were relatively comprehensive, and considered many AI use cases.3 Further use cases in the finance and energy sectors were pulled from nongovernmental sources (e.g., the Journal of Risk and Financial Management and Indigo Advisory Group).4 The EPA sources were dated and lacked details on AI use cases.5 To identify more use cases in the water sector, we assessed literature reviews from Water Resources Management (a forum for publications on the management of water resources) and Water (a journal on water science and technology).6 Although we primarily focused on sources covering U.S. CI, some research encompassed CI abroad. A full list of sources can be found in Appendix A.


The Current and Future Use of AI in Critical Infrastructure

We classify AI use cases in CI into three broad categories: IT, OT, and sector-specific use cases. IT encompasses the use of AI for "traditional" cybersecurity tasks such as network monitoring, anomaly detection, and classification of suspicious emails. All CI sectors use IT, and therefore they all have the potential to use AI in this category. OT encompasses AI use in monitoring or controlling physical systems and infrastructure, such as industrial control systems. Sector-specific use cases include the use of AI for detecting fraud in the financial sector or forecasting power demand in the energy sector. These broad categories provide a shared frame of reference and capture the breadth of AI use cases across sectors. However, they are not meant to be comprehensive or convey the depth of AI use (or lack thereof) across organizations within sectors.

When discussing use cases for CI, we consider a broad spectrum of AI applications. While newer technologies such as generative AI (e.g., large language models) have recently been top of mind for many policymakers, more traditional types of machine learning systems, including predictive AI systems that forecast and identify patterns within data (as opposed to generating content), have long been used in CI. The various AI systems present differing opportunities and challenges, but generative AI introduces new issues that can be more difficult to manage and that warrant close examination. This includes difficulties in interpreting how models process inputs, explaining their outputs, managing unpredictable behaviors, and identifying hallucinations and false information. Even more recently, generative models have been used to power AI agents, enabling these models to take more direct action in the real world. Although these systems are still nascent, their potential to automate tasks—whether routine workstreams or cyberattacks—deserves close watching.

Themes in AI-CI use cases from the reports examined include:

• Many IT use cases employ AI to supplement existing cybersecurity practices and have commonalities across sectors. For example, AI is often used to detect malicious events or threats in IT, be it at a financial firm or water facility. Some AI IT use cases, such as scanning security logs for anomalies, go back to the 1990s. Others have emerged over the past 20 years, such as anomalous or malicious event detection. New potential use cases have surfaced with the recent advent of generative AI, such as mitigating code vulnerabilities and analyzing threat actor behavior. (A minimal illustrative sketch of this kind of log anomaly detection follows this list.)


• Based on reported use cases, there are no explicit examples of generative AI being used in OT. While some applications of traditional AI are being used, such as in infrastructure operational awareness, broader adoption is still fairly limited. This is in part due to concerns over causing errors in critical OT. However, future use cases are being actively considered, such as real-time control of energy infrastructure with humans in the loop.

• Many sector-specific AI use cases seek to improve the reliability, robustness, and efficiency of CI. However, they also raise concerns about data privacy, cybersecurity, AI security, and the need for governance frameworks to ensure responsible AI deployment. It can be more challenging to implement a common risk management framework for these use cases because they are specialized and have limited overlap across sectors.

• AI adoption varies widely across CI sectors. Organizations across each sector have varying technical expertise, funding, experience integrating new technologies, regulatory or legal constraints, and data availability. Moreover, it is not clear whether certain AI use cases were actively being implemented, considered in the near term, or feasible in the long term. Many of the potential AI use cases highlighted in relevant literature are theoretical, with experiments conducted only in laboratory, controlled, or limited settings. One example is a proposed intelligent irrigation system prototype for efficient water usage in agriculture, which was developed using data collected from real-world environments but not tested in the field.7 The feasibility of implementing these applications in practice and across organizations is currently unclear.

• The depth of AI use across organizations within sectors is difficult to assess. There are thousands of organizations across the financial, energy, and water sectors. It is unknown how many organizations within these sectors are using or will use AI, for what purposes, and how the risks from those different use cases vary.
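The first theme above mentions scanning security logs for anomalies as one of the oldest AI use cases in CI-related IT. The sketch below is a minimal, purely illustrative example of what such unsupervised anomaly detection can look like; it is not drawn from the workshop or the agency reports, and the feature names, example values, and the choice of scikit-learn's IsolationForest are assumptions made for demonstration only.

```python
# Illustrative sketch only: unsupervised anomaly detection over simple
# features extracted from security log records, in the spirit of the
# long-standing "scan logs for anomalies" use case described above.
# The feature set and the choice of IsolationForest are assumptions for
# demonstration, not a method recommended by the report.
from dataclasses import dataclass

import numpy as np
from sklearn.ensemble import IsolationForest


@dataclass
class LogRecord:
    """A toy aggregate of one host's activity over one hour."""
    failed_logins: int    # count of failed authentication attempts
    bytes_out_mb: float   # outbound traffic volume in megabytes
    distinct_ports: int   # number of distinct destination ports contacted
    off_hours: int        # 1 if the activity occurred outside business hours


def to_matrix(records: list) -> np.ndarray:
    """Convert log records into a numeric feature matrix for the model."""
    return np.array(
        [[r.failed_logins, r.bytes_out_mb, r.distinct_ports, r.off_hours]
         for r in records],
        dtype=float,
    )


# Mostly routine activity, plus one record that resembles data exfiltration.
history = [LogRecord(2, 40.0, 12, 0) for _ in range(200)]
history.append(LogRecord(35, 900.0, 180, 1))

# Fit an unsupervised model on historical activity. `contamination` is the
# assumed fraction of anomalous records and would be tuned in practice.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(to_matrix(history))

# Score new activity: predict() returns -1 for anomalies and 1 for normal.
new_activity = [LogRecord(1, 35.0, 10, 0), LogRecord(40, 1200.0, 210, 1)]
for record, label in zip(new_activity, model.predict(to_matrix(new_activity))):
    print("ANOMALY" if label == -1 else "normal", record)
```

In practice, CI providers typically obtain this capability through commercial, AI-powered security products rather than building such models in-house, consistent with the report's observation that providers have long relied on AI-enabled commercial IT solutions; any production use would require substantially more careful feature engineering, tuning, and validation.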

Figure 1 aggregates all AI use cases identified in the preliminary research.* Each sector is divided into IT, OT, and sector-specific use cases and subdivided into current/near-term and long-term use cases.

Figure 1. Examples of AI Use Cases in Critical Infrastructure by Sector

Source: CSET (See Appendix A).

* The sources examined during our preliminary research did not contain any current, near-term, or future examples of AI use cases in financial sector OT, current or near-term examples of AI use cases in water sector OT or IT, nor any future AI use cases in energy sector IT.


Risks, Opportunities, and Barriers Associated with AI

As evidenced by the wide range of current and potential use cases for AI in critical infrastructure, many workshop participants expressed interest in adopting AI technologies in their respective sectors. However, many were also concerned about the broad and uncharted spectrum of risks associated with AI adoption, both from external malicious actors and from internal deployment of AI systems. CI sectors also face a variety of barriers to AI adoption, even for use cases that may be immediately beneficial to them. This section will briefly summarize the discussion concerning these three topics: risks, opportunities, and barriers to adoption.

Risks

AI risk is twofold, encompassing both malicious use of AI systems and AI system vulnerabilities or failures. This subsection will address both of these categories, starting with risks from malicious use, which several workshop participants raised concerns about given the current prevalence of cyberattacks on U.S. critical infrastructure. These concerns included how AI might help malicious actors discover new attack vectors, conduct reconnaissance and mapping of complex CI networks, and make cyberattacks more difficult to detect or defend against. AI-powered tools lower the barrier to entry for malicious actors, giving them a new (and potentially low-cost) way to synthesize vast amounts of information to conduct cyber and physical security attacks. However, the addition of AI alone does not necessarily present a novel threat, as CI systems are already targets for various capable and motivated cyber actors.8

Most concerns about AI in this context centered on its potential to enable attacks that may not currently be possible or increase the severity of future attacks. A more transformative use of AI by attackers could involve seeking improved insights as to what systems and data flows to disrupt or corrupt to achieve the greatest impact.

Generative AI capabilities are currently increasing threats to CI providers in certain cases. These threats include enhanced spearphishing, enabled by large language models. Researchers have observed threat actors exploring the capabilities of generative AI systems, which are not necessarily game-changing but can be fairly useful across a wide range of tasks such as scripting, reconnaissance, translation, and social engineering.9 Furthermore, as AI developers strive to improve generative models' capabilities by enabling these models to use external software tools and interact with other digital systems, digital "agents" that can translate general human instructions into executable subtasks may soon be used for cyber offense.


The other risk category participants identified was related to AI adoption, such as the potential for data leakage, a larger cybersecurity attack surface, and greater system complexity. Data leakage was a significant concern, regarding both the possibility of a CI operator's data being stored externally (such as by an AI provider) and the potential for sensitive information to accidentally leak due to employee usage of AI (such as by prompting an external large language model).

Incorporating AI systems could also increase a CI operator's cybersecurity attack surface in new—or unknown—ways, especially if the AI system is used for either OT or IT. (A use case encompassing OT and IT, which are typically strictly separated with firewalls to limit the risk of compromise, would increase the attack surface even further.) For certain sectors, participants pointed out that even mapping an operator's networks to evaluate an AI system's usefulness—and subsequently storing or sharing that sensitive information—could present a target for motivated threat actors. CI operators face more constraints than organizations in other industries and therefore need to be extra cautious about disclosing information about their systems. Newer AI products, especially generative AI systems, may also fail unexpectedly because it is impossible to thoroughly test the entire range of inputs they might receive.

Finally, AI systems' complexity presents a challenge for testing and evaluation, especially given that some systems are not fully explainable (in the sense of not being able to trace the processes that lead to the relationship between inputs and outputs). Risks associated with complexity are compounded by the fact that there is a general lack of expertise at the intersection of AI and critical infrastructure, both within the CI community and on the part of AI providers.

Opportunities

Despite acknowledgment of the risks associated with the use of AI, there was general agreement among participants that there are many benefits to using AI technologies in critical infrastructure.

AI technologies are already in use in several sectors for tasks such as anomaly detection, operational awareness, and predictive analytics. These are relatively mature use cases that rely on older, established forms of AI and machine learning (such as classification systems) rather than newer generative AI tools.

Other opportunities for AI adoption across CI sectors include issue triage or prioritization (such as for first responders), the facilitation of information sharing in the cybersecurity or fraud contexts, forecasting, threat hunting, Security Operations Center (SOC) operations, and predictive maintenance of OT systems. More generally, participants were interested in AI's potential to help users navigate complex situations and help operators provide more tailored information to customers or stakeholders with specific needs.

Barriers to Adoption

Even after considering the risk-opportunity trade-offs, however, several participants noted that CI operators face a variety of barriers that could prevent them from adopting an AI system even when it may be fully beneficial.

Some of these barriers to adoption are related to hesitancy around AI-related risks, such as data privacy and the potential broadening of one's cybersecurity attack surface. Some operators are particularly hesitant to adopt AI in OT (where it might affect physical systems) or customer-facing applications. The trustworthiness—or lack thereof—of AI systems is also a source of hesitancy.

Other barriers are due to the unique constraints faced by CI operators. For instance, the fact that some systems have to be constantly available is a challenge unique to CI. Operators in sectors with important dependencies—such as energy, water, and communications—have limited windows in which they can take their systems offline. OT-heavy sectors also must contend with additional technical barriers to entry, such as a general lack of useful data or a reliance on legacy systems that do not produce usable digital outputs. In certain cases, it may also be prohibitively expensive—or even technically impossible—to conduct thorough testing and evaluation of AI applications when control of physical systems is involved.

A third category of barriers concerns compliance, liability, and regulatory requirements. CI operators are concerned about risks stemming from the use of user data in AI models and the need to comply with fractured regulatory requirements across different states or different countries. For example, multinational corporations in sectors such as IT or communications are beholden to the laws of multiple jurisdictions and need to adhere to regulations such as the European Union's General Data Protection Regulation (GDPR), which may not apply to more local CI operators.

Finally, a significant barrier to entry across almost all sectors is the need for workers with AI-relevant skills. Participants noted that alleviating workforce shortages by hiring new workers or skilling up current employees is a prerequisite for adopting AI in any real capacity.


Observations

Throughout the workshop, four common trends emerged from the broader discussion. Different participants, each representing different sectors or government agencies, raised them at multiple points during the conversation, an indicator of their saliency. These topics include the disparities between large and small CI providers, the difficulty in defining lines between AI- and cyber-related issues, the lack of clear owners of AI risk within organizations, and the fractured state of AI guidance and regulation.
