
AI Organizational Responsibilities:

Core Security Responsibilities

The permanent and official location for the AI Organizational Responsibilities Working Group is

/research/working-groups/ai-organizational-responsibilities

© 2024 Cloud Security Alliance – All Rights Reserved. You may download, store, display on your computer, view, print, and link to the Cloud Security Alliance at its website, subject to the following: (a) the draft may be used solely for your personal, informational, noncommercial use; (b) the draft may not be modified or altered in any way; (c) the draft may not be redistributed; and (d) the trademark, copyright or other notices may not be removed. You may quote portions of the draft as permitted by the Fair Use provisions of the United States Copyright Act, provided that you attribute the portions to the Cloud Security Alliance.

© Copyright 2024, Cloud Security Alliance. All rights reserved.

Acknowledgments

Lead Authors

Jerry Huang
Ken Huang

Contributors/Co-Chairs

Ken Huang
Nick Hamilton
Chris Kirschke
Sean Wright

Reviewers

Candy Alexander
Ilango Allikuzhi
Eray Altili
Aakash Alurkar
Romeo Ayalin
Renu Bedi
Saurav Bhattacharya
Sergei Chaschin
Hong Chen
John Chiu
Satchit Dokras
Rajiv Gunja
Hongtao Hao, PhD
Grace Huang
Onyeka Illoh
Krystal Jackson
Arvin Jakkamreddy Reddy
Simon Johnson
Gian Kapoor
Ben Kereopa-Yorke
Chris Kirschke
Madura Malwatte
Madhavi Najana
Rajith Narasimhaiah
Gabriel Nwajiaku
Govindaraj Palanisamy
Meghana Parwate
Paresh Patel
Rangel Rodrigues
Michael Roza
Lars Ruddigkeit
Davide Scatto
Maria Schwenger
Mj
Bhuvaneswari Selvadurai
Himanshu Sharma
Akshay Shetty
Nishanth Singarapu
Abhinav Singh
Dr. Chantal Spleiss
Patricia Thaine
Eric Tierling
Ashish Vashishtha
Peter Ventura
Jiewen Wang
Wickey Wang
Udith Wickramasuriya
Sounil Yu

CSA Global Staff

Marina Bregkou
Sean Heide
Alex Kaluza
Claire Lehnert
Stephen Lumpe

Table of Contents

Acknowledgments
Table of Contents
Executive Summary
Introduction
AI Shared Responsibility Model
Key Layers in an AI-Enabled Application
Foundational Components of a Data-Centric AI System
Assumptions
Intended Audience
Responsibility Role Definitions
Management and Strategy
Governance and Compliance
Technical and Security
Operations and Development
Normative References
1. Incorporating Data Security & Privacy in AI Training
1.1 Data Authenticity and Consent Management
1.2 Anonymization and Pseudonymization
1.3 Data Minimization
1.4 Access Control to Data
1.5 Secure Storage & Transmission
2. Model Security
2.1. Access Controls to Models
2.1.1 Authentication and Authorization Frameworks
2.1.2. Model Interfaces Rate Limiting
2.1.3. Access Control in Model Lifecycle Management
2.2. Secure Model Runtime Environment
2.2.1. Hardware-Based Security Features
2.2.2. Network Security Controls
2.2.3. OS-Level Hardening and Secure Configurations
2.2.4. K8s and Container Security
2.2.5. Cloud Environment Security
2.3 Vulnerability and Patch Management
2.3.1 ML Code Integrity Protections
2.3.2 Version Control Systems for ML Training and Deployment Code
2.3.3 Code Signing to Validate Approved Versions
2.3.4 Infrastructure as Code Approaches
2.4 MLOps Pipeline Security
2.4.1. Source Code Scans for Vulnerabilities
2.4.2. Testing Model Robustness Against Attacks
2.4.3. Validating Pipeline Integrity at Each Stage
2.4.4. Monitoring Automation Scripts
2.5 AI Model Governance
2.5.1. Model Risk Assessments
2.5.2. Business Approval Procedures
2.5.3. Model Monitoring Requirements
2.5.4. New Model Verification Processes
2.6 Secure Model Deployment
2.6.1. Canary Releases
2.6.2. Blue-Green Deployments
2.6.4. Rollback Capabilities
2.6.5. Decommissioning Models
3. Vulnerability Management
3.1. AI/ML Asset Inventory
3.2. Continuous Vulnerability Scanning
3.3. Risk-Based Prioritization
3.4. Remediation Tracking
3.5. Exception Handling
3.6. Reporting Metrics
Conclusion
Acronyms

Executive Summary

This whitepaper is a working draft that focuses on the information security and cybersecurity aspects of organizational responsibilities in the development and deployment of Artificial Intelligence (AI) and Machine Learning (ML) systems. The paper synthesizes expert-recommended best practices within core security areas, including data protection mechanisms, model vulnerability management, Machine Learning Operations (MLOps) pipeline hardening, and governance policies for training and deploying AI responsibly.

Key points discussed in the whitepaper include:

● Data Security and Privacy Protection: The importance of data authenticity, anonymization, pseudonymization, data minimization, access control, and secure storage and transmission in AI training.

● Model Security: Covers various aspects of model security, including access controls, secure runtime environments, vulnerability and patch management, MLOps pipeline security, AI model governance, and secure model deployment.

● Vulnerability Management: Discusses the significance of AI/ML asset inventory, continuous vulnerability scanning, risk-based prioritization, remediation tracking, exception handling, and reporting metrics in managing vulnerabilities effectively.

The whitepaper analyzes each responsibility using quantifiable evaluation criteria, the Responsible, Accountable, Consulted, Informed (RACI) model for role definitions, high-level implementation strategies, continuous monitoring and reporting mechanisms, access control mapping, and adherence to foundational guardrails. These are based on industry best practices and standards such as the NIST AI RMF, NIST SSDF, NIST 800-53, CSA CCM, and others.

By outlining recommendations across these key areas of security and compliance, this paper aims to guide enterprises in fulfilling their obligations for responsible and secure AI design, development, and deployment.

Introduction

This whitepaper focuses on what we define as an enterprise's "core security responsibilities" around Artificial Intelligence (AI) and Machine Learning (ML): data security, model security, and vulnerability management. As organizations have duties to uphold secure and safe AI practices, this whitepaper and two others in this series provide a blueprint for enterprises to fulfill such organizational responsibilities.

Specifically, this whitepaper synthesizes expert-recommended best practices within core security areas: data protection mechanisms, model vulnerability management, MLOps pipeline hardening, and governance policies for training and deploying AI responsibly. The other two whitepapers in this series discuss additional aspects of secure AI development and deployment for enterprises. By outlining recommendations across these key areas of security and compliance in three targeted whitepapers, this series aims to guide enterprises in fulfilling their obligations for responsible and secure AI design, development, and deployment.

AI Shared Responsibility Model

The AI Shared Responsibility Model outlines the division of tasks among AI platform providers, AI application owners, AI developers, and AI users, varying by service model (SaaS, PaaS, IaaS).

The secure operation of AI applications involves a collaborative effort among multiple stakeholders. In the context of AI, responsibilities are shared among three key parties: the AI service users, the AI application owners and developers, and the AI platform providers.

When evaluating AI-enabled integration, it is crucial to comprehend the shared responsibility model and delineate the specific tasks handled by each party.

Key Layers in an AI-Enabled Application

1. AI Platform:

○ This layer provides the AI capabilities to applications. It involves building and safeguarding the infrastructure that hosts AI models, training data, and configuration settings.

○ Security considerations include protecting against malicious inputs and outputs generated by the AI model. AI safety systems should protect against potentially harmful inputs and outputs such as hate speech, jailbreaks, and so on.

○ The AI Platform layer has the following tasks:

■ Model safety and security
■ Model tuning
■ Model accountability
■ Model design and implementation
■ Model training and governance
■ AI compute and data infrastructure
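The safety-system task named above can be illustrated with a minimal input/output screening sketch. The rule table, patterns, and function names below are hypothetical placeholders of our own; production platforms rely on trained classifiers rather than keyword lists.

```python
# Minimal sketch of an AI safety screening step: check the prompt before the
# model call and the output after it. Categories and regex patterns here are
# illustrative stand-ins for real, trained safety classifiers.
import re

SCREEN_RULES = {
    "jailbreak": [r"ignore (all )?previous instructions", r"pretend you have no rules"],
    "hate": [r"\bhate speech marker\b"],  # stand-in for a trained classifier
}

def screen(text: str) -> list[str]:
    """Return the safety categories the text triggers (empty list = pass)."""
    hits = []
    for category, patterns in SCREEN_RULES.items():
        if any(re.search(p, text, re.IGNORECASE) for p in patterns):
            hits.append(category)
    return hits

def guarded_generate(prompt: str, model) -> str:
    """Wrap a model call with pre- and post-generation screening."""
    if screen(prompt):
        return "[blocked: unsafe prompt]"
    output = model(prompt)
    if screen(output):
        return "[blocked: unsafe output]"
    return output
```

A screened call blocks a jailbreak-style prompt while passing benign prompts through to the model unchanged.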

2. AI Application Layer:

○ The AI application layer interfaces with users, leveraging the AI capabilities. Its complexity can vary significantly. At their most basic level, standalone AI applications serve as a conduit to a collection of APIs, which process textual prompts from users and relay them to the underlying model for a response. More sophisticated AI applications are capable of enriching these prompts with additional context, utilizing elements such as a persistence layer, a semantic index, or plugins that provide access to a broader range of data sources. The most advanced AI applications are designed to integrate seamlessly with pre-existing applications and systems, enabling a multi-modal approach that encompasses text, audio, and visual inputs to produce diverse content outputs.

○ As an AI application owner, you ensure seamless user experiences and handle any additional features or services. To safeguard an AI application from harmful activities, it is essential to establish a robust application safety system. A Generative AI (GenAI) system should thoroughly examine the content utilized in the prompt dispatched to the AI model. Additionally, it must scrutinize the exchanges with any add-ons like plugins and functions, data connectors, and interactions with other AI applications, a process referred to as AI orchestration. For those developing AI applications on an Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS) service, integrating a dedicated AI content safety feature is advisable. Depending on specific requirements, additional features may be implemented to enhance protection.

○ The AI Application layer has the following tasks:

■ AI plugins and data connections
■ Application design and implementation
■ Application infrastructure
■ AI safety system

3. AI Usage:

○ The AI usage layer outlines the application and consumption of AI functionalities. GenAI introduces an innovative user/computer interaction model, distinct from traditional interfaces like APIs, command prompts, and GUIs. This new interface is interactive and adaptable, molding the computer's capabilities to the user's intentions. Unlike earlier interfaces that required users to conform to the system's design and functions, the generative AI interface prioritizes user interaction. This allows the users' inputs to significantly shape the system's output, emphasizing the importance of safety mechanisms to safeguard individuals, data, and corporate resources.

○ Security considerations for AI usage are akin to those for any computer system, relying on robust measures for identity and access management, device security, monitoring, data governance, and administrative controls.

○ Given the significant impact user actions can have on system outputs, a greater focus on user conduct and responsibility is necessary. It is essential to revise policies for acceptable use and to inform users about the distinctions between conventional IT applications and those enhanced by AI. This education should cover AI-specific issues concerning security, privacy, and ethical standards. Moreover, it is important to raise awareness among users about the potential for AI-driven attacks, which may involve sophisticatedly fabricated text, audio, video, and other media designed to deceive.

○ The AI usage layer has the following tasks:

■ User training and accountability
■ Acceptable usage policy and admin controls
■ Identity and Access Management (IAM) and device controls
■ Data governance

Remember that this shared responsibility model helps demarcate roles and ensures a clear separation of duties, contributing to the safe and effective use of AI technologies. The distribution of workload responsibilities varies based on the service model used for AI integration.

1. Software as a Service (SaaS):

○ In SaaS-based AI integrations, the AI platform provider assumes responsibility for managing the underlying infrastructure, security controls, and compliance measures.

○ As a user, your primary focus lies in configuring and customizing the AI application to align with your specific requirements.

2. Platform as a Service (PaaS):

○ PaaS-based AI platforms offer a middle ground. While the provider manages the core AI capabilities, you retain some control over configurations and customization.

○ You are responsible for ensuring the safe use of the AI model, handling training data, and adjusting model behavior (e.g., weights and biases).

3. Infrastructure as a Service (IaaS):

○ In IaaS scenarios, you have greater control over the infrastructure. However, this also means taking on more responsibilities.

○ You manage the entire stack, including the AI model, training data, and infrastructure security.
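The service-model split above can be captured as a simple lookup table. The duty names and provider/customer assignments below paraphrase this section and are illustrative rather than contractual; real agreements split duties more finely.

```python
# Illustrative shared-responsibility lookup for AI integrations, paraphrasing
# the SaaS/PaaS/IaaS split described above. The duty names are our own labels.
RESPONSIBILITY = {
    "SaaS": {
        "infrastructure_security": "provider",
        "model_training_data": "provider",
        "application_configuration": "customer",
    },
    "PaaS": {
        "infrastructure_security": "provider",
        "model_training_data": "customer",  # safe model use, training data, tuning
        "application_configuration": "customer",
    },
    "IaaS": {
        "infrastructure_security": "customer",
        "model_training_data": "customer",
        "application_configuration": "customer",
    },
}

def owner(service_model: str, duty: str) -> str:
    """Look up which party owns a duty under a given service model."""
    return RESPONSIBILITY[service_model][duty]
```

Encoding the split this way lets an organization review, diff, and audit its assumed division of duties alongside its other configuration.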

Foundational Components of a Data-Centric AI System

The foundational components of a data-centric AI system encompass the entire lifecycle of data and model management. These components work together to create a secure and effective AI system that can process data and provide valuable insights or automated decisions.

● Raw Data: The initial unprocessed data collected from various sources.
● Data preparation: The process of cleaning and organizing raw data into a structured format.
● Datasets: Curated collections of data, ready for analysis and model training.
● Data and AI governance: Policies and procedures to ensure data quality and ethical AI usage.
● Machine Learning algorithms: The computational methods used to interpret data.
● Evaluation: Assessing the performance of machine learning models.
● Machine Learning models: The output of algorithms trained on datasets.
● Model management: Overseeing the lifecycle of machine learning models.
● Model deployment and inference: Implementing models to make predictions or decisions.
● Inference outcomes: The results produced by deployed models.
● Machine Learning Operations (MLOps): Practices for deploying and maintaining AI models.
● Data and AI Platform security: Measures to protect the system against threats.

Data Operations: Involves the acquisition and transformation of data, coupled with the assurance of data security and governance. The efficacy of ML models is contingent upon the integrity of data pipelines and a fortified DataOps framework.

Model Operations: Encompasses the creation of predictive ML models, procurement from model marketplaces, or the utilization of Large Language Models (LLMs) such as those provided by OpenAI or through Foundation Model APIs. Model development is an iterative process that necessitates a systematic approach to document and evaluate various experimental conditions and outcomes.

Model Deployment and Serving: Entails the secure construction of model containers, the isolated and protected deployment of models, and the implementation of automated scaling, rate limiting, and surveillance of active models. It also includes the provision of features and functions for high-availability, low-latency services in Retrieval Augmented Generation (RAG) applications, as well as the requisite features for other applications, including those that deploy models externally to the platform or require data features from the catalog.

Operations and Platform: Covers the management of platform vulnerabilities, updates, model segregation, and system controls, along with the enforcement of authorized model access within a secure architectural framework. Additionally, it involves the deployment of operational tools for Continuous Integration/Continuous Deployment (CI/CD), ensuring that the entire lifecycle adheres to established standards across separate execution environments (development, staging, and production) for secure ML operations (MLOps).

Table 1 aligns the operations with the core aspects of a data-centric AI system, highlighting their roles and interdependencies.

Foundational Component | Description
Data Operations | Ingestion, transformation, security, and governance of data.
Model Operations | Building, acquiring, and experimenting with ML models.
Model Deployment and Serving | Secure deployment, serving, and monitoring of ML models.
Operations and Platform | Platform security, model isolation, and CI/CD for MLOps.

Table 1: Mapping Data-Centric AI System Components and Their Interconnected Roles

Table 2 provides a synthesized view of the potential security risks and threats at each stage of an AI/ML system, along with examples and recommended mitigations to address these concerns.

System Stage: Data Operations
System Components: Raw Data, Data Prep, Datasets
Potential Security Risks: Data loss: unauthorized deletion or corruption of data. Data poisoning: deliberate manipulation of data to compromise the model's integrity. Compliance challenges: failure to meet regulatory requirements for data protection.
Threats: Compromise/poisoning of data: attackers may inject false data or alter existing data.
Mitigations: Implement robust data governance frameworks. Deploy anomaly detection systems. Establish recovery protocols and regular data backups.

System Stage: Model Operations
System Components: ML Algorithms, Model Management
Potential Security Risks: Model theft: stealing of proprietary models. Unauthorized access: gaining access to models without permission.
Threats: Attacks via API access: exploiting API vulnerabilities to access or manipulate models. Model stealing (extraction): replicating a model for unauthorized use.
Mitigations: Strengthen access controls and authentication mechanisms. Secure API endpoints through encryption and rate limiting. Regularly update and patch systems.

System Stage: Model Deployment and Serving
System Components: Model Serving, Inference Response
Potential Security Risks: Unauthorized access: accessing the model serving infrastructure without authorization. Data leakage: exposing sensitive information through misconfigured systems.
Threats: Model tricking (evasion): altering inputs to receive a specific output from the model. Training data recovery (inversion): extracting private training data from the model.
Mitigations: Secure deployment practices, including containerization and network segmentation. Active monitoring and logging of model interactions. Implement rate limiting and anomaly detection.

System Stage: Operations and Platform
System Components: ML Operations, Data and AI Platform Security
Potential Security Risks: Inadequate vulnerability management: not addressing known vulnerabilities in a timely manner. Model isolation issues: failure to properly isolate models, leading to potential cross-contamination.
Threats: Attacking the ML supply chain: introducing vulnerabilities or backdoors in third-party components. Model contamination (poisoning): corrupting training data to cause misclassification or system unavailability.
Mitigations: Continuous vulnerability management and patching. CI/CD processes for consistent deployment. Isolation controls and secure architecture design.

Table 2: AI/ML Security Risk Overview
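Among the mitigations in Table 2, anomaly detection on incoming training data is the most directly codeable. The z-score screen below is a deliberately simple sketch of that idea: the threshold of 3.0 is an assumed convention, and real pipelines typically use robust or multivariate detectors.

```python
# Simple z-score outlier screen for one numeric training feature, as a
# concrete (and intentionally minimal) form of the "deploy anomaly detection
# systems" mitigation from Table 2.
from statistics import mean, pstdev

def flag_outliers(values: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of values whose z-score magnitude exceeds the threshold."""
    mu = mean(values)
    sigma = pstdev(values)
    if sigma == 0:  # all values identical: nothing to flag
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]
```

Flagged records would then be quarantined for review rather than silently dropped, preserving an audit trail for the data governance framework.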

We analyze each responsibility along the following dimensions.

1. Evaluation Criteria: When discussing AI responsibility, consider quantifiable metrics for assessing the security impact of AI systems. By quantifying these aspects, stakeholders can better understand the risks associated with AI technologies and how to address those risks. Organizations must frequently evaluate their AI systems to ensure security and reliability. They should assess measurable qualities such as how well the system handles attacks (adversarial robustness), whether it leaks sensitive data, how often it makes mistakes (false-positive rates), and whether the training data is reliable (data integrity). Evaluating and monitoring these critical measures as part of the organization's security plan will help improve the overall security posture of AI systems.
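Two of the measures named here, false-positive rate and adversarial robustness, reduce to simple arithmetic once a test harness supplies the inputs. The function names below are our own illustrative choices, and the confusion counts and accuracies are assumed to come from the organization's own evaluation runs.

```python
# Sketch of two quantifiable evaluation criteria named above.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): how often benign cases are wrongly flagged."""
    total_negatives = false_positives + true_negatives
    return false_positives / total_negatives if total_negatives else 0.0

def robustness_drop(clean_accuracy: float, adversarial_accuracy: float) -> float:
    """Accuracy lost under attack; a smaller drop means better adversarial robustness."""
    return clean_accuracy - adversarial_accuracy
```

Tracking these numbers per release lets an organization set explicit thresholds (for example, a maximum acceptable FPR) in its security plan.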

2. RACI Model: This model helps clarify who is Responsible, Accountable, Consulted, and Informed (RACI) regarding AI decision-making and oversight. Applying the RACI model delineates roles and responsibilities in AI governance. This allocation of responsibilities is essential for secure AI systems. It is important to understand that, depending on an organization's size and business focus, the specific roles and teams delineated in this whitepaper are for reference only. The emphasis should be on clearly outlining the key responsibilities first. Organizations can then determine the appropriate roles to map to those responsibilities, and subsequently, the teams to fill those roles. There may be some overlapping responsibilities across teams. The RACI framework defined herein aims to provide initial role and team designations to aid organizations in developing their own tailored RACI models. However, implementation may vary across companies based on their unique organizational structures and priorities.
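A drafted RACI assignment can be sanity-checked mechanically. The roles and responsibility in the example below are illustrative stand-ins for an organization's own mapping; the rule of exactly one Accountable party per responsibility is standard RACI practice.

```python
# Sanity check for a RACI matrix: every responsibility needs at least one
# Responsible ("R") party and exactly one Accountable ("A") party.
def check_raci(matrix: dict[str, dict[str, str]]) -> list[str]:
    """Return a list of problems in a {responsibility: {role: letter}} matrix."""
    problems = []
    for responsibility, assignments in matrix.items():
        letters = list(assignments.values())
        if letters.count("A") != 1:
            problems.append(f"{responsibility}: needs exactly one Accountable")
        if "R" not in letters:
            problems.append(f"{responsibility}: needs at least one Responsible")
    return problems

# Illustrative assignment; map the roles to your own organization.
example = {
    "Model risk assessment": {"CISO": "A", "Security team": "R", "Legal": "C", "CTO": "I"},
}
```

Running such a check whenever the matrix changes keeps accountability unambiguous as roles and teams evolve.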

3. High-Level Implementation Strategies: This section outlines strategies for seamlessly integrating cybersecurity considerations into the Software Development Lifecycle (SDLC). Organizations must prioritize the enforcement of CIA principles, ensuring the confidentiality, integrity, and availability of data and systems. Access control mechanisms should be implemented rigorously to manage user permissions and prevent unauthorized access. Robust auditing mechanisms must track system activity and promptly detect suspicious behavior. Impact assessments should evaluate potential cybersecurity risks, focusing on identifying vulnerabilities and mitigating threats to safeguard sensitive information in AI systems.

4. Continuous Monitoring and Reporting: Continuous Monitoring and Reporting ensures the ongoing security, safety, and performance of AI systems. Critical components include real-time monitoring, alerts for poor model performance or security incidents, audit trails/logs, and regular reporting, followed by action to implement improvements and resolve issues. Continuous Monitoring and Reporting helps organizations maintain transparency, enhance performance and accountability, and build trust in AI systems.
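The alerting component described here ultimately compares a monitored metric against a baseline. The sketch below assumes accuracy as the monitored metric and uses an example tolerance of five percentage points; both choices are illustrative, not recommendations.

```python
# Minimal drift alert of the kind described above: raise an alert when a
# model's recent accuracy falls too far below its established baseline.
def needs_alert(baseline_accuracy: float,
                recent_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """True when recent accuracy has dropped more than `tolerance` below baseline."""
    return (baseline_accuracy - recent_accuracy) > tolerance
```

In practice this check would run on a schedule against production telemetry, with each evaluation logged to the audit trail so reports can show when and why alerts fired.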

5. Access Control: Access control is crucial for securing AI systems. This includes strong API authentication/authorization policies, managing model registries, controlling access to data repositories, overseeing continuous integration and deployment (CI/CD) pipelines, handling secrets, and managing privileged access. By defining user roles and permissions for various parts of the AI pipeline, sensitive data can be safeguarded, and models cannot be tampered with or accessed without proper authorization. Implementing strong identity and access management not only protects intellectual property but also ensures accountability throughout AI workflows.
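The role-and-permission mapping described here can be expressed as a deny-by-default lookup. The roles and permission strings below are illustrative examples of our own, not a prescribed scheme.

```python
# Illustrative role-to-permission mapping for parts of an AI pipeline
# (model registry, data repositories, CI/CD). All names are examples.
ROLE_PERMISSIONS = {
    "ml_engineer": {"registry:read", "registry:write", "data:read"},
    "data_scientist": {"data:read", "registry:read"},
    "ci_service": {"registry:read", "pipeline:deploy"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles and unlisted permissions are refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default design matters: a role that is missing from the table gets no access at all, rather than some implicit baseline.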

6. Adherence to Foundational Governance, Risk and Compliance, Security, Safety, and Ethical Guardrails: Emphasize adherence to guardrails based on industry best practices and regulatory requirements such as the following:

● NIST SSDF for secure software development
● NIST Artificial Intelligence Risk Management Framework (AI RMF)
● ISO/IEC 42001:2023 AI Management System (AIMS)
● ISO/IEC 27001:2022 Information Security Management System (ISMS)
● ISO/IEC 27701:2019 Privacy Information Management System (PIMS)
● ISO 31700-1:2023 Consumer Protection: Privacy by Design for Consumer Goods and Services
● OWASP Top 10 for LLM Applications
● NIST SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations
● General Data Protection Regulation (GDPR) guidance on data anonymization and pseudonymization
● Guidance for tokenization on cloud-based services

Assumptions

This document assumes an industry-neutral stance, providing guidelines and recommendations that are applicable across various sectors without a specific bias towards a particular industry.

Intended Audience

The whitepaper is intended to cater to a diverse range of audiences, each with distinct objectives and interests.

1. Chief Information Security Officers (CISOs): This whitepaper is specifically designed to address the concerns and responsibilities of CISOs. It provides valuable insights into integrating core security principles within AI systems. Please note that the role of Chief AI Officer (CAIO) is emerging in many organizations, and it is anticipated that a majority of the related responsibilities defined in this whitepaper may shift from the CISO to the CAIO in the near future.

2. AI researchers, engineers, data professionals, scientists, analysts, and developers: The paper offers comprehensive guidelines and best practices for AI researchers and engineers, aiding them in developing ethical and trustworthy AI systems. It serves as a crucial resource for ensuring responsible AI development.

3. Business leaders and decision makers: For business leaders and decision-makers such as the CIO, CPO, CDO, CRO, CEO, and CTO, the whitepaper offers essential information and awareness for cybersecurity strategies related to AI system development, deployment, and lifecycle management.

4. Policymakers and regulators: Policymakers and regulators will find this paper invaluable, as it provides critical insights to help shape policy and regulatory frameworks concerning AI.
