The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment

EXECUTIVE SUMMARY

The EU and the U.S. are jointly pivotal to the future of global AI governance. Ensuring that EU and U.S. approaches to AI risk management are generally aligned will facilitate bilateral trade, improve regulatory oversight, and enable broader transatlantic cooperation.

The U.S. approach to AI risk management is highly distributed across federal agencies, many adapting to AI without new legal authorities. Meanwhile, the U.S. has invested in non-regulatory infrastructure, such as a new AI risk management framework, evaluations of facial recognition software, and extensive funding of AI research. The EU approach to AI risk management is characterized by a more comprehensive range of legislation tailored to specific digital environments. The EU plans to place new requirements on high-risk AI in socioeconomic processes, the government use of AI, and regulated consumer products with AI systems. Other EU legislation enables more public transparency and influence over the design of AI systems in social media and e-commerce.

The EU and U.S. strategies share a conceptual alignment on a risk-based approach, agree on key principles of trustworthy AI, and endorse an important role for international standards. However, the specifics of these AI risk management regimes have more differences than similarities. Regarding many specific AI applications, especially those related to socioeconomic processes and online platforms, the EU and U.S. are on a path to significant misalignment.

The EU-U.S. Trade and Technology Council has demonstrated early success working on AI, especially on a project to develop a common understanding of metrics and methodologies for trustworthy AI. Through these negotiations, the EU and U.S. have also agreed to work collaboratively on international AI standards, while also jointly studying emerging risks of AI and applications of new AI technologies.

More can be done to further the EU-U.S. alignment, while also improving each country’s AI governance regime. Specifically:

• The U.S. should execute on federal agency AI regulatory plans and use these for designing strategic AI governance with an eye towards EU-U.S. alignment.
• The EU should create more flexibility in the sectoral implementation of the EU AI Act, improving the law and enabling future EU-U.S. cooperation.
• The U.S. needs to implement a legal framework for online platform governance, but until then, the EU and U.S. should work on shared documentation of recommender systems and network algorithms, as well as perform collaborative research on online platforms.
• The U.S. and EU should deepen knowledge sharing on a number of levels, including on standards development; AI sandboxes; large public AI research projects and open-source tools; regulator-to-regulator exchanges; and developing an AI assurance ecosystem.

More collaboration between the EU and the U.S. will be crucial, as these governments are implementing policies that will be foundational to the democratic governance of AI.

INTRODUCTION

Approaches to artificial intelligence (AI) risk management, shaped by emerging legislation, regulatory oversight, civil liability, soft law, and industry standards, are becoming key facets of international diplomacy and trade policy. In addition to encouraging integrated technology markets, a more unified international approach to AI governance can strengthen regulatory oversight, guide research towards shared challenges, promote the exchange of best practices, and enable the interoperability of tools for trustworthy AI development.

Especially impactful in this landscape are the EU and the U.S., which are both currently implementing foundational policies that will set precedents for the future of AI risk management within their territories and globally. The governance approaches of the EU and U.S. touch on a wide range of AI applications with international implications, including more sophisticated AI in consumer products; a proliferation of AI in regulated socioeconomic decisions; an expansion of AI in a wide variety of online platforms; and public-facing web-hosted AI systems, such as generative AI and foundation models.[i]

This paper considers the broad approaches of the U.S. and the EU to AI risk management, compares policy developments across eight key subfields, and discusses collaborative steps taken so far, especially through the EU-U.S. Trade and Technology Council. Further, this paper identifies key emerging challenges to transatlantic AI risk management and offers policymaking recommendations that might advance well-aligned and mutually beneficial EU-U.S. AI policy.

THE U.S. APPROACH TO AI RISK MANAGEMENT

The U.S. federal government’s approach to AI risk management can broadly be characterized as risk-based, sectorally specific, and highly distributed across federal agencies. There are advantages to this approach; however, it also contributes to the uneven development of AI policies. While there are several guiding federal documents from the White House on AI harms, they have not created an even or consistent federal approach to AI risks.

“By and large, federal agencies have still not developed the required AI regulatory plans.”

The February 2019 executive order, Maintaining American Leadership in Artificial Intelligence (EO 13859), and its ensuing Office of Management and Budget (OMB) guidance (M-21-06) presented the first federal approach to AI oversight.[1] Delivered in November 2020, 15 months after the deadline set in EO 13859, the OMB guidance clearly articulated a risk-based approach, stating “the magnitude and nature of the consequences should an AI tool fail…can help inform the level and type of regulatory effort that is appropriate to identify and mitigate risks.” These documents also urged agencies to consider key facets of AI risk reduction through regulatory and non-regulatory interventions. This includes using scientific evidence to determine AI’s capabilities, enforcing non-discrimination statutes, considering disclosure requirements, and promoting safe AI development and deployment. While these documents reflected the Trump administration’s minimalist regulatory perspective, they also required agencies to develop plans to regulate AI applications.[2]

By and large, federal agencies have still not developed the required AI regulatory plans. In December 2022, Stanford University’s Center for Human-Centered AI released a report stating that only five of 41 major agencies created an AI plan as required.[3],[ii] This is a generous interpretation, as only one major agency, the Department of Health and Human Services (HHS), provided a thorough plan in response.[4] HHS extensively documented the agency’s authority over AI systems (through 12 different statutes), its active information collections (e.g., on AI for genomic sequencing), and the emerging AI use cases of interest (mostly in illness detection). The thoroughness of the HHS’s regulatory plan shows how valuable this endeavor could be for federal agency planning and informing the public if other agencies were to follow in HHS’s footsteps.

Rather than further implementing EO 13859, the Biden administration instead revisited the topic of AI risks through the Blueprint for an AI Bill of Rights (AI BoR).[5] Developed by the White House Office of Science and Technology Policy (OSTP), the AI BoR includes a detailed exposition of AI harms to economic and civil rights, five principles for mitigating these harms, and an associated list of federal agencies’ actions. The AI BoR endorses a sectorally specific approach to AI governance, with policy interventions tailored to individual sectors such as health, labor, and education. Its approach is therefore quite reliant on these associated federal agency actions, rather than centralized action, especially because the AI BoR is nonbinding guidance.

That the AI BoR does not directly compel federal agencies to mitigate AI risks is clear from the patchwork of responses, with significant efforts in some agencies and non-response in others.[6] Further, despite the five broad principles outlined in the AI BoR,[iii] most federal agencies are only able to adapt their pre-existing legal authorities to algorithmic systems. This is best demonstrated by agencies regulating AI used to make socioeconomic decisions. This includes the Federal Trade Commission (FTC), which can use its authority to protect against “unfair and deceptive” practices to enforce truth in advertising and some data privacy guarantees in AI systems.[7] The FTC is also actively considering how its existing authorities affect data-driven commercial surveillance, including algorithmic decision-making, and some advocacy organizations have argued the FTC can place transparency and fairness requirements on such algorithmic systems.[8] The Equal Employment Opportunity Commission (EEOC) can impose some transparency, require a non-AI alternative for people with disabilities, and enforce non-discrimination in AI hiring.[9] The Consumer Financial Protection Bureau (CFPB) requires explanations for credit denials from AI systems and could potentially enforce non-discrimination requirements.[10] There are other examples; however, in no sector does any agency have the legal authorities necessary to enforce all of the principles expressed by the AI BoR, nor those in EO 13859.

Of these principles, the Biden administration has been especially vocal on racial equity and in February 2023 published the executive order Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government (EO 14091). The second executive order on this subject, EO 14091 directs federal agencies to address emerging risks to civil rights, including “algorithmic discrimination in automated technology.”[11] It is too soon to know the impact of this new executive order.

Federal agencies with regulatory purview over consumer products are also making adjustments. One leading agency is the Food and Drug Administration (FDA), which has been working to incorporate AI, and specifically machine learning, in medical devices since at least 2019.[12] The FDA now publishes best practices for AI in medical devices, documents commercially available AI-enabled medical devices, and has promised to perform relevant pilots and advance regulatory science in its AI action plan.[13] Aside from the FDA, the Consumer Product Safety Commission (CPSC) stated in 2019 its intention to research and track incidents of AI harms in consumer products, as well as to consider policy interventions including public education campaigns, voluntary standards, mandatory standards, and pursuing recalls.[14] In 2022, CPSC issued a draft report on how to test and evaluate consumer products which incorporate machine learning.[15] Issued in the final days of the Trump administration, the Department of Transportation’s Automated Vehicles Comprehensive Plan sought to remove regulatory requirements for semi- and fully-autonomous vehicles.[16]

In parallel with the uneven state of AI regulatory developments, the U.S. is continuing to invest in infrastructure for mitigating AI risks. Most notable is the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF), first released as a draft on March 17, 2022, with a final release on January 26, 2023.[17] The NIST AI RMF is a voluntary framework that builds off the Organization for Economic Cooperation and Development’s (OECD) Framework for the Classification of AI Systems by offering comprehensive suggestions on when and how risk can be managed throughout the AI lifecycle.[18] NIST is also developing a new AI RMF Playbook, with concrete examples of how entities can implement the RMF across the data collection, development, deployment, and operation of AI.[19] The NIST AI RMF will also be accompanied by a series of case studies, each of which will document the steps and interventions taken to mitigate risk within a specific AI application.[20] While it is too soon to tell what degree of adoption the NIST AI RMF will achieve, the 2014 NIST Cybersecurity Framework has been widely adapted (usually entailing partial adoption) by industry.[21]

NIST also plays a role in evaluating and publicly reporting on the accuracy and fairness of facial recognition algorithms through its ongoing Face Recognition Vendor Test program.[22] In one analysis, NIST tested and compared 189 commercial facial recognition algorithms for accuracy on different demographic groups, contributing valuable information to the AI marketplace and improving public understanding of these tools.[23]

An assortment of other policy actions addresses some algorithmic harms and contributes to future institutional preparedness and thus warrants mention, even if AI risk is not the primary orientation. Launched in April 2022, the National AI Advisory Committee may play an external advisory role in guiding government policy on managing AI risks in areas such as law enforcement, although it is primarily concerned with advancing AI as a national economic resource.[24] The federal government has also run several pilots of an improved hiring process, aimed at attracting data science talent to the civil service, a key aspect of preparedness for AI governance.[25] Currently, the “data scientist” occupational series is the most relevant federal government job for the technical aspects of AI risk management. However, this role is more oriented towards performing data science than reviewing or auditing AI models created by private sector data scientists.[26],[iv]

The U.S. government first published a national AI Research and Development Strategic Plan in 2016, and in 2022, 13 federal departments funded AI research and development.[27] The National Science Foundation has now funded 19 interdisciplinary AI research institutes, and the academic work coming from some of these institutes is advancing trustworthy and ethical AI methods.[28] Similarly, the Department of Energy was tasked with developing more reliable AI methods which might inform commercial activity, such as in materials discovery.[29] Further, the Biden administration will seek an additional $2.6 billion over six years to fund AI infrastructure under the National AI Research Resource (NAIRR) project, which states that encouraging trustworthy AI is one of its four key goals.[30] Specifically, the NAIRR could be used to better study the risks of emerging large AI models, many of which are currently developed without public scrutiny.

In a significant recent development, a series of states have introduced legislation to tackle algorithmic harms, including California, Connecticut, and Vermont.[31] While these might meaningfully improve AI protections, they could also potentially lead to future pre-emption issues that would mirror the ongoing challenge to passing federal privacy legislation (namely, how should the federal legislation replace or augment various state laws).[32]

THE EU APPROACH TO AI RISK MANAGEMENT

The EU’s approach to AI risk management is complex and multifaceted, building on implemented legislation, especially the General Data Protection Regulation (GDPR), and spanning newly enacted legislation, namely the Digital Services Act and Digital Markets Act, as well as legislation still being actively debated, particularly the AI Act, among other relevant endeavors. The EU has consciously developed different regulatory approaches for different digital environments, each with a different degree of emphasis on AI.

“The EU has consciously developed different regulatory approaches for different digital environments, each with a different degree of emphasis on AI.”

Aside from its data privacy implications, GDPR contains two important articles related to algorithmic decision-making. First, GDPR states that algorithmic systems should not be allowed to make significant decisions that affect legal rights without any human supervision.[33] Based on this clause, in 2021, Uber was required to reinstate six drivers who were found to have been fired solely by the company’s algorithmic system.[34] Second, GDPR guarantees an individual’s right to “meaningful information about the logic” of algorithmic systems, at times controversially deemed a “right to explanation.”[35] In practice, companies such as home insurance providers have offered limited responses to requests for information about algorithmic decisions.[36] There are many open questions about this clause, including how often affected individuals request this information, how valuable the information is to them, and what happens when companies refuse to provide the information.[37]

The EU AI Act will be an especially critical component of the EU’s approach to AI risk management in many areas of AI risk.[38] While the AI Act is not yet finalized, enough can be inferred from the European Commission proposal from April 2021, the final Council of the EU proposal from December 2022, and the available information from the ongoing European Parliament discussions in order to analyze its key features.

Although it is often referred to as “horizontal,” the AI Act implements a tiered system of regulatory obligations for a specifically enumerated list of AI applications.[39] Several AI applications, including deepfakes, chatbots, and biometric analysis, must clearly disclose themselves to affected persons. A different set of AI systems with “unacceptable risks” would be banned completely, potentially including AI for social scoring,[v] AI-enabled manipulative technologies, and, with several important exceptions, biometric identification by law enforcement in public spaces.

Between these two tiers sits the “high-risk” category of AI systems, which is the most inclusive and impactful of the designations in the EU AI Act. Two categories of AI applications will be designated as high-risk under the AI Act: regulated consumer products and AI used for impactful socioeconomic decisions. All high-risk AI systems will have to meet standards of data quality, accuracy, robustness, and non-discrimination, while also implementing technical documentation, record-keeping, a risk management system, and human oversight. Entities that sell or deploy covered high-risk AI systems, called providers, will need to meet these requirements and submit documentation that attests to the conformity of their AI systems, or otherwise face fines as high as 6% of annual global turnover.

The first category of high-risk AI includes consumer products that are already regulated under the New Legislative Framework, the EU’s single-market regulatory regime, which includes products such as medical devices, vehicles, boats, toys, and elevators.[40] Generally speaking, this means that AI-enabled consumer products will still go through the pre-existing regulatory process under the pertinent product harmonization legislation and will not need a second, independent conformity assessment just for the AI Act requirements. The requirements for high-risk AI systems will be incorporated into the existing product harmonization legislation. As a result, in going through the pre-existing regulatory process, businesses will have to pay more attention to AI systems, reflecting the fact that some modern AI systems may be more opaque, less predictable, or plausibly update after the point of sale. Notably, some EU agencies have already begun to consider how AI affects their regulatory processes. One leading ex
