AI Safety and Automation Bias: The Risks of Human-Machine Interaction

Executive Summary

Automation bias is the tendency for an individual to over-rely on an automated system. It can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information.

Automation bias can endanger the successful use of artificial intelligence by eroding the user’s ability to meaningfully control an AI system. As AI systems have proliferated, so too have incidents where these systems have failed or erred in various ways, and human users have failed to correct or recognize these behaviors.

This study provides a three-tiered framework for understanding automation bias, examining how users, technical design, and organizations influence it. It presents case studies on each of these factors, then offers lessons learned and corresponding recommendations.

User Bias: Tesla Case Study

Factors influencing bias:

● User’s personal knowledge, experience, and familiarity with a technology.

● User’s degree of trust and confidence in themselves and the system.

Lessons learned from case study:

● Disparities between user perceptions and system capabilities contribute to bias and may lead to harm.

Recommendation:

● Create and maintain qualification standards for user understanding. User misunderstanding of a system’s capabilities or limitations is a significant contributor to incidents of harm. Since user understanding is critical to safe operation, system developers and vendors must invest in clear communications about their systems.


Technical Design Bias: Airbus and Boeing Design Philosophies Case Study

Factors influencing bias:

● The system’s overall design, user interface, and how it provides user feedback.

Lessons learned from case study:

● Even with highly trained users such as pilots, system interfaces contribute to automation bias.

● Different design philosophies have different risks. No single approach is necessarily perfect, and all require clear, consistent communication and application.

Recommendation:

● Value and enforce consistent design and design philosophies that account for human factors, especially for systems likely to be upgraded. When necessary, justify and make clear any departures from a design philosophy to legacy users. Where possible, develop common design criteria, standards, and expectations, and consistently communicate them (either through organizational policy or industry standard) to reduce the risk of confusion and automation bias.

Organizational Policies and Procedures Bias: Army Patriot Missile System vs. Navy AEGIS Combat System Case Study

Factors influencing bias:

● Organizational training, processes, and policies.

Lessons learned from case study:

● Organizations can employ the same tools and technologies in very different ways based on protocols, operations, doctrine, training, and certification. Choices in each of these areas of governance can embed automation biases.

● Organizational efforts to mitigate automation bias can be successful, but mishaps are still possible, especially when human users are under stress.


Recommendation:

● Where autonomous systems are used by organizations, design and regularly review organizational policies appropriate for technical capabilities and organizational priorities. Update policies and processes as technologies change to best account for new capabilities and mitigate novel risks. If there is a mismatch between the goals of the organization and the policies governing how capabilities are used, automation bias and poor outcomes are more likely.

Across these three case studies, it is clear that a “human-in-the-loop” cannot prevent all accidents or errors. Properly calibrating technical and human fail-safes for AI, however, offers the best chance of mitigating the risks of using AI systems.


Table of Contents

Executive Summary
Introduction
What Is Automation Bias?
A Framework for Understanding and Mitigating Automation Bias
Case Studies
  Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias
    Tesla’s Road to Autonomy
    Behind the Wheel: Tesla’s Autopilot and the Human Element
  Case Study 2: How Technical Design Factors Can Induce Automation Bias
    The Human-Machine Interface: Airbus and Boeing Design Philosophies
    Boeing Incidents
    Airbus Incidents
  Case Study 3: How Organizations Can Institutionalize Automation Bias
    Divergent Organizational Approaches to Automation: Army vs. Navy
    Patriot: A Bias Towards the System
    AEGIS: A Bias Towards the Human
Conclusion
Authors
Acknowledgments
Endnotes


Introduction

In contemporary discussions about artificial intelligence, a critical but often overlooked aspect is automation bias—the tendency of human users to overly rely on AI systems. Left unaddressed, automation bias can and has harmed both AI and autonomous system users and innocent bystanders, in examples that range from false legal accusations to death. Automation bias, therefore, presents a significant challenge in the real-world application of AI, particularly in high-stakes contexts such as national security and military operations.

Successful deployment of AI systems relies on a complex interdependence between AI systems and the humans responsible for operating them. Addressing automation bias is necessary to ensure successful, ethical, and safe AI deployment, especially when the consequences of overreliance or misuse are most severe. As societies incorporate AI into systems, decision-makers thus need to be prepared to mitigate the risks associated with automation bias.

Automation bias can manifest and be intercepted at the user, technical design, and organizational levels. We provide three case studies that explain how factors at each of these levels can make automation bias more or less likely, derive lessons learned, and highlight possible mitigation strategies to alleviate these complex issues.


What Is Automation Bias?

Automation bias is the tendency for a human user to overly rely on an automated system, reflecting a cognitive bias that emerges from the interaction between a human and an AI system.

When affected by automation bias, users tend to decrease their vigilance in monitoring both the automated system and the task it is performing.1 Instead, they place excessive trust in the system’s decision-making capabilities and inappropriately delegate more responsibility to the system than it is designed to handle. In severe instances, users might favor the system’s recommendations even when presented with contradictory evidence.

Automation bias most often presents in two ways: as an error of omission, when a human fails to take action because the automation did not alert them (as discussed in the first case study on vehicles); or as an error of commission, when a human follows incorrect directions from the automation (as discussed in the case study on the Patriot Missile System).2 In this analysis, we also discuss an instance where a bias against the automation causes harm (i.e., the third case study on the AEGIS weapons system).

Automation bias does not always result in catastrophic events, but it increases the likelihood of such outcomes. Mitigating automation bias can help to improve human oversight, operation, and management of AI systems and thus mitigate some risks associated with AI.

The challenge of automation bias has only grown with the introduction of progressively more sophisticated AI-enabled systems and tools across different application areas, including policing, immigration, social welfare benefits, consumer products, and militaries (see Box 1). Hundreds of incidents have occurred where AI, algorithms, and autonomous systems were deployed without adequate training for users, clear communication about their capabilities and limitations, or policies to guide their use.3


Box 1. Automation Bias and the UK Post Office Scandal

In a notable case of automation bias, a faulty accounting system employed by the UK Post Office led to the wrongful prosecution of 736 UK sub-postmasters for embezzlement. Although it did not involve an AI system, automation bias and the myth of “infallible systems” played a significant role—users willingly accepted system errors despite substantial evidence to the contrary, favoring the unlikely case that hundreds of postmasters were involved in theft and fraud.4 As one author of an ongoing study into the case highlighted, “This is not a scandal about technological failing; it is a scandal about the gross failure of management.”5

While automation bias is a challenging problem, it is a tractable issue that society can tackle throughout the AI development and deployment process. The avenues through which automation bias can manifest—namely at the user, technical, and organizational levels—also represent points of intervention to mitigate automation bias.


A Framework for Understanding and Mitigating Automation Bias

Technology must be fit for its purposes, and users must understand those purposes to be able to appropriately control systems. Furthermore, knowing when to trust AI and when and how to closely monitor AI system outputs is critical to its successful deployment.6 A variety of factors calibrate trust and reliance in the minds of operators, and they generally fall into one of three categories (though each category can be shaped by the context within which the interaction may occur, such as situations of extreme stress or, conversely, fatigue):7

• factors intrinsic to the human user, such as biases, experience, and confidence in using the system;

• factors inherent to the AI system, such as its failure modes (the specific ways in which it might malfunction or underperform) and how it presents and communicates information; and

• factors shaped by organizational or regulatory rules and norms, mandatory procedures, oversight requirements, and deployment policies.

Organizations implementing AI must avoid myopically focusing only on the technical “machine” side to ensure the successful deployment of AI. Management of the human aspect of these systems deserves equal consideration, and management strategies should be adjusted according to context.

Recognizing these complexities and potential pitfalls, this paper presents case studies for three controllable factors affecting automation bias (user, technical, organizational) that correspond to the aforementioned factors that shape the dynamics of human-machine interaction (see Table 1).


Table 1. Factors Affecting Automation Bias

Factor | Description | Case Study
User | User’s personal knowledge, experience, and familiarity with a technology; user’s degree of trust and confidence in themselves and the system | Tesla and driving automation
Technical Design | The system’s overall design, the structure of its user interface, and how it provides user feedback | Airbus and Boeing design philosophies
Organization | Organizational processes shaping AI use and reliance | U.S. Army’s management and operation of the Patriot Missile System vs. U.S. Navy’s management and operation of the AEGIS Combat System

An additional layer of task-specific factors, such as time constraints, task difficulty, workload, and stress, can exacerbate or alternatively reduce automation bias.8 These factors should be duly considered in the design of the system, as well as training and organizational policies, but are beyond the scope of this paper.


Case Studies

Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias

Individuals bring their personal experiences—and biases—to their interactions with AI systems.9 Research shows that greater familiarity and direct experience with self-driving cars and autonomous vehicle technologies make individuals more likely to support autonomous vehicle development and to consider them safe to use. Conversely, behavioral science research demonstrates that a lack of technological knowledge can lead to fear and rejection, while having only a little familiarity with a particular technology can result in overconfidence in its capabilities.10 The case of increasingly “driverless” cars illustrates how the individual characteristics and experiences of users can shape their interactions and automation bias. Furthermore, as the case study on Tesla below illuminates, even system improvements designed to mitigate the risks of automation bias may have limited effectiveness in the face of a person’s bias.

Tesla’s Road to Autonomy

Cars have become increasingly automated over time. Manufacturers and engineers have introduced cruise control and a flurry of other advanced driver assistance systems (ADAS) aimed at improving driving safety and reducing the likelihood of human error, alongside other features such as lane drift systems and blind spot sensors. The U.S. National Highway Traffic Safety Administration suggests that full automation has the potential to “offer transformative safety opportunities at their maturity,” but caveats that this is a future technology.* As the agency makes clear on its website in bolded capital letters, cars that perform “all aspects of the driving task while you, as the driver, are available to take over driving if requested ... ARE NOT AVAILABLE ON TODAY’S VEHICLES FOR CONSUMER PURCHASE IN THE UNITED STATES.”11

* The Society of Automotive Engineers (SAE) (in collaboration with the International Organization for Standardization, or ISO) has established six levels of driving automation, from 0 to 5. Level 0, or no automation, represents cars without systems such as adaptive cruise control. On the other end of the spectrum, Levels 4 and 5 suggest cars that may not even require a steering wheel to be installed. Levels 1 and 2 include those systems with increasingly competent driver support features like those mentioned above. In all of these systems, however, the human is driving, “even if your feet are off the pedals and you are not steering.” It is at Level 3, where automation begins to take over, that the line between “self-driving” and “driverless” becomes fuzzier, with the vehicle relying less on the driver unless the vehicle requests their engagement. Levels 4 and 5 never require human intervention. See “SAE Levels of Driving Automation™ Refined for Clarity and International Audience,” SAE International Blog, May 3, 2021, /blog/sae-j3016-update.
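Read as a data structure, the footnote’s taxonomy reduces to a mapping from level to who is responsible for driving. The sketch below is a hypothetical summary for orientation only; the enum names and helper paraphrase SAE J3016 rather than quoting it:

```python
# Illustrative summary of the SAE J3016 levels described in the footnote.
# Enum names and comments paraphrase the taxonomy; they are not the
# standard's own wording.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0       # no driver support features
    DRIVER_ASSISTANCE = 1   # limited support features; the human is driving
    PARTIAL_AUTOMATION = 2  # combined support features; the human is still driving
    CONDITIONAL = 3         # the vehicle drives, but may request the human take over
    HIGH_AUTOMATION = 4     # no human intervention needed in defined conditions
    FULL_AUTOMATION = 5     # no human intervention needed at all

def human_is_driving(level: SAELevel) -> bool:
    """Through Level 2 the human is driving, even hands-off and feet-off."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```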


Even if these cars were available, it is important to consider the possibility that while autonomy might eliminate certain kinds of accidents or human errors (like distracted driving), it has the potential to create new ones (like over-trusting autopilot).12

Studies suggest that ADAS adoption by drivers is often opportunistic, and simply a byproduct of upgrading their vehicles. Drivers learn about the vehicle’s capabilities in an ad hoc manner, sometimes just receiving an over-the-air software update that comes with written notes. There are no exams or certifications required for these updates. Studies have also shown that where use of an ADAS system is solely experiential, such as when a driver adopts an autonomous vehicle without prior training, human misuse or misunderstanding of ADAS systems can happen after only a few encounters behind the wheel.13 Furthermore, at least one study found that drivers who are exposed to more capable automated systems first tended to establish a baseline of trust when interacting with other (potentially less capable) automated systems.14 This trust and confidence in ADAS vehicles can manifest as distracted driving, to the point of drivers ignoring warnings, taking longer to react to emergencies, or taking risks they would not take in the absence of automation.15

Behind the Wheel: Tesla’s Autopilot and the Human Element

In the weeks leading up to the first fatal U.S. accident involving Tesla’s Autopilot in 2016, the company’s then-president, Jon McNeill, personally tested the system in a Model X. In an email following his test, McNeill praised the system’s seemingly flawless performance, admitting, “I got so comfortable under Autopilot that I ended up blowing by exits because I was immersed in emails or calls (I know, I know, not a recommended use).”16

Despite marketing that suggests the Tesla Full Self-Driving Capability (FSD) might achieve full autonomy without human intervention, these features currently reside firmly within the suite of ADAS capabilities.17 Investigations into that first fatal accident found that the driver had been watching a movie and had ignored multiple alerts to maintain hands on the wheel when the Autopilot failed to distinguish a white trailer from a bright sky, leading to a collision that killed the driver.18 Since then, there have been a range of incidents involving Tesla’s Autopilot suite of software, which includes what is called a “Full Self-Driving Capability.” These incidents led the National Highway Traffic Safety Administration (NHTSA) to examine nearly one thousand crashes and launch over 40 investigations into accidents in which Autopilot features were reported to have been in use.19 In its initial investigations, NHTSA found “at least 13 crashes involving one or more fatalities and many more involving serious injuries in which foreseeable driver misuse of the system played an apparent role.”20 Also among NHTSA’s conclusions was that “Autopilot’s design was not sufficient to maintain drivers’ engagement.”21

In response to NHTSA’s investigation and increasing scrutiny, in December 2023 Tesla issued a safety recall of two million of its vehicles equipped with the Autosteer functionality.22 In its recall announcement, Tesla acknowledged that:

“In certain circumstances when Autosteer is engaged, the prominence and scope of the feature’s controls may not be sufficient to prevent driver misuse of the SAE Level 2 advanced driver-assistance feature.”23

As a part of this recall, Tesla sought to address the driver engagement problem with an over-the-air software update that added more controls and alerts to “encourage the driver to adhere to their continuous driving responsibility whenever Autosteer is engaged.” That encouragement manifested as:

“increasing the prominence of visual alerts on the user interface, simplifying engagement and disengagement of Autosteer, additional checks upon engaging Autosteer and … eventual suspension from Autosteer use if the driver repeatedly fails to demonstrate continuous and sustained driving responsibility while the feature is engaged.”24
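The escalation ladder Tesla describes—more prominent alerts, extra checks, eventual suspension—can be pictured as a small strike-based state machine. The sketch below is a minimal illustration under assumed thresholds; the class name, counters, and cutoffs are hypothetical, not Tesla’s implementation:

```python
# Hypothetical strike-based engagement monitor, loosely modeled on the
# escalation Tesla's recall notice describes. All thresholds are invented.
from dataclasses import dataclass

@dataclass
class EngagementMonitor:
    alerts_per_strike: int = 3    # unanswered alerts that count as one strike
    strikes_to_suspend: int = 5   # strikes before the feature is suspended
    unanswered: int = 0
    strikes: int = 0
    suspended: bool = False

    def on_alert(self, driver_responded: bool) -> str:
        """Process one engagement alert and return the resulting action."""
        if self.suspended:
            return "feature unavailable"
        if driver_responded:
            self.unanswered = 0            # responsiveness resets the counter
            return "continue"
        self.unanswered += 1
        if self.unanswered >= self.alerts_per_strike:
            self.unanswered = 0
            self.strikes += 1              # sustained inattention earns a strike
        if self.strikes >= self.strikes_to_suspend:
            self.suspended = True          # the "eventual suspension" step
            return "suspend feature"
        return "escalate alert prominence"
```

A mechanism like this can only police behavior while the feature is engaged; as the surrounding text argues, it does not by itself recalibrate a driver’s underlying trust.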

Training or certification was not included with the software update; however, a text summary of the software update was provided for users to optionally review, and videos of users indicate that the instructions were easy to ignore. Users also had the option to ignore safety features in the update altogether. The efficacy of these specific changes (either individually or in total) is not yet clear. In April 2024, NHTSA launched a new investigation into Tesla’s Autosteer and the software update it performed in December 2023, but, as explained earlier, experiential encounters alone can improperly calibrate the trust new drivers place in their autonomous vehicles.25

Case Study 1: Key Takeaways from the User-Level Case Study

● Wider gaps between perceived and actual technology capabilities can lead to, or otherwise exacerbate, automation bias.

● Automation bias is shaped by the user’s level of prior knowledge and experience, which should be of particular concern in safety-critical situations.


In the U.S., drivers are often considered the responsible party in car accidents, particularly when it comes to the role of the driver and the role of the system.26 As David Zipper, Senior Fellow at the MIT Mobility Initiative, explained:

“In the United States, the responsibility for road safety largely falls on the individual sitting behind the wheel, or riding a bike, or crossing the street. American transportation departments, law enforcement agencies, and news outlets frequently maintain that most crashes—indeed, 94 percent of them, according to the most widely circulated statistic—are solely due to human error. Blaming the bad decisions of road users implies that nobody else could have prevented them.”27

However, even the most experienced and knowledgeable human users are not free from the risk of overreliance in the face of poor interface and system design, and there is a peculiar dynamic at play with autonomous vehicles: when incidents occur, blame often falls on the software.28 While the software may not be blameless, the combination of the system and inappropriate human use must also be considered in identifying the causes of harm. Therefore, ways of intervening or monitoring to prevent inappropriate use by drivers should be sought out alongside ways of improving the system’s technical features and design.

Case Study 2: How Technical Design Factors Can Induce Automation Bias

A review of crashes in the aviation industry demonstrates that even in cases where users are highly trained, actively monitored, possess a thorough understanding of the technology’s capabilities and limitations, and can be assured not to misuse or abuse the technology, a poorly designed interface can make automation bias more likely.

Fields dedicated to optimizing these links between the user and the system, such as human factors engineering and UI/UX design, are devoted to integrating and applying knowledge about human capabilities, limitations, and psychology into the design and development of technological systems.29 Physical details, from the size and location of a button to the shape of a lever or selection menu to the color of a flashing light or image, seem small or insignificant. Yet these features can play a pivotal role in shaping human interactions with technology and ultimately determining a system’s utility.

The importance of considering human interaction in the design and operation of these systems cannot be overstated—neglecting the human element in design can lead to inefficiencies at best, and unsafe and dangerous conditions at worst. Poorly designed interfaces, characterized by features as simple as drop-down menus with a lack of clear distinctions, were, for example, at the core of the accidental issuance of a widespread emergency alert in Hawaii that warned of an imminent, inbound ballistic missile attack.30

Design choices, intentionally or not, shape and establish specific behavioral pathways for how humans operate and rely on the systems themselves. In other words, these design choices can directly embed and/or exacerbate certain cognitive biases, including automation bias. These design choices are especially consequential when it comes to hazard alerts, such as visual, haptic, and auditory alarms. The commercial aviation industry illustrates how automation bias can be directly influenced by system designs:

The Human-Machine Interface: Airbus and Boeing Design Philosophies

Automation has been central to the evolution of the airplane since its inception—it took less than ten years from the first powered flight to the earliest iterations of autopilot.31 In the years since, aircraft flight management systems, including those that are AI-enabled, have become successively more capable. Today, a great deal of the routine work of flying a plane is handled by automated systems. This has not rendered pilots obsolete, however.32 On the contrary, rather than directly engaging with their surroundings, pilots must now incorporate the aircraft system’s interpretation of and reaction to external conditions before determining the most appropriate response. While flying has, overall, become safer due to automation, automation bias represents an ever-present risk factor.33 As early as 2002, a joint FAA-industry study warned that the significant challenge for the industry would be to manufacture aircraft and design procedures that are less error-prone and more robust to errors involving incorrect human response after failure.34

While there are international standards as well as a general consensus among aircraft manufacturers that flight crews are ultimately responsible for safe aircraft operation, the two leading commercial aircraft providers in the United States, Airbus and Boeing, are known for their opposite design philosophies.35 The differences between them illustrate different approaches to the automation bias challenge.

In Airbus aircraft, the automated system is designed to insulate and protect pilots and flight crews from human error. The pilot’s control is bounded by “hard” limits, designed to allow for manipulation of the flight controls but prohibitive of any changes in altitude or speed, for example, that would lead to structural damage or loss of control of the aircraft (in other words, actions to exceed the manufacturer’s defined flight envelope).


In contrast, in Boeing aircraft, the pilot is the absolute and final authority and can use natural actions with the systems to essentially “insist” upon a course of action. These “soft” limits exist to warn and alert the pilot but can be overridden and disregarded, even if it means the aircraft will exceed the manufacturer’s flight envelope.
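The contrast between the two philosophies can be reduced to a few lines of control logic. The sketch below is illustrative only; the bank-angle limit, function names, and interfaces are invented for the example, not Airbus or Boeing avionics code:

```python
# Illustrative "hard" vs. "soft" flight-envelope limits. The 67-degree
# bank limit is an invented example value, not a certification figure.
MAX_BANK_DEG = 67.0

def hard_limit_command(requested_bank_deg: float) -> float:
    """Airbus-style hard limit: the command is clamped to the envelope,
    no matter how firmly the pilot insists."""
    return max(-MAX_BANK_DEG, min(MAX_BANK_DEG, requested_bank_deg))

def soft_limit_command(requested_bank_deg: float, warn=print) -> float:
    """Boeing-style soft limit: the system warns, but the pilot's command
    stands, even past the envelope."""
    if abs(requested_bank_deg) > MAX_BANK_DEG:
        warn(f"requested bank {requested_bank_deg:.0f} deg exceeds envelope; "
             "pilot authority retained")
    return requested_bank_deg

# The same 75-degree request is clamped under hard limits (-> 67.0) but
# honored, after a warning, under soft limits (-> 75.0).
```

The design trade-off follows directly: the hard limit removes one class of human error at the cost of pilot authority, while the soft limit preserves authority at the cost of relying on the pilot to heed the warning.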

These design differences may help explain why some airlines only operate single-type fleets; pilots typically stick to one type of aircraft, and cross-training pilots is possible but costly and, therefore, uncommon.36

Table 2 shows an FAA summary of the different design philosophies:

Table 2: Airbus and Boeing Design Philosophies

Airbus:

● Automation must not reduce overall aircraft reliability; it should enhance aircraft and systems safety, efficiency, and economy.

● Automation must not lead the aircraft out of the safe flight envelope, and it should maintain the aircraft within the normal flight envelope.

Boeing:

● The pilot is the final authority for the operation of the airplane.

● Both crew members are ultimately responsible for the safe conduct of the flight.

● Flight crew tasks, in o
