Executive Summary

Automation bias is the tendency for an individual to over-rely on an automated system. It can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information.

Automation bias can endanger the successful use of artificial intelligence by eroding the user’s ability to meaningfully control an AI system. As AI systems have proliferated, so too have incidents where these systems have failed or erred in various ways, and human users have failed to correct or recognize these behaviors.

This study provides a three-tiered framework to understand automation bias by examining the role of users, technical design, and organizations in influencing automation bias. It presents case studies on each of these factors, then offers lessons learned and corresponding recommendations.
User Bias: Tesla Case Study

Factors influencing bias:
● User’s personal knowledge, experience, and familiarity with a technology.
● User’s degree of trust and confidence in themselves and the system.

Lessons learned from case study:
● Disparities between user perceptions and system capabilities contribute to bias and may lead to harm.

Recommendation:
● Create and maintain qualification standards for user understanding. User misunderstanding of a system’s capabilities or limitations is a significant contributor to incidents of harm. Since user understanding is critical to safe operation, system developers and vendors must invest in clear communications about their systems.
Center for Security and Emerging Technology
Technical Design Bias: Airbus and Boeing Design Philosophies Case Study

Factors influencing bias:
● The system’s overall design, user interface, and how it provides user feedback.

Lessons learned from case study:
● Even with highly trained users such as pilots, system interfaces contribute to automation bias.
● Different design philosophies have different risks. No single approach is necessarily perfect, and all require clear, consistent communication and application.

Recommendation:
● Value and enforce consistent design and design philosophies that account for human factors, especially for systems likely to be upgraded. When necessary, justify and make clear any departures from a design philosophy to legacy users. Where possible, develop common design criteria, standards, and expectations, and consistently communicate them (either through organizational policy or industry standard) to reduce the risk of confusion and automation bias.
Organizational Policies and Procedure Bias: Army Patriot Missile System vs. Navy AEGIS Combat System Case Study

Factors influencing bias:
● Organizational training, processes, and policies.

Lessons learned from case study:
● Organizations can employ the same tools and technologies in very different ways based on protocols, operations, doctrine, training, and certification. Choices in each of these areas of governance can embed automation biases.
● Organizational efforts to mitigate automation bias can be successful, but mishaps are still possible, especially when human users are under stress.
Recommendation:
● Where autonomous systems are used by organizations, design and regularly review organizational policies appropriate for technical capabilities and organizational priorities. Update policies and processes as technologies change to best account for new capabilities and mitigate novel risks. If there is a mismatch between the goals of the organization and policies governing how capabilities are used, automation bias and poor outcomes are more likely.

Across these three case studies, it is clear that “human-in-the-loop” cannot prevent all accidents or errors. Properly calibrating technical and human fail-safes for AI, however, poses the best chance for mitigating the risks of using AI systems.
Table of Contents

Executive Summary
Introduction
What Is Automation Bias?
A Framework for Understanding and Mitigating Automation Bias
Case Studies
Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias
Tesla’s Road to Autonomy
Behind the Wheel: Tesla’s Autopilot and the Human Element
Case Study 2: How Technical Design Factors Can Induce Automation Bias
The Human-Machine Interface: Airbus and Boeing Design Philosophies
Boeing Incidents
Airbus Incidents
Case Study 3: How Organizations Can Institutionalize Automation Bias
Divergent Organizational Approaches to Automation: Army vs. Navy
Patriot: A Bias Towards the System
AEGIS: A Bias Towards the Human
Conclusion
Authors
Acknowledgments
Endnotes
Introduction

In contemporary discussions about artificial intelligence, a critical but often overlooked aspect is automation bias—the tendency of human users to overly rely on AI systems. Left unaddressed, automation bias can and has harmed both AI and autonomous system users and innocent bystanders, in examples that range from false legal accusations to death. Automation bias, therefore, presents a significant challenge in the real-world application of AI, particularly in high-stakes contexts such as national security and military operations.

Successful deployment of AI systems relies on a complex interdependence between AI systems and the humans responsible for operating them. Addressing automation bias is necessary to ensure successful, ethical, and safe AI deployment, especially when the consequences of overreliance or misuse are most severe. As societies incorporate AI into systems, decision-makers thus need to be prepared to mitigate the risks associated with automation bias.

Automation bias can manifest and be intercepted at the user, technical design, and organizational levels. We provide three case studies that explain how factors at each of these levels can make automation bias more or less likely, derive lessons learned, and highlight possible mitigation strategies to alleviate these complex issues.
What Is Automation Bias?

Automation bias is the tendency for a human user to overly rely on an automated system, reflecting a cognitive bias that emerges from the interaction between a human and an AI system.

When affected by automation bias, users tend to decrease their vigilance in monitoring both the automated system and the task it is performing.1 Instead, they place excessive trust in the system’s decision-making capabilities and inappropriately delegate more responsibility to the system than it is designed to handle. In severe instances, users might favor the system’s recommendations even when presented with contradictory evidence.

Automation bias most often presents in two ways: as an error of omission, when a human fails to take action because the automation did not alert them (as discussed in the first case study on vehicles); or as an error of commission, when a human follows incorrect directions from the automation (as discussed in the case study on the Patriot Missile System).2 In this analysis, we also discuss an instance where a bias against the automation causes harm (i.e., the third case study on the AEGIS weapons system).

Automation bias does not always result in catastrophic events, but it increases the likelihood of such outcomes. Mitigating automation bias can help to improve human oversight, operation, and management of AI systems and thus mitigate some risks associated with AI.

The challenge of automation bias has only grown with the introduction of progressively more sophisticated AI-enabled systems and tools across different application areas including policing, immigration, social welfare benefits, consumer products, and militaries (see Box 1). Hundreds of incidents have occurred where AI, algorithms, and autonomous systems were deployed without adequate training for users, clear communication about their capabilities and limitations, or policies to guide their use.3
Box 1. Automation Bias and the UK Post Office Scandal

In a notable case of automation bias, a faulty accounting system employed by the UK Post Office led to the wrongful prosecution of 736 UK sub-postmasters for embezzlement. Although it did not involve an AI system, automation bias and the myth of “infallible systems” played a significant role—users willingly accepted system errors despite substantial evidence to the contrary, favoring the unlikely case that hundreds of postmasters were involved in theft and fraud.4 As one author of an ongoing study into the case highlighted, “This is not a scandal about technological failing; it is a scandal about the gross failure of management.”5

While automation bias is a challenging problem, it is a tractable issue that society can tackle throughout the AI development and deployment process. The avenues through which automation bias can manifest—namely at the user, technical, and organizational levels—also represent points of intervention to mitigate automation bias.
A Framework for Understanding and Mitigating Automation Bias

Technology must be fit for purposes, and users must understand those purposes to be able to appropriately control systems. Furthermore, knowing when to trust AI and when and how to closely monitor AI system outputs is critical to its successful deployment.6 A variety of factors calibrate trust and reliance in the minds of operators, and they generally fall into one of three categories (though each category can be shaped by the context within which the interaction may occur, such as situations of extreme stress or, conversely, fatigue):7

• factors intrinsic to the human user, such as biases, experience, and confidence in using the system;
• factors inherent to the AI system, such as its failure modes (the specific ways in which it might malfunction or underperform) and how it presents and communicates information; and,
• factors shaped by organizational or regulatory rules and norms, mandatory procedures, oversight requirements, and deployment policies.

Organizations implementing AI must avoid myopically focusing only on the technical “machine” side to ensure the successful deployment of AI. Management of the human aspect of these systems deserves equal consideration, and management strategies should be adjusted according to context.

Recognizing these complexities and potential pitfalls, this paper presents case studies for three controllable factors affecting automation bias (user, technical, organizational) that correspond to the aforementioned factors that shape the dynamics of human-machine interaction (see Table 1).
Table 1. Factors Affecting Automation Bias

Factor: User
Description: User’s personal knowledge, experience, and familiarity with a technology; user’s degree of trust and confidence in themselves and the system
Case study: Tesla and driving automation

Factor: Technical Design
Description: The system’s overall design, the structure of its user interface, and how it provides user feedback
Case study: Airbus and Boeing design philosophies

Factor: Organization
Description: Organizational processes shaping AI use and reliance
Case study: U.S. Army’s management and operation of the Patriot Missile System vs. U.S. Navy’s management and operation of the AEGIS Combat System
An additional layer of task-specific factors, such as time constraints, task difficulty, workload, and stress, can exacerbate or alternatively reduce automation bias.8 These factors should be duly considered in the design of the system, as well as training and organizational policies, but are beyond the scope of this paper.
Case Studies

Case Study 1: How User Idiosyncrasies Can Lead to Automation Bias

Individuals bring their personal experiences—and biases—to their interactions with AI systems.9 Research shows that greater familiarity and direct experience with self-driving cars and autonomous vehicle technologies make individuals more likely to support autonomous vehicle development and consider them safe to use. Conversely, behavioral science research demonstrates that a lack of technological knowledge can lead to fear and rejection, while having only a little familiarity with a particular technology can result in overconfidence in its capabilities.10 The case of increasingly “driverless” cars illustrates how the individual characteristics and experiences of users can shape their interactions and automation bias. Furthermore, as the case study on Tesla below illuminates, even system improvements designed to mitigate the risks of automation bias may have limited effectiveness in the face of a person’s bias.
Tesla’s Road to Autonomy

Cars have become increasingly automated over time. Manufacturers and engineers have introduced cruise control and a flurry of other advanced driver assistance systems (ADAS) aimed at improving driving safety and reducing the likelihood of human error, alongside other features such as lane drift systems and blind spot sensors. The U.S. National Highway Traffic Safety Administration suggests that full automation has the potential to “offer transformative safety opportunities at their maturity,” but caveats that these are a future technology.* As it makes clear on its website in bolded capital letters, cars that perform “all aspects of the driving task while you, as the driver, are available to take over driving if requested ... ARE NOT AVAILABLE ON TODAY’S VEHICLES FOR CONSUMER PURCHASE IN THE UNITED STATES.”11 Even if these cars were available, it is important to consider the possibility that while autonomy might eliminate certain kinds of accidents or human errors (like distracted driving), it has the potential to create new ones (like over-trusting autopilot).12

* The Society of Automotive Engineers (SAE) (in collaboration with the International Organization for Standardization, or ISO) has established six levels of driving automation, from 0 to 5. Level 0, or no automation, represents cars without systems such as adaptive cruise control. On the other end of the spectrum, Levels 4 and 5 suggest cars that may not even require a steering wheel to be installed. Levels 1 and 2 include those systems with increasingly competent driver support features like those mentioned above. In all of these systems, however, the human is driving, “even if your feet are off the pedals and you are not steering.” It is at Level 3, where automation begins to take over, that the line between “self-driving” and “driverless” becomes fuzzier, with the vehicle relying less on the driver unless the vehicle requests their engagement. Levels 4 and 5 never require human intervention. See “SAE Levels of Driving Automation™ Refined for Clarity and International Audience,” SAE International Blog, May 3, 2021, /blog/sae-j3016-update.
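The SAE taxonomy described in the note above can be captured in a small lookup table. The following is a minimal sketch; the level names and descriptions paraphrase the SAE J3016 summary rather than quoting the standard:

```python
# SAE J3016 driving-automation levels, paraphrased from the summary above.
# At Levels 0-2 the human is driving even when support features are active;
# at Level 3 the vehicle drives but may request the driver's engagement;
# Levels 4-5 never require human intervention.
SAE_LEVELS = {
    0: ("No automation", "human drives"),
    1: ("Driver assistance", "human drives"),
    2: ("Partial automation", "human drives"),
    3: ("Conditional automation", "vehicle drives, may request driver takeover"),
    4: ("High automation", "vehicle drives, no takeover required"),
    5: ("Full automation", "vehicle drives, no takeover required"),
}

def human_is_driving(level: int) -> bool:
    """True when SAE J3016 still considers the human to be driving."""
    return level <= 2
```

The boundary captured by `human_is_driving` is exactly where the report locates the automation-bias risk: drivers of Level 2 systems remain responsible for the driving task even when the car appears to be handling it.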
Studies suggest that ADAS adoption by drivers is often opportunistic, and simply a byproduct of upgrading their vehicles. Drivers learn about the vehicle’s capabilities in an ad-hoc manner, sometimes just receiving an over-the-air software update that comes with written notes. There are no exams or certifications required for these updates. Studies have also shown that where use of an ADAS system is solely experiential, such as when a driver adopts an autonomous vehicle without prior training, human misuse or misunderstanding of ADAS systems can happen after only a few encounters behind the wheel.13 Furthermore, at least one study found that drivers who are exposed to more capable automated systems first tended to establish a baseline of trust when interacting with other (potentially less capable) automated systems.14 This trust and confidence in ADAS vehicles can manifest as distracted driving, to the point of drivers ignoring warnings, taking longer to react to emergencies, or taking risks they would not take in the absence of automation.15
Behind the Wheel: Tesla’s Autopilot and the Human Element

In the weeks leading up to the first fatal U.S. accident involving Tesla’s Autopilot in 2016, the company’s then-president, Jon McNeill, personally tested the system in a Model X. In an email following his test, McNeill praised the system’s seemingly flawless performance, admitting, “I got so comfortable under Autopilot that I ended up blowing by exits because I was immersed in emails or calls (I know, I know, not a recommended use).”16

Despite marketing that suggests the Tesla Full Self-Driving Capability (FSD) might achieve full autonomy without human intervention, these features currently reside firmly within the suite of ADAS capabilities.17 Investigations into that first fatal accident found that the driver had been watching a movie and had ignored multiple alerts to maintain hands on the wheel when the Autopilot failed to distinguish a white trailer from a bright sky, leading to a collision that killed the driver.18 Since then, there have been a range of incidents involving Tesla’s Autopilot suite of software, which includes what is called a “Full Self-Driving Capability.” These incidents led the National Highway Traffic Safety Administration (NHTSA) to examine nearly one thousand crashes and launch over 40 investigations into accidents in which Autopilot features were reported to have been in use.19 In its initial investigations, NHTSA found “at least 13 crashes involving one or more fatalities and many more involving serious injuries in which foreseeable driver misuse of the system played an apparent role.”20 Also among NHTSA’s conclusions was that “Autopilot’s design was not sufficient to maintain drivers’ engagement.”21
In response to NHTSA’s investigation and increasing scrutiny, in December 2023 Tesla issued a safety recall of two million of its vehicles equipped with the Autosteer functionality.22 In its recall announcement, Tesla acknowledged that:

“In certain circumstances when Autosteer is engaged, the prominence and scope of the feature’s controls may not be sufficient to prevent driver misuse of the SAE Level 2 advanced driver-assistance feature.”23

As a part of this recall, Tesla sought to address the driver engagement problem with an over-the-air software update that added more controls and alerts to “encourage the driver to adhere to their continuous driving responsibility whenever Autosteer is engaged.” That encouragement manifested as:

“increasing the prominence of visual alerts on the user interface, simplifying engagement and disengagement of Autosteer, additional checks upon engaging Autosteer and … eventual suspension from Autosteer use if the driver repeatedly fails to demonstrate continuous and sustained driving responsibility while the feature is engaged.”24
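The escalation described in the recall notice (alerts that intensify and ultimately suspend the feature) amounts to a simple counter-based policy. The following sketch is illustrative only; the class name, states, and thresholds are our assumptions, not Tesla’s actual parameters or implementation:

```python
class EngagementMonitor:
    """Illustrative escalation policy: repeated failures to demonstrate
    driving responsibility trigger alerts and, eventually, suspension.
    The threshold is hypothetical, not Tesla's actual value."""

    def __init__(self, strikes_before_suspension: int = 3):
        self.strikes = 0
        self.limit = strikes_before_suspension
        self.suspended = False

    def record_check(self, driver_attentive: bool) -> str:
        # Once suspended, the feature stays unavailable for this drive.
        if self.suspended:
            return "autosteer suspended"
        if driver_attentive:
            return "ok"
        self.strikes += 1
        if self.strikes >= self.limit:
            self.suspended = True
            return "autosteer suspended"
        return "visual alert"  # escalating prominence on the user interface
```

Note what such a policy cannot do: it only reacts to missed attention checks, so a driver who satisfies the checks while remaining disengaged in practice (the pattern described throughout this case study) is never caught by it.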
Training or certification was not included with the software update; however, a text summary of the software update was provided for users to optionally review, and videos of users indicate that the instructions were easy to ignore. Users also had the option to ignore safety features in the update altogether. The efficacy of these specific changes (either individually or in total) is not yet clear. In April 2024, NHTSA launched a new investigation into Tesla’s Autosteer and the software update it performed in December 2023 but, as explained earlier, experiential encounters alone can improperly calibrate the trust new drivers place in their autonomous vehicles.25
Case Study 1: Key Takeaways from User Level Case Study

● Wider misalignment between perceived and actual technology capabilities can lead to, or otherwise exacerbate, automation bias.
● Automation bias is shaped by the user’s level of prior knowledge and experience, which should be of particular concern in safety-critical situations.
In the U.S., drivers are often considered the responsible party in car accidents, particularly when it comes to the role of the driver and the role of the system.26 As David Zipper, Senior Fellow at the MIT Mobility Initiative, explained:

“In the United States, the responsibility for road safety largely falls on the individual sitting behind the wheel, or riding a bike, or crossing the street. American transportation departments, law enforcement agencies, and news outlets frequently maintain that most crashes—indeed, 94 percent of them, according to the most widely circulated statistic—are solely due to human error. Blaming the bad decisions of road users implies that nobody else could have prevented them.”27

However, even the most experienced and knowledgeable human users are not free from the risk of overreliance in the face of poor interface and system design, and there is a peculiar dynamic at play with autonomous vehicles: When incidents occur, blame often falls on the software.28 While the software may not be blameless, the combination of the system and inappropriate human use must also be considered in identifying the causes of harm. Therefore, ways of intervening or monitoring to prevent inappropriate use by drivers should be sought out alongside ways of improving the system’s technical features and design.
Case Study 2: How Technical Design Factors Can Induce Automation Bias

A review of crashes in the aviation industry demonstrates that even in cases where users are highly trained, actively monitored, possess a thorough understanding of the technology’s capabilities and limitations, and can be assured not to misuse or abuse the technology, a poorly designed interface can make automation bias more likely.

Fields dedicated to optimizing these links between the user and the system, such as human factors engineering and UI/UX design, are devoted to integrating and applying knowledge about human capabilities, limitations, and psychology into the design and development of technological systems.29 Physical details, from the size and location of a button to the shape of a lever or selection menu to the color of a flashing light or image, seem small or insignificant. Yet these features can play a pivotal role in shaping human interactions with technology and ultimately determining a system’s utility.

The importance of considering human interaction in the design and operation of these systems cannot be overstated—neglecting the human element in design can lead to inefficiencies at best, and unsafe and dangerous conditions at worst. Poorly designed interfaces, characterized by features as simple as drop-down menus with a lack of clear distinctions, were, for example, at the core of the accidental issuance of a widespread emergency alert in Hawaii that warned of an imminent, inbound ballistic missile attack.30

Design choices, intentionally or not, shape and establish specific behavioral pathways for how humans operate and rely on the systems themselves. In other words, these design choices can directly embed and/or exacerbate certain cognitive biases, including automation bias. These design choices are especially consequential when it comes to hazard alerts, such as visual, haptic, and auditory alarms. The commercial aviation industry illustrates how automation bias can be directly influenced by system designs:
The Human-Machine Interface: Airbus and Boeing Design Philosophies

Automation has been central to the evolution of the airplane since its inception—it took less than ten years from the first powered flight to the earliest iterations of autopilot.31 In the years since, aircraft flight management systems, including those that are AI-enabled, have become successively capable. Today, a great deal of the routine work of flying a plane is handled by automated systems. This has not rendered pilots obsolete, however.32 On the contrary, pilots must now incorporate the aircraft system’s interpretation and reaction to external conditions before determining the most appropriate response, rather than directly engaging with their surroundings. While overall, flying has become safer due to automation, automation bias represents an ever-present risk factor.33 As early as 2002, a joint FAA-industry study warned that the significant challenge for the industry would be to manufacture aircraft and design procedures that are less error-prone and more robust to errors involving incorrect human response after failure.34
While there are international standards as well as a general consensus among aircraft manufacturers that flight crews are ultimately responsible for safe aircraft operation, the two leading commercial aircraft providers in the United States, Airbus and Boeing, are known for their opposite design philosophies.35 The differences between them illustrate different approaches to the automation bias challenge.

In Airbus aircraft, the automated system is designed to insulate and protect pilots and flight crews from human error. The pilot’s control is bounded by “hard” limits, designed to allow for manipulation of the flight controls but prohibitive of any changes in altitude or speed, for example, that would lead to structural damage or loss of control of the aircraft (in other words, actions to exceed the manufacturer’s defined flight envelope).

In contrast, in Boeing aircraft, the pilot is the absolute and final authority and can use natural actions with the systems to essentially “insist” upon a course of action. These “soft” limits exist to warn and alert the pilot but can be overridden and disregarded, even if it means the aircraft will exceed the manufacturer’s flight envelope.
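The contrast between hard and soft limits can be made concrete with a toy envelope check. This is a deliberately simplified sketch: the bank-angle bound, function names, and return values are invented for illustration and do not reflect either manufacturer’s actual flight control laws:

```python
# Toy contrast between "hard" (Airbus-style) and "soft" (Boeing-style)
# envelope limits. The bound below is a hypothetical illustration, not a
# real flight-envelope parameter.
MAX_BANK_DEG = 67  # hypothetical structural limit on bank angle

def hard_limit(commanded_bank: float) -> tuple[float, str]:
    """Airbus-style: the system clamps the pilot's command to the envelope."""
    if abs(commanded_bank) > MAX_BANK_DEG:
        clamped = max(-MAX_BANK_DEG, min(MAX_BANK_DEG, commanded_bank))
        return clamped, "command limited by envelope protection"
    return commanded_bank, "ok"

def soft_limit(commanded_bank: float) -> tuple[float, str]:
    """Boeing-style: the system warns, but the pilot's command stands."""
    if abs(commanded_bank) > MAX_BANK_DEG:
        return commanded_bank, "warning: outside flight envelope"
    return commanded_bank, "ok"
```

The automation-bias implications differ accordingly: a hard limit invites over-trust that the system will always keep the aircraft safe, while a soft limit relies on the pilot noticing and correctly interpreting the warning under stress.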
These design differences may help explain why some airlines only operate single-type fleets; pilots typically stick to one type of aircraft, and cross-training pilots is possible but costly and, therefore, uncommon.36

Table 2 shows an FAA summary of the different design philosophies:
Table 2: Airbus and Boeing Design Philosophies

Airbus:
● Automation must not reduce overall aircraft reliability; it should enhance aircraft and systems safety, efficiency, and economy.
● Automation must not lead the aircraft out of the safe flight envelope, and it should maintain the aircraft within the normal flight envelope.

Boeing:
● The pilot is the final authority for the operation of the airplane.
● Both crew members are ultimately responsible for the safe conduct of the flight.
● Flight crew tasks, in o