Parallel Programming
Instructor: Zhang Weizhe (张伟哲)
Computer Network and Information Security Technique Research Center,
School of Computer Science and Technology, Harbin Institute of Technology

Programming Using the Message-Passing Paradigm

Outline
- Introduction
- Programming with MPI
- Programming with PVM
- Comparison of MPI and PVM

A Parallel Machine Model
In the cluster model, a node can communicate with other nodes by sending and receiving messages over an interconnection network; each node is itself a von Neumann computer.

Principles of Message-Passing Programming
- Each processor in a message-passing program runs a separate process (sub-program, task).
- The logical view of a machine supporting the message-passing paradigm consists of p processes, each with its own exclusive address space. All variables are private.
- Each data element must belong to one of the partitions of the space; hence, data must be explicitly partitioned and placed.
- Processes communicate via special subroutine calls.
- All interactions (read-only or read/write) require the cooperation of two processes: the process that has the data and the process that wants to access the data.

SPMD: Single Program Multiple Data
- The same program runs everywhere.
- Each process only knows and operates on a small part of the data.

MPMD: Multiple Program Multiple Data
- Each process performs a different function (input, problem setup, solution, output, display).

Messages
Messages are packets of data moving between processes. The message-passing system has to be told the following information: the sending process, the source location, the data type, the data length, the receiving process(es), the destination location, and the destination size.

Message Passing
Message-passing programs are often written using the asynchronous (异步) or loosely synchronous (松散同步) paradigms. A synchronous communication does not complete until the message has been received; an asynchronous communication completes as soon as the message is on its way.
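The SPMD model introduced above (the same program everywhere, with behavior determined by the process rank) can be sketched as follows. This is a minimal illustration, not course code; it assumes an MPI installation (compile with mpicc, launch with mpirun):

```c
#include <mpi.h>
#include <stdio.h>

/* SPMD: every process runs this same program; behavior differs by rank. */
int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* one branch acts as the "master": e.g. input and problem setup */
        printf("master: %d processes total\n", size);
    } else {
        /* the other processes each operate on their own partition of the data */
        printf("worker %d: computing my partition\n", rank);
    }

    MPI_Finalize();
    return 0;
}
```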
What is MPI?
The development of MPI started in April 1992. MPI was designed by the MPI Forum (a diverse collection of implementors, library writers, and end users) quite independently of any specific implementation.
Website: //mpi/

MPI defines a standard library for message-passing that can be used to develop portable message-passing programs using either C or Fortran. A fixed set of processes is created at program initialization, one process per processor:

    mpirun -np 5 program

- Each process knows its personal number (rank).
- Each process knows the number of all processes.
- Each process can communicate with other processes.
- A process cannot create new processes (in MPI-1).

MPI: the Message Passing Interface
The minimal set of MPI routines:

    MPI_Init         Initializes MPI.
    MPI_Finalize     Terminates MPI.
    MPI_Comm_size    Determines the number of processes.
    MPI_Comm_rank    Determines the label of the calling process.
    MPI_Send         Sends a message.
    MPI_Recv         Receives a message.

Starting and Terminating the MPI Library
MPI_Init is called prior to any calls to other MPI routines; its purpose is to initialize the MPI environment. MPI_Finalize is called at the end of the computation, and it performs various clean-up tasks to terminate the MPI environment. The prototypes of these two functions are:

    int MPI_Init(int *argc, char ***argv)
    int MPI_Finalize()

MPI_Init also strips off any MPI-related command-line arguments. All MPI routines, data types, and constants are prefixed by "MPI_". The return code for successful completion is MPI_SUCCESS.

Communicators
A communicator defines a communication domain: a set of processes that are allowed to communicate with each other. Information about communication domains is stored in variables of type MPI_Comm. Communicators are used as arguments to all message-transfer MPI routines. A process can belong to many different (possibly overlapping) communication domains. MPI defines a default communicator called MPI_COMM_WORLD, which includes all the processes.

Querying Information
The MPI_Comm_size and MPI_Comm_rank functions are used to determine the number of processes and the label of the calling process, respectively. The calling sequences of these routines are as follows:

    int MPI_Comm_size(MPI_Comm comm, int *size)
    int MPI_Comm_rank(MPI_Comm comm, int *rank)
The rank of a process is an integer that ranges from zero up to the size of the communicator minus one.

Our First MPI Program

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int npes, myrank;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &npes);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        printf("From process %d out of %d, Hello World!\n", myrank, npes);
        MPI_Finalize();
        return 0;
    }

Parallel Programming With MPI
- Communication (通讯)
  - Basic send/receive (blocking)
  - Non-blocking
  - Collective
- Synchronization (同步)
  - Implicit in point-to-point communication
  - Global synchronization via collective communication
- Parallel I/O (MPI-2)

Basic Sending and Receiving Messages
The basic functions for sending and receiving messages in MPI are MPI_Send and MPI_Recv, respectively. The calling sequences of these routines are as follows:

    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                 int source, int tag, MPI_Comm comm, MPI_Status *status)

MPI_Send
- The message to be sent is determined by pointing to the memory block (buffer) that contains the message. The triad used to point to the buffer (buf, count, type) is included in the parameters of practically all data-passing functions.
- The processes among which data is passed should belong to the communicator specified in the function MPI_Send.
- The parameter tag is needed only when it is necessary to differentiate among the messages being passed; otherwise, an arbitrary integer can be used as the parameter value.

MPI_Recv
For MPI_Recv, MPI allows specification of wildcard arguments for both source and tag. If source is set to MPI_ANY_SOURCE, then any process of the communication domain can be the source of the message. If tag is set to MPI_ANY_TAG, then messages with any tag are accepted. On the receive side, the message must be of length equal to or less than the length field specified.

MPI Datatypes
    MPI Datatype          C Datatype
    MPI_CHAR              signed char
    MPI_SHORT             signed short int
    MPI_INT               signed int
    MPI_LONG              signed long int
    MPI_UNSIGNED_CHAR     unsigned char
    MPI_UNSIGNED_SHORT    unsigned short int
    MPI_UNSIGNED          unsigned int
    MPI_UNSIGNED_LONG     unsigned long int
    MPI_FLOAT             float
    MPI_DOUBLE            double
    MPI_LONG_DOUBLE       long double
    MPI_BYTE              (no C equivalent)
    MPI_PACKED            (no C equivalent)

Point-to-point Example

    /* Process 0 */
    #define TAG 999
    float a[10];
    int dest = 1;
    MPI_Send(a, 10, MPI_FLOAT, dest, TAG, MPI_COMM_WORLD);

    /* Process 1 */
    #define TAG 999
    MPI_Status status;
    int count;
    float b[20];
    int sender = 0;
    MPI_Recv(b, 20, MPI_FLOAT, sender, TAG, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_FLOAT, &count);

Non-blocking Communication
In order to overlap communication with computation, MPI provides a pair of functions for performing non-blocking (非阻塞) send and receive operations:

    int MPI_Isend(void *buf, int count, MPI_Datatype datatype,
                  int dest, int tag, MPI_Comm comm, MPI_Request *request)
    int MPI_Irecv(void *buf, int count, MPI_Datatype datatype,
                  int source, int tag, MPI_Comm comm, MPI_Request *request)

These operations return before the operations have been completed. The function MPI_Test tests whether or not the non-blocking send or receive operation identified by its request has finished:

    int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)

The following scheme of combining the computations with the execution of the non-blocking communication operation is possible: start the non-blocking operation, perform useful computations while it is in progress, and then test (or wait) for its completion.
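This overlap scheme can be sketched as follows. The sketch assumes an MPI installation and at least two processes; the buffer size and tag are illustrative. MPI_Wait (the blocking counterpart of MPI_Test) is used on the receive side:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, flag = 0, i;
    double buf[1000];
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (i = 0; i < 1000; i++) buf[i] = i;
        /* start the send, then keep computing while it progresses */
        MPI_Isend(buf, 1000, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        while (!flag) {
            /* ... useful computation here ... */
            MPI_Test(&req, &flag, &status);   /* poll for completion */
        }
    } else if (rank == 1) {
        MPI_Irecv(buf, 1000, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
        /* ... useful computation here ... */
        MPI_Wait(&req, &status);   /* block until the message has arrived */
    }

    MPI_Finalize();
    return 0;
}
```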
Evaluating MPI Program Execution Time
The execution time needs to be known in order to estimate the speedup obtained by parallel computation. The time of the current moment of the program execution is obtained by means of the function MPI_Wtime, which returns the elapsed wall-clock time in seconds. The accuracy of time measurement can depend on the environment of the parallel program execution; the function MPI_Wtick can be used to determine the current resolution of the timer:

    double MPI_Wtime(void)
    double MPI_Wtick(void)

MPI Collective Communication
Collective routines send message(s) to a group of processes or receive message(s) from a group of processes, and are potentially more efficient than point-to-point communication. Examples: broadcast, reduction, barrier, scatter, gather, all-to-all.

Collective Communication - Broadcast
Suppose the values of a vector X must be transmitted to all the parallel processes. An evident way is to use the point-to-point MPI communication functions discussed above to perform all the required transmissions. However, repeating the transmissions sums up the latencies of the communication operations; the required data transmissions can be executed with a smaller number of iterations (e.g., a tree-structured broadcast).

The one-to-all broadcast operation is:
    int MPI_Bcast(void *buf, int count, MPI_Datatype datatype,
                  int source, MPI_Comm comm)

It sends data from the root to all other processes in a group.

Collective Communication - Reduce
The all-to-one reduction operation is:

    int MPI_Reduce(void *sendbuf, void *recvbuf, int count,
                   MPI_Datatype datatype, MPI_Op op, int target,
                   MPI_Comm comm)

It combines data from all processes in the group, performs an (associative) reduction operation (e.g. MPI_SUM, MPI_MAX), and returns the result to one process.

Collective Communication - Synchronization
The barrier synchronization operation is performed in MPI using:

    int MPI_Barrier(MPI_Comm comm)

A barrier operation synchronizes a number of processes.

Collective Communication - Scatter
The corresponding scatter operation is:

    int MPI_Scatter(void *sendbuf, int sendcount,
                    MPI_Datatype senddatatype, void *recvbuf,
                    int recvcount, MPI_Datatype recvdatatype,
                    int source, MPI_Comm comm)

It sends each element of an array on the root to a separate process.

Collective Communication - Gather
Gathering data from all the processes to one process is the reverse of data scattering. The gather operation is performed in MPI using:

    int MPI_Gather(void *sendbuf, int sendcount,
                   MPI_Datatype senddatatype, void *recvbuf,
                   int recvcount, MPI_Datatype recvdatatype,
                   int target, MPI_Comm comm)

It collects data from a set of processes.

Other Collective Communication
MPI also provides the MPI_Allgather function, in which the data are gathered at all the processes:

    int MPI_Allgather(void *sendbuf, int sendcount,
                      MPI_Datatype senddatatype, void *recvbuf,
                      int recvcount, MPI_Datatype recvdatatype,
                      MPI_Comm comm)

If the result of the reduction operation is needed by all processes, MPI provides:

    int MPI_Allreduce(void *sendbuf, void *recvbuf, int count,
                      MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

To compute prefix-sums, MPI provides:

    int MPI_Scan(void *sendbuf, void *recvbuf, int count,
                 MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
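The collective operations above are typically combined in a pattern like the following: the root broadcasts a parameter, every process computes a local value, and a reduction collects the result. A minimal sketch, assuming an MPI installation:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, n = 0;
    long local, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) n = 100;                           /* root sets the parameter */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);     /* now every rank has n   */

    local = (long)rank * n;                           /* each rank's contribution */
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of rank*n over all ranks = %ld\n", total);

    MPI_Finalize();
    return 0;
}
```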
The all-to-all personalized communication operation is performed by:

    int MPI_Alltoall(void *sendbuf, int sendcount,
                     MPI_Datatype senddatatype, void *recvbuf,
                     int recvcount, MPI_Datatype recvdatatype,
                     MPI_Comm comm)

Using this core set of collective operations, a number of programs can be greatly simplified.

Topologies and Embeddings
MPI allows a programmer to organize processors into logical k-d meshes. The processor ids in MPI_COMM_WORLD can be mapped to other communicators (corresponding to higher-dimensional meshes) in many ways. The goodness of any such mapping is determined by the interaction pattern of the underlying program and the topology of the machine. MPI does not provide the programmer any control over these mappings.

There are different ways to map a set of processes to a two-dimensional grid: (a) and (b) a row-wise and a column-wise mapping, (c) a mapping that follows a space-filling curve, and (d) a mapping in which neighboring processes are directly connected in a hypercube.

Creating and Using Cartesian Topologies
We can create Cartesian topologies using the function:

    int MPI_Cart_create(MPI_Comm comm_old, int ndims,
                        int *dims, int *periods, int reorder,
                        MPI_Comm *comm_cart)

This function takes the processes in the old communicator and creates a new communicator with ndims dimensions. Each process can now be identified in this new Cartesian topology by a coordinate vector of dimension ndims.

Example: Calculating π
The value of the constant π can be computed by means of the integral

    π = ∫ from 0 to 1 of 4 / (1 + x²) dx

To compute this integral, the method of rectangles can be used for numerical integration. A cyclic scheme can be used to distribute the calculations among the processors; the partial sums that were calculated on different processors then have to be summed.

Calculating π - Sequential Program
    #include <stdio.h>

    static int num_steps = 1000;

    int main(void)
    {
        int i;
        double x, pi, sum = 0.0;
        double width = 1.0 / (double)num_steps;

        for (i = 1; i <= num_steps; i++) {
            x = (i - 0.5) * width;
            sum = sum + 4.0 / (1.0 + x * x);
        }
        pi = sum * width;
        printf("pi = %f\n", pi);
        return 0;
    }

MPI Example - Calculating π; Collective Communication - Calculating π
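The original slides presented the parallel versions of this program as images. The following is a hedged reconstruction, not the original slide code, of how the sequential π program parallelizes with the cyclic distribution and a reduction (assumes an MPI installation):

```c
#include <mpi.h>
#include <stdio.h>

static int num_steps = 1000;

int main(int argc, char *argv[])
{
    int i, myrank, npes;
    double x, pi, sum = 0.0;
    double width = 1.0 / (double)num_steps;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);

    /* cyclic scheme: rank r handles steps r+1, r+1+npes, r+1+2*npes, ... */
    for (i = myrank + 1; i <= num_steps; i += npes) {
        x = (i - 0.5) * width;
        sum += 4.0 / (1.0 + x * x);
    }

    /* sum the partial sums from all processes on rank 0 */
    MPI_Reduce(&sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myrank == 0)
        printf("pi = %f\n", pi * width);

    MPI_Finalize();
    return 0;
}
```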
What is PVM?
The development of PVM started in the summer of 1989 at Oak Ridge National Laboratory (ORNL). PVM is a software package that allows a heterogeneous collection of workstations (a host pool) to function as a single high-performance parallel machine (a virtual machine). Through its virtual machine, PVM provides a simple yet useful distributed operating system. It has a daemon running on all computers making up the virtual machine.

PVM Resources
Website: /pvm/pvm_home.html
Book: PVM: Parallel Virtual Machine. A Users' Guide and Tutorial for Networked Parallel Computing. Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, Vaidy Sunderam. /pvm3/book/pvm-book.html
How PVM is Designed

Basic PVM Functions
pvm_mytid enrolls the calling process into PVM and generates a unique task identifier if this process is not already enrolled in PVM; if it is, the routine simply returns the process's tid:

    tid = pvm_mytid();

pvm_spawn starts new PVM processes. The programmer can specify the machine architecture and machine name where processes are to be spawned:

    numt = pvm_spawn("worker", 0, PvmTaskDefault, "", 1, &tids[i]);

pvm_exit tells the local pvmd that this process is leaving PVM. This routine should be called by all PVM processes before they exit.

pvm_addhosts adds hosts to the virtual machine. The names should have the same syntax as lines of a pvmd hostfile:

    pvm_addhosts(hostarray, 4, infoarray);

pvm_delhosts deletes hosts from the virtual machine:

    pvm_delhosts(hostarray, 4);

pvm_send immediately sends the data in the message buffer to the specified destination task. This is a blocking send operation; it returns 0 if successful, < 0 otherwise:

    pvm_send(tids[1], MSGTAG);

pvm_mcast multicasts (组播) a message stored in the active send buffer to the tasks specified in tids[]. The message is not sent to the caller even if it is listed in the array of tids:

    pvm_mcast(tids, ntask, msgtag);

pvm_recv blocks the receiving process until a message with the specified tag has arrived from the specified tid. The message is then placed in a new active receive buffer, which also clears the current receive buffer:

    pvm_recv(tid, msgtag);

pvm_nrecv is the same as pvm_recv, except that a non-blocking (非阻塞) receive is performed. If the specified message has arrived, this routine returns the buffer id of the new receive buffer; if it has not arrived, it returns 0; if an error occurs, an integer < 0 is returned:

    pvm_nrecv(tid, msgtag);

Collective operations:

    pvm_barrier("worker", 5);
    pvm_bcast("worker", msgtag);
    pvm_gather(&getmatrix, &myrow, 10, PVM_INT, msgtag, "workers", root);
    pvm_scatter(&getmyrow, &matrix, 10, PVM_INT, msgtag, "workers", root);
    pvm_reduce(PvmMax, &myvals, 10, PVM_INT, msgtag, "workers", root);

PVM Example: Hello World!

    /* master */
    #include <stdio.h>
    #include "pvm3.h"

    int main()
    {
        int cc, tid;
        char buf[100];
        printf("i'm t%x\n", pvm_mytid());
        cc = pvm_spawn("hello_other", 0, 0, "", 1, &tid);
        if (cc == 1) {
            cc = pvm_recv(-1, -1);
            pvm_bufinfo(cc, 0, 0, &tid);
            pvm_upkstr(buf);
            printf("from t%x: %s\n", tid, buf);
        } else
            printf("can't start hello_other\n");
        pvm_exit();
        return 0;
    }

    /* slave: hello_other */
    #include <string.h>
    #include <unistd.h>
    #include "pvm3.h"

    int main()
    {
        int ptid;
        char buf[100];
        ptid = pvm_parent();
        strcpy(buf, "hello, world from ");
        gethostname(buf + strlen(buf), 64);
        pvm_initsend(PvmDataDefault);
        pvm_pkstr(buf);
        pvm_send(ptid, 1);
        pvm_exit();
        return 0;
    }

Setup to Use PVM
- Set PVM_ROOT and PVM_ARCH in your .cshrc file.
- Build PVM for each architecture type.
- Create a .rhosts file on each host listing all the hosts you wish to use.
- Create a $HOME/.xpvm_hosts file listing all the hosts you wish to use, prepended by an "&".

Starting PVM
Before we go over the steps to compile and run parallel PVM programs, you should be sure you can start up PVM and configure a virtual machine. On any host on which PVM has been installed you can type

    % pvm

and you should get back a PVM console prompt, signifying that PVM is now running on this host. You can add hosts to your virtual machine by typing at the console prompt

    pvm> add hostname

and you can delete hosts (except the one you are on) from your virtual machine by typing

    pvm> delete hostname

If you get the message "Can't Start pvmd", then check the common startup problems section and try again.

To see what the present virtual machine looks like, you can type

    pvm> conf

To see what PVM tasks are running on the virtual machine, you type

    pvm> ps -a

Of course you don't have any tasks running yet; that's in the next section. If you type "quit" at the console prompt, the console will quit, but your virtual machine and tasks will continue to run. At any Unix prompt on any host in the virtual machine, you can type

    % pvm

and you will get the message "pvm already running" and the console prompt. When you are finished with the virtual machine, you should type

    pvm> halt

This command kills any PVM tasks, shuts down the virtual machine, and exits the console. This is the recommended method to stop PVM because it makes sure that the virtual machine shuts down cleanly. You should practice starting and stopping and adding hosts to PVM until you are comfortable with the PVM console. A full description of the PVM console and its many command options is given at the end of this chapter.

Comparison of MPI and PVM

PVM and MPI Goals
PVM:
- A distributed operating system
- Portability
- Heterogeneity
- Handling communication failures

MPI:
- A library for writing application programs, not a distributed operating system
- Portability
- High performance
- Heterogeneity
- Well-defined behavior

What is Not Different?
- Portability: source code written for one architecture can be copied to a second architecture, compiled, and executed without modification (to some extent).
- Both support MPMD programs as well as SPMD.
- Interoperability: the ability of different implementations of the same specification to exchange messages.
- Heterogeneity (to some extent).

PVM and MPI are systems designed to provide users with libraries for writing portable, heterogeneous, MPMD programs.

Process Control
The ability to start and stop tasks, to find out which tasks are running, and possibly where they are running.
- PVM contains all of these capabilities: it can spawn/kill tasks dynamically.
- MPI-1 has no defined method to start a new task.
- MPI-2 contains functions to start a group of tasks and to send a kill signal to a group of tasks.

Resource Control
PVM is inherently dynamic in nature, and it has a rich set of resource-control functions. Hosts can be added or deleted, enabling load balancing, task migration, fault tolerance, and efficiency. MPI is specifically designed to be static in nature to improve performance.

Virtual Topology
Virtual topologies exist only in MPI. They provide convenient process naming, a naming scheme that fits the communication pattern, and they simplify the writing of code.
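The virtual-topology support mentioned above can be illustrated with the MPI Cartesian-topology calls introduced earlier. A hedged sketch, assuming an MPI installation and four processes arranged as a 2 x 2 grid:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, coords[2];
    int dims[2] = {2, 2};      /* 2 x 2 logical mesh (assumes 4 processes) */
    int periods[2] = {0, 0};   /* non-periodic in both dimensions */
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    /* reorder = 1 lets the implementation remap ranks to fit the machine */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    if (cart != MPI_COMM_NULL) {
        MPI_Comm_rank(cart, &rank);
        MPI_Cart_coords(cart, rank, 2, coords);   /* my (row, col) in the mesh */
        printf("rank %d is at (%d, %d)\n", rank, coords[0], coords[1]);
        MPI_Comm_free(&cart);
    }

    MPI_Finalize();
    return 0;
}
```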