Chapter 9: Virtual Memory

Chapter Objectives
- To describe the benefits of a virtual memory system
- To explain the concepts of demand paging, page-replacement algorithms, and allocation of page frames
- To discuss the principle of the working-set model

Content Overview
- Background
- Demand Paging
- Copy-on-Write
- Page Replacement
- Allocation of Frames
- Thrashing
- Memory-Mapped Files
- Allocating Kernel Memory
- Other Considerations
- Operating-System Examples

9.1 Background
- Virtual memory: separation of user logical memory from physical memory
  - Only part of the program needs to be in memory for execution
  - Logical address space can therefore be much larger than physical address space
  - Allows address spaces to be shared by several processes
  - Allows for more efficient process creation
- Virtual memory can be implemented via:
  - Demand paging
  - Demand segmentation

(Figure: Virtual Memory That is Larger Than Physical Memory)
(Figure: Virtual-address Space)

(Figure: Shared Library Using Virtual Memory)

9.2 Demand Paging
- Bring a page into memory only when it is needed
  - Less I/O needed
  - Less memory needed
  - Faster response
  - More users
- Page is needed ⇒ reference to it
  - invalid reference ⇒ abort
  - not-in-memory ⇒ bring to memory
- Lazy swapper: never swaps a page into memory unless the page will be needed
  - A swapper that deals with pages is a pager

(Figure: Transfer of a Paged Memory to Contiguous Disk Space)

Valid-Invalid Bit
- With each page-table entry a valid-invalid bit is associated (v ⇒ in memory, i ⇒ not in memory)
- Initially, the valid-invalid bit is set to i on all entries
- Example of a page table snapshot: (diagram showing frame numbers alongside valid-invalid bits v, v, v, v, i, i, i)
- During address translation, if the valid-invalid bit in a page-table entry is i ⇒ page fault

(Figure: Page Table When Some Pages Are Not in Main Memory)

Page Fault
- If there is a reference to a page, the first reference to that page will trap to the operating system: page fault
- The operating system looks at another table to decide:
  - Invalid reference ⇒ abort
  - Just not in memory ⇒ continue with the steps below
- Get an empty frame
- Swap the page into the frame
- Reset the tables
- Set validation bit = v
- Restart the instruction that caused the page fault
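The steps above can be summarized in code. The sketch below is illustrative only: helpers such as frame_alloc(), select_victim_and_evict(), and read_page_from_disk(), and the page_table_entry layout, are assumed names for this summary, not a real kernel API.

    /* Minimal sketch of the page-fault handling steps listed above
     * (illustrative assumptions, not an actual kernel implementation). */
    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct {
        bool valid;        /* valid-invalid bit: true <=> page is in memory */
        int  frame;        /* frame number when valid                       */
    } page_table_entry;

    extern page_table_entry page_table[];        /* per-process page table     */
    extern bool legal_reference(int page);       /* checks the process limits  */
    extern int  frame_alloc(void);               /* free frame, or -1 if none  */
    extern int  select_victim_and_evict(void);   /* page-replacement algorithm */
    extern void read_page_from_disk(int page, int frame);

    void handle_page_fault(int page)
    {
        if (!legal_reference(page))              /* invalid reference => abort */
            abort();

        int frame = frame_alloc();               /* get an empty frame...      */
        if (frame < 0)
            frame = select_victim_and_evict();   /* ...or run page replacement */

        read_page_from_disk(page, frame);        /* swap the page into frame   */
        page_table[page].frame = frame;          /* reset the tables           */
        page_table[page].valid = true;           /* set validation bit = v     */
        /* On return, the trapped instruction is restarted. */
    }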
Page Fault (Cont.)
- Restart instruction
  - block move
  - auto increment/decrement location

(Figure: Steps in Handling a Page Fault)

Performance of Demand Paging
- Page fault rate p, where 0 ≤ p ≤ 1.0
  - if p = 0, no page faults
  - if p = 1, every reference is a fault
- Effective Access Time (EAT):
  EAT = (1 - p) x memory access time
      + p x (page fault overhead + swap page out + swap page in + restart overhead)

Demand Paging Example
- Memory access time = 200 nanoseconds
- Average page-fault service time = 8 milliseconds
- EAT = (1 - p) x 200 + p x 8,000,000 = 200 + p x 7,999,800
- If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds. This is a slowdown by a factor of 40!
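The 8.2-microsecond figure is just the substitution p = 1/1000 into the formula above; written out (this derivation is added here, it is not spelled out on the slide):

    \mathrm{EAT} = 200 + \tfrac{1}{1000} \times 7{,}999{,}800
                 = 200 + 7{,}999.8 = 8{,}199.8\ \mathrm{ns} \approx 8.2\ \mu\mathrm{s}

    \frac{8{,}199.8\ \mathrm{ns}}{200\ \mathrm{ns}} \approx 41
    \quad\text{(roughly the factor-of-40 slowdown quoted above)}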
Process Creation
- Virtual memory allows other benefits during process creation:
  - Copy-on-Write
  - Memory-Mapped Files (later)

9.3 Copy-on-Write
- Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory
- If either process modifies a shared page, only then is the page copied
- COW allows more efficient process creation, as only modified pages are copied
- Free pages are allocated from a pool of zeroed-out pages

(Figure: Before Process 1 Modifies Page C)

(Figure: After Process 1 Modifies Page C)
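A user-level way to see COW in action, assuming a POSIX system whose fork() shares pages copy-on-write (as Linux and most modern Unixes do): parent and child initially share the page holding buf, the child's write forces a private copy, and the parent's view is unchanged. This sketch is added for illustration and is not part of the original slides.

    /* Copy-on-write illustration: after fork(), parent and child logically have
     * separate copies of buf, but physical copying is deferred until one of
     * them writes.  (Assumes a fork() implemented with COW.) */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[4096] = "original";   /* one page worth of data */

        pid_t pid = fork();
        if (pid == 0) {                       /* child: write => page is copied */
            strcpy(buf, "modified by child");
            printf("child  sees: %s\n", buf);
            return 0;
        }
        waitpid(pid, NULL, 0);                /* parent: still sees its own copy */
        printf("parent sees: %s\n", buf);
        return 0;
    }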
What happens if there is no free frame?
- Page replacement: find some page in memory that is not really in use and swap it out
  - algorithm
  - performance: want an algorithm that results in the minimum number of page faults
- The same page may be brought into memory several times

9.4 Page Replacement
- Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement
- Use a modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written to disk
- Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on a smaller physical memory

(Figure: Need For Page Replacement)

Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
   - If there is a free frame, use it
   - If there is no free frame, use a page-replacement algorithm to select a victim frame
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Restart the process

(Figure: Page Replacement)

Page Replacement Algorithms
- Want the lowest page-fault rate
- Evaluate an algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string
- In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
(Figure: Graph of Page Faults Versus The Number of Frames)

First-In-First-Out (FIFO) Algorithm
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- 3 frames (3 pages can be in memory at a time per process): 9 page faults
- 4 frames: 10 page faults
- Belady's Anomaly: more frames ⇒ more page faults

(Figure: FIFO Page Replacement)

(Figure: FIFO Illustrating Belady's Anomaly)
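The counts above can be checked with a small FIFO simulation (a sketch added for this summary, not taken from the slides). Run on the reference string with 3 and then 4 frames, it reports 9 and 10 page faults, illustrating Belady's anomaly.

    /* FIFO page-replacement simulation (illustrative sketch). */
    #include <stdio.h>

    static int fifo_faults(const int *refs, int n, int nframes)
    {
        int frames[16], next = 0, faults = 0;
        for (int i = 0; i < nframes; i++) frames[i] = -1;   /* empty frames */

        for (int r = 0; r < n; r++) {
            int hit = 0;
            for (int i = 0; i < nframes; i++)
                if (frames[i] == refs[r]) { hit = 1; break; }
            if (!hit) {
                frames[next] = refs[r];        /* evict the oldest resident page */
                next = (next + 1) % nframes;   /* FIFO order = circular pointer  */
                faults++;
            }
        }
        return faults;
    }

    int main(void)
    {
        const int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        const int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d page faults\n", fifo_faults(refs, n, 3));  /* 9  */
        printf("4 frames: %d page faults\n", fifo_faults(refs, n, 4));  /* 10 */
        return 0;
    }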
Optimal Algorithm
- Replace the page that will not be used for the longest period of time
- 4-frame example with reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults
- How do you know this? (it requires future knowledge of the reference string)
- Used for measuring how well your algorithm performs

(Figure: Optimal Page Replacement)

Least Recently Used (LRU) Algorithm
- Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- Counter implementation
  - Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter
  - When a page needs to be replaced, look at the counters to determine which page is least recently used

(Figure: LRU Page Replacement)

LRU Algorithm (Cont.)
- Stack implementation: keep a stack of page numbers in doubly linked form
  - Page referenced: move it to the top (requires 6 pointers to be changed)
  - No search for replacement

(Figure: Use Of A Stack to Record The Most Recent Page References)
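A sketch of the counter implementation just described, using a global logical clock as the time of last use; the data structures are illustrative, not a real OS implementation. On the slide's reference string with 4 frames it reports 8 page faults.

    /* Counter-based LRU sketch: each resident page records the logical time of
     * its last reference; the victim is the page with the smallest timestamp. */
    #include <stdio.h>

    #define NFRAMES 4

    typedef struct { int page; unsigned long last_used; } frame_t;

    static frame_t frames[NFRAMES];
    static unsigned long clock_ticks = 0;      /* global logical clock */

    static int lru_reference(int page)         /* returns 1 on a page fault */
    {
        clock_ticks++;
        int victim = 0;
        for (int i = 0; i < NFRAMES; i++) {
            if (frames[i].page == page) {      /* hit: copy clock into counter */
                frames[i].last_used = clock_ticks;
                return 0;
            }
            if (frames[i].last_used < frames[victim].last_used)
                victim = i;                    /* smallest counter = least recent */
        }
        frames[victim].page = page;            /* fault: replace the LRU page */
        frames[victim].last_used = clock_ticks;
        return 1;
    }

    int main(void)
    {
        const int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int faults = 0;
        for (int i = 0; i < NFRAMES; i++) frames[i].page = -1;
        for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
            faults += lru_reference(refs[i]);
        printf("LRU, 4 frames: %d page faults\n", faults);   /* 8 for this string */
        return 0;
    }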
LRU Approximation Algorithms
- Reference bit
  - With each page associate a bit, initially = 0
  - When the page is referenced, the bit is set to 1
  - Replace a page whose bit is 0 (if one exists)
  - We do not know the order, however
- Additional-reference-bits
  - A timer periodically shifts each page's reference bit into an n-bit history
- Second chance
  - Needs a reference bit
  - Clock replacement
  - If the page to be replaced (in clock order) has reference bit = 1, then:
    - set the reference bit to 0
    - leave the page in memory
    - consider the next page (in clock order), subject to the same rules

(Figure: Second-Chance (Clock) Page-Replacement Algorithm)
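A sketch of the second-chance victim selection described above; the frame count, reference-bit array, and clock hand are illustrative assumptions rather than a specific OS's data structures.

    /* Second-chance (clock) replacement sketch.  A circular "clock hand" sweeps
     * the frames; a page with reference bit 1 gets a second chance (bit cleared,
     * hand advances), a page with bit 0 becomes the victim. */
    #include <stdbool.h>

    #define NFRAMES 4

    static int  page_in[NFRAMES];      /* which page occupies each frame (-1 = free) */
    static bool ref_bit[NFRAMES];      /* reference bits, set by hardware on access  */
    static int  hand = 0;              /* clock hand position                        */

    /* Select a victim frame and return its index. */
    static int clock_select_victim(void)
    {
        for (;;) {
            if (!ref_bit[hand]) {              /* bit 0: replace this page       */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            ref_bit[hand] = false;             /* bit 1: clear it, give a second */
            hand = (hand + 1) % NFRAMES;       /* chance, move to the next frame */
        }
    }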
Counting Algorithms
- Keep a counter of the number of references that have been made to each page
- LFU Algorithm: replaces the page with the smallest count
- MFU Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used

9.5 Allocation of Frames
- Each process needs a minimum number of pages
- Example: IBM 370 needs 6 pages to handle an SS MOVE instruction:
  - the instruction is 6 bytes, so it might span 2 pages
  - 2 pages to handle the from operand
  - 2 pages to handle the to operand
- Two major allocation schemes
  - fixed allocation
  - priority allocation

Fixed Allocation
- Equal allocation: for example, if there are 100 frames and 5 processes, give each process 20 frames
- Proportional allocation: allocate according to the size of the process
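Proportional allocation is usually written with s_i for the size of process p_i, S for the sum of all the s_i, and m for the total number of frames. The notation and the worked numbers below are added for illustration; the slide itself only states the idea.

    a_i = \frac{s_i}{S} \times m
    \qquad\text{e.g. } m = 64,\ s_1 = 10,\ s_2 = 127:\quad
    a_1 = \tfrac{10}{137} \times 64 \approx 5,\qquad
    a_2 = \tfrac{127}{137} \times 64 \approx 59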
Priority Allocation
- Use a proportional allocation scheme based on priorities rather than size
- If process Pi generates a page fault:
  - select for replacement one of its own frames, or
  - select for replacement a frame from a process with a lower priority number

Global vs. Local Allocation
- Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another
- Local replacement: each process selects from only its own set of allocated frames

9.6 Thrashing
- If a process does not have "enough" pages, the page-fault rate is very high. This leads to:
  - low CPU utilization
  - the operating system thinks that it needs to increase the degree of multiprogramming
  - another process is added to the system
- Thrashing: a process is busy swapping pages in and out

(Figure: Thrashing (Cont.))
Demand Paging and Thrashing
- Why does demand paging work? The locality model
  - A process migrates from one locality to another
  - Localities may overlap
- Why does thrashing occur? Σ (size of locality) > total memory size

(Figure: Locality In A Memory-Reference Pattern)

Working-Set Model
- Δ ≡ working-set window ≡ a fixed number of page references. Example: 10,000 instructions
- WSSi (working set of process Pi) = total number of pages referenced in the most recent Δ (varies in time)
  - if Δ is too small, it will not encompass the entire locality
  - if Δ is too large, it will encompass several localities
  - if Δ = ∞, it will encompass the entire program
- D = Σ WSSi ≡ total demand for frames
- if D > m ⇒ thrashing
- Policy: if D > m, then suspend one of the processes

(Figure: Working-set model)
Keeping Track of the Working Set
- Approximate with an interval timer + a reference bit
- Example: Δ = 10,000
  - Timer interrupts after every 5,000 time units
  - Keep in memory 2 bits for each page
  - Whenever the timer interrupts, copy the reference bits and then set them all to 0
  - If one of the bits in memory = 1 ⇒ the page is in the working set
- Why is this not completely accurate?
- Improvement: 10 bits and interrupt every 1,000 time units

Page-Fault Frequency Scheme
- Establish an "acceptable" page-fault rate
  - If the actual rate is too low, the process loses a frame
  - If the actual rate is too high, the process gains a frame
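A sketch of the timer-based approximation above, with two in-memory history bits per page plus the hardware reference bit; the names and the fixed page count are assumptions made for illustration.

    /* Working-set approximation sketch: on each timer interrupt the reference
     * bit of every page is shifted into two in-memory history bits and cleared.
     * A page whose reference bit or history bits are set is treated as being
     * in the working set. */
    #include <stdbool.h>

    #define NPAGES 1024

    static bool ref_bit[NPAGES];          /* set by hardware on every reference */
    static bool history[NPAGES][2];       /* the "2 bits kept in memory"        */

    /* Called every 5,000 time units (Delta = 10,000 => two ticks per window). */
    void working_set_timer_tick(void)
    {
        for (int p = 0; p < NPAGES; p++) {
            history[p][1] = history[p][0];   /* age the history bits        */
            history[p][0] = ref_bit[p];      /* copy the reference bit...   */
            ref_bit[p]    = false;           /* ...and reset it to 0        */
        }
    }

    bool in_working_set(int page)
    {
        return ref_bit[page] || history[page][0] || history[page][1];
    }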
9.7 Memory-Mapped Files
- Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory
- A file is initially read using demand paging. A page-sized portion of the file is read from the file system into a physical page. Subsequent reads/writes to/from the file are treated as ordinary memory accesses.
- Simplifies file access by handling file I/O through memory rather than read()/write() system calls
- Also allows several processes to map the same file, allowing the pages in memory to be shared

(Figure: Memory Mapped Files)

(Figure: Memory-Mapped Shared Memory in Windows)
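On POSIX systems this facility is exposed through mmap(). A minimal sketch follows; error handling is trimmed and "data.bin" is just an example file name.

    /* Memory-mapped file sketch: map a file and read/write it as ordinary
     * memory instead of issuing read()/write() system calls. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        p[0] = 'X';                    /* an ordinary store updates the file */
        printf("first byte is now %c\n", p[0]);

        munmap(p, st.st_size);         /* pages written back per MAP_SHARED  */
        close(fd);
        return 0;
    }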
9.8 Allocating Kernel Memory
- Treated differently from user memory
- Often allocated from a free-memory pool
  - The kernel requests memory for structures of varying sizes
  - Some kernel memory needs to be contiguous

Buddy System
- Allocates memory from a fixed-size segment consisting of physically contiguous pages
- Memory is allocated using a power-of-2 allocator
  - Satisfies requests in units sized as a power of 2
  - A request is rounded up to the next highest power of 2
  - When a smaller allocation is needed than is available, the current chunk is split into two buddies of the next-lower power of 2
    - Continue splitting until an appropriately sized chunk is available
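A sketch of the splitting logic described above; the free-list helpers pop_free_block()/push_free_block() and the order limits are assumptions for illustration, not a particular kernel's allocator.

    /* Buddy-system sketch: round a request up to the next power of two, then
     * split larger free blocks in half ("buddies") until a block of that
     * size exists. */
    #include <stddef.h>

    #define MIN_ORDER 12                       /* smallest block: 4 KB page   */
    #define MAX_ORDER 20                       /* largest block: 1 MB segment */

    extern void *pop_free_block(int order);            /* assumed free-list helpers */
    extern void  push_free_block(int order, void *b);

    static int order_for(size_t size)          /* request rounded up to 2^order */
    {
        int order = MIN_ORDER;
        while (((size_t)1 << order) < size)
            order++;
        return order;
    }

    void *buddy_alloc(size_t size)
    {
        int want = order_for(size);
        if (want > MAX_ORDER)
            return NULL;                       /* request too large            */

        /* Find the smallest available order >= want. */
        int have = want;
        void *block = NULL;
        while (have <= MAX_ORDER && (block = pop_free_block(have)) == NULL)
            have++;
        if (block == NULL)
            return NULL;

        /* Split into buddies until the block has the requested order. */
        while (have > want) {
            have--;
            void *buddy = (char *)block + ((size_t)1 << have);
            push_free_block(have, buddy);      /* keep one half, free the other */
        }
        return block;
    }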
(Figure: Buddy System Allocator)

Slab Allocator
- Alternate strategy
- A slab is one or more physically contiguous pages
- A cache consists of one or more slabs
- There is a single cache for each unique kernel data structure
  - Each cache is filled with objects: instantiations of the data structure
- When a cache is created, it is filled with objects marked as free
- When structures are stored, objects are marked as used
- If a slab is full of used objects, the next object is allocated from an empty slab
  - If there are no empty slabs, a new slab is allocated
- Benefits include no fragmentation and fast satisfaction of memory requests

(Figure: Slab Allocation)
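A much-simplified sketch of the idea: one cache per object type, each slab carved into equally sized objects handed out from a free list and never split or coalesced. The structures and the grow_cache() helper are illustrative; this is not the Linux slab interface.

    /* Slab-allocator sketch (illustrative only). */
    #include <stddef.h>

    struct slab {
        struct slab *next;        /* next slab in this cache                 */
        void        *free_list;   /* linked list of free objects in the slab */
        size_t       in_use;      /* objects currently allocated             */
    };

    struct kcache {
        size_t       object_size; /* size of the structure this cache serves  */
        struct slab *slabs;       /* slabs, each pre-filled with free objects */
    };

    extern struct slab *grow_cache(struct kcache *c);  /* allocate + carve a new slab */

    void *cache_alloc(struct kcache *c)
    {
        struct slab *s = c->slabs;
        while (s && s->free_list == NULL)      /* skip slabs full of used objects */
            s = s->next;
        if (s == NULL)
            s = grow_cache(c);                 /* no free object: new slab        */
        if (s == NULL || s->free_list == NULL)
            return NULL;

        void *obj = s->free_list;              /* pop one pre-constructed object  */
        s->free_list = *(void **)obj;          /* free objects are linked in place */
        s->in_use++;                           /* object is now marked "used"     */
        return obj;
    }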
9.9 Other Issues - Prepaging
- Prepaging: to reduce the large number of page faults that occur at process startup
- Prepage all or some of the pages a process will need, before they are referenced
- But if prepaged pages are unused, I/O and memory were wasted
- Assume s pages are prepaged and a fraction α of those pages is actually used
  - Is the cost of s × α saved page faults greater or less than the cost of prepaging s × (1 - α) unnecessary pages?
  - If α is near zero, prepaging loses

Other Issues - Page Size
- Page size selection must take into consideration:
  - fragmentation
  - table size
  - I/O overhead
  - locality
Other Issues - TLB Reach
- TLB Reach: the amount of memory accessible from the TLB
- TLB Reach = (TLB Size) × (Page Size)
- Ideally, the working set of each process is stored in the TLB
  - Otherwise there is a high degree of page faults
- Increase the page size
  - This may lead to an increase in fragmentation, as not all applications require a large page size
- Provide multiple page sizes
  - This allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation
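For a sense of scale (the TLB and page sizes below are assumed for illustration, not taken from the slides):

    \text{TLB reach} = 64 \text{ entries} \times 4\ \text{KB} = 256\ \text{KB}
    \qquad\text{vs.}\qquad
    64 \text{ entries} \times 2\ \text{MB} = 128\ \text{MB}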
41、SizesThis allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation第62页,共70页。Other Issues Program StructureProgram structureInt128,128 data;Each row is stored in one page Program 1 for (j = 0; j 128; j+) for (i = 0; i 128; i+) datai,j = 0; 128 x 128 = 16,384 page faults Program 2 for (i = 0; i 128; i+) for (j = 0; j 128; j+) datai,j = 0;128 page faults第63页,共70页。Other Issues I/O interlockI/O Interlock Pages must sometimes be locked into memoryConsider I/O - Pages that