Memory Management (Chapter 3)
- 3.1 No Memory Abstraction
- 3.2 A Memory Abstraction: Address Spaces
- 3.3 Virtual Memory
- 3.4 Page Replacement Algorithms
- 3.5 Segmentation

Ideally, programmers want memory that is:
- Large
- Fast
- Nonvolatile
- Inexpensive

Memory hierarchy:
- A few MB of fast, expensive, volatile cache memory
- A few GB of medium-speed, medium-price, volatile main memory (RAM)
- A few TB of slow, cheap, nonvolatile disk storage
The memory manager handles the memory hierarchy.

Storage Hierarchy (figures)

Memory
- A program must be brought into memory and placed within a process for it to run.
- A consecutive address space (memory unit) is used to store the code and data of the process.
- Memory is divided into a system section and a user section.

Objectives
- Support multiprogramming.
- Convenience for the user: hide hardware details from the user program; load user programs automatically.
- Map program space onto memory space.
- Allow processes to be placed flexibly in memory.
- Provide quick access.
- Provide storage protection and security.
- Support sharing and communication.
- Balance performance and cost.

Tasks
- Memory allocation and deallocation
- Address relocation
- Memory sharing and protection
- Memory expansion

No Memory Abstraction
- The simplest memory abstraction is no abstraction at all: every program simply sees physical memory.
- It is not possible to run two programs in memory at the same time.
- There are three different ways to organize memory with the OS and one user program.

Multiple Programs without Memory Abstraction
- IBM 360: memory is divided into 2-KB blocks, and each block is assigned a 4-bit protection key held in special CPU registers; the PSW also contains a 4-bit key.
- When a process accesses memory, the hardware compares the protection key of the memory block with the key in the PSW.

Multiple Programs without Memory Abstraction
- Illustration of the relocation problem (figure).
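As a rough illustration of the IBM 360 key check above, the sketch below models the comparison the hardware makes on every access. The array, memory size, and function name are assumptions for the example, not the actual IBM 360 interface.

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 2048                /* 2-KB protection blocks, as on the IBM 360 */
#define NUM_BLOCKS (1 << 10)           /* assumed memory size for this sketch       */

static uint8_t block_key[NUM_BLOCKS];  /* 4-bit protection key per 2-KB block       */

/* Return true if a process whose PSW holds psw_key may touch addr.
 * Key 0 is the master key used by the OS and matches every block. */
bool access_allowed(uint32_t addr, uint8_t psw_key)
{
    uint8_t key = block_key[addr / BLOCK_SIZE] & 0x0F;
    return psw_key == 0 || key == psw_key;
}
```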

Address Spaces
- To allow multiple applications to be in memory at the same time without interfering with each other, two problems must be solved: protection and relocation.
- Protection. Primitive solution: label chunks of memory with a protection key and compare the key of the executing process with that of every memory word fetched.
- Relocation. A better solution: the address space. An address space is the set of addresses that a process can use to address memory.

Address Spaces
- Each process has its own address space, independent of those belonging to other processes.
- Logical address space, range 0 to max: logical (virtual) addresses.
- Dynamic relocation maps each process's address space onto a different part of physical memory.
- Physical address space, range R+0 to R+max for base value R: physical (real) addresses.
- Implemented with base and limit registers.

Base and Limit Registers
- (Figure: the CPU issues a logical address LA; the hardware checks it against the limit register and raises a fault if it is out of range, otherwise it adds the base address to form the physical address.)
- Base and limit registers can be used to give each process a separate address space.
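The dynamic relocation just described can be summarized in a few lines of C. This is a sketch of the hardware's behavior under the base/limit scheme above; the struct and function names are made up for the example.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Per-process relocation registers, loaded by the OS at dispatch time. */
struct relocation {
    uint32_t base;   /* start of the process in physical memory (R)  */
    uint32_t limit;  /* size of its logical address space (max + 1)  */
};

/* What the hardware does on every memory reference. */
uint32_t translate(const struct relocation *r, uint32_t logical)
{
    if (logical >= r->limit) {          /* outside the address space      */
        fprintf(stderr, "protection fault at %u\n", logical);
        exit(EXIT_FAILURE);             /* real hardware traps to the OS  */
    }
    return r->base + logical;           /* physical address = R + logical */
}
```

For example, with base R = 16384 and limit 4096, logical address 100 maps to physical address 16484, while logical address 5000 faults.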

Swapping (1)
- With many programs, the total size exceeds memory.
- Swapping brings a whole process into memory, runs it for a while, and then moves it back to the disk.
- Virtual memory, in contrast, allows programs to run even when they are only partially in main memory.
- Swapping allows several processes to share a partition.
- Processes that grow can be swapped out and swapped back into a bigger partition.

Swapping (2)
- Memory allocation changes as processes come into memory and leave it (figure; the shaded regions are unused memory).

Swapping (3): allocating memory dynamically
- When a process arrives, allocate the space it demands if enough space is available; otherwise the process waits on disk.
- With processes swapping in and out, many holes appear (external fragmentation).
- The holes need to be compacted together to generate a large free space.

Compaction (similar to garbage collection)
- Assumes programs are all relocatable.
- Processes must be suspended during compaction.
- Needs to be done only when fragmentation gets very bad.
- (Figure: successive snapshots of memory — Monitor, Job 3, free space, Jobs 5 through 8 — showing the jobs being moved down until the free space forms one contiguous block.)

Memory Management with Bitmaps
- Use bitmaps or free lists.
- Memory is divided into allocation units; each allocation unit corresponds to one bit in the bitmap (0: free, 1: allocated).

Size of the allocation unit
- The smaller the allocation unit, the larger the bitmap.
- Problem: allocation. When a new process arrives, the manager must find a run of consecutive 0 bits in the map, and searching a bitmap for a run of a given length is a slow operation.
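A minimal sketch of that search over a packed bitmap; the function names are invented for the example. It also shows why bitmap allocation is slow: the scan is linear in the number of allocation units.

```c
#include <stddef.h>
#include <stdint.h>

/* Returns 1 if allocation unit i is in use, 0 if it is free. */
static int bit_is_set(const uint8_t *map, size_t i)
{
    return (map[i / 8] >> (i % 8)) & 1;
}

/* Find `run` consecutive free units in a bitmap covering nunits units.
 * Returns the index of the first unit, or -1 if no such run exists. */
long find_free_run(const uint8_t *map, size_t nunits, size_t run)
{
    size_t count = 0;
    for (size_t i = 0; i < nunits; i++) {
        count = bit_is_set(map, i) ? 0 : count + 1;
        if (count == run)
            return (long)(i - run + 1);
    }
    return -1;
}
```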

Memory Management with Linked Lists
- Use a linked list of allocated and free memory segments (a free segment is called a hole), sorted by address or by size.

Memory Management with Linked Lists
- Sorting by address has the advantage that when a process terminates or is swapped out, updating the list is straightforward.
- There are four neighbor combinations for the terminating process X (figure).

Storage Placement Strategies
How to satisfy a request of size n from a list of free holes (a code sketch follows this list):
- First Fit: use the first available hole whose size is sufficient to meet the need. Problem: creates average-size holes.
- Next Fit: a minor variation of first fit that resumes the search where the last one stopped. Problem: slightly worse performance than first fit.
- Best Fit: use the hole whose size equals the need, or, if none is equal, the larger hole that is closest in size. Problem: creates tiny holes that cannot be used.
- Worst Fit: use the largest available hole. Problem: gets rid of the large holes, making it difficult to run large programs.
- Quick Fit: maintain separate lists for some of the more common request sizes; when a request arrives, it finds the closest fit. This is a very fast scheme, but merging holes is expensive.
- (Figure: a free list before and after a 16K request, showing which hole First Fit, Next Fit, Best Fit, and Worst Fit would each choose; the last allocated block was 14K.)
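As a sketch of how these strategies differ in code, the fragment below walks a singly linked list of holes and picks one under either first fit or best fit. The list structure and names are assumptions for the example, and splitting and coalescing of holes are omitted.

```c
#include <stddef.h>

struct hole {
    size_t start;        /* first address of the hole        */
    size_t size;         /* length of the hole in bytes      */
    struct hole *next;   /* next segment, sorted by address  */
};

/* First fit: take the first hole large enough for the request. */
struct hole *first_fit(struct hole *list, size_t need)
{
    for (struct hole *h = list; h != NULL; h = h->next)
        if (h->size >= need)
            return h;
    return NULL;
}

/* Best fit: scan the whole list and take the smallest hole that still fits. */
struct hole *best_fit(struct hole *list, size_t need)
{
    struct hole *best = NULL;
    for (struct hole *h = list; h != NULL; h = h->next)
        if (h->size >= need && (best == NULL || h->size < best->size))
            best = h;
    return best;
}
```

Note that best fit must traverse the entire list on every request, while first fit can stop early; next fit differs from first fit only in where the scan begins.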

Overlaying
- One program using lots of memory.
- Overlaying (1960s): split the program into small pieces, called overlays. Only one or a few overlays are in memory at a time, so many overlays can reuse the same memory space (see next page).
- The work of swapping overlays in and out is done by the OS, but the programmer must split the program into pieces manually, which increases the complexity of programming.

Overlaying
- (Figure: the main program and an overlay manager stay resident; overlays 1-3 are kept on secondary storage and are loaded into a shared overlay area as needed.)

Overlaying Example
- An alternative approach, using 100K in total:
  - A (20K) stays resident: 20K.
  - B (50K), D (20K), E (40K) share one overlay area: 50K.
  - C (30K), F (30K) share another overlay area: 30K.

Virtual Memory
- Virtual memory: the separation of user logical memory from physical memory.
- Provides the user with a virtual memory as big as the user needs; the logical address space can therefore be much larger than the physical address space.
- Virtual memory is stored on disk; only part of the program needs to be in memory for execution.
- Allows address spaces to be shared by several processes.
- Allows for more efficient process creation.
- Pages are loaded and stored between memory and disk without user program intervention.

Principle of Locality
- Locality of reference: during a phase of execution, a process references only a relatively small fraction of its pages.
- Temporal locality and spatial locality.

Basic Idea of Virtual Memory
- Paging is the technique used to implement virtual memory.
- Pages: the units into which the virtual address space is divided.
- Page frames: the corresponding units in physical memory.
- The MMU (memory management unit) translates a virtual address into a physical address.

Example: suppose the computer generates 16-bit addresses (0-64K) but has only 32K of memory. A 64K program can be written, but it cannot be loaded into memory in its entirety.
- A present/absent bit keeps track of whether or not each page is mapped.
- A reference to an unmapped page causes the CPU to trap to the OS; this trap is called a page fault.
- The MMU selects a little-used page frame, writes its contents back to disk, fetches the page just referenced, and restarts the trapped instruction.
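The page-fault path just described can be sketched in C. All of the helpers below are hypothetical stubs standing in for real OS machinery, and the 4K page size (16 virtual pages, 8 frames) is an illustrative assumption, not something stated on the slide.

```c
#include <stdbool.h>

#define NPAGES  16   /* 64K virtual space / assumed 4K pages    */
#define NFRAMES  8   /* 32K physical memory / assumed 4K pages  */

struct pte { int frame; bool present, modified; };

static struct pte page_table[NPAGES];

/* Hypothetical stubs standing in for the real OS internals. */
static int  pick_victim_frame(void)                 { return 0; }
static int  frame_owner(int frame)                  { (void)frame; return 0; }
static void write_page_to_disk(int vpn, int frame)  { (void)vpn; (void)frame; }
static void read_page_from_disk(int vpn, int frame) { (void)vpn; (void)frame; }

/* Outline of the page-fault handling flow described above. */
void handle_page_fault(int vpn)
{
    int frame  = pick_victim_frame();        /* choose a little-used page frame */
    int victim = frame_owner(frame);         /* virtual page currently in it    */

    if (page_table[victim].modified)         /* dirty pages go back to disk     */
        write_page_to_disk(victim, frame);
    page_table[victim].present = false;      /* unmap the evicted page          */

    read_page_from_disk(vpn, frame);         /* fetch the page just referenced  */
    page_table[vpn] = (struct pte){ .frame = frame, .present = true };
    /* the trapped instruction is then restarted by the hardware/OS */
}
```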

Paging
- The relation between virtual addresses and physical memory addresses is given by the page table.

Paging Model Example (figure)

Page Table
- A virtual address consists of a virtual page number (the high-order bits) and an offset (the low-order bits).
- The purpose of the page table is to map virtual pages onto page frames.

A typical page table entry contains:
- Page frame number: the frame the page maps to.
- Present/absent bit: 1/0 indicates a valid/invalid entry.
- Protection bits: what kinds of access are permitted.
- Modified bit (dirty bit): set when the page has been modified, so it must be written back to disk when evicted.
- Referenced bit: set when the page is referenced (helps decide which page to evict).
- Caching disabled bit: the cache is normally used to keep data that logically belongs on the disk in memory to improve performance; this bit turns caching off for the page.
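A sketch of such an entry as a C bit-field struct. The exact field widths and layout vary by architecture, so the ones below are illustrative only.

```c
#include <stdint.h>

/* One page table entry, packed into 32 bits.
 * Field widths are illustrative; real hardware defines its own layout. */
struct pte {
    uint32_t frame      : 20;  /* page frame number                          */
    uint32_t present    : 1;   /* 1 = mapped, 0 = page fault on access       */
    uint32_t protection : 3;   /* e.g. read / write / execute permissions    */
    uint32_t modified   : 1;   /* dirty: must be written back when evicted   */
    uint32_t referenced : 1;   /* set on access; used by replacement policy  */
    uint32_t cache_off  : 1;   /* disable caching, e.g. for device memory    */
    uint32_t unused     : 5;   /* padding to 32 bits                         */
};
```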

(Figures: a step-by-step paging example. Pages 3, 1, 6, and 2 are requested in turn and loaded into free page frames of real memory, with the page table updated after each request; when page 8 is requested and no frame is free, page 2 is first swapped out to disk and page 8 is then loaded into its frame.)

Paging Example
- A logical address on a 32-bit machine with a 4K page size is divided into:
  - a page number p of 20 bits, and
  - a page offset d of 12 bits.
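The split can be written directly as shift-and-mask operations. A short sketch, assuming the 32-bit / 4K layout above and a simple one-level table of frame numbers; the names are illustrative.

```c
#include <stdint.h>

#define PAGE_SHIFT  12                      /* 4K pages -> 12 offset bits */
#define PAGE_SIZE   (1u << PAGE_SHIFT)
#define OFFSET_MASK (PAGE_SIZE - 1)

/* Split a 32-bit logical address into page number p and offset d. */
static inline uint32_t page_number(uint32_t va) { return va >> PAGE_SHIFT; }  /* 20 bits */
static inline uint32_t page_offset(uint32_t va) { return va & OFFSET_MASK; }  /* 12 bits */

/* With a flat table of frame numbers, translation is just a lookup:
 * physical = frame(p) * PAGE_SIZE + d.  Presence checks are omitted here. */
uint32_t translate(const uint32_t *frame_of, uint32_t va)
{
    return (frame_of[page_number(va)] << PAGE_SHIFT) | page_offset(va);
}
```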

Page Table Issues
- Two major issues are faced:
  - The mapping must be fast, because it is done on every memory access.
  - Page tables may be extremely large. For example, most computers use 32-bit addresses; with a 4K page size the offset takes 12 bits, leaving 20 bits for the virtual page number, so the table needs 1 million entries.

Page Table Issues
- A single page table held in an array of hardware registers: as a process is started up, the registers are loaded with its page table.
  - Advantage: simple.
  - Disadvantage: expensive if the table is large, and loading the full page table at every context switch hurts performance.
- Leave the page table in memory, with a single register pointing to it.
  - Advantage: context switches are cheap.
  - Disadvantage: one or more extra memory references are needed to read table entries.

Virtual-to-Physical Lookups
- Programs only know virtual addresses, so each virtual address must be translated; this may involve walking a hierarchical page table.
- The page table is stored in memory, so each program memory access requires several actual memory accesses.
- Solution: cache the "active" part of the page table in a Translation Lookaside Buffer (TLB), also called "associative memory".

Bits in a TLB Entry
- Common (necessary) bits:
  - Virtual page number: matched against the virtual address.
  - Physical page number: the translated address.
  - Valid bit.
  - Access bits: kernel and user (nil, read, write).
- Optional (useful) bits: process tag, reference, modify, cacheable.

Translation Lookaside Buffer (TLB) (figures)

TLB Function
- When a virtual address is presented to the MMU, the hardware checks the TLB by comparing all entries simultaneously (in parallel).
- If a valid match is found, the page frame is taken from the TLB without going through the page table.
- If there is no valid match, the MMU detects the miss and does an ordinary page table lookup. It then evicts one entry from the TLB and replaces it with the new one, so that the next time that page is found in the TLB.
- TLB hit ratio (page address cache hit ratio): the percentage of time a page is found in the associative memory.

Hardware-Controlled TLB
- On a TLB miss:
  - The hardware loads the PTE (page table entry) into the TLB, writing an entry back if there is no free slot.
  - It generates a fault if the page containing the PTE is invalid; VM software performs the fault handling and the CPU is restarted.
- On a TLB hit, the hardware checks the valid bit:
  - If valid, the entry points to the page frame in memory.
  - If invalid, the hardware generates a page fault; page fault handling is performed and the faulting instruction is restarted.

Software-Controlled TLB
- On a TLB miss, the hardware generates a TLB fault and traps to the OS (software), which:
  - checks whether the page containing the PTE is in memory, and performs page fault handling if it is not;
  - writes an entry back if there is no free slot, then loads the PTE into the TLB;
  - restarts the faulting instruction.
- On a TLB hit, the hardware checks the valid bit:
  - If valid, the entry points to the page frame in memory.
  - If invalid, the hardware generates a page fault; page fault handling is performed and the faulting instruction is restarted.
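As a software model of the lookup, the sketch below scans a small TLB and falls back to the page table on a miss. Real hardware compares all entries in parallel, and the structures, sizes, and replacement policy here are assumptions for the example.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 16

struct tlb_entry {
    uint32_t vpn;     /* virtual page number  */
    uint32_t frame;   /* physical page frame  */
    bool     valid;
};

static struct tlb_entry tlb[TLB_ENTRIES];
static uint32_t page_table[1 << 20];      /* flat table: vpn -> frame        */
static unsigned next_victim;              /* trivial round-robin replacement */

/* Look up vpn; a sequential model of what the TLB hardware does in parallel. */
uint32_t lookup(uint32_t vpn)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return tlb[i].frame;                      /* TLB hit */

    /* TLB miss: ordinary page table lookup, then cache the translation
     * by evicting one entry so the next access to this page hits. */
    uint32_t frame = page_table[vpn];
    unsigned slot = next_victim++ % TLB_ENTRIES;
    tlb[slot] = (struct tlb_entry){ .vpn = vpn, .frame = frame, .valid = true };
    return frame;
}
```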
