Virtual Memory

Outline
- Recap
- Page Table
- Page Faults
- Fragmentation
- Memory Sharing
- Improving Paging
- Translation Lookaside Buffer (TLB)

Announcements
- Reading: MOS 3.3 - 3.7
- PA 5 released
- Quiz Friday

Recap: Paging
- Divide virtual memory into fixed-size pages that are loaded into same-sized frames in physical RAM.
- The page table is kept in physical memory and is maintained by the kernel:
  - one entry per page
  - each entry holds a frame number
- [Figure: address translation. Virtual address 0xdeadbeef splits into page number 0xdeadb and offset 0xeef; the page table maps page 0xdeadb to frame 0xb, giving physical address 0xbeef.]

Page Table
- Each page table entry typically contains the frame number plus several status bits (detailed in the entry layout below).
- The MMU keeps a pointer to the page table in memory so it can look up entries during translation.
- Exact fields and layout depend on the OS and MMU.

Exercise (PollEv)
Assume a 32-bit virtual address space, 4 kB pages, and 4 B page table entries.
- How large is the page table for a process?
- How much memory is needed for 100 processes?

Page Faults
- A page may not be loaded if it is not yet needed.
- A page fault occurs when the process attempts to access a page that is invalid.
- On a page fault, the requested page must be loaded into RAM:
  1. The process attempts to access a memory location.
  2. The MMU raises a page fault and traps to the OS if the page table entry is invalid (valid bit == 0).
  3. The OS checks whether the page exists on disk; if the page doesn't exist => segfault, kill the process.
  4. The kernel copies the page from disk into a free frame in memory (page replacement).
  5. The kernel updates the page table entry with the new frame number and sets the valid bit.
  6. The kernel sets the process PC back to the same read/write instruction so it re-executes the next time the process is scheduled.

Fragmentation
- External fragmentation = space wasted between processes.
- Internal fragmentation = space wasted within a page.
- Paging solves external fragmentation: process memory can be split up non-contiguously and used on an as-needed basis.
- Internal fragmentation can still occur within pages if the process does not use the allocated pages efficiently.

Memory Sharing
- The kernel can enable memory sharing between processes by mapping pages to the same frame.
- Shared libraries don't need to be duplicated in RAM for each process.
- What happens on fork()?
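The sizing exercise above can be checked with quick arithmetic. A minimal sketch (the variable names are illustrative, not from the slides):

```python
VA_BITS = 32          # 32-bit virtual address space
PAGE_SIZE = 4 * 1024  # 4 kB pages
PTE_SIZE = 4          # 4 B per page table entry

num_pages = 2**VA_BITS // PAGE_SIZE   # 2^20 = 1,048,576 entries, one per page
table_size = num_pages * PTE_SIZE     # bytes of page table per process

print(table_size // 2**20, "MiB per process")              # 4 MiB per process
print(100 * table_size // 2**20, "MiB for 100 processes")  # 400 MiB for 100 processes
```

Note that the table size doubles with each extra address bit, which is why flat page tables become impractical for large address spaces.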
Page Table Entries
Each entry typically contains:
- frame number (obviously)
- valid bit: whether the page is currently loaded (present/absent)
- protection bits: read/write/execute protections
- referenced bit: whether the page has been accessed
- dirty bit: whether the page has been written to (modified)
There is one page table per process; the kernel updates the MMU's page-table pointer on each context switch.
[Figure: example entry layout - valid, protection, referenced, and dirty bits alongside the frame number.]

[Figure: page-fault handling - steps (1)-(6) traced between the process, OS, disk, page table, and a free frame in main memory.]

Fragmentation Examples
[Figure: external fragmentation - main memory holding the OS and processes 4, 5, and 6 with unusable gaps between them.]
- External: e.g., fragmentation of the heap by malloc.
- Internal: e.g., malloc needs 1 word, but gets a whole page.
- With paging, splits always occur on page boundaries, so there are no gaps that cannot be used.

fork() and Copy-on-Write
- Recall fork(): the kernel copies the page table to the child as well, instead of copying every page.
- The kernel configures the MMU to trigger copy-on-write: if the parent or child writes to a page, the kernel copies that page to a new frame and updates the page table.
- Faster than a full copy, and memory efficient.
- Preserves parent/child isolation.
- shm_open and mmap can be used for explicit sharing (IPC).
[Figure: processes P1 and P2 each map their own stack, heap, data, and text pages to private frames, while both page tables map libc to the same frame 0x1c in main memory.]

Improving Paging
Two performance concerns with the naive implementation:
- page table size: every page needs an entry in the page table, and the table grows exponentially with the number of address bits
- address translation: a page table lookup adds an additional memory access
Solution: a dedicated cache for the page table.

Translation Lookaside Buffer (TLB)
- The TLB is a dedicated hardware cache for page table entries, accessible only to the MMU.
- It works because of temporal and spatial locality.
- The TLB is associative memory, so lookup occurs in parallel and is quite fast.
- Like other caches, a "hit" is fast, but a "miss" is slow; on a miss, the MMU must load the entry from the page table.
- We will discuss access performance more next time.
[Figure: translation with a TLB - virtual address 0xdeadbeef; page 0xdeadb is found in the TLB and maps to frame 0xb, giving physical address 0xbeef without a page-table walk.]
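The TLB lookup path can be sketched as a toy model. This is a hypothetical Python simulation of the logic (real translation happens in MMU hardware), reusing the slide's example mapping of page 0xdeadb to frame 0xb:

```python
PAGE_SIZE = 4096   # 4 kB pages -> 12-bit offset
OFFSET_BITS = 12

page_table = {0xdeadb: 0xb}  # page number -> frame number (slide example)
tlb = {}                     # small cache of recent translations

def translate(vaddr):
    """Return (physical address, tlb_hit) for a virtual address."""
    page = vaddr >> OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    if page in tlb:                  # TLB hit: no extra memory access
        frame, hit = tlb[page], True
    else:                            # TLB miss: walk the page table in memory
        frame, hit = page_table[page], False
        tlb[page] = frame            # fill the TLB for next time
    return (frame << OFFSET_BITS) | offset, hit

paddr, hit = translate(0xdeadbeef)   # first access to this page: TLB miss
print(hex(paddr), hit)               # 0xbeef False
```

Subsequent accesses to any address on page 0xdeadb hit in the TLB, which is the locality argument from the slide: nearby accesses keep reusing the cached entry instead of paying for a page-table lookup.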