CS322: Virtual Memory

Introduction

  1. In the earliest computer systems memory was assigned in single, contiguous regions

    We saw that the earliest computer systems operated on a memory model that required all of the memory needed by a given process to be allocated to it in a single, contiguous region of physical memory.

  2. Paging and/or segmentation can allow us to relax this requirement, but requires special hardware

    Last lecture, we saw that this requirement could be relaxed to allow non-contiguous allocation of physical memory by using paging and/or segmentation techniques. Central to any such approach is the concept of mapping logical addresses generated by the process into actual physical addresses by special hardware, which makes use of information stored either in special registers or in a page/segment table in memory.

    1. Such a mapping scheme leads to a distinction between two different address spaces:

      1. logical address space

        The logical address space is the range of logical addresses that can be generated by a process running on the CPU.

        • On many systems, the logical address space is dictated by the architecture of the CPU - specifically by the number of bits in the address portion of an instruction. For example, on a machine that uses 16-bit addresses in its instructions, the logical address space would be 0 .. 65535.

        • On machines whose architecture supports relatively long addresses, logical space may be dictated by the operating system to be some subset of the potential logical space allowed by the architecture. For example, the VAX architecture uses 32-bit addresses, which would conceivably allow logical addresses to range from 0 .. 4 Gig; but the actual logical space allowed to any one user is dictated by system management. In particular, the size of a process's page table is dictated by a management-set parameter. If this size is set at (say) 10K entries, then the process's logical space is restricted to a total size of 10K x 512 bytes = 5 Meg, since each page is 512 bytes long.

      2. physical address space

        The physical address space is the range of physical addresses that can be recognized by the memory. For example, if 1 Meg of physical memory is installed on a given machine, then the physical address space would probably be 0 .. 1,048,575. (I say probably because it could just as well be installed as 1,048,576 .. 2,097,151 or 0 .. 524,287 + 1,048,576 .. 1,572,863.)

    2. The mapping need not be one-to-one.

      1. some logical addresses may not map to any physical address

        It may be that certain logical addresses do not map to any physical address. In the schemes we have discussed thus far, this would mean that any process generating such an address would be aborted with a memory management violation.

      2. more than one logical address may map to the same physical address

        It may be that more than one logical address maps to the same physical address. This would not be likely to be the case within a single process, but often occurs between processes that are sharing code or data.

    3. Relative sizes of logical and physical memory

      One question that is of interest is the relative size of the two address spaces. For the schemes we have discussed thus far, it would necessarily have to be the case that

      | logical space | <= | physical space |.

      (Recall that an attempt to use a logical address for which there is no corresponding physical address leads to the process being aborted.) This means that several entire processes can be resident in physical memory at the same time. (Recall the example of the PDP-11/70: logical space is 64K due to the 16-bit architecture of the CPU; but physical space can be as big as 4 Meg since physical addresses are 22 bits long.)

  3. Overlays

    One problem that occurs over and over again in practice is that certain programs have a need for a larger logical address space than the machine they are running on will allow. Even though available memory capacities have been growing rapidly due to progress in memory chip technology, this problem never seems to go away because larger machines lead to attempts to use the computer to solve larger problems.

    1. Program design

      The earliest solution to this problem (and one that is still used sometimes today) is the technique of using overlays. A well designed program will have a tree-like module structure, with a main routine at the root that calls procedures which may in turn call further procedures below them.

    2. Only part of program code actually needs to be in memory at one time

      Observe that, at any given point in time, it is only necessary for a single path from root to leaf to actually be resident in memory. For example, if main calls procedure A which in turn calls procedure C, then only these three routines need actually be present in memory. The remainder can be kept on disk, to be brought in as needed. For example, if C returns to A, which then calls D, then the code for D can be brought in from disk and can be put into the memory occupied by C (assuming that the code for D will fit into this space). If A returns to main, which in turn calls B, then B's code can be brought into the space where A was.

      This can give rise to an overlay structure in which the vertical axis corresponds to logical address space and the horizontal axis to time. Portions of logical space are left vacant while certain shorter routines are resident, to allow room for a larger routine to be brought into the same space later.

    3. The Overlay Manager

      To make this scheme work, the compiler must translate procedure calls into calls to an overlay driver. The overlay driver checks to be sure that the desired procedure is already in memory; if not, it brings it in. Then it transfers control to the procedure. Thus, a slight extra overhead is introduced whenever a procedure is called. Note, too, that only procedure calls "down" the tree are allowed. Calls at the same level or up the tree must be forbidden, since the calling routine may be overwritten by a new overlay, which would cause the return from procedure instruction to transfer control to the wrong routine. (Of course, any procedure may contain internal procedures whose call and return do not go through the overlay driver; for these, any direction of call is allowed.)

    4. Overlays are restricted to code, not data

      A significant limitation of overlaying is that it only deals with the space needed for code - not the space needed for data. Thus, if a program needed a 100 x 1000 array, it could not be made to run this way on a machine that restricted it to a 64K logical address space.

  4. Another way of solving the problem of running a process that is too big for available memory is the memory management scheme known as virtual memory. Virtual memory is characterized by having

    | logical space | > | physical space |

    that is, a process may potentially address more logical memory than there is actual physical memory to support it. This means that, in general, only a portion of a given process will be resident in physical memory at any one time. The part which is not memory resident will be kept in secondary storage, ready to be brought in when needed. Virtual memory is an alternate way of solving the problem addressed by overlaying - but with several major advantages over that approach:

    1. Programmer must work out overlay structure

      The overlay structure for a program must be worked out by the programmer, a potentially laborious process. But virtual memory provides a form of overlaying that is invisible to the programmer.

    2. Overlays are "coarse grained"

      Overlay structures are typically rather coarse. There is no provision for overlaying mutually-exclusive portions of a single long procedure (e.g. different clauses in a case statement).

    3. Overlays work for code, not data

      Overlaying can solve the problem of a routine whose code is too big for available memory, but is of no use when the problem is the space required for data (e.g. large arrays.) Virtual memory handles either problem (or a combination of the two) equally well.

    Note, however, that virtual memory requires that the underlying architecture allow for a large logical address space. That is, it only deals with the second of the two constraints on logical address space size mentioned above. A process running on, say, a PDP-11 can never address more than 64K of memory regardless of what memory management scheme is used. If a machine's architecture severely restricts the address space, then overlaying is still the only viable way of allowing large programs to run.

  5. Virtual memory is advantageous even on systems with a lot of physical memory

    Even if physical memory were large enough to hold any process that might be run on a given system, virtual memory can still be advantageous by allowing more processes to be (partially) memory resident at once, thus allowing for higher utilization of the CPU and/or allowing more interactive users to be online.

Virtual memory basics

  1. Virtual memory is an extension of paging and/or segmentation

    The basic implementation of virtual memory is very much like paging or segmentation. In fact, from a hardware standpoint, virtual memory can be thought of as a slight modification to one of these techniques. For the sake of simplicity, we will discuss virtual memory as an extension of paging; but the same concepts would apply if virtual memory were implemented as an extension of segmentation.

  2. Page table used to translate logical to physical addresses

    Recall that in a paging scheme each process has a page table which serves to map logical addresses generated by the process to actual physical addresses. The address translation process can be described as follows:

    1. Break the logical address down into a page number and an offset.

    2. Use the page number as an index into the page table to find the corresponding frame number.

    3. Using the frame number found there, generate a physical address by concatenating the frame number and the offset from the original address.

    Example: suppose that the page size is 256 bytes, that logical addresses are 16 bits long, that physical addresses are 24 bits long, and that the process's page table maps page 02 to frame 01A0 (all numbers here are hexadecimal).

    A logical address 02FE would be translated into the physical address 01A0FE.
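
    To make the mechanics concrete, here is a minimal sketch of the translation in Python. Only the page 02 -> frame 01A0 entry is given by the example above; a real page table would have one entry per page:

      PAGE_SIZE = 256        # bytes per page, so offsets are 8 bits
      OFFSET_BITS = 8

      # The one mapping given in the example (hexadecimal): page 02 -> frame 01A0.
      page_table = {0x02: 0x01A0}

      def translate(logical):
          page = logical >> OFFSET_BITS           # high-order bits: page number
          offset = logical & (PAGE_SIZE - 1)      # low-order bits: offset within page
          frame = page_table[page]                # index the page table by page number
          return (frame << OFFSET_BITS) | offset  # concatenate frame number and offset

      assert translate(0x02FE) == 0x01A0FE        # the translation worked above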

  3. Security in a paging system

    In a paging system, one security provision that is needed is a check to be sure that the page number portion of a logical address corresponds to a page that has been allocated to the process. This can be handled either by comparing it against a maximum page number or by storing a validity indication in the page table, using an additional bit in the page table entry alongside the frame number. An attempt to access an invalid page causes a hardware trap, which passes control to the operating system; the OS in turn aborts the process.

  4. Situations that cause traps to the Operating System

    In a virtual memory system, we no longer require that all of the pages belonging to a process be physically resident in memory at one time. Thus, there are two reasons why a logical address generated by a process might give rise to a hardware trap:

    1. violations

      The logical address is outside the range of valid logical addresses for the process. This will lead to aborting the process, as before. (We will call this condition a memory-management violation.)

    2. page faults

      The logical address is in the range of valid addresses, but the corresponding page is not currently present in memory; rather, it is stored on disk. The operating system must bring it into memory before the process can continue to execute. (We will call this condition a page fault.)

  5. Need a paging device to store pages not in memory

    In a simple paging system, a program is read into memory from disk all at once, and if swapping is used then the entire process is swapped out or in as a unit. In a virtual memory system, by contrast, processes are paged in and out in a piece-wise fashion. Thus, the operating system will need a paging device (typically a disk) where it can store those portions of a process which are not currently resident. (The overall fault-handling flow is sketched after this list.)

    1. When a fault for a given page occurs, the operating system will read the page in from the paging device.

    2. Further, if a certain page must be moved out of physical memory to make room for another being brought in, then the page being removed may need to be written out to the paging device first. (It need not be written out if it has not been altered since it was brought into memory from the paging device.)

    3. When a page is on the paging device rather than in physical memory, the page table entry is used to store a pointer to the page's location on the paging device.
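
    Pulling these pieces together, the overall response to a fault can be outlined as follows. This is only a sketch in Python-style pseudocode; the helper routines (find_free_frame, choose_victim, and the paging-device reads and writes) are hypothetical stand-ins for OS internals:

      def handle_trap(process, page):
          # Case 1: memory-management violation - page not in the logical space.
          if not process.valid_page(page):
              abort(process)
              return
          # Case 2: page fault - page is valid but resides on the paging device.
          frame = find_free_frame()
          if frame is None:                        # memory is full: evict a page
              victim = choose_victim()             # see replacement policies below
              if victim.dirty:                     # write back only if modified
                  write_to_paging_device(victim)
              victim.present = False               # its entry now points to disk
              frame = victim.frame
          read_from_paging_device(process, page, frame)  # process waits during IO
          entry = process.page_table[page]
          entry.frame, entry.present = frame, True
          make_ready(process)                      # resume; restart the instruction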

  6. Virtual memory has an impact on CPU scheduling

    In a virtual memory system, the hardware can behave in basically the same way as for paging. However, the operating system no longer simply aborts the process when the process accesses an invalid page. Instead, it determines which of the above two reasons caused the trap. If it is the latter, then the operating system must initiate the process of bringing in the appropriate page. The process, of course, must be placed into a wait state until this is completed. So our set of possible process states must be extended from:

    RUNNING
    READY
    WAITING for IO to complete

    to:

    RUNNING
    READY
    WAITING for IO to complete
    WAITING for a page to be brought in

    (Note, though, that a page wait is in reality just another form of IO wait, except that here the reason for the wait is not an explicit IO instruction in the process.)

  7. Hardware support beyond that for paging alone is required for virtual memory

    Though the burden of recognizing and handling page faults falls on the operating system, certain provisions must be present in the hardware that are not needed with simple paging:

    1. A page fault could occur while a single instruction is being carried out

      The ability to restart an instruction that caused a fault in mid-stream. This can be tricky if the instruction accesses large blocks of memory - e.g. a block move that copies a character string en masse.

    2. Page table entry should include a "dirty" bit

      Though it is not strictly necessary, it is desirable to include a "written-in" bit in the page table entry, along with the valid bit noted above. This bit is set if any location in the page has been modified since it was brought into physical memory. This bit comes into play when the operating system finds it necessary to take the frame away from a page to make room for a new page being faulted in. If the old page has not been written in, then it need not be written back to disk, since it is the same as the copy on disk that was brought in originally.

    3. May want a bit to indicate that a page has been accessed

      Some implementations also require a per-page accessed bit that is set whenever any access (read or write) to the page occurs. This can be used to help decide which pages are no longer being actively used and so can be paged out to make room for new pages coming in. Not all memory management strategies require this, however.
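
    These status bits are typically packed into the page table entry alongside the frame number. A minimal sketch of one possible layout - the bit positions here are illustrative, not those of any particular machine:

      VALID      = 1 << 0   # page belongs to the process's logical space
      PRESENT    = 1 << 1   # page currently occupies a physical frame
      DIRTY      = 1 << 2   # "written-in" bit, set by hardware on any store
      REFERENCED = 1 << 3   # set by hardware on any read or write
      FRAME_SHIFT = 4       # the frame number occupies the remaining bits

      def frame_of(entry):
          return entry >> FRAME_SHIFT

      def needs_writeback(entry):
          # An evicted page is written to the paging device only if modified.
          return bool(entry & DIRTY)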

Virtual memory design issues

  1. Policy for bringing pages into memory

    1. When does the OS decide to bring a page in?

      We have already noted that, in general, only a portion of the pages belonging to a given process will actually be resident in physical memory at any given time. Under what circumstances is a given page brought in from the paging device?

    2. Demand paging

      The simplest policy is demand paging. Simply stated, under demand paging a given page is brought into memory only when the process it belongs to attempts to access it. Thus, the number of page faults generated by a process will be at least equal to the number of distinct pages it uses. (The number of faults will be higher if a page that has been used is removed from memory and then used again.) In particular, when a process starts running a program there will be a period of time when the number of faults generated by the process is very high:

      1. Page faults occur one-by-one as program begins running

        To start running the program, the CPU PC register is set to the first address in the program. Immediately, a page fault occurs and the first page of the program is brought in. Once control leaves this page (due either to running off the end or to a subroutine call) another fault occurs etc. Further, any access to data will also generate a fault.

      2. Startup and post-swapped time can be slow

        An implication of pure demand paging is that the initial startup of a new program may take a significant amount of time, since each page needed will require a disk access to get it. Likewise, if a process is ever swapped out of memory due to a long IO wait, then when it is brought back in it will be paged in one page at a time.

      3. No pages are brought into memory unnecessarily

        The chief advantage of demand paging is that no pages are ever brought into memory unnecessarily. For example, if a program contains code for handling a large number of different kinds of input data, only the code needed for the actual data presented to it will ever be brought in.

    3. Anticipatory or Pre-paging

      Some systems combine demand paging with some form of anticipatory paging or pre-paging. Here, the idea is to bring a page in before it is accessed because it is felt that there is good reason to expect that it will be accessed. This will reduce the number of page faults a process generates, and thus speed up its startup at the expense of possibly wasting physical memory space on unneeded pages. Anticipatory paging becomes increasingly attractive as physical memory costs go down.

      1. Pages known to be initially required can all be loaded at once

        When initially loading a program, there may be a certain minimum set of pages that have to be accessed for program initialization before branching based on the input data begins to occur. These can all be read in at once.

      2. All pages swapped out can later be swapped back in at once

        If a process is totally swapped out during a long IO wait, then swap the whole set of pages that were swapped out back in when it is resumed instead of paging it back in a little bit at a time.

      3. Structure of page device may make it advantageous to read several pages at once

        Another form of anticipatory paging is based on the clustering of the paging device. If several pages reside in the same cluster on the paging device, then it may be advantageous to read all of them in if any one of them is demanded, since the added transfer time is only a small fraction of the total time needed for a disk access. This is especially advantageous if the pages correspond to logically-adjacent memory locations.
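
      A sketch of this cluster-based read-ahead; cluster_of and resident are hypothetical helpers reporting which pages share a disk cluster and which are already in memory:

        def pages_to_fetch(faulted_page):
            # The seek dominates the cost of the disk access, so fetch the
            # faulted page's whole cluster; its neighbors are likely to be
            # logically adjacent pages that will be demanded soon anyway.
            return [p for p in cluster_of(faulted_page) if not resident(p)]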

  2. Page replacement policies: What page do we remove from memory?

    Over time, the number of pages physically resident in memory on a system under any significant load will eventually equal the number of available frames. At this point, before any new page can be faulted in a currently resident page must be moved out to make room for it. The question of how to select a page to be replaced is a very important one. In general, there are two kinds of page replacement policies.

    1. Global policies

      When process X needs to fault in a new page, the set of candidates for replacement includes all pages belonging to all processes on the system. Note that unless a page belonging to X already happens to be chosen, this will result in an increase in the total amount of physical memory allocated to X.

    2. Local policies

      When process X needs to fault in a new page, the set of candidates for replacement includes only those pages currently belonging to process X. Note that this means that the total amount of physical memory allocated to X will not change.

    3. In general, a system will have to incorporate both kinds of policy:

      1. At startup, we must use a global policy

        When a process is just starting up, a global policy will have to be used since the new process has few pages available as replacement candidates.

      2. Local paging may be used to keep a particular process from using too much memory

        Eventually, however, a local policy may have to be imposed to keep a given process from consuming too much of the system's resources.

    4. The working set of a process

      Many of the policies to be discussed below can be applied either locally or globally. The notion of a process's working set can be used to help decide whether the process should be allowed to grow by taking pages from other processes or should be required to page against itself.

      1. The working set is the set of pages that a process has accessed in the time interval [ T - DT , T ]

        The working set for a process is defined in terms of some interval DT back from the current time T. Building on the principle of locality of reference, it is assumed that this is a good approximation to the set of pages that the process must have physically resident in order to run for an interval DT into the future without a page fault. (The interval DT is chosen to keep the percentage of memory accesses resulting in a fault at an acceptable level; a time corresponding to around 10,000 memory accesses is a good rule of thumb.)

      2. During the life of a process, there are times when the working set changes slowly and other times when it changes rapidly

        Studies of the memory access behavior of processes show that typically there are periods of time during which the working set of a given process changes very little. During these periods, if sufficient physical memory is allocated to the process then it can page locally against itself with an acceptably low rate of page faults. These periods are separated by bursts of paging activity, when the process's working set is changing rapidly; these bursts correspond to major stages in the program's execution - e.g. the termination of one top-level subroutine and the starting up of another. When this happens, performance is improved if global paging is used.

      3. Maintaining a working set requires some system overhead

        Of course, determining what the actual working set of a process is requires a certain amount of overhead - notably keeping track of what pages have been referenced during a past interval. (This is one of the places that a hardware referenced bit comes in.) One way to keep track of a process's working set involves using a timer that interrupts at the chosen interval DT (a sketch of this scheme follows the two steps below):

        • At the start of the interval, turn off all of the referenced bits in the page table for the currently running process.

        • When the timer interrupts, include in the working set only those pages whose referenced bit is now on.
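
        A minimal sketch of this timer-based scheme, assuming each page table entry carries the hardware referenced bit described earlier and that page_table is a list indexed by page number:

          def interval_start(process):
              for entry in process.page_table:
                  entry.referenced = False     # cleared by the OS; set by hardware

          def timer_interrupt(process):
              # Approximate working set: pages touched during the last interval DT.
              process.working_set = {page
                                     for page, entry in enumerate(process.page_table)
                                     if entry.referenced}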

      4. The working set concept can also be applied without going to all of the effort needed to determine the exact working set, by monitoring the page fault rate instead (a sketch follows these rules):

        • If the page fault rate for a process lies within a certain empirically determined range, then assume that it has sufficient physical memory allocated to it to hold its (slowly evolving) working set and page it locally.

        • If the page fault rate increases above the upper limit, assume its working set is expanding and page it globally, allowing its physical memory allocation to grow to keep pace with its presumably growing working set.

        • If the page fault rate drops too low, then consider reducing its physical memory allocation by not only paging it against itself but also allowing other processes to take page frames from it. This corresponds to an assumption that the size of its working set is less than the amount of physical memory currently allocated to it.
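
        These three rules amount to a simple feedback loop driven by the measured fault rate. A sketch, where upper and lower stand for the empirically determined bounds and recent_fault_rate is a hypothetical measurement:

          def adjust_allocation(process, upper, lower):
              rate = process.recent_fault_rate()
              if rate > upper:
                  process.policy = "global"   # working set growing: take frames freely
              elif rate < lower:
                  process.policy = "shrink"   # other processes may take frames from it
              else:
                  process.policy = "local"    # allocation matches the working set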

    5. We defer detailed discussion of page replacement policies until after we briefly note one further issue.

  3. The degree of memory overallocation.

    1. It is unusual in today's multiprogrammed systems for a single process to exceed the limits of the system's physical memory

      We have seen that, under a virtual memory system, it is possible for the logical memory allocated to any one process to exceed the amount of physical memory available. In practice, however, this does not often occur, since virtual memory systems are generally multiprogrammed and thus are configured with sufficient physical memory to allow portions of many processes to be resident at once.

    2. However, the sum of memory required by all processes on the system often exceeds the amount of physical memory

      However, the sum total of the logical address spaces allocated to all the processes on the system will generally be far greater than the total amount of physical memory available. (If this were not so, then virtual memory would be of no benefit.) When memory is overallocated, each page faulted in will result in another page having to be moved out to make room for it. In general:

      1. Too little overallocation (or none at all)

        This means that the resource of physical memory is not really being used well. Pages that could be moved out to the paging device without harm are being kept in physical memory needlessly.

      2. But too much overallocation can lead to a serious performance problem known as thrashing

        Thrashing occurs when all of the pages that are memory resident are high-demand pages that will be referenced in the near future. Thus, when a page fault occurs, the page that is removed from memory will soon give rise to a new fault, which in turn removes a page that will soon give rise to a new fault ... In a system that is thrashing, a high percentage of the system's resources is devoted to paging, and overall CPU utilization and throughput drop dramatically.

      3. The only way to prevent thrashing is to limit the number of processes that are actively competing for physical memory.

        This can be done by using a form of intermediate scheduling, with certain processes being swapped out wholesale as in a non-virtual-memory system.

        Ex: VMS has the concept of the balance set - which is the set of processes currently allowed to compete for physical memory. The size of the balance set is determined by the criterion: sum total of the working sets of all processes in the balance set <= available physical memory.

Page Replacement Policies

  1. Goals:

    1. Minimize the number of page faults by replacing a page that is least likely to be faulted back in soon.

      It can be shown that the optimal scheme would be to replace the page whose next reference is furthest in the future. But since this requires knowledge of the future, which we don't have, the optimal scheme becomes a goal which other schemes try to approximate. (A sketch of the optimal choice appears after this list.)

    2. Minimize disk traffic.

      This involves both minimizing the number of page faults and choosing to replace a page that has not been written in over one that has, since the latter must be written back to disk before its frame can be reallocated.
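
      Though unrealizable online, the optimal choice is easy to state in code, which makes it a useful benchmark when replaying a recorded reference string. A minimal sketch:

        def optimal_victim(resident, future_refs):
            # Replace the resident page whose next reference is furthest in
            # the future; a page never referenced again is the ideal victim.
            # Usable only offline, since future_refs must already be known.
            def next_use(page):
                try:
                    return future_refs.index(page)
                except ValueError:
                    return float("inf")
            return max(resident, key=next_use)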

  2. Random

    Select a page to be replaced randomly.

    1. The simplest algorithm - but also the furthest from optimal.

    2. Can be made viable by combining it with a second chance policy:

      1. The OS keeps a queue of pages that have been selected for replacement. Each has been marked as invalid in the corresponding page table, but has not actually been removed from memory.

      2. When a new page frame is needed, the front frame in this queue is used (meaning the page is now really removed from memory) and a new victim frame is found and added to the end of the queue.

      3. Should a fault occur for a page that is in the queue, then the operating system can resolve the fault without actually going to disk by simply removing the frame from the queue and giving it a second chance. (Of course, another page must now be selected for replacement and added to the end of the queue.)

      4. With a reasonably long queue, this can give good performance. A high demand page selected for replacement is likely to be faulted before it gets to the front of the queue and is actually removed from memory.
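
    A sketch of the victim queue itself; pick_random_resident is a hypothetical chooser, and load_from_disk is a stand-in for the actual disk read:

      from collections import deque

      victim_queue = deque()       # pages marked invalid in their page tables,
                                   # but not yet actually removed from memory

      def frame_for_new_page():
          victim = victim_queue.popleft()              # now really leaves memory
          victim_queue.append(pick_random_resident())  # choose the next victim
          return victim.frame

      def on_fault(page):
          if page in victim_queue:     # second chance: resolved without disk IO
              victim_queue.remove(page)
              page.valid = True
              victim_queue.append(pick_random_resident())
          else:
              load_from_disk(page, frame_for_new_page())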

  3. FIFO (First In, First Out)

    Keep a queue of pages in the order in which they were faulted in. When a page must be selected for replacement, select the front page in this queue.

    1. The rationale for this is that a page that was faulted in a long time ago may belong to a routine that is no longer being used (e.g. initialization code) - but certainly not necessarily so!

    2. FIFO requires little overhead. The operating system simply keeps a queue of page frames as a linked list.

    3. FIFO suffers from Belady's anomaly: Under certain circumstances, it is possible to make the page fault rate for a process worse by increasing the memory allocated to it. Example (from Deitel):

      Page         FIFO with 3 pages           FIFO with 4 pages
      referenced   Pages resident    Fault?    Pages resident    Fault?
      A            A                 F         A                 F
      B            A B               F         A B               F
      C            A B C             F         A B C             F
      D            B C D             F         A B C D           F
      A            C D A             F         A B C D
      B            D A B             F         A B C D
      E            A B E             F         B C D E           F
      A            A B E                       C D E A           F
      B            A B E                       D E A B           F
      C            B E C             F         E A B C           F
      D            E C D             F         A B C D           F
      E            E C D                       B C D E           F
                   (9 faults)                  (10 faults!)

      (Note: this anomalous behavior is very unlikely to occur in practice. It is more an indicator of the complexity of the issues than a real threat.)

    4. In its pure form, FIFO is not a very good scheme.

      But, like random, it can be combined with a second chance strategy to produce a very workable strategy. A page that is "saved" from removal by a fault while it is in the "victim" queue is put at the end of the FIFO queue so that it will not be selected for replacement again until all other resident pages have been tried. (This scheme is used by VMS.)

      Note: Random and FIFO do not require special hardware support (other than a written-in bit to prevent writing back a clean page). The other schemes require hardware support such as a referenced bit.
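
      Belady's anomaly is easy to reproduce by simulating pure FIFO over the reference string from the table above. A minimal sketch:

        from collections import deque

        def fifo_faults(refs, nframes):
            resident, queue, faults = set(), deque(), 0
            for page in refs:
                if page not in resident:
                    faults += 1
                    if len(resident) == nframes:       # evict the oldest page
                        resident.remove(queue.popleft())
                    resident.add(page)
                    queue.append(page)
            return faults

        refs = list("ABCDABEABCDE")
        assert fifo_faults(refs, 3) == 9     # as in the table: 9 faults
        assert fifo_faults(refs, 4) == 10    # more memory, yet more faults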

  4. LRU (Least Recently Used)

    Replace the page which was used least recently, assuming that it belongs to a portion of the program we have now moved beyond.

    1. Future behavior is assumed to be like past behavior

      This is a good approximation to optimal, since it is based on letting the past behavior of the program serve as a predictor of its future behavior.

    2. This can backfire in the face of very long loops

      This is hard to avoid with any scheme.

    3. LRU cannot suffer from Belady's anomaly

      This is true because for a larger memory allocation the pages in memory are always a superset of the pages in memory for a smaller allocation. Thus, if the larger allocation faults on a given reference, so must the smaller.

      Page         LRU with 3 pages            LRU with 4 pages
      referenced   Pages resident    Fault?    Pages resident    Fault?
      A            A                 F         A                 F
      B            A B               F         A B               F
      C            A B C             F         A B C             F
      D            B C D             F         A B C D           F
      A            C D A             F         B C D A
      B            D A B             F         C D A B
      E            A B E             F         D A B E           F
      A            B E A                       D B E A
      B            E A B                       D E A B
      C            A B C             F         E A B C           F
      D            B C D             F         A B C D           F
      E            C D E             F         B C D E           F
                   (10 faults)                 (8 faults)

    4. The major problem with LRU is implementing it.

      As with FIFO, we could keep a queue of pages; but we would need to re-order the queue on each memory reference by taking the page referenced out of the middle and re-inserting it at the rear. However, the overhead of doing this in software would be unbearable, since the queue is restructured at each memory reference rather than at each fault as with FIFO. Pure LRU could be achieved with fairly sophisticated special hardware, but even then would probably not be worth the price.

    5. The other algorithms we will discuss here are basically attempts to approximate LRU - which is in turn an approximation to optimal.
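
    Although maintaining true LRU order at every memory reference is impractical in software, simulating LRU offline over a recorded reference string is easy, and reproduces the table above. A minimal sketch:

      def lru_faults(refs, nframes):
          stack, faults = [], 0        # most recently used page kept at the rear
          for page in refs:
              if page in stack:
                  stack.remove(page)   # re-reference: moved to the rear below
              else:
                  faults += 1
                  if len(stack) == nframes:
                      stack.pop(0)     # front of the list is least recently used
              stack.append(page)
          return faults

      refs = list("ABCDABEABCDE")
      assert lru_faults(refs, 3) == 10     # as in the table
      assert lru_faults(refs, 4) == 8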

  5. LFU (Least Frequently Used)

    Replace the page that has been used least frequently in some interval.

    1. Needs substantial hardware support

      A per-page reference counter in the page table, incremented each time the page is referenced. Each time a page replacement is to be done, scan all the counters looking for the smallest value and replace that page. Periodically reset all the counters to zero.

    2. High hardware overhead

      This approach is really only feasible if the page table is kept in high speed registers.

    3. May remove page before it really starts getting used

      A recently read-in page may be selected for replacement because it has not been in memory long enough to show much activity, even though it will become highly active.

  6. NRU (Not recently used)

    A simplification of LFU.

    1. Hardware

      A per-page referenced bit, automatically set by the hardware whenever the page is referenced (this is not a serious overhead problem), plus a written-in bit as with the other schemes.

    2. The OS periodically resets all referenced bits to 0. When a replacement must occur, the pages can be grouped into 4 categories:

      Referenced   Written-in   Comments
      0            0            has not been referenced recently and has never been written to
      0            1            possible, since the writing may have been done before the last reset of the referenced bits
      1            0            has been read from recently, but not written to
      1            1            has been referenced recently and has been written to

      Select a page for replacement from the first-listed non-empty category.
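
      Numbering the categories so that they sort in the order listed makes victim selection a one-line scan. A minimal sketch, assuming each resident page object carries its two bits:

        def nru_victim(resident_pages):
            # Category = 2 * referenced + written, so categories (0,0), (0,1),
            # (1,0), (1,1) order exactly as in the table above.
            return min(resident_pages,
                       key=lambda p: 2 * p.referenced + p.written)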

  7. FINUFO (clock)

    An improvement to NRU.

    1. Hardware: same as NRU.

    2. The OS treats the resident pages as a circular list, with a pointer to one page maintained by the OS. (Note the resemblance to the face of a clock - hence the name.)

    3. When a frame must be replaced, start at the pointer and scan pages clockwise:

      1. If the referenced bit is on, turn it off.

      2. Otherwise, the page to replace has been found. Replace it with the new page and stop the scan. (Note: if all pages have their referenced bit on, the scan will go full circle, stopping with the page the pointer originally pointed to, whose bit would have been turned off during the first time it was looked at. In this case, FINUFO degenerates to FIFO).

      3. In any case, leave the pointer at the page after the one replaced.

    4. Example: Pages which have the reference bit turned on are written in lowercase.

      Page         FINUFO with 3 pages         FINUFO with 4 pages
      referenced   Pages resident    Fault?    Pages resident    Fault?
      A            A                 F         A                 F
      B            A B               F         A B               F
      C            A B C             F         A B C             F
      D            B C D             F         A B C D           F
      A            C D A             F         a B C D
      B            D A B             F         a b C D
      E            A B E             F         D A B E           F
      A            a B E                       D a B E
      B            a b E                       D a b E
      C            A B C             F         a b E C           F
      D            B C D             F         C A B D           F
      E            C D E             F         A B D E           F
                   (10 faults)                 (8 faults)
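
    A simulation of the clock mechanism matching the trace above (as in the table, a newly loaded page starts with its referenced bit off; the bit is set when the page is later hit):

      def finufo_faults(refs, nframes):
          frames = [None] * nframes    # the page held by each frame, if any
          ref = [False] * nframes      # the hardware referenced bits
          hand, faults = 0, 0
          for page in refs:
              if page in frames:
                  ref[frames.index(page)] = True     # hit: hardware sets the bit
                  continue
              faults += 1
              while frames[hand] is not None and ref[hand]:
                  ref[hand] = False                  # second chance: clear, move on
                  hand = (hand + 1) % nframes
              frames[hand] = page                    # replace (or fill empty frame)
              ref[hand] = False                      # new page starts un-referenced
              hand = (hand + 1) % nframes
          return faults

      refs = list("ABCDABEABCDE")
      assert finufo_faults(refs, 3) == 10    # as in the table
      assert finufo_faults(refs, 4) == 8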



These notes were written by R. Bjork of Gordon College. They were edited, revised and converted to HTML by J. Senning of Gordon College in March 1998.