CS322: Protection

Introduction

  1. Protection has long been an issue for operating systems

    From the very beginning, operating systems have been concerned with the protection of system resources from accidental or malicious corruption by users:

    1. Two-mode Systems

      A two-mode CPU allowed early batch monitors to protect themselves from modification by one user's job that might cause the next user's job to run incorrectly.

    2. Monitor performs all IO operations

      The requirement that all IO operations be done by the monitor served to protect the cards of one job from being inadvertently read as data by a preceding job.

    3. File systems need per-user protection

      With the advent of on-line mass-storage, file systems were developed with a provision for protecting the files belonging to one user from unauthorized access by another user.

  2. Protection is becoming increasingly important as system complexity grows.

    1. Additional issues are raised by networking and telecommunications.

    2. Protection schemes can be used to enhance program integrity

      There is a concern to extend the concept of protection so as to enhance the integrity of applications programs. For example, in a large applications system, it might be desirable to require that all accesses to a common data base be done by calling library routines that have had their correctness verified, so as to minimize the harm that might be done by a buggy program. To make this work, it would be desirable to enforce this requirement on all programs in the system, perhaps through an operating system provided facility.

      (This is analogous to what has happened in programming languages. Early languages left the burden of checking data types on the programmer - e.g. a FORTRAN subroutine defined to take integer parameters could be erroneously called with real parameters and the compiler would not complain. But newer languages are strongly typed; the compiler enforces the typing rules.)

  3. Protection is not the same thing as security

    although they are related...

    1. Protection is concerned with providing facilities that allow the "owner" of an object to constrain accesses to it.

      The more specific the control that can be arranged, the better the protection system.

      Example: a minimal protection system may allow the owner of a file to specify either that he be the only user allowed to access the file or that all users be allowed to access the file - the choices are totally private or totally public. A better protection system would allow the owner to grant access to selected users other than himself without granting access to the world at large.

    2. Security is concerned with seeing that system protection mechanisms are not violated by malicious users.

    3. Level of protection and level of security are not necessarily related.

      A certain system might provide very little in terms of protection facilities - but the facilities it provides might be very secure. (E.g. a system with only a two-mode CPU might provide very secure protection for monitor memory; but no protection at all for files.) Conversely, a system with very rich protection facilities might be insecure due to weaknesses that penetrators might exploit.

  4. In discussing protection, we distinguish between mechanisms and policies:

    1. A mechanism is a facility the operating system provides for specifying how an object may be accessed.

    2. A policy is a decision made by the object's owner (or system management) as to how the mechanisms are to be applied in a specific case.

    3. Example: a particular system may allow the owner of a file to specify who may read the file. This is a mechanism. If the owner of a file decides that anyone whose last name begins with "A" may read a certain file, then he has chosen a policy which (hopefully) can be implemented using the mechanism provided.

    4. Ideally, mechanisms should be flexible, so as to allow as broad a range of policy choices as possible.

    5. One key principle to use in making policy decisions is the "need to know" principle. Insofar as possible, a process ought only to be granted levels of access to resources that it needs to have. (This may not apply, though, to access rights that pose no threat of misuse - e.g. the ability to read system help files or use library subroutines.)

    6. The remainder of our discussion will focus on mechanisms.

      Deciding policies is a concern of the system management, not the operating system designer.

Access Matrices

  1. An access matrix provides an abstract model of a protection mechanism

    We should note that an access matrix is, at least at the present time, more a model to be used in describing real protection systems than a practical protection system in its own right. Actually implementing an access matrix as a protection mechanism would usually not be practical. It is an abstract model - but a helpful one.

  2. Access matrix columns correspond to objects

    In an access matrix, the columns correspond to objects - entities to be protected. These might include:

    1. Regions of physical or virtual memory.

    2. Individual disk files or groups of files.

    3. Peripheral devices.

    4. System services that affect the overall state of the system (e.g. creating new processes, modifying quotas etc.)

    5. Various kinds of operations on a shared database.

    etc.

    An Access matrix could be very large...

    Note that in a system of any size, the number of columns will be very large - especially if individual files are each objects of protection (as well they might be.)

  3. Domains

    In an access matrix, the individual rows correspond to domains. These are not as easily defined. At any given time, a process may be said to be executing in a protection domain.

    1. Protection domains may be based on users.

      A process runs in the protection domain of the user who is executing it. This is the case on most multi-user timeshared systems. Typically, when a user logs in the process that is created for him is placed in the protection domain specified for that user by system management.

    2. Protection domains may be established on a per-process basis.

      As we shall see, for example, it is possible for a Unix process to run in a different protection domain than that of the user on whose behalf it is being executed. (In the Unix case, this is determined by the program that the process is executing.)

    3. Protection domains may be established on a per-procedure basis

      When a process calls a given procedure it changes to a protection domain associated with that procedure (and returns to the original protection domain when the procedure exits.)

      1. Some of this takes place with system services on VMS. A VMS system service call is a procedure call, but may execute with the CPU in a more privileged mode.

      2. One research area is the extension of this idea to allow any procedure to be associated with its own set of access rights, which may differ from those of its caller. This would facilitate the kind of protection of data structures we talked about earlier.

    4. Processes could even change domains during execution

      Finally, a system may allow for a given process to totally change domains during the course of execution. This can be modeled by adding a column for each domain to the matrix, where an entry in one domain column for another domain indicates that a process executing in the "row" domain may switch to the "column" domain.
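
      To make the model concrete, here is a minimal sketch (in C) of an access matrix with one column per ordinary object and one column per domain, so that the domain-switch right just described can be expressed. The domains, objects, and rights values are all invented for illustration; a real system would not store the matrix this way.

        #include <stdio.h>

        /* Rights are bits so one matrix entry can hold any combination.        */
        enum { R_READ = 1, R_WRITE = 2, R_EXEC = 4, R_SWITCH = 8 };

        #define NDOMAINS 3                       /* rows                        */
        #define NFILES   2                       /* ordinary object columns     */
        #define NCOLS    (NFILES + NDOMAINS)     /* plus one column per domain  */

        static const unsigned char matrix[NDOMAINS][NCOLS] = {
            /*  file0           file1           D0  D1        D2       */
            {   R_READ|R_WRITE, 0,              0,  R_SWITCH, 0        },  /* D0 */
            {   R_READ,         R_READ,         0,  0,        R_SWITCH },  /* D1 */
            {   0,              R_READ|R_WRITE, 0,  0,        0        },  /* D2 */
        };

        static int allowed(int domain, int column, unsigned char right)
        {
            return (matrix[domain][column] & right) != 0;
        }

        int main(void)
        {
            printf("D0 may write file0:  %d\n", allowed(0, 0, R_WRITE));
            printf("D0 may switch to D1: %d\n", allowed(0, NFILES + 1, R_SWITCH));
            printf("D2 may read file0:   %d\n", allowed(2, 0, R_READ));
            return 0;
        }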

  4. Temporary protection domain amplification

    As a related concept, a system may allow a process's protection domain to be temporarily amplified by privileges belonging to a privileged program it is executing. For example, on most systems, ordinary users are not allowed direct access to IO devices. However, when a user executes an IO system service call, the code that is executed on behalf of the process does do physical IO. In effect, the process's domain has temporarily been amplified by the system service's right of direct access to the IO device.

    1. Provision for rights amplification allows a protection system to be more flexible.

      Suppose, for the sake of illustration, that it is desired to grant certain users access to a certain file only on their birthday. (Perhaps a certain game?) Clearly, no commercial operating system will provide a protection mechanism that includes this as a standard option. However, such a policy might be implemented as follows:

      1. The access control matrix could deny access to the file to all users. Thus, no user could access the file directly.

      2. A program could be written that would be granted the right to access the file. A user running this program would have his rights temporarily amplified by the right to access the file.

      3. The program would include code to check to see if the user's birthday matches the current date, and would abort itself otherwise.

      A trusted user is required to set things up...

      Such a program (which may be a conventional stand-alone program or a utility procedure installed in a system library) assumes the status of a trusted user that mediates between the protected resource and the world at large.
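
      As a rough illustration of steps 1-3 above, the trusted program might look something like the following C sketch. It assumes a Unix-like call (getuid) for obtaining the caller's identity; the table of user ids and birthdays and the file name are made up, and the mechanism that actually amplifies the program's rights (such as the set-user-ID feature discussed later in these notes) is supplied by the system, not by this code.

        #include <stdio.h>
        #include <time.h>
        #include <sys/types.h>
        #include <unistd.h>                 /* getuid(); assumes a Unix-like system */

        /* Hypothetical table: which users may read the file, and on what date.    */
        struct entry { uid_t uid; int month; int day; };
        static const struct entry birthdays[] = { { 1001, 3, 14 }, { 1002, 11, 2 } };

        int main(void)
        {
            uid_t caller = getuid();        /* identity of the invoking user        */
            time_t now = time(NULL);
            struct tm *t = localtime(&now);
            int ok = 0;

            for (size_t i = 0; i < sizeof birthdays / sizeof birthdays[0]; i++)
                if (birthdays[i].uid == caller &&
                    birthdays[i].month == t->tm_mon + 1 &&
                    birthdays[i].day == t->tm_mday)
                    ok = 1;

            if (!ok) {                      /* step 3: abort unless it is the       */
                fprintf(stderr, "sorry - not your birthday\n");  /* caller's birthday */
                return 1;
            }

            /* Step 2: because the program itself holds the right to read the file, */
            /* this open succeeds even though the caller has no direct access.      */
            FILE *f = fopen("/games/birthday.dat", "r");     /* hypothetical path   */
            if (f == NULL) { perror("fopen"); return 1; }
            int c;
            while ((c = getc(f)) != EOF) putchar(c);
            fclose(f);
            return 0;
        }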

    2. Increased risk of security penetration

      Rights amplification, while increasing protection flexibility, also increases the risk of system security penetration. Therefore, code that is to be installed as "trusted code" must be guarded very carefully.

      1. Changing code of a privileged program

        Suppose a trusted program is stored in a file that is accessible to a user who is allowed to run the program to gain amplified capabilities. If the user can modify the file - thus changing code that is privileged - he can make it do other things than what it was intended to do. Thus, trusted code must be protected against alteration by anyone who does not possess the same rights that the protected code possesses.

      2. Dynamically changing parameter values (Bait and switch...)

        A user who can fool the code's access validation procedures can illicitly penetrate the resource it protects. For example, the user may find a way to invoke a trusted procedure with a parameter that specifies an access that he is allowed, but may then find a way to change the parameter that was passed between the time that the trusted code validates it and the time that the trusted code actually does the requested operation. The changed parameter may specify an illicit operation. This might be done, for example, by passing the parameter by reference, with the actual parameter being stored in memory that is shared by two processes both belonging to the same user. After one process requests the desired access, the second process modifies the parameter.
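
        The essence of the problem is that the trusted code reads the shared parameter twice - once to validate it and once to use it. The following C fragment sketches both the vulnerable pattern and the usual defence (copy the parameter into private memory first); the helper functions are hypothetical stand-ins for whatever checking and work the trusted code really does.

          /* Hypothetical stand-ins for the trusted code's real check and action.   */
          static int  op_allowed_for_caller(int op) { return op == 0; }
          static void perform(int op)               { (void)op; /* do the work */ }

          /* Vulnerable: the parameter lives in memory shared with the caller and   */
          /* is read twice, so it can change between the check and the use.         */
          int trusted_op(volatile int *requested)
          {
              if (!op_allowed_for_caller(*requested))     /* time of check          */
                  return -1;
              /* window: a cooperating process can change *requested right here     */
              perform(*requested);                        /* time of use            */
              return 0;
          }

          /* Safer: copy once into private storage, then check and use the copy.    */
          int trusted_op_safe(volatile int *requested)
          {
              int op = *requested;                        /* single read            */
              if (!op_allowed_for_caller(op))
                  return -1;
              perform(op);
              return 0;
          }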

      3. Need to ensure nothing privileged is left hanging around

        Of course, provision must be made to ensure that the amplified rights do not linger on after the trusted code terminates. For example, if the trusted code reads a memory location or file that ordinary users cannot read, it must be written in such a way as to ensure that no portion of what it read remains in the user's memory after termination. This can be tricky if the user aborts execution of the protected code mid-way through - before it has a chance to clean up.

      4. Mis-using amplified rights is a common approach taken by hackers

    3. Modifications to access matrices to support amplified rights

      We can model rights amplification in an access matrix as a domain which has the amplified rights, which domain users in other domains can enter. (I.e. the domain appears as a row with its amplified rights, and as a column for which certain domains possess the switch domain right.) However, rights amplification differs from an ordinary domain switch in that the process reverts to its original domain when the code with the amplified rights exits.

  5. Access matrices can be changed dynamically

    Note that the access matrix itself need not necessarily be static, either. We can model the ability to alter the matrix itself by means of various new kinds of right. Unfortunately, the terminology used for these is not uniform - you will sometimes find the same name used for different kinds of right.

    1. The owner right allows a domain to set the access rights of any domain for a given object - i.e. to change any entry in the column for the object.

    2. The grant right extends the owner rights to individuals other than the actual owner of an object. One who has the grant right for an object can grant or revoke access to that object to other domains the same way as the owner can. (Sometimes this is called the control right.)

    3. The copy right allows a domain that has access to a given object to grant that same access (or a subset of it) to other domains - i.e. to copy an entry within a column. (This is more limited than grant - one can only copy what one has oneself.)

    4. The control right (which only applies to objects that are themselves domains) allows a "row" domain to take rights away from a "column" domain. (Actually, the term control is sometimes used with another meaning. E.g. on VMS it means what we have called "grant".)

  6. In practice access matrices would be large and sparse

    The reason why access matrices are not implemented directly in practice is that, in general, the matrix is very large and contains many blank entries - i.e. it is a sparse matrix. For example, files are frequently accessible only to their owners - thus the access matrix column corresponding to such a file would contain an entry in only one row, with all other rows being blank. However, the effect of an access matrix can be achieved in one of several ways:

    1. Access control lists

      We can associate a list of domains allowed to access an object with the object. Each access control list, then, contains the non-blank entries from a column of the access matrix.

      1. In many cases, this will work well - the access control list will be short (perhaps only the owner) and so will consume little space and search time.

      2. But for a resource that is made available to many or all domains, the access list could become quite long, and searching the list to see if a given domain is allowed access could be time-consuming. This can be minimized by associating with each "public" resource a default access that applies to all domains plus individual entries for domains allowed more than the minimal public access.
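
      A minimal sketch in C of the idea in point 2 - an access control list consisting of a default "public" access plus explicit entries for domains that get more - might look like this (domain numbers and rights bits are invented):

        /* One access-control-list entry: extra rights for one specific domain.  */
        struct ace { int domain; unsigned rights; };

        /* The list kept with each object: a default plus the explicit entries.  */
        struct acl {
            unsigned          default_rights;   /* applies to every domain       */
            const struct ace *entries;
            int               nentries;
        };

        unsigned rights_for(const struct acl *a, int domain)
        {
            unsigned r = a->default_rights;
            for (int i = 0; i < a->nentries; i++)
                if (a->entries[i].domain == domain)
                    r |= a->entries[i].rights;  /* explicit entry adds rights    */
            return r;
        }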

    2. Capability lists

      With each domain we can associate a list of the objects it is allowed to access. Each capability list, then, contains the non-blank entries from one row of the matrix.

      1. In general, this is less workable for permanent storage of access information, since a given user domain may have potential access to hundreds of objects.

      2. Its major usefulness is in a hybrid system, in which capabilities are used to record the rights held by a particular process, after it has acquired those rights by some other mechanism. (We will discuss this shortly.)

      3. A typical implementation for a capability is as a "protected pointer" - i.e. it is a pointer to the object whose access it allows, and is itself stored in such a way that the user cannot alter its value (thus making it point to some other object instead.)

        • One way to manage this is to store capabilities in a table in a region of memory that belongs to the operating system, with the process that wants to use a given capability specifying it by means of an index into that table.

        • As the text notes, some capability-based systems have been built around special hardware that treats capabilities as a special "read-only" data type. A process that owns a capability may copy it and pass it as a parameter, but may not alter it.

      4. One significant problem with capabilities is revocation: to take away a capability that has been granted, it is necessary to locate and destroy all copies. (The book discusses a number of ways this might be accomplished.)
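
      The "index into a kernel table" idea in point 3 can be sketched as follows in C. The sizes, types, and the rights encoding are invented; the essential point is that the table lives in memory the process cannot write, so the process can name a capability only by its index.

        #include <stddef.h>

        #define MAXCAPS 16

        struct capability {
            void    *object;      /* what the capability designates              */
            unsigned rights;      /* which operations it permits                 */
        };

        /* One table per process, kept in operating-system memory.               */
        static struct capability cap_table[MAXCAPS];

        /* The process supplies only an index; the kernel checks it and the      */
        /* requested rights before using the protected pointer on its behalf.    */
        struct capability *cap_lookup(int index, unsigned needed)
        {
            if (index < 0 || index >= MAXCAPS)
                return NULL;
            if ((cap_table[index].rights & needed) != needed)
                return NULL;
            return &cap_table[index];
        }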

    3. Locks and keys

      With each resource, we associate a lock, and with each domain, we associate one or more keys. A domain is allowed access to an object if and only if one of its keys fits the lock. (Note that this approach only simplifies matters if a given key fits the locks of many objects - else if a process needs one key for each object it is allowed to access then its collection of keys is in fact as cumbersome as a capability list.)

      Note: this can be extended by also allowing a given object to have multiple locks, so that an access is allowed if any of a domain's keys fit any of the object's locks.
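
      A sketch of the extended check in C, with keys and locks represented as arbitrary bit patterns (access is allowed if any key matches any lock):

        /* The values themselves are meaningless patterns; only matching matters. */
        int key_fits(const unsigned *keys, int nkeys,
                     const unsigned *locks, int nlocks)
        {
            for (int k = 0; k < nkeys; k++)
                for (int l = 0; l < nlocks; l++)
                    if (keys[k] == locks[l])
                        return 1;
            return 0;
        }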

    4. Hybrid approach

      When a process first requests access to a resource, its request may be checked by looking at an access list for the resource. If the access is granted, the process may be given a capability for that resource which it can present for further accesses to avoid having to re-search the access list.

      Example: the "open" operation for files works like this on many systems. When a process attempts to open a file, the request is validated in terms of the protection specified for the file. If the request is valid, an entry is made in a system table. Subsequent accesses to the file (read and/or write) are allowed on the basis of the table entry, which functions as a capability. (In Unix and some other systems, the return value of an open call is an index into a table of open file descriptors maintained for each process by the kernel. This index is passed as a parameter to subsequent operations on the file, and the kernel uses it to access a pointer to the internal data structure for actually manipulating the file.)
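
      The Unix case can be shown directly in C: the protection check against the file happens once, in open, and later reads present only the returned descriptor, which acts as the capability. (The path name here is just an example.)

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            /* The access-list style check happens here, at open time.            */
            int fd = open("/home/alice/notes.txt", O_RDONLY);   /* example path    */
            if (fd < 0) { perror("open"); return 1; }

            /* Subsequent accesses are validated only against the descriptor -    */
            /* an index into the kernel's per-process open file table.            */
            char buf[128];
            ssize_t n = read(fd, buf, sizeof buf);
            printf("read %ld bytes\n", (long)n);
            close(fd);
            return 0;
        }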

    5. However implemented, an access matrix can be made less cumbersome by grouping. E.g. it may be possible to group domains in such a way as to allow a single access list entry to specify the access allowed to all the members of a given group.

An example: the protection mechanisms of VMS described in terms of the access matrix model.

  1. Logging in

    Under VMS, a process is ordinarily created by a new user logging on the system. During the login process, a user identifies himself by giving a username and password, which is checked against a user authorization file: SYS$SYSTEM:SYSUAF.DAT. (Note that an individual user may have several entries in the UAF under different usernames, and thus may appear as several different users to the system depending on what he is intending to do during a given session. Likewise, several individuals may share a common username and password and so appear to the system to be a single user.) The UAF entry for each username includes information that establishes the protection domain in which that user's processes will run. The domain is established by two pieces of information in the UAF.

    1. A user identification code (UIC).

      This is a two-part number that is specified for each username. The first part identifies the group to which he belongs, and the second part identifies him as an individual within the group.

      Example: At Gordon, students in course CS121 will be given UICs of the form [122,n] where n is an octal number, typically 1, 2, 3, ... up to the number of students in the course. General student users have UIC's of the form [104,n]. Faculty have UIC's of the form [50,n] or (for CS faculty) [20,n]. System management personnel have UIC's of the form [1,n].

      Note: It may be the case that several individual usernames share the same UIC. This would be true, for example, if several individuals were working together on a project and needed equal access to all of the files belonging to it.

      Note: UIC's of the form [1,*] are system UIC's. Processes running under system UIC have special access to resources. Among other things, they may gain access to any resource regardless of UIC.

    2. A list of privileges.

      Most users are assigned very minimal privileges - commonly just TMPMBX. However, there are some 20+ different possible privileges which may be necessary for certain users - e.g.

      1. Privileges allowing various degrees of special access to IO devices: the ability to MOUNT a tape or disk; the ability to allocate a spooled device; the ability to do physical IO to a device etc.

      2. Privileges allowing a process to affect the execution of other processes on the system (stopping them etc.)

      3. Privileges allowing a process to create shared regions in main memory.

      etc.

  2. Most system resources, notably disk files, are protected by UIC.

    With each resource is associated an owner UIC - normally the UIC of the process that created it. Access to the resource is governed by a protection code (that is actually a rudimentary form of access control list) with four entries:

    1. Owner access

      An entry describing the access allowed to any process running under the resource's owner UIC. Possible accesses are any combination of Read, Write, Execute, and Delete.

    2. Group access

      An entry describing the access allowed to any process in the same group as the resource's owner UIC. (Same possibilities as above.)

    3. World access

      An entry describing the access allowed to any process on the system (world access) - same possibilities.

    4. System access

      An entry describing the access allowed to any process running under system UIC - normally [1,*]. Normally, this entry is RWED, to allow system management full access to the resource. (Setting it to any other value does no good, since a system user can always alter the protection code of any file anyway!)

    5. Protection entries are additive

      Note that a given process's access to a given resource may be described by more than one protection code entry. In this case, the allowed access is the sum of all allowed accesses - e.g.

      Suppose a certain file with owner UIC [20,20] has the following protection code: S:RWED, O:D, G:W, W:RE

      1. a process with UIC [20,20] has access under owner, group, and world entries. Its access is thus D + W + RE = RWED (everything).

      2. a process with UIC [20,30] has access both under the group entry and the world entry. Its access is thus W + RE = RWE - but not D.

      3. a process with UIC [100,20] has access only under world - therefore RE but not WD.

      4. a system process has access RWED.

      Note: for clarity, the protection code should be specified by the equivalent list S:RWED, O:RWED, G:RWE, W:RE.

    6. Access Control Lists appeared in VMS 4.0

      Note that this is a very limited form of access control list. While an owner can list access for users in the various categories, there is no way to grant access to a specific user in a category without granting the same access to all users in that category. Therefore, beginning with version 4.0, VMS has also included the option of adding a full-blown access control list to an object, in addition to the protection codes for the four categories of user.

      1. The access rights of a given user are the sum of those granted by the access control list and those granted by the protection code; thus to give access selectively to only some users in a group, one denies access to the group using the protection code and then grants it individually using the ACL.

      2. An access control list entry (ACE in VMS parlance) has the general form (IDENTIFIER=user, ACCESS=access).

        • A user may be specified by UIC (optionally with wildcards included) or by the use of a special "rights" identifier that is essentially a form of capability granted by system management and recorded in the user authorization file.

        • The access rights that may be granted are the usual four plus CONTROL, which gives the holder the same rights to grant and deny access as the file's owner has.

  3. Some system functions are controlled by privileges

    Execution of certain system services is controlled by privilege - e.g. the privilege TMPMBX is needed to execute the system service that creates a mailbox, MOUNT is needed to execute the system service that mounts a new volume on a tape drive or disk etc. VMS privileges are capabilities to invoke certain system functions.

  4. Situations where processes can change protection domains

    Ordinarily, a process remains in the protection domain established for it when it was created. But some processes have privileges (capabilities) that, in effect, allow them to change domains.

    1. One privilege - SETPRV - allows its holder to claim any privilege he needs (though he must do so explicitly before he attempts the operation for which the privilege is required.) Apart from this, the list of privileges granted a process when it is created may not be increased.

    2. One privilege - CMKRNL - allows a user to execute the $CMKRNL system service. This takes as a parameter the address of a procedure which is to be executed with the CPU running in kernel mode.

      1. The chief use of this privilege is to alter certain values stored in system memory - i.e. various system tables.

      2. One use of this system service is to allow a process to change its UIC from that established at login. The SET UIC DCL command does this by attempting to execute (in kernel mode) a procedure that modifies the location in the process's PCB where the UIC is stored. This will fail if the process lacks CMKRNL privilege.

      3. Another use of this system service is to add images to the list of installed images that can execute with special privilege. We will discuss these shortly.

    3. One privilege - BYPASS - allows a process to bypass the normal UIC-based protection.

  5. Images installed with privilege

    The privileges belonging to a given process are temporarily amplified when it runs a program image installed with extra privileges by the system management. For example, the MAIL utility is installed with SYSPRV - which allows it to access files as specified under the system access entry, regardless of the process's UIC. This allows a user who is sending mail to write into another user's mail file to which he otherwise would have no access.

    1. Installing an image with amplified privilege requires that the one requesting the installation have certain privileges normally granted only to system management personnel.

    2. Privileged images are normally installed as part of system startup, and remain installed (unless explicitly removed) until system shutdown. The disk file containing an installed image is protected against modification while the image remains installed, so a user cannot steal its privileges by modifying its code.

  6. As an aid to system managers, the VMS documentation divides privileges into four categories:

    1. Normal privileges: a user granted one or more privileges from this category cannot do significant harm to other users or system operation.

    2. Interfere privileges: a user granted one or more privileges from this category can interfere with the execution of processes belonging to other users. For example, GROUP privilege allows a user to examine, suspend, or terminate any process running under a UIC in the same group as his own. Clearly, such a privilege should only be granted to a user who needs this capability and can be trusted to use it properly.

    3. Devour privileges: a user granted one or more privileges from this category can devour system resources. For example, the privilege EXQUOTA allows a user to exceed the quota for file storage space granted to him by the management. Again, such a privilege should only be granted in the face of demonstrated need and responsibility.

    4. All privileges: a user granted any one privilege in this category can potentially acquire any privilege. For example, a user with system UIC or SYSPRV thereby has access to all files, including the user authorization file, and could by this means give himself any additional privilege he wants. A user with CMKRNL has the ability to change his UIC and thus could assume system UIC and proceed as above etc. Clearly, any privilege from this category should only be granted to a user who is part of system management or can be trusted in the same way as a system manager.

The Unix approach to protection

  1. Simple but elegant...

    Unix's protection mechanisms are much simpler than those of VMS, and do not allow nearly the same degree of fine-grained control. However, they do include one very elegant feature - so elegant that it is patented. (One of the few cases of a patent being granted for something that is strictly software.)

  2. User and Group

    Unix uses a two-part user identification like VMS does, with the first part being the group number and the second identifying the individual user. (On the SGI systems, ordinary users are group 20 and have individual user identifiers 1110, 1111, 1112 ...) The group and user IDs associated with a given login name are found (as on VMS) in a user authorization file (/etc/passwd on Unix.) There are two important differences from VMS, however.

    1. User numbers are unique

      The individual user numbers are unique across the system, not just within the group. Thus, the two-part identifiers 99,100 and 100,100 would both refer to the same individual - user number 100.

    2. User can belong to several groups

      An individual can be a member of multiple groups. One group is the primary group for the individual, and is specified in the password file. Additional groups may be listed in /etc/group.

      1. When an individual creates a file, its group ownership is set to his primary group.

      2. However, an individual can access a file under group permissions if the group that owns the file is any of the groups he is a member of.
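
      On a typical Unix system a process can examine its own identity and group memberships with standard calls; a short C illustration (the buffer size is chosen arbitrarily):

        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            /* The user id and primary group id, as set from the password file.   */
            printf("uid %d, primary gid %d\n", (int)getuid(), (int)getgid());

            /* Supplementary groups (from /etc/group); any of these also counts   */
            /* when the system checks a file's group permission bits.             */
            gid_t groups[64];
            int n = getgroups(64, groups);
            for (int i = 0; i < n; i++)
                printf("also a member of group %d\n", (int)groups[i]);
            return 0;
        }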

  3. Unlike VMS, only one set of permissions is checked by the system

    As we noted in discussing file systems, access permission for each file is established in terms of user (u), group (g) and others (o), with read, write, and execute access granted or denied independently for each category. Unlike VMS, however, only one permission entry is checked for any given user (a sketch of this rule appears after the list below).

    1. For the file's owner, only the user entry is checked. Thus, it is possible for a file's owner to deny himself access to a file while granting it to his group and/or the world. (But note that the file's owner can always alter the protection on it, so this is not permanent.)

    2. For a non-owner that is a member of the same group as the owner, only the g entry is checked. Thus, a file's owner could deny access to his own group while granting it to the world.

    3. There is no provision for an access control list beyond this.
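
    The rule can be summarized by the following C sketch. It is a simplification: the real kernel also deals with the superuser and with effective rather than real ids, both discussed below.

      /* mode holds the usual nine permission bits (rwxrwxrwx).  Exactly one     */
      /* group of three bits is consulted, chosen by the first matching case.    */
      int access_allowed(unsigned mode, unsigned file_uid,
                         unsigned uid, int in_files_group, unsigned want)
      {
          unsigned bits;
          if (uid == file_uid)        bits = (mode >> 6) & 7;   /* owner bits    */
          else if (in_files_group)    bits = (mode >> 3) & 7;   /* group bits    */
          else                        bits = mode & 7;          /* other bits    */
          return (bits & want) == want;
      }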

  4. The superuser: Root

    One user is above the protection system - the superuser (user ID 0 - login name root).

    1. The superuser can access any file, regardless of protection, and may also alter the protection and/or ownership of any file.

    2. Some very important system files are owned by the superuser and are alterable only by him (write access granted only to owner.) One of these is the password file (/etc/passwd) that lists authorized users.

    3. Certain system services can only be executed by the superuser (e.g. the system service to shut down the system). Others can be used by any user, but offer more functionality to the superuser.

    4. To somewhat reduce the risk of unauthorized users gaining superuser status, two special limitations apply to logging into username root.

      1. Secure terminals

        A system terminal configuration file specifies, for each terminal, whether or not it is a "secure" terminal. Root login attempts from a non-secure terminal are always denied.

      2. The su program

        Since this restriction could be too severe, it is also possible for an authorized superuser to log in under his own name and then use the su (switch user) command to change to superuser. (This requires the one using it to know the root password.) On some systems, mainly those based on BSD Unix, only users who are members of the special group "wheel" are allowed to use the su command to become superuser. All attempts at using the su program are recorded by the system.

    5. Hard to apply the "need to know" principle.

      Since special privilege on Unix is essentially "all or nothing" it is hard to apply the "need to know" principle. This is one reason why Unix has a reputation for being weak on security - though some newer versions being developed are working on tightening this up.

  5. Rights amplification: SetUID and SetGID

    Some relief from this "all or nothing" problem is provided by the rights amplification mechanism.

    1. Additional protection/permission codes

      In addition to the nine protection code bits we mentioned earlier (r, w, and x access for each of three categories of user), there are two more bits that the owner of a file may set in the protection code of an executable file.

    2. setuid (4000 octal)

      One of these is the "set user ID" bit. If this bit is set on an executable program, then whenever the program is run (by someone otherwise authorized to do so) the effective user ID of the process running it is changed to that of the file's owner. Thus, the program is able to access files according to the rights of the owner of the program, rather than the actual user. (The user ID reverts to its actual value when the program terminates.)

      Note: In a long directory listing (ls -l), this bit shows up as an "s" where the "x" for owner execute permission would appear.

    3. setgid (2000 octal)

      Another bit is the "set group ID" bit. It has a similar effect, but sets the group ID of the process to match the group ownership of the file. (This shows up in a long directory listing as an "s" in the slot where group execute permission would occur.)
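
      A process can observe the effect of either bit by comparing its real and effective ids. The short C program below, if compiled and then given the set-user-ID bit by its owner (for example with chmod u+s), prints a real uid that is the invoking user's and an effective uid that is the owner's; without the bit the two are the same.

        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            printf("real uid:      %d\n", (int)getuid());    /* who ran the program */
            printf("effective uid: %d\n", (int)geteuid());   /* owner, if setuid    */
            printf("real gid:      %d\n", (int)getgid());
            printf("effective gid: %d\n", (int)getegid());   /* group, if setgid    */
            return 0;
        }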

    4. Example:

      1. The utility program ps allows anyone to look at a synopsis of all the processes currently running. The ps program does this by reading through a process table maintained in kernel memory.

      2. Of course, ordinary users are not permitted to access kernel memory directly, so the ps program needs to run with amplified access rights. This is handled as follows:

        • The file system contains an entry for a pseudo-device called /dev/kmem. The kernel translates read/write requests on this "file" into accesses to addresses in kernel memory.

        • The /dev/kmem "file" is owned by the user root and the group sys, and allows read/write access to owner, read access to group, and no access to others.

        • The ps program is owned by the group sys, and has the set group ID bit turned on in its permissions. Thus, when any user runs it, he is temporarily made a member of the sys group and allowed to access kernel memory through the program.

        • Question: why create a special group for this? Why not simply have the program owned by root and use set user ID? (ASK CLASS)

    5. Setuid and setgid programs can be used by normal users

      A distinctive feature of this mechanism is that it can be used by ordinary users, not just by system management. (This contrasts with the provisions systems like VMS have for installing trusted images.)

      Example: Suppose someone has a file of data that he wants to let only two other people read.

      1. Rather than trying to talk the system manager into creating a special group for these two people, he could use the set user ID mechanism.

      2. He would have to write out a program that checks to see if it is being run by one of these two people, and then simply types out the contents of the file to the screen. This program would be set up with the set user ID bit.

      3. If the file was protected to allow only its owner to read it, it could still be accessed by anyone running this program. But the program itself would limit access to only the two chosen individuals.
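
      A sketch of such a program in C (the uids and the file name are invented; the owner would compile it, turn on the set-user-ID bit with chmod u+s, and leave the data file readable by no one but himself):

        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        /* The two people (by uid) whom the owner wishes to allow to read the data. */
        static const uid_t allowed[] = { 1203, 1417 };

        int main(void)
        {
            uid_t caller = getuid();   /* real uid: the person actually running it   */
            int ok = 0;
            for (size_t i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
                if (allowed[i] == caller)
                    ok = 1;
            if (!ok) { fprintf(stderr, "access denied\n"); return 1; }

            /* With the set-user-ID bit on, the effective uid here is the file      */
            /* owner's, so this open succeeds although the file is owner-read-only. */
            FILE *f = fopen("/home/owner/private-data.txt", "r");
            if (f == NULL) { perror("fopen"); return 1; }
            int c;
            while ((c = getc(f)) != EOF) putchar(c);
            fclose(f);
            return 0;
        }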

    6. setuid / setgid security problems

      This mechanism, though elegant, is also the source of a number of the security problems Unix has experienced. (The Internet worm made illicit use of this feature by taking advantage of some debugging code that was left installed in running systems - more on this in a later lecture.) To reduce the threat somewhat, the kernel automatically clears the set user ID and set group ID bits whenever a file is written to or has its ownership changed - except when these operations are done by the superuser.

    7. This feature of Unix was judged innovative enough to be protected by a patent.

Various other approaches to protection

  1. Use of groups of users rather than ACLs is common

    We have noted that implementing an access matrix per se is not usually practical. Likewise, access control lists and/or capability lists can become unmanageably large. Thus, most practical systems rely on less flexible mechanisms in which users are grouped into broad categories, which effectively reduces the number of rows in the access matrix to a manageable number. For example, the grouping of users into categories like owner, group, and world vis-a-vis a certain resource (as is done in VMS with UICs) is a very common approach that allows for short access control lists to be associated with each protected resource (one entry per category).

    1. Many operating systems use such a mechanism.

    2. rings of protection

      This is a special case of a more general concept known as rings of protection. For example, the UIC mechanism can be pictured this way:

      	
                    _________
                   /  WORLD  \
                  /  _______  \
                 /  / GROUP \  \
                /  /  _____  \  \
               /  /  /OWNER\  \  \
               \  \  \_____/  /  /
                \  \_________/  /
                 \_____________/
      

      An inner ring is a subset of an outer ring, and possesses all of the privileges of the outer ring plus privileges uniquely its own.

    3. Some systems use a ring-like approach to capabilities. For example, prior to the most recent release (version 9.0), RSTS/E had two categories of users vis-a-vis system services: ordinary users and privileged users. In the documentation for the system services, one finds that certain services are either restricted to privileged users only (such as the service that creates new user accounts) or have increased options when executed by privileged users (e.g. the system service to change the characteristics of a user terminal can be used by any process to change the characteristics of its own terminal only; but a privileged user can invoke it to change the characteristics of any terminal on the system.) This can be pictured as a ring structure like this:

      	
                  _______________
                 /   Any User    \
                / _______________ \
               / /Privileged User\ \
               \ \_______________/ /
                \_________________/ 
      

      (RSTS 9.0 and later has VMS-like privileges.)

    4. MULTICS used a similar structure, but with 8 rings of decreasing privilege (ring 0 = maximum privilege). Text - page 602.

    5. Note that ring-like mechanisms are much less flexible than a full access matrix. However, a UIC-type approach is more flexible than a level of privilege approach, since the rings are defined vis-a-vis each individual resource. (A given user is in the owner ring for files he owns and in either the group or world ring for other files.) Many systems combine both UIC and capability rings - e.g. Unix, RSTS/E, and in a certain sense VMS.

  2. Hydra and Cambridge CAP

    The text mentions two systems that are built around explicit capabilities: Hydra and the Cambridge CAP system. The IBM System/38 also uses this approach, with hardware support for capabilities.

  3. Compile time access control

    Another research area is the building of protection into programming languages in such a way that rights of access to resources are validated by the compiler at compile time, rather than by the operating system at run time. Of course, on such a system only code compiled by a trusted compiler could be allowed to be executed; in particular, all coding in assembly language or run-time patching of compiled programs must be forbidden.


$Id: protection.html,v 1.3 2000/04/11 23:10:20 senning Exp $

These notes were written by R. Bjork of Gordon College. They were edited, revised and converted to HTML by J. Senning of Gordon College in April 1998.