`
`Operating System
`Concepts THIRD EDITION
`
Abraham Silberschatz
`
`University of Texas at Austin
`
`James L. Peterson
`
`IBM Corporation
`
`Peter B. Galvin
`
`Brown University
`
`Addison-Wesley Publishing Company
`
Reading, Massachusetts • Menlo Park, California • New York
Don Mills, Ontario • Wokingham, England • Amsterdam • Bonn
Sydney • Singapore • Tokyo • Madrid • San Juan • Milan • Paris
`
`
`
`
`This book is in the Addison-Wesley Series in Computer Science.
`Consulting Editor: Michael Harrison
`
`VMS is a trademark of Digital Equipment Corporation. UNIX is a registered trademark of
`AT&T Bell Laboratories. OS/2 is a registered trademark of International Business Machines
Corporation. MS-DOS is a registered trademark of Microsoft Corporation. NFS is a
`registered trademark of Sun Microsystems, Inc. Macintosh is a registered trademark licensed
`to Apple Computer, Inc.
`Many of the designations used by manufacturers and sellers to distinguish their products are
`claimed as trademarks. Where those designations appear in this book, and Addison-Wesley
`was aware of a trademark claim, the designations have been printed in initial caps or all caps.
`
`The programs and applications presented in this book have been included for their
`instructional value. They have been tested with care, but are not guaranteed for any
`particular purpose. The publisher does not offer any warranties or representations, nor does
`it accept any liabilities with respect to the programs or applications.
`
Fig. 3.8 from Iacobucci, OS/2 Programmer's Guide, © 1988, McGraw-Hill, Inc., New York, New
York. Fig. 1.7, p. 20. Reprinted with permission of the publisher.
Fig. 8.15 reprinted with permission from IBM Systems Journal, Vol. 10, No. 3, © 1971
`International Business Machines Corporation.
Fig. 15.1 from Leffler/McKusick/Karels/Quarterman, The Design and Implementation of the
4.3BSD UNIX Operating System, © 1989, Addison-Wesley Publishing Co., Inc., Reading,
Massachusetts. Fig. 1.1. Reprinted with permission of the publisher.
`Figs. 16.1, 16.5, and 16.6 reproduced with permission from Open Software Foundation, Inc.
`Excerpted from Mach Lecture Series, OSF, October 1989, Cambridge, Massachusetts.
`Figs. 16.1 and 16.6 presented by R. Rashid of Carnegie-Mellon University and Fig. 16.5
`presented by D. Julin of Carnegie-Mellon University.
Fig. 16.4 from Accetta/Baron/Bolosky/Golub/Rashid/Tevanian/Young, "Mach: a new
`kernel foundation for UNIX development," Proceedings of Summer USENIX, June 1986,
`Atlanta, Georgia. Reprinted with permission of the authors.
`
`Library of Congress Cataloging-in-Publication Data
`Silberschatz, Abraham.
`Operating system concepts / Abraham Silberschatz, James L.
Peterson, and Peter Galvin. -- 3rd ed.
p. cm.
`Includes bibliographical references and index.
`ISBN 0-201-51379-X
1. Operating systems (Computers) I. Peterson, James Lyle.
II. Galvin, Peter. III. Title.
QA76.76.O63S5583 1991
005.4'3--dc20
`
Reprinted with corrections September, 1991
`
`90-44561
`CIP
`
Copyright © 1991 by Addison-Wesley Publishing Company, Inc.
`All rights reserved. No part of this publication may be reproduced, stored in a retrieval
`system, or transmitted, in any form, or by any means, electronic, mechanical, photocopying,
`recording, or otherwise, without the prior written permission of the publisher. Printed in the
`United States of America.
`
4 5 6 7 8 9 10 DO 9594939291
`
`
`To my parents, Wira and Mietek,
`my wife, Haya,
and my children, Lemor, Sivan, and Aaron.
`
`Avi Silberschatz
`
`To my wife, Jeanne,
`and my children, Jennifer and Kathryn.
`
`Jim Peterson
`
`To Carla.
`
`Peter Galvin
`
`
`
`PREFACE
`
`Operating systems are an essential part of a computer system.
`Similarly, a course on operating systems is an essential part of a
`computer-science education. This book is intended as a text for an
`introductory course in operating systems at the junior or senior under-
`graduate level, or first-year-graduate level. It provides a clear description
`of the concepts that underlie operating systems.
`This book does not concentrate on any particular operating system
`or hardware. Instead, it discusses fundamental concepts that are
`applicable to a variety of systems.
`
`Content of this Book
`As prerequisites, we assume the reader is familiar with general
`assembly-language programming and computer organization. The text
`is organized in six major parts:
`
• Overview (Chapters 1 to 3). These chapters explain what operating
`systems are, what they do, and how they are designed and constructed.
`These chapters explain how the concept of an operating system has
`developed, what the common features of an operating system are,
`what an operating system does for the user, and what it does for the
`computer-system operator. The presentation is motivational,
`historical, and explanatory in nature. We have avoided a discussion
`of how things are done internally in these chapters. Therefore, they
`are suitable for individuals or lower-level classes who want to learn
`
`what an operating system is, without getting into the details of the
`internal algorithms.
• Process management (Chapters 4 to 6). The process concept and
`concurrency are at the very heart of modern operating systems. A
`process is the unit of work in a system. Such a system consists of a
`collection of concurrently executing processes, some of which are
`operating-system processes (those that execute system code), and
`the rest of which are user processes (those that execute user code).
`These chapters cover various methods for process management, CPU
`scheduling, process synchronization and communication, and
`deadlock handling.
• Storage management (Chapters 7 to 9). These chapters deal with
the classic internal algorithms and structures of storage
management. They provide a firm practical understanding of the
algorithms used: their properties, advantages, and disadvantages.
`The algorithms are presented in a natural order, so that new, more
`complicated systems can be built on the understanding of simpler
`systems.
• Files and protection (Chapters 10 and 11). A file is a collection of
related information defined by its creator. Files are mapped, by the
operating system, onto physical devices. Protection mechanisms
provide controlled access by limiting the types of file access that can
be made by the various users. Protection must also be available to
ensure that, besides files, memory segments, the CPU, and other
resources can be operated on by only those processes that have
gained proper authorization from the operating system.
• Distributed systems (Chapters 12 to 14). A distributed system is a
collection of processors that do not share memory or a clock. Such a
system provides the user with access to the various resources the
system maintains. Access to a shared resource allows computation
speedup and improved data availability and reliability. A
distributed system must provide various mechanisms for process
synchronization and communication, for dealing with the deadlock
problem, and for handling a variety of failures that are not
encountered in a centralized system.
• Case studies (Chapters 15 to 17). These chapters illustrate how the
many concepts described can be put together in a real system. Two
UNIX-based operating systems are covered in detail: Berkeley's
4.3BSD and Mach. These operating systems were chosen in part
because UNIX at one time was almost small enough to understand
and yet is not a toy operating system. Most of its internal
`
`Preface (cid:9)
`
`vii
`
`algorithms were selected for
`simplicity, not for speed or
`sophistication. UNIX is readily available to computer-science
`departments, so many students have access to it. Mach provides an
`opportunity for us to study a modern operating system that
provides compatibility with 4.3BSD but has a drastically different
`design and implementation. Chapter 17 briefly describes some of
`the most influential operating systems.
`
`Organization
`Operating systems first began to appear in the late 1950s, and for 20
`years underwent major changes in concepts and technology. As a result,
`the first-generation operating-system textbooks that appeared during
`this period (such as Brinch Hansen [1973a], Madnick and Donovan
`[1974], Shaw [1974], Tsichritzis and Bernstein [1974], and Habermann
`[1976]) tried to explain a subject that was changing even as the books
`were being written.
`Over time, however, operating-system theory and practice appeared
`to mature and stabilize. The fundamental operating-system concepts
`seemed well defined and well understood. The basic approaches to
`process management, CPU scheduling, process coordination, memory
`management, file systems, and so on, appeared unlikely to change. At
`this point, a second generation of operating-systems texts appeared
`(such as Deitel [1983] and the first edition of this book, Peterson and
`Silberschatz [19831).
`For the first edition, our primary goal was to present well-
`understood, agreed-on, classic operating-system material. The basic
`concepts were organized and presented carefully; the material flowed
`naturally from basic principles to more sophisticated ones. The
`bibliographic notes contained pointers to the research papers in which
`results were first presented.
`The fundamental concepts and algorithms covered in the first
`edition were often based on those used in existing commercial or
`experimental operating systems. Our aim was to present these concepts
`and algorithms in a general setting that was not tied to one particular
`operating system. While the general presentation in the first edition was
well received, there was a strong demand for a more complete practical
example, and so in the second edition (Peterson and Silberschatz [1985])
we added a chapter on the UNIX operating system.
`The first and second editions were organized to stress the basic
`practical aspects of operating systems (CPU scheduling and memory
`management) before presenting the unifying theoretical concept of the
`process model for operating systems. An alternate edition, Silberschatz
`
`
`
`
CHAPTER 1

Introduction
`
`An operating system is a program that acts as an intermediary between a
`user of a computer and the computer hardware. The purpose of an
`operating system is to provide an environment in which a user can
`execute programs. The primary goal of an operating system is thus to
`make the computer system convenient to use. A secondary goal is to use
`the computer hardware in an efficient manner.
To understand what operating systems are, we must first
understand how they have developed. In this chapter, we trace the
`development of operating systems from the first hands-on systems to
`current multiprogrammed and time-shared systems. As we move
`through the various stages, we see how the components of operating
`systems evolved as natural solutions to problems in early computer
`systems. Understanding the reasons behind the development of
operating systems gives an appreciation for what tasks an operating
`system does and how it does them.
`
`1.1 What is an Operating System?
`
An operating system is an important part of almost every computer
system. A computer system can be roughly divided into four
components: the hardware, the operating system, the applications programs,
and the users (Figure 1.1).
The hardware (the central processing unit (CPU), memory, and
input/output (I/O) devices) provides the basic computing resources. The
`
`
`
applications programs (compilers, database systems, video games, and
business programs) define the ways in which these resources are used
to solve the computing problems of the users. There may be many
different users (people, machines, other computers) trying to solve
different problems. Accordingly, there may be many different
applications programs. The operating system controls and coordinates
the use of the hardware among the various application programs for the
various users.

Figure 1.1 Abstract view of the components of a computer system.

An operating system is similar to a government. The components of a
computer system are its hardware, software, and data. The operating
system provides the means for the proper use of these resources in the
operation of the computer system. Like a government, the operating
system performs no useful function by itself. It simply provides an
environment within which other programs can do useful work.
We can view an operating system as a resource allocator. A computer
system has many resources (hardware and software) that may be
required to solve a problem: CPU time, memory space, file storage space,
I/O devices, and so on. The operating system acts as the manager of
these resources and allocates them to specific programs and users as
necessary for the latter's tasks. Since there may be many, possibly
conflicting, requests for resources, the operating system must decide
which requests are allocated resources to operate the computer system
efficiently and fairly.
A slightly different view of an operating system focuses on the need
to control the various I/O devices and user programs. An operating
system is a control program. A control program controls the execution of
user programs to prevent errors and improper use of the computer. It is
especially concerned with the operation and control of I/O devices.
In general, however, there is no completely adequate definition of
an operating system. Operating systems exist because they are a
reasonable way to solve the problem of creating a usable computing
system. The fundamental goal of computer systems is to execute user
programs and to make solving user problems easier. Toward this goal,
computer hardware is constructed. Since bare hardware alone is not
very easy to use, applications programs are developed. These various
programs require certain common operations, such as controlling the I/O
devices. The common functions of controlling and allocating resources
are then brought together into one piece of software: the operating
system.
There is also no universally accepted definition of what is part of the
operating system and what is not. A simple viewpoint is that
everything a vendor ships when you order "the operating system"
should be considered. The memory requirements and features included,
however, vary greatly between systems. Some take up less than 1
megabyte of space and lack even a full-screen editor, whereas others
require hundreds of megabytes of space and include spelling checkers
and entire window systems. A more common definition is that the
operating system is the one program running at all times on the
computer (usually called the kernel), with all else being application
`
`
`
`
Chapter 2: Computer System Structures

2.1 Interrupt-Based Systems
`
`In contrast, in older systems, such data transfer was under the CPU
`control. The CPU had to execute, or at least to monitor, the data
`transfer, disallowing the overlapping of CPU and I/o operations. As an
`example, consider the steps needed to print data from memory:
`
`1. Check whether the printer is ready for the next character.
`2. If the printer is not ready, go to step 1.
3. If the printer is ready, check whether there is another character to print.
`4. If there is another character, go to step 1.
`5. If there are no further characters, we are done with the printing.
`
`This overlap method is known as busy waiting, since the CPU must keep
`checking the status of the I/o, and is therefore busy while it waits for
`each I/o to complete. Although it is true that in theory the CPU could do
`other processing and come back to print the next character later, this is
`not always possible in practice. Consider a fast input device that is
`being used. The CPU must still be faster, and thus will need to wait
`between inputs. If, however, the CPU switches to some other task to
avoid this waste of time, it might risk missing some input, since two
inputs may arrive during that time. Interrupt-based I/O is the perfect
solution to this scenario.
`The general structure of an interrupt-based system is depicted in
`Figure 2.1. Each device controller is in charge of a specific type of
`device (for example, disk drives, tape drives, line printers). A device
`controller maintains some local buffer storage and a set of special-
`purpose registers. The device controller is responsible for moving data
`between the peripheral device it controls and its local buffer storage.
`To start an I/O, the CPU loads the appropriate registers within the
`device controller and then resumes its normal operation. The device
`controller, in turn, examines the contents of these registers to determine
`what action to take. For example, if it finds a read request, the
`controller will start the transfer of data from the device to its local
`buffer. Once the transfer of data is complete, the device controller
informs the CPU that it has finished its operation. It accomplishes this
`communication by causing an interrupt.
`When the CPU is interrupted, it stops what it is doing and
`immediately transfers execution to a fixed location. The fixed location
`usually contains the starting address where the service routine for the
`interrupt is located. The interrupt service routine transfers data from
`the local buffer of the device controller to main memory. Once this
transfer is accomplished, the CPU can then resume the interrupted
computation. In this way, I/O devices and the CPU can be operated
concurrently. A time line of this operation is shown in Figure 2.2.

Figure 2.1 Interrupt-based computer system.
`Interrupts are an important part of a computer architecture. Each
`computer design has its own interrupt mechanism, but several functions
`are common. The interrupt must transfer control to the interrupt service
routine. Generally, this is done through reservation of a set of locations
in low memory (the first 100 or so locations) to hold the addresses of the
interrupt service routines for the various devices. This array, or vector, of
addresses is then indexed by a unique device number, given with the
interrupt request, to provide the address of the interrupt service routine
for the interrupting device.
The interrupt architecture must also save the address of the
interrupted instruction. Many old designs simply stored the interrupt
address in a fixed location or in a location indexed by the device
`
`
`
`
`
`
Figure 2.2 Interrupt time line.
`
`number. More recent architectures store the return address on the
`system stack. Other registers, such as accumulators or index registers,
`may need to be explicitly saved and restored by the interrupt service
`routine. After the interrupt is serviced, a jump back to the saved return
`address will resume the interrupted computation as though the
`interrupt had not occurred. Usually, interrupts are disabled while an
`interrupt is being processed, delaying the incoming interrupt until the
`operating system is done with the current one and interrupts are enabled.
`If they were not thus disabled, the processing of the second interrupt
`while the first was being serviced would overwrite the first’s data, and
`the first would be a lost interrupt. Sophisticated interrupt architectures
`allow for one interrupt to be processed during another, often by a
`priority scheme in which request types are assigned priorities according
`to their relative importance, and interrupt processing information is
`stored separately for each priority. A higher-priority interrupt will be
`taken even if a lower-priority interrupt is active, but interrupts at the
`same or lower levels are masked, or selectively disabled, to prevent lost
`interrupts or unnecessary ones.
`For example, consider a simple terminal input driver. When a line is
`to be read from the terminal, the first character typed is sent to the
`computer. When that character is received, the asynchronous
`communication (or serial port) device to which the terminal line is
`connected will interrupt the CPU. When the interrupt request from the
terminal arrives, the CPU will be about to execute some instruction. (If
the CPU is in the middle of executing an instruction, the interrupt is
normally held pending until the instruction execution is complete.) The
`address of this interrupted instruction is saved, and control is
`transferred to the interrupt service routine for the device.
`The interrupt service routine saves the contents of any registers it
`will need to use. It checks for any error conditions that might have
resulted from the last input operation. It then takes the character from
`the device and stores it in a buffer. The interrupt routine must also
`adjust pointer and counter variables, to be sure the next input character
`will be stored at the next location in the buffer. The interrupt routine
`next sets a flag in memory indicating to the other parts of the operating
`system that new input has been received. The other parts are
`responsible for processing the data in the buffer and transferring the
`characters to the program requesting input (see Section 2.5). Then, the
`interrupt service routine restores the contents of any saved registers and
`transfers control back to the interrupted instruction.
`If characters are being typed to a 1200-baud terminal, the terminal
`can accept and transfer one character approximately every 8
`milliseconds, or 8000 microseconds. A well-written interrupt service
`routine to input characters into a buffer may require 20 microseconds
per character, leaving 7980 microseconds out of every 8000 for CPU
`computation (and servicing of other interrupts). Given this disparity,
`asynchronous I/O is usually assigned a low interrupt priority, allowing
`other, more important interrupts to be processed first, or even to
`preempt the current interrupt for another. A high-speed device,
`however, such as a tape, disk, or communications network, may be able
`to transmit information at close to memory speeds; the CPU would need
`20 microseconds to respond to each interrupt, with interrupts arriving
`every 4 microseconds (for example).
`To solve this problem, direct-memory access (DMA) is used for high-
`speed I/O devices. After setting up buffers, pointers, and counters for
`the I/O device, the device controller transfers an entire block of data to or
`from its own buffer storage to memory directly, with no intervention by
`the CPU. Only one interrupt is generated per block, rather than the one
interrupt per byte (or word) generated for low-speed devices.
`The basic operation of the cpu is the same. A user program, or the
`operating system itself, may request data transfer. The operating system
`finds a buffer (an empty buffer for input or a full buffer for output) from
`a queue of buffers for the transfer. (A buffer is typically 128 to 4096
`bytes, depending on the device type.) The DMA controller then has its
`registers set to the appropriate source and destination addresses, and
transfer length. This is usually done by a device driver, which knows
exactly how this information is to be provided to the controller. The
DMA controller is then instructed (via control bits in a control register) to
`
`
`