OPERATING SYSTEM CONCEPTS GALVIN 6TH EDITION PDF


Avi Silberschatz. To my wife, Carla, and my children, Gwendolyn. The operating system must ensure the correct operation of the computer system. As we wrote this Ninth Edition of Operating System Concepts, we were guided by the many comments we received; Chapter 5, Process Synchronization (previously Chapter 6), adds new material. Abraham Silberschatz, Peter Galvin, Greg Gagne. We have tried to produce an instructor's manual that will aid all of the users of our book. An Adaptive CPU Scheduling for Embedded Operating Systems Using Genetic Algorithms.


Operating System Concepts Galvin 6th Edition Pdf

Author: TOMEKA PICKAR
Language: English, Indonesian, Arabic
Country: Canada
Genre: Science & Research
Pages: 512
Published (Last): 06.11.2015
ISBN: 475-2-67806-219-8
ePub File Size: 17.71 MB
PDF File Size: 12.69 MB
Distribution: Free* [*Register to download]
Downloads: 27930
Uploaded by: SANA

Provides a clear description of the concepts that underlie operating systems. As we wrote this Sixth Edition, we were guided by the many comments and suggestions we received. Thanks to Avi Silberschatz, Vice President, Information Sciences Research Center. On Jan 1, Abraham Silberschatz and others published Operating System Concepts, Sixth Edition. As we wrote this Ninth Edition of Operating System Concepts, we were guided by those comments; Chapter 6, CPU Scheduling (previously Chapter 5), contains new coverage. Parts of Chapter 17 were derived from a paper by Levy and Silberschatz.

If you have items that would be of use with Operating System Concepts, we invite you to contribute them. Programs can neither be guaranteed adherence to protection methods nor be trusted to allocate only free storage. Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne. Presented by Adam Lowry. Note that there are no authorized, free downloads of any of the Operating System Concepts books. Storage allocation.

Silberschatz, Galvin, Greg Gagne. Keep pace with the latest edition of this bestselling text by Silberschatz, Galvin, and Gagne on structures provided by the operating system, including free-space management with no waste of space. Adobe Acrobat Reader will allow you to view these files. If your computer does not have PowerPoint, a free viewer is available. Files are grouped into one archive, sometimes compressed.

Making this decision is CPU scheduling, which is discussed in Chapter 6. Finally, multiple jobs running concurrently require that their ability to affect one another be limited in all phases of the operating system, including process scheduling, disk storage, and memory management. These considerations are discussed throughout the text. Time sharing (or multitasking) is a logical extension of multiprogramming. The CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running.

An interactive or hands-on computer system provides direct communication between the user and the system. The user gives instructions to the operating system or to a program directly, using a keyboard or a mouse, and waits for immediate results. Accordingly, the response time should be short, typically within 1 second or so.

A time-shared operating system allows many users to share the computer simultaneously. Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user. As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to her use, even though it is being shared among many users. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.
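To make the idea concrete, the short C program below (not from the text; the job lengths and the quantum are invented) simulates round-robin time slicing: each job in turn receives a small quantum of CPU time until it finishes, which is the mechanism a time-shared system uses to give every user the illusion of a dedicated machine.

/* Minimal round-robin simulation; job lengths and quantum are hypothetical. */
#include <stdio.h>

#define NJOBS   3
#define QUANTUM 2   /* time units given to each job per turn */

int main(void) {
    int remaining[NJOBS] = {5, 3, 7};   /* CPU time each job still needs */
    int time = 0, done = 0;

    while (done < NJOBS) {
        for (int j = 0; j < NJOBS; j++) {
            if (remaining[j] == 0)
                continue;               /* this job has already finished */
            int slice = remaining[j] < QUANTUM ? remaining[j] : QUANTUM;
            time += slice;
            remaining[j] -= slice;
            printf("t=%2d: job %d ran for %d unit(s), %d left\n",
                   time, j, slice, remaining[j]);
            if (remaining[j] == 0)
                done++;
        }
    }
    return 0;
}

Each pass through the outer loop corresponds to one full rotation of the scheduler through the ready jobs; shrinking the quantum makes the interleaving finer at the cost of more switches.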

Each user has at least one separate program in memory. A program loaded into memory and executing is commonly referred to as a process. Input, for example, may be bounded by the user's typing speed; seven characters per second is fast for people, but incredibly slow for computers.

Rather than let the CPU sit idle when this interactive input takes place, the operating system will rapidly switch the CPU to the program of some other user. Time-sharing operating systems are even more complex than multiprogrammed operating systems. In both, several jobs must be kept simultaneously in memory, so the system must have memory management and protection (Chapter 9). To obtain a reasonable response time, jobs may have to be swapped in and out of main memory to the disk that now serves as a backing store for main memory.

A common method for achieving this goal is virtual memory, which is a technique that allows the execution of a job that may not be completely in memory (Chapter 10). The main advantage of the virtual-memory scheme is that it enables users to run programs that are larger than actual physical memory. Further, it abstracts main memory into a large, uniform array of storage, separating logical memory as viewed by the user from physical memory. This arrangement frees programmers from concern over memory-storage limitations.

Time-sharing systems must also provide a file system (Chapters 11 and 12). The file system resides on a collection of disks; hence, disk management must be provided (Chapter 14). Also, time-sharing systems provide a mechanism for concurrent execution, which requires sophisticated CPU-scheduling schemes (Chapter 6).

To ensure orderly execution, the system must provide mechanisms for job synchronization and communication (Chapter 7), and it may ensure that jobs do not get stuck in a deadlock, forever waiting for one another (Chapter 8). The idea of time sharing was demonstrated as early as 1960, but since time-shared systems are difficult and expensive to build, they did not become common until the early 1970s.

Although some batch processing is still done, most systems today are time sharing. Accordingly, multiprogramming and time sharing are the central themes of modern operating systems, and they are the central themes of this book.

Desktop Systems

Personal computers (PCs) appeared in the 1970s.

During their first decade, the CPUs in PCs lacked the features needed to protect an operating system from user programs. PC operating systems therefore were neither multiuser nor multitasking. However, the goals of these operating systems have changed with time; instead of maximizing CPU and peripheral utilization, the systems opt for maximizing user convenience and responsiveness.

Operating System Concepts, Sixth Edition

The Apple Macintosh operating system has been ported to more advanced hardware, and now includes new features, such as virtual memory and multitasking. Operating systems for these computers have benefited in several ways from the development of operating systems for mainframes. Microcomputers were immediately able to adopt some of the technology developed for larger operating systems. On the other hand, the hardware costs for microcomputers are sufficiently low that individuals have sole use of the computer, and CPU utilization is no longer a prime concern.

Thus, some of the design decisions made in operating systems for mainframes may not be appropriate for smaller systems. For example, file protection was, at first, not necessary on a personal machine.

However, these computers are now often tied into other computers over local-area networks or other Internet connections. When other computers and other users can access the files on a PC, file protection again becomes a necessary feature of the operating system.

The lack of such protection has made it easy for malicious programs to destroy data on systems such as MS-DOS and the Macintosh operating system. These programs may be self-replicating, and may spread rapidly via worm or virus mechanisms and disrupt entire companies or even worldwide networks.

Advanced time-sharing features such as protected memory and file permissions are not enough, on their own, to safeguard a system from attack. Recent security breaches have shown that time and again. These topics are discussed in Chapters 18 and 19.

Multiprocessor Systems

Most systems to date are single-processor systems; that is, they have only one main CPU. However, multiprocessor systems (also known as parallel systems or tightly coupled systems) are growing in importance. Such systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices.

Multiprocessor systems have three main advantages:

1. Increased throughput. By increasing the number of processors, we hope to get more work done in less time. The speed-up ratio with N processors is not N, however; it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly. This overhead, plus contention for shared resources, lowers the expected gain from additional processors. Similarly, a group of N programmers working closely together does not result in N times the amount of work being accomplished.

2. Economy of scale. Multiprocessor systems can save money compared with multiple single-processor systems, because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper to store those data on one disk and to have all the processors share them than to have many computers with local disks and many copies of the data.

3. Increased reliability. If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, but only slow it down.

If we have ten processors and one fails, then each of the remaining nine processors must pick up a share of the work of the failed processor. Thus, the entire system runs only 10 percent slower, rather than failing altogether.
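A rough back-of-the-envelope version of this arithmetic (our own, not the book's), for N identical processors that each contribute an equal share of the total capacity:

\[
\text{remaining capacity} = \frac{N-1}{N}
\qquad\Longrightarrow\qquad
N = 10:\ \frac{9}{10} = 90\%,\quad
\text{slowdown} \approx 1 - \frac{9}{10} = 10\%.
\]

This ignores the overhead of redistributing the failed processor's work, so the true slowdown is somewhat larger than 10 percent.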

This ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Systems designed for graceful degradation are also called fault tolerant. Continued operation in the presence of failures requires a mechanism to allow the failure to be detected, diagnosed, and, if possible, corrected. The Tandem system uses both hardware and software duplication to ensure continued operation despite faults.

The system consists of two identical processors, each with its own local memory. The processors are connected by a bus.

One processor is the primary and the other is the backup. Two copies are kept of each process: one on the primary machine and the other on the backup. At fixed checkpoints in the execution of the system, the state information of each job, including a copy of the memory image, is copied from the primary machine to the backup. If a failure is detected, the backup copy is activated and is restarted from the most recent checkpoint. This solution is expensive, since it involves considerable hardware duplication.
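A minimal sketch of the checkpoint idea, assuming a single state record written to a local file at a fixed interval (the file name, interval, and state layout are invented; the real Tandem scheme copies memory images between machines over a bus):

/* Periodically save a snapshot of the "job" state so a backup could
 * reload it after a failure. Purely illustrative. */
#include <stdio.h>
#include <unistd.h>

struct state { long step; double value; };

static void checkpoint(const struct state *s) {
    FILE *f = fopen("checkpoint.dat", "wb");   /* hypothetical checkpoint file */
    if (!f)
        return;
    fwrite(s, sizeof *s, 1, f);
    fclose(f);
}

int main(void) {
    struct state s = {0, 0.0};
    for (int i = 0; i < 500; i++) {
        s.step++;                  /* do some work */
        s.value += 0.5;
        if (s.step % 100 == 0)     /* fixed checkpoint interval */
            checkpoint(&s);
        usleep(1000);
    }
    return 0;
}

A backup process would simply read checkpoint.dat and resume from the recorded step, losing at most the work done since the last checkpoint.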

The most common multiple-processor systems now use symmetric multiprocessing (SMP), in which each processor runs an identical copy of the operating system, and these copies communicate with one another as needed. Some systems use asymmetric multiprocessing, in which each processor is assigned a specific task. A master processor controls the system; the other processors either look to the master for instruction or have predefined tasks.

This scheme defines a master-slave relationship. The master processor schedules and allocates work to the slave processors. SMP means that all processors are peers; no master-slave relationship exists between processors. Each processor concurrently runs a copy of the operating system. An example of an SMP system is Encore's version of UNIX for the Multimax computer, which can be configured so that it employs dozens of processors, all running copies of UNIX.

The benefit of this model is that many processes can run simultaneously (N processes can run if there are N CPUs) without causing a significant deterioration of performance. However, since the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies. These inefficiencies can be avoided if the processors share certain data structures. A multiprocessor system of this form will allow processes and resources, such as memory, to be shared dynamically among the various processors, and can lower the variance among the processors.
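As a small illustration of the "N processes on N CPUs" point, the sketch below (POSIX-specific and not from the text) asks the system how many processors are online and starts one worker thread per processor; on an SMP machine the workers can genuinely execute in parallel.

/* One worker per online CPU; compile with -lpthread on most systems. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    long id = (long)arg;
    printf("worker %ld running\n", id);   /* a real worker would do useful work */
    return NULL;
}

int main(void) {
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpus < 1)
        ncpus = 1;
    if (ncpus > 64)
        ncpus = 64;                       /* keep the static array small */

    pthread_t tid[64];
    for (long i = 0; i < ncpus; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (long i = 0; i < ncpus; i++)
        pthread_join(tid[i], NULL);

    printf("%ld CPUs online, %ld workers completed\n", ncpus, ncpus);
    return 0;
}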

Such a system must be written carefully, as we shall see in Chapter 7. The difference between symmetric and asymmetric multiprocessing may be the result of either hardware or software. Special hardware can differentiate the multiple processors, or the software can be written to allow only one master and multiple slaves. For instance, Sun's operating system SunOS Version 4 provides asymmetric multiprocessing, whereas Version 5 (Solaris 2) is symmetric on the same hardware.

As microprocessors become less expensive and more powerful, additional operating-system functions are off-loaded to slave processors or back-ends. For example, it is fairly easy to add a microprocessor with its own memory to manage a disk system.

The microprocessor could receive a sequence of requests from the main CPU and implement its own disk queue and scheduling algorithm. This arrangement relieves the main CPU of the overhead of disk scheduling. PCs contain a microprocessor in the keyboard to convert the keystrokes into codes to be sent to the CPU.

In fact, this use of microprocessors has become so common that it is no longer considered multiprocessing.

Distributed Systems

Distributed systems depend on networking for their functionality. By being able to communicate, distributed systems are able to share computational tasks, and provide a rich set of features to users. Networks vary by the protocols used, the distances between nodes, and the transport media.

Likewise, operating-system support of protocols varies. Some systems support proprietary protocols to suit their needs. To an operating system, a network protocol simply needs an interface device (a network adapter, for example) with a device driver to manage it, and software to package data in the communications protocol to send it and to unpackage it to receive it.

These concepts are discussed throughout the book. Networks are typecast based on the distances between their nodes.

A local-area network (LAN) exists within a room, a floor, or a building. A wide-area network (WAN) usually exists between buildings, cities, or countries. A global company may have a WAN to connect its offices worldwide. These networks could run one protocol or several protocols. The continuing advent of new technologies brings about new forms of networks. For example, BlueTooth devices communicate over a short distance of several feet, in essence creating a small-area network. The media to carry networks are equally varied.

They include copper wires, fiber strands, and wireless transmissions between satellites, microwave dishes, and radios. When computing devices are connected to cellular phones, they create a network. Even very short-range infrared communication can be used for networking.

At a rudimentary level, whenever computers communicate they use or create a network. These networks also vary by their performance and reliability.

Client-Server Systems

Terminals connected to centralized systems are now being supplanted by PCs. Correspondingly, user-interface functionality that used to be handled directly by the centralized systems is increasingly being handled by the PCs. As a result, centralized systems today act as server systems to satisfy requests generated by client systems.

Server systems can be broadly categorized as compute servers and file servers. Compute-server systems provide an interface to which clients can send requests to perform an action; in response, the server executes the action and sends the results back to the client.
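A minimal sketch of the compute-server interaction, assuming a hypothetical server listening on port 9000 of the local machine and a made-up request format; real compute servers (web, database, RPC) differ in protocol details:

/* Client side of a hypothetical request/response exchange over TCP. */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(9000);                  /* hypothetical server port */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        return 1;
    }

    const char *request = "COMPUTE sum 1 2 3\n";   /* invented request format */
    write(fd, request, strlen(request));

    char reply[256];
    ssize_t n = read(fd, reply, sizeof reply - 1); /* wait for the result */
    if (n > 0) {
        reply[n] = '\0';
        printf("server replied: %s", reply);
    }
    close(fd);
    return 0;
}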

File-server systems provide a file-system interface where clients can create, update, read, and delete files. With the beginning of widespread public use of the Internet in the 1990s for electronic mail, FTP, and gopher, many PCs became connected to computer networks. With the introduction of the Web in the mid-1990s, network connectivity became an essential component of a computer system.

Virtually all modern PCs and workstations are capable of running a web browser for accessing hypertext documents on the Web. Several operating systems now include the web browser itself, as well as electronic mail, remote login, and file-transfer clients and servers. In contrast to the tightly coupled systems discussed earlier, the computer networks used in these applications consist of a collection of processors that do not share memory or a clock.

Instead, each processor has its own local memory. The processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.

These systems are usually referred to as loosely coupled systems or distributed systems. Some operating systems have taken the concept of networks and distributed systems further than the notion of providing network connectivity. A network operating system is an operating system that provides features such as file sharing across the network, and that includes a communication scheme that allows different processes on different computers to exchange messages.

A computer running a network operating system acts autonomously from all other computers on the network, although it is aware of the network and is able to communicate with other networked computers.

A distributed operating system is a less autonomous environment: the different operating systems communicate closely enough to provide the illusion that only a single operating system controls the network. We cover computer networks and distributed systems in Chapters 15 through 17.

Clustered Systems

Like parallel systems, clustered systems gather together multiple CPUs to accomplish computational work.

Clustered systems differ from parallel systems, however, in that they are composed of two or more individual systems coupled together. The definition of the term clustered is not concrete; many commercial packages wrestle with what a clustered system is, and why one form is better than another.

The generally accepted definition is that clustered computers share storage and are closely linked via LAN networking. Clustering is usually performed to provide high availability. A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others over the LAN. If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The failed machine can remain down, but the users and clients of the application would see only a brief interruption of service.

In asymmetric clustering, one machine is in hot standby mode while the other is running the applications. The hot standby host machine does nothing but monitor the active server. If that server fails, the hot standby host becomes the active server.
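A sketch of the hot-standby monitoring loop, with probe_active() and become_active() as placeholders (real cluster software uses LAN heartbeats, fencing, and shared-storage arbitration rather than these invented functions):

/* Hot-standby monitor: promote ourselves after repeated missed heartbeats. */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool probe_active(void) {
    /* placeholder: e.g., send a heartbeat packet and wait for a reply */
    return true;
}

static void become_active(void) {
    printf("standby promoting itself to active server\n");
}

int main(void) {
    int missed = 0;
    while (1) {
        if (probe_active())
            missed = 0;
        else if (++missed >= 3) {   /* three missed heartbeats => failover */
            become_active();
            break;
        }
        sleep(1);                   /* heartbeat interval */
    }
    return 0;
}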

In symmetric mode, two or more hosts are running applications, and they are monitoring each other. This mode is obviously more efficient, as it uses all of the available hardware.

It does require that more than one application be available to run. Other forms of clusters include parallel clusters and clustering over a WAN. Parallel clusters allow multiple hosts to access the same data on the shared storage. Because most operating systems lack support for this simultaneous data access by multiple hosts, parallel clusters are usually accomplished by special versions of software and special releases of applications. For example, Oracle Parallel Server is a version of Oracle's database that has been designed to run on parallel clusters.

Each machine runs Oracle, and a layer of software tracks access to the shared disk. Each machine has full access to all data in the database. In spite of improvements in distributed computing, most systems do not offer general-purpose distributed file systems. Therefore, most clusters do not allow shared access to data on the disk.

For this, distributed file systems must provide access control and locking to the files to ensure that no conflicting operations occur. This type of service is commonly known as a distributed lock manager (DLM). Work is ongoing for general-purpose distributed file systems, with vendors like Sun Microsystems announcing roadmaps for delivery of a DLM within the operating system. Cluster technology is rapidly changing. Cluster directions include global clusters, in which the machines could be anywhere in the world (or anywhere a WAN reaches).
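The kind of locking a DLM coordinates can be illustrated on a single host with ordinary POSIX advisory file locks (a real DLM arbitrates equivalent locks across all cluster nodes; the file name here is invented):

/* Take an exclusive advisory lock before updating shared data. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("shared.db", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock lk = {0};
    lk.l_type   = F_WRLCK;    /* exclusive (write) lock */
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;
    lk.l_len    = 0;          /* 0 = lock the whole file */

    if (fcntl(fd, F_SETLKW, &lk) < 0) {   /* block until the lock is granted */
        perror("fcntl");
        return 1;
    }

    /* ... update the shared data while holding the lock ... */

    lk.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &lk);  /* release the lock */
    close(fd);
    return 0;
}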

Such projects are still the subject of research and development. Clustered-system use and features should expand greatly as storage-area networks (SANs), described later in the text, become prevalent. SANs allow easy attachment of multiple hosts to multiple storage units. Current clusters are usually limited to two or four hosts due to the complexity of connecting the hosts to shared storage.

Real-Time Systems

Another form of a special-purpose operating system is the real-time system. A real-time system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application.

Sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor inputs. Systems that control scientific experiments, medical imaging systems, industrial control systems, and certain display systems are real-time systems. Some automobile-engine fuel-injection systems, home-appliance controllers, and weapon systems are also real-time systems. A real-time system has well-defined, fixed time constraints.

Free pdf of operating system by galvin

Processing must be done within the defined constraints, or the system will fail. For instance, it would not do for a robot arm to be instructed to halt after it had smashed into the car it was building. Contrast this requirement to a time-sharing system, where it is desirable (but not mandatory) to respond quickly, or to a batch system, which may have no time constraints at all.

Real-time systems come in two flavors: hard and soft. A hard real-time system guarantees that critical tasks be completed on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time that it takes the operating system to finish any request made of it.

Such time constraints dictate the facilities that are available in hard real-time systems. Secondary storage of any sort is usually limited or missing, with data instead being stored in short-term memory or in read-only memory (ROM).

ROM is located on nonvolatile storage devices that retain their contents even in the case of electric outage; most other types of memory are volatile. Most advanced operating-system features are absent too, since they tend to separate the user from the hardware, and that separation results in uncertainty about the amount of time an operation will take. For instance, virtual memory (Chapter 10) is almost never found on real-time systems. Therefore, hard real-time systems conflict with the operation of time-sharing systems, and the two cannot be mixed.

Since none of the existing general-purpose operating systems support hard real-time functionality, we do not concern ourselves with this type of system in this text. A less restrictive type of real-time system is a soft real-time system, where a critical real-time task gets priority over other tasks, and retains that priority until it completes. As in hard real-time systems, the operating-system kernel delays need to be bounded: a real-time task cannot be kept waiting indefinitely for the kernel to run it.
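On Linux and most POSIX systems, a soft real-time task can request a fixed high priority so that it preempts ordinary time-shared tasks; the sketch below (not from the text) uses the SCHED_FIFO policy with an arbitrary priority value and normally requires appropriate privileges.

/* Ask for a fixed real-time priority; fails with EPERM for ordinary users. */
#include <sched.h>
#include <stdio.h>

int main(void) {
    struct sched_param sp = { .sched_priority = 50 };   /* arbitrary value */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("running with SCHED_FIFO priority %d\n", sp.sched_priority);
    /* ... time-critical work would go here ... */
    return 0;
}

Under SCHED_FIFO the task keeps the CPU until it blocks or yields, which matches the "retains that priority until it completes" behavior described above, but it provides no hard guarantee that a deadline will be met.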

Soft real time is an achievable goal that can be mixed with other types of systems. Soft real-time systems, however, have more limited utility than hard real-time systems. Given their lack of deadline support, they are risky to use for industrial control and robotics. They are useful, however, in several areas, including multimedia, virtual reality, and advanced scientific projects such as undersea exploration and planetary rovers.

These systems need advanced operating-system features that cannot be supported by hard real-time systems. Because of the expanded uses for soft real-time functionality, it is finding its way into most current operating systems, including major versions of UNIX.

In Chapter 10, we describe the design of memory management for real-time computing. Finally, in Chapter 21, we describe the real-time components of the Windows operating system.

Handheld Systems

Developers of handheld systems and applications face many challenges, most of which are due to the limited size of such devices. For example, a PDA is typically about 5 inches in height and 3 inches in width, and it weighs less than one-half pound.

Due to this limited size, most handheld devices have a small amount of memory, include slow processors, and feature small display screens. We will now take a look at each of these limitations. Many handheld devices have between 512 KB and 8 MB of memory. Contrast this with a typical PC or workstation, which may have several hundred megabytes of memory!

As a result, the operating system and applications must manage memory efficiently. This includes returning all allocated memory back to the memory manager once the memory is no longer being used.
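A sketch of the discipline this implies (the 64 KB pool size and the allocator itself are invented for illustration): allocate from a small fixed region, cope with allocation failure, and return memory promptly when it is no longer needed.

/* Toy bump allocator over a fixed pool, standing in for a tiny heap. */
#include <stdio.h>
#include <stdlib.h>

#define POOL_SIZE (64 * 1024)   /* pretend the whole heap is 64 KB */

static unsigned char pool[POOL_SIZE];
static size_t used;

static void *pool_alloc(size_t n) {
    if (used + n > POOL_SIZE)
        return NULL;             /* out of memory: the caller must cope */
    void *p = &pool[used];
    used += n;
    return p;
}

static void pool_reset(void) {   /* return everything to the "manager" */
    used = 0;
}

int main(void) {
    char *buf = pool_alloc(1024);
    if (!buf) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    snprintf(buf, 1024, "hello, handheld");
    puts(buf);
    pool_reset();
    return 0;
}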

In Chapter 10 we will explore virtual memory, which allows developers to write programs that behave as if the system has more memory than may be physically available. Currently, many handheld devices do not use virtual-memory techniques, thus forcing program developers to work within the confines of limited physical memory. A second issue of concern to developers of handheld devices is the speed of the processor used in the device. Processors for most handheld devices often run at a fraction of the speed of a processor in a PC.

Faster processors require more power. To include a faster processor in a handheld device would require a larger battery that would have to be replaced or recharged more frequently. To minimize the size of most handheld devices, smaller, slower processors that consume less power are typically used. Therefore, the operating system and applications must be designed not to tax the processor.

The last issue confronting program designers for handheld devices is the small display screens typically available. Whereas a monitor for a home computer may measure up to 21 inches, the display for a handheld device is often no more than 3 inches square.

Familiar tasks, such as reading e-mail or browsing web pages, must be condensed onto smaller displays. One approach for displaying the content in web pages is web clipping, where only a small subset of a web page is delivered and displayed on the handheld device. Some handheld devices may use wireless technology, such as BlueTooth, allowing remote access to e-mail and web browsing. Cellular telephones with connectivity to the Internet fall into this category. For devices without wireless access, however, downloading data typically means first downloading the data to a PC or workstation and then downloading it to the PDA.

Some PDAs allow data to be directly copied from one device to another using an infrared link. Generally, the limitations in the functionality of PDAs are balanced by their convenience and portability. Their use continues to expand as network connections become more available and other options, such as cameras and MP3 players, expand their utility.

Operating System Concepts

The same concepts are appropriate for the various classes of computers, from mainframes down to handheld devices. However, to start understanding modern operating systems, you need to realize the theme of feature migration and to recognize the long history of many operating-system features.

MULTICS, for example, was developed at MIT as a computing utility and ran on a large, complex mainframe computer (the GE 645). Thus, the features developed for a large mainframe system have moved to microcomputers over time. At the same time as features of large operating systems were being scaled down to fit PCs, more powerful, faster, and more sophisticated hardware systems were being developed. Many universities and businesses have large numbers of workstations tied together with local-area networks. As PCs gain more sophisticated hardware and software, the line dividing the two categories, mainframes and microcomputers, is blurring.

Computing Environments

Now that we have traced the development of operating systems from the first hands-on systems through multiprogrammed and time-shared systems to PCs and handheld computers, we can give a brief overview of how such systems are used in a variety of computing-environment settings.

Consider the "typical office environment. Remote access was awkward, and portability was achieved by laptop computers carrying some of the user's workspace. Terminalsattached to mainframes were prevalent at many compa- nies as well, with even fewer remote access and portability options.

The current trend is toward more ways to access these environments. Web technologies are stretching the boundaries of traditional computing. Companies implement portals, which provide web accessibility to their internal servers.

Network computers are essentially terminals that understand web-based computing. Handheld computers can synchronize with PCs to allow very portable use of company information. They can also connect to wireless networks to use the company's web portal as well as the myriad other web resources. At home, most users had a single computer with a slow modem connection to the office, the Internet, or both.

Network connection speeds once attainable only at great cost are now available at low cost, allowing more access to more data at a company or from the Web. Those fast data connections are allowing home computers to serve up web pages and to contain their own networks with printers, client PCs, and servers.

Some homes even have firewalls to protect these home environments from security breaches. Those firewalls cost thousands of dollars a few years ago and did not even exist a decade ago. PCs are still the most prevalent access devices, with workstations (high-end, graphics-oriented PCs), handheld PDAs, and even cell phones also providing access. Web computing has increased the emphasis on networking. Devices that were not previously networked now have wired or wireless access.

Devices that were networked now have faster network connectivity, either by improved networking technology, optimized network-implementation code, or both. The implementation of web-based computing has given rise to new categories of devices, such as load balancers, which distribute network connections among a pool of similar servers.

Operating systems like Windows 95, which acted as web clients, have evolved into Windows ME and Windows 2000, which can act as web servers as well as clients. Generally, the Web has increased the complexity of devices as their users require them to be web-enabled. Embedded computers run embedded real-time operating systems. These devices are found everywhere, from car engines and manufacturing robots to VCRs and microwave ovens.

They tend to have very specific tasks. The systems they run on are usually primitive, lacking advanced features, such as virtual memory, and even disks. Thus, the operating systems provide limited features. They usually have little or no user interface, preferring to spend their time monitoring and managing hardware devices, such as automobile engines and robotic arms.

As an example, consider the aforementioned firewalls and load balancers. Some are general-purpose computers, running standard operating systems, such as UNIX, with special-purpose applications loaded to implement the functionality. Others are hardware devices with a special-purpose operating system embedded within, providing just the functionality desired. The use of embedded systems continues to expand. The power of those devices, both as standalone units and as members of networks and the Web, is sure to increase as well.

Entire houses can be computerized, so that a central computer (either a general-purpose computer or an embedded system) can control heating and lighting, alarm systems, and even coffee makers. Web access can let a home owner tell the house to heat up before he arrives home. Someday, the refrigerator may call the grocery store when it notices the milk is gone.

Summary

Operating systems have been developed over the past 45 years for two main purposes. First, the operating system attempts to schedule computational activities to ensure good performance of the computing system.

Second, it provides a convenient environment for the development and execution of programs. Initially, computer systems were used from the front console. Software such as assemblers, loaders, linkers, and compilers improved the convenience of programming the system, but also required substantial set-up time.

Free PDF of Operating System by Galvin

To reduce the set-up time, facilities hired operators and batched similar jobs. Batch systems allowed automatic job sequencing by a resident operating system and greatly improved the overall utilization of the computer. The computer no longer had to wait for human operation. Off-line operation of slow devices provided a means to use multiple reader-to-tape and tape-to-printer systems for one CPU.

To improve the overall performance of the computer system, developers introduced the concept of multiprogramming, so that several jobs could be kept in memory at one time. The CPU is switched back and forth among them to increase CPU utilization and to decrease the total time needed to execute the jobs. Multiprogramming also allows time sharing. Time-shared operating systems allow many users (from one to several hundred) to use a computer system interactively at the same time.

PCs are microcomputers; they are considerably smaller and less expensive than mainframe systems. Operating systems for these computers have benefited from the development of operating systems for mainframes in several ways. However, since an individual has sole use of the computer, CPU utilization is no longer a prime concern.

Hence, some of the design decisions made for mainframe operating systems may not be appropriate for these smaller systems. Other design decisions, such as those for security, are appropriate for both small and large systems, as PCs can now be connected to other computers and users through networks and the Web.

Multiprocessor (parallel) systems, which have more than one CPU in close communication, can provide increased throughput and enhanced reliability. Distributed systems allow sharing of resources on geographically dispersed hosts. Clustered systems allow multiple machines to perform computations on data contained on shared storage, and let computing continue in the case of failure of some subset of cluster members.

A hard real-time system is often used as a control device in a dedicated application. A hard real-time operating system has well-defined, fixed time constraints. Soft real-time systems have less stringent timing constraints and do not support deadline scheduling. Recently, the influence of the Internet and the World Wide Web has encouraged the development of modern operating systems that include web browsers and networking and communication software as integral features.

We have shown the logical progression of operating-system development, driven by inclusion of features in the CPU hardware needed for advanced functionality.

This trend can be seen today in the evolution of PCs, with inexpensive hardware being improved sufficiently to allow, in turn, improved characteristics. Exercises I 1. This situation can result in various security problems. What are two such problems? Can we ensure the same degree of security in a time-sharedmachine as we have in a dedicated machine? Explain your answer. Batch b. Interactive BibliographicalNotes 25 c. Time sharing d.

Real time e. Network f. Parallel g. Distributed h. Clustered i. Handheld 1. When is it appropriate for the operating system to forsake this principle and to "waste" resources? Why is such a system not really wasteful?

What are three advantages and one disadvantage of multipro- cessor systems? Consider whether the operating system should include applications such as web browsers and mail programs. Argue both pro and con positions, and support your answers. Describetwo ways in which the cluster software can manage access to the data on the disk. Discuss the benefits and detriments of each. Bibliographical Notes Time-sharing systems were proposed first by Strachey []. An overview of the Apple Macintosh hardware and software is pre- sented in Apple [].

Solomon and Russinovich [] discuss the structure of the Microsoft Windows operating system. Good coverage of cluster computing is presented in [].

In this chapter, we look at several disparate parts of this structure to round out our background knowledge.

This chapter is mostly concerned with computer-system architecture, so you can skim or skip it if you already understand the concepts.

The operating system must also ensure the correct operation of the computer system. To ensure that user programs will not interfere with the proper operation of the system, the hardware must provide appropriate mechanisms to ensure correct behavior. Later in this chapter, we describe the basic computer architecture that makes it possible to write a functional operating system. We conclude with a network-architecture overview.

Computer-System Operation

A modern, general-purpose computer system consists of a CPU and a number of device controllers that are connected through a common bus that provides access to shared memory. Each device controller is in charge of a specific type of device (for example, disk drives, audio devices, and video displays). The CPU and the device controllers can execute concurrently, competing for memory cycles. To ensure orderly access to the shared memory, a memory controller is provided whose function is to synchronize access to the memory.

For a computer to start running-for instance, when it is powered up or rebooted-it needs to have an initial program to run. This initial program, or bootstrap program, tends to be simple. It initializes all aspects of the system, from CPU registers to device controllers to memory contents.

The bootstrap program must know how to load the operating system and to start executing that system. To accomplish this goal, the bootstrap program must locate and load into memory the operating-system kernel.

The operating system then starts executing the first process, such as "init," and waits for some event to occur. The occurrence of an event is usually signaled by an interrupt from either the hardware or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus.

Software may trigger an interrupt by executing a special operation called a system call (also called a monitor call).

Modern operating systems are interrupt driven. Events are almost always signaled by the occurrence of an interrupt or a trap.

A trap (or an exception) is a software-generated interrupt caused either by an error (for example, division by zero or invalid memory access) or by a specific request from a user program that an operating-system service be performed. The interrupt-driven nature of an operating system defines that system's general structure.
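As a concrete example of the second kind of software-generated interrupt, the short program below requests kernel service through the write() system call; the C library wrapper issues the trap instruction for us, and control returns to the program once the kernel has completed the request.

/* A user program asking the kernel for service via a system call. */
#include <string.h>
#include <unistd.h>

int main(void) {
    const char *msg = "hello from user space\n";
    /* write() traps into the operating system to perform the I/O */
    ssize_t n = write(STDOUT_FILENO, msg, strlen(msg));
    return n < 0 ? 1 : 0;
}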

For each type of interrupt, separate segments of code in the operating system determine what action should be taken.

