Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system, such as files, memory segments, the CPU, memory-addressing hardware, and device control registers. Mechanisms are provided to ensure that these resources can be operated on only by processes that have gained proper authorization from the operating system. The protection mechanism must provide a means for specifying the controls to be imposed, together with a means of enforcement.
Command-Interpreter System: The command interpreter is one of the most important system programs. It is the interface between the user and the operating system. Many commands are given to the operating system by control statements.
When a new job is started in a batch system, or a user logs on to a time-shared system, a program that reads and interprets control statements is executed automatically.
Its function is simple: get the next command statement and execute it. An operating system provides an environment for the execution of programs. To achieve this, it provides certain services to programs and to the users of those programs. Services differ from system to system; however, some are common. Program execution: The operating system provides the mechanisms for loading and executing programs. The system must be able to load a program into memory and execute it.
The OS also provides the means for process termination: a program must be able to end its execution, either normally or abnormally. I/O operations: a running program should be able to read from and write to files and I/O devices.
File system manipulation: Programs store their data in files and also read input from files. Programs also need to create and delete files by name. The OS must provide the mechanisms necessary for programs to read and write files, and it should also provide the means for file creation and deletion. Communication: In certain circumstances, a process needs to exchange information with other processes.
The operating system provides the necessary inter-process communication (IPC) mechanisms. Two common approaches are shared memory and message passing. Error detection: Errors may occur in the system during program execution.
The operating system constantly needs to be aware of possible errors. For each type of error, the operating system should take the appropriate action to ensure correct and consistent computing. Other services: Other operating system functions exist to ensure the efficient operation of the system itself. They include resource allocation.
Many-to-One Model: Thread management is done in user space. When a thread makes a blocking system call, the entire process blocks. Because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.
If thread libraries are implemented entirely in user space, without kernel support for threads, the system follows this many-to-one model: many user-level threads map to one kernel thread. One-to-One Model: There is a one-to-one relationship between user-level threads and kernel-level threads. This model provides more concurrency than the many-to-one model: it allows another thread to run when one thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread.
User-level threads differ from kernel-level threads in several ways: implementation is by a thread library at the user level, whereas the operating system itself supports the creation of kernel threads; user-level threads are generic and can run on any operating system, whereas kernel-level threads are specific to the operating system; and a multi-threaded application using user-level threads cannot take advantage of multiprocessing, whereas kernel routines themselves can be multithreaded. Race Condition: A race condition is an undesirable situation that occurs when a device or system attempts to perform two or more operations at the same time but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
A race condition occurs when two threads access a shared variable at the same time. The first thread reads the variable, and the second thread reads the same value from the variable. Then the first thread and second thread perform their operations on the value, and they race to see which thread can write the value last to the shared variable.
The value of the thread that writes last is preserved, because that thread overwrites the value the previous thread wrote; the earlier update is lost. Memory management is the functionality of an operating system that handles or manages primary memory.
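The lost-update scenario just described can be sketched in Python with two threads incrementing a shared counter. This is a minimal illustration, not part of the original text: the unsynchronized function shows where updates can be lost, while the lock-protected version always yields the expected total.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Read-modify-write without synchronization: two threads may read the same
    # value, and the last writer silently overwrites the other's update.
    # (Shown for illustration only; not called in the demo below.)
    global counter
    for _ in range(n):
        tmp = counter       # read
        counter = tmp + 1   # write (may clobber a concurrent update)

def safe_increment(n):
    # The lock forces the read and the write to execute as one atomic step.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 200000 with the lock; often less with unsafe_increment
```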
Memory management keeps track of each and every memory location, whether it is allocated to some process or free. It checks how much memory is to be allocated to processes and decides which process will get memory at what time. It tracks whenever memory is freed or unallocated and updates the status correspondingly. Memory management provides protection by using two registers, a base register and a limit register. The base register holds the smallest legal physical memory address and the limit register specifies the size of the range.
For example, if the base register holds a value B and the limit register holds L, then the program can legally access all addresses from B through B + L − 1. Dynamic loading: All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed; other routines (methods or modules) are loaded on request. Dynamic loading makes better use of memory space, since unused routines are never loaded. The operating system can link system-level libraries to a program. When it combines the libraries at load time, the linking is called static linking; when the linking is done at execution time, it is called dynamic linking.
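The base/limit protection check described above can be sketched as follows. The register values here are illustrative assumptions, not taken from the text:

```python
def is_legal_access(addr, base, limit):
    # An access is legal only if base <= addr < base + limit;
    # anything outside this window traps to the operating system.
    return base <= addr < base + limit

BASE, LIMIT = 300040, 120900  # illustrative register values

assert is_legal_access(300040, BASE, LIMIT)      # first legal address
assert is_legal_access(420939, BASE, LIMIT)      # last legal address (B + L - 1)
assert not is_legal_access(420940, BASE, LIMIT)  # one past the end -> trap
assert not is_legal_access(299999, BASE, LIMIT)  # below the base -> trap
```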
In static linking, libraries are linked at compile time, so the program code size becomes bigger, whereas in dynamic linking libraries are linked at execution time, so the program code size remains smaller. Swapping: Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.
The backing store is usually a hard disk drive or other secondary storage that is fast in access and large enough to accommodate copies of all memory images for all users.
It must be capable of providing direct access to these memory images. The operating system uses the following memory allocation mechanisms:
1. Single-partition allocation: A relocation-register scheme is used to protect user processes from each other, and from changes to operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses; each logical address must be less than the limit register.
2. Multiple-partition allocation: Main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process. When a partition is free, a process is selected from the input queue and loaded into it. When the process terminates, the partition becomes available for another process.
Fragmentation: As processes are loaded and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to memory blocks because the blocks are too small, and the memory blocks remain unused.
This problem is known as fragmentation. Fragmentation is of two types:
1. External fragmentation: Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: The memory block assigned to a process is bigger than requested, so some portion of the memory is left unused, as it cannot be used by another process.
External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. External fragmentation is avoided by using the paging technique.
Paging is a technique in which memory is broken into fixed-sized blocks: physical memory into frames and logical memory into blocks of the same size called pages (the page size is a power of 2). When a process is to be executed, its corresponding pages are loaded into any available memory frames. The logical address space of a process can thus be non-contiguous, and a process is allocated physical memory whenever free frames are available. The operating system keeps track of all free frames.
The operating system needs n free frames to run a program of size n pages. Segmentation: Segmentation is a technique to break memory into logical pieces, where each piece represents a group of related information: for example, a code segment and data segment for each process, a data segment for the operating system, and so on. Segmentation can be implemented with or without paging.
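The page-to-frame translation described above can be sketched as follows. The page size and the page-table contents are assumptions for illustration:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages (a power of 2)

# A tiny page table mapping page numbers to frame numbers; the mapping is
# arbitrary, illustrating that logical pages need not be contiguous in memory.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE   # page number = high-order bits
    offset = logical_addr % PAGE_SIZE  # offset = low-order bits
    frame = page_table[page]           # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```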
Speed differences between two devices: A slow device may write data into a buffer, and when the buffer is full, the entire buffer is sent to the fast device all at once. So that the slow device still has somewhere to write while this is going on, a second buffer is used, and the two buffers alternate as each becomes full.
This is known as double buffering. Double buffering is often used in animated graphics, so that one screen image can be generated in a buffer while the other, completed buffer is displayed on the screen. This prevents the user from ever seeing a half-finished screen image. Data transfer size differences:
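The alternating-buffer idea can be sketched as below. The buffer size and the items are arbitrary choices for the demo:

```python
# Two fixed-size buffers alternate: the slow producer fills one while the
# consumer drains the other, so the producer never waits on a full buffer.
BUF_SIZE = 4
buffers = [[], []]
active = 0       # index of the buffer the producer is currently filling
consumed = []    # stands in for the fast device receiving whole buffers

def flush(i):
    # Hand the entire full buffer to the consumer at once, then reuse it.
    consumed.extend(buffers[i])
    buffers[i].clear()

def produce(item):
    global active
    buffers[active].append(item)
    if len(buffers[active]) == BUF_SIZE:
        flush(active)         # the full buffer goes out in one transfer...
        active = 1 - active   # ...while writing continues into the other one

for x in range(10):
    produce(x)
print(consumed)  # items 0-7 delivered in two full batches; 8 and 9 still buffered
```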
Buffers are used in particular in networking systems to break messages up into smaller packets for transfer, and then to re-assemble them at the receiving side. To support copy semantics: For example, when an application requests a disk write, the data is copied from the user's memory area into a kernel buffer. The application can then change its copy of the data, but the data that eventually gets written to disk is the version at the time the write request was made.
Virtual Memory: This section describes the concepts of virtual memory, demand paging, and various page replacement algorithms. Virtual memory is a technique that allows the execution of processes that are not completely available in memory.
The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.
There are situations when the entire program is not required to be loaded fully in main memory. Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system; demand segmentation can likewise be used to provide virtual memory.
Virtual memory algorithms: Page replacement algorithms are the techniques by which the operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Paging happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no pages are available or because the number of free pages is lower than required. The time spent waiting for page-ins determines the quality of a page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about accessing the pages provided by hardware, and tries to select which pages should be replaced to minimize the total number of page misses, while balancing it with the costs of primary storage and processor time of the algorithm itself.
There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.
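One concrete algorithm, FIFO replacement, can be sketched and evaluated exactly this way: run it over a reference string and count the faults. The reference string below is a common textbook example, chosen here for illustration:

```python
from collections import deque

def fifo_page_faults(reference_string, n_frames):
    # Simulates FIFO replacement: on a fault with all frames full,
    # evict the page that has been resident the longest.
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()   # oldest page out
            frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_page_faults(refs, 3))  # 15 faults with 3 frames
```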
Reference String: The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data. Translation look-aside buffer (TLB): A translation lookaside buffer (TLB) is a memory cache that stores recent translations of virtual memory addresses to physical addresses for faster retrieval.
When a virtual memory address is referenced by a program, the search starts in the CPU. First, the instruction caches are checked; at this point, the TLB is consulted for a quick reference to the location in physical memory. When an address is searched for in the TLB and not found, the page tables in physical memory must be searched with a page-walk operation. As virtual addresses are translated, the referenced mappings are added to the TLB. TLBs also add the support required for multi-user computers to keep memory separate, by having a user and a supervisor mode as well as using permissions on read and write bits to enable sharing.
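The hit/miss behaviour just described can be sketched with a small software model of a TLB. The capacity, the LRU eviction policy, and the page-table contents are assumptions for illustration; a real TLB is a hardware cache, but the lookup logic is the same:

```python
from collections import OrderedDict

class TLB:
    # A tiny fully-associative TLB model with LRU eviction.
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()  # page number -> frame number
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.entries:
            self.hits += 1
            self.entries.move_to_end(page)    # refresh LRU position
            return self.entries[page]
        self.misses += 1                      # TLB miss: walk the page table
        frame = page_table[page]
        if len(self.entries) == self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[page] = frame
        return frame

page_table = {p: p + 100 for p in range(10)}  # arbitrary mapping for the demo
tlb = TLB(capacity=2)
for p in [0, 1, 0, 2, 0]:
    tlb.lookup(p, page_table)
print(tlb.hits, tlb.misses)  # 2 hits, 3 misses
```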
TLBs can suffer performance issues from multitasking and code errors. This performance degradation is called a cache thrash.
Cache thrash is caused by an ongoing computer activity that fails to progress due to excessive use of resources or conflicts in the caching system. Operating System Security: This section describes various security-related aspects such as authentication, one-time passwords, threats, and security classifications.
So a computer system must be protected against unauthorized access, malicious access to system memory, viruses, worms, etc. We are going to discuss the following topics in this article. One-time passwords provide additional security along with normal authentication. In a one-time password system, a unique password is required every time a user tries to log in to the system.
Once a one-time password is used, it cannot be used again. One-time passwords are implemented in various ways: the system may ask for the numbers corresponding to a few randomly chosen alphabets, or for a secret id that must be generated afresh prior to every login. The operating system's processes and kernel do their designated tasks as instructed. If a user program makes these processes do malicious tasks, it is known as a program threat.
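One common way to generate such passwords is a counter-based scheme in the style of HOTP (simplified here from RFC 4226): each counter value yields a different short code, so a captured password is useless at the next login. The secret and the truncation details below are illustrative, and this sketch omits the standard's full requirements:

```python
import hmac
import hashlib

def one_time_password(secret: bytes, counter: int) -> str:
    # HMAC the login counter with the shared secret, then truncate the
    # digest to a 6-digit code (dynamic truncation, as in HOTP).
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

secret = b"shared-secret"  # provisioned to user and server in advance
first = one_time_password(secret, 0)
second = one_time_password(secret, 1)
print(first, second)  # two 6-digit codes; the code changes with every login
```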
One of the common examples of a program threat is a program installed on a computer that can store and send user credentials over the network to a hacker. Following is a list of some well-known program threats. A virus is generally a small piece of code embedded in a program; it is harder to detect. System threats refer to the misuse of system services and network connections to put the user in trouble. System threats can be used to launch program threats on a complete network; this is called a program attack.
Following is a list of some well-known system threats. A worm process generates multiple copies of itself, where each copy uses system resources and prevents all other processes from getting the resources they require. Worm processes can even shut down an entire network. This definition motivates a generic model of language processing activities. We refer to the collection of language processor components engaged in analyzing a source program as the analysis phase of the language processor.
Components engaged in synthesizing a target program constitute the synthesis phase. Hardware is just a piece of mechanical and electronic equipment, and its functions are controlled by compatible software.
Hardware understands instructions in the form of electronic charge, which is the counterpart of binary language in software programming. Binary language has only two alphabets, 0 and 1. To instruct the hardware, codes must be written in binary format, which is simply a series of 1s and 0s.
It would be a difficult and cumbersome task for computer programmers to write such codes directly, which is why we have compilers to generate them. Language Processing System: We have learnt that any computer system is made of hardware and software.
The hardware understands a language, which humans cannot understand. So we write programs in high-level language, which is easier for us to understand and remember. These programs are then fed into a series of tools and OS components to get the desired code that can be used by the machine.
This is known as a Language Processing System. Preprocessors may perform the following functions. Macro processing: A preprocessor may allow a user to define macros that are shorthands for longer constructs.
File inclusion: A preprocessor may include header files in the program text. Rational preprocessors: These preprocessors augment older languages with more modern flow-of-control and data-structuring facilities. An important part of a compiler is showing errors to the programmer.
Programmers began to use mnemonic symbols for each machine instruction, which they would subsequently translate into machine language. Such a mnemonic machine language is now called an assembly language. Programs known as assemblers were written to automate the translation of assembly language into machine language. The input to an assembler program is called the source program; the output is a machine-language translation (object program). What is an assembler?
A tool called an assembler translates assembly language into binary instructions. Symbolic names for operations and locations are one facet of this representation. An assembler reads a single assembly language source file and produces an object file containing machine instructions and bookkeeping information that helps combine several object files into a program. Figure 1 illustrates how a program is built. Most programs consist of several files—also called modules— that are written, compiled, and assembled independently.
A program may also use prewritten routines supplied in a program library. A module typically contains references to subroutines and data defined in other modules and in libraries. The code in a module cannot be executed while it contains unresolved references to labels in other object files or libraries. Another tool, called a linker, combines a collection of object and library files into an executable file, which a computer can run. The assembler provides: a. Access to the entire instruction set of the machine.
b. A means for specifying the run-time locations of program and data in memory. c. Symbolic labels for the representation of constants and addresses. d. Assembly-time arithmetic. e. The use of synthetic instructions. f. Emission of machine code in a form that can be loaded and executed. g. Reporting of syntax errors and provision of program listings. h. An interface to the module linkers and program loader. i. Expansion of programmer-defined macro routines. In a pure interpreter, the source code is interpreted directly, which requires more overhead and makes the process complex, while in an impure interpreter the source code is subjected to some initial preprocessing before it is eventually interpreted.
The actual analysis overhead is now reduced, enabling faithful and efficient interpretation. Java also uses an interpreter. The process of interpretation can be carried out in the following phases: 1. Lexical analysis, 2. Syntax analysis, 3. Semantic analysis, 4. Direct execution. Loader and Link-editor: Once the assembler produces an object program, that program must be placed into memory and executed. The assembler could place the object program directly in memory and transfer control to it, thereby causing the machine-language program to be executed.
Also, the programmer would have to retranslate his program with each execution, thus wasting translation time. Loaders and link-editors are used to overcome these problems of wasted translation time and memory. It is also expected that a compiler should make the target code efficient and optimized in terms of time and space.
Compiler design principles provide an in-depth view of the translation and optimization process. It includes lexical, syntax, and semantic analysis as the front end, and code generation and optimization as the back end.
Analysis Phase: Known as the front-end of the compiler, the analysis phase reads the source program, divides it into core parts, and then checks for lexical, grammar, and syntax errors. The analysis phase generates an intermediate representation of the source program and a symbol table, which are fed to the synthesis phase as input. Synthesis Phase: Known as the back-end of the compiler, the synthesis phase generates the target program with the help of the intermediate source code representation and the symbol table.
A compiler can have many phases and passes. Pass : A pass refers to the traversal of a compiler through the entire program. Phase : A phase of a compiler is a distinguishable stage, which takes input from the previous stage, processes and yields output that can be used as input for the next stage. A pass can have more than one phase. A common division into phases is described below. In some compilers, the ordering of phases may differ slightly, some phases may be combined or split into several phases or some extra phases may be inserted between those mentioned below.
Lexical analysis: This is the initial part of reading and analysing the program text. The text is read and divided into tokens, each of which corresponds to a symbol in the programming language, e.g., a variable name, keyword, or number. Syntax analysis: This phase takes the list of tokens produced by the lexical analysis and arranges them in a tree structure called the syntax tree, which reflects the structure of the program.
This phase is often called parsing. Type checking: This phase analyses the syntax tree to determine if the program violates certain consistency requirements, e.g., if a variable is used but not declared, or is used in a context that does not match its declaration. Intermediate code generation: The program is translated into a simple machine-independent intermediate language.
Register allocation: The symbolic variable names used in the intermediate code are translated to numbers, each of which corresponds to a register in the target machine code. In terms of programming languages, words are objects like variable names, numbers, and keywords. Lexical analysis is the first phase of a compiler. It takes the modified source code from language preprocessors, written in the form of sentences.
The lexical analyzer breaks these sentences into a series of tokens, removing any whitespace and comments in the source code. If the lexical analyzer finds an invalid token, it generates an error. The lexical analyzer works closely with the syntax analyzer: it reads character streams from the source code, checks for legal tokens, and passes the data to the syntax analyzer on demand.
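The scanning behaviour described, skipping whitespace, emitting tokens, and flagging illegal characters, can be sketched with a regex-driven lexer. The token classes below are a minimal assumed grammar, not one defined by the text:

```python
import re

# Each token class is a regular expression; whitespace is matched but skipped,
# and any character no pattern matches is reported as a lexical error.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(code):
    tokens, pos = [], 0
    while pos < len(code):
        m = MASTER.match(code, pos)
        if not m:
            raise SyntaxError(f"illegal character {code[pos]!r} at position {pos}")
        if m.lastgroup != "SKIP":          # drop whitespace, keep real tokens
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

print(tokenize("count = count + 1"))
# [('IDENT', 'count'), ('OP', '='), ('IDENT', 'count'), ('OP', '+'), ('NUMBER', '1')]
```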
Tokens and lexemes: A lexeme is a sequence of characters (alphanumeric) in the source program that is matched as an instance of a token. The term job refers to a unit of work submitted by a user to the operating system; it was used in the early batch operating systems.
The term process basically refers to an execution instance of a program. An embedded operating system is an operating system for embedded computer systems. Most embedded operating systems are real-time operating systems.
Early computer systems had no operating systems, and programs had to directly control all necessary hardware components. The first types of operating systems were the simple batch operating systems, in which users had to submit their jobs on punched cards or tape. One of the most important concepts for these systems was automatic job sequencing.
The next types of operating systems developed were batch systems with multiprogramming. These systems were capable of keeping several programs active in memory at any time and required more advanced memory management. Time-sharing operating systems were the next important generation of systems developed. The operating system provided CPU service during a small, fixed interval to a program and then switched to the next program. Real-time operating systems were developed after these.
The Unix family includes Linux, which was primarily designed and first implemented by Linus Torvalds and other collaborators in 1991. Another group of operating systems consists of Windows 95, 98, ME, and CE, which can be considered the smaller members of the Windows family. One of the important benefits of running a 64-bit OS is that it can address more memory.
Mac OS X Snow Leopard incorporates new technologies that offer immediate improvements while also smartly setting it up for the future. Mac OS X is faster, more secure, and completely ready to support future applications.
This chapter has presented the fundamental concepts of operating systems. Discussions on the general structure, user interfaces, and functions of an operating system have also been presented.
Two examples of well-known operating systems are Linux and Microsoft Windows. The process manager is considered the most important component of the OS. Exercises: Explain the main facilities provided by an operating system, giving examples. Explain and give examples.