Sunday, March 4, 2012

MCS-041 Free Solved 2012 Assignment

Course Code                                       :           MCS-041
Course Title                                        :           Operating Systems
Assignment Number                           :           MCA (4)/041/Assign/11
Maximum Marks                                :           100
Weightage                                           :           25%
Last Date of Submission                     :           15th October, 2011 (for July, 2011 session)
                                                                        15th April, 2012 (for January, 2012 session)
This assignment has four questions. Answer all questions. The remaining 20 marks are for viva voce. You may use illustrations and diagrams to enhance the explanations. Please go through the guidelines regarding assignments given in the Programme Guide for the format of presentation. The answer to each part of a question should be confined to about 300 words.

Question 1:

(a)        Describe the Banker’s algorithm                                                                                              (7)

Ans: The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra. It tests for safety by simulating the allocation of the pre-determined maximum possible amounts of all resources, and then makes a "safe-state" check to test for possible deadlock conditions for all other pending activities, before deciding whether the allocation should be allowed to continue. The algorithm was developed in the design process for the THE operating system and originally described (in Dutch) in EWD108 [1]. The name is by analogy with the way that bankers account for liquidity constraints.


The Banker's algorithm is run by the operating system whenever a process requests resources [2]. The algorithm avoids deadlock by denying or postponing the request if it determines that accepting the request could put the system in an unsafe state (one where deadlock could occur). When a new process enters the system, it must declare the maximum number of instances of each resource type that it may need; this maximum may not exceed the total number of resources in the system. Also, when a process gets all its requested resources, it must return them in a finite amount of time.

Resources

For the Banker's algorithm to work, it needs to know three things:
  • How much of each resource each process could possibly request
  • How much of each resource each process is currently holding
  • How much of each resource the system currently has available
Resources may be allocated to a process only if the request satisfies the following conditions:
  1. request ≤ max, otherwise set an error condition, since the process has exceeded its maximum claim.
  2. request ≤ available, otherwise the process waits until resources are available.
Some of the resources that are tracked in real systems are memory, semaphores and interface access.
The Banker's Algorithm derives its name from the fact that this algorithm could be used in a banking system to ensure that the bank does not run out of resources, because the bank would never allocate its money in such a way that it can no longer satisfy the needs of all its customers. By using the Banker's algorithm, the bank ensures that when customers request money the bank never leaves a safe state. If the customer's request does not cause the bank to leave a safe state, the cash will be allocated, otherwise the customer must wait until some other customer deposits enough.
Basic data structures to be maintained to implement the Banker's Algorithm:
Let n be the number of processes in the system and m be the number of resource types. Then we need the following data structures:
  • Available: A vector of length m indicates the number of available resources of each type. If Available[j] = k, there are k instances of resource type Rj available.
  • Max: An n×m matrix defines the maximum demand of each process. If Max[i,j] = k, then Pi may request at most k instances of resource type Rj.
  • Allocation: An n×m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i,j] = k, then process Pi is currently allocated k instances of resource type Rj.
  • Need: An n×m matrix indicates the remaining resource need of each process. If Need[i,j] = k, then Pi may need k more instances of resource type Rj to complete its task.
Note: Need = Max - Allocation.

Example

Assuming that the system distinguishes between four types of resources, (A, B, C and D), the following is an example of how those resources could be distributed. Note that this example shows the system at an instant before a new request for resources arrives. Also, the types and number of resources are abstracted. Real systems, for example, would deal with much larger quantities of each resource.
Available system resources are:
3 1 1 2
Processes (currently allocated resources):
   A B C D
P1 1 2 2 1
P2 1 0 3 3
P3 1 2 1 0
Processes (maximum resources):
   A B C D
P1 3 3 2 2
P2 1 2 3 4
P3 1 3 5 0

Safe and Unsafe States

A state (as in the above example) is considered safe if it is possible for all processes to finish executing (terminate). Since the system cannot know when a process will terminate, or how many resources it will have requested by then, the system assumes that all processes will eventually attempt to acquire their stated maximum resources and terminate soon afterward. This is a reasonable assumption in most cases since the system is not particularly concerned with how long each process runs (at least not from a deadlock avoidance perspective). Also, if a process terminates without acquiring its maximum resources, it only makes it easier on the system.
Given that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of requests by the processes that would allow each to acquire its maximum resources and then terminate (returning its resources to the system). Any state where no such set exists is an unsafe state.

Example

We can show that the state given in the previous example is a safe state by showing that it is possible for each process to acquire its maximum resources and then terminate.
  1. P1 acquires 2 A, 1 B and 1 D more resources, achieving its maximum
    • The system now still has 1 A, no B, 1 C and 1 D resource available
  2. P1 terminates, returning 3 A, 3 B, 2 C and 2 D resources to the system
    • The system now has 4 A, 3 B, 3 C and 3 D resources available
  3. P2 acquires 2 B and 1 D extra resources, then terminates, returning all its resources
    • The system now has 5 A, 3 B, 6 C and 6 D resources
  4. P3 acquires 1 B and 4 C resources and terminates
    • The system now has all resources: 6 A, 5 B, 7 C and 6 D
  5. Because all processes were able to terminate, this state is safe
Note that these requests and acquisitions are hypothetical. The algorithm generates them to check the safety of the state, but no resources are actually given and no processes actually terminate. Also note that the order in which these requests are generated – if several can be fulfilled – doesn't matter, because all hypothetical requests let a process terminate, thereby increasing the system's free resources.
For an example of an unsafe state, consider what would happen if process 2 was holding 1 more unit of resource B at the beginning.
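The safety check in the worked example above can be sketched in Python. This is an illustrative sketch only; the function and variable names are ours, not part of the original answer.

```python
# A minimal sketch of the Banker's safety check, using the example data above.
# Resource types are (A, B, C, D); process names P1..P3 are from the example.

def is_safe(available, allocation, maximum):
    """Return True if some hypothetical completion order exists."""
    need = {p: [m - a for m, a in zip(maximum[p], allocation[p])]
            for p in allocation}
    work = list(available)
    unfinished = set(allocation)
    while unfinished:
        # Find any process whose remaining need fits in the free resources.
        runnable = next((p for p in unfinished
                         if all(n <= w for n, w in zip(need[p], work))), None)
        if runnable is None:
            return False          # no process can finish: unsafe
        # Let it run to completion and reclaim its allocation.
        work = [w + a for w, a in zip(work, allocation[runnable])]
        unfinished.remove(runnable)
    return True

allocation = {"P1": [1, 2, 2, 1], "P2": [1, 0, 3, 3], "P3": [1, 2, 1, 0]}
maximum    = {"P1": [3, 3, 2, 2], "P2": [1, 2, 3, 4], "P3": [1, 3, 5, 0]}

print(is_safe([3, 1, 1, 2], allocation, maximum))   # True: the example state is safe

# The unsafe variant: P2 holds one more unit of B, so one less is available.
alloc2 = dict(allocation, P2=[1, 1, 3, 3])
print(is_safe([3, 0, 1, 2], alloc2, maximum))       # False
```

The hypothetical completion order the function discovers (P1, then P2, then P3) matches the one traced by hand above.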

Requests

When the system receives a request for resources, it runs the Banker's algorithm to determine if it is safe to grant the request. The algorithm is fairly straightforward once the distinction between safe and unsafe states is understood.
  1. Can the request be granted?
    • If not, the request is impossible and must either be denied or put on a waiting list
  2. Assume that the request is granted
  3. Is the new state safe?
    • If so grant the request
    • If not, either deny the request or put it on a waiting list
Whether the system denies or postpones an impossible or unsafe request is a decision specific to the operating system.
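The request-handling steps above can be sketched as follows. This is a hypothetical helper, not from the source; it embeds the same safety check described earlier and rolls back if the tentative grant would be unsafe.

```python
# A sketch of the request step: tentatively grant, test safety, roll back if unsafe.

def is_safe(available, allocation, maximum):
    need = {p: [m - a for m, a in zip(maximum[p], allocation[p])] for p in allocation}
    work, unfinished = list(available), set(allocation)
    while unfinished:
        p = next((q for q in unfinished
                  if all(n <= w for n, w in zip(need[q], work))), None)
        if p is None:
            return False
        work = [w + a for w, a in zip(work, allocation[p])]
        unfinished.remove(p)
    return True

def request(process, req, available, allocation, maximum):
    """Return True (and update the state) if the request can be safely granted."""
    need = [m - a for m, a in zip(maximum[process], allocation[process])]
    if any(r > n for r, n in zip(req, need)):
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(req, available)):
        return False                      # not enough free resources: wait
    # Pretend to grant the request, then test the resulting state.
    new_available = [a - r for a, r in zip(available, req)]
    new_allocation = dict(allocation)
    new_allocation[process] = [a + r for a, r in zip(allocation[process], req)]
    if not is_safe(new_available, new_allocation, maximum):
        return False                      # resulting state unsafe: deny or postpone
    available[:], allocation[process] = new_available, new_allocation[process]
    return True

available  = [3, 1, 1, 2]
allocation = {"P1": [1, 2, 2, 1], "P2": [1, 0, 3, 3], "P3": [1, 2, 1, 0]}
maximum    = {"P1": [3, 3, 2, 2], "P2": [1, 2, 3, 4], "P3": [1, 3, 5, 0]}

print(request("P3", [0, 0, 2, 0], available, allocation, maximum))  # False: only 1 C free
print(request("P3", [0, 0, 1, 0], available, allocation, maximum))  # True: new state is safe
print(request("P2", [0, 1, 0, 0], available, allocation, maximum))  # False: would be unsafe
```

The three calls mirror the three example requests worked through in the following section (note that the third call here runs against the state left by the second, already-granted request; the outcome is still a denial).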

Example

Starting in the same state as the previous example started in, assume process 3 requests 2 units of resource C.
  1. There is not enough of resource C available to grant the request
  2. The request is denied

On the other hand, assume process 3 requests 1 unit of resource C.
  1. There are enough resources to grant the request
  2. Assume the request is granted
    • The new state of the system would be:
    Available system resources
     A B C D
Free 3 1 0 2
    Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 0 3 3
P3   1 2 2 0
    Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 3 5 0
  3. Determine if this new state is safe
    1. P1 can acquire 2 A, 1 B and 1 D resources and terminate
    2. Then, P2 can acquire 2 B and 1 D resources and terminate
    3. Finally, P3 can acquire 1B and 3 C resources and terminate
    4. Therefore, this new state is safe
  4. Since the new state is safe, grant the request

Finally, from the state we started at, assume that process 2 requests 1 unit of resource B.
  1. There are enough resources
  2. Assuming the request is granted, the new state would be:
    Available system resources:
     A B C D
Free 3 0 1 2
    Processes (currently allocated resources):
     A B C D
P1   1 2 2 1
P2   1 1 3 3
P3   1 2 1 0
    Processes (maximum resources):
     A B C D
P1   3 3 2 2
P2   1 2 3 4
P3   1 3 5 0
  3. Is this state safe? Each process still needs at least one more unit of resource B to reach its maximum, but none is available:
    • P1 is unable to acquire enough B resources
    • P2 is unable to acquire enough B resources
    • P3 is unable to acquire enough B resources
    • No process can acquire enough resources to terminate, so this state is not safe
  4. Since the state is unsafe, deny the request

Limitations

Like other algorithms, the Banker's algorithm has some limitations when implemented. Specifically, it needs to know how much of each resource a process could possibly request. In most systems, this information is unavailable, making it impossible to implement the Banker's algorithm. Also, it is unrealistic to assume that the number of processes is static, since in most systems the number of processes varies dynamically. Moreover, the requirement that a process will eventually release all its resources (when it terminates) is sufficient for the correctness of the algorithm; however, it is not sufficient for a practical system. Waiting for hours (or even days) for resources to be released is usually not acceptable.

(b)        Why must the bitmap image for the file allocation be kept on mass storage rather
            than in main memory?                                                                                                 (3)       

Ans: In case of a system crash (memory failure), the free-space list would not be lost, as it would be if the bit map had been stored in main memory.

Question 2:

Explain the Bell and La Padula model for security and protection. Why is security a critical
issue in a Distributed OS environment?

Bell and La Padula Model
Bell and La Padula use finite-state machines to formalise their model. They define the various components of the finite state machine, define what it means (formally) for a given system state to be secure, and then consider the transitions that can be allowed so that a secure state can never lead to an insecure state.
In addition to the subjects and objects of the access matrix model, it includes the security levels of the military security system: each subject has an authority (clearance) and each object has a classification. Each subject also has a current security level, which may not exceed the subject’s authority. The access matrix is defined as above, and four modes of access are named and specified as follows:
Read-only: subject can read the object but not modify it;
Append: subject can write the object but cannot read it;
Execute: subject can execute the object but cannot read or write it directly; and
Read-write: subject can both read and write the object.

A control attribute, which is like an ownership flag, is also defined. It allows a subject to pass to other subjects some or all of the access modes it possesses for the controlled object. The control attribute itself cannot be passed to other subjects; it is granted to the subject that created the object.
For a system state to be secure, it should hold the following two properties:
(1) The simple security property: No subject has read access to any object that has a classification greater than the clearance of the subject; and
(2) The *-property (pronounced “star-property”): No subject has append-access to an object whose security level is not at least the current security level of the subject; no subject has read-write access to an object whose security level is not equal to the current security level of the subject; and no subject has read access to an object whose security level is not at most the current security level of the subject.
An example restatement of the model is discussed below:
• In this case, we use security levels as (unclassified < confidential < secret < top-secret).

The security levels are used to determine appropriate access rights.
The essence of the model can be defined as follows:
• A higher-level subject (e.g., secret level) can always “read-down” to objects whose level is either equal (e.g., secret level) or lower (e.g., confidential / unclassified level). So the system high (the top security level in the system) has read-only access to all the objects in the entire system.
• A lower-level subject can never “read-up” to higher-level objects, as the model holds that such subjects do not have enough authority to read high security level information.
• A subject has the Read-Write access right when the object is at the same level as the subject.
• A lower-level subject (e.g., confidential level) can always “write-up” (the Append access right) to objects whose level is either equal (e.g., confidential level) or higher (e.g., secret / top-secret level). This works because all the subjects at the higher security levels (e.g., secret / top-secret level) have the authority to read objects from lower levels.
• A higher-level subject can never “write-down” to lower-level objects, as the model holds that these objects are not secure enough to handle high security level information.
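The read-down / write-up rules can be sketched as a small access-check function. This is our own illustration of the two properties; the level names come from the example above, but the function and its interface are assumptions, not part of the model's formal statement.

```python
# An illustrative check of the simple security property and the *-property,
# with the hierarchy unclassified < confidential < secret < top-secret.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def allowed(subject_level, object_level, mode):
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if mode == "read-only":    # simple security property: no read-up
        return s >= o
    if mode == "append":       # *-property: no write-down
        return o >= s
    if mode == "read-write":   # *-property: levels must match exactly
        return s == o
    raise ValueError("unknown access mode")

print(allowed("secret", "confidential", "read-only"))   # True: read-down
print(allowed("confidential", "secret", "read-only"))   # False: no read-up
print(allowed("confidential", "top-secret", "append"))  # True: write-up
print(allowed("secret", "unclassified", "append"))      # False: no write-down
print(allowed("secret", "secret", "read-write"))        # True: same level only
```

(The Execute mode is omitted here, since it neither reads nor writes the object directly and so is not constrained by these two properties.)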

There are also a few problems associated with this model:
• For a higher-level subject, all of its information is considered to be at that level. So no information can pass from this subject to any object at a lower level.
• For an environment where security levels are not hierarchically related, the model does not account for the transfer of information between those domains.
• The model does not allow changes in access permission.
• The model deals only with confidentiality, not with integrity.

Question 3:

Provide a solution to the Readers-Writers problem using semaphores. Explain with an
example.                                                                                                                                  (10)

Readers and Writers Problem
The readers/writers problem is one of the classic synchronization problems. It is often used to compare and contrast synchronization mechanisms, and it is also a practical problem that arises frequently. A common paradigm in concurrent applications is the isolation of shared data, such as a variable, buffer, or document, and the control of access to that data. This problem has two types of clients accessing the shared data. The first type, referred to as readers, only wants to read the shared data. The second type, referred to as writers, may want to modify the shared data. There is also a designated central data server or controller. It enforces exclusive write semantics: if a writer is active, then no other writer or reader can be active. The server can support clients that wish to both read and write. The readers and writers problem is useful for modelling processes which are competing for a limited shared resource. Let us understand it with the help of a practical example:
An airline reservation system consists of a huge database with many processes that read and write the data. Reading information from the database will not cause a problem since no data is changed. The problem lies in writing information to the database. If no constraints are put on access to the database, data may change at any moment. By the time a reading process displays the result of a request for information to the user, the actual data in the database may have changed. What if, for instance, a process reads the number of available seats on a flight, finds a value of one, and reports it to the customer? Before the customer has a chance to make their reservation, another process makes a reservation for another customer, changing the number of available seats to zero.
The following is the solution using semaphores:
Semaphores can be used to restrict access to the database under certain conditions. In this example, semaphores are used to prevent any writing processes from changing information in the database while other processes are reading from the database.
semaphore mutex = 1;        // controls access to reader_count
semaphore db = 1;           // controls access to the database
int reader_count = 0;       // the number of reading processes accessing the data

// Reader process
while (TRUE) {                        // loop forever
    down(&mutex);                     // gain access to reader_count
    reader_count = reader_count + 1;  // increment reader_count
    if (reader_count == 1)
        down(&db);                    // if this is the first process to read the
                                      // database, a down on db is executed to
                                      // prevent access by a writing process
    up(&mutex);                       // allow other processes to access reader_count
    read_db();                        // read the database
    down(&mutex);                     // gain access to reader_count
    reader_count = reader_count - 1;  // decrement reader_count
    if (reader_count == 0)
        up(&db);                      // if there are no more processes reading from
                                      // the database, allow writing processes in
    up(&mutex);                       // allow other processes to access reader_count
    use_data();                       // use the data read from the database (non-critical)
}

// Writer process
while (TRUE) {                        // loop forever
    create_data();                    // create data to enter into database (non-critical)
    down(&db);                        // gain exclusive access to the database
    write_db();                       // write information to the database
    up(&db);                          // release exclusive access to the database
}
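The same scheme can be restated with Python's threading primitives as a runnable sketch. The function names (reader, writer) and the toy "seats" database are our own stand-ins for the airline example; this is an illustration, not the assignment's required form.

```python
# The readers/writers pseudocode above, restated with Python semaphores.
import threading

mutex = threading.Semaphore(1)   # protects reader_count
db = threading.Semaphore(1)      # exclusive access to the database
reader_count = 0
database = {"seats": 1}          # toy stand-in for the reservation database
log = []                         # what each reader observed

def reader(name):
    global reader_count
    with mutex:                  # gain access to reader_count
        reader_count += 1
        if reader_count == 1:    # first reader locks out writers
            db.acquire()
    log.append((name, database["seats"]))   # read the database
    with mutex:
        reader_count -= 1
        if reader_count == 0:    # last reader lets writers back in
            db.release()

def writer():
    with db:                     # exclusive access to the database
        database["seats"] -= 1   # make a reservation

threads = [threading.Thread(target=reader, args=(f"R{i}",)) for i in range(3)]
threads.append(threading.Thread(target=writer))
for t in threads: t.start()
for t in threads: t.join()
print(database["seats"])   # 0: the writer ran with exclusive access
```

Because the writer decrements the seat count under the db semaphore, every reader sees either the old value or the new one, never a half-updated state.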

Question 4:

Write and explain an algorithm used for ordering of events in distributed environments.
Implement the algorithm with an example and explain.                                                                       (10)

Question 5:

What do you understand by disk scheduling? Calculate the total head movement using FCFS,
SSTF and SCAN disk scheduling algorithms for the given block sequences:

91, 130, 150, 30, 100, 120, 50
Initially the head is at cylinder number 0. Draw the diagram for all.                                        (10)

The disk is a resource which has to be shared. It therefore has to be scheduled for use according to some kind of scheduling discipline. The secondary storage media structure is one of the vital parts of the file system, and disks provide the bulk of secondary storage. Compared to magnetic tapes, disks have much faster access times and higher bandwidth. The access time has two major constituents: seek time and rotational latency.
The seek time is the time required for the disk arm to move the head to the cylinder with the desired sector. The rotational latency is the time required for the disk to rotate the desired sector until it is under the read-write head. The disk bandwidth is the total number of bytes transferred per unit time.
Both the access time and the bandwidth can be improved by efficient scheduling of disk I/O requests. Disk drives appear as large one-dimensional arrays of logical blocks to be transferred. Because disks are so heavily used, proper scheduling algorithms are required.
A scheduling policy should attempt to maximize throughput (defined as the number of requests serviced per unit time) and also to minimize mean response time (i.e., average waiting time plus service time). These scheduling algorithms are discussed below:
FCFS Scheduling
First-Come, First-Served (FCFS) is the basis of this simplest disk scheduling technique. There is no reordering of the queue. Suppose requests for inputting/outputting to blocks on the cylinders have arrived, forming the following disk queue (note that this worked example uses a different queue from the one given in the question):
50, 91, 150, 42, 130, 18, 140, 70, 60
Also assume that the disk head is initially at cylinder 50; it then moves to 91, then to 150, and so on. The total head movement in this scheme is 610 cylinders, which makes the system slow because of the wild swings of the arm. Servicing requests while moving in one particular direction could decrease the head movement and improve performance. The FCFS scheme is depicted in Figure 1.

Figure 1: FCFS Scheduling

SSTF Scheduling
The basis for this algorithm is Shortest-Seek-Time-First (SSTF) i.e., service all the requests close to the current head position and with minimum seeks time from current head position.
In the previous disk queue sample, the cylinder closest to the current head position, 50, is cylinder 42; the next closest request is at 60. From there, the closest one is 70, then 91, 130, 140, 150 and finally cylinder 18. This scheme reduces the total head movement to 248 cylinders and hence improves performance. Like SJF (Shortest Job First) CPU scheduling, SSTF suffers from a starvation problem, because requests may arrive at any time. Suppose we have requests in the disk queue for cylinders 18 and 150, and while servicing the 18-cylinder request, some other request close to it arrives; that request will be serviced next. This can continue, making the request at cylinder 150 wait for a long time. Thus a continual stream of requests near one another could keep a far-away request waiting indefinitely. SSTF is not optimal scheduling, due to this starvation problem. The whole scheduling is shown in Figure 2.
Figure 2: SSTF Scheduling

SCAN Scheduling
The disk arm starts at one end of the disk and services all the requests on its way towards the other end; when it reaches the other end, the head movement is reversed and servicing continues in the reverse direction. This scanning is done back and forth by the head continuously.
In the example problem, two things must be known before starting the scanning process: the initial head position, i.e., 50, and the head movement direction (let it move towards cylinder 0). Consider the disk queue again: 91, 150, 42, 130, 18, 140, 70, 60.
Starting from 50, the head moves towards 0, servicing requests 42 and 18 on the way. At cylinder 0 the direction is reversed and the arm moves towards the other end of the disk, servicing the requests at 60, 70, 91, 130, 140 and finally 150.
Because the arm behaves like an elevator in a building, the SCAN algorithm is sometimes also known as the elevator algorithm. The limitation of this scheme is that some requests may have to wait a long time because of the reversal of head direction. This scheduling algorithm results in a total head movement of only 200 cylinders. Figure 3 shows this scheme:

Figure 3: SCAN Scheduling
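The totals quoted above, and the totals for the sequence the question actually asks about (91, 130, 150, 30, 100, 120, 50 with the head initially at cylinder 0), can be checked with a short Python sketch. The helper functions are our own illustration, not part of the original answer.

```python
# Total head movement under FCFS, SSTF and SCAN.

def fcfs(head, requests):
    total = 0
    for r in requests:            # service requests strictly in arrival order
        total += abs(head - r)
        head = r
    return total

def sstf(head, requests):
    pending, total = list(requests), 0
    while pending:                # always pick the request nearest the head
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

def scan(head, requests, toward_zero=True):
    below = [r for r in requests if r < head]
    above = [r for r in requests if r > head]
    if toward_zero:
        # sweep down to cylinder 0, then back up to the highest request
        return head + (max(above) if above else 0)
    # sweep up to the highest request, then back down to the lowest one
    top = max(above) if above else head
    return (top - head) + ((top - min(below)) if below else 0)

worked = [91, 150, 42, 130, 18, 140, 70, 60]      # worked example, head at 50
print(fcfs(50, worked), sstf(50, worked), scan(50, worked))   # 610 248 200

asked = [91, 130, 150, 30, 100, 120, 50]          # the question's queue, head at 0
print(fcfs(0, asked), sstf(0, asked), scan(0, asked, toward_zero=False))   # 430 150 150
```

For the question's sequence, starting at cylinder 0, the totals come out to 430 (FCFS), 150 (SSTF) and 150 (SCAN, which simply sweeps upward since the head starts at the lowest cylinder).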

Question 6:

Explain memory management schemes in Windows 2000. List at least three system calls related to
memory management in Windows 2000 and explain.                                                             (10)

Windows 2000 has an advanced virtual memory management system. It provides a number of Win32 functions for using it, and parts of the executive plus six dedicated kernel threads for managing it. In Windows 2000, each user process has its own virtual address space, which is 32 bits long (4 GB of virtual address space). The lower 2 GB minus approximately 256 MB are reserved for the process’s code and data; the upper 2 GB map onto kernel memory in a protected way. The virtual address space is demand paged with a fixed page size. The virtual address space for a user process is shown in Figure 4 below:

Figure 4: Virtual Address space layout for user processes

The bottom and top 64 KB of a process's virtual address space are normally unmapped. This is done intentionally to catch programming errors, since invalid pointers are often 0 or –1. From 64 KB onward comes the user’s private code and data, which extends up to 2 GB. Some system counters and timers are kept near the top of that 2 GB; making them visible there allows processes to access them without the overhead of a system call. The next 2 GB of virtual address space contains the operating system: code, data, and the paged and non-paged pools. The upper 2 GB is shared by all user processes, except the page tables, which are reserved exclusively for each process’s own use. The upper 2 GB of memory is not writable and mostly not even readable by user-mode processes. When a thread makes a system call, it traps into kernel mode and keeps on running in the same thread. This makes the whole OS and all of its data structures, as well as the user process, visible within the thread’s address space by switching to the thread’s kernel stack. The result is less private address space per process in return for faster system calls. Large database servers run short of address space, which is why a 3-GB user space option is available on Advanced Server and Datacenter Server.
There are three states available for each virtual space: free, reserved, or committed. A free page is not currently in use and reference to it results in page fault. When a process is initiated, all pages are in free state till the program and initial data are loaded onto the address space. A page is said to be committed when code or data is mapped onto a page. A committed page is referenced using virtual memory hardware and succeeds if the page is available in the main memory. A virtual page in reserved state cannot be mapped until the reservation is explicitly removed. In addition to the free, reserved, and committed states, pages also have other states like readable, writable, and executable.
In order to avoid wasting disk space, Windows 2000 does not assign a disk page to committed pages that are not permanently on disk (e.g., stack pages) until they have to be paged out. This strategy makes the system more complex, but on the other hand no disk space need be allocated for pages that are never paged out.
Free and reserved pages do not have shadow pages on disk, and a reference to these pages causes a page fault. The shadow pages on the disk are arranged into one or more paging files. It is possible to have up to 16 paging files, spread over 16 separate disks for higher I/O bandwidth, each created with an initial and a maximum size. These files can be created at the maximum size at system installation time to avoid fragmentation.
Like many versions of UNIX, Windows 2000 allows files to be mapped directly onto regions of the virtual address space. A mapped file can be read or written using ordinary memory references. Memory-mapped files are implemented in the same way as other committed pages, except that the shadow pages are in the user’s file instead of in the paging file. Because of recent writes to the virtual memory, the version in memory may not be identical to the disk version; on unmapping or explicit flushing, the disk version is brought up to date. Windows 2000 permits two or more processes to map onto the same portion of a file at the same time, at different virtual addresses, as depicted in Figure 5. The processes can communicate with each other and pass data back and forth at very high bandwidth, since no copying is required. The processes may have different access permissions. A change made by one process is immediately visible to all the others, even if the disk file has not yet been updated.

There is another technique called copy-on-write, where data can be safely written without affecting other users or the original copy on disk. This is to handle a situation where two programs share a DLL file and one of them changes the file’s static data.
To allow programs to use more physical memory than fits in the address space, an old technique called bank switching was used, in which a program could substitute some block of memory above the 16-bit or 20-bit limit for a block of its own memory. (Code that can be placed at any virtual address without relocation is called position-independent code.) When 32-bit machines came along, it was thought that there would be enough address space forever. Now, however, large programs often need more than the 2 GB or 3 GB of user address space Windows 2000 allocates to them, so bank switching is back, now called address windowing extensions. This technique is only used on servers with more than 2 GB of physical memory.
System Calls
Win32 API contains a number of functions that allow a process to manage its virtual memory explicitly. Some of the important ones are listed in Table 6; they include VirtualAlloc, which reserves or commits a region; VirtualFree, which releases or decommits a region; VirtualProtect, which changes the read/write/execute protection on a region; and VirtualQuery, which inquires about the status of a region. In addition, ReadProcessMemory and WriteProcessMemory allow one process to access the virtual memory of another.
Table 6: Important Win32 API functions for managing virtual memory in Windows 2000.

These functions operate on a region consisting of one page or a sequence of pages that are consecutive in the virtual address space. To minimize porting problems to future architectures with pages larger than existing ones, allocated regions always begin on 64-KB boundaries, and the size of a region is a multiple of 64 KB. Windows 2000 also has API functions that allow a process to access the virtual memory of a different process over which it has been given control, i.e., for which it has a handle.
Windows 2000 supports a single linear 4-GB demand-paged address space per process; segmentation is not supported in any form. Page sizes are shown in Table 7 below:

The operating system can use 4-MB pages to reduce the page table space consumed. Whereas the scheduler selects individual threads to run and does not care much about processes, the memory manager deals with processes and does not care much about threads. As virtual address space is allocated for a process, the memory manager creates a VAD (Virtual Address Descriptor) for it. A VAD records the range of addresses mapped, the backing-store file and the offset where it is mapped, and the protection code. When the first page is touched, the directory of page tables is created and a pointer to it is added to the VAD. An address space is completely defined by the list of its VADs.
There is no concept of pre-paging in Windows 2000. All pages are brought into memory dynamically as page faults occur. On each page fault:
• A trap to the kernel occurs
• The kernel builds a machine-independent descriptor and passes it to the memory manager
• The memory manager checks it for validity
• If the faulted page falls within a committed or reserved region, the memory manager looks up the address in the list of VADs, finds (or creates) the page table, and looks up the relevant entry.

Question 7:
Explain any two mutual exclusion algorithms in Distributed systems.                                     (10)

Mutual Exclusion Servers

Consider a print spooler. The problem of storage space in the spooler typically becomes acute with graphics printers. In such a context, it is desirable to block an applications process until the printer is ready to accept data from that process, and then let the process deliver its data directly to the printer. For example, an applications process may be coded as follows:
Send request for permission to the spooler
Await reply giving permission to print
Loop
    Send data directly to printer
End Loop
Send notice to spooler that printing is done
If all users of the spooler use this protocol, the spooler is no longer serving as a spooler, it is merely serving as a mutual exclusion mechanism! In fact, it implements exactly the same semantics as a binary semaphore, but it implements it using a client server model.
The process implementing a semaphore using message passing might have the following basic structure:
Begin Loop
   Await message from client
   Case message type of
      P (wait):
         If count > 0
            count = count - 1
            Send immediate reply
         Else
            Enqueue identity of client
         End if
      V (signal):
         If queue is empty
            count = count + 1
         Else
            Dequeue one blocked client
            Send a delayed reply
         End if
   End case
End Loop
This requires a count and a queue of return addresses for each semaphore. Note that, by presenting this, we have proven that blocking message passing can be used to implement semaphores. Since we already know that semaphores plus shared memory are sufficient to implement blocking message passing, we have proven the equivalence, from a computation theory viewpoint, of these two models of interprocess communication.
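The server structure above can be sketched in Python using thread-safe queues as message channels; the message names ("P", "V") and the helper functions are our own:

```python
import threading, queue
from collections import deque

# A semaphore implemented as a message-passing server.
# Clients send ("P", reply_queue) to wait and ("V", None) to signal.
requests = queue.Queue()

def semaphore_server(initial=1):
    count = initial
    waiting = deque()                         # reply channels of blocked clients
    while True:
        msg, reply = requests.get()
        if msg == "P":                        # wait
            if count > 0:
                count -= 1
                reply.put("go")               # immediate reply
            else:
                waiting.append(reply)         # enqueue the client's identity
        elif msg == "V":                      # signal
            if not waiting:
                count += 1
            else:
                waiting.popleft().put("go")   # delayed reply unblocks one client

def acquire():
    reply = queue.Queue()
    requests.put(("P", reply))
    reply.get()                               # block until the server replies

def release():
    requests.put(("V", None))

threading.Thread(target=semaphore_server, daemon=True).start()

counter = 0
def worker():
    global counter
    for _ in range(100):
        acquire()
        counter += 1                          # critical section
        release()

workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(counter)                                # 400 = 4 workers x 100 increments
```

Note that the workers never touch the count directly; every entry and exit goes through the server, which is exactly the client-server semaphore described above.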
The disadvantage of implementing semaphores using a server process is that the server becomes a potential source of reliability problems. If we can build a mutual exclusion algorithm that avoids the use of a dedicated server (for example, by having the processes that compete for entry to a critical section negotiate directly with each other), we can potentially eliminate the reliability problem.

Token Based Mutual Exclusion

One alternative to the mutual-exclusion server given above is to arrange the competing processes in a ring and let them exchange a token. If a process receives the token and does not need exclusive use of the resource, it must pass the token on to the next process in the ring. If a process needs exclusive use of the resource, it waits for the token and then holds it until it is done with the resource, at which point it puts the token back in circulation.
This is the exact software analog of a token ring network. In a token ring network, only one process at a time may transmit, and the circulating token is used to assure this, exactly as described. The token ring network protocol was developed for the hardware level or the link-level of the protocol hierarchy. Here, we are proposing building a virtual ring at or above the transport layer and using essentially the same token passing protocol.
This solution is not problem free. What if the token is lost? What if a process in the ring ceases transmission? Nonetheless, it is at the root of a number of interesting and useful distributed mutual exclusion algorithms. The advantage of such distributed algorithms is that they do not rest on a central authority, and thus, they are ideal candidates for use in fault tolerant applications.
An important detail in the token-based mutual exclusion algorithm is that, on receiving a token, a process must immediately forward the token if it is not waiting for entry into the critical section. This may be done in a number of ways:
• Each process could periodically check to see if the token has arrived. This requires some kind of non-blocking read service to allow the process to poll the incoming network connection on the token ring. The UNIX FNDELAY flag allows non-blocking read, and the UNIX select() kernel call allows testing an I/O descriptor to see if a read from that descriptor would block; either of these is sufficient to support this polling implementation of the token passing protocol. The fact that UNIX offers two such mechanisms is good evidence that these are afterthoughts added after the original implementation was complete.
• The receipt of an incoming token could cause an interrupt. Under UNIX, for example, the SIGIO signal can be attached to a socket or communications line (see the FASYNC flag set by fcntl). To await the token, the process could disable the SIGIO signal and do a blocking read on the incoming token socket. To exit the critical section, the process could first enable SIGIO and then send the token. The SIGIO handler would read the incoming token and forward it before returning.
• A thread or process could be dedicated to the job of token management. We’ll refer to such a thread or process as the mutual exclusion agent of the application process. Typically, the application would communicate with its agent using shared memory, semaphores, and other uni-processor tools, while the agent speaks to other agents over the net. When the user wants entry to the critical section, it sets a variable to “let me in” and then does a wait on the entry semaphore it shares with the agent. When the user is done, the user sets the shared variable to “done” and then signals the go-on semaphore it shares with the agent. The agent always checks the shared variable when it receives the token, and only forwards it when the variable is equal to “done”.
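The ring protocol described above can be sketched with Python threads and one queue per link. The shutdown handling and the fixed number of entries per process are simplifications added for this demonstration:

```python
import threading, queue

N, ROUNDS = 3, 2
links = [queue.Queue() for _ in range(N)]   # links[i] delivers the token to process i
remaining = [ROUNDS] * N                    # critical-section entries still wanted
log = []                                    # order of critical-section entries

def process(i):
    while True:
        token = links[i].get()              # wait for the token to arrive
        if token is None:                   # shutdown marker
            return
        if remaining[i] > 0:                # we want the resource: hold the token
            log.append(i)                   # --- critical section ---
            remaining[i] -= 1
        if sum(remaining) == 0:             # everyone is done: retire the token
            for j in range(N):
                if j != i:
                    links[j].put(None)
            return
        links[(i + 1) % N].put(token)       # pass the token around the ring

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
links[0].put("TOKEN")                       # inject the single token
for t in threads:
    t.join()
print(log)                                  # entries alternate strictly around the ring
```

Because there is only one token, at most one thread is ever past its get() call at a time, which is precisely the mutual exclusion property; a process that does not want the resource forwards the token immediately.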

Lamport's Bakery Algorithm

One decentralised algorithm in common use, for example, in bakeries, is to issue numbers to each customer. When the customers want to access the scarce resource (the clerk behind the counter), they compare the numbers on their slips and the user with the lowest numbered slip wins.
The problem with this is that there must be some way to distribute numbers, but this has been solved. In bakeries, we use a very small server to distribute numbers, in the form of a roll of tickets where conflicts between two customers are solved by the fact that human hands naturally exclude each other from the critical volume of space that must be occupied to take a ticket. We cannot use this approach for solving the problem on a computer.
Before going on to more interesting implementations for distributing numbers, note that clients of such a protocol may make extensive use of their numbers! For example, if the bakery contains multiple clerks, the clients could use their number to select a clerk (number modulo number of clerks). Similarly, in a FIFO queue implemented with a bounded buffer, the number modulo the queue size could indicate the slot in the buffer to be used, allowing multiple processes to simultaneously place values in the queue.
Lamport’s Bakery Algorithm provides a decentralised implementation of the “take a number” idea. As originally formulated, this requires that each competing process share access to an array, but later distributed algorithms have eliminated this shared data structure. Here is the original formulation:
For each process, i, there are two values, C[i] and N[i], giving the status of process i and the number it has picked. In more detail:
N[i] = 0 --> process i is not in the bakery.
N[i] > 0 --> process i has picked a number and is in the bakery.
C[i] = 0 --> process i is not trying to pick a number.
C[i] = 1 --> process i is trying to pick a number.
N[i] = min( for all j, N[j] where N[j] > 0 ) --> process i is allowed into the critical section.
Here is the basic algorithm used to pick a number:
C[i] := 1;
N[i] := max( for all j, N[j] ) + 1;
C[i] := 0;
In effect, the customer walks into the bakery, checks the numbers of all the waiting customers, and then picks a number one larger than the number of any waiting customer.
If two customers each walk in at the same time, they are each likely to pick the same number. Lamport’s solution allows this but then makes sure that customers notice that this has happened and break the tie in a sensible way.
To help the customers detect ties, each customer who is currently picking a number holds his hand up (by setting C[i] to 1); s/he pulls the hand down (setting C[i] back to 0) when done selecting a number. Note that selecting a number may take time, since it involves inspecting the numbers of everyone else in the waiting room.
A process does the following to wait for the baker:
Step 1:
while (for some j, C[j] = 1) do nothing;
First, wait until any process which might have tied with you has finished selecting their numbers. Since we require customers to raise their hands while they pick numbers, each customer waits until all hands are down after picking a number in order to guarantee that all ties will be cleanly recognised in the next step.
Step 2:
repeat
W := ( the set of j such that N[j] > 0 )
(W is the set of indices of waiting processes)
M := ( the set of j in W such that N[j] <= N[k] for all k in W )
(M is the set of process indices with minimum numbers)
j := min(M)
(j is in M; the tie is broken by taking the smallest index)
until i = j;
Second, wait until your ticket number is the minimum of all tickets in the room. There may be others with this minimum number, but in inspecting all the tickets in the room, you found them! If you find a tie, see if your customer ID number is less than the ID numbers of those with whom you’ve tied, and only then enter the critical section and meet with the baker.
This is inefficient, because you might wait a bit too long while some other process picks a number after the number you picked, but for now, we'll accept this cost.
If you are not the person holding the smallest number, you start checking again. If you hold the smallest number, it is also possible that someone else holds the smallest number. Therefore, what you’ve got to do is agree with everyone else on how to break ties.
The solution shown above is simple. Instead of computing the value of the smallest number, compute the minimum process ID among the processes that hold the smallest value. In fact, we need not seek the minimum process ID, all we need to do is use any deterministic algorithm that all participants can agree on for breaking the tie. As long as all participants apply the same deterministic algorithms to the same information, they will arrive at the same conclusion.
To return its ticket and exit the critical section, a process executes the following trivial bit of code:
N[i] := 0; 
When you return your ticket, if any other processes are waiting, then on their next scan of the set of processes, one of them will find that it is holding the winning ticket.
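Here is a minimal Python sketch of the shared-memory form of the algorithm. Ties are broken by comparing (ticket, process id) pairs, and time.sleep(0) simply yields the processor while spinning:

```python
import threading, time

N = 3
choosing = [False] * N      # C[i]: process i is picking a number
number = [0] * N            # N[i]: ticket of process i (0 = not waiting)
counter = 0                 # shared variable the lock protects

def lock(i):
    choosing[i] = True
    number[i] = max(number) + 1            # take a ticket above all others seen
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:                 # wait until j finishes picking
            time.sleep(0)
        while number[j] != 0 and (number[j], j) < (number[i], i):
            time.sleep(0)                  # wait while j holds a winning ticket

def unlock(i):
    number[i] = 0                          # return the ticket

def worker(i):
    global counter
    for _ in range(50):
        lock(i)
        counter += 1                       # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(k,)) for k in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                             # 150 = 3 processes x 50 entries
```

This sketch relies on reads and writes of individual list elements being atomic, which holds in CPython; the algorithm itself tolerates a stale max() because ties are resolved deterministically by process id.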

Moving to a Distributed Context

In the context of distributed systems, Lamport's bakery algorithm has the useful property that process i only modifies its own N[i] and C[i], while it must read the entries for all others. In effect, therefore, we can implement this in a context where each process has read-only access to the data of all other processes, and read-write access only to its own data.
A distributed implementation of this algorithm can be produced directly by storing N[i] and C[i] locally with process i, and using message passing when any process wants to examine the values of N and C for any process other than itself. In this case, each process must be prepared to act as a server for messages from the others requesting the values of its variables; we have the same options for implementing this service as we had for the token passing approach to mutual exclusion. The service could be offered by an agent process, by an interrupt service routine, or by periodic polling of the appropriate incoming message queues.
Note that we can easily make this into a fault tolerant model by using a fault-tolerant client-server protocol for the requests. If there is no reply to a request for the values of process i after some interval and a few retries, we can simply assume that process i has failed.
This demonstrates that fault-tolerant mutual exclusion can be done without any central authority! This direct port of Lamport’s bakery algorithm is not particularly efficient, though. Each process must read the variables of all other processes a minimum of three times: once to select a ticket number, once to see if anyone else is in the process of selecting a number, and once to see if it holds the minimum ticket.
For each process contending for entry to the critical section, there are about 6N messages exchanged, which is clearly not very good. Much better algorithms have been devised, but even this algorithm can be improved by taking advantage of knowledge of the network structure. On an Ethernet or on a tree-structured network, a broadcast can be done in parallel, sending one message to N recipients in only a few time units. On a tree-structured network, the reply messages can be merged on the way to the root (the originator of the request) so that sorting and searching for the maximum N or the minimum nonzero N can be distributed efficiently.

Ricart and Agrawala's Mutual Exclusion Algorithm

Another alternative is for anyone wishing to enter a critical section to broadcast their request; as each process agrees that it is OK to enter the section, they reply to the broadcaster saying that it is OK to continue; the broadcaster only continues when all replies are in.
If a process is in a critical section when it receives a request for entry, it defers its reply until it has exited the critical section, and only then does it reply. If a process is not in the critical section, it replies immediately.
This sounds like a remarkably naive algorithm, but with point-to-point communications between N processes, it takes only 2(N-1) messages for a process to enter the critical section, N-1 messages to broadcast the request and N-1 replies.
There are some subtle issues that make the result far from naive. For example, what happens if two processes each ask at the same time? What should be done with requests received while a process is waiting to enter the critical section?
Ricart and Agrawala’s mutual exclusion algorithm solves these problems. In this solution, each process has three significant states, and its behaviour in response to messages from others depends on its state:
• Outside the critical section: the process replies immediately to every entry request.
• Requesting entry (awaiting permission to enter): the process replies immediately to higher-priority requests and defers all other replies until it exits the critical section.
• Inside the critical section: the process defers all replies until it exits the critical section.
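The reply rule reduces to a few comparisons. This sketch uses invented state names, with requests ordered by (timestamp, process id) pairs so that earlier requests have higher priority:

```python
# Decision a process makes on receiving an entry request (req_ts, req_id),
# given its own state and its own pending request (own_ts, own_id).
def on_request(state, own_ts, own_id, req_ts, req_id):
    if state == "RELEASED":                    # outside the critical section
        return "reply"
    if state == "HELD":                        # inside the critical section
        return "defer"
    # state == "WANTED": reply only to higher-priority (earlier) requests
    return "reply" if (req_ts, req_id) < (own_ts, own_id) else "defer"

print(on_request("RELEASED", 0, 1, 5, 2))      # reply immediately
print(on_request("WANTED", 3, 1, 5, 2))        # our request is earlier: defer
print(on_request("WANTED", 7, 1, 5, 2))        # their request is earlier: reply
```

Breaking timestamp ties by process id plays the same role here as it does in the bakery algorithm: all participants apply the same deterministic rule and so agree on who goes first.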
As with Lamport’s bakery algorithm, this algorithm has no central authority. Nonetheless, the interactions between a process requesting entry to a critical section and each other process have a character similar to client-server interactions. That is, the interactions take the form of a request followed (possibly some time later) by a reply.
As such, this algorithm can be made fault tolerant by applying the same kinds of tricks as are applied in other client server applications. On receiving a request, a processor can be required to immediately send out either a reply or a negative acknowledgement. The latter says “I got your request and I can’t reply yet!”
With such a requirement, the requesting process can wait for either a reply or a negative acknowledgement from every other process. If it gets neither, it can retry the request to that process. If it retries some limited number of times and still gets no answer, it can assume that the distant process has failed and give up on it.
If a process receives two consecutive requests from the same process because acknowledgements have been lost, it must resend the acknowledgement. If a process waits a long time and doesn’t get an acknowledgement, it can send out a message saying “are you still there”, to which the distant process would reply “I got your request but I can’t reply yet”. If it gets no reply, it can retry some number of times and then give up on the server as being gone.
If a process dies in its critical section, the protocol above solves the problem and lets one of the surviving processes in. If a process dies outside its critical section, the protocol also works.

Question 8:

(a)        Explain working set model. Explain its concept as well as implementation.                (5)

Working-Set Model
Principle of Locality
Pages are not accessed randomly. At each instant of execution a program tends to use only a small set of pages. As the pages in the set change, the program is said to move from one phase to another. The principle of locality states that most references will be to the current small set of pages in use. The examples are shown below:
1) Instructions are fetched sequentially (except at branches), mostly from the same page.
2) Array processing usually proceeds sequentially through the array.
3) Functions repeatedly access variables in the top stack frame.

If we have locality, we are unlikely to suffer continual page faults. If a page consists of 1000 instructions in a self-contained loop, we will fault at most once to fetch all 1000 instructions.
Working Set Definition
The working set model is based on the assumption of locality. The idea is to examine the most recent page references; the pages used by them form the working set. If a page is in active use, it will be in the working set; if it is no longer being used, it will drop out of the working set.
The set of pages currently needed by a process is its working set.
WS(k) for a process P is the set of pages needed to satisfy the last k page references of P.
WS(t) is the set of pages needed to satisfy P’s page references over the last t units of time.
Either can be used to capture the notion of locality.
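The WS(k) definition is direct to express; the reference string below is invented for illustration:

```python
def working_set(refs, k):
    """Pages needed to satisfy the last k page references: WS(k)."""
    return set(refs[-k:])

refs = [1, 2, 1, 3, 2, 2, 4]      # a page-reference string showing locality
print(working_set(refs, 3))       # the last three references touch pages 2 and 4
print(len(working_set(refs, 3)))  # working-set size, as used by the policy below
```

The policy then keeps a process on the ready queue only while physical memory can hold the working sets (by this measure) of all ready processes.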
Working Set Policy
Restrict the number of processes on the ready queue so that physical memory can accommodate the working sets of all ready processes. Monitor the working sets of ready processes and, when necessary, reduce multiprogramming (i.e. swap) to avoid thrashing.
Note: Exact computation of the working set of each process is difficult, but it can be estimated, by using the reference bits maintained by the hardware to implement an aging algorithm for pages.
When loading a process for execution, pre-load certain pages. This prevents a process from having to “fault into” its working set. May be only a rough guess at start-up, but can be quite accurate on swap-in.

(b)        Compare Direct file organization with Indexed sequential file organization             (5)

Indexed Allocation: In this scheme each file has its own index block. Each entry of the index points to a disk block containing actual file data, i.e., the index keeps an array of block pointers for each file; the index block is thus an array of disk-block addresses. The ith entry in the index block points to the ith block of the file, and the main directory contains the address of each file’s index block on disk. Initially, all the pointers in the index block are set to NIL. The advantage of this scheme is that it supports both sequential and random access. Searching may take place within the index blocks themselves, and the index blocks may be kept close together in secondary storage to minimise seek time. Space is wasted only on the index, which is not very large, and there is no external fragmentation. But a few limitations of the previous scheme still exist: we still need to set a maximum file length, and we need an overflow scheme if the file grows larger than the predicted size. Insertions can also require complete reconstruction of the index blocks. The indexed allocation scheme is shown diagrammatically in Figure.
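As a toy sketch of the scheme (the pointer count and the "disk block" numbers are invented), the index block is simply an array of block pointers, initially NIL:

```python
BLOCK_PTRS = 8                        # maximum file length, fixed in advance

class IndexedFile:
    def __init__(self):
        self.index = [None] * BLOCK_PTRS   # all pointers initially NIL

    def allocate(self, i, disk_block):
        self.index[i] = disk_block         # ith entry -> ith block of the file

    def lookup(self, i):
        return self.index[i]               # random access: a single indirection

f = IndexedFile()
f.allocate(0, 17)                     # file block 0 lives in disk block 17
f.allocate(1, 42)
print(f.lookup(1))                    # 42
print(f.lookup(5))                    # None: this block was never written
```

The fixed size of the index array is exactly the maximum-file-length limitation noted above; growing past it requires an overflow scheme or rebuilding the index.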




