Tuesday, January 15, 2008

Exercises Chapter 5

1. A deadlock is a situation wherein two or more competing actions are each waiting for the other to finish, and thus neither ever does. Starvation is similar in effect to deadlock: two or more programs become deadlocked together when each of them waits for a resource occupied by another program in the same set, while one or more programs are in starvation when each of them is waiting for resources occupied by programs that may or may not be in the starving set. Moreover, in a deadlock, no program in the set ever changes its state. A race is a synchronization problem between two processes vying for the same resources.
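
To make the definition concrete, here is a minimal Java sketch (an illustration of my own, not part of the exercise) in which two threads each grab one lock and then wait forever for the lock the other holds:

// Two threads acquire the same two locks in opposite order.
// Each ends up holding one resource and waiting forever for the
// other: a deadlock exactly as defined above.
public class DeadlockDemo {
    private static final Object res1 = new Object();
    private static final Object res2 = new Object();

    public static void main(String[] args) {
        Thread a = new Thread(() -> {
            synchronized (res1) {
                pause(100);                 // give B time to grab res2
                synchronized (res2) { }     // waits for B forever
            }
        });
        Thread b = new Thread(() -> {
            synchronized (res2) {
                pause(100);                 // give A time to grab res1
                synchronized (res1) { }     // waits for A forever
            }
        });
        a.start();
        b.start();
    }

    private static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { }
    }
}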

2. A real-life example of deadlock is a traffic jam. A real-life example of starvation is when two people meet on a one-way path that can accommodate only one person at a time. A real-life example of a race is when two people arrive at the same door at the same time.

3. Four Necessary Conditions for Deadlock. The presence of deadlock in a system is characterized by these four necessary conditions; the term necessary means that if there is deadlock, then all four must be present.

a. Mutually exclusive resource access - A resource that has been acquired is held exclusively, i.e., it is not shared with other processes.

b. No preemption - A process' resources cannot be taken away from it. Only the process can give up its resources.

c. Hold and Wait - A process has some resources and is blocked requesting more.

d. Circularity - This means that there is a circular chain of two or more processes in which the resources needed by one process are held by the next process.
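
One common way to prevent deadlock (a standard technique, not taken from the exercise text) is to break condition (d) by imposing a single global order in which every process must acquire resources, so no circular chain can form. A minimal Java sketch:

// Every thread acquires the locks in the same global order
// (always lockA before lockB), which makes a circular wait impossible.
public class OrderedLocking {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    static void worker(String name) {
        synchronized (lockA) {            // always acquired first
            synchronized (lockB) {        // always acquired second
                System.out.println(name + " holds both resources");
            }
        }
    }

    public static void main(String[] args) {
        new Thread(() -> worker("T1")).start();
        new Thread(() -> worker("T2")).start();
    }
}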

4. Algorithm for prevention of deadlock and starvation:

public boolean tryAcquire( int n0, int n1, ... ) {
    if ( for all i: ni ≤ availi ) {    // successful acquisition
        availi -= ni for all i;
        return true;                   // indicate success
    } else
        return false;                  // indicate failure
}

init) Semaphore s = new Semaphore(1,1);

With this naive acquisition order, the two threads can deadlock, each holding the unit the other one needs:

Thread A              Thread B
--------              --------
s.acquire(1,0);       s.acquire(0,1);
s.acquire(0,1);       s.acquire(1,0);

A deadlock-free version lets Thread B back off and retry:

Thread B
--------
while(true) {
    s.acquire(0,1);
    if ( s.tryAcquire(1,0) )    // if second acquisition succeeds
        break;                  // leave the loop
    else {
        s.release(0,1);         // release what is held
        sleep( SOME_AMOUNT );   // pause a bit before trying again
    }
}

run   action                          s.value
---   ------                          -------
                                      (1,1)
 A    s.acquire(1,0)                  (0,1)
 B    s.acquire(0,1)                  (0,0)
 A    s.acquire(0,1)                  A blocks on second
 B    s.tryAcquire(1,0) => false
 B    s.release(0,1)                  (0,1)
 A    s.acquire(0,1)                  (0,0)   A succeeds on second
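
The exercise uses a Semaphore with two counters but does not show the class itself. The following is a rough Java sketch of such a class, under the assumption that acquire() blocks until both requested amounts are available while tryAcquire() never blocks; the name PairSemaphore and the wait/notifyAll implementation are mine, not the textbook's:

// A two-counter semaphore supporting the acquire/tryAcquire/release
// calls used in the example above.
public class PairSemaphore {
    private int avail0, avail1;

    public PairSemaphore(int n0, int n1) {
        avail0 = n0;
        avail1 = n1;
    }

    // Block until both requested amounts can be taken atomically.
    public synchronized void acquire(int n0, int n1) throws InterruptedException {
        while (n0 > avail0 || n1 > avail1) {
            wait();                      // wait for a release()
        }
        avail0 -= n0;
        avail1 -= n1;
    }

    // Non-blocking: take both amounts only if both are available right now.
    public synchronized boolean tryAcquire(int n0, int n1) {
        if (n0 <= avail0 && n1 <= avail1) {
            avail0 -= n0;
            avail1 -= n1;
            return true;                 // successful acquisition
        }
        return false;                    // indicate failure, nothing taken
    }

    // Give units back and wake any blocked acquirers.
    public synchronized void release(int n0, int n1) {
        avail0 += n0;
        avail1 += n1;
        notifyAll();
    }
}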

5.

a. Deadlock can occur when the bridge is destroyed.

b. When there are no traffic lights.

c. To prevent deadlock, make all the roads one-way.

6. a. This is not a deadlock.

b. There are no blocked processes.

c. P2 can freely request R1 and R2.

d. P1 can freely request R1 and R2.

e. Both P1 and P2 have requested R2.

1. P1 will wait if its request comes after P2's.

2. P2 will wait if its request comes after P1's.

Thursday, December 13, 2007

Linux Memory Management

The Linux memory manager implements demand paging with a copy-on-write strategy relying on the 386's paging support. A process acquires its page tables from its parent (during a fork()) with the entries marked as read-only or swapped. Then, if the process tries to write to that memory space, and the page is a copy-on-write page, it is copied, and the page is marked read-write. An exec() results in the reading in of a page or so from the executable. The process then faults in any other pages it needs.
Each process has a page directory, which means it can reference 1K page tables pointing to 1M 4 KB pages, which is 4 GB of memory. A process' page directory is initialized during a fork by copy_page_tables(). The idle process has its page directory initialized during the initialization sequence.
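
As a quick sanity check on those numbers, here is a small Java sketch (an illustration only, not kernel code) of the 386's two-level translation arithmetic described above:

// 1K page-directory entries x 1K page-table entries x 4 KB pages = 4 GB.
public class PagingMath {
    public static void main(String[] args) {
        long pdeCount = 1024;            // entries in a page directory
        long pteCount = 1024;            // entries in each page table
        long pageSize = 4 * 1024;        // 4 KB pages
        long total = pdeCount * pteCount * pageSize;
        System.out.println("Addressable bytes: " + total);   // 4294967296 = 4 GB

        // How a 32-bit linear address is split on the 386:
        long linear = 0xC0001234L;
        long dirIndex   = (linear >>> 22) & 0x3FF;   // top 10 bits -> page directory
        long tableIndex = (linear >>> 12) & 0x3FF;   // next 10 bits -> page table
        long offset     =  linear         & 0xFFF;   // low 12 bits  -> byte in page
        System.out.printf("dir=%d table=%d offset=0x%X%n", dirIndex, tableIndex, offset);
    }
}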

Each user process has a local descriptor table that contains a code segment and data-stack segment. These user segments extend from 0 to 3 GB (0xc0000000). In user space, linear addresses and logical addresses are identical.

On the 80386, linear addresses run from 0 GB to 4 GB. A linear address points to a particular memory location within this space. A linear address is not a physical address; it is a virtual address. A logical address consists of a selector and an offset. The selector points to a segment and the offset tells how far into that segment the address is located.

The kernel code and data segments are privileged segments defined in the global descriptor table and extend from 3 GB to 4 GB. The swapper page directory (swapper_pg_dir) is set up so that logical addresses and physical addresses are identical in kernel space.

The space above 3 GB appears in a process' page directory as pointers to kernel page tables. This space is invisible to the process in user mode but the mapping becomes relevant when privileged mode is entered, for example, to handle a system call. Supervisor mode is entered within the context of the current process so address translation occurs with respect to the process' page directory but using kernel segments. This is identically the mapping produced by using the swapper_pg_dir and kernel segments as both page directories use the same page tables in this space. Only task[0] (the idle task, sometimes called the swapper task for historical reasons, even though it has nothing to do with swapping in the Linux implementation) uses the swapper_pg_dir directly.

The user process' segment_base = 0x00, page_dir private to the process.
user process makes a system call: segment_base=0xc0000000 page_dir = same user page_dir.
swapper_pg_dir contains a mapping for all physical pages from 0xc0000000 to 0xc0000000 + end_mem, so the first 768 entries in swapper_pg_dir are 0's, and then there are 4 or more that point to kernel page tables.

The user page directories have the same entries as swapper_pg_dir above 768. The first 768 entries map the user space. The upshot is that whenever the linear address is above 0xc0000000 everything uses the same kernel page tables.
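
The relationship between logical, linear and physical addresses described above can be summarized with a little arithmetic. The following Java sketch is purely illustrative, using the segment bases named in the text; it shows why logical and linear addresses coincide in user space while kernel logical addresses map straight onto physical memory:

public class AddressDemo {
    static final long KERNEL_SEG_BASE = 0xC0000000L; // kernel code/data segment base
    static final long USER_SEG_BASE   = 0x00000000L; // user segment base

    // linear = segment base + logical offset
    static long toLinear(long segBase, long offset) {
        return (segBase + offset) & 0xFFFFFFFFL;     // wrap at 4 GB like the 386
    }

    public static void main(String[] args) {
        long offset = 0x1000;
        // User space: logical and linear addresses are identical.
        System.out.printf("user:   offset 0x%X -> linear 0x%X%n",
                          offset, toLinear(USER_SEG_BASE, offset));
        // Kernel space: linear = 0xC0000000 + offset, and swapper_pg_dir
        // maps that linear address back to physical address == offset.
        long linear = toLinear(KERNEL_SEG_BASE, offset);
        long physical = linear - KERNEL_SEG_BASE;
        System.out.printf("kernel: offset 0x%X -> linear 0x%X -> physical 0x%X%n",
                          offset, linear, physical);
    }
}
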
The user stack sits at the top of the user data segment and grows down. The kernel stack is not a pretty data structure or segment that I can point to with a "yon lies the kernel stack." A kernel_stack_frame (a page) is associated with each newly created process and is used whenever the kernel operates within the context of that process. Bad things would happen if the kernel stack were to grow below its current stack frame. [Where is the kernel stack put? I know that there is one for every process, but where is it stored when it's not being used?]
User pages can be stolen or swapped. A user page is one that is mapped below 3 GB in a user page table. This region does not contain page directories or page tables. Only dirty pages are swapped.
Traditional Unix tools like 'top' often report a surprisingly small amount of free memory after a system has been running for a while. For instance, after about 3 hours of uptime, the machine I'm writing this on reports under 60 MB of free memory, even though I have 512 MB of RAM on the system. Where does it all go?
The biggest place it's being used is in the disk cache, which is currently over 290 MB. This is reported by top as "cached". Cached memory is essentially free, in that it can be replaced quickly if a running (or newly starting) program needs the memory.
The reason Linux uses so much memory for disk cache is because the RAM is wasted if it isn't used. Keeping the cache means that if something needs the same data again, there's a good chance it will still be in the cache in memory. Fetching the information from there is around 1,000 times quicker than getting it from the hard disk. If it's not found in the cache, the hard disk needs to be read anyway, but in that case nothing has been lost in time.
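
To see this without relying on top's summary, a small program can read /proc/meminfo and add the reclaimable cache back to the free figure. This is just a sketch; it assumes a Linux system with the standard MemFree, Buffers and Cached fields:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Parses /proc/meminfo and reports how much memory is effectively free
// once the disk cache and buffers are counted as reclaimable.
public class MemFree {
    public static void main(String[] args) throws IOException {
        Map<String, Long> kb = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            String[] parts = line.split("\\s+");        // e.g. "Cached:  291234 kB"
            kb.put(parts[0].replace(":", ""), Long.parseLong(parts[1]));
        }
        long free = kb.get("MemFree");
        long cache = kb.getOrDefault("Cached", 0L) + kb.getOrDefault("Buffers", 0L);
        System.out.println("Reported free   : " + free + " kB");
        System.out.println("Cache/buffers   : " + cache + " kB (reclaimable)");
        System.out.println("Effectively free: " + (free + cache) + " kB");
    }
}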

Thursday, November 22, 2007

SUPERCOMPUTERS AND SIX SERVERS

REASONS WHY A REGIONAL BANK DECIDED TO BUY SIX SERVER COMPUTERS INSTEAD OF ONE SUPERCOMPUTER
1. Buying six server computers saves on the investment.
2. The other computers can back up important files while one of the computers is down for maintenance.
3. It can be faster than having a single supercomputer.
4. It provides a back-up if needed.

OPERATING SYSTEM ARTICLE

OPERATING SYSTEM


How Operating Systems Work
by
Dave Coustan and Curt Franklin



Introduction to How Operating Systems Work



If you have a computer, then you have heard about operating systems. Any desktop or laptop PC that you buy normally comes pre-loaded with Windows XP. Macintosh computers come pre-loaded with OS X. Many corporate servers use the Linux or UNIX operating systems. The operating system (OS) is the first thing loaded onto the computer -- without the operating system, a computer is useless.
More recently, operating systems have started to pop up in smaller computers as well. If you like to tinker with electronic devices, you are probably pleased that operating systems can now be found on many of the devices we use every day, from cell phones to wireless access points. The computers used in these little devices have gotten so powerful that they can now actually run an operating system and applications. The computer in a typical modern cell phone is now more powerful than a desktop computer from 20 years ago, so this progression makes sense and is a natural development. In any device that has an operating system, there's usually a way to make changes to how the device works. This is far from a happy accident; one of the reasons operating systems are made out of portable code rather than permanent physical circuits is so that they can be changed or modified without having to scrap the whole device.


For a desktop computer user, this means you can add a new security update, system patch, new application or often even a new operating system entirely rather than junk your computer and start again with a new one when you need to make a change. As long as you understand how an operating system works and know how to get at it, you can in many cases change some of the ways it behaves. And, it's as true of your cell phone as it is of your computer.


The purpose of an operating system is to organize and control hardware and software so that the device it lives in behaves in a flexible but predictable way. In this article, we'll tell you what a piece of software must do to be called an operating system, show you how the operating system in your desktop computer works and give you some examples of how to take control of the other operating systems around you.
What Does It Do?


At the simplest level, an operating system does two things:
1. It manages the hardware and software resources of the system. In a desktop computer, these resources include such things as the processor, memory, disk space, etc. (On a cell phone, they include the keypad, the screen, the address book, the phone dialer, the battery and the network connection.)
2. It provides a stable, consistent way for applications to deal with the hardware without having to know all the details of the hardware.


All desktop computers have operating systems. The most common are the Windows family of operating systems developed by Microsoft, the Macintosh operating systems developed by Apple and the UNIX family of operating systems (which have been developed by a whole history of individuals, corporations and collaborators). There are hundreds of other operating systems available for special-purpose applications, including specializations for mainframes, robotics, manufacturing, real-time control systems and so on.