
Q.) What is Demand Paging? What is the advantage of demand paging over swapping?
Demand Paging :
In an operating system, demand paging is an application of virtual memory. It is a type of swapping in which pages of data are not copied from disk to RAM until they are needed.

In contrast, some virtual memory systems use anticipatory paging, in which the OS attempts to anticipate which data will be needed next and copies it to RAM before it is actually required, so that a page fault is avoided.

In a system that uses swapping, the operating system treats a program’s data as an atomic unit and moves all of it into or out of main memory at one time.

Advantages of demand paging over swapping :
· Only the pages demanded by the executing process are loaded. As a result, less I/O and memory are needed, response is faster, and more users can be served.
· Since more space is left free in main memory, more processes can be loaded, reducing context-switching time, which consumes a large amount of resources.
· When a context switch occurs, the operating system does not copy any of the old program’s pages out to disk or any of the new program’s pages into memory. Instead, it simply begins executing the new program and fetches that program’s pages as they are referenced.
· Less loading latency occurs at program start-up, since less information is read from secondary storage and less information is brought into main memory.
· No extra hardware support is needed beyond what paging already requires, since a protection fault can be used to signal a page fault.
· Systems that use swapping typically cannot use their backing storage to let a single program reference more data than fits in main memory.
· In swapping, all of a program’s data must be swapped into or out of main memory as a unit; in demand paging, a page is brought in only when it is referenced.

Lazy swapper : a swapper that never brings a page into memory unless that page will be needed. A swapper that deals with individual pages is called a pager.
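The lazy-loading behaviour described above can be sketched in a few lines. This is a minimal, hypothetical simulation (class and data names are invented for illustration): a page is copied from the simulated disk into memory only on first access, i.e. on a page fault.

```python
# Hypothetical sketch of demand paging: a page is loaded from "disk"
# only on first access, which is counted as a simulated page fault.

class DemandPager:
    def __init__(self, num_pages):
        self.disk = {p: f"data-for-page-{p}" for p in range(num_pages)}
        self.memory = {}          # pages currently resident in RAM
        self.page_faults = 0

    def access(self, page):
        if page not in self.memory:               # not resident -> page fault
            self.page_faults += 1
            self.memory[page] = self.disk[page]   # the pager brings it in
        return self.memory[page]

pager = DemandPager(num_pages=8)
pager.access(3)            # first touch: page fault, page loaded
pager.access(3)            # already resident: no fault
print(pager.page_faults)   # -> 1
```

Note how pages never touched by the process stay on disk, which is exactly the saving over whole-program swapping.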

Q.) Explain Paging & Segmentation.
Paging is one of the memory-management schemes by which a computer can store and
retrieve data from secondary storage for use in main memory. Paging is an important part of
virtual memory implementation, allowing the operating system to use disk storage for data
that does not fit into physical random-access memory (RAM). When paging is used, the
processor divides the linear address space (logical address space) into fixed-size pages
(for example 4 KB, 2 MB, or 4 MB in length) that can be mapped into physical memory
(physical addresses) and/or disk storage. When a program (or task) references a logical
address in memory, the processor translates the address into a linear address and then uses
its paging mechanism to translate the linear address into a corresponding physical address.
The main advantage of paging is that it allows the physical address space of a process to be
non-contiguous. Before paging was used, systems had to fit whole programs into storage contiguously, which caused various storage and external fragmentation problems.

In paging,
• Physical memory is divided into frames.
• Logical memory is divided into pages.
• A block of auxiliary storage is a slot.
• A frame, which has the same size as a page, is a place where a (logical) page can be (physically) placed.

Page size is defined by the hardware. It is typically a power of two (2^n) between 512 bytes
and 16 MB, most often 4-8 KB. It must be chosen carefully: too large a page means more
internal fragmentation; too small a page means too much overhead in paging management and processing.

If the page containing the linear address is not currently in physical memory, the processor generates a page-fault exception. The exception handler for the page-fault exception typically directs the operating system to load the page from disk storage into physical memory. When the page has been loaded in physical memory, a return from the exception handler causes the instruction that generated the exception to be restarted. The information that the processor uses to map linear addresses into the physical address space and to generate page-fault exceptions (when necessary) is contained in page directories and page
tables stored in memory.
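The translation mechanism described above can be sketched as follows. This is a simplified single-level model, assuming a 4 KB page size; the page-table contents are hypothetical. The page number is taken from the high bits of the logical address, the offset from the low bits, and a missing mapping raises the equivalent of a page fault.

```python
# Minimal sketch of paged address translation with a 4 KB page size
# and a flat page table; mappings shown are hypothetical.

PAGE_SIZE = 4096                   # 2^12 bytes

page_table = {0: 5, 1: 2, 2: 7}    # page number -> frame number

def translate(logical_address):
    page = logical_address // PAGE_SIZE     # high bits: page number
    offset = logical_address % PAGE_SIZE    # low bits: offset within page
    if page not in page_table:
        raise MemoryError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```

Because the offset is simply carried through, only the page-to-frame mapping has to be stored, which is what makes a non-contiguous physical layout possible.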

Segmentation is a Memory Management technique in which memory is divided into variable sized chunks which can be allocated to processes. Each chunk is called a segment. Each segment has a name and a length. A logical address specifies both the segment name and the offset within the segment. For simplicity of implementation segments are numbered
and are referred to by a segment number only.

Thus a logical address consists of a two-tuple: <segment-number, offset>.

A segment has a set of permissions, and a length, associated with it. A process is only
allowed to make a reference into a segment if the type of reference is allowed by the
permissions, and the offset within the segment is within the range specified by the length of
the segment. Otherwise, a hardware exception such as a segmentation fault is raised.
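The length and permission checks just described can be sketched as a segment-table lookup. The table contents here are hypothetical; an out-of-range offset or a disallowed access type raises the equivalent of a segmentation fault.

```python
# Sketch of segmented address translation with bounds and permission
# checks; the segment-table entries below are hypothetical.

segment_table = {
    # segment number: (base, limit, permissions)
    0: (1400, 1000, {"r", "x"}),   # e.g. a code segment
    1: (6300, 400,  {"r", "w"}),   # e.g. a data segment
}

def translate(segment, offset, access):
    base, limit, perms = segment_table[segment]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset out of range")
    if access not in perms:
        raise MemoryError("segmentation fault: permission denied")
    return base + offset           # physical address

print(translate(1, 53, "w"))       # -> 6300 + 53 = 6353
```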

Q. ) State the difference between Internal & External Fragmentation.
In computer storage, fragmentation is a phenomenon in which storage space is used
inefficiently, reducing storage capacity and, in most cases, performance. The
term is also used to denote the wasted space itself. (By contrast, a process that spends more time paging than executing is said to be thrashing, not fragmented.)

Fragmentation occurs naturally when you use a disk frequently, creating, deleting and modifying files. At some point, the operating system needs to store parts of a file in noncontiguous clusters. This is entirely invisible to users, but it can slow down the speed at
which data is accessed because the disk drive must search through different parts of the disk to put together a single file.
There are two types of fragmentation- internal fragmentation and external fragmentation.

The differences between the two are as follows-

Internal Fragmentation is the area in a region or a page that is not used by the job
occupying that region or page. This small space is wasted, and the overhead of keeping
track of such tiny unusable spaces is not worthwhile, so they are ignored. While this seems
wasteful, it is often accepted in return for increased efficiency or simplicity. The term
“internal” refers to the fact that the unusable storage is inside the allocated region but is not being used.

For example, in many file systems, each file always starts at the beginning of a cluster,
because this simplifies organization and makes it easier to grow files. Any space left over
between the last byte of the file and the first byte of the next cluster is a form of internal
fragmentation called file slack or slack space. This slack space can add up and eventually
cause system performance to degrade.
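The slack-space arithmetic from the example above is easy to make concrete. Assuming a hypothetical 4 KB cluster size and a 10,000-byte file, the file occupies three clusters and the tail of the last cluster is internal fragmentation:

```python
# Slack-space arithmetic for the file-system example above,
# assuming a 4 KB cluster size (a common but not universal choice).

CLUSTER = 4096
file_size = 10_000                         # bytes

clusters_used = -(-file_size // CLUSTER)   # ceiling division -> 3 clusters
slack = clusters_used * CLUSTER - file_size

print(clusters_used, slack)                # -> 3 2288
```

So 2,288 of the 12,288 allocated bytes are slack; across many small files this waste accumulates.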

External Fragmentation occurs in dynamic memory allocation schemes, when free memory
becomes divided into several small blocks over time. For example, this causes problems when an application requests a block of 1,000 bytes but the largest
contiguous free block is only 300 bytes. Even if ten blocks of 300 bytes are free, the
allocation request will fail because they are non-contiguous. The term “external” refers to the
fact that the unusable storage is outside the allocated regions. If too much external
fragmentation occurs, the amount of usable memory is drastically reduced.
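The failing-allocation scenario above can be shown with a simple first-fit scan over a hypothetical free list: plenty of memory is free in total, yet the 1,000-byte request cannot be satisfied.

```python
# Illustration of external fragmentation: total free memory is ample,
# but no single contiguous hole can satisfy the request.

free_blocks = [300, 300, 300, 300]   # sizes of free holes, in bytes

def first_fit(request):
    for i, size in enumerate(free_blocks):
        if size >= request:
            return i                 # index of the hole that fits
    return None                      # allocation fails

print(sum(free_blocks))   # -> 1200 bytes free overall
print(first_fit(1000))    # -> None: largest contiguous hole is only 300
```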

Q.) What is the difference between Physical & Logical Address? What is Thrashing?

1- A logical address (virtual address) refers to a memory location independent of the
current assignment of data to memory. A physical address (absolute address) is the
absolute location of a unit of data in memory.
2- Logical addresses are used in virtual memory and by user programs; physical addresses
are used in main memory.
3- Physical address space is divided into small parts called frames; logical address space
is divided into small parts called pages.
4- The address generated by the CPU is the logical address: as the CPU processes
instructions and data, it generates addresses for allocating space in memory, and these are
logical addresses. The address used by the memory unit to load instructions and data into
memory registers is the physical address.
The set of all logical addresses generated by a program is known as the Logical Address
Space; the set of all physical addresses corresponding to these logical addresses is the
Physical Address Space.
Now, the run-time mapping from virtual address to physical address is done by a hardware
device known as the Memory Management Unit (MMU). In this mapping, the base
register is known as the relocation register. The value in the relocation register is added to
every address generated by a user process at the time it is sent to memory. Let’s understand
this with an example: if the relocation register contains the value 1000, then an attempt by
the user to address location 0 is dynamically relocated to location 1000, and an access to
location 346 is mapped to location 1346.
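The relocation-register mapping from this example can be sketched directly. The limit-register value below is a hypothetical addition (it is how real MMUs bound the process's address space, but the value 5000 is invented for illustration):

```python
# The relocation-register mapping from the example above, as a sketch.
# The limit register is a hypothetical addition for the bounds check.

RELOCATION_REGISTER = 1000   # base value taken from the text's example
LIMIT_REGISTER = 5000        # hypothetical process size

def map_address(logical):
    if logical >= LIMIT_REGISTER:
        raise MemoryError("trap: address beyond limit register")
    return logical + RELOCATION_REGISTER   # dynamic relocation

print(map_address(0))     # -> 1000
print(map_address(346))   # -> 1346
```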

When paging is used with a global page-replacement algorithm, a problem called
“thrashing” can occur, in which the computer spends an excessive amount of time swapping
pages to and from the backing store, slowing down useful work. Adding real memory is
the simplest response, but improving application design, scheduling, and memory usage can also help.

If the number of frames allocated to a low-priority process falls below the minimum
number required by the computer architecture, we must suspend that process’s execution,
then page out all of its remaining pages, freeing all of its allocated frames. This provision
introduces a swap-in, swap-out level of intermediate CPU scheduling. Consider a process
that does not have enough frames. If the process lacks the number of frames it needs to
support the pages in active use, it will quickly page-fault. Its only option is to replace some
active page, which in a global page-replacement environment may belong to another
process, with the page that needs a frame.

Since all of its pages are in active use, the process must replace a page that will be needed
again right away. Consequently, it quickly faults again and again, replacing pages that it
must bring back in immediately. The OS, seeing significant performance degradation and
little forward progress, concludes that the CPU is going idle and increases the degree of
multiprogramming to raise CPU utilization, which ultimately causes more and more page
faults. This high paging activity is called Thrashing.

Thrashing can be eliminated by:-
• reducing the level or degree of multiprogramming;
• using a local page-replacement algorithm;
• checking, at allocation time, that each process receives the minimum number of frames it requires.
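The first remedy in the list above can be sketched as a simple feedback policy. This is a hedged, hypothetical sketch (the threshold and process names are invented): when the measured page-fault rate climbs too high, one process is suspended, reducing the degree of multiprogramming and freeing its frames for the rest.

```python
# Hypothetical anti-thrashing policy: if the page-fault rate exceeds a
# threshold, suspend the last process in the run list to free its frames.
# The threshold value is invented for illustration.

UPPER_FAULT_RATE = 0.10   # faults per memory access

def adjust_multiprogramming(faults, accesses, running_processes):
    rate = faults / accesses
    if rate > UPPER_FAULT_RATE and len(running_processes) > 1:
        return running_processes.pop()   # suspend (swap out) one process
    return None                          # fault rate acceptable: no change

procs = ["editor", "compiler", "batch-job"]
print(adjust_multiprogramming(faults=30, accesses=100,
                              running_processes=procs))  # -> batch-job
```

Real systems base this decision on a working-set or page-fault-frequency model rather than a single fixed threshold, but the feedback loop is the same.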

Q. What is compaction? What are the drawbacks of compaction?
Compaction is the solution to the problem of external fragmentation. External fragmentation exists
when there is enough total memory space to satisfy a request but the space is not contiguous. The
goal of compaction is to shuffle the memory contents so as to place all free memory together in one
large block. All holes move in the other direction, producing one large hole of available memory.
Compaction is possible only if relocation is dynamic and is done at execution time.

• When compaction is possible, we must determine its cost; the scheme is very expensive.
• When relocation is static and done at load or assembly time, compaction cannot be done.
• The cost can be particularly severe for large memories that use contiguous allocation,
where compacting all the space may take hours.
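The shuffle that compaction performs can be sketched as sliding every allocated block toward low memory so that all free space coalesces into one hole. The block layout below is hypothetical; each block is a `(name, start, size)` tuple, and dynamic relocation is assumed (the whole point of the last sentence above).

```python
# Sketch of compaction: slide allocated blocks toward address 0 so all
# free space merges into one large hole. Layout below is hypothetical.

def compact(blocks, memory_size):
    blocks = sorted(blocks, key=lambda b: b[1])    # order by start address
    next_free = 0
    relocated = []
    for name, _, size in blocks:
        relocated.append((name, next_free, size))  # dynamic relocation
        next_free += size
    hole = memory_size - next_free                 # one contiguous hole
    return relocated, hole

blocks = [("A", 0, 100), ("B", 300, 200), ("C", 700, 100)]
print(compact(blocks, memory_size=1000))
# -> ([('A', 0, 100), ('B', 100, 200), ('C', 300, 100)], 600)
```

The cost the drawbacks list refers to is the copying implied by each relocation: every byte of every moved block must be physically copied, which is why compacting a large memory can take so long.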


About the Author: Saif
