Memory management note

This Memory Management note covers: an introduction to memory management, the memory hierarchy, memory addresses, physical and logical addresses, the Memory Management Unit (MMU), address translation, symbolic and relative addresses, static and dynamic binding, memory allocation, single-partition and multiple-partition allocation, fragmentation (internal and external), swapping, paging and demand paging, segmentation, thrashing, and compaction.

Memory management is one of the more approachable parts of an operating system or computer architecture course. Keep three components in mind: the processor (CPU), RAM (main memory), and the hard disk (secondary memory). Memory management is the task of moving data, processes, and programs between main memory and disk during execution. Each process occupies space on disk or in main memory, so the memory manager keeps a record of every memory location, whether it is allocated to some process or free. It is responsible for memory allocation, static and dynamic binding, and related tasks.

The topic is easier to follow if you already know a microprocessor architecture or an assembly-level language, since both deal directly with memory.


In the memory hierarchy, the closer a level is to the CPU, the higher its performance and transfer rate; the farther a level is from the CPU, the slower its performance and the larger its capacity.


Just as a house has an address so that anyone can reach us, data is stored in memory at different locations with addresses so that it can be accessed again whenever required. An operating system works with two types of memory addresses: the physical address and the logical address.


Q.) What is the difference between Physical & Logical Address?
The differences between a logical address and a physical address are:
1. A logical address (virtual address) refers to a memory location independent of the current assignment of data to memory. A physical address (absolute address) is the absolute location of a unit of data in memory.

2. Logical addresses are used in virtual memory and by user programs; physical addresses are used in main memory.

3. Physical memory is divided into small fixed-size parts called frames; logical memory is divided into parts of the same size called pages.

4. The address generated by the CPU is the logical address: while instructions or data are being processed, the CPU generates addresses for the space they occupy, and these are logical addresses. The address used by the memory unit to load instructions and data into the memory register is the physical address.

5. The set of all logical addresses generated by a program is the logical address space; the set of all physical addresses corresponding to these logical addresses is the physical address space. The run-time mapping from virtual addresses to physical addresses is done by a hardware device known as the Memory Management Unit (MMU). In this mapping, the base register is known as the relocation register.

The value in the relocation register is added to every address generated by a user process at the time it is sent to memory. For example, if the relocation register contains the value 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000, and an access to location 346 is mapped to location 14346.
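The relocation example above can be sketched in a few lines. This is an illustrative model only, not a real MMU interface; the register names and the 16 KB limit are assumptions for the example.

```python
# A sketch of dynamic relocation with a relocation (base) register and a
# limit register. Values are illustrative: base 14000, a 16 KB process.

RELOCATION_REGISTER = 14000   # base physical address of the process
LIMIT_REGISTER = 16384        # size of the process's logical address space

def translate(logical_address):
    """Map a logical address to a physical address, trapping on overflow."""
    if not 0 <= logical_address < LIMIT_REGISTER:
        raise MemoryError(f"trap: logical address {logical_address} out of range")
    return logical_address + RELOCATION_REGISTER

print(translate(0))    # -> 14000
print(translate(346))  # -> 14346
```

The limit check models the protection described below for the relocation-register scheme: a process cannot generate a physical address outside its own partition.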



A logical address is a combination of a page number and a page offset; a physical address is a combination of a frame number and the same page offset. The logical address is generated by the CPU; the physical address is the one used by the memory unit to load instructions and data into memory. The run-time mapping from page number (virtual address) to frame number (physical address) is done by the Memory Management Unit.

Symbolic addresses: the addresses used in source code. Variable names, constants, and instruction labels are the basic elements of the symbolic address space.

Relative addresses: at compile time, the compiler converts symbolic addresses into relative addresses.

Logical and Physical Addresses differ in execution-time address binding.


Static binding means that during compilation the complete addresses of instructions are compiled and linked without any runtime dependency. It is faster and easier to reason about, since everything is resolved up front. At compile time, a language such as Java or C++ knows which method to call by checking the method signatures, so this is called compile-time polymorphism, static binding, or early binding. Compile-time polymorphism is achieved through method overloading: a class may have more than one method with the same name but different prototypes.

Dynamic binding means that the complete addresses of instructions are not linked during compilation; the linking is completed at runtime. Runtime polymorphism is achieved by method overriding: a child class declares a method with the same signature as one in its parent class, and when the child class provides its own implementation of that method, it is said to override the parent's version.

Execution is slower than with early binding, because the method to be executed is only known at runtime. Dynamic binding is also known as late binding.
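Method overriding, the mechanism behind late binding described above, can be sketched as follows. The class names are made up for illustration; the point is that the method actually run is chosen from the object's concrete class at runtime, not from the declared type.

```python
# A minimal sketch of dynamic (late) binding via method overriding.

class Shape:
    def area(self):
        return 0.0

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):            # overrides Shape.area
        return self.side * self.side

def report(shape):
    # This call is resolved at runtime against the object's actual class.
    return shape.area()

print(report(Shape()))     # -> 0.0
print(report(Square(3)))   # -> 9
```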


Main memory usually has two partitions −

  1. Low Memory – Operating system resides in this memory.
  2. High Memory – User processes are held in high memory.


In this type of allocation, a relocation-register scheme is used to protect user processes from each other and from changing operating-system code and data.


The simplest method of memory allocation is to divide memory into several fixed-size partitions, using either equal-size or unequal-size partitions. With unequal-size partitions (and with dynamic partitioning generally), there are three common approaches to selecting a free partition from the set of available blocks: first fit, best fit, and worst fit.
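The three placement strategies can be sketched over a list of free-block sizes. This is a simplified model under stated assumptions: real allocators track addresses and split blocks, while here each strategy just picks which free block it would use.

```python
# First fit, best fit, and worst fit over a list of free-block sizes.

def first_fit(blocks, request):
    """Index of the first block large enough, or None."""
    for i, size in enumerate(blocks):
        if size >= request:
            return i
    return None

def best_fit(blocks, request):
    """Index of the smallest block that still fits, or None."""
    fits = [(size, i) for i, size in enumerate(blocks) if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(blocks, request):
    """Index of the largest block that fits, or None."""
    fits = [(size, i) for i, size in enumerate(blocks) if size >= request]
    return max(fits)[1] if fits else None

free = [100, 500, 200, 300, 600]
print(first_fit(free, 212))  # -> 1 (500 is the first block >= 212)
print(best_fit(free, 212))   # -> 3 (300 wastes the least space)
print(worst_fit(free, 212))  # -> 4 (600 leaves the largest remainder)
```

Best fit minimizes the leftover fragment per allocation; worst fit leaves the largest usable remainder; first fit is usually fastest in practice.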


Fragmentation occurs naturally when a disk is used frequently, with files being created, deleted, and modified. At some point, the operating system has to store parts of a file in non-contiguous clusters. In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently, reducing storage capacity and, in most cases, performance. The term is also used to denote the wasted space itself. (Relatedly, a process is said to be thrashing if it spends more time paging than executing.)

This is largely invisible to users, but it can slow down data access because the disk drive must search through different parts of the disk to assemble a single file. There are two types of fragmentation: internal fragmentation and external fragmentation.

Q. ) State the difference between Internal & External Fragmentation.
The differences between the two are as follows-

Internal fragmentation is the area within a region or page that is not used by the job occupying that region or page. This small space is wasted, and since the overhead of tracking such tiny unusable gaps is not worthwhile, it is simply ignored. While this may seem wasteful, it is often accepted in return for increased efficiency or simplicity.

The term “internal” refers to the fact that the unusable storage is inside the allocated region but is not being used.

For example, in many file systems, each file always starts at the beginning of a cluster, because this simplifies organization and makes it easier to grow files.

Any space left over between the last byte of the file and the first byte of the next cluster is a form of internal fragmentation called file slack or slack space. This slack space can add up and eventually degrade system performance.
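The slack-space arithmetic is easy to make concrete. The 4 KB cluster size below is an assumption for illustration; real file systems use various cluster sizes.

```python
# Slack space (internal fragmentation) for a file system that allocates
# whole clusters. Cluster size of 4096 bytes is assumed for illustration.
import math

CLUSTER = 4096  # bytes per cluster (assumed)

def slack(file_size):
    """Bytes wasted in the last cluster of a file of the given size."""
    clusters = math.ceil(file_size / CLUSTER)
    return clusters * CLUSTER - file_size

print(slack(10000))  # 3 clusters = 12288 bytes allocated -> 2288 bytes of slack
print(slack(8192))   # exact fit in 2 clusters -> 0 bytes of slack
```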

External fragmentation occurs in dynamic memory allocation schemes when free memory becomes divided into several small blocks over time. This causes problems when, for example, an application requests a block of 1000 bytes but the largest contiguous free block is only 300 bytes.

Even if ten blocks of 300 bytes exist, the allocation request will fail because they are not contiguous. The term “external” refers to the fact that the unusable storage is outside the allocated regions. If too much external fragmentation occurs, the amount of usable memory is drastically reduced.



Q) What is Swapping?

Swapping: main memory has limited space, so not all processes can be kept in it at once (even though modern systems have much larger main memories than they once did).

In a system that uses swapping, the operating system treats each program's data as an atomic unit and moves all of it into or out of main memory at once. This allows more processes to share main memory than would otherwise fit.

Swapping in and swapping out take time, so under memory pressure, starting a new process can severely degrade the system or even cause it to become unresponsive.


Paging is a memory-management scheme by which a computer can store and retrieve data from secondary storage for use in main memory. Paging is an important part of virtual memory implementations, allowing them to use disk storage for data that does not fit into physical random-access memory (RAM).

When paging is used, the processor divides the linear address space (logical address space) into fixed-size pages (4 KB, 2 MB, or 4 MB in length on x86) that can be mapped into physical memory (physical addresses) and/or disk storage. When a program (or task) references a logical address, the processor translates the address into a linear address and then uses its paging mechanism to translate the linear address into the corresponding physical address.

The main advantage of paging is that it allows the physical address space of a process to be non-contiguous. Before paging, systems had to fit whole programs into storage contiguously, which caused various storage and external-fragmentation problems.

In paging,
• Physical memory is divided into frames.
• Logical memory is divided into pages.
• A block of auxiliary storage is a slot.
• A frame, which has the same size as a page, is a place where a (logical) page can be (physically) placed.

Page size is defined by the hardware, is usually a power of two (2^n) between 512 bytes and 16 MB, and is typically 4-8 KB. It must be chosen carefully: too large means more internal fragmentation; too small means too much overhead in paging management and processing.
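The page-number/offset split and the page-table lookup can be sketched as below. The tiny page table and the 4 KB page size are assumptions for illustration.

```python
# Paged address translation: split the logical address into a page number
# and an offset, look up the frame, and rebuild the physical address.

PAGE_SIZE = 4096  # 2^12 bytes, so the low 12 bits are the offset (assumed)

page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (made up)

def translate(logical_address):
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    if page not in page_table:
        raise LookupError(f"page fault: page {page} is not in memory")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(100))       # page 0 -> frame 5: 5*4096 + 100 = 20580
print(translate(4096 + 7))  # page 1 -> frame 2: 2*4096 + 7  = 8199
```

A reference to a page with no entry raises here, mirroring the page-fault exception described below.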

If the page containing the linear address is not currently in physical memory, the processor generates a page-fault exception. The exception handler for the page fault typically directs the operating system to load the page from disk storage into physical memory. Once the page has been loaded into physical memory, a return from the exception handler causes the instruction that generated the exception to be restarted.

The information that the processor uses to map linear addresses into the physical address space and to generate page-fault exceptions (when necessary) is contained in page directories and page tables stored in memory.

Q) What is Demand Paging?
Demand paging: in an operating system, demand paging is an application of virtual memory. It is a type of swapping in which pages of data are not copied from disk to RAM until they are needed, that is, until a page fault occurs.
In contrast, some virtual memory systems use anticipatory paging, in which the OS attempts to predict which pages will be needed next and copies them to RAM before they are actually required.

What is the advantage of demand paging over swapping?

Here is a list of advantages:
1. Only the pages demanded by the executing process are loaded. As a result, less I/O and less memory are needed, and response time and overall performance improve.
2. With more free space in main memory, more processes can be loaded, reducing the context-switching overhead that consumes a large amount of resources.
3. Under pure swapping, a context switch (swap-in and swap-out) makes the operating system copy an entire program into memory, consuming space and time. Demand paging instead simply begins executing the new program and fetches its pages as they are referenced.
4. Demand paging needs no extra hardware support beyond what paging already needs (a page table), since the protection-fault mechanism can be used to detect page faults.
5. Under swapping, all of a program's data must be moved into or out of main memory as a unit; under demand paging, a page is brought in only when it is referenced.

Lazy swapper: it never swaps a page into memory unless that page will be needed. A swapper that deals with individual pages is called a pager.



Segmentation is a memory management technique in which memory is divided into variable-sized chunks that can be allocated to processes. Each chunk is called a segment.

Each segment has a name and a length. A logical address specifies both the segment name and the offset within the segment. For simplicity of implementation, segments are numbered and referred to by a segment number rather than a name.

Thus a logical address consists of a two-tuple: <segment-number, offset>.

A segment has a set of permissions, and a length, associated with it. A process is only allowed to make a reference into a segment if the type of reference is allowed by the permissions, and the offset within the segment is within the range specified by the length of the segment. Otherwise, a hardware exception such as a segmentation fault is raised.
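The checks described above can be sketched as follows. The segment numbers, sizes, and permission strings are made up for illustration and do not reflect any real architecture's segment descriptors.

```python
# A sketch of segment-based address checking: each segment has a base,
# a limit (length), and a set of permitted access modes.

# segment number -> (base address, limit, permissions)
segment_table = {
    0: (1400, 1000, "rx"),  # code segment (readable, executable)
    1: (6300, 400,  "rw"),  # data segment (readable, writable)
}

def reference(segment, offset, mode):
    """Translate <segment, offset>, raising on a bad offset or mode."""
    if segment not in segment_table:
        raise LookupError(f"segmentation fault: no segment {segment}")
    base, limit, perms = segment_table[segment]
    if not 0 <= offset < limit:
        raise MemoryError(f"segmentation fault: offset {offset} >= limit {limit}")
    if mode not in perms:
        raise PermissionError(f"segmentation fault: '{mode}' not permitted")
    return base + offset

print(reference(1, 53, "r"))  # -> 6353 (base 6300 + offset 53)
# reference(1, 852, "r") would raise: offset 852 exceeds the 400-byte limit
# reference(0, 10, "w")  would raise: the code segment is not writable
```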


Q> Explain Thrashing.
When paging with a global page replacement policy is used, a problem called “thrashing” can occur, in which the computer spends an excessive amount of time swapping pages to and from the backing store, leaving little time for useful work.

Adding real memory is the simplest response, but improving application design, scheduling, and memory usage can help.

If the number of frames allocated to a low-priority process falls below the minimum required by the computer architecture, we must suspend that process's execution, then swap out all of its remaining pages and free all of its allocated frames.

This provision introduces a swap-in, swap-out level of intermediate CPU scheduling. Consider a process that does not have enough frames: if it lacks the frames it needs to support its pages in active use, it will quickly page-fault. Its only option is then to replace some active page (possibly a page belonging to another process, in a global replacement environment) with the page that needs a frame.

However, since all of its pages are in active use, the process must replace a page that will be needed again right away. Consequently, it faults again and again, repeatedly replacing pages that it must immediately bring back in.

The OS, seeing low CPU utilization because the processes are making no progress, concludes that the CPU is idle, so it increases the degree of multiprogramming to raise utilization, which ultimately causes more and more page faults. This high paging activity is called thrashing.


Thrashing can be mitigated by:
• reducing the level (degree) of multiprogramming;
• using a local page replacement algorithm;
• checking, at allocation time, that each process gets the minimum number of frames it requires.


Q. What is compaction? What are the drawbacks of compaction?
Compaction is a solution to the problem of external fragmentation. External fragmentation exists when there is enough total memory space to satisfy a request but the space is not contiguous.

The goal of compaction is to shuffle the memory contents so as to place all free memory together in one large block. All holes move in the other direction, producing one large hole of available memory. Compaction is possible only if relocation is dynamic and is done at execution time.


• Even when compaction is possible, we must determine its cost; the scheme is very expensive.
• When relocation is static and done at load or assembly time, compaction cannot be done.
• The cost can be particularly severe for large memories that use contiguous allocation, where compacting all the space may take hours.
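The shuffle itself can be sketched in a few lines: allocated blocks are slid toward low memory so the scattered holes merge into one large free block. The block layout below is made up for illustration; a real OS would also have to update every relocated process's relocation register.

```python
# A sketch of compaction over a list of (start, size) allocated regions.

def compact(blocks):
    """Pack allocated blocks from address 0 upward.
    Returns (new_blocks, free_start): the relocated blocks and the start
    of the single free hole left above them."""
    new_blocks = []
    next_free = 0
    for start, size in sorted(blocks):        # slide each block down in order
        new_blocks.append((next_free, size))  # new start after relocation
        next_free += size
    return new_blocks, next_free

# 1000-unit memory with holes at 100-300 and 500-900 before compaction.
allocated = [(0, 100), (300, 200), (900, 100)]
packed, hole_start = compact(allocated)
print(packed)      # -> [(0, 100), (100, 200), (300, 100)]
print(hole_start)  # -> 400: one 600-unit hole from 400 to 1000
```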



What is main memory?

Main memory refers to physical memory that is internal to the computer. The word “main” distinguishes it from external mass storage devices such as disk drives. Main memory is also known as RAM. The computer can directly operate only on data that is in main memory.



