Memory Management: Mobile Hardware Resources, Address Space and Process, Memory Management Overview, Memory Management Concepts


Mobile Hardware Resources: Memory Management in the Symbian Operating System
275078 Eka Ardianto, 263416 Ikrimach M., 292964 Jumadi, 275057 Muji Syukur, 293082 Prisa M. Kusumantara, 292301 Sheila Nurul Huda

Mobile Memory Organization and Disk Drives

Memory Management Overview

Purpose. Provides the functionality concerned with how memory is allocated for, and within, programs. Fundamental to Symbian OS programs is the concern that memory, as a limited resource, be handled carefully, particularly in the event of error conditions. For this reason, exception handling and memory management are closely tied together in the Cleanup Support API.

Architectural relationships. The Uikon framework uses these APIs to give each GUI program the basic infrastructure for well-behaved memory handling. In particular, each GUI program has support for cleaning up memory in exception conditions and, in debug builds, for detecting memory leaks.

Memory Management Concepts: Address Space and Process; Chunks; Heaps; Structure of a heap; Virtual machine model; Types of Memory in Symbian OS.

Address Space and Process

Programs in Symbian OS consist of a number of processes, each of which contains one or more conceptually concurrent threads of execution. Each user process has its own private address space, i.e. a collection of memory regions which that process can access. A user process cannot directly address memory in the address space of another process. Threads belonging to a user process run at user privilege level.

There is a special process, the Kernel process, whose threads run at supervisor privilege level. This process normally contains two threads:
- the Kernel server thread, the initial thread, whose execution begins at the reset vector and which is used to implement all Kernel functions requiring allocation or deallocation on the Kernel heap
- the null thread, which runs only when no other threads are ready to run; the null thread places the processor into idle mode to save power.

Address Space and Process: Chunks

The address space of a process consists of a number of chunks, where a chunk is a region of RAM mapped into contiguous virtual addresses. On creation, a user process contains one thread (the main thread) and one to three chunks:
- The stack/heap chunk, containing the stack and the heap used by the main thread of the process; this chunk always exists.
- The code chunk; this exists only if the process is loaded into RAM (i.e. it is not ROM resident).
- The data chunk; this exists only if the process has static data.

If a process creates additional threads, a new chunk is created for each new thread. Each such chunk contains the thread's stack; if the new thread is not sharing an existing heap, the chunk also contains a new heap.

Chunks map RAM or memory-mapped I/O devices into contiguous virtual addresses. A chunk consists of a reserved region and a committed region. The reserved region is the contiguous set of virtual addresses accessible to running code. The committed region is the region in which RAM (or memory-mapped I/O) is actually mapped. The size of a chunk is dynamically alterable, allowing the committed region to vary in size from zero up to the reserved region size, in integer multiples of the processor page size. This allows processes to obtain more memory on demand. Generally, the committed region starts at the bottom of the reserved region.

A chunk also has a maximum size, which is defined when the chunk is created. The reserved region can be smaller than this maximum size, and can be made bigger by reallocating it, but it cannot be made bigger than the maximum size. The size of the reserved region of a chunk is always an integer multiple of the virtual address range of a single entry in the processor page directory (the PDE size). This means that the reserved region of a chunk is mapped by a number of consecutive page directory entries (PDEs), and any given PDE maps part of the reserved region of at most one chunk.

Symbian OS has a number of chunk types, but for user-side code the chunks of interest are user chunks and shared chunks. A chunk is a kernel-side entity and, like all other kernel-side entities, it is accessed from the user side through a handle (an instance of the RChunk class). The concepts of local and global also apply to chunks.

User chunks

On systems with an MMU, Symbian OS provides three types of user chunks, each characterised by a different subset of the reserved address range containing committed memory: normal chunks, double-ended chunks and disconnected chunks.

Normal chunks
These chunks have a committed region consisting of a single contiguous range starting at the chunk's base address, with a size that is a multiple of the MMU page size.
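Since the committed region grows and shrinks in integer multiples of the MMU page size, a request to commit N bytes is rounded up to a page boundary before any RAM is mapped. The arithmetic can be sketched as follows; this is an illustrative model, not Symbian API (the real interface is RChunk::Adjust(), and the 4 KB page size and names here are assumptions):

```cpp
#include <cassert>
#include <cstddef>

// Illustrative model of how a chunk's committed region is sized; the
// real Symbian interface is RChunk::Adjust(). The 4 KB page size and
// all names here are assumptions, not Symbian API.
constexpr std::size_t kPageSize = 4096;

// Round a requested size up to the next multiple of the page size.
constexpr std::size_t RoundToPages(std::size_t bytes) {
    return (bytes + kPageSize - 1) / kPageSize * kPageSize;
}

struct Chunk {
    std::size_t reserved;   // reserved region size (fixed at creation)
    std::size_t committed;  // currently mapped RAM; grows/shrinks on demand

    // Grow or shrink the committed region; fails beyond the reservation.
    bool Adjust(std::size_t newSize) {
        const std::size_t rounded = RoundToPages(newSize);
        if (rounded > reserved) return false;  // cannot exceed reserved region
        committed = rounded;
        return true;
    }
};
```

For example, committing 5000 bytes actually maps 8192 bytes (two pages), and a request beyond the reserved region fails rather than growing the reservation.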

(Diagram: a normal chunk.)

Double-ended chunks
These chunks have a committed region consisting of a single contiguous range lying between arbitrary lower and upper endpoints within the reserved region; the only condition is that the lower and upper endpoints must each be a multiple of the MMU page size. Both the bottom and the top of the committed region can be altered dynamically.

(Diagram: a double-ended chunk.)

Disconnected chunks
These chunks have a committed region consisting of an arbitrary set of MMU pages within the reserved region. Each page-sized address range within the reserved region that starts on a page boundary can be committed independently.

(Diagram: a disconnected chunk.)

Shared chunks
A shared chunk is a mechanism that allows kernel-side code to share memory buffers safely with user-side code; by kernel-side code we usually mean device driver code. The main points to note about shared chunks are:
- They can only be created and destroyed by device drivers.
- Typically, user-side code (which in this context we call the client of the device driver) passes a request to the device driver to open a handle to a shared chunk. This causes the device driver to open a handle to the chunk and return the handle value to the client. Successful handle creation also causes the chunk's memory to be mapped into the address space of the process to which the client's thread belongs.
- It is, however, the driver that dictates exactly when the chunk itself is created and when memory is committed. The precise protocol depends on the design of the driver; refer to that driver's documentation for programming guidance.
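A shared chunk may appear at a different base address in each process that maps it, which is why processes exchange data locations as offsets from the chunk base rather than absolute pointers. A minimal sketch of that convention (the base addresses below are invented for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Illustrates why processes sharing a chunk exchange offsets, not
// absolute addresses. The two base addresses below are made up; in
// reality each process learns its base from its own chunk handle.
using Addr = std::uint32_t;

constexpr Addr kBaseInProcessA = 0x40100000;  // chunk base as seen by A
constexpr Addr kBaseInProcessB = 0x40500000;  // same chunk as seen by B

// Sender: convert an absolute address into a chunk-relative offset.
constexpr Addr ToOffset(Addr absolute, Addr base) { return absolute - base; }

// Receiver: rebuild an absolute address from the offset and its own base.
constexpr Addr FromOffset(Addr offset, Addr base) { return base + offset; }
```

A buffer at 0x40100200 in process A is communicated as offset 0x200; process B resolves that same offset to 0x40500200 in its own address space.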

Like all kernel-side objects, a shared chunk is reference counted: it remains in existence for as long as its reference count is greater than zero. Once all opened references to the shared chunk have been closed, whether user-side or kernel-side, it is destroyed.

User-side code that has gained access to a shared chunk from one device driver can pass it to a second device driver; the second driver must open the chunk before it can be used. More than one user-side application can access the data in a shared chunk. A handle to a shared chunk can be passed from one process to another using the standard handle-passing mechanisms. In practice, handles are passed in a client-server context, either from client to server or from server to client, using inter-process communication (IPC).

Processes that share data inside a chunk should communicate the location of that data as an offset from the start of the chunk, not as an absolute address, because the shared chunk may appear at different addresses within the address spaces of different user processes.

Local and global chunks

Local chunks. A chunk is local when it is private to the process creating it and is not intended for access by other user processes. A local chunk cannot be mapped into any other process and is therefore used for memory that does not need to be shared. A local chunk does not have a name.

Global chunks. A chunk is global if it is intended to be accessed by other processes. Global chunks have names that can be used to identify the chunk to another process wishing to access it. A process can open a global chunk by name; this maps the chunk into that process's address space, allowing direct access and enabling the sharing of data between processes.

Heaps: Structure of a heap

Each thread has a chunk that contains the thread's program stack. For the main thread of a process, this chunk also contains the thread's heap, and a program's requests for memory are allocated from that heap. If a process creates additional threads, a new chunk is created for each new thread. Each such chunk contains the thread's stack; if the new thread is not sharing an existing heap, the chunk also contains a new heap. When a new thread is created, either a new heap is created for it, or it uses the creating thread's heap, or it uses an explicitly referenced heap.

A heap consists of two lists of cells: the list of allocated cells and the list of free cells. Each list is anchored in the heap object. A cell consists of a cell header followed by the body of the cell itself; the body is the area of memory which is considered allocated.

(Diagram: a typical mix of free and allocated cells.)

Using Memory Allocation: how to share heaps, how to switch heaps, how to walk the heap.
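The cell structure above is what a heap-consistency check ("walking the heap") traverses: every byte of the heap must be accounted for by a header plus a body, on either the allocated or the free list. A toy model of that check, assuming an invented header size and layout (this is not RHeap's real structure):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model of a heap as a sequence of cells, each a header plus a
// body. This is NOT RHeap's real layout; the 8-byte header and the
// flat vector are assumptions made to illustrate the walk.
struct Cell {
    std::size_t size;   // body size in bytes
    bool allocated;     // on the allocated list or the free list
};

constexpr std::size_t kHeaderSize = 8;  // hypothetical cell header size

// Walk the cells and verify they exactly tile the heap area.
bool WalkHeap(const std::vector<Cell>& cells, std::size_t heapSize) {
    std::size_t total = 0;
    for (const Cell& c : cells) {
        if (c.size == 0) return false;       // a zero-sized cell is corruption
        total += kHeaderSize + c.size;
        if (total > heapSize) return false;  // a cell runs past the heap end
    }
    return total == heapSize;                // cells must account for every byte
}
```

If the walk finds a cell running past the end of the heap, or bytes that belong to no cell, the heap is corrupt.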

How to share heaps

Heaps may be shared between threads within a process. When a new thread is created, it can use the same heap as the creating thread, a heap explicitly created for it by the creating thread, or the heap automatically created for it by the operating system. Only in the first two cases is the heap shared.

How to switch heaps

When freeing memory, it is extremely important that the current heap is the heap from which the memory was allocated: trying to free a memory cell through a different heap has undefined consequences. A thread can switch between heaps using User::SwitchHeap(); after a call to this function, any new request for memory is satisfied from the new heap.

How to walk the heap

A heap can be checked to make sure that its allocated and free cells are in a consistent state and that the heap is not corrupt.

Virtual machine model

The Kernel provides a 'virtual machine' environment to user processes. Each process accesses its data in the same virtual address range, called the data section, which runs from 0x00400000 to 0x3FFFFFFF; static data always appears at 0x00400000, and the code chunk for RAM-loaded processes always appears at 0x20000000.

This allows multiple processes to run, each executing the same code (e.g. multiple word-processor documents open at the same time, each in a separate instance of the application), with the same code chunk used for each of the processes, reducing RAM usage. In effect, each user process has the same kind of view. Code instructions address data using the virtual address; the Memory Management Unit (MMU) is responsible for translating the virtual address to the physical RAM address.

Only one chunk can occupy a given virtual address range at a time, so a context switch between different processes involves re-mapping the chunks. The process chunks of the old process are re-mapped to their home addresses. These lie in the home section, the virtual address range from 0x80000000 to 0xFFFFFFFF. ROM code is normally mapped into the address range 0x50000000 to 0x5FFFFFFF.
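The fixed ranges quoted above can be captured in a small classification function. This is only a summary of the layout as stated in the text (data section, ROM region, home section); addresses between the data section and ROM are left unclassified here:

```cpp
#include <cassert>
#include <cstdint>

// Classify a virtual address against the Symbian OS layout described
// in the text: data section 0x00400000-0x3FFFFFFF (static data at its
// base, RAM-loaded code at 0x20000000), ROM at 0x50000000-0x5FFFFFFF,
// home section 0x80000000-0xFFFFFFFF. Purely illustrative.
enum class Region { DataSection, Rom, HomeSection, Other };

Region Classify(std::uint32_t addr) {
    if (addr >= 0x00400000u && addr <= 0x3FFFFFFFu) return Region::DataSection;
    if (addr >= 0x50000000u && addr <= 0x5FFFFFFFu) return Region::Rom;
    if (addr >= 0x80000000u) return Region::HomeSection;  // up to 0xFFFFFFFF
    return Region::Other;
}
```

Note that both the static data base (0x00400000) and the RAM-loaded code chunk base (0x20000000) fall inside the data section.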

The process chunks of the new process are mapped from their home addresses back to the data section. Chunks which are not accessible by the current user process reside in the home section, with supervisor-only access permissions, so that only the kernel can access them. The Kernel's data and stack/heap chunks also reside in the home section; these are never visible to user processes. Code chunks for RAM-loaded libraries reside at the top end of the home section and have user read-only access, so that all user processes can execute code from a loaded library.

A context switch between processes thus involves:
- moving the old process's chunks to the home section and changing their access permissions to supervisor-only
- moving the new process's chunks to the data section and changing their access permissions back to user-accessible.

This is best seen graphically. In the first diagram, user process 1 is running and can 'see' all chunks in the clear boxes; boxes with a dark background represent chunks that are not visible to user process 1. When user process 2 is running, the context switch re-maps user process 2's data to the data section, and user process 1's data is re-mapped to the home section, as the second diagram shows.

(Diagrams: user process 1's view; user process 2's view.)

Virtual Memory and Paging

The two memory areas of interest are the home area and the run area.

Memory Areas

The home area is where the data chunks for a process are kept when the process has been launched but is not the currently active process. The home area is a protected region of memory: only kernel-level code can read and write it. When a process is scheduled to execute, its data chunks are moved (remapped via the MMU) from the home area of virtual memory to the area known as the run area, and process execution starts. You can think of the run area as a sort of stage for processes, and the home area as the backstage area: when a process is ready to perform, it goes on stage, i.e. its data chunks are moved to the run area, and the process executes.

Why aren't the process data chunks simply left in the home area when the process executes? Because the process code always expects its data to reside in the same place. The linker places data address references (when code references a static variable, for example) in the code image at build time, so the process code expects its data in the single place specified by the linker (i.e. the run area), no matter which instance of the program is running. Note, however, that code chunks are never moved to the run area: unlike data chunks, there are no separate copies of code for each process instance, and the code can be run from its location in the home area.

Types of Memory in Symbian OS

Random Access Memory (RAM). RAM is the volatile execution and data memory used by running programs. Applications vary in how much RAM they use, and this also depends on what the application is doing at the time. For example, a browser application loading a web page needs to allocate more RAM for the web page data as it is loaded. Also, the more RAM you have, the more programs you can run on your smartphone at once. Typically, mobile phones have between 7 and 30 MB of RAM available for applications to use.

Read Only Memory (ROM). The ROM is where the Symbian OS software itself resides. It includes all the startup code to boot the device, as well as all device drivers and other hardware-specific code. This area cannot be written to by a user, although some of it can be seen by the file system as drive z:. Viewing the z: drive will show all the built-in applications of the OS, as well as the system DLLs, device drivers and system configuration files. For added efficiency, code in ROM is executed in place, i.e. it is not loaded into RAM before executing. Typically a phone has between 16 and 32 MB of ROM.

Internal flash disk. The internal flash acts like a disk drive and allows files to be read and written via the Symbian OS file system. The file system is fully featured and supports a hierarchical directory structure, with features very similar to those found on high-end operating systems. The internal flash drive is represented as the c: drive to the file system. This memory contains user-loaded applications, as well as data such as documents, pictures, video, bookmarks and calendar entries. The size of the internal flash disk varies with the phone, but it can be quite generous: for example, the Nokia 9500 has 80 MB of internal flash space available to the user. On many phones, however, available internal user space is significantly less; the Nokia 6600, for example, has 6 MB of flash space available to the user.

Removable memory cards. Memory cards act as removable disk drives and allow you to expand the storage provided internally. You can read from and write to a memory card just as to the internal disk, including saving user data and even installing applications. The card is treated as another disk volume by the file system and is represented by a drive letter such as d: or e: (this varies between phones). The memory card formats (MMC and SD are examples) and available sizes vary by phone; card sizes range from 16 MB (or even less) to 1 GB.
Direct Memory Access

Direct Memory Access (DMA) is used by Symbian OS to offload the burden of high-bandwidth memory-to-peripheral data transfers and allow the CPU to perform other tasks. DMA can reduce the interrupt load by a factor of 100 for a given peripheral, saving power and increasing the real-time robustness of that interface.

A DMA engine is a bus-master peripheral. It can be programmed to move large quantities of data between peripherals and memory without the intervention of the CPU. Multi-channel DMA engines are capable of handling more than one transfer at a time. SoCs for Symbian phones should have as many channels as there are peripheral ports that require DMA; an additional channel for memory-to-memory copies can be useful.

A DMA channel transfer is initiated by programming the control registers with burst configuration commands, the transfer size, and the physical addresses of the target RAM and the peripheral FIFO, followed by a DMA start command. The data transfers are hardware flow-controlled by the peripheral interface, since the peripherals will always be slower than the system RAM. In a memory-to-peripheral transfer, the DMA engine waits until the peripheral signals that it is ready for more data. The engine reads a burst of data, typically 8, 16 or 32 bytes, into a DMA internal buffer, and then writes the data out into the peripheral FIFO. The channel increments the read address ready for the next burst until the total transfer has completed, when it raises a completion interrupt.

A DMA engine that raises an interrupt at the end of every transfer is single-buffered: the CPU has to service an interrupt and re-queue the next DMA transfer before any more data will flow. An audio interface has a real-time response window determined by its FIFO depth and drain rate, and the DMA ISR must complete within this time to avoid data underflow; for 16-bit stereo audio this window would be about 160 µs. Double-buffered DMA engines allow the framework to queue up the next transfer while the current one is taking place, by having a duplicate set of channel registers that the engine switches between. Double-buffering increases the real-time response window up to the duration of a whole transfer, for example about 20 ms for a 4 KB audio transfer buffer.

Scatter-gather DMA engines add another layer of sophistication and programmability. A list of DMA commands is assembled in RAM, and the channel is then told to process it by loading the first command into the engine. At the end of each transfer, the DMA engine loads the next command, until it runs out of commands. New commands can be added to, or updated in, the list while DMA is in progress, so in theory the audio example need never stop. Scatter-gather engines are good for transferring data into virtual memory systems where the RAM consists of fragmented pages. They are also good for complex peripheral interactions where data reads need to be interspersed with register reads; NAND controllers, for example, require 512 bytes of data, followed by 16 bytes of metadata and the reading of the ECC registers, for each incoming block.

Power savings come from using DMA: the CPU can be idle during a transfer, DMA bursts don't require interrupts or instructions to move a few bytes of data, and the DMA engine can be tuned to match the performance characteristics of the peripheral and memory systems.
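The response windows quoted for the audio interface follow directly from buffer size and drain rate. Assuming 16-bit stereo audio at 48 kHz (4 bytes per frame, so 192,000 bytes/s) and a 32-byte FIFO, both of which are assumptions since the text only gives the resulting windows, the arithmetic reproduces the figures:

```cpp
#include <cassert>

// Back-of-envelope check of the DMA real-time windows quoted in the
// text. The 48 kHz sample rate and 32-byte FIFO depth are assumptions;
// the text only states the resulting windows (~160 us and ~20 ms).
constexpr double kBytesPerSecond = 48000.0 * 2 /*channels*/ * 2 /*bytes*/;

// Time, in microseconds, for the audio interface to drain `bytes`.
constexpr double DrainMicros(double bytes) {
    return bytes / kBytesPerSecond * 1e6;
}
```

Single-buffered: the ISR must finish within DrainMicros(32), about 167 µs, matching the "about 160 µs" figure. Double-buffered with a 4 KB transfer buffer: DrainMicros(4096) is about 21 ms, matching the "about 20 ms" figure.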

Flash memory

Symbian phones use flash memory as their principal store of system code and user data. Flash memory is a silicon-based non-volatile storage medium that can be programmed and erased electronically. It comes in two major types, NOR and NAND, named after their fundamental silicon gate designs. Symbian OS phones make best use of both types of flash through the selection of file systems.

The built-in system code and applications appear to Symbian software as one large read-only drive, known as the Z: drive; the Z: drive is sometimes known as the ROM image. User data and installed applications reside on the internal, writable C: drive. A typical Symbian phone today will use between 32 and 64 MB of flash for the code and user data; this is the total ROM budget. Symbian uses many techniques to minimize the code and data sizes within a phone, such as the THUMB instruction set, prelinked XIP images, compressed executables, compressed data formats, and coding standards that emphasize minimal code size.

NOR flash. NOR flashes allow unlimited writes to the same data block, to turn ones into zeros. Flashes usually have a write buffer of around 32 to 64 bytes that allows a number of bytes to be written in parallel to increase speed. Erasing a NOR segment is slow, taking about half a second to one second, but erases can be suspended and later restarted. Completed writes and erases update the status register within the flash, and may generate an external interrupt; without an interrupt, the CPU needs to use a high-speed timer to poll the flash for completion.

NAND flash. NAND flash is treated as a block-based disk rather than randomly addressable memory. Unlike NOR, it does not have any address lines, so it cannot appear in the memory map. This means that code cannot execute directly from NAND: it has to be copied into RAM first, which results in the need for extra RAM in a NAND phone compared to a similar NOR device. NAND flash writes are about 10 times faster than those on NOR flash. A phone cannot boot directly from NAND; the process is more complex, requiring a set of boot loaders that build upon each other, finally resulting in a few megabytes of core Symbian OS image, the ROM, being loaded into RAM, where it executes. NAND is also inherently less reliable than NOR. Even so, the lower price of NAND compared to NOR makes it attractive for mass-market phone projects, even after taking into account the extra RAM required.

Random Access Memory

Random Access Memory (RAM) holds all the live data within the system, and often the executing code. The quantity of RAM determines the type and number of applications you can run simultaneously; the access speed of the RAM contributes to their performance. Multimedia uses lots of RAM for megapixel camera images and video recording. The RAM chip is a significant part of the total cost of a phone.

Mobile SDRAM. Manufacturers have started to produce RAM specifically for the mobile phone market, known as Low Power or Mobile SDRAM. This memory has been optimized for lower power consumption and slower interface speeds of about 100 MHz, compared to normal PC memory, which is four times faster. Mobile memories have additional features to help maintain battery life: power-down mode, for example, enables the memory controller to disable the RAM part without the need for external control circuitry.

Internal RAM (IRAM). Memory that is embedded within the System-on-Chip (SoC) is known as Internal RAM (IRAM); its quantity is much less than that of main memory. IRAM can be used as an internal frame buffer: an LCD controller driving a dumb display needs to re-read the entire frame buffer 60 times a second. IRAM can also be useful as a scratch pad, or as a message box between multiple processors on the SoC.

How to track down memory leaks

This article presents a method of tracking down a memory leak in cases where the address of the leaked heap allocation keeps changing. The solution consists of the following steps:
- Start the application and run it until it panics. Take note of the address that wasn't deallocated (assume it's 0x1046a432).
- Restart the application and run it until it panics, making sure you run it exactly as you did the first time. Again take note of the address (assume it's 0x1a52a432).
- Run it one more time, again until it panics and in exactly the same way. Again take note of the address (assume it's 0x245ea432).
Looking at the numbers, you will find a correlation.
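The correlation in the leak-tracking method described above is that the three leaked addresses share the same low-order bits: the leaked cell sits at a fixed offset within its heap chunk, while the chunk's base address moves between runs. That common suffix can be found mechanically, as this sketch shows:

```cpp
#include <cassert>
#include <cstdint>

// The leaked addresses from the three runs in the method above. The
// high-order bits (the heap chunk's base) move between runs; the
// low-order bits, the cell's offset within the chunk, stay fixed.
constexpr std::uint32_t kRuns[] = {0x1046a432, 0x1a52a432, 0x245ea432};

// Count how many low-order bits all three addresses have in common.
int CommonSuffixBits() {
    for (int bits = 0; bits < 32; ++bits) {
        const std::uint32_t bit = 1u << bits;
        for (std::uint32_t a : kRuns)
            if ((a & bit) != (kRuns[0] & bit)) return bits;
    }
    return 32;  // all addresses identical
}
```

Here all three addresses end in 0xa432, so at least the low 16 bits agree; that stable offset is what identifies the same leaked allocation across runs.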