
The Operating System Machine Level. Chapter 6

Contemporary Multilevel Machines A six-level computer. The support method for each level is indicated below it.

Operating System Machine a) The operating system is a program that, from the programmer's point of view, adds a variety of new instructions and features. b) Normally, the operating system is implemented in software. c) Both the OSM and the ISA levels are abstract (software).

OSM and ISA a) The OSM-level instruction set is the complete set of instructions available to application programmers. b) It contains all ISA-level instructions plus the set of new instructions. c) The new instructions are called system calls. A system call invokes a predefined operating system service. d) The OSM level is always interpreted.

Operating System Machine Positioning of the operating system machine level.

Three Topics of OSM a) Virtual Memory: a technique to make the machine appear to have more memory than it really has. b) File I/O: a higher-level concept of I/O instructions. c) Parallel Processing: how multiple processes can execute, communicate, and synchronize.

Virtual Memory a) The IBM 650 had only 2000 words of memory, the PDP-1 only 4096 18-bit words. b) The traditional solution was the use of secondary memory, such as disk: overlay management. c) In 1961, a group in Manchester, England, proposed a method to perform the overlay process automatically: virtual memory! d) By the early 1970s, virtual memory had become available on most computers.

Paging a) The idea is to separate the concepts of address space and memory locations. b) The technique for automatic overlaying is called paging, and the chunks of program read in from disk are called pages. c) Because the programmer can program as though paging did not exist, the paging mechanism is said to be transparent. d) The virtual machine provided by the operating system can provide the illusion that all the virtual addresses are backed up by real memory. Only operating system writers have to know how the illusion is supported.

Paging A mapping in which virtual addresses 4096 to 8191 are mapped onto main memory addresses 0 to 4095.

Paging - Example a) The contents of main memory would be saved on disk. b) Words 8192 to 12287 would be located on disk. c) Words 8192 to 12287 would be loaded into main memory. d) The address map would be changed to map addresses 8192 to 12287 onto memory locations 0 to 4095. e) Execution would continue as though nothing unusual had happened.

Implementation of Paging The virtual address space is broken up into a number of equal-sized pages. Page sizes ranging from 512 bytes to 64 KB are common. Sizes as large as 4 MB are used occasionally. The page size is always a power of 2. The physical address space is broken up into pieces in a similar way, each piece being the size of a page. These pieces of main memory into which the pages go are called page frames.

Implementation of Paging Every computer with virtual memory has a device for doing the virtual-to-physical mapping. This device is called the MMU (Memory Management Unit). It may be on the CPU chip, or it may be on a separate chip that works closely with the CPU chip. Since our sample MMU maps from a 32-bit virtual address to a 15-bit physical address, it needs a 32-bit input register and a 15-bit output register.

Implementation of Paging On the following slide, the MMU is presented with a 32-bit virtual address. It separates the address into a 20-bit virtual page number and a 12-bit offset within the page (because the pages are 4 KB). The virtual page number is used as an index into the page table to find the entry for the page referenced. In the example, the virtual page number is 3, so entry 3 of the page table is selected, as shown. The first thing the MMU does is check to see if the page referenced is in main memory.

Implementation of Paging Not all virtual pages can be in memory at once. The MMU makes this check by examining the present/absent bit in the page table entry. In the example, the bit is 1, meaning the page is currently in memory. Now, the page frame value from the selected entry (6 in this case) is copied into the upper 3 bits of the 15-bit output register. In parallel with this operation, the low-order 12 bits of the virtual address (the page offset field) are copied into the low-order 12 bits of the output register.
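To make these steps concrete, here is a minimal C sketch of the mapping just described, assuming the example parameters from these slides (4 KB pages, a 16-entry page table covering the first 64 KB of virtual space, eight page frames, 15-bit physical addresses); the structure and names are illustrative, not taken from any real MMU.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define OFFSET_BITS 12                      /* 4 KB pages                        */
#define PAGE_SIZE   (1u << OFFSET_BITS)

struct pte {                                /* one page table entry              */
    unsigned present : 1;                   /* present/absent bit                */
    unsigned frame   : 3;                   /* page frame number, 0..7           */
};

static struct pte page_table[16];           /* 16 virtual pages = 64 KB          */

uint32_t translate(uint32_t vaddr)          /* returns a 15-bit physical address */
{
    uint32_t vpn    = vaddr >> OFFSET_BITS;     /* virtual page number           */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within the page        */

    if (vpn >= 16 || !page_table[vpn].present) {
        fprintf(stderr, "page fault on virtual page %u\n", vpn);
        exit(1);                                /* the OS would fetch the page   */
    }
    return ((uint32_t)page_table[vpn].frame << OFFSET_BITS) | offset;
}

With entry 3 marked present and pointing at frame 6, translate(3 * 4096 + 100) returns 6 * 4096 + 100, exactly the frame-number-plus-offset combination described above.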

Implementation of Paging (1) The first 64 KB of virtual address space divided into 16 pages, with each page being 4 KB. a) Indexing or indirect addressing may be used to generate this address.

Implementation of Paging (2) A 32 KB main memory divided up into eight page frames of 4 KB each.

Implementation of Paging (3) Formation of a main memory address from a virtual address.

Demand Paging and the Working Set Model A possible mapping of the first 16 virtual pages onto a main memory with eight page frames.

Paging a) The alternative approach is based on the observation that most programs do not reference their address space uniformly; instead, the references tend to cluster on a small number of pages: the locality principle. b) At any instant in time, there exists a set consisting of all the pages used by the K most recent memory references: the working set.

Page Replacement Policy a) When a program references a page that is not in main memory, the needed page must be fetched from the disk, and some other page will generally have to be sent back to the disk. b) Choosing a page to remove at random is not a good idea. c) One way is to predict when the next reference to each page will occur and remove the page whose predicted next reference lies furthest in the future. d) The Least Recently Used (LRU) algorithm evicts the page least recently used, because the a priori probability of its not being in the current working set is high.
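As an illustration of the LRU policy in (d), the sketch below assumes the operating system can record a time of last use for each page frame; real hardware only approximates this (reference bits, aging, and so on), so the names and structure are purely illustrative.

#include <stdint.h>

#define NUM_FRAMES 8

struct frame {
    int      vpn;            /* virtual page held in this frame, -1 if free   */
    uint64_t last_used;      /* "time" of the most recent reference           */
};

static struct frame frames[NUM_FRAMES];
static uint64_t now;         /* advanced on every memory reference            */

void touch(int f)            /* called when the page in frame f is referenced */
{
    frames[f].last_used = ++now;
}

int choose_victim_lru(void)  /* pick the frame whose last use is oldest       */
{
    int victim = 0;
    for (int f = 1; f < NUM_FRAMES; f++)
        if (frames[f].last_used < frames[victim].last_used)
            victim = f;
    return victim;
}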

Page Replacement Policy Failure of the LRU algorithm.

Alternative Page Replacement Algorithm a) First-In First-Out (FIFO) algorithm removes the least recently loaded page, independent of when this page was last referenced.

Page Internal Fragmentation a) Internal fragmentation is waste that occurs when the user's program and data happen not to fill an integral number of pages exactly. b) Using a small page size can minimize this waste. c) On the other hand, a small page size means many pages, as well as a large page table: more registers, increased cost, and more time to load and save.
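This trade-off is often quantified with the classic overhead estimate s*e/p + p/2 (page-table space plus an expected half page of internal fragmentation in the last page); the formula and the numbers below are not on the slide and are used only as an illustrative assumption.

#include <stdio.h>

int main(void)
{
    double s = 1 << 20;      /* assumed average process size: 1 MB            */
    double e = 8;            /* assumed size of one page table entry: 8 bytes */

    for (int p = 512; p <= 65536; p *= 2)            /* candidate page sizes  */
        printf("page size %6d B -> overhead %8.0f B\n",
               p, s * e / p + p / 2.0);
    return 0;
}

The first term shrinks and the second grows as the page size increases, which is exactly the tension described above; for these assumed values the minimum lies near 4 KB.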

Segmentation a) In many cases, having two or more separate virtual address spaces may be much better than having only one. b) Segments: many completely independent address spaces. c) Each segment consists of a linear sequence of addresses. Different segments may have different lengths, and the lengths may change during execution. d) Because each segment constitutes a separate address space, different segments can grow or shrink independently, without affecting each other.

Segmentation (1) In a one-dimensional address space with growing tables, one table may bump into another.

Segmentation (2) A segmented memory allows each table to grow or shrink independently of the other tables.

Segmentation (3) Comparison of paging and segmentation.

Implementation of Segmentation Segmentation can be implemented in one of two ways: swapping and paging. In the former, some set of segments is in memory at a given instant. If a reference is made to a segment not currently in memory, that segment is brought into memory; one or more other segments may have to be written to the disk to make room. This is like demand paging, but pages are fixed size and segments are not. External fragmentation can result because of this.

Implementation of Segmentation (1) (a)-(d) Development of external fragmentation. (e) Removal of the external fragmentation by compaction.

Implementation of Segmentation The second approach is to divide each segment up into fixed-size pages and demand page them. In this scheme, some of the pages of a segment may be in memory and others may not be. To page a segment, a separate page table is needed for each segment. Since a segment is just a linear address space, all the techniques we have seen so far for paging apply to each segment. The only new feature here is that each segment gets its own page table.

Implementation of Segmentation (2) Conversion of a two-part MULTICS address into a main memory address.

Virtual Memory on the Pentium 4 The Pentium 4 has a sophisticated virtual memory system that supports demand paging, pure segmentation, and segmentation with paging. The heart of the Pentium 4 virtual memory consists of two tables: the LDT (Local Descriptor Table) and the GDT (Global Descriptor Table). Each program has its own LDT, but there is a single GDT, shared by all programs.

Virtual Memory on the Pentium 4 The LDT describes segments local to each program, including its code, data, stack, and so on, whereas the GDT describes system segments, including the OS itself. To access a segment, a Pentium 4 program first loads a selector for that segment into one of the segment registers. Each selector is a 16-bit number. One of the selector bits tells whether the segment is local or global. Thirteen other bits specify the LDT or GDT entry number. The other 2 bits relate to protection.

Virtual Memory on the Pentium 4 Descriptor 0 is invalid and causes a trap if used. At the time a selector is loaded into a segment register, the corresponding descriptor is fetched from the LDT or GDT and stored in internal MMU registers so it can be accessed quickly. A descriptor consists of 8 bytes, including the segment's base address, size, and other information. The format of the selector makes locating the descriptor easy. First either the LDT or the GDT is selected, based on bit 2. Then the selector is copied to an MMU scratch register, and the 3 low-order bits are set to 0.

Virtual Memory on the Pentium 4 Now the address of either the LDT or GDT table (kept in internal MMU registers) is added to it, to give a direct pointer to the descriptor. A (selector, offset) pair is converted to a physical address. As soon as the hardware knows which segment register is being used, it can find the complete descriptor corresponding to that selector in its internal registers. If the segment does not exist (selector 0) or is currently not in memory (P is 0), a trap occurs.

Virtual Memory on the Pentium 4 (1) A Pentium 4 selector. A Pentium 4 code segment descriptor. Data segments differ slightly.

Virtual Memory on the Pentium 4 It then checks to see if the offset is beyond the end of the segment, in which case a trap also occurs. If the G (Granularity) field is 0, the LIMIT field is the exact segment size in bytes, up to 1 MB. If it is 1, the LIMIT field gives the segment size in pages. The Pentium 4 page size is never smaller than 4 KB, so 20 bits is enough for segments up to 2^32 bytes. Assuming that the segment is in memory and the offset is in range, the Pentium 4 then adds the 32-bit BASE field in the descriptor to the offset to form a linear address.

Virtual Memory on the Pentium 4 (2) Conversion of a (selector, offset) pair to a linear address.
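A compact C sketch of the checks and the addition described on the last two slides, with the descriptor reduced to the fields mentioned there (BASE, LIMIT, G, and a present bit); the field widths, names, and the exact limit check are simplified for illustration.

#include <stdint.h>
#include <stdlib.h>

struct descriptor {
    uint32_t base;           /* 32-bit BASE field                             */
    uint32_t limit;          /* LIMIT field (20 bits in the real format)      */
    unsigned g : 1;          /* granularity: 0 = size in bytes, 1 = in pages  */
    unsigned p : 1;          /* present bit                                   */
};

uint32_t to_linear(const struct descriptor *d, uint32_t offset)
{
    uint32_t size = d->g ? d->limit << 12 : d->limit;   /* segment size in bytes */

    if (!d->p || offset >= size)
        abort();             /* absent segment or offset out of range: trap   */

    return d->base + offset; /* the linear address                            */
}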

Virtual Memory on the Pentium 4 If paging is disabled (by a bit in a global control register), the linear address is interpreted as the physical address and sent to the memory to read or write. Thus, with paging disabled we have a pure segmentation scheme, with each segment's base address given in its descriptor. If paging is enabled, the linear address is interpreted as a virtual address and mapped onto the physical address using page tables. A two-level mapping is used.

Virtual Memory on the Pentium 4 Each running program has a page directory consisting of 1024 32-bit entries. It is located at an address pointed to by a global register. Each entry points to a page table also containing 1024 32-bit entries. The page table entries point to page frames. To avoid making repeated references to memory, the Pentium 4 MMU has special hardware support to look up the most recently used DIR-PAGE combinations quickly.

Virtual Memory on the Pentium 4 (3) Mapping of a linear address onto a physical address.
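The two-level walk can be written out in a few lines of C. The sketch below keeps the page directory and the page tables as ordinary in-memory arrays purely to show the index arithmetic (10-bit DIR, 10-bit PAGE, 12-bit OFFSET); this is not how the hardware actually stores or caches them.

#include <stdint.h>
#include <stdlib.h>

static uint32_t *page_directory[1024];       /* each entry -> a 1024-entry page table */

uint32_t linear_to_physical(uint32_t lin)
{
    uint32_t dir    = (lin >> 22) & 0x3FF;    /* top 10 bits: directory index    */
    uint32_t page   = (lin >> 12) & 0x3FF;    /* next 10 bits: page table index  */
    uint32_t offset =  lin        & 0xFFF;    /* low 12 bits: offset in the page */

    uint32_t *table = page_directory[dir];
    if (table == NULL)
        abort();                              /* directory entry not present     */

    uint32_t frame = table[page];             /* page frame number               */
    return (frame << 12) | offset;
}

The special hardware mentioned above simply caches recent DIR-PAGE combinations and their frames so that this walk can usually be skipped.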

Virtual Memory on the Pentium 4 If an application does not need segmentation, all segment registers can be set up with the same selector, whose descriptor has BASE = 0 and LIMIT set to the maximum. The Pentium 4 also supports four protection levels, with level 0 being the most privileged and level 3 the least. At each instant, a running program is at a certain level, indicated by a 2-bit field in its PSW (Program Status Word). Each segment also belongs to a certain level.

Virtual Memory on the Pentium 4 (4) Protection on the Pentium 4.

Virtual Memory on the Pentium 4 As long as a program restricts itself to using segments at its own level, everything works fine. Attempts to access data at a higher level are permitted. Attempts to access data at a lower level are illegal and cause traps.

Virtual I/O Instructions a) Input/Output is one of the areas where the OSM and ISA levels differ considerably. b) Security reasons. c) Doing I/O at the ISA level is extremely tedious and complex.

Files a) One way of organizing the virtual I/O is to use an abstraction called a file. b) In its simplest form, a file consists of a sequence of bytes written to an I/O device. c) Different files have different lengths and other properties (for example, files on a disk). The abstraction of a file allows virtual I/O to be organized in a simple way.

Files a) File I/O is done by system calls for opening, reading, writing, and closing files. b) The process of opening a file allows the operating system to locate the file on disk and bring into memory the information necessary to access it. c) The read system call must have: an indication of which open file is to be read; a pointer to a buffer in memory in which to put the data; the number of bytes to be read. d) The read call puts the requested data in the buffer.
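The three items in (c) map directly onto the arguments of a UNIX-style read call, as in the short fragment below (a POSIX environment is assumed; the file name is illustrative).

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[512];                             /* buffer in which to put the data */
    int fd = open("data.txt", O_RDONLY);       /* which open file is to be read   */
    if (fd < 0)
        return 1;

    ssize_t n = read(fd, buf, sizeof buf);     /* where to put it, how many bytes */
    close(fd);
    return n < 0;                              /* nonzero exit on a read error    */
}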

Files a) Mainframe operating systems have a more sophisticated definition of a file: a sequence of logical records, each with a well-defined structure.

Implementation of Virtual I/O Instructions (1) Reading a file consisting of logical records. (a) Before reading record 19. (b) After reading record 19.

Implementation of Virtual I/O Instructions a) The allocation unit often consists of a block of consecutive sectors. b) A fundamental property of a file implementation is whether a file is stored in consecutive allocation units or not.

Implementation of Virtual I/O Instructions (2) Disk allocation strategies. (a) A file in consecutive sectors. (b) A file not in consecutive sectors.

Implementation of Virtual I/O Instructions a) The operating system sees the file as an ordered, although not necessarily consecutive, collection of allocation units (on disk). b) A table, the file index, is needed to locate any arbitrary byte or logical record, given the allocation units and their actual disk addresses (used by UNIX). c) An alternative method of locating the allocation units of a file is to organize the file as a linked list (used by MS-DOS and Windows 95/98).
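The cost difference between (b) and (c) shows up when a program seeks into the middle of a file. Below is a sketch of the linked-list (MS-DOS-style) lookup; the table name and sizes are illustrative. With a file index, the same lookup is a single table reference.

#define END_OF_FILE (-1)
#define NUM_UNITS   65536

static int next_unit[NUM_UNITS];     /* next_unit[u] = following allocation unit of
                                        the file, or END_OF_FILE at the chain's end */

int nth_unit(int first_unit, int n)  /* allocation unit holding logical block n     */
{
    int u = first_unit;
    while (n-- > 0 && u != END_OF_FILE)
        u = next_unit[u];            /* one table lookup per link followed          */
    return u;                        /* END_OF_FILE if the file is shorter than n   */
}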

Implementation of Virtual I/O Instructions a) Why do we need both consecutively allocated files and nonconsecutively allocated files?

Implementation of Virtual I/O Instructions a) In order to allocate space on the disk for a file, the operating system must keep track of which blocks are available and which are already in use storing other files. b) One method consists of maintaining a list of all the holes, a hole being any number of contiguous allocation units. This list is called the free list. c) An alternative method is to maintain a bit map. It has the advantage of being constant in size, in contrast with the free list. d) When disk space is allocated to a file or returned, the free list or bit map is updated.

Implementation of Virtual I/O Instructions (3) Two ways of keeping track of available sectors. (a) A free list. (b) A bit map.
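A minimal sketch of the bit-map method: one bit per allocation unit, 0 meaning free and 1 meaning in use. The sizes and names are illustrative; the point is that the map stays the same size no matter how fragmented the disk becomes.

#include <stdint.h>

#define NUM_UNITS 4096

static uint8_t bitmap[NUM_UNITS / 8];            /* fixed-size map of the disk     */

int alloc_unit(void)                             /* returns a unit number, or -1   */
{
    for (int u = 0; u < NUM_UNITS; u++)
        if (!(bitmap[u / 8] & (1u << (u % 8)))) {
            bitmap[u / 8] |= (uint8_t)(1u << (u % 8));   /* mark it in use         */
            return u;
        }
    return -1;                                   /* no free allocation units       */
}

void free_unit(int u)                            /* give a unit back                */
{
    bitmap[u / 8] &= (uint8_t)~(1u << (u % 8));
}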

Directory Management Instructions a) Information that is directly accessible to the computer without the need for human intervention is on-line, as contrasted with off-line information. b) The usual way for an operating system to organize on-line files is to group them into directories. c) System calls are provided to: create a file and enter it in a directory; delete a file from a directory; rename a file; change the protection status of a file. d) Tree of directories.

Directory Management Instructions A user file directory and the contents of a

Virtual Instructions for Parallel Processing a) Some computations can be most conveniently programmed for two or more cooperating processes running in parallel rather than for a single process. Others can be divided into pieces, which can then be carried out in parallel to decrease the elapsed time required for the total computation. b) Physical limit (for subnanosecond computers). c) On a computer with more than one CPU, each of several cooperating processes can be assigned to its own CPU (proceeding simultaneously). d) If only one processor is available, the effect of parallel processing can be simulated by having the processor run each process in turn for a short time.

Virtual Instructions for Parallel Processing (a) True parallel processing with multiple CPUs. (b) Parallel processing simulated by switching one CPU among three processes.

Process Creation a) Most modern operating systems allow processes to be created and terminated dynamically. b) To take full advantage of this, a system call to create a new process is needed to achieve parallel processing. c) In some cases, the creating (parent) process maintains partial or complete control over the created (child) process. d) Virtual instructions exist for a parent to stop, restart, examine, and terminate its children. In other cases, once a process has been created, there is no way for the parent to forcibly stop, restart, examine, or terminate it. Two processes run
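As a concrete, UNIX-flavoured illustration of (b)-(d), the fragment below creates a child process and waits for it; it anticipates the fork-based process tree shown at the end of the chapter and assumes a POSIX environment.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();            /* system call that creates a child process  */
    if (pid == 0) {                /* child: runs in parallel with its parent   */
        printf("child running\n");
        _exit(0);
    }
    waitpid(pid, NULL, 0);         /* parent keeps (partial) control: here it   */
    return 0;                      /* simply waits for the child to terminate   */
}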

Race Conditions (1) Use of a circular buffer.

Race Conditions (2) Parallel processing with a fatal race condition.

Race Conditions (3) Parallel processing with a fatal race condition.

Race Conditions (4) Parallel processing with a fatal race condition.

Race Conditions (5) Failure of the producer-consumer communication

Process Synchronization Using Semaphores (1) The effect of a semaphore operation.

Process Synchronization Using Semaphores (2) Parallel processing using semaphores.

Process Synchronization Using Semaphores (3) Parallel processing using semaphores.

Process Synchronization Using Semaphores (4) Parallel processing using semaphores.

Critical Regions a) While the CPU protects itself against simultaneous use, the code that interacts with other serially reusable resources cannot. b) Such code is called a critical region. c) If two tasks enter the same critical region simultaneously, a catastrophic error may occur.

Semaphores a) The most common method for protecting critical regions involves a special variable called a semaphore. b) A semaphore S is a memory location that acts as a lock to protect critical regions. c) Two operations: wait, P(S), and signal, V(S).

#include <stdbool.h>

void P(bool *S)                /* wait */
{
    while (*S == true)
        ;                      /* busy-wait until the semaphore becomes false     */
    *S = true;                 /* then take it; this test-and-set must be atomic  */
}

void V(bool *S)                /* signal */
{
    *S = false;                /* release the semaphore                           */
}

The semaphore is initialized to false; S points to the shared semaphore so that P and V modify it in place.

Semaphores a) The wait operation suspends any program calling it until the semaphore S is FALSE, whereas the signal operation sets the semaphore S to FALSE. b) Code that enters a critical region is bracketed by calls to wait and signal. This prevents more than one process from entering the critical region.

Process_1:  ...  P(S)  critical region  V(S)  ...
Process_2:  ...  P(S)  critical region  V(S)  ...

Mailboxes and Semaphores a) Mailboxes can be used to implement semaphores if semaphore primitives are not provided by the operating system. b) In this case, there is the added advantage that the pend instruction suspends the waiting process rather than having it busy-wait on the semaphore.

void P(int S)                  /* wait: pend suspends the caller on mailbox S   */
{
    int key = 0;
    pend(key, S);
}

void V(int S)                  /* signal: post on mailbox S wakes a pending caller */
{
    int key = 0;
    post(key, S);
}

Counting Semaphores a) The P and V semaphores are called binary semaphores because they can take one of two values. b) Alternatively, a counting semaphore can be used to protect pools of resources, or to keep track of the number of free resources.

void P(int *S)                 /* wait on a counting semaphore             */
{
    (*S)--;
    while (*S < 0)
        ;                      /* busy-wait while no resource is free      */
}

void V(int *S)                 /* signal a counting semaphore              */
{
    (*S)++;
}

#include <stdbool.h>           /* uses the binary P and V defined earlier       */

bool S = false;                /* binary semaphore protecting the counter       */
bool T = true;                 /* binary semaphore protecting the resource pool */
int  R;                        /* number of free resources                      */

void MP(void)                  /* multiple wait                                 */
{
    P(&S);                     /* lock counter                                  */
    R--;                       /* request a resource                            */
    if (R < 0)                 /* none available?                               */
    {
        V(&S);                 /* release counter                               */
        P(&T);                 /* wait for a free resource                      */
    }
    V(&S);                     /* release counter                               */
}

void MV(void)                  /* multiple signal                               */
{
    P(&S);                     /* lock counter                                  */
    R++;                       /* free a resource                               */
    if (R <= 0)
        V(&T);                 /* wake a waiter, which releases the counter     */
    else
        V(&S);                 /* release counter                               */
}

Counting Semaphores a) The integer R keeps track of the number of free resources. Binary semaphore S protects R, and binary semaphore T is used to protect the pool of resources. b) The initial value of S is set to FALSE, T to TRUE, and R to the number of available resources in the kernel.

OSM Level - ISA Level a) Virtual Memory b) Virtual I/O Instructions c) Virtual Instructions for Parallel Processing

UNIX (1) A rough breakdown of the UNIX system calls.

UNIX (2) The structure of a typical UNIX system.

Windows XP The structure of Windows XP.

UNIX Virtual Memory The address space of a single UNIX process.

Windows XP Virtual Memory The principal Windows XP API calls for managing virtual memory.

UNIX Virtual I/O (1) The principal UNIX file system calls.

UNIX Virtual I/O (2) A program fragment for copying a file using the UNIX system calls. This fragment is in C because Java hides the low-level system calls and we are trying to expose them.
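A minimal sketch of such a copy loop, using only the UNIX calls listed above (open, creat, read, write, close), might look as follows; the buffer size and permission bits are illustrative assumptions, not the book's exact fragment.

#include <fcntl.h>
#include <unistd.h>

#define BUF_SIZE 4096

int copy_file(const char *src, const char *dst)
{
    char buf[BUF_SIZE];
    int in  = open(src, O_RDONLY);          /* existing file, opened read-only */
    int out = creat(dst, 0644);             /* new file with mode rw-r--r--    */
    if (in < 0 || out < 0)
        return -1;

    ssize_t n;
    while ((n = read(in, buf, BUF_SIZE)) > 0)
        write(out, buf, (size_t)n);         /* write exactly what was read     */

    close(in);
    close(out);
    return n < 0 ? -1 : 0;                  /* -1 if the last read failed      */
}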

UNIX Virtual I/O (3) Part of a typical UNIX directory system.

UNIX Virtual I/O (4) The principal UNIX directory management calls.

Windows XP Virtual I/O (1) The principal Win32 API functions for file I/O.

Windows XP Virtual I/O (2) A program fragment for copying a file using the Windows XP API functions. This fragment is in C because Java hides the low-level system calls and we are trying to expose them.

Windows XP Virtual I/O (3) The principal Win32 API functions for directory management. The second column gives the nearest UNIX equivalent, when one exists.

Windows XP Virtual I/O (4) The Windows XP master file table.

UNIX Process Management (1) A process tree in UNIX.

UNIX Process Management (2) The principal POSIX thread calls.