The Input/Output Subsystem


Cmpt 250, April 1, 2008

So far, we've discussed the processor and the memory hierarchy, and we've looked at how the two interact. Exclude, for a moment, the secondary memory that lies at the bottom of the memory hierarchy. The time scale is uniformly fast, with transaction times ranging from subnanosecond to a few hundred nanoseconds. Memory is random access. The time required for transactions is predictable, and we're dealing with one transaction model. Interaction between the memory and the CPU is highly structured, occurring in a very predictable way as the CPU fetches and executes instructions. It's also synchronised to the CPU clock. The CPU and the top levels of the memory hierarchy are very tightly integrated. As a user, you cannot swap out one manufacturer's L2 cache and replace it with another. The distances involved are small. L1 and L2 cache are often fabricated on the same IC as the CPU. At worst, they may be a separate IC within the same hybrid package. Memory is a few inches away, somewhere on the motherboard.

Input-output, on the other hand, is very different. The data rate varies from gigabits/second (network interfaces, disks) to seconds/bit (mouse or keyboard events). Meeting the bandwidth requirements is challenging for high-speed i/o devices. The time required for a transaction is not predictable: most i/o devices are not random-access, so the time required will vary depending on the data involved and the response time of the device. Worse, human interaction may be involved. The occurrence of an i/o event is not predictable. When it occurs, the CPU may need to respond very quickly (milli- or microseconds). Response time (latency) becomes an issue, in addition to throughput. Some i/o events are just not as important as others. We may well want to postpone handling an i/o event, or suspend the handling of one event while we deal with another that is more important and/or requires a faster response. Distance is greater, and the variation is greater. Some peripherals will be within a few inches of the CPU, others may be a few feet away.

2 Cmpt 250 Interfaces April 1, 2008 There s an enormous variation in available i/o devices, and consumers expect to be able to change the number and configuration of i/o devices with ease. As we ll see, this requires some new capabilities and approaches. To get the attention of the CPU, we ll introduce the idea of an interrupt (more generally, an exception). This will allow us to force the CPU to suspend execution of the current instruction stream and divert to another instruction stream to deal with the interrupt. We ll also see how this can be made invisible to the interrupted process. To move data between i/o devices and the CPU and memory, we ll be using busses. To deal with the nearly infinite variety of access models embodied in i/o devices, we ll use a (relatively) limited number of communication conventions called bus protocols, embedded in bus standards (PCI, USB, etc.). Manufacturers package their i/o devices with standard interfaces that conform to some bus standard. Because of the range of distances involved, and the differing speeds of attached devices, bus protocols will often use transaction models that require explicit acknowledgement by each party. There will typically be provision to vary (usually, extend) a transaction in order to allow one party time to respond. To accommodate the huge variation in data rates, some bus standards are designed to support low-speed devices, others high-speed devices. One class of interface is a device which acts as a time-division multiplexer, connecting several slow busses to a high-speed bus. Interfaces At its most basic, the role of an interface is to transform the raw interface provided by the i/o device into an interface that conforms to the conventions of digital circuits. Translation between analog signal levels and digital 1 s and 0 s is a nearly universal function in interfaces. I/O devices are a mixture of digital and analog electronic components, optical components, and mechanical components. For example, a request to read a disk block must be translated into a sequence of analog signals that will cause motors to properly position the disk heads. Once the data begins to travel past the heads, the 2

analog signal generated by sensing the magnetic fields on the disk must be translated into 1's and 0's for use by the CPU.

Another nearly universal function is the provision of buffers for data transfer. The basic function is to change the blocking of the data, but that's not the important idea. By providing buffers for data transfer, the interface can reduce the amount of work the CPU must perform to transfer data. Buffers also give the CPU more freedom to choose when to respond to an i/o event. For example, many network protocols transmit data bit by bit over a single wire. If a Gigabit Ethernet interface required the CPU to execute instructions to move each bit of data from the interface to memory, no CPU could keep up. Instead, the interface provides a fairly large data buffer. The CPU executes instructions to load data into this buffer and then executes a few more instructions to tell the interface to transmit the data. Dedicated logic within the interface processes the block of data one bit at a time.

Here's a block diagram for a generic interface. [Figure: generic interface block diagram, showing the interface register on the computer system side; the data buffer, control register, and status register; signal conversion circuitry on the device side; and control logic driven by the select, r/w, request, and ready signals.]

On the device side, there will be connections for data, control, and status signals. Inside the interface, there will be digital logic (individual registers, or larger memory arrays) to hold the data. Signal conversion circuitry handles blocking (parallel/serial conversion, for example) and conversion to/from the signalling conventions used by the device.
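To connect the block diagram to something concrete, here is a minimal C sketch of how such an interface might appear to software on a system that maps interface registers into the address space (memory-mapped i/o, which comes up again later in these notes). The register layout, base address, and status bit are assumptions made for illustration; a real interface's documentation defines the actual map.

    #include <stdint.h>

    /* Hypothetical register layout for a generic interface.  The struct,
       base address, and ready bit are assumptions for illustration only. */
    struct generic_interface {
        volatile uint32_t data;      /* data buffer, reached via the interface register */
        volatile uint32_t control;   /* control word that commands the device           */
        volatile uint32_t status;    /* status reported by the interface                */
    };

    #define IFACE ((struct generic_interface *)0x40001000u)   /* assumed address   */
    #define STATUS_READY 0x1u                                  /* assumed ready bit */

    /* Issue a command to the device and wait for the interface to report ready. */
    static inline void iface_command(uint32_t command) {
        IFACE->control = command;                /* write the control register */
        while ((IFACE->status & STATUS_READY) == 0)
            ;                                    /* poll the status register   */
    }

The volatile qualifier matters here: these locations are changed by the device, not by the program, so the compiler must not cache or reorder accesses to them.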

4 Cmpt 250 Interfaces April 1, 2008 Control can be separated into two distinct groups: control signals which instruct the device to perform some action (via the control register) and control signals which are used to control the movement of data, control, and status information between the interface and the device (the signals coming directly from the control logic). The directions shown (bidirectional for data, output for control, input for status) are common. A pair of unidirectional connections for data input and output is also possible. On the computer system side, there is a bidirectional connection for data, with a buffer register. There are also connections for control and status, but they re labelled a bit differently. Control is exercised by using the select and r/w lines to notify the interface that the computer system wishes to read or write information. The select lines also tell the interface which of the internal registers (data buffer, control, or status) should be selected for the transfer. The request and ready lines communicate interface status to the computer system. Request is used to request service from the computer system. Ready is used to inform the computer system that the interface has acted on a request from the computer system (data is ready, for a read request; data has been accepted, for a write request) and is prepared to accept a new request. Inside the interface, we have the circuitry necessary to move digital data between the computer system and the device. There is buffering on the computer system side (the interface register) and on the device side (the data buffer and control and status registers). There will be an interconnection structure (a bus is shown, but point-to-point interconnections and multiplexers will work just as well). In addition, there will be control logic to respond to control signals from the computer system, to generate status signals, and to control the movement of data within the interface. How does this work? What is the generic sequence of events to transfer data between the computer system and the device? To read data, the computer system will select the interface and ask for the contents of the status register. The interface will transfer the contents of the status register to the interface register and signal to the computer system that the information is ready. The computer system will read the information from the interface register and examine it. If the status indicates that the device has data ready, the computer system will select the interface and ask it to write to the 4

control register. The data transferred from the computer system to the interface will be the proper control word to cause the device to transfer data to the data buffer. The interface control logic will coordinate transfer of the control word to the device, and receipt of the data from the device. When the data is available, the interface will use request and ready to notify the computer system. The computer system will then select the interface and ask for the content of the data buffer. The interface will move data into the interface register, signal that the data is ready, and it will be read by the computer system.

To write data, the sequence is much the same: The computer system will check the status of the device, issue the appropriate control word, and then send data to the interface for transfer to the device.

Typically, an interface is not connected directly to a computer system. Instead, multiple interfaces are connected to a bus. The bus provides data, address (select) and control lines that are connected to all interfaces. [Figure: several interfaces, each with its attached device, connected to shared data, control, and address lines.]

Data Transfer Primitives

To communicate with an interface, the computer system sets the address lines to the proper value to select the interface, sets the control signals for read or write, and then transfers data using the data lines. Clearly, in order for this system to work correctly, we need to set things up so that at most one interface recognises any given address. All outputs from an interface must be equipped with tristate buffers so that only the selected interface attempts to assert a value on a given wire in the bus.

Let's get down to details now, and examine some of the primitive operations involved in transferring data over a bus. The simplest sort of transaction uses a strobe signal to pace the exchange

of data. To keep the explanation simple, let's assume that we have only two interfaces. They are connected by a set of bidirectional data lines and two control signals, strobe and r/w.

It's useful to take a moment and define the roles that an interface can play in an exchange of data. One way to characterise the role of an interface is as the source or destination of the transfer. This role is determined by the direction of data transfer. Another way to characterise the role of an interface is as the initiator (master) or responder (slave) in a transaction. These are independent of one another: When the master specifies a read, the slave is the source and the master is the destination. When the master specifies a write, the master is the source and the slave is the destination. Mano sort of obscures this with the notion of destination-initiated transfer (commonly called a read) and source-initiated transfer (commonly called a write).

So, how do we use the data, strobe, and r/w signals to transfer data? Let's look at a read first. [Figure: timing diagram for a strobed read. The master (destination) drives strobe and r/w; the slave (source) places its data on the data lines while strobe is asserted.]

In a read operation, the master is the destination and the slave supplies the data. The data lines start in an undefined state, with the (tristate) drivers for both the master and slave in the hi-impedance state. The r/w signal also starts in an unknown state, simply because we don't know how the master handles it when no transfer is in progress.

The master begins the transfer by setting the r/w signal to 1 (read) and asserting the strobe signal. The slave sees strobe and responds by placing data onto the data lines. The master waits for a while, latches the data, and then drops strobe to tell the slave that it's latched the data. When the master drops strobe, r/w no longer needs to be valid and can return to an unknown state. The slave responds to the fall of strobe by removing the data from the data lines (by placing its output drivers in the hi-impedance state). At this point, the two interfaces are ready to begin another transaction.

And now a write. [Figure: timing diagram for a strobed write. The master (source) drives data, r/w, and strobe; the slave (destination) latches the data while strobe is asserted.]

In a write operation, the master is the source and must supply the data. The master begins the transaction by asserting the data onto the data lines, setting r/w to 0 (write), and asserting strobe. The slave sees strobe and latches the data from the data lines. After an appropriate amount of time, the master drops strobe. The diagram shows r/w and data returning to their unknown states.

Notice the subtle difference in timing between the read and write operations. For a read, the master must ensure that r/w is valid while strobe is asserted. Otherwise, the slave could perform the wrong operation. The data lines become valid only when the slave responds. For a write, the master must ensure that r/w and data are both valid while strobe is asserted. Otherwise, the slave could perform the wrong operation, or receive incorrect data.
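The ordering constraints for a strobed write can be captured in a few lines of code. The following is a small, self-contained C sketch; the struct, field names, and the function-call stand-in for "the slave acts while strobe is high" are invented for illustration, since real hardware relies on timing rather than function calls.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Signals of the simple two-interface strobed bus. */
    struct strobe_bus {
        uint8_t data;      /* value on the data lines                      */
        bool data_driven;  /* true while the source is driving those lines */
        bool rw;           /* true = read, false = write                   */
        bool strobe;
    };

    /* Slave side of a write: latch the data while strobe is asserted. */
    static void slave_on_strobe(const struct strobe_bus *b, uint8_t *latched) {
        if (b->strobe && !b->rw && b->data_driven)
            *latched = b->data;
    }

    int main(void) {
        struct strobe_bus bus = {0};
        uint8_t latched = 0;

        /* Master side of a write, step by step.  The ordering is the point:
           data and r/w must be valid before strobe rises, and must stay
           valid until strobe falls. */
        bus.data = 0x5a; bus.data_driven = true;   /* 1. drive the data lines */
        bus.rw = false;                            /* 2. r/w = 0 (write)      */
        bus.strobe = true;                         /* 3. assert strobe        */
        slave_on_strobe(&bus, &latched);           /* slave latches meanwhile */
        bus.strobe = false;                        /* 4. drop strobe          */
        bus.data_driven = false;                   /* 5. release data and r/w */

        printf("slave latched 0x%02x\n", latched);
        return 0;
    }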

In the more general case where the master must select some interface as the slave for the transaction, the address (select) lines must also be valid before strobe is asserted.

Strobed data transfer, as just described, has one glaring fault: There's no feedback from the slave to the master. The master is simply assuming that the slave has done its part. A common technique for providing the missing feedback is called the two-line handshake. It looks like this: [Figure: two-line handshake waveforms for req (driven by the master) and rply (driven by the slave).]

There's an interlocking pattern. To start the transaction, the master asserts req, and the slave responds by asserting rply. The transaction ends with a similar interlock: the master drops req, and the slave responds by dropping rply. Let's see how this works in the context of read and write operations for our simple pair of interfaces.

First, a read operation. [Figure: timing diagram for a handshaked read. The master (destination) drives req and r/w; the slave (source) drives the data lines and rply.]

The master initiates the transaction by setting r/w to indicate a read operation and then asserting req. The slave responds by placing data onto the data lines. To indicate to the master that data is available, the slave asserts the rply signal. This gives a positive indication that the data lines are valid.

When the master sees rply, it knows that the data lines are valid and it can latch the data. When this is complete, the master drops req; the r/w control line can return to an unknown state. When the slave sees req fall, it has a positive indication that the master has received the data. The slave responds by returning the drivers for data to the hi-impedance state and dropping rply. The fall of rply indicates the end of the transaction. A new transaction can start only after the final fall of rply.

Next, a write. [Figure: timing diagram for a handshaked write. The master (source) drives the data lines, req, and r/w; the slave (destination) drives rply.]

The master initiates the transaction by placing data on the data lines, setting r/w to indicate a write, and asserting req. When the slave has completed the actions required to latch the data, it signals the master by asserting rply. When the master sees rply, it has a positive indication that the slave has successfully performed the write operation. In response, it drops req and ceases to assert data on the data lines. The r/w signal is no longer required and can return to an unknown state. When the slave sees req fall, it drops rply. As with a read, the fall of rply indicates the end of the transaction. A new transaction can start only after the final fall of rply.

Again, notice that the timing of the rise and fall of req and rply is slightly different for read and write. The underlying principle is the same, however:

Each transition indicates that some set of signals is valid, or some set of operations has been completed, and it is safe for the partner to proceed to the next step. Taking the write operation as an example: The master should not raise req until the data and r/w signals have valid values. Otherwise, the slave could perform the wrong operation or latch invalid data. The slave should not raise rply until it has latched the data. Otherwise, the master could remove the data before the slave has latched it, or change the r/w control signal, causing the slave to perform an incorrect action. The fall of req indicates that the operation is over, as far as the master is concerned. The r/w signal should remain valid until after req falls, so that the slave does not perform an incorrect operation. The master has some flexibility in terms of the data lines. Once it has seen rply, it knows that the slave has no further need for the data. The only real requirement, in this simple example, is that the master cease to assert data on the data lines (by putting its drivers into the hi-impedance state) before initiating a read operation, so that the data lines are available to the slave. The fall of rply indicates that the operation is over, as far as the slave is concerned, and indicates the completion of the full transaction. The req and rply signals are now back to their initial state and a new transaction can be initiated.

In its explanation of a read operation (destination-initiated transfer), the text states that "The destination unit [master] may not make another request until the source unit [slave] has shown its readiness to provide new data by disabling Reply." This is a little bit misleading. It's possible to attach this meaning to rply, but not necessary. The fall of rply simply indicates that the slave's interface logic is ready to start a new transaction. The attached device may or may not be ready to respond. If the device is not ready to respond to a new request, all that will happen is that rply will be delayed in the next transaction until the device is ready. If you think about it for a bit, this is the right thing to do when the interface is attached to a shared bus. After all, the next transaction on the bus may not involve the same interface. A design goal for interface logic is to free the bus for use by other interfaces as quickly as possible.
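As a way of summarising the rules above, here is a minimal C sketch of one handshaked write, with each step written in the order the protocol requires. The struct and the explicit interleaving of master and slave steps are illustrative assumptions; in hardware the two sides run concurrently and the interlock is carried by the req and rply wires themselves.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Signals shared by the master and slave. */
    struct hs_bus {
        bool req, rply;
        bool rw;            /* false = write */
        uint8_t data;
        bool data_driven;
    };

    int main(void) {
        struct hs_bus b = {0};
        uint8_t slave_latch = 0;

        /* Master: data and r/w must be valid first, then raise req. */
        b.data = 0xc3; b.data_driven = true; b.rw = false;
        b.req = true;

        /* Slave: latch the data, then raise rply to say it has done so. */
        if (b.req && !b.rw && b.data_driven) slave_latch = b.data;
        b.rply = true;

        /* Master: rply seen, so the write is known to have succeeded.
           Drop req and stop driving data; r/w may now go invalid. */
        if (b.rply) { b.req = false; b.data_driven = false; }

        /* Slave: req has fallen, so drop rply.  The fall of rply ends the
           transaction; only now may a new one begin. */
        if (!b.req) b.rply = false;

        printf("slave latched 0x%02x, req=%d rply=%d\n", slave_latch, b.req, b.rply);
        return 0;
    }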

Bus Structures

Now that we have some of the basics in hand, let's try to place them in the context of a full bus structure. The purpose of a bus in the context of computer systems is to allow many different entities to communicate. Here's a minimal example: A CPU, a memory, and a disk, all connected by a bus. [Figure: a CPU, primary memory (DRAM), and a disk, each with its own bus interface, connected by shared data, control, and address lines.]

There are a number of things to point out in this figure. Everybody gets a bus interface. If the CPU is going to communicate with disk and memory over a bus, it needs bus interface logic. The same can be said for the primary memory.

What roles do the various components play? In computer systems, the CPU is always a master when it participates in a bus transaction. Similarly, memory is always a slave. Interfaces for i/o devices can play both roles. The disk interface will play the role of the slave when the CPU is sending it commands to set up a transfer of data into the primary memory. For the actual data transfer, the disk interface will play the role of the master, controlling the bus cycles which move data from the disk interface to memory. (This is known as DMA (direct memory access) i/o. It's commonly used for high-speed

12 Cmpt 250 Bus Structures April 1, 2008 devices in order to spare the CPU the work of executing instructions to transfer data at high speed. We ll come back to this later.) The function of the address lines becomes more clear: If the CPU is the master for a bus transaction, it must have some way to choose one of the memory or the disk interface as the slave. The interface for the memory will recognise a range of addresses that matches the amount of physical memory present in the system. The interface for the disk will recognise a much smaller set of addresses corresponding to the data buffer and control and status registers in the interface. Note that the CPU need only write to the address lines, because it s always a master when it participates in a bus transaction. Similarly, the memory need only read the address lines. The disk interface must be able to do both. Both the CPU and the disk interface can take the role of master in a bus transaction. What happens when both of them want control of the bus? We ll need to devise some method of arbitration a way to select one interface as the master for the next bus cycle. As with DMA, we ll come back to this later. Since the CPU is always the master when it participates in a bus transaction, and doesn t even monitor the address lines to see if it s selected, we need some other way to request it to participate in a bus transaction with another interface. For this, we ll use interrupts. As with arbitration, we ll come back to this later. Now that we have a system model, let s consider a real bus the PCI bus, introduced in the early 1990 s. The description here is far from complete, but it should be enough to give you some idea of how a bus works. The original PCI bus standard described a parallel bus which could transfer up to 32 bits in parallel. PCI provides the three major signal groups data, address, and control but it uses the same wires for address and data in order to reduce the total number of wires required. This is a common technique used in many bus standards. The wires are first used to transmit an address. All interfaces examine the address, and one interface will recognise that it has been selected as the slave for the transaction. Once the interface has indicated that it s selected, the master removes the address and the same wires are used to transmit data. The PCI bus is a synchronous bus, i.e., a common clock signal is transmitted to all interfaces on the bus. By default, each step in a read or write transaction takes one clock period. However, as we ll see, there 12

are other control signals which are used to control the progress of the bus cycle. This hybrid structure (a common clock and default timing, combined with some way to delay the progress of the bus transaction) is a very common structure. The PCI bus standard specifies that signal values change on the falling edge of the clock and are checked on the rising edge of the clock. This ensures that changes have time to propagate from one end of the bus to the other.

Here's an example of a read operation on a PCI bus (the figure is adapted from [2, Figure 23.8]). A PCI bus allows the transmission of multiple units of data during a single transaction. For the read transaction pictured here, four units of data are sent from the slave to the master, starting at the initial address. [Figure: PCI read transaction waveforms for clock, frame, devsel, adr/data, cmd/be, irdy, and trdy, showing the address and read-command phase followed by data 0 through data 3, the byte enables on cmd/be, and two wait states.]

Clock Cycle #1: At the falling edge of the clock, the bus master places an address on adr/data, places an operation code (in this case, the code for read) on cmd/be, and asserts frame to indicate that the address and command are valid and a new bus transaction has started. (Notice that many PCI bus signals are active low, so asserting the signal means that the value goes to zero.)

Clock Cycle #2: At the rising edge of the clock, all interfaces check the frame signal for the 1-to-0 transition that marks the start of a bus transaction. This has just occurred, so the interfaces will latch the address and command values. One interface will recognise its own address. At the falling edge of the clock, this interface will assert devsel to indicate that it is selected. Recall that the adr/data lines will be used for both address and data. Now that all interfaces have had a chance to latch

the data, the bus master will cease to assert the address on these wires, and they will be available for data. Data will come from either the master or the slave, depending on whether this is a write or a read operation. Since this is a read, the master device will put its (tristate) drivers into hi-impedance mode, effectively disconnecting from the wires. This frees them to be driven by the slave in subsequent cycles. Similarly, the cmd/be lines are changed from the operation code to a set of signals which specify which of the four possible adr/data bytes may be used to transmit data. In the case of the cmd/be signals, however, the master device always drives the signals. Finally, if the master is ready to accept data, it will assert irdy (initiator ready) as shown. The signals frame and devsel are playing the role of request and reply, respectively, in the start of a two-line handshake sequence. If no interface asserts devsel, the bus master knows that something has gone wrong (incorrect address, interface failure, etc.) and can attempt error recovery.

Clock Cycle #3: At the rising edge of the clock, the slave will check cmd/be to see which data bytes can be used to transmit data. Assuming that it's ready to supply data, at the falling edge of the clock it will drive data onto adr/data and assert trdy to indicate the availability of data.

Clock Cycle #4: The master device will latch the data on adr/data and check trdy at the rising edge of the clock. Seeing that trdy is asserted, the master will know that it has latched valid data. The default assumption is that the master will consume the data at the first rising edge after the data appears on the bus. In this example, the slave device is, for some reason, not prepared to supply new data in this clock cycle. It indicates this by returning trdy to the inactive value (remember, this is an active low signal). As you can see, the signal trdy allows the slave to delay the progress of the transaction. The signal irdy serves the same purpose for the master.

Clock Cycle #5: Because the slave is not asserting trdy at the rising edge of the clock, the master knows that there is no valid data on the bus. At the falling edge of the clock, the slave is again ready to supply data. It places the data on adr/data and asserts trdy.
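(The cycle-by-cycle walkthrough continues with Clock Cycle #6 below.) As an aside, the rule these wait states implement can be stated in a few lines of C: during the data phase, a word transfers only on a clock edge where both irdy and trdy are asserted. The ready patterns in the sketch are invented to mirror this example, with one slave wait state and one master wait state, and "asserted" is modelled as true even though the real signals are active low.

    #include <stdbool.h>
    #include <stdio.h>

    int main(void) {
        /* Per-clock ready signals for the data phase: the slave (trdy)
           inserts a wait state after the first word, and the master (irdy)
           inserts one before the last word. */
        bool irdy[] = { true, true,  true, true, false, true };
        bool trdy[] = { true, false, true, true, true,  true };
        int word = 0;

        for (int i = 0; i < 6; i++) {
            if (irdy[i] && trdy[i]) {
                printf("data-phase clock %d: word %d transferred\n", i + 1, word);
                word++;
            } else {
                printf("data-phase clock %d: wait state (%s not ready)\n", i + 1,
                       trdy[i] ? "master" : "slave");
            }
        }
        return 0;
    }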

Clock Cycle #6: At the rising edge of the clock, the master device latches the data and knows that it's valid because the slave has asserted trdy.

Clock Cycle #7: At the rising edge of the clock, the master device again latches valid data. This time, it's the master device which is unprepared to accept new data at the next clock. It indicates this by returning irdy to the inactive value.

Clock Cycle #8: The slave, seeing that irdy is not asserted at the rising edge of the clock, maintains the same data on adr/data for another clock period. The master device has caught up and latched the data, and at the falling edge of the clock it again asserts irdy. This is the final item of data that the master device wants to receive in this transaction. It returns frame to the inactive state, indicating that this is the end of the bus transaction.

Clock Cycle #9: In this clock cycle, the master and slave wrap up the transaction, returning all signals to their initial state. In response to the deassertion of frame, the slave removes the final data item from adr/data and returns devsel and trdy to their inactive values. The signals frame and devsel have completed the second part of the two-line handshake which frames the bus transaction. The master ceases to assert the byte enable signals on cmd/be and returns irdy to the inactive value.

Interrupts

We have several topics pending from the previous section: DMA i/o, bus arbitration, and interrupts. Interrupts will be covered in this section, and then we'll move on to discuss i/o transfer modes, including DMA i/o. As you'll see at a later point, some of the structures used to manage interrupt handling will be equally useful for bus arbitration.

We need a way for an interface to make the CPU aware that it needs attention, and this is one use of interrupts. The text introduces the concept of interrupts in Section 10.9, but unfortunately doesn't integrate it into any of the processor designs. It's time to correct that oversight. Interrupts provide a way to suspend the current instruction execution stream and transfer control flow to a new instruction stream in order to deal with an exceptional event. Not an unanticipated event. A computer cannot respond to a completely unanticipated event. The best we can manage is advance preparation

for an event which we know will occur at some unspecified time in the future. There must be a sequence of instructions somewhere in memory that can be executed in response to this event. This sequence of instructions is commonly called an interrupt handler or interrupt service routine. There must be provision in the hardware to accept an interrupt request (a signal indicating that the event has occurred) and transfer control flow to the interrupt handler. The hardware actions which do this are commonly called the hardware interrupt response sequence. In other words, a human has to anticipate that interrupts might be useful and design hardware to accept interrupt requests and transfer control flow to an interrupt handler. A human must also write the interrupt handler and make the necessary arrangements (i.e., initialise the proper locations in memory with code and data) so that the handler will be executed when the hardware interrupt response sequence is triggered by an interrupt request. The kinds of events we're talking about here are events that are anticipated, but the exact time of occurrence cannot be specified in advance.

Interrupts are divided into three broad categories based on origin:

External interrupts: interrupt requests due to exceptional events originating outside the CPU, such as i/o requests or power failure.

Internal interrupts: interrupt requests due to exceptional events triggered by instruction execution: an attempt to execute an illegal instruction, or division by zero.

Software interrupts: an interrupt request that results directly from the execution of a special instruction (e.g., the SWI instruction in the 68HC12).

Internal interrupts are often called exceptions, and the term interrupt is taken to mean an external interrupt. You may be asking "Isn't a software interrupt sort of a contradiction? After all, interrupts are supposed to happen at unpredictable times. Why would we want to execute an instruction to trigger the interrupt response sequence?" It turns out that the same steps used to respond to interrupts are an excellent way for a user program to gain access to operating system services. The relevant question is "How does my program know what address to use when it calls an operating system service routine?" The short answer is "It doesn't." Your program executes a software interrupt instruction with a well-defined code that tells the interrupt handler what system service is requested. This is a bit beyond the scope of Cmpt 250; we won't pursue it further. Take an operating systems course to learn more.
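To make the software-interrupt idea concrete, here is a conceptual C sketch of a handler that dispatches on a service code, roughly what "SWI with code n" buys a user program. The service table, codes, and function names are invented for illustration; a real operating system's dispatch path is considerably more involved.

    #include <stdio.h>

    /* A table of system services indexed by the code passed with the
       software interrupt.  Both services here are made-up examples. */
    typedef int (*service_fn)(int arg);

    static int svc_read_clock(int arg) { (void)arg; return 123456; }
    static int svc_write_char(int arg) { return putchar(arg); }

    static service_fn service_table[] = { svc_read_clock, svc_write_char };

    /* What the software-interrupt handler does after the hardware response
       sequence has transferred control to it. */
    static int swi_handler(int code, int arg) {
        if (code < 0 || code >= (int)(sizeof service_table / sizeof service_table[0]))
            return -1;                      /* unknown service */
        return service_table[code](arg);
    }

    int main(void) {
        swi_handler(1, 'A');                /* "SWI 1": write a character   */
        printf("\nclock = %d\n", swi_handler(0, 0));   /* "SWI 0": read clock */
        return 0;
    }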

The precise details for responding to an interrupt will vary from one CPU architecture to the next, but the minimum set of actions is as follows:

1. Immediately before the CPU fetches a new instruction, it checks an interrupt request signal. If interrupts are enabled and the interrupt request signal is asserted, the hardware will begin the hardware interrupt response sequence of Step 2.

2. The hardware interrupt response sequence will save the current value of the PC to a known location and load the PC with the address of the first instruction in the interrupt service routine. The CPU will then fetch and execute this instruction. In short, the hardware interrupt response sequence amounts to a call to the interrupt handler. The details of how to obtain the starting address of the interrupt service routine will be part of the CPU's hardware design and the overall computer system design.

3. The interrupt service routine executes its first instruction. Most often, this instruction will disable further interrupts. Notice that the interrupt service routine is guaranteed to be able to execute this first instruction, because the CPU hardware will not check again for an interrupt request until execution of this instruction is finished.

4. The interrupt service routine will save any CPU state that might be modified while the service routine executes. The most common items to be saved are the values of any CPU registers which will be used by the service routine. Remember, the interrupted code is not expecting this to happen. We must be able to restore everything exactly as it was when the CPU hardware accepted the interrupt.

5. The interrupt service routine will execute instructions to determine the cause of the interrupt. In its simplest form, this might be a check of the status register of each i/o interface, looking for an interface that's ready for an i/o operation. (This activity is commonly called polling.) In more sophisticated designs, the hardware will assist with the task of identifying the source of the interrupt.

6. The interrupt service routine will execute instructions to deal with the cause of the interrupt.

7. The interrupt service routine will execute instructions to restore the state that it saved in Step 4.

8. The interrupt service routine will execute a return from interrupt instruction, which will resume execution of the interrupted program. The basic action required of a return from interrupt instruction is the same as a return from a subroutine: the PC value saved in Step 2 is loaded into the PC and the next instruction is fetched from that address.

At some point in the course of the actions taken in Steps 2 through 5, the interface will realise that its interrupt request has been acknowledged and it will cease to assert its interrupt request.

Now that we know the general sequence of events that occurs when the CPU hardware accepts an interrupt request and services the interrupt, let's look at how an interface can signal an interrupt request to the CPU, and how an interrupt service routine can determine which interface is requesting interrupt service.

A typical CPU will offer only a few inputs for external interrupt requests. It's quite common to have just two: a maskable interrupt and a nonmaskable interrupt. A maskable interrupt can be enabled or disabled by program control (i.e., by executing instructions). Typically, the CPU will provide one or more bits in a special-purpose CPU register for this purpose. (For example, the interrupt mask (I) bit in the CCR of the 68HC12, and the CLI and SEI instructions which clear and set it to enable or disable, respectively, the maskable interrupt.) A nonmaskable interrupt cannot be disabled by program control. It is used for high-priority events (typically, power failure) which should never be ignored or postponed. If you think back to the actions involved in responding to an interrupt, the hardware must be designed to disable the nonmaskable interrupt request signal while responding to this type of interrupt. The hardware must disable it in Step 2 and reenable it as part of the execution of the return from interrupt instruction in Step 8.

In the situation where there are only the nonmaskable and maskable interrupt requests, the nonmaskable interrupt has priority over the maskable interrupt. When a processor provides more than two interrupt request lines, there will also be some way to establish their relative priority. The priority may be hardwired, or it may be adjustable under program control by writing to a special-purpose CPU register. In systems with many i/o devices operating at varying speeds, it's very useful to establish some priority for responding to requests for service. In the case

where the CPU provides only one or two interrupt lines, additional logic is necessary. There are three common configurations: daisy-chain priority logic, parallel priority logic, and a hybrid of the two.

Here's a figure that illustrates the daisy-chain configuration. [Figure: daisy-chain interrupt structure. The CPU's IntAck output threads through each device interface in turn (In to Out, Device 0 Interface through Device k Interface); the interfaces' open-collector IRQ outputs share a single IntReq wire pulled up to V+.]

When an interface requires the attention of the CPU, it asserts the active low signal IRQ. All IRQ signals are connected to a single wire in a configuration called a wired-or. The IRQ output uses a special output circuit called an open-collector output. When it's on, it pulls the output low; when it's off, it's in a hi-impedance state. The difference between an open-collector output and a tri-state output is that an open-collector output has no ability to assert a high output. When no IRQ output is asserted, the resistor pulls the signal value up to a high (inactive) value. This configuration is called a wired-or because it performs a logical OR function for active-low signals. When any of the connected signals (in this case, the IRQ outputs of the interfaces) are asserted (active low), the resulting signal (in this case, IntReq) is asserted (active low).

And here's a figure that illustrates the logic used to generate an interrupt request and capture or propagate the acknowledgement. [Figure: inside one interface, a flip-flop set by the internal IRQ signal drives the open-collector IRQ output to the CPU; associated gating either captures an IntAck signal arriving at the In input (asserting the internal IntAck) or propagates it to the Out output, the next link in the daisy chain.]

Signals labelled "internal" are produced by the interface logic. When the interface does not require service, the internal interrupt request signal IRQ is not asserted, hence the FF is set to 0 and the external IRQ signal is not asserted. If an IntAck signal arrives at the In input, it will be propagated to the Out output and passed along to the next interface in the chain.

When the interface needs service, it asserts the internal IRQ signal. This will set the FF and assert IRQ. After a time, the CPU will respond to the interrupt and assert IntAck. When the IntAck signal arrives at the In input, it will not be propagated to the daisy chain Out output, and the internal IntAck signal will be asserted. The interface and the CPU will now begin to interact over the system bus. When the CPU determines the interface that has captured the IntAck signal, it will drop IntAck to the inactive state. This same interaction will allow the interface logic to recognise that its interrupt is now being serviced. When it sees the fall of IntAck, it will drop the internal IRQ signal. This will cause the FF to be set to 0. The external IRQ will no longer be asserted, and the interface is once again in a state where an arriving IntAck signal will be propagated to the next interface in the chain. Other designs are certainly possible for this function; the details will depend on system conventions for the signals used to request and acknowledge interrupts.

"The interface and the CPU will now begin to interact over the system bus" is a bit vague. Here are two possible scenarios for how this interaction might work:

After the CPU asserts the IntAck signal, it begins a special interrupt response bus transaction to read information from the interface. The interface which has captured IntAck is implicitly selected as the target interface. When the interface participates in this special bus transaction, it knows that its interrupt is being serviced. As part of this interrupt response bus transaction, the interface may supply additional information to the CPU to aid it in locating the starting address of the service routine.

After the CPU asserts the IntAck signal, it begins to execute a generic interrupt service routine. This service routine polls the interfaces on the bus. The interface which has captured IntAck will set a bit in its status register to indicate that it has captured IntAck and should receive service at this time. When the polling routine sees this bit set in the interface's status register, it knows that it's found the right interface.
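The capture-or-propagate behaviour described above can be sketched in a few lines of C. The array stands in for the chain wiring and the struct fields are invented names; the point is only that the acknowledgement stops at the first requesting interface it reaches.

    #include <stdbool.h>
    #include <stdio.h>

    /* One interface's view of the daisy-chain acknowledge logic. */
    struct chained_iface {
        bool irq;        /* internal interrupt request                    */
        bool int_ack;    /* internal IntAck: "my request is acknowledged" */
    };

    /* IntAck enters at chain[0]; the first requester captures it, everyone
       before it simply passes the acknowledgement along (In to Out). */
    static void propagate_intack(struct chained_iface *chain, int n) {
        for (int i = 0; i < n; i++) {
            if (chain[i].irq) {
                chain[i].int_ack = true;   /* captured: not passed further */
                return;
            }
            /* not requesting: ack propagates to chain[i + 1] */
        }
    }

    int main(void) {
        struct chained_iface chain[3] = { {false}, {true}, {true} };
        propagate_intack(chain, 3);
        for (int i = 0; i < 3; i++)
            printf("interface %d: irq=%d ack=%d\n", i, chain[i].irq, chain[i].int_ack);
        /* Interface 1 wins: it is the requester closest to the CPU, which is
           exactly how the daisy chain encodes priority. */
        return 0;
    }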

When the CPU offers multiple interrupt request and acknowledgement signals, these signals will be used as inputs to priority logic which selects the highest priority interrupt for service. The text calls this configuration the parallel priority interrupt method. Typical logic for this function is shown in the following figure. [Figure: parallel priority logic. The IRQ3..IRQ0 requests from the interfaces feed a priority encoder whose outputs are an active signal, the two-bit code IVec(1:0), and IRQ to the CPU interrupt response logic; IntAck from the CPU enables a decoder that converts IVec(1:0) back into the individual IntAck3..IntAck0 lines to the interfaces.]

When no interrupt requests are pending, the priority encoder's active output is inactive, as is IRQ. When one or more requests is pending, IRQ is asserted and the binary code corresponding to the highest priority request is available on IVec(1:0). The IRQ signal indicates that an interrupt is pending. The value of IVec(1:0) can be used to quickly select the starting address of the proper interrupt service routine. When the CPU decides to respond to an interrupt, it will assert IntAck. This will enable the decoder outputs, and the output corresponding to the value of IVec(1:0) will be asserted, acknowledging the interrupt request.

When the CPU offers only one or two interrupt request and acknowledgement signals, priority interrupt arbitration logic can be constructed external to the CPU in a dedicated interrupt controller. This very common configuration is illustrated in a figure in the text. The text figure includes an interrupt mask register to disable some or all interrupt requests, and expands on the notion of using the value of IVec(1:0) to construct the address of the interrupt service routine (VAD, in the figure). One can picture this logic as yet another device attached to the system bus. The CPU can write to the mask register and read the VAD register.
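Here is a small C sketch of the parallel priority logic: a priority encoder producing IRQ and IVec(1:0), and a decoder turning IVec(1:0) plus IntAck back into a one-hot acknowledge. The choice that a higher line number means higher priority is an assumption made for illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Priority encoder: irq_lines has one bit per request line (bit i = IRQi). */
    static void priority_encode(uint8_t irq_lines, bool *irq, unsigned *ivec) {
        *irq = irq_lines != 0;                 /* "active": any request pending */
        *ivec = 0;
        for (unsigned i = 0; i < 4; i++)       /* assume higher line = higher priority */
            if (irq_lines & (1u << i))
                *ivec = i;
    }

    /* Decoder: assert exactly one IntAck line when IntAck is given by the CPU. */
    static uint8_t intack_decode(bool intack, unsigned ivec) {
        return intack ? (uint8_t)(1u << ivec) : 0;
    }

    int main(void) {
        bool irq; unsigned ivec;
        uint8_t lines = 0x05;                  /* IRQ0 and IRQ2 both pending */
        priority_encode(lines, &irq, &ivec);
        printf("IRQ=%d IVec=%u IntAck lines=0x%x\n", irq, ivec, intack_decode(true, ivec));
        /* IVec can then select the service routine's starting address, for
           example by indexing a table of handler addresses (a vector table). */
        return 0;
    }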

The most general interrupt request/acknowledge configuration is a hybrid of the parallel priority and daisy-chain methods. A daisy-chain structure is attached to each pair of parallel priority request/acknowledge signals.

I/O Transfer Modes

Now we've laid the necessary groundwork to describe the three modes used for i/o in computer systems. Keep firmly in mind that modern CPUs perform memory-mapped i/o using the same load and store instructions used to access memory. A range of addresses is assigned to i/o interfaces. To use the Mano pipelined RISC as an example, when the CPU wants to read data from memory, it will execute a load (LD) instruction. When the CPU wants to read data from an i/o device, it will execute a load (LD) instruction. The only difference between the two is the address.

Most often, the goal of an i/o operation is to transfer a block of data between memory and an i/o device. There may be an initial exchange between the CPU and an interface as the CPU gives the interface the details of the data transfer to be performed. There may be a final exchange between the CPU and the interface to wrap up the data transfer and return the interface to the idle state. For a small set of i/o devices, transfer of one or a few bytes of data directly to a CPU register is all that's required (reading a real-time clock, for example).

Given that we're interested in moving data between an i/o device and memory, if the CPU is directly involved in moving bytes of data it's acting as an intermediary. For a read, data will move from the i/o device interface to a CPU register and then to memory. For a write, the data flows from memory through a CPU register to the i/o device interface. The CPU must execute load and store instructions to move each unit of data. This is immediately obvious for a RISC instruction set, where the CPU must execute a load instruction followed by a store instruction to transfer data in either direction. This is less obvious when the CPU supports a CISC instruction set with addressing modes that allow the specification of operands in memory, but it remains true. Even if the instruction set provides an instruction which appears to allow you to specify direct movement of data from one

memory location to another, the data will be fetched to the CPU, held in a temporary register, and then transferred to the destination. You can see that this must be true if you think for a moment about bus cycles. We can specify exactly one address per bus cycle (either a source or a destination) to select the slave for the transaction. The other participant is the master.

Add to the above another consideration: Unlike primary memory, an i/o device is not always ready to transfer data. In addition to executing instructions to move the data between the device's interface and memory, the CPU will need to execute instructions to determine if the i/o device is ready to read or write data.

With this bit of analysis, we can introduce the three modes used for i/o operations:

When performing program-controlled i/o, the CPU executes instructions to determine if the device is ready for a data transfer, and then executes additional instructions to move the data between the device and memory.

When performing interrupt-initiated i/o, the CPU assumes that the interface will produce an interrupt request when the device is ready to transfer data. The CPU will respond to the interrupt and execute instructions to move the data between the device and memory.

When performing direct memory access (DMA) i/o, the CPU executes instructions to tell the interface the details of the transfer. Then, while the CPU performs other work, the data transfer is handled by the device interface (which must have the intelligence to act as the bus master during the transfer). When the transfer is finished, the interface interrupts the CPU and the CPU will execute any instructions necessary to conclude the data transfer.

The goal, of course, is to perform the i/o operation in a cost-effective manner. At one extreme, program-controlled i/o requires very little hardware or software support, but the CPU will spend a lot of time executing instructions in support of i/o. At the other extreme, DMA i/o relieves the CPU of all but the essential activity of specifying the i/o operation, but it requires more capable device interfaces. In the middle, interrupt-driven i/o allows the CPU to do something useful while it's waiting for the device to become ready for a data transfer.
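From the CPU's side, DMA i/o amounts to programming a handful of interface registers and then getting out of the way. The following C sketch shows the shape of that interaction; the register layout, base address, and bit assignments are invented for illustration.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical DMA-capable disk interface as seen by software. */
    struct dma_iface {
        volatile uint32_t mem_addr;   /* where in primary memory to put the data  */
        volatile uint32_t count;      /* number of words to transfer              */
        volatile uint32_t control;    /* bit 0 = start, bit 1 = device-to-memory  */
        volatile uint32_t status;     /* bit 0 = transfer complete                */
    };

    #define DISK_DMA ((struct dma_iface *)0x40003000u)   /* assumed address */

    /* Initial exchange: give the interface the details, then start it. */
    void start_disk_read(void *buffer, size_t words) {
        DISK_DMA->mem_addr = (uint32_t)(uintptr_t)buffer;
        DISK_DMA->count    = (uint32_t)words;
        DISK_DMA->control  = 0x3u;     /* start a device-to-memory transfer */
        /* The CPU is now free to run other work; the interface acts as bus
           master and moves the data itself. */
    }

    /* Final exchange: the completion interrupt wraps up the transfer. */
    void disk_interrupt_handler(void) {
        if (DISK_DMA->status & 0x1u) {
            /* transfer complete: hand the filled buffer to whoever asked */
        }
    }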

Consider the flowchart in the text which specifies the actions required for program-controlled i/o. The CPU must execute instructions to poll the interface, followed by instructions to move the data. Here's one possible assembly language sequence to read data from an interface:

    ; R13: destination address of data in memory
    ; R14: number of units of data to be transferred
    ; R15: address of interface status register
    ; R16: address of interface data register
    ; R17: mask to isolate ready bit in status

    poll  LD  R1, R15       ; load status from interface
          AND R2, R1, R17   ; isolate ready bit in status
          BZ  poll          ; if not set, device not ready
          LD  R1, R16       ; load data from interface data register
          ST  R13, R1       ; store data to memory
          ADI R13, R13, 1   ; increment data destination pointer
          ADI R14, R14, -1  ; decrement data count
          BNZ poll          ; repeat until done

At first glance, the CPU must execute eight instructions for each unit of data read from the interface and written to memory. What will we actually achieve for a slow device (a keyboard, for example)? Assume the Mano pipelined CPU with data forwarding and branch prediction (which always predicts that the branch will not be taken). Assume further a 1 GHz. clock frequency, so that one instruction completes execution or one bubble leaves the pipeline every nanosecond.

First, the polling loop contains a control hazard: We'll end up with two bubbles each time it executes and branches back to repeat the poll (i.e., branch prediction fails). If that were the worst of our problems, it would take 5 ns. (five clock periods) to poll the device. But... we can't cache the interface status! (Why?) The absolute best we can hope for is that we can read the interface's status register in the same amount of time it takes us to read primary memory: a few hundred clock periods. Let's use Mano's (optimistic) figure: 100 ns. (100 clock periods) to execute the polling instructions, including the time to access the interface's status register. This line of reasoning, extended to the rest of the data transfer loop, says that we can assume another 200 ns. to transfer a unit of data once the device indicates it's ready.
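The "we can't cache the interface status" point is worth a concrete illustration. In C, the same issue shows up as the need for volatile: the status register is changed by the device, not by the program, so every poll must be a real bus read, never a cached or register-held copy. The address and bit below are assumptions for illustration.

    #include <stdint.h>

    #define KBD_STATUS ((volatile uint32_t *)0x40001008u)  /* hypothetical address   */
    #define KBD_READY  0x1u                                /* hypothetical ready bit */

    /* Each iteration performs a real read of the status register.  Without
       volatile, the compiler (much like a hardware cache holding a stale
       copy) could assume the value never changes and hoist the read out of
       the loop, turning this into an infinite spin on old data. */
    void wait_for_keystroke(void) {
        while ((*KBD_STATUS & KBD_READY) == 0)
            ;    /* spin: this is exactly the polling loop shown above */
    }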

A very fast typist can type perhaps 120 words per minute. With an average of 7 to 8 characters per word, we have around 15 keystrokes per second. Suppose our process polls the keyboard status register 20 times per second to be sure that no keystroke is missed. How much of the CPU's capacity is the process using? It will need 20 × 100 = 2000 clock periods for polling each second, and there are 10^9 clock periods available, so the process needs only 2000 out of 10^9 clock periods, about 0.0002% of the CPU's capacity. Clearly the CPU will have no trouble keeping up.

But that's not really the problem. How will the process know to poll once every 50 ms., at just the right moment? If the process simply executes the polling loop, it will waste a huge amount of time polling (49,998 µs. in every 50,000 µs., by our estimate). And there's still no guarantee (in a multiprocessing environment) that the deadline will be met, because there's no guarantee that the process will execute at least once every 50 ms. For slow devices, the message is clear: We need to find some way to avoid polling.

Before we leave this example, let's consider how programmed i/o will perform when we're dealing with fast devices. Suppose we have a Gigabit Ethernet interface. Let's say that the interface is capable of receiving a byte every 10 ns. and transfers data on the system bus in units of four-byte words. We have 40 ns. to perform a transfer! In this case, program-controlled i/o cannot keep up.

Suppose the i/o device is a disk drive. The data transfer rate once the seek is completed is around 3 MB/sec. If the interface were designed to transfer four-byte data words to the CPU as the data is read from the disk, the required transfer rate would be about (3 MB/sec.)/4 = 750,000 transfers per second, or around 1333 ns. per transfer. In this case, program-controlled i/o at 300 ns. for a status check and data transfer can keep up. But the process will need to poll at least 750,000 times per second to make sure no data is dropped. Then 750,000 × 100 = 7.5 × 10^7 clock periods out of 10^9, or 7.5% of the CPU's time, is required for polling! (Again, under the unrealistic assumption that we can poll just once, at exactly the right time, in every 1333 ns. interval.) This really isn't acceptable, and we've been very generous in our assumptions.

How have we been generous? Well, no self-respecting multiprocessing operating system will let a user process get anywhere near the hardware. So our polling loop really involves a call to the operating system, asking it to read the interface status or data register and return the result. A


More information

Chapter 3. Top Level View of Computer Function and Interconnection. Yonsei University

Chapter 3. Top Level View of Computer Function and Interconnection. Yonsei University Chapter 3 Top Level View of Computer Function and Interconnection Contents Computer Components Computer Function Interconnection Structures Bus Interconnection PCI 3-2 Program Concept Computer components

More information

INPUT-OUTPUT ORGANIZATION

INPUT-OUTPUT ORGANIZATION 1 INPUT-OUTPUT ORGANIZATION Peripheral Devices Input-Output Interface Asynchronous Data Transfer Modes of Transfer Priority Interrupt Direct Memory Access Input-Output Processor Serial Communication 2

More information

MARIE: An Introduction to a Simple Computer

MARIE: An Introduction to a Simple Computer MARIE: An Introduction to a Simple Computer 4.2 CPU Basics The computer s CPU fetches, decodes, and executes program instructions. The two principal parts of the CPU are the datapath and the control unit.

More information

Chapter 13: I/O Systems. Operating System Concepts 9 th Edition

Chapter 13: I/O Systems. Operating System Concepts 9 th Edition Chapter 13: I/O Systems Silberschatz, Galvin and Gagne 2013 Chapter 13: I/O Systems Overview I/O Hardware Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations

More information

HANDLING MULTIPLE DEVICES

HANDLING MULTIPLE DEVICES HANDLING MULTIPLE DEVICES Let us now consider the situation where a number of devices capable of initiating interrupts are connected to the processor. Because these devices are operationally independent,

More information

6 Direct Memory Access (DMA)

6 Direct Memory Access (DMA) 1 License: http://creativecommons.org/licenses/by-nc-nd/3.0/ 6 Direct Access (DMA) DMA technique is used to transfer large volumes of data between I/O interfaces and the memory. Example: Disk drive controllers,

More information

INPUT/OUTPUT ORGANIZATION

INPUT/OUTPUT ORGANIZATION INPUT/OUTPUT ORGANIZATION Accessing I/O Devices I/O interface Input/output mechanism Memory-mapped I/O Programmed I/O Interrupts Direct Memory Access Buses Synchronous Bus Asynchronous Bus I/O in CO and

More information

Module 3. Embedded Systems I/O. Version 2 EE IIT, Kharagpur 1

Module 3. Embedded Systems I/O. Version 2 EE IIT, Kharagpur 1 Module 3 Embedded Systems I/O Version 2 EE IIT, Kharagpur 1 Lesson 15 Interrupts Version 2 EE IIT, Kharagpur 2 Instructional Objectives After going through this lesson the student would learn Interrupts

More information

EEL 4744C: Microprocessor Applications. Lecture 7. Part 1. Interrupt. Dr. Tao Li 1

EEL 4744C: Microprocessor Applications. Lecture 7. Part 1. Interrupt. Dr. Tao Li 1 EEL 4744C: Microprocessor Applications Lecture 7 Part 1 Interrupt Dr. Tao Li 1 M&M: Chapter 8 Or Reading Assignment Software and Hardware Engineering (new version): Chapter 12 Dr. Tao Li 2 Interrupt An

More information

Reading Assignment. Interrupt. Interrupt. Interrupt. EEL 4744C: Microprocessor Applications. Lecture 7. Part 1

Reading Assignment. Interrupt. Interrupt. Interrupt. EEL 4744C: Microprocessor Applications. Lecture 7. Part 1 Reading Assignment EEL 4744C: Microprocessor Applications Lecture 7 M&M: Chapter 8 Or Software and Hardware Engineering (new version): Chapter 12 Part 1 Interrupt Dr. Tao Li 1 Dr. Tao Li 2 Interrupt An

More information

by I.-C. Lin, Dept. CS, NCTU. Textbook: Operating System Concepts 8ed CHAPTER 13: I/O SYSTEMS

by I.-C. Lin, Dept. CS, NCTU. Textbook: Operating System Concepts 8ed CHAPTER 13: I/O SYSTEMS by I.-C. Lin, Dept. CS, NCTU. Textbook: Operating System Concepts 8ed CHAPTER 13: I/O SYSTEMS Chapter 13: I/O Systems I/O Hardware Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 2018 Lecture 2 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 2 What is an Operating System? What is

More information

Chapter 4. MARIE: An Introduction to a Simple Computer

Chapter 4. MARIE: An Introduction to a Simple Computer Chapter 4 MARIE: An Introduction to a Simple Computer Chapter 4 Objectives Learn the components common to every modern computer system. Be able to explain how each component contributes to program execution.

More information

CS 101, Mock Computer Architecture

CS 101, Mock Computer Architecture CS 101, Mock Computer Architecture Computer organization and architecture refers to the actual hardware used to construct the computer, and the way that the hardware operates both physically and logically

More information

QUIZ Ch.6. The EAT for a two-level memory is given by:

QUIZ Ch.6. The EAT for a two-level memory is given by: QUIZ Ch.6 The EAT for a two-level memory is given by: EAT = H Access C + (1-H) Access MM. Derive a similar formula for three-level memory: L1, L2 and RAM. Hint: Instead of H, we now have H 1 and H 2. Source:

More information

Input Output (IO) Management

Input Output (IO) Management Input Output (IO) Management Prof. P.C.P. Bhatt P.C.P Bhatt OS/M5/V1/2004 1 Introduction Humans interact with machines by providing information through IO devices. Manyon-line services are availed through

More information

Input / Output. School of Computer Science G51CSA

Input / Output. School of Computer Science G51CSA Input / Output 1 Overview J I/O module is the third key element of a computer system. (others are CPU and Memory) J All computer systems must have efficient means to receive input and deliver output J

More information

Organisasi Sistem Komputer

Organisasi Sistem Komputer LOGO Organisasi Sistem Komputer OSK 5 Input Output 1 1 PT. Elektronika FT UNY Input/Output Problems Wide variety of peripherals Delivering different amounts of data At different speeds In different formats

More information

8086 Interrupts and Interrupt Responses:

8086 Interrupts and Interrupt Responses: UNIT-III PART -A INTERRUPTS AND PROGRAMMABLE INTERRUPT CONTROLLERS Contents at a glance: 8086 Interrupts and Interrupt Responses Introduction to DOS and BIOS interrupts 8259A Priority Interrupt Controller

More information

INTERFACING THE ISCC TO THE AND 8086

INTERFACING THE ISCC TO THE AND 8086 APPLICATION NOTE INTERFACING THE ISCC TO THE 68 AND 886 INTRODUCTION The ISCC uses its flexible bus to interface with a variety of microprocessors and microcontrollers; included are the 68 and 886. The

More information

Operating Systems 2010/2011

Operating Systems 2010/2011 Operating Systems 2010/2011 Input/Output Systems part 1 (ch13) Shudong Chen 1 Objectives Discuss the principles of I/O hardware and its complexity Explore the structure of an operating system s I/O subsystem

More information

5 Computer Organization

5 Computer Organization 5 Computer Organization 5.1 Foundations of Computer Science Cengage Learning Objectives After studying this chapter, the student should be able to: List the three subsystems of a computer. Describe the

More information

Bus System. Bus Lines. Bus Systems. Chapter 8. Common connection between the CPU, the memory, and the peripheral devices.

Bus System. Bus Lines. Bus Systems. Chapter 8. Common connection between the CPU, the memory, and the peripheral devices. Bus System Chapter 8 CSc 314 T W Bennet Mississippi College 1 CSc 314 T W Bennet Mississippi College 3 Bus Systems Common connection between the CPU, the memory, and the peripheral devices. One device

More information

操作系统概念 13. I/O Systems

操作系统概念 13. I/O Systems OPERATING SYSTEM CONCEPTS 操作系统概念 13. I/O Systems 东南大学计算机学院 Baili Zhang/ Southeast 1 Objectives 13. I/O Systems Explore the structure of an operating system s I/O subsystem Discuss the principles of I/O

More information

C02: Interrupts and I/O

C02: Interrupts and I/O CISC 7310X C02: Interrupts and I/O Hui Chen Department of Computer & Information Science CUNY Brooklyn College 2/8/2018 CUNY Brooklyn College 1 Von Neumann Computers Process and memory connected by a bus

More information

I/O Systems. Amir H. Payberah. Amirkabir University of Technology (Tehran Polytechnic)

I/O Systems. Amir H. Payberah. Amirkabir University of Technology (Tehran Polytechnic) I/O Systems Amir H. Payberah amir@sics.se Amirkabir University of Technology (Tehran Polytechnic) Amir H. Payberah (Tehran Polytechnic) I/O Systems 1393/9/15 1 / 57 Motivation Amir H. Payberah (Tehran

More information

10. INPUT/OUTPUT STRUCTURES

10. INPUT/OUTPUT STRUCTURES 10. INPUT/OUTPUT STRUCTURES (R. Horvath, Introduction to Microprocessors, Chapter 10) The input/output (I/O) section of a computer handles the transfer of information between the computer and the devices

More information

Operating System: Chap13 I/O Systems. National Tsing-Hua University 2016, Fall Semester

Operating System: Chap13 I/O Systems. National Tsing-Hua University 2016, Fall Semester Operating System: Chap13 I/O Systems National Tsing-Hua University 2016, Fall Semester Outline Overview I/O Hardware I/O Methods Kernel I/O Subsystem Performance Application Interface Operating System

More information

5 Computer Organization

5 Computer Organization 5 Computer Organization 5.1 Foundations of Computer Science ã Cengage Learning Objectives After studying this chapter, the student should be able to: q List the three subsystems of a computer. q Describe

More information

EE108B Lecture 17 I/O Buses and Interfacing to CPU. Christos Kozyrakis Stanford University

EE108B Lecture 17 I/O Buses and Interfacing to CPU. Christos Kozyrakis Stanford University EE108B Lecture 17 I/O Buses and Interfacing to CPU Christos Kozyrakis Stanford University http://eeclass.stanford.edu/ee108b 1 Announcements Remaining deliverables PA2.2. today HW4 on 3/13 Lab4 on 3/19

More information

THE CPU SPENDS ALMOST ALL of its time fetching instructions from memory

THE CPU SPENDS ALMOST ALL of its time fetching instructions from memory THE CPU SPENDS ALMOST ALL of its time fetching instructions from memory and executing them. However, the CPU and main memory are only two out of many components in a real computer system. A complete system

More information

Chapter 3 - Top Level View of Computer Function

Chapter 3 - Top Level View of Computer Function Chapter 3 - Top Level View of Computer Function Luis Tarrataca luis.tarrataca@gmail.com CEFET-RJ L. Tarrataca Chapter 3 - Top Level View 1 / 127 Table of Contents I 1 Introduction 2 Computer Components

More information

Buses. Disks PCI RDRAM RDRAM LAN. Some slides adapted from lecture by David Culler. Pentium 4 Processor. Memory Controller Hub.

Buses. Disks PCI RDRAM RDRAM LAN. Some slides adapted from lecture by David Culler. Pentium 4 Processor. Memory Controller Hub. es > 100 MB/sec Pentium 4 Processor L1 and L2 caches Some slides adapted from lecture by David Culler 3.2 GB/sec Display Memory Controller Hub RDRAM RDRAM Dual Ultra ATA/100 24 Mbit/sec Disks LAN I/O Controller

More information

CS152 Computer Architecture and Engineering Lecture 20: Busses and OS s Responsibilities. Recap: IO Benchmarks and I/O Devices

CS152 Computer Architecture and Engineering Lecture 20: Busses and OS s Responsibilities. Recap: IO Benchmarks and I/O Devices CS152 Computer Architecture and Engineering Lecture 20: ses and OS s Responsibilities April 7, 1995 Dave Patterson (patterson@cs) and Shing Kong (shing.kong@eng.sun.com) Slides available on http://http.cs.berkeley.edu/~patterson

More information

Introduction to Embedded System I/O Architectures

Introduction to Embedded System I/O Architectures Introduction to Embedded System I/O Architectures 1 I/O terminology Synchronous / Iso-synchronous / Asynchronous Serial vs. Parallel Input/Output/Input-Output devices Full-duplex/ Half-duplex 2 Synchronous

More information

I/O Organization John D. Carpinelli, All Rights Reserved 1

I/O Organization John D. Carpinelli, All Rights Reserved 1 I/O Organization 1997 John D. Carpinelli, All Rights Reserved 1 Outline I/O interfacing Asynchronous data transfer Interrupt driven I/O DMA transfers I/O processors Serial communications 1997 John D. Carpinelli,

More information

Storage Systems. Storage Systems

Storage Systems. Storage Systems Storage Systems Storage Systems We already know about four levels of storage: Registers Cache Memory Disk But we've been a little vague on how these devices are interconnected In this unit, we study Input/output

More information

ACCESSING I/O DEVICES

ACCESSING I/O DEVICES ACCESSING I/O DEVICES A simple arrangement to connect I/O devices to a computer is to use a single bus structure. It consists of three sets of lines to carry Address Data Control Signals. When the processor

More information

Chapter 13: I/O Systems

Chapter 13: I/O Systems Chapter 13: I/O Systems DM510-14 Chapter 13: I/O Systems I/O Hardware Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations STREAMS Performance 13.2 Objectives

More information

Interconnecting Components

Interconnecting Components Interconnecting Components Need interconnections between CPU, memory, controllers Bus: shared communication channel Parallel set of wires for data and synchronization of data transfer Can become a bottleneck

More information

Basic Processing Unit: Some Fundamental Concepts, Execution of a. Complete Instruction, Multiple Bus Organization, Hard-wired Control,

Basic Processing Unit: Some Fundamental Concepts, Execution of a. Complete Instruction, Multiple Bus Organization, Hard-wired Control, UNIT - 7 Basic Processing Unit: Some Fundamental Concepts, Execution of a Complete Instruction, Multiple Bus Organization, Hard-wired Control, Microprogrammed Control Page 178 UNIT - 7 BASIC PROCESSING

More information

Module 6: INPUT - OUTPUT (I/O)

Module 6: INPUT - OUTPUT (I/O) Module 6: INPUT - OUTPUT (I/O) Introduction Computers communicate with the outside world via I/O devices Input devices supply computers with data to operate on E.g: Keyboard, Mouse, Voice recognition hardware,

More information

COSC 243. Input / Output. Lecture 13 Input/Output. COSC 243 (Computer Architecture)

COSC 243. Input / Output. Lecture 13 Input/Output. COSC 243 (Computer Architecture) COSC 243 Input / Output 1 Introduction This Lecture Source: Chapter 7 (10 th edition) Next Lecture (until end of semester) Zhiyi Huang on Operating Systems 2 Memory RAM Random Access Memory Read / write

More information

Chapter 13: I/O Systems

Chapter 13: I/O Systems Chapter 13: I/O Systems Chapter 13: I/O Systems I/O Hardware Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations Streams Performance 13.2 Silberschatz, Galvin

More information

k -bit address bus n-bit data bus Control lines ( R W, MFC, etc.)

k -bit address bus n-bit data bus Control lines ( R W, MFC, etc.) THE MEMORY SYSTEM SOME BASIC CONCEPTS Maximum size of the Main Memory byte-addressable CPU-Main Memory Connection, Processor MAR MDR k -bit address bus n-bit data bus Memory Up to 2 k addressable locations

More information

Architecture of Computers and Parallel Systems Part 2: Communication with Devices

Architecture of Computers and Parallel Systems Part 2: Communication with Devices Architecture of Computers and Parallel Systems Part 2: Communication with Devices Ing. Petr Olivka petr.olivka@vsb.cz Department of Computer Science FEI VSB-TUO Architecture of Computers and Parallel Systems

More information

Storage systems. Computer Systems Architecture CMSC 411 Unit 6 Storage Systems. (Hard) Disks. Disk and Tape Technologies. Disks (cont.

Storage systems. Computer Systems Architecture CMSC 411 Unit 6 Storage Systems. (Hard) Disks. Disk and Tape Technologies. Disks (cont. Computer Systems Architecture CMSC 4 Unit 6 Storage Systems Alan Sussman November 23, 2004 Storage systems We already know about four levels of storage: registers cache memory disk but we've been a little

More information

MARIE: An Introduction to a Simple Computer

MARIE: An Introduction to a Simple Computer MARIE: An Introduction to a Simple Computer Outline Learn the components common to every modern computer system. Be able to explain how each component contributes to program execution. Understand a simple

More information

Modes of Transfer. Interface. Data Register. Status Register. F= Flag Bit. Fig. (1) Data transfer from I/O to CPU

Modes of Transfer. Interface. Data Register. Status Register. F= Flag Bit. Fig. (1) Data transfer from I/O to CPU Modes of Transfer Data transfer to and from peripherals may be handled in one of three possible modes: A. Programmed I/O B. Interrupt-initiated I/O C. Direct memory access (DMA) A) Programmed I/O Programmed

More information

Topics. Interfacing chips

Topics. Interfacing chips 8086 Interfacing ICs 2 Topics Interfacing chips Programmable Communication Interface PCI (8251) Programmable Interval Timer (8253) Programmable Peripheral Interfacing - PPI (8255) Programmable DMA controller

More information

Advanced Parallel Architecture Lesson 3. Annalisa Massini /2015

Advanced Parallel Architecture Lesson 3. Annalisa Massini /2015 Advanced Parallel Architecture Lesson 3 Annalisa Massini - 2014/2015 Von Neumann Architecture 2 Summary of the traditional computer architecture: Von Neumann architecture http://williamstallings.com/coa/coa7e.html

More information

Computer Architecture CS 355 Busses & I/O System

Computer Architecture CS 355 Busses & I/O System Computer Architecture CS 355 Busses & I/O System Text: Computer Organization & Design, Patterson & Hennessy Chapter 6.5-6.6 Objectives: During this class the student shall learn to: Describe the two basic

More information

Chapter 12: I/O Systems

Chapter 12: I/O Systems Chapter 12: I/O Systems Chapter 12: I/O Systems I/O Hardware! Application I/O Interface! Kernel I/O Subsystem! Transforming I/O Requests to Hardware Operations! STREAMS! Performance! Silberschatz, Galvin

More information

Chapter 13: I/O Systems

Chapter 13: I/O Systems Chapter 13: I/O Systems Chapter 13: I/O Systems I/O Hardware Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations STREAMS Performance Silberschatz, Galvin and

More information

Chapter 12: I/O Systems. Operating System Concepts Essentials 8 th Edition

Chapter 12: I/O Systems. Operating System Concepts Essentials 8 th Edition Chapter 12: I/O Systems Silberschatz, Galvin and Gagne 2011 Chapter 12: I/O Systems I/O Hardware Application I/O Interface Kernel I/O Subsystem Transforming I/O Requests to Hardware Operations STREAMS

More information

Faculty of Science FINAL EXAMINATION

Faculty of Science FINAL EXAMINATION Faculty of Science FINAL EXAMINATION COMPUTER SCIENCE COMP 273 INTRODUCTION TO COMPUTER SYSTEMS Examiner: Prof. Michael Langer April 18, 2012 Associate Examiner: Mr. Joseph Vybihal 2 P.M. 5 P.M. STUDENT

More information

CPE/EE 421/521 Fall 2004 Chapter 4 The CPU Hardware Model. Dr. Rhonda Kay Gaede UAH. The CPU Hardware Model - Overview

CPE/EE 421/521 Fall 2004 Chapter 4 The CPU Hardware Model. Dr. Rhonda Kay Gaede UAH. The CPU Hardware Model - Overview CPE/EE 421/521 Fall 2004 Chapter 4 The 68000 CPU Hardware Model Dr. Rhonda Kay Gaede UAH Fall 2004 1 The 68000 CPU Hardware Model - Overview 68000 interface Timing diagram Minimal configuration using the

More information

CSC 2405: Computer Systems II

CSC 2405: Computer Systems II CSC 2405: Computer Systems II Dr. Mirela Damian http://www.csc.villanova.edu/~mdamian/csc2405/ Spring 2016 Course Goals: Look under the hood Help you learn what happens under the hood of computer systems

More information

INTERRUPTS in microprocessor systems

INTERRUPTS in microprocessor systems INTERRUPTS in microprocessor systems Microcontroller Power Supply clock fx (Central Proccesor Unit) CPU Reset Hardware Interrupts system IRQ Internal address bus Internal data bus Internal control bus

More information

6.9. Communicating to the Outside World: Cluster Networking

6.9. Communicating to the Outside World: Cluster Networking 6.9 Communicating to the Outside World: Cluster Networking This online section describes the networking hardware and software used to connect the nodes of cluster together. As there are whole books and

More information

Computer System Overview OPERATING SYSTEM TOP-LEVEL COMPONENTS. Simplified view: Operating Systems. Slide 1. Slide /S2. Slide 2.

Computer System Overview OPERATING SYSTEM TOP-LEVEL COMPONENTS. Simplified view: Operating Systems. Slide 1. Slide /S2. Slide 2. BASIC ELEMENTS Simplified view: Processor Slide 1 Computer System Overview Operating Systems Slide 3 Main Memory referred to as real memory or primary memory volatile modules 2004/S2 secondary memory devices

More information

Computer System Overview

Computer System Overview Computer System Overview Operating Systems 2005/S2 1 What are the objectives of an Operating System? 2 What are the objectives of an Operating System? convenience & abstraction the OS should facilitate

More information

William Stallings Computer Organization and Architecture 10 th Edition Pearson Education, Inc., Hoboken, NJ. All rights reserved.

William Stallings Computer Organization and Architecture 10 th Edition Pearson Education, Inc., Hoboken, NJ. All rights reserved. + William Stallings Computer Organization and Architecture 10 th Edition 2016 Pearson Education, Inc., Hoboken, NJ. All rights reserved. 2 + Chapter 3 A Top-Level View of Computer Function and Interconnection

More information

Computer Organization

Computer Organization Objectives 5.1 Chapter 5 Computer Organization Source: Foundations of Computer Science Cengage Learning 5.2 After studying this chapter, students should be able to: List the three subsystems of a computer.

More information

Buses. Maurizio Palesi. Maurizio Palesi 1

Buses. Maurizio Palesi. Maurizio Palesi 1 Buses Maurizio Palesi Maurizio Palesi 1 Introduction Buses are the simplest and most widely used interconnection networks A number of modules is connected via a single shared channel Microcontroller Microcontroller

More information

Input/Output Systems

Input/Output Systems Input/Output Systems CSCI 315 Operating Systems Design Department of Computer Science Notice: The slides for this lecture have been largely based on those from an earlier edition of the course text Operating

More information

Question Bank Microprocessor and Microcontroller

Question Bank Microprocessor and Microcontroller QUESTION BANK - 2 PART A 1. What is cycle stealing? (K1-CO3) During any given bus cycle, one of the system components connected to the system bus is given control of the bus. This component is said to

More information

EECS 373 Design of Microprocessor-Based Systems

EECS 373 Design of Microprocessor-Based Systems EECS 373 Design of Microprocessor-Based Systems Prabal Dutta University of Michigan Lecture 6: AHB-Lite, Interrupts (1) September 18, 2014 Slides"developed"in"part"by"Mark"Brehob" 1" Today" Announcements"

More information

Chapter Seven Morgan Kaufmann Publishers

Chapter Seven Morgan Kaufmann Publishers Chapter Seven Memories: Review SRAM: value is stored on a pair of inverting gates very fast but takes up more space than DRAM (4 to 6 transistors) DRAM: value is stored as a charge on capacitor (must be

More information

Computer Organization and Structure. Bing-Yu Chen National Taiwan University

Computer Organization and Structure. Bing-Yu Chen National Taiwan University Computer Organization and Structure Bing-Yu Chen National Taiwan University Storage and Other I/O Topics I/O Performance Measures Types and Characteristics of I/O Devices Buses Interfacing I/O Devices

More information

The von Neuman architecture characteristics are: Data and Instruction in same memory, memory contents addressable by location, execution in sequence.

The von Neuman architecture characteristics are: Data and Instruction in same memory, memory contents addressable by location, execution in sequence. CS 320 Ch. 3 The von Neuman architecture characteristics are: Data and Instruction in same memory, memory contents addressable by location, execution in sequence. The CPU consists of an instruction interpreter,

More information

Interrupts (I) Lecturer: Sri Notes by Annie Guo. Week8 1

Interrupts (I) Lecturer: Sri Notes by Annie Guo. Week8 1 Interrupts (I) Lecturer: Sri Notes by Annie Guo Week8 1 Lecture overview Introduction to Interrupts Interrupt system specifications Multiple Sources of Interrupts Interrupt Priorities Interrupts in AVR

More information

Introduction to Input and Output

Introduction to Input and Output Introduction to Input and Output The I/O subsystem provides the mechanism for communication between the CPU and the outside world (I/O devices). Design factors: I/O device characteristics (input, output,

More information

1. Internal Architecture of 8085 Microprocessor

1. Internal Architecture of 8085 Microprocessor 1. Internal Architecture of 8085 Microprocessor Control Unit Generates signals within up to carry out the instruction, which has been decoded. In reality causes certain connections between blocks of the

More information

I/O Handling. ECE 650 Systems Programming & Engineering Duke University, Spring Based on Operating Systems Concepts, Silberschatz Chapter 13

I/O Handling. ECE 650 Systems Programming & Engineering Duke University, Spring Based on Operating Systems Concepts, Silberschatz Chapter 13 I/O Handling ECE 650 Systems Programming & Engineering Duke University, Spring 2018 Based on Operating Systems Concepts, Silberschatz Chapter 13 Input/Output (I/O) Typical application flow consists of

More information