IBA Software Architecture SCSI RDMA Protocol (SRP) Storage Driver High Level Design. Draft 2


Draft 2, June 2002

Revision History and Disclaimers

Rev.     Date          Notes
Draft 1  <May, 2002>   Internal review.
Draft 2  <June, 2002>  Integrated Draft 1 review comments. Open to group-wide review.

THIS SPECIFICATION IS PROVIDED "AS IS" WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF MERCHANTABILITY, NONINFRINGEMENT, FITNESS FOR ANY PARTICULAR PURPOSE, OR ANY WARRANTY OTHERWISE ARISING OUT OF ANY PROPOSAL, SPECIFICATION OR SAMPLE. Intel disclaims all liability, including liability for infringement of any proprietary rights, relating to use of information in this specification. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein. This Specification as well as the software described in it is furnished under license and may only be used or copied in accordance with the terms of the license. The information in this document is furnished for informational use only, is subject to change without notice, and should not be construed as a commitment by Intel Corporation. Intel Corporation assumes no responsibility or liability for any errors or inaccuracies that may appear in this document or any software that may be provided in association with this document. Except as permitted by such license, no part of this document may be reproduced, stored in a retrieval system, or transmitted in any form or by any means without the express written consent of Intel Corporation. Intel is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries. *Other names and brands may be claimed as the property of others. Copyright 2002 Intel Corporation.

Approval

Role                            Signature    Date
Responsible Engineer
Engineering Group Leader
Software Engineering Manager

Abstract

The Linux SRP driver, srpl, is a low level Linux SCSI driver. It provides user applications access to storage resources on InfiniBand fabric attached SRP storage I/O units, either directly through a device file or through a transparent mount point in the file system. Srpl registers with the Linux SCSI mid layer as does any other low level Linux SCSI driver for a SCSI host bus adapter.

An application's access to Linux SCSI devices is abstracted through the Linux SCSI mid layer. I/O requests from the file system or block driver go to the SCSI mid layer, which then delivers SCSI commands to the driver in control of the target SCSI device. The mid layer keeps track of all the SCSI devices and their controllers. As controller drivers start, they register with the mid layer; the mid layer scans for devices, then maps those devices to device file nodes (names) in the file system. To register with the mid layer, a low level driver passes a set of driver entry points for the mid layer to use when passing commands. When an application opens a device, the mid layer arranges to pass SCSI commands intended for that device to the appropriate driver.

An SRP storage I/O unit on an InfiniBand fabric is any device on the fabric that provides block storage services using SRP over InfiniBand. Srpl provides the SCSI mid layer with access to the InfiniBand attached SRP storage resources. It behaves like any other low level Linux SCSI driver as far as the mid layer is concerned, but it differs from most low level drivers in some important ways. First, srpl does not directly manage any specific storage controller hardware. Second, the driver registers with the InfiniBand plug and play manager. This allows the plug and play manager to inform srpl when an SRP storage controller becomes available on the fabric. When this happens, srpl establishes an InfiniBand connection with the storage unit, then registers the newly discovered controller with the mid layer.

After registration with the mid layer, srpl is ready to service I/O requests from the mid layer. This is done in the following steps (see Figure 1-1 for illustration):

1. Srpl receives an I/O request from the SCSI mid layer.
2. Srpl translates the I/O request from its native Linux SCSI mid layer form to an SRP information unit command request.
3. Srpl sends the command request, in an InfiniBand message, to the I/O unit.
4. After the I/O unit has completed the request (successfully or unsuccessfully), srpl receives the corresponding SRP command response in an InfiniBand message from the I/O unit.
5. Srpl translates the command response message to find the completion status.
6. Srpl completes the I/O by reporting the completion status from the command response back to the SCSI mid layer.

[Figure 1-1. Srpl provides the SCSI mid layer with access to IB storage resources. The figure shows the six numbered steps above passing between the SCSI mid layer, the srpl InfiniBand storage driver (SRP protocol engine, I/O request management, connection management, command translation, response translation, resource management, initialization and shutdown management, error handler), and the I/O unit.]

Contents

1. Introduction
   1.1 Purpose and Scope
   1.2 Audience
   1.3 Acronyms and Terms
   1.4 References
   1.5 Conventions
   1.6 Stakeholders
   1.7 Before You Begin
2. Features
3. Goals
4. Design Assumptions & Rules
5. Design Overview
   5.1 Major Components
       5.1.1 Interfaces
       5.1.2 Other Components
   5.2 Operation
6. Design Details
   6.1 Plug and Play Manager Interface
       6.1.1 Inbound Data Flow
       6.1.2 Outbound Data Flow
   6.2 Fabric Attached SRP Controller Interface
       6.2.1 Outbound Data Flow
       6.2.2 Inbound Data Flow
   6.3 SCSI Mid Layer Interface
       6.3.1 Inbound Data Flow
       6.3.2 Outbound Data Flow
   6.4 I/O Request Management
   6.5 Resource Management
   6.6 Threading Model
   6.7 Locking
   6.8 Buffer Strategy
   6.9 Error Handling
   6.10 Major Data Structures
        6.10.1 srpl globals
        6.10.2 srpl host
        6.10.3 srpl request
7. System Resource Usage
   7.1 Memory
   7.2 Other Resources
8. Internal Compatibility
   8.1 Interaction with Other Components
   8.2 System Requirements
   8.3 Imported Interfaces
   8.4 Exported Interfaces
9. External Compatibility
   9.1 Standards
   9.2 Deviations from Standards
   9.3 Other Dependencies
10. Initialization & Shutdown
    10.1 Initialization
    10.2 Shutdown
11. Installing, Configuring, and Uninstalling
    11.1 Installing
    11.2 Configuring
    11.3 Uninstalling
12. Unresolved Issues
13. Data Structures and APIs

Figures
Figure 1-1. Srpl provides the SCSI mid layer with access to IB storage resources.
Figure 5-1. Interfaces of srpl.
Figure 6-1. SCSI command execution.
Figure 6-2. Srpl's request state machine.
Figure 6-3. Controller resources.
Figure 6-4. Major data structures.
Figure. Controller discovery.

Tables
Table 5-1. Srpl Responses to Events.


1. Introduction

1.1 Purpose and Scope

This document is one of a set of High Level Designs (HLDs) that supplement the Software Architecture Specification (SAS) by providing further levels of decomposition and design detail. Please refer to the SAS for the first level design decomposition and architectural description. This HLD defines the implementation of one component in the SAS, including inter-component dependencies. When completed, this HLD will enable the product development team (PDT) to complete the low-level design, coordinate commitments, make good estimates of the required effort, start test planning, and schedule for the Plan of Record (POR).

1.2 Audience

Anyone interested in understanding this implementation of the SAS should read this document, including:

- Software developers who are integrating the separate modules into their own software projects
- Hardware developers who need an understanding of the software behavior to optimize their designs
- Evaluation engineers who are developing tests for InfiniBand-compliant devices
- Others in similar roles who need more than a basic understanding of the software

1.3 Acronyms and Terms

Information Unit: Information Units are SRP-formatted requests and responses of various types. Information Units are exchanged between SCSI initiators and targets across an RDMA channel (such as InfiniBand). The most common of these are the command requests (sent by the initiator to the target) and command responses (sent by the target to the initiator).

IU: Information Unit.

SCSI RDMA Protocol: This protocol, defined by the ANSI T10 committee, describes an encapsulation scheme by which the SCSI I/O protocol is mapped onto an RDMA-capable transport.

SRP: SCSI RDMA Protocol.

Srpl: The name of the Intel Linux SRP InfiniBand storage driver.

1.4 References

SRP Specification (http://): American National Standard for Information Systems, Information Technology, Working Draft SCSI RDMA Protocol (SRP). (Current draft is revision 15.)

1.5 Conventions

This document uses the following typographical conventions and icons:

Italic is used for book titles, manual titles, URLs, and new terms.
Bold is used for user input (in the Installation section).
Fixed width is used for code definitions, data structures, function definitions, and system console output. Fixed width text is always in Courier font.

NOTE is used to alert you to an item of special interest.
DESIGN ISSUE is used to alert you to unresolved design issues that may impact the module's design, function, or usage.

1.6 Stakeholders

The stakeholders in this design are:

- Manager of Software Development
- Program Management
- Evaluation
- Inspection
- Technical Publications
- Technical Marketing
- Software Quality
- InfiniBand Linux System SW Manager

1.7 Before You Begin

Please note the following: This document assumes that you are familiar with the InfiniBand Architecture Specification, which is available from the InfiniBand Trade Association.

For a complete list of acronyms, terms, and references for all the HLDs, see the InfiniBand* Architecture Glossary and References.


2. Features

Srpl will be loadable by the InfiniBand Access Layer's plug and play manager. Srpl will register with the plug and play manager as it initializes, allowing the plug and play manager to notify srpl of new SRP resources as they become available on the fabric. In addition, srpl will have an entry in the device configuration file, which maps I/O controllers to driver modules. Thus, if a resource becomes available and the driver is not loaded, the plug and play manager will be able to select the srpl module file and load it on demand.

Srpl will support proc file system controls to modify behavior and extract counter values.
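As an illustration of the proc file system control mentioned above, the sketch below registers a read-only /proc entry that reports driver counters, using the Linux 2.4-era create_proc_read_entry() interface contemporary with this document. The entry name, the counter fields, and the srpl_stats object are hypothetical; the HLD does not specify them.

    #include <linux/kernel.h>
    #include <linux/proc_fs.h>

    /* Hypothetical driver-wide counters; the HLD does not name these fields. */
    static struct {
        unsigned long reqs_received;
        unsigned long reqs_completed;
    } srpl_stats;

    /* proc read handler: formats counter values into the caller's page. */
    static int srpl_proc_read(char *page, char **start, off_t off,
                              int count, int *eof, void *data)
    {
        int len = sprintf(page,
                          "requests received:  %lu\n"
                          "requests completed: %lu\n",
                          srpl_stats.reqs_received,
                          srpl_stats.reqs_completed);
        *eof = 1;
        return len;
    }

    /* Called once at driver initialization; the entry name is illustrative. */
    static void srpl_register_proc(void)
    {
        create_proc_read_entry("srpl", 0, NULL, srpl_proc_read, NULL);
    }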


3. Goals

The performance and other goals have not been defined for the first draft of this document.


4. Design Assumptions & Rules

The following are assumed in order to support the design of srpl:

- The plug and play manager will be present and capable of loading the srpl driver and notifying srpl of changes in the status of relevant resources on the fabric.
- The Access Layer and Connection Manager will be present and capable of supporting InfiniBand connection establishment, message passing, and RDMA services.
- The Linux SCSI mid layer will serve as request broker between the application (or driver) using the disk services and srpl. The mid layer will manage flow control, I/O request backlog, and error handling above the srpl driver.
- The version of Linux is Red Hat.
- The physical machine is capable of supporting an Intel HCA and the software stack to run it.

Srpl requires about 40 to 60 Kbytes of memory per connected I/O controller, depending on the request queue depth and the number of disks on that controller.

Srpl is designed as a Linux low level SCSI driver. It conforms to the interface requirements of the Linux SCSI mid layer, whose purpose is to abstract a common usage model of all low level SCSI drivers for the benefit of programs higher on the storage stack (e.g., file system drivers, user applications).


5. Design Overview

5.1 Major Components

5.1.1 Interfaces

Srpl has three interfaces: one to interact with the Linux SCSI mid layer, another for the fabric attached SRP storage controller, and a third used to communicate with the plug and play manager. (The path to the storage controller goes through the InfiniBand Access Layer, which receives and delivers messages to and from the fabric.)

To the Linux SCSI mid layer, srpl behaves like any other low level Linux SCSI driver. It registers with the mid layer in order to advertise its entry points in the standard low level driver way. The mid layer uses these entry points to deliver SCSI commands and error recovery notifications to srpl. Figure 5-1 shows the relationship of srpl and its neighboring components. The heavy connectors in the figure represent the interfaces discussed here.

Srpl relies on the InfiniBand Access Layer for establishment of InfiniBand connections with the fabric attached storage controller service on an I/O unit. The interface with the I/O unit is defined by the SRP specification, which determines message formats and expected behavior. Srpl opens a connection in an SRP-specific way, then sends messages to the I/O unit containing SRP information units (IUs) that represent the SCSI commands received from the mid layer. Srpl interprets responses from the I/O unit and notifies the mid layer of I/O completion.

Srpl logically sits between the SCSI mid layer and the fabric attached SRP controller. Its primary job is to translate SCSI requests (from the mid layer) into SRP information units, and information units received from the controller into responses that are delivered to the mid layer. Srpl can do this for several fabric attached controllers simultaneously (limited only by system memory resources), and it can manage many I/O transactions for each of these controllers.

The third interface is used to communicate with the plug and play manager. When srpl starts up, it immediately registers with the plug and play manager, advertising entry points for notification, so that srpl can be informed of newly available fabric attached resources (or that some resource has become unavailable).

[Figure 5-1. Interfaces of srpl. The figure shows user applications and file system drivers above the Linux block driver and the Linux SCSI mid layer in the Linux kernel; the Linux SRP storage driver (srpl) alongside other low level SCSI drivers; the IB Access Layer with its connection and PnP services; the verbs provider; and the InfiniBand fabric leading to the storage I/O unit.]

5.1.2 Other Components

In addition to these interfaces, the other major elements of srpl are the SRP protocol engine, the I/O request manager, the error handler, and the resource manager. The SRP protocol engine translates SCSI commands into SRP requests, and SRP responses from the target into command results and completions for the SCSI mid layer. Error handling in the srpl driver is mostly concerned with InfiniBand port fail over, but also includes special interfaces to facilitate target error recovery by the mid layer. The resource manager sets up pools of resources (messages and I/O request structures) for newly discovered controllers and manages those pools during execution.

5.2 Operation

Srpl is event driven: all of srpl's behavior is in response to certain events. Table 5-1 describes srpl's actions for each type of event it might receive. Once the driver is running, any of these events could happen at any time. The I/O request manager and resource manager elements of the driver keep track of transactions in progress, so that the state of the driver remains coherent.

Event: Driver load
Action: Driver initialization; registration with the plug and play manager.

Event: Controller assigned
Action: Connect through InfiniBand; allocate message and I/O request structure resources; register the controller with the SCSI mid layer.

Event: I/O request from mid layer
Action: Translate the SCSI command into an SRP message for the I/O unit; send the message over the fabric to the controller service on the I/O unit.

Event: Send message completion
Action: Advance the state of the I/O request; if complete, notify the mid layer and recycle the I/O request and messages.

Event: Receive message completion
Action: Advance the state of the I/O request; if complete, notify the mid layer and recycle the I/O request and messages.

Event: Controller revoked
Action: [Currently the design doesn't support this event. Srpl will generate an error and continue. More work needs to be done to understand how to handle this in Linux.]

Event: Driver unload
Action: Return all resources; close all InfiniBand connections; driver exit.

Event: IB connection failure
Action: Srpl attempts to fail over. Srpl suspends action on all unfinished I/O requests and attempts to re-connect, possibly using a different path. If this fails, each outstanding I/O is closed in error and the mid layer notified; if successful, srpl re-issues the outstanding I/Os in the order they were initially issued. It is possible that an HCA will support automatic path migration; in a future release, srpl can use this feature to improve fail over performance.

Event: SCSI error
Action: Report the error to the mid layer; issue a command abort or a reset to the controller, under mid layer direction.

Table 5-1. Srpl Responses to Events


6. Design Details

This section gives details on the major components of the driver.

6.1 Plug and Play Manager Interface

Srpl interacts with the plug and play manager. Srpl registers with the plug and play manager, advertising a set of entry points to be used to notify srpl of events related to the arrival or departure of SRP resources on the fabric. The relationship between srpl (or any other channel driver) and the plug and play manager can be established even before the driver loads. The plug and play manager uses a configuration file in the file system to associate InfiniBand I/O controllers with specific channel drivers. Thus, when the plug and play manager learns about a new SRP resource on the fabric, and no loaded driver module has registered and associated itself with that resource, the plug and play manager can choose the srpl driver and load it. When this happens, the driver gets a chance to initialize and register with the plug and play manager, so it can be notified of the new SRP resource on the fabric.

6.1.1 Inbound Data Flow

Data inbound from the plug and play manager is really an event notification and takes the form of a call to one of srpl's two plug and play entry points (advertised at plug and play registration). The first of these is the add unit entry point, srpl_add_unit(). The plug and play manager passes an IOC profile structure that contains characteristics of the service. The Access Layer provides facilities for finding paths to the IOC. When srpl gets notification of a new SRP resource on the fabric, it establishes a connection with that service and makes it available to the Linux SCSI mid layer by calling the mid layer function to register a new SCSI controller.

The other entry point the plug and play manager may use is the remove unit entry point, srpl_remove_unit(). This notification informs srpl that a previously available SRP resource has become unavailable. The argument to this call is an IOC profile structure pointer. This is enough information for srpl to identify which resource has been removed.

DESIGN ISSUE: Linux doesn't handle device removal very well. This is an unresolved issue at this point. For the moment, srpl will ignore these notifications. See the Unresolved Issues chapter.

6.1.2 Outbound Data Flow

The only outbound interaction srpl has with the plug and play manager is summed up in two events. Srpl calls RegisterChannelDriver() to register as it is initializing. This registration process informs the plug and play manager of the entry points to use to notify srpl. It also lists the vendor and device ids of those fabric-attached services for which this driver should be used. The other event is de-registration. When srpl is shutting down, it calls UnregisterChannelDriver() to inform the plug and play manager that the driver is being unloaded and the entry points are no longer valid.
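The registration sequence above might look like the following minimal sketch. The call names RegisterChannelDriver(), UnregisterChannelDriver(), srpl_add_unit(), and srpl_remove_unit() come from this document; the registration structure, the ioc_profile_t type, and all field names are assumptions, since the HLD does not give the actual Access Layer signatures.

    #include <linux/types.h>

    /* IOC profile as delivered by the plug and play manager (layout assumed). */
    typedef struct ioc_profile ioc_profile_t;

    /* Entry points advertised to the plug and play manager. */
    static void srpl_add_unit(ioc_profile_t *profile)
    {
        /* Connect to the SRP service, then register the new controller
         * with the SCSI mid layer (see section 6.3). */
    }

    static void srpl_remove_unit(ioc_profile_t *profile)
    {
        /* Ignored for now; see the DESIGN ISSUE above. */
    }

    /* Hypothetical registration record; the real structure is defined by
     * the Access Layer. */
    struct channel_driver_info {
        void (*add_unit)(ioc_profile_t *);
        void (*remove_unit)(ioc_profile_t *);
        u32  vendor_id;   /* fabric services this driver should manage */
        u32  device_id;
    };

    extern int  RegisterChannelDriver(struct channel_driver_info *info);
    extern void UnregisterChannelDriver(struct channel_driver_info *info);

    static struct channel_driver_info srpl_cd_info = {
        srpl_add_unit, srpl_remove_unit, 0 /* vendor id */, 0 /* device id */
    };

    int srpl_init(void)    /* driver load */
    {
        return RegisterChannelDriver(&srpl_cd_info);
    }

    void srpl_exit(void)   /* driver unload: entry points become invalid */
    {
        UnregisterChannelDriver(&srpl_cd_info);
    }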

6.2 Fabric Attached SRP Controller Interface

This section describes how srpl communicates with the SRP storage service on the fabric. Connection to the target is initiated when the plug and play manager notifies srpl that a fabric attached controller has become available for use. Srpl issues a connect request containing a structure of connection parameters specified by the SRP protocol (e.g., an SRP login request that contains message size and queue depth). If successful, the target will accept the connection (with possibly modified connection parameters) and, finally, srpl will accept the target's connection response. In this step srpl posts messages on the receive queue, the number of which equals the queue depth. These will be used to receive messages (command responses) from the I/O unit on this connection. Once this has happened, the connection is ready to support message traffic in both directions.

6.2.1 Outbound Data Flow

Outbound data from srpl to the target SRP service is contained in InfiniBand messages. The message payload contains an SRP information unit of type command request, translated from the Linux SCSI command data structure. The message specifies the location in memory and the location on disk (the endpoints of the user buffer movement), the size of the user buffer, and other information such as the SCSI target id, LUN, and command tag. The message is interpreted by the SRP service on the other end of the connection. The SRP target will then execute the command, using RDMA to move the user data (without srpl's direct involvement). When the I/O unit has completed the SRP request, it replies to srpl with a message containing an SRP command response. See section 6.2.2.

6.2.2 Inbound Data Flow

Inbound data through the target connection comes in the form of InfiniBand messages containing SRP information units (command responses). They report the status of the command request with the same tag. Any error information and auto sense data are also contained within the information unit. The srpl driver uses the information in command responses to complete the requested I/O for the Linux mid layer. SRP supports a small number of special commands from the target. These messages are distinguished by srpl as they arrive. See the SRP specification for the use of these and more details on the use of commands and command responses in the protocol. Srpl's I/O request manager keeps track of all (perhaps many) outstanding I/O requests. Please see section 6.4 for details.

6.3 SCSI Mid Layer Interface

This section describes the interface used to exchange I/O requests and results between the Linux SCSI mid layer and srpl. This interface is the standard one any low level SCSI host bus adapter driver uses with the Linux operating system. For each connection to a fabric attached resource, srpl registers with the Linux SCSI mid layer using scsi_register_module() to pass a SCSI host template data structure pointer to the mid layer. This template describes the attributes of the low level driver, including various parameters such as the maximum number of SCSI target ids and LUNs. The structure also contains a set of pointers to the driver's entry points, which constitute the mid layer's interface to srpl.

There are pointers to routines for delivering SCSI commands to the driver, aborting commands, resetting, and handling errors. An exhaustive list and description of these entry points is beyond the scope of this document. Those details can be found in the Linux kernel source, specifically the file /usr/src/linux/drivers/scsi/hosts.h, which contains the definition of the SHT (SCSI host template), many of whose fields are function pointers into the low level driver. The structure fields are commented with information about how the functions pointed to are to be used. This section deals with the normal flow of SCSI commands and results across this interface.

6.3.1 Inbound Data Flow

Commands flow into srpl from the SCSI mid layer via calls made to srpl_queuecommand(), the main entry point advertised upon driver registration with the mid layer. There are two arguments to srpl_queuecommand(): a pointer to a Linux SCSI command data structure, and a pointer to a mid layer callback function. The SCSI command structure contains the SCSI command block, which identifies the request the mid layer is making, (usually) a pointer to a user buffer which holds the data to be moved to or from the target device, the buffer's size, and space to be filled with result status and (possibly) auto-sense data.

It should be noted that srpl doesn't explicitly move data in or out of the user buffer. The I/O unit manages that data movement. Srpl facilitates this by getting a memory handle for the user buffer and making that available to the I/O unit. The I/O unit, after it interprets the command request information unit in the message, will have enough information to initiate an InfiniBand RDMA request to move the user data.

When the mid layer calls srpl_queuecommand(), it expects srpl to queue the request and return from the call before the request has completed. The mid layer is then free to call srpl_queuecommand() again (and again) until the maximum queue depth of outstanding requests is reached.

For each command dispatched to a low level driver, the mid layer sets a timer so that it can detect missing request completions. Normally this timer doesn't expire; it is destroyed when the command is completed. If a command timer should expire, however, the mid layer will interpret this to mean that the SCSI controller is unable to complete the command. The mid layer will take steps to recover by first issuing a command abort to the low level driver and, if successful, will retry the command. If this doesn't result in a completion of the command, the mid layer will try resetting first the device, then the SCSI bus, and finally the adapter.

To do this, the SCSI mid layer may call srpl's command abort handler, or any of three reset routines (for device reset, SCSI bus reset, or adapter reset), in attempts to recover from detected errors or command time out events. When calling any of these srpl entry points, the mid layer expects the request (abort or reset) to be completed by the time srpl returns control back to the mid layer. Each of these entry points has only one argument: a pointer to a SCSI command structure. Srpl will abort the command identified by this SCSI command, or reset the device, bus, or controller associated with the SCSI command. To abort a previously received SCSI command, srpl will create a message to the I/O unit containing an SRP task management request information unit with code abort, and send that to the I/O unit. Upon receiving the reply to the abort, srpl cleans up the data structures associated with the I/O request and returns from the abort handler to the mid layer. The mid layer will not expect to see a completion for the aborted command, and it can now re-issue the aborted command.

Resets are handled in a similar way. Reset notification is forwarded to the appropriate controller on the fabric. After the reset has completed, srpl cleans up all requests associated with the object that was reset, and returns from the reset handler to the mid layer. After the reset success is reported, the mid layer will not expect completions for commands issued to the unit (disk, bus, or controller) being reset.
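As a sketch of the registration described in this section, the fragment below fills out a Linux 2.4-era SCSI host template and passes it to scsi_register_module(). The template fields and the handler signatures are the standard ones from drivers/scsi/hosts.h in that kernel series; the handler bodies, the queue depth value, and the helper name are illustrative, not the actual srpl implementation.

    #include "scsi.h"    /* Scsi_Cmnd, from drivers/scsi in the 2.4 tree */
    #include "hosts.h"   /* Scsi_Host_Template, scsi_register_module() */

    /* Entry points the mid layer will call; bodies live elsewhere in the
     * driver and are omitted here. */
    static int srpl_detect(Scsi_Host_Template *sht);
    static int srpl_release(struct Scsi_Host *host);
    static int srpl_queuecommand(Scsi_Cmnd *cmd, void (*done)(Scsi_Cmnd *));
    static int srpl_abort_handler(Scsi_Cmnd *cmd);
    static int srpl_device_reset(Scsi_Cmnd *cmd);
    static int srpl_bus_reset(Scsi_Cmnd *cmd);
    static int srpl_host_reset(Scsi_Cmnd *cmd);

    static Scsi_Host_Template srpl_template = {
        .name                    = "srpl",
        .detect                  = srpl_detect,
        .release                 = srpl_release,
        .queuecommand            = srpl_queuecommand,
        .eh_abort_handler        = srpl_abort_handler,
        .eh_device_reset_handler = srpl_device_reset,
        .eh_bus_reset_handler    = srpl_bus_reset,
        .eh_host_reset_handler   = srpl_host_reset,
        .use_new_eh_code         = 1,
        .can_queue               = 64,   /* illustrative queue depth */
        .this_id                 = -1,
    };

    /* Called when the plug and play manager assigns a new controller. */
    int srpl_register_with_midlayer(void)
    {
        return scsi_register_module(MODULE_SCSI_HA, &srpl_template);
    }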

6.3.2 Outbound Data Flow

After the I/O unit has completed a request (that is, the reply message containing the command response has been received), srpl fills in the appropriate fields of the associated SCSI command, including error status and perhaps SCSI sense data. Srpl then calls the mid layer completion function associated with the command, passing the SCSI command structure pointer as its argument. This completes the I/O and reduces the outstanding queue depth by one. Srpl also posts a new InfiniBand message receive request to replace the one consumed by receiving the I/O unit's message.

For the life of the I/O request, srpl uses its own request data structure to contain all the associated resources, information, and the state of the request. This structure holds a pointer to the SCSI command structure, the mid layer's completion handler pointer, and a pointer to the message containing the SRP information unit that was sent to the I/O unit. See section 6.4 for details on how srpl manages I/O requests.

6.4 I/O Request Management

Srpl receives SCSI commands from the mid layer, translates them to SRP information units, and puts them into messages bound for the target I/O unit that is providing the service. As responses come in from the I/O unit, srpl coordinates message reception, completions of I/O requests, and other updates of I/O requests. Figure 6-1 shows the steps that occur during the life of a SCSI command from the mid layer:

1. The SCSI mid layer delivers a SCSI command structure (pointer) through the interface srpl registered with the mid layer (srpl_queuecommand()).
2. Srpl formats an SRP command request information unit, puts it in a message, and calls the Connection Service to send it to the I/O unit.
3. The Connection Service sends the message.
4. The Connection Service notifies srpl of the message send completion by calling srpl's send completion handler.
5. The I/O unit executes the I/O request, managing an RDMA transaction to move the data from (or to) the user buffer.
6. The I/O unit sends a message containing an SRP command response information unit to srpl.
7. The Connection Service notifies srpl of a message receive completion by calling srpl's receive completion handler.
8. Srpl interprets the I/O completion message, reports status in the SCSI command structure associated with the I/O request, then calls the mid layer's completion routine.

For each command delivered to srpl, an I/O request structure is allocated from the controller's free list and is put on the work-in-progress list until the command has completed. At that time the I/O request structure is returned to the free list. Srpl uses a state machine to track the progress of each I/O request. As steps in the process happen, the state of the request is updated until it reaches the final state, at which point the I/O request is completed and the resources associated with it (including the I/O request structure and the InfiniBand message) are freed for future use. The I/O request structure contains the state of the I/O transaction (which changes through its life) and pointers to the resources in use by the transaction, such as InfiniBand messages, as well as pointers to the SCSI command structure and the mid layer's completion routine.

[Figure 6-1. SCSI command execution. The figure shows the eight numbered steps above flowing between the user data buffer, the SCSI mid layer, srpl, the Connection Services, the fabric, and the I/O unit.]

Figure 6-2 is a diagram showing the I/O request state machine. Each I/O request traverses the machine from its initialization (which happens as srpl is given a new SCSI command) to the time the command is completed and the I/O request structure is freed. The machine has the following states: free, init, send_pend_recv_pend, send_comp_recv_pend, send_pend_recv_comp, and complete.

The state of the I/O request structure starts as free. At this time the structure is on the free list belonging to the controller (srpl host) structure. When a SCSI command is received from the mid layer, an I/O request structure is removed from the free list and its state is updated to init. Srpl translates the SCSI command into an information unit (which is the payload of an InfiniBand message). The resources associated with the request (the SCSI command structure pointer, the outbound request message, etc.) are stored in the I/O request structure. This request structure is then added to the work-in-progress queue of the controller (srpl host) structure to join any other requests that have not yet completed.

[Figure 6-2. Srpl's request state machine. Transitions run from free through init to send_pend_recv_pend, then through send_comp_recv_pend or send_pend_recv_comp to complete, driven by request allocation, message send requests, message send completions, and message receive completions.]

Srpl calls the message send routine of the Access Layer's Connection Service and updates the state of the I/O request to send_pend_recv_pend: the message has been sent, though neither the completion for the send nor the reply from the I/O unit has been detected. From the send_pend_recv_pend state, the request will transition to one of two states, send_comp_recv_pend or send_pend_recv_comp, depending on which completion event handler is called first. (It is possible that the reply receive completion handler is called before the send completion handler for the outbound request.) Both completion handlers get a context argument whose value is the address of the I/O request structure to be updated.
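The state names and completion-handler transitions just described might be encoded as in the following sketch. The enum values mirror the states in Figure 6-2; the srpl_request layout and the completion helper are hypothetical, and locking (section 6.7) is omitted here.

    /* Request states, mirroring Figure 6-2. */
    enum srpl_req_state {
        SRPL_REQ_FREE,                 /* on the controller's free list */
        SRPL_REQ_INIT,                 /* command accepted, IU being built */
        SRPL_REQ_SEND_PEND_RECV_PEND,  /* send posted; no completions yet */
        SRPL_REQ_SEND_COMP_RECV_PEND,  /* send done; awaiting response */
        SRPL_REQ_SEND_PEND_RECV_COMP,  /* response arrived before send completion */
        SRPL_REQ_COMPLETE,             /* both events seen */
    };

    /* Minimal request structure; the real one also holds the SCSI command
     * pointer, messages, and callback (see section 6.10.3). */
    struct srpl_request {
        enum srpl_req_state state;
    };

    static void srpl_complete_to_midlayer(struct srpl_request *req)
    {
        /* Fill in status and call the mid layer's completion routine. */
    }

    /* Send completion handler; the context argument carries the request. */
    static void srpl_on_send_complete(struct srpl_request *req)
    {
        switch (req->state) {
        case SRPL_REQ_SEND_PEND_RECV_PEND:
            /* Response still outstanding. */
            req->state = SRPL_REQ_SEND_COMP_RECV_PEND;
            break;
        case SRPL_REQ_SEND_PEND_RECV_COMP:
            /* Receive completed first; this handler finishes the I/O. */
            req->state = SRPL_REQ_COMPLETE;
            srpl_complete_to_midlayer(req);
            break;
        default:
            break;  /* unexpected; error handling omitted in this sketch */
        }
    }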

From the send_comp_recv_pend state, the request will transition to complete when its command response is received. In this path the receive completion handler will complete the request for the Linux SCSI mid layer. From the send_pend_recv_comp state, the request will transition to complete when the send completion arrives (the response has already been received). In this path the send completion handler will complete the request for the Linux SCSI mid layer. When the request is in the complete state, either the send handler or the receive handler has initiated completion with the mid layer. The mid layer's completion routine is called and, after its return, the request structure is returned to the free list and its state is returned to free.

At any time there could arise an interruption of the connection. When that happens, srpl attempts to fail over to a new connection. During this recovery, normal operation of srpl is suspended until the connection is re-established and the outstanding command requests are re-sent to the I/O unit. As part of the process, all outstanding requests representing work in progress (those on the wip list) transition to a special state, shelved (not shown in Figure 6-2), and are queued on this host's shelf. If any new requests arrive from the mid layer while srpl is failing over, the new requests go immediately to the shelf with the state shelved. If srpl is successful at creating a new connection to the I/O unit, then all requests on the shelf are retransmitted to the I/O unit over the new connection, and their state is changed to send_pend_recv_pend. At this time normal operation of the driver resumes, and requests traverse the state machine as described above.

6.5 Resource Management

When an I/O controller on the fabric is assigned to a Linux host, srpl creates a new data structure, called an srpl host structure, to represent that controller. Associated with the srpl host structure are a set of srpl request structures and a pool of messages to be used to exchange SRP requests and responses with the I/O controller service on the I/O unit. The number of srpl request structures is the same as the queue depth for that controller (established at channel connection), and the number of messages is twice that (one each for sending the command request and receiving the command response). Figure 6-3 illustrates the relationship between the major resources associated with each controller, managed by the srpl host structure. Creation of these resources happens only when the controller is assigned and is managed by the plug and play interface for adding new units. Dynamic management of these resources is handled by a set of routines not exposed outside of the driver. These routines ensure that the state of the free list and of the data structures remains coherent.

[Figure 6-3. Controller resources. The srpl host structure anchors the request free list, the request work-in-progress list, the free message pool, and the srpl_request structures.]

6.6 Threading Model

This driver does not explicitly manage any threads. At controller initialization time, srpl creates event managers for handling notification of message send and receive completions on its InfiniBand connections. Each of these managers has a thread that runs when signaled by the Access Layer. This ensures that when servicing message completions (that is, updating request state and possibly calling the mid layer's completion handler), the thread has a context (i.e., the context is not within an interrupt service routine). This allows the completion handlers to be free of the restrictions that exist for code that might run in the context of an interrupt.

The driver advertises entry points to two other local services, the SCSI mid layer and the plug and play manager. When any of these routines is called, it runs in the context of the calling thread. Any threading model used by those services is implicitly extended to include execution of the functions they call in srpl.

6.7 Locking

Locks are fine grained and implemented to serialize access to common data structures. First, there is the srpl globals structure. This structure holds parameters global to the entire instance of the driver. When a controller is assigned to this host, a new host structure is initialized and added to a list headed by a field in this structure. A lock is taken while the addition of such a list element is in progress. Second, there are locks associated with each host structure instance. As operations and events occur, counters in the host structure are incremented. Write access to these fields is guarded by a lock. In addition, there are locks guarding access to the request free and work-in-progress queues (both owned by the host structure). Last, there is a lock in the request structure, used to serialize changes of state of the request. For example, once a command request has been sent to the I/O unit, two events are expected: the message send completion and the message receive completion (indicating a response from the I/O unit). In a multiprocessor environment, these two handlers could be running simultaneously, each updating the state of the same request structure. The lock in the request structure ensures that once an update to the state has started, it will finish before the other update can start.

All locks are public library implemented spin locks. These are, at their core, Linux kernel spin locks. Taking a lock involves spinning (holding the processor) until the lock is available. For this reason, certain locking principles are observed: first, hold a lock only for a short time (or piece of code path); second, never yield the processor while holding a lock (either directly, by blocking or scheduling, or by calling a function that may block); and third, hold as few locks as possible at any given time. In this implementation, a thread running in this driver never holds more than one lock at a time.
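The per-request lock described above might be applied to the state transition from the section 6.4 sketch as follows. This uses standard Linux kernel spin locks; the structure and helper names remain hypothetical.

    #include <linux/spinlock.h>

    struct srpl_request_sketch {
        spinlock_t          lock;    /* initialized with spin_lock_init() */
        enum srpl_req_state state;   /* enum from the section 6.4 sketch */
    };

    static void srpl_complete_to_midlayer_mp(struct srpl_request_sketch *req)
    {
        /* Fill in status and call the mid layer's completion routine. */
    }

    static void srpl_on_send_complete_mp(struct srpl_request_sketch *req)
    {
        unsigned long flags;
        int complete = 0;

        /* Serialize against the receive completion handler, which may be
         * updating the same request on another processor. */
        spin_lock_irqsave(&req->lock, flags);
        if (req->state == SRPL_REQ_SEND_PEND_RECV_PEND) {
            req->state = SRPL_REQ_SEND_COMP_RECV_PEND;
        } else if (req->state == SRPL_REQ_SEND_PEND_RECV_COMP) {
            req->state = SRPL_REQ_COMPLETE;
            complete = 1;
        }
        spin_unlock_irqrestore(&req->lock, flags);

        /* Complete to the mid layer only after dropping the lock: never
         * call a routine that might block while holding a spin lock. */
        if (complete)
            srpl_complete_to_midlayer_mp(req);
    }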

6.8 Buffer Strategy

Srpl does not buffer data moved between memory and disk. The client data is identified by a pointer to the buffer's location in user memory, and its size. This information is passed in the SCSI command structure to srpl through the srpl_queuecommand() interface. Srpl creates a memory handle for the buffer and sends it in the command request message to the I/O unit. The I/O unit manages the movement of data between user memory and the disk, using the InfiniBand RDMA service. The buffer strategies used by I/O units vary and are beyond the scope of this document.

6.9 Error Handling

There are three major classes of errors that srpl is prepared to handle: local resource shortage, InfiniBand connection errors, and target I/O unit errors.

The first class of errors is a lack of resources on the local system, which causes an allocation request to fail. When this happens, srpl returns the appropriate error code to the calling routine. The caller might be the mid layer, in which case another attempt will be made, until after repeated failure the mid layer gives up. Resource allocation errors are not expected when the mid layer is queuing a command, because the queue depth is known in advance and the necessary resources are allocated before the first command is queued. If the caller is the plug and play manager attempting to notify srpl of a newly assigned controller, then it is possible for resource allocation to fail, as this is the time when the driver is asking the operating system for more memory resources, which might not be currently available. If the system is so limited that resources cannot be allocated to support the new controller assignment, then srpl returns, and the controller is not initialized. The controller will not be registered with the mid layer, and Linux will not queue commands to that controller.

The second class is errors on the InfiniBand connection to the I/O unit. Any sort of error on the connection results in the connection being destroyed. The I/O unit controller service is free to throw away any requests in progress and reset itself. Srpl attempts to establish a new connection, possibly using the same path record or, alternatively, a different one if another is known. All of the outstanding requests are moved temporarily to the shelf, and any new requests go directly to the shelf. Next, srpl attempts to open a new connection, trying repeatedly on each of the paths to the I/O unit it knows or can discover. After the new connection is established, requests are re-sent to the I/O unit (using the new connection), and the requests are moved back to the srpl host structure's work-in-progress queue, where they will be found when the completion notifications are delivered.

Errors of the third class occur on the I/O unit itself. In some cases, the I/O unit will indicate an error condition in the response it sends back to srpl. While this is unusual, the protocols allow for it. Srpl will facilitate negotiation between the mid layer and the I/O unit. In that event, the error condition notice and any sense data are sent up to the SCSI mid layer for processing. The mid layer may choose to reissue the command, or take some other course of action (such as requesting a reset). In other cases, a message may get lost or stuck in the I/O unit, and the host never receives a response. In that case, the mid layer timer associated with the request expires, notifying the mid layer that a response is missing. The mid layer's response to this is to first issue an abort request and then reissue the request to srpl. If this fails, the mid layer issues reset requests to progressively larger domains until one works: the first reset attempt is directed at the device, the next at the channel, and finally at the controller interface as a whole. Srpl's role is to forward the abort or reset requests to the I/O unit, clean up resources associated with abandoned I/O requests, and report abort or reset results back to the mid layer.
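A sketch of the connection-failure path described above: outstanding work-in-progress requests move to the shelf, srpl retries each known path, and shelved requests are either re-sent or failed back to the mid layer. The list heads and helper functions are hypothetical; only the shelving behavior itself comes from this document.

    #include <linux/list.h>

    struct srpl_host_sketch {
        struct list_head req_wip;    /* unfinished requests */
        struct list_head req_shelf;  /* requests parked during fail over */
    };

    /* Hypothetical helpers. */
    int  srpl_reconnect_any_path(struct srpl_host_sketch *host);
    void srpl_resend_shelf(struct srpl_host_sketch *host);
    void srpl_fail_shelf(struct srpl_host_sketch *host);

    static void srpl_connection_failed(struct srpl_host_sketch *host)
    {
        /* Park all in-flight requests; relative order is preserved so they
         * can be re-issued in the order they were originally sent. New
         * requests arriving during fail over also go straight to the shelf. */
        list_splice_init(&host->req_wip, &host->req_shelf);

        /* Try a new connection on each path to the I/O unit we know of. */
        if (srpl_reconnect_any_path(host) != 0) {
            /* No path worked: close each shelved request in error and
             * notify the mid layer. */
            srpl_fail_shelf(host);
            return;
        }

        /* New connection up: re-send shelved requests, returning them to
         * the wip list in the send_pend_recv_pend state. */
        srpl_resend_shelf(host);
    }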

6.10 Major Data Structures

This section describes the major data structures employed by srpl and their relationships to each other. Figure 6-4 shows the relationships of the major data structures. The sections that follow describe those structures.

[Figure 6-4. Major data structures. The srpl_host structure (with its spin lock, SCSI host template pointer, connection attributes, IOC attributes, and resource pools) heads the req_free and req_wip lists of srpl_request structures; each request carries its state, a pointer to the Scsi_Cmnd, the mid layer callback, and the SRP message.]

6.10.1 srpl globals

The srpl globals data structure holds the heads of two linked lists of structures. The first is the list of srpl host structures (described in section 6.10.2). The other is the list of Linux SCSI host templates. These are defined by the Linux SCSI mid layer and filled out by srpl to identify driver attributes, including driver entry points. The Linux SCSI host template is passed to the mid layer's scsi_register_host() routine when srpl registers a controller. The srpl globals structure is locked as items are added to or removed from these lists.

6.10.2 srpl host

For each discovered InfiniBand fabric-attached SRP controller, srpl creates an srpl host structure and adds it to the host list maintained within the srpl globals structure. The srpl host structure organizes the resources in use for that controller: connection attributes, the IOC profile, path records to the IOC, and I/O

request structures (described in section 6.10.3). The I/O request structures are kept on three lists. The free list keeps the I/O request structures not currently in use; the wip (work in progress) list keeps the I/O request structures representing unfinished I/Os. The third list is used only to manage port fail over connection recovery. The srpl host structure also contains event counters to track the number of requests received and completed, among other events, and the locks necessary to ensure coherent access to the I/O request structure lists.

6.10.3 srpl request

Each time srpl receives a SCSI command from the mid layer, it takes an I/O request structure from the host's free list and uses it to track the I/O specific information and resources. This structure has a pointer to the original SCSI command, a pointer to the mid layer's completion callback routine, pointers to the messages sent to and received from the I/O unit, the address and size of the user buffer, and finally a state field indicating the stage of progress of this I/O request.
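Pulling the descriptions in this section together, the declarations below sketch how the structures in Figure 6-4 might look in C. Only fields named in the text and the figure are included, and their types are assumptions; the real definitions are internal to the driver.

    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include "scsi.h"    /* Scsi_Cmnd, Scsi_Host_Template (2.4 tree) */

    struct srpl_request {                        /* section 6.10.3 */
        struct list_head     list;               /* free, wip, or shelf linkage */
        int                  state;              /* progress of this I/O */
        Scsi_Cmnd           *cmd;                /* original SCSI command */
        void               (*done)(Scsi_Cmnd *); /* mid layer completion callback */
        void                *send_msg;           /* message sent to the I/O unit */
        void                *recv_msg;           /* message received from it */
        void                *buf;                /* user buffer address */
        unsigned int         buf_len;            /* user buffer size */
    };

    struct srpl_host {                           /* section 6.10.2 */
        struct list_head     next;               /* linkage in the global host list */
        spinlock_t           lock;               /* guards counters and lists */
        Scsi_Host_Template  *shtp;               /* template given to the mid layer */
        struct list_head     req_free;           /* request structures not in use */
        struct list_head     req_wip;            /* unfinished I/Os */
        struct list_head     req_shelf;          /* fail over recovery only */
        /* connection attributes, IOC profile, path records, message pools,
         * and event counters would follow here */
    };

    struct srpl_globals {                        /* section 6.10.1 */
        spinlock_t           lock;               /* guards list insert/remove */
        struct list_head     hosts;              /* srpl host structures */
        struct list_head     templates;          /* Linux SCSI host templates */
    };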


7. System Resource Usage

7.1 Memory

The total amount of memory required by srpl depends, of course, on the number of fabric attached SRP controllers it is managing, the number of disks on those controllers, and the total number of possible outstanding I/O requests (the sum of all the queue depths over all the controllers). The controller cost is about 4 Kbytes per controller (connection). The cost per disk and per I/O request structure (allocated when the remote service is reported available by the plug and play manager) is about 256 bytes and 650 bytes, respectively. This includes the memory that srpl allocates directly as well as memory that is allocated by the SCSI mid layer in order to manage command configuration in its layer.

To estimate the size of the memory footprint on a given host, apply the following formula (a worked check follows section 7.2):

    MF = c * 4096 + d * 256 + r * 650

where MF is the memory footprint in bytes, c is the number of controllers, d is the total number of disks srpl discovers, and r is the total number of I/O request structures allocated (e.g., 64 * c, sixty-four for each controller). For example, in a system with one controller, four disks, and a queue depth capability of 64, the total memory footprint is about 46 Kbytes of kernel physical memory. If there are three controllers, each with six disks and a command queue depth of 64, the memory footprint would be about 138 Kbytes of kernel physical memory.

7.2 Other Resources

Srpl depends on the presence of an HCA, plus the stack of drivers needed to run the HCA, the Access Layer, and subnet management services. Therefore the system must have enough resources to support those functions.
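The footprint formula in section 7.1 can be checked with a few lines of C; this is only a worked example of the arithmetic, not driver code.

    #include <stdio.h>

    /* MF = c*4096 + d*256 + r*650 (bytes); see section 7.1. */
    static unsigned long srpl_mem_footprint(unsigned long c,   /* controllers */
                                            unsigned long d,   /* disks */
                                            unsigned long r)   /* request structures */
    {
        return c * 4096 + d * 256 + r * 650;
    }

    int main(void)
    {
        /* 1 controller, 4 disks, queue depth 64: about 46 Kbytes. */
        printf("%lu bytes\n", srpl_mem_footprint(1, 4, 64));    /* 46720 */

        /* 3 controllers, 6 disks each, queue depth 64 each: about 138 Kbytes. */
        printf("%lu bytes\n", srpl_mem_footprint(3, 18, 192));  /* 141696 */
        return 0;
    }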


8. Internal Compatibility

8.1 Interaction with Other Components

As a channel driver, srpl sits on top of the Intel InfiniBand driver stack. It depends on the Access Layer for sending and receiving InfiniBand messages to and from the SRP I/O unit. Srpl also interacts with the plug and play manager (part of the Access Layer). The plug and play manager is able to load srpl when it detects an SRP service on the fabric (if srpl is not already loaded). It notifies srpl via srpl's advertised entry point srpl_add_unit(). The plug and play manager will also notify srpl in the event an existing SRP service disappears from the fabric; it does this by calling srpl_remove_unit(). As srpl initializes, it calls the plug and play manager function RegisterChannelDriver() to register, which here means identifying the add unit and remove unit entry points, as well as listing the InfiniBand vendor and device ids that this driver is intended to manage.

Srpl behaves as a low level SCSI controller driver to the Linux SCSI mid layer. It conforms to the standard Linux model by which SCSI host bus adapters provide their services to the operating system.

8.2 System Requirements

In order for a system to support srpl, it must have an HCA that supports RDMA. It must be capable of driving that interface; that is, the system must have adequate memory, processor resources, and system bus resources to support the HCA and its driver stack, including the Access Layer.

8.3 Imported Interfaces

The Intel InfiniBand storage driver, srpl, depends on the following software interfaces: the Linux SCSI mid layer, the Intel InfiniBand Access Layer's channel service, and the plug and play manager interfaces. Srpl's use of these interfaces is discussed in detail in chapter 6. In addition, this driver depends on the Linux kernel environment for system facilities such as locks, events, and memory management.

8.4 Exported Interfaces

Srpl's exported interfaces (the entry points advertised to the SCSI mid layer and to the plug and play manager) are described in chapter 6.


9. External Compatibility

9.1 Standards

The wire protocol used by srpl to communicate with I/O units on the fabric is the SCSI RDMA Protocol (SRP), defined by the ANSI T10 working group (see section 1.4 for the reference). The InfiniBand specifications define the method by which hosts and I/O resources are discovered on the fabric, as well as connection establishment procedures and the protocol of packet exchange over the fabric. Srpl uses many of these features directly and depends on software that uses others.

9.2 Deviations from Standards

Srpl is intended to implement version 2 of the SRP ANSI specification. That document does not yet exist. Until it is released, srpl will comply with the working drafts leading up to that version.


More information

The following modifications have been made to this version of the DSM specification:

The following modifications have been made to this version of the DSM specification: NVDIMM DSM Interface Revision V1.6 August 9, 2017 The following modifications have been made to this version of the DSM specification: - General o Added two tables of supported Function Ids, Revision Ids

More information

CA IdentityMinder. Glossary

CA IdentityMinder. Glossary CA IdentityMinder Glossary 12.6.3 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the Documentation ) is for your informational

More information

You have accessed an older version of a Paradyne product document.

You have accessed an older version of a Paradyne product document. You have accessed an older version of a Paradyne product document. Paradyne is no longer a subsidiary of AT&T. Any reference to AT&T Paradyne is amended to read Paradyne Corporation. Paradyne 6700-A2-GB41-10

More information

An Introduction to GPFS

An Introduction to GPFS IBM High Performance Computing July 2006 An Introduction to GPFS gpfsintro072506.doc Page 2 Contents Overview 2 What is GPFS? 3 The file system 3 Application interfaces 4 Performance and scalability 4

More information

RapidIO TM Interconnect Specification Part 7: System and Device Inter-operability Specification

RapidIO TM Interconnect Specification Part 7: System and Device Inter-operability Specification RapidIO TM Interconnect Specification Part 7: System and Device Inter-operability Specification Rev. 1.3, 06/2005 Copyright RapidIO Trade Association RapidIO Trade Association Revision History Revision

More information

ComAPI+ API Documentation

ComAPI+ API Documentation [01.2017] ComAPI+ API Documentation 30515ST10841A Rev. 4 2017-07-20 Mod. 0806 SPECIFICATIONS ARE SUBJECT TO CHANGE WITHOUT NOTICE NOTICES LIST While reasonable efforts have been made to assure the accuracy

More information

LED Manager for Intel NUC

LED Manager for Intel NUC LED Manager for Intel NUC User Guide Version 1.0.0 March 14, 2018 INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO

More information

Intel Manageability Commander User Guide

Intel Manageability Commander User Guide Intel Manageability Commander User Guide Document Release Date: October 27, 2016 Legal Information INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED,

More information

DRAM and Storage-Class Memory (SCM) Overview

DRAM and Storage-Class Memory (SCM) Overview Page 1 of 7 DRAM and Storage-Class Memory (SCM) Overview Introduction/Motivation Looking forward, volatile and non-volatile memory will play a much greater role in future infrastructure solutions. Figure

More information

NVDIMM DSM Interface Example

NVDIMM DSM Interface Example Revision 1.3 December 2016 See the change bars associated with the following changes to this document: 1) Common _DSMs supported by all NVDIMMs have been removed from this document. 2) Changes to SMART

More information

Intel Entry Storage System SS4000-E

Intel Entry Storage System SS4000-E Intel Entry Storage System SS4000-E Software Release Notes January 2007 Storage Systems Technical Marketing Engineering Document Revision History Intel Entry Storage System SS4000-E Document Revision History

More information

Intel Cloud Builder Guide: Cloud Design and Deployment on Intel Platforms

Intel Cloud Builder Guide: Cloud Design and Deployment on Intel Platforms EXECUTIVE SUMMARY Intel Cloud Builder Guide Intel Xeon Processor-based Servers Novell* Cloud Manager Intel Cloud Builder Guide: Cloud Design and Deployment on Intel Platforms Novell* Cloud Manager Intel

More information

TCG. TCG Storage Interface Interactions Specification. Specification Version 1.0. January 27, Contacts:

TCG. TCG Storage Interface Interactions Specification. Specification Version 1.0. January 27, Contacts: TCG Storage Interface Interactions Specification January 27, 2009 Contacts: storagewg@trustedcomputinggroup.org Copyright TCG 2009 TCG Copyright 2009 Trusted Computing Group, Incorporated. Disclaimer,

More information

PCI Express System Interconnect Software Architecture for PowerQUICC TM III-based Systems

PCI Express System Interconnect Software Architecture for PowerQUICC TM III-based Systems PCI Express System Interconnect Software Architecture for PowerQUICC TM III-based Systems Application Note AN-573 By Craig Hackney Introduction A multi-peer system using a standard-based PCI Express multi-port

More information

LNet Roadmap & Development. Amir Shehata Lustre * Network Engineer Intel High Performance Data Division

LNet Roadmap & Development. Amir Shehata Lustre * Network Engineer Intel High Performance Data Division LNet Roadmap & Development Amir Shehata Lustre * Network Engineer Intel High Performance Data Division Outline LNet Roadmap Non-contiguous buffer support Map-on-Demand re-work 2 LNet Roadmap (2.12) LNet

More information

PCI-X Addendum to the PCI Compliance Checklist. Revision 1.0a

PCI-X Addendum to the PCI Compliance Checklist. Revision 1.0a PCI-X Addendum to the PCI Compliance Checklist Revision 1.0a August 29, 2000 PCI-X Addendum to the PCI Compliance Checklist REVISION REVISION HISTORY DATE 1.0 Initial Release 3/1/00 1.0a Updates for PCI-X

More information

Dynamic Power Optimization for Higher Server Density Racks A Baidu Case Study with Intel Dynamic Power Technology

Dynamic Power Optimization for Higher Server Density Racks A Baidu Case Study with Intel Dynamic Power Technology Dynamic Power Optimization for Higher Server Density Racks A Baidu Case Study with Intel Dynamic Power Technology Executive Summary Intel s Digital Enterprise Group partnered with Baidu.com conducted a

More information

Intel Storage System JBOD 2000S3 Product Family

Intel Storage System JBOD 2000S3 Product Family Intel Storage System JBOD 2000S3 Product Family SCSI Enclosure Services Programming Guide SES Version 3.0, Revision 1.8 Apr 2017 Intel Server Boards and Systems Headline

More information

PCI Express System Interconnect Software Architecture for x86-based Systems. Root Complex. Processor. UP Multi-port PCIe switch DP DP DP

PCI Express System Interconnect Software Architecture for x86-based Systems. Root Complex. Processor. UP Multi-port PCIe switch DP DP DP PCI Express System Interconnect Software Architecture for x86-based Systems Application Note AN-571 Introduction By Kwok Kong and Alex Chang A multi-peer system using a standard-based PCI Express multi-port

More information

SELINUX SUPPORT IN HFI1 AND PSM2

SELINUX SUPPORT IN HFI1 AND PSM2 14th ANNUAL WORKSHOP 2018 SELINUX SUPPORT IN HFI1 AND PSM2 Dennis Dalessandro, Network SW Engineer Intel Corp 4/2/2018 NOTICES AND DISCLAIMERS INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH

More information

IBA Software Architecture IP over IB Driver High Level Design. Draft 2

IBA Software Architecture IP over IB Driver High Level Design. Draft 2 IP over IB Driver Draft 2 July 2002 Revision History and Disclaimers Rev. Date Notes Draft 1 March 2002 Internal review. THIS SPECIFICATION IS PROVIDED "AS IS" WITH NO WARRANTIES WHATSOEVER, INCLUDING

More information

Intel Education Theft Deterrent Release Note WW16'14. August 2014

Intel Education Theft Deterrent Release Note WW16'14. August 2014 Intel Education Theft Deterrent Release Note WW16'14 August 2014 Legal Notices Information in this document is provided in connection with Intel products. No license, express or implied, by estoppels

More information

Enhanced Serial Peripheral Interface (espi) ECN

Enhanced Serial Peripheral Interface (espi) ECN Enhanced Serial Peripheral Interface (espi) ECN Engineering Change Notice TITLE Clarify OOB packet payload DATE 10 January 2014 AFFECTED DOCUMENT espi Base Specification Rev 0.75 DISCLOSURE RESTRICTIONS

More information

No Trade Secrets. Microsoft does not claim any trade secret rights in this documentation.

No Trade Secrets. Microsoft does not claim any trade secret rights in this documentation. [MS-DSLR]: Intellectual Property Rights Notice for Open Specifications Documentation Technical Documentation. Microsoft publishes Open Specifications documentation for protocols, file formats, languages,

More information

Setting up the DR Series System on Acronis Backup & Recovery v11.5. Technical White Paper

Setting up the DR Series System on Acronis Backup & Recovery v11.5. Technical White Paper Setting up the DR Series System on Acronis Backup & Recovery v11.5 Technical White Paper Quest Engineering November 2017 2017 Quest Software Inc. ALL RIGHTS RESERVED. THIS WHITE PAPER IS FOR INFORMATIONAL

More information

Dell PowerVault MD3600f/MD3620f Remote Replication Functional Guide

Dell PowerVault MD3600f/MD3620f Remote Replication Functional Guide Dell PowerVault MD3600f/MD3620f Remote Replication Functional Guide Page i THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT

More information

[MC-SMP]: Session Multiplex Protocol. Intellectual Property Rights Notice for Open Specifications Documentation

[MC-SMP]: Session Multiplex Protocol. Intellectual Property Rights Notice for Open Specifications Documentation [MC-SMP]: Intellectual Property Rights Notice for Open Specifications Documentation Technical Documentation. Microsoft publishes Open Specifications documentation ( this documentation ) for protocols,

More information

Rapid Recovery License Portal Version User Guide

Rapid Recovery License Portal Version User Guide Rapid Recovery License Portal Version 6.1.0 User Guide 2017 Quest Software Inc. ALL RIGHTS RESERVED. This guide contains proprietary information protected by copyright. The software described in this guide

More information

Introduction to High-Speed InfiniBand Interconnect

Introduction to High-Speed InfiniBand Interconnect Introduction to High-Speed InfiniBand Interconnect 2 What is InfiniBand? Industry standard defined by the InfiniBand Trade Association Originated in 1999 InfiniBand specification defines an input/output

More information

Enabling Multi-peer Support with a Standard-Based PCI Express Multi-ported Switch

Enabling Multi-peer Support with a Standard-Based PCI Express Multi-ported Switch Enabling Multi-peer Support with a Standard-Based PCI Express Multi-ported Switch White Paper Introduction By Kwok Kong There are basically three different types of devices in a native PCI Express (PCIe

More information

Intel Unite. Intel Unite Firewall Help Guide

Intel Unite. Intel Unite Firewall Help Guide Intel Unite Intel Unite Firewall Help Guide September 2015 Legal Disclaimers & Copyrights All information provided here is subject to change without notice. Contact your Intel representative to obtain

More information

SVP Overview. Ophidian Designs

SVP Overview. Ophidian Designs SVP Overview SCSI VI Protocol Overview Permission is granted to members of NCITS, its technical committees, and their associated task groups to reproduce this document for the purposes of NCITS standardization

More information

Revision: 0.30 June Intel Server Board S1200RP UEFI Development Kit Firmware Installation Guide

Revision: 0.30 June Intel Server Board S1200RP UEFI Development Kit Firmware Installation Guide Revision: 0.30 June 2016 Intel Server Board S1200RP UEFI Development Kit Firmware Installation Guide Intel Server Board S1200RP UEFI Development Kit Firmware Installation Guide INFORMATION IN THIS DOCUMENT

More information

MICHAL MROZEK ZBIGNIEW ZDANOWICZ

MICHAL MROZEK ZBIGNIEW ZDANOWICZ MICHAL MROZEK ZBIGNIEW ZDANOWICZ Legal Notices and Disclaimers INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY

More information

S1R72U01 Technical Manual

S1R72U01 Technical Manual S1R72U01 Technical Manual Rev. 1.00 NOTICE No part of this material may be reproduced or duplicated in any form or by any means without the written permission of Seiko Epson. Seiko Epson reserves the right

More information

CREATING A COMMON SOFTWARE VERBS IMPLEMENTATION

CREATING A COMMON SOFTWARE VERBS IMPLEMENTATION 12th ANNUAL WORKSHOP 2016 CREATING A COMMON SOFTWARE VERBS IMPLEMENTATION Dennis Dalessandro, Network Software Engineer Intel April 6th, 2016 AGENDA Overview What is rdmavt and why bother? Technical details

More information

1.0. Quest Enterprise Reporter Discovery Manager USER GUIDE

1.0. Quest Enterprise Reporter Discovery Manager USER GUIDE 1.0 Quest Enterprise Reporter Discovery Manager USER GUIDE 2012 Quest Software. ALL RIGHTS RESERVED. This guide contains proprietary information protected by copyright. The software described in this guide

More information

Intel Setup and Configuration Service Lite

Intel Setup and Configuration Service Lite Intel Setup and Configuration Service Lite Release Notes Version 6.0 Document Release Date: February 4, 2010 Information in this document is provided in connection with Intel products. No license, express

More information

Management Console for SharePoint

Management Console for SharePoint Management Console for SharePoint User Guide Copyright Quest Software, Inc. 2009. All rights reserved. This guide contains proprietary information, which is protected by copyright. The software described

More information

Operating Systems 2010/2011

Operating Systems 2010/2011 Operating Systems 2010/2011 Input/Output Systems part 1 (ch13) Shudong Chen 1 Objectives Discuss the principles of I/O hardware and its complexity Explore the structure of an operating system s I/O subsystem

More information

Intel Theft Deterrent Client User Guide

Intel Theft Deterrent Client User Guide Intel Theft Deterrent Client User Guide Legal Notices Information in this document is provided in connection with Intel products. No license, express or implied, by estoppels or otherwise, to any intellectual

More information

IEEE1588 Frequently Asked Questions (FAQs)

IEEE1588 Frequently Asked Questions (FAQs) IEEE1588 Frequently Asked Questions (FAQs) LAN Access Division December 2011 Revision 1.0 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED,

More information

INTEL PERCEPTUAL COMPUTING SDK. How To Use the Privacy Notification Tool

INTEL PERCEPTUAL COMPUTING SDK. How To Use the Privacy Notification Tool INTEL PERCEPTUAL COMPUTING SDK How To Use the Privacy Notification Tool LEGAL DISCLAIMER THIS DOCUMENT CONTAINS INFORMATION ON PRODUCTS IN THE DESIGN PHASE OF DEVELOPMENT. INFORMATION IN THIS DOCUMENT

More information

IBM. Software Development Kit for Multicore Acceleration, Version 3.0. SPU Timer Library Programmer s Guide and API Reference

IBM. Software Development Kit for Multicore Acceleration, Version 3.0. SPU Timer Library Programmer s Guide and API Reference IBM Software Development Kit for Multicore Acceleration, Version 3.0 SPU Timer Library Programmer s Guide and API Reference Note: Before using this information and the product it supports, read the information

More information

Using Tasking to Scale Game Engine Systems

Using Tasking to Scale Game Engine Systems Using Tasking to Scale Game Engine Systems Yannis Minadakis March 2011 Intel Corporation 2 Introduction Desktop gaming systems with 6 cores and 12 hardware threads have been on the market for some time

More information

ExpressCluster X 3.2 WebManager Mobile

ExpressCluster X 3.2 WebManager Mobile ExpressCluster X 3.2 WebManager Mobile Administrator s Guide 2/19/2014 1st Edition Revision History Edition Revised Date Description 1st 2/19/2014 New manual Copyright NEC Corporation 2014. All rights

More information

PCI-X Protocol Addendum to the PCI Local Bus Specification Revision 2.0a

PCI-X Protocol Addendum to the PCI Local Bus Specification Revision 2.0a PCI-X Protocol Addendum to the PCI Local Bus Specification Revision 2.0a July 29, 2002July 22, 2003 REVISION REVISION HISTORY DATE 1.0 Initial release. 9/22/99 1.0a Clarifications and typographical corrections.

More information

Application Note Software Device Drivers for the M29Fxx Flash Memory Device

Application Note Software Device Drivers for the M29Fxx Flash Memory Device Introduction Application Note Software Device Drivers for the M29Fxx Flash Memory Device Introduction This application note provides library source code in C for the M29Fxx Flash memory using the Flash

More information

Open-E Data Storage Server. Intel Modular Server

Open-E Data Storage Server. Intel Modular Server Open-E Data Storage Server Intel Modular Server Contents About Open-E Data Storage Server*...4 Hardware Components...5 Installation Software...6 Open-E Data Storage Server* Installation...7 2 www.intel.com/go/esaa

More information

Accelerated Library Framework for Hybrid-x86

Accelerated Library Framework for Hybrid-x86 Software Development Kit for Multicore Acceleration Version 3.0 Accelerated Library Framework for Hybrid-x86 Programmer s Guide and API Reference Version 1.0 DRAFT SC33-8406-00 Software Development Kit

More information

QPP Proprietary Profile Guide

QPP Proprietary Profile Guide Rev. 04 April 2018 Application note Document information Info Content Keywords Proprietary Profile, Server, Client Abstract The Proprietary Profile is used to transfer the raw data between BLE devices.

More information

Intel Unite Plugin Guide for VDO360 Clearwater

Intel Unite Plugin Guide for VDO360 Clearwater Intel Unite Plugin Guide for VDO360 Clearwater INSTALLATION AND USER GUIDE Version 1.2 December 2017 Legal Disclaimers & Copyrights All information provided here is subject to change without notice. Contact

More information

Silver Peak EC-V and Microsoft Azure Deployment Guide

Silver Peak EC-V and Microsoft Azure Deployment Guide Silver Peak EC-V and Microsoft Azure Deployment Guide How to deploy an EC-V in Microsoft Azure 201422-001 Rev. A September 2018 2 Table of Contents Table of Contents 3 Copyright and Trademarks 5 Support

More information

OpenFlow Switch Errata

OpenFlow Switch Errata OpenFlow Switch Errata Version 1.0.2 November 1, 2013 ONF TS-013 Disclaimer THIS SPECIFICATION IS PROVIDED AS IS WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF MERCHANTABILITY, NONINFRINGEMENT,

More information

Computer Management* (IEA) Training Foils

Computer Management* (IEA) Training Foils Intel-powered classmate PC Computer Management* (IEA) Training Foils Version 1.0 Legal Information INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED,

More information

SCOM 2012 with Dell Compellent Storage Center Management Pack 2.0. Best Practices

SCOM 2012 with Dell Compellent Storage Center Management Pack 2.0. Best Practices SCOM 2012 with Dell Compellent Storage Center Management Pack 2.0 Best Practices Document revision Date Revision Comments 4/30/2012 A Initial Draft THIS BEST PRACTICES GUIDE IS FOR INFORMATIONAL PURPOSES

More information

Boot Agent Application Notes for BIOS Engineers

Boot Agent Application Notes for BIOS Engineers Boot Agent Application Notes for BIOS Engineers September 2007 318275-001 Revision 1.0 Legal Lines and Disclaimers INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE,

More information

3.1 Introduction. Computers perform operations concurrently

3.1 Introduction. Computers perform operations concurrently PROCESS CONCEPTS 1 3.1 Introduction Computers perform operations concurrently For example, compiling a program, sending a file to a printer, rendering a Web page, playing music and receiving e-mail Processes

More information

Intel Server Board S2600CW2S

Intel Server Board S2600CW2S Redhat* Testing Services Enterprise Platforms and Services Division Intel Server Board S2600CW2S Server Test Submission (STS) Report For Redhat* Certification Rev 1.0 This report describes the Intel Server

More information

Network Working Group Request for Comments: 2236 Updates: 1112 November 1997 Category: Standards Track

Network Working Group Request for Comments: 2236 Updates: 1112 November 1997 Category: Standards Track Network Working Group W. Fenner Request for Comments: 2236 Xerox PARC Updates: 1112 November 1997 Category: Standards Track Internet Group Management Protocol, Version 2 Status of this Memo This document

More information

Intel X48 Express Chipset Memory Controller Hub (MCH)

Intel X48 Express Chipset Memory Controller Hub (MCH) Intel X48 Express Chipset Memory Controller Hub (MCH) Specification Update March 2008 Document Number: 319123-001 Legal Lines and Disclaimers INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH

More information

Mac OS X Fibre Channel connectivity to the HP StorageWorks Enterprise Virtual Array storage system configuration guide

Mac OS X Fibre Channel connectivity to the HP StorageWorks Enterprise Virtual Array storage system configuration guide Mac OS X Fibre Channel connectivity to the HP StorageWorks Enterprise Virtual Array storage system configuration guide Part number: 5697-0025 Third edition: July 2009 Legal and notice information Copyright

More information

I/O virtualization. Jiang, Yunhong Yang, Xiaowei Software and Service Group 2009 虚拟化技术全国高校师资研讨班

I/O virtualization. Jiang, Yunhong Yang, Xiaowei Software and Service Group 2009 虚拟化技术全国高校师资研讨班 I/O virtualization Jiang, Yunhong Yang, Xiaowei 1 Legal Disclaimer INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE,

More information

ExpressCluster X 3.1 WebManager Mobile

ExpressCluster X 3.1 WebManager Mobile ExpressCluster X 3.1 WebManager Mobile Administrator s Guide 10/11/2011 First Edition Revision History Edition Revised Date Description First 10/11/2011 New manual ii Copyright NEC Corporation 2011. All

More information

Native POSIX Thread Library (NPTL) CSE 506 Don Porter

Native POSIX Thread Library (NPTL) CSE 506 Don Porter Native POSIX Thread Library (NPTL) CSE 506 Don Porter Logical Diagram Binary Memory Threads Formats Allocators Today s Lecture Scheduling System Calls threads RCU File System Networking Sync User Kernel

More information

Dell Change Auditor 6.5. Event Reference Guide

Dell Change Auditor 6.5. Event Reference Guide Dell Change Auditor 6.5 2014 Dell Inc. ALL RIGHTS RESERVED. This guide contains proprietary information protected by copyright. The software described in this guide is furnished under a software license

More information

Intel X38 Express Chipset

Intel X38 Express Chipset Intel X38 Express Chipset Specification Update For the 82X38 Memory Controller Hub (MCH) December 2007 Document Number: 317611-002 Legal Lines and Disclaimers INFORMATION IN THIS DOCUMENT IS PROVIDED IN

More information

Process Description and Control. Chapter 3

Process Description and Control. Chapter 3 Process Description and Control 1 Chapter 3 2 Processes Working definition: An instance of a program Processes are among the most important abstractions in an OS all the running software on a computer,

More information

SolarWinds Orion Integrated Virtual Infrastructure Monitor Supplement

SolarWinds Orion Integrated Virtual Infrastructure Monitor Supplement This PDF is no longer being maintained. Search the SolarWinds Success Center for more information. SolarWinds Orion Integrated Virtual Infrastructure Monitor Supplement INTEGRATED VIRTUAL INFRASTRUCTURE

More information

HP A5120 EI Switch Series IRF. Command Reference. Abstract

HP A5120 EI Switch Series IRF. Command Reference. Abstract HP A5120 EI Switch Series IRF Command Reference Abstract This document describes the commands and command syntax options available for the HP A Series products. This document is intended for network planners,

More information