IBM Scale Out Network Attached Storage (SONAS) using the Acuo Universal Clinical Platform
A vendor-neutral medical-archive offering

Dave Curzio
IBM Systems and Technology Group ISV Enablement
February 2012

© Copyright IBM Corporation, 2012
Table of contents
Abstract
Introduction
Lab configuration
  Servers
  Storage
  Software tool used
Tests performed
Planning the IBM vendor-neutral archive deployment
Summary
Appendix A: Best practices
Appendix B: Resources
Appendix C: About the author
Acknowledgments
Trademarks and special notices
Abstract
This white paper presents a vendor-neutral medical-image archive solution that integrates IBM Scale Out Network Attached Storage (SONAS) and the Acuo Universal Clinical Platform running on IBM x86 servers. It describes the configuration that was used to test the Acuo Universal Clinical Platform with IBM SONAS. The configuration was kept intentionally minimal to verify that all components function even in the smallest supported configuration. This bundled solution is available from participating IBM value-added distributors and their networks of IBM Business Partners.

Introduction
In the present clinical environment for health-care providers, a significant portion of medical-imaging data is stored in individual silos that must be individually maintained and managed. This type of storage infrastructure tends to duplicate some or all of the following resources:
- Available disk space
- Other IT resources
- Environmental resources

With an infrastructure of individual silos, it becomes increasingly difficult for a physician to view the media from each of the different silos. In most cases, the media has to be accessed indirectly, through an interface to the individual silo and its associated application. This consumes valuable time and resources and also contributes to rising medical costs.

IBM and Acuo Technologies have teamed up to create a tested offering that addresses the data storage, usage and business needs of health-care providers. IBM Scale Out Network Attached Storage (SONAS) delivers seamless scalability for the high performance and massive capacity that health-care providers require. The distributed architecture reduces management complexity, speeds up processes and removes any single point of failure (SPOF) that might impede data availability.
The Acuo solution builds on the Acuo Universal Clinical Platform, a standards-based suite of software applications that scales up to accommodate large centralized repositories and scales out to enable fully distributed storage grids. It is also designed to enable archiving of medical images in a highly interoperable fashion. The set of services that Acuo delivers through its Universal Clinical Platform also extends and virtualizes storage resources. With this integrated approach to archiving and sharing images, the IBM Acuo solution frees health-care providers to acquire storage on an as-needed basis and to implement advanced archiving capabilities without a costly and lengthy migration process. It also enables image interoperability across departments and applications (regardless of the number of application vendors involved) and allows streamlined electronic medical record (EMR) access to clinical images. Using this solution permits the management of a single storage platform across all clinical departments. The IBM Acuo solution reduces the need for redundant free space across multiple silos, eliminates the management of individual storage platforms and lowers the overall resource needs of the environment.
Lab configuration
This section outlines the lab configuration at the time the tests were run.

Figure 1: Illustration of the configuration used

The hardware (see Figure 1 and Figure 2) is as follows:

Servers
Two IBM System x3650 M3 servers

Cisco Catalyst 3750G-24T
A 24-port Ethernet switch that provides 1 Gigabit Ethernet (GbE) connectivity between all the devices in this test configuration.

SONAS
The IBM storage platform. IBM SONAS is a network-attached storage (NAS) appliance that internally uses IBM General Parallel File System (IBM GPFS) and is capable of presenting multiple NAS mounts through multiple NAS protocols. A SONAS system can independently scale capacity (up to 14 PB, all under a single global namespace for ease of management and implementation) or performance (by adding interface nodes, up to 30 nodes).
Figure 2: Block diagram representing data flow within the solution

Servers
Three Intel-based IBM System x3650 M3 servers, each with the following standard configuration, were used:
- Intel Xeon processor X5667 at 3.06 GHz (two processors)
- 32 GB of memory
- One quad-port Gigabit Ethernet adapter
- Microsoft Windows Server 2008 R2 (Service Pack 1) Enterprise 64-bit
- Microsoft SQL Server 2008 R2
- AcuoMed: DICOM (Digital Imaging and Communications in Medicine) and workflow service. It sends, receives, routes and prefetches images, and manages data integrity through real-time HL7 processing of ADT messages.
- AcuoStore: Storage-virtualization component; it interfaces with AcuoMed and is used to store the studies received from AcuoMed.
The disk configuration for each of the servers varied slightly. The following configurations were used for each server.

Servers
Two 146 GB SAS drives configured as RAID 1E. This array contains the operating system (OS) as well as the tools used to run the tests.

Shared working storage
The remaining disk space was made available to the cluster (presented through either Fibre Channel (FC) or iSCSI):
1. Quorum disk for the cluster: RAID 5
2. Database disk: RAID 10, 50 GB
3. Logs: RAID 10, 50 GB
4. DILIB: RAID 5, 50 GB

Storage
The IBM SONAS includes the following configuration:
- Two internal Ethernet switches
- Two internal InfiniBand switches
- Three interface nodes, each with a dual-port 10 GbE Converged Network Adapter (CNA)
- One storage pod, consisting of two storage nodes and one enclosure of 60 hard drives of 2 TB each

Software tool used
The OFFIS DICOM Toolkit (DCMTK) is a collection of libraries and applications that implement large parts of the DICOM standard. It includes software for examining, constructing and converting DICOM image files, handling offline media, and sending and receiving images over a network connection, as well as demonstration image storage and worklist servers. DCMTK is written in a mixture of ANSI C and C++.
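The benchmarks in the next section move fixed-size files over DICOM connections. As an illustrative sketch only (the exact DCMTK commands and options used in the lab are not documented in this paper; the port number and application entity title shown are hypothetical), a synthetic 128 KB payload can be generated and the standard DCMTK sender/receiver pair invoked like this:

```shell
# Create a synthetic 128 KB payload matching the benchmark file size.
# (The real tests used valid DICOM objects; this file is illustration only.)
dd if=/dev/zero of=bench-128k.dat bs=1024 count=128 status=none

# With DCMTK installed, a receive/send pair could be started as follows.
# storescp and storescu are standard DCMTK tools; the port (11112) and
# application entity title (ACUO_TEST) are hypothetical examples:
#   storescp --fork -aet ACUO_TEST 11112 &
#   storescu -aec ACUO_TEST localhost 11112 some-study.dcm

wc -c bench-128k.dat   # 131072 bytes = 128 KB
```

In a real run, the receiver would be pointed at the shared working storage so that the rates of movement into and out of the image cache can be measured.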
Tests performed
Tests were performed for both DICOM receiving and DICOM sending tasks.

DICOM receiving tests
Each benchmark begins with the cluster in its standard state. A transfer of concurrent DICOM connections to the receiver is started with a file size of 128 KB, the test runs for seven minutes, and the rate of moving from image cache is recorded.
- Benchmark 1: one concurrent DICOM connection
- Benchmark 2: two concurrent DICOM connections
- Benchmark 3: three concurrent DICOM connections
- Benchmark 4: four concurrent DICOM connections
- Benchmark 5: five concurrent DICOM connections
- Benchmark 6: six concurrent DICOM connections

DICOM sending tests
Each benchmark begins with the cluster in its standard state. A transfer of concurrent DICOM connections to the system is started, the test runs for five minutes, and the ingest rate (dilib) and the rate of moving to image cache (dilib to image cache) are recorded.
- Benchmark 1: 10 concurrent connections, 128 KB files
- Benchmark 2: 15 concurrent connections, 128 KB files
- Benchmark 3: 10 concurrent connections, 256 KB files
- Benchmark 4: 15 concurrent connections, 256 KB files
- Benchmark 5: 10 concurrent connections, 512 KB files
- Benchmark 6: 15 concurrent connections, 512 KB files
- Benchmark 7: 10 concurrent connections, 4 MB files
- Benchmark 8: 15 concurrent connections, 4 MB files

Planning the IBM vendor-neutral archive deployment
The SONAS appliance can be configured as a complete enterprise NAS solution that can serve multiple purposes, including, but not limited to, email archives, databases and virtual machines. The SONAS storage that comes with the IBM vendor-neutral archive (VNA) solution is configured only for use with the VNA application. For more information about expanding SONAS storage to support additional functions, refer to the SONAS website (see the Resources section of this white paper) or contact your authorized IBM Business Partner.
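To make the recorded metrics concrete, the ingest rate for any of the runs above can be derived from the number of files moved during the timed window. The file count in this sketch is an illustrative placeholder, not a measured result from the paper:

```python
def ingest_rate(files_moved: int, file_size_kb: int, seconds: float):
    """Return (files/second, MB/second) for a timed benchmark run."""
    files_per_sec = files_moved / seconds
    mb_per_sec = files_per_sec * file_size_kb / 1024  # KB -> MB
    return files_per_sec, mb_per_sec

# Hypothetical example: 60,000 files of 128 KB moved during a
# five-minute (300-second) sending benchmark.
fps, mbps = ingest_rate(60_000, 128, 300.0)
print(f"{fps:.0f} files/s, {mbps:.1f} MB/s")  # -> 200 files/s, 25.0 MB/s
```

The same arithmetic applies to the dilib-to-image-cache rate; only the file count and window length change.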
Summary
IBM SONAS storage performed as expected; even when tested with high simultaneous thread counts, the minimal configuration held up extremely well. This set of tests shows that, in conjunction with IBM System x3650 M3 hardware and the Acuo Universal Clinical Platform software, IBM SONAS storage can provide a solid solution for even small-scale users. For more information, refer to the links in the Resources section of this white paper.
Appendix A: Best practices
Here are some best practices for using IBM SONAS storage:
- Use Common Internet File System (CIFS) as the primary protocol.
- Deploy SONAS version 1.3 or later.
- Use the adaptive load balancing (ALB) trunking available on the SONAS to increase bandwidth if more than two application nodes are deployed.
- Put SONAS on a separate subnet or switch with the application servers for increased performance.
- Integrate SONAS into the Active Directory (AD) system for security.
Appendix B: Resources
The following websites provide useful references to supplement the information contained in this paper:
- IBM Systems on PartnerWorld: ibm.com/partnerworld/systems
- Virtual Loaner Program: ibm.com/systems/vlp
- IBM Redbooks: ibm.com/redbooks
- IBM Publications Center: www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?cty=us
- IBM Scale Out Network Attached Storage (SONAS): ibm.com/systems/storage/network/sonas
- Acuo Technologies: acuotech.com
Appendix C: About the author
Dave Curzio is a solutions architect in the IBM Systems and Technology Group ISV Enablement organization. He has more than 10 years of experience working with IBM server hardware, IBM storage platforms and the health-care environment. You can contact Dave Curzio at DCurzio@us.ibm.com.

Acknowledgments
Thanks to Acuo Technologies for their support in performing these test scenarios.
Trademarks and special notices
© Copyright IBM Corporation 2012. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages.
IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.