
Accelrys Pipeline Pilot and HP ProLiant servers
A performance overview

Technical white paper

Table of contents
Introduction
Accelrys Pipeline Pilot benchmarks on HP ProLiant servers
  NGS Collection for Pipeline Pilot
  Hardware configuration
  SRX000600 test case
Results
  Bowtie
  BWA
Conclusion

Introduction

Next Generation Sequencing (NGS) technologies have transformed the way the world perceives genomics. Growing market adoption and application diversity are two trends characterizing this fast-progressing field. With the ultimate goal of decoding the complete human genome, the throughput of DNA sequencing methods has grown at unprecedented rates, doubling approximately every 4.5 months. This is at least four times faster than Moore's law for the growth rate of transistors in a processor.

As the technology evolves, companies continuously seek cost-efficient performance in the NGS space. Storage, data management, and informatics are the prime challenges in scaling this massively parallel technology. The need for storage and computational resources to process NGS data has grown in proportion with the evolution of the technology. Scientists, on one hand, have to deal with the ever-increasing amount of data from the sequencers and downstream processing, which can reach petabyte scales. On the other hand, they must extract the maximum information possible from the sequencing data with their finite computational resources.

HP and Accelrys combine their industry leadership in software and services and in NGS solutions to provide R&D organizations with new technologies that affordably increase productivity. HP and Accelrys engineers work collaboratively to deliver Accelrys applications that are analyzed and tuned for optimal performance on a broad spectrum of HP platforms, including the HP Unified Cluster Portfolio (UCP) of servers, clusters, scale-up large-memory x86 servers, and blade systems running Linux or Windows. HP UCP provides Accelrys a complete and defined infrastructure fully tested and supported by HP.

Accelrys Pipeline Pilot benchmarks on HP ProLiant servers

NGS Collection for Pipeline Pilot

The NGS Collection for Pipeline Pilot lets you analyze and interpret the massive datasets generated by the most current DNA sequencing instruments. Built for use with the Pipeline Pilot informatics platform, the NGS Collection comes with a comprehensive assortment of NGS data analysis pipelines that are ready to analyze your data. The power and flexibility of the NGS Collection let you accommodate not just current analysis needs but also the novel applications and computational methods emerging rapidly in the NGS landscape.

Using NGS components and sample protocols, you can perform common workflows such as de novo sequencing, mapping to a reference genome or reference sequences, and variation detection. The NGS Collection supports unpaired and paired reads in both base space and color space. Through a flexible sequence repository, Pipeline Pilot components access mapped reads, genomic features, and reference genomes. The data readers and writers use common formats, such as SAM, BAM, GFF3, or FASTQ, as appropriate, to ease the integration of open source programs in this quickly evolving area for both hardware and software. Because the NGS Collection is built on the flexible Pipeline Pilot scientific informatics platform, third-party applications can be integrated and new algorithms can be implemented without breaking existing data pipelines.

Hardware configuration

All benchmarks are run on HP servers based on Intel Xeon and AMD Opteron processors. The two clusters are configured with both InfiniBand QDR and Gigabit Ethernet interconnects. All nodes use multi-core processors, and Red Hat Enterprise Linux 5.5 (RHEL 5.5) is the operating environment.

Node type               HP ProLiant SL390s G7 Server          HP ProLiant SL165s G7 Server
Processors/cores        Xeon 5670 (2.93 GHz), 2 x 6-core      Opteron 6172 (2.1 GHz), 2 x 12-core
Cache size              24 MB                                 48 MB
Cache configuration     12 MB shared between 6 cores          12 MB shared between 6 cores
Nodes x cores per node  16 x 12                               16 x 24
Memory per node         24 GB                                 64 GB
Interconnect            Voltaire IB 4X QDR/Gigabit Ethernet   Mellanox IB 4X QDR/Gigabit Ethernet
Operating system        RHEL 5.5                              RHEL 5.5

SRX000600 test case

The SRX000600 experiment is a paired-end Illumina sequencing of the HapMap sample NA18507. The data underwent purity filtering (PF) to remove mixed reads, which arise when two or more different template molecules lie close enough on the surface of the flow cell to form a mixed or overlapping cluster. No other filtering of the data was carried out prior to mapping. The data contains 3.77 billion PF reads from the short-insert library and 296 million PF reads from the long-insert library. In all, the experiment achieved a sequencing depth of more than 40-fold. Unless specified otherwise, a subset of the SRX000600 experiment containing 128 runs and totaling 34 GB of compressed data is used. FASTQ is the data format used in this study.

The read alignment pipelines are designed using Pipeline Pilot version 8.5. They contain a FASTQ Directory Reader feeding data to a short-read mapper component based on the Burrows-Wheeler Aligner (BWA) or Bowtie. The reads are aligned to human reference genome build 37.1. The time required to build the indexes for Bowtie and BWA is not included in the reported results.

Figure 1. A screenshot of the Pipeline Pilot protocol used when processing the subset of the SRX000600 experiment with Bowtie.
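The quoted depth of more than 40-fold can be sanity-checked with a back-of-envelope calculation relating read counts to fold coverage. The read length and genome size below are assumed typical values, not figures stated in this paper:

```python
# Back-of-envelope sequencing depth: depth = (reads * read_length) / genome_size.
# Read length (~35 bp) and genome size (~3.1 Gbp) are ASSUMED typical values
# for this dataset and human build 37; they are not taken from this paper.
short_insert_reads = 3.77e9   # PF reads, short-insert library (from the text)
long_insert_reads = 296e6     # PF reads, long-insert library (from the text)
read_length_bp = 35           # assumption
genome_size_bp = 3.1e9        # assumption

depth = (short_insert_reads + long_insert_reads) * read_length_bp / genome_size_bp
print(f"Approximate depth: {depth:.0f}x")  # ~46x, consistent with "more than 40-fold"
```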

Results

Bowtie

Bowtie is a short-read aligner designed to align large sets of short DNA sequences (reads) efficiently to large genomes. Kärkkäinen's blockwise algorithm allows a tradeoff between run time and memory usage, and the Burrows-Wheeler transform is used to compress the data, allowing Bowtie to perform the indexing without the large memory required by other short-read alignment tools. Indexing the human genome build 37.1 takes 3.5 hours on an HP ProLiant SL390s G7 Server with 24 GB RAM. Once generated, the index is reused by subsequent runs.

Except for choosing a subset of the reads from SRX000600, no other filtering or trimming is performed on the dataset. A maximum of five alignments per read is allowed, a seed length of 28 is used, and the default insert sizes (maximum of 250, minimum of 0) are retained. Bowtie uses threads to increase alignment throughput: an input option launches a specified number of parallel search threads, which find alignments and synchronize when parsing reads and outputting alignments.

While scripting such a mapping workflow by hand would require considerable work or expertise, the Pipeline Pilot Professional Client allows scientists to create workflows through an easy and intuitive graphical interface. Records of all the activities performed during mapping and other analyses are stored in a single repository, so they stay organized and can be retrieved with ease.

The Bowtie mapping protocol has two pipelines: one sets the global variables, and the other is the mapping pipeline itself. Unless specified otherwise, all experiments use a mapping pipeline that contains two components, the FASTQ Directory Reader and the Bowtie Short Read Mapper. Additionally, the Add Mapped Reads to Repository component is used to add the mapped reads to the repository for further analyses. Depending on the version of Pipeline Pilot used, this component sorts, merges, and indexes the mapped reads; it also computes statistics for them. The final output is a single sorted and indexed alignment in Binary Sequence Alignment/Map (BAM) format. The size of the output file is proportional to the total size of the reads used for mapping: in our experiments, reads amounting to 34 GB produced a BAM file of 32 GB. Note that BAM files are, by specification, compressed using the BGZF format.

In our experiments, the FASTQ Directory Reader acts as a filter, selecting only paired reads and identifying mate pairs by the suffixes 1 and 2. The component is configured to output groups of reads instead of individual reads, which prevents the FASTQ Directory Reader from reading and converting the FASTQ files into individual data records; it is more efficient to let the downstream components take the locations of the FASTQ files and process them just in time.

The Bowtie Short Read Mapper component of the NGS Collection contains a total of nine pipelines that create or reuse Bowtie indexes of the reference genome, decompress the short-read sequences if required, use Bowtie to map the sequences to the reference genome, and perform some housekeeping activities.
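For readers who want to picture one such alignment job outside Pipeline Pilot, the sketch below shows what an equivalent standalone Bowtie invocation with the parameters stated above could look like. The index basename, file names, and thread count are placeholders, not values from the paper's protocol:

```python
import subprocess

# Hypothetical paths; substitute your own index basename and FASTQ mate files.
index = "hg37"                           # Bowtie index basename (placeholder)
mate1, mate2 = "run_1.fastq", "run_2.fastq"

# Bowtie 1 options matching the study's stated parameters:
#   -k 5   report up to five alignments per read
#   -l 28  seed length of 28
#   -I 0   minimum insert size (default)
#   -X 250 maximum insert size (default)
#   -p 3   three search threads (the per-process count found fastest below)
#   -S     emit SAM output
subprocess.run(
    ["bowtie", "-S", "-p", "3", "-k", "5", "-l", "28",
     "-I", "0", "-X", "250", index, "-1", mate1, "-2", mate2, "run.sam"],
    check=True,
)
```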
As discussed earlier, integrating Pipeline Pilot with a cluster of HP ProLiant servers is straightforward. It is possible to use the native clustering algorithms of Pipeline Pilot or to offload job management to one of the supported job schedulers; Pipeline Pilot has built-in support for Load Sharing Facility (LSF) and Portable Batch System (PBS), and other job schedulers can be integrated by slightly tweaking the custom scheduler script wrappers provided with the installation. In this case, the native job-leveling algorithm is chosen as the clustering method. The Bowtie Short Read Mapper is configured to run in parallel with a batch size of 1, which causes every pair of FASTQ files to be run as a single job. The job-leveling algorithm assesses the resource utilization of the servers in the cluster and schedules each subsequent job to the least loaded server.
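The batch-size-1 job model is easy to picture: each FASTQ mate pair becomes one independent job, dispatched to whichever node currently has the lowest load. The sketch below is an illustration of that idea, not Pipeline Pilot's actual scheduler; the file-naming convention and the load metric (a simple job count) are assumptions:

```python
import heapq
from pathlib import Path

# Pair FASTQ files by the mate suffixes _1/_2 (an assumed naming convention).
def mate_pairs(directory):
    for m1 in sorted(Path(directory).glob("*_1.fastq")):
        m2 = m1.with_name(m1.name.replace("_1.fastq", "_2.fastq"))
        if m2.exists():
            yield m1, m2

# Greedy job leveling: always send the next job to the least loaded node.
# Here "load" is simply the number of jobs assigned so far; the real
# algorithm samples actual server utilization.
def schedule(pairs, nodes):
    heap = [(0, n) for n in nodes]        # (load, node)
    heapq.heapify(heap)
    assignment = []
    for pair in pairs:
        load, node = heapq.heappop(heap)
        assignment.append((node, pair))
        heapq.heappush(heap, (load + 1, node))
    return assignment

nodes = [f"node{i:02d}" for i in range(16)]
for node, (m1, m2) in schedule(mate_pairs("fastq_subset"), nodes):
    print(node, m1.name, m2.name)         # one alignment job per mate pair
```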

The 8.5 release of the NGS Collection for Pipeline Pilot brings an impressive improvement in computational efficiency; figure 2 contrasts the performance of the 8.5 and 8.0 releases. The improvement stems from efficiently distributing the tasks that can be parallelized. To derive useful information through analysis, the mapped reads must be sorted and merged into a single file, which is then indexed using SAMtools. In Pipeline Pilot version 8.0, all sorting, merging, and indexing of the mapped reads runs on a single processor, which imposes a large performance overhead. Pipeline Pilot version 8.5 offloads the sorting of mapped files to the mapping components, for example the Bowtie Short Read Mapper, and thereby utilizes all available servers effectively; however, merging and indexing the mapped reads remain serial tasks. This change in component design has improved the overall performance of the protocol by 50 percent. Note that the sorting, merging, and indexing of BAM files are generic tasks, not specific to the use of a sequence repository.

Figure 2. Effect of the revision of the NGS Collection for Pipeline Pilot. [Bar chart: elapsed time (y-axis, 0 to 6) for the 8.0 and 8.5 releases of the NGS Collection, with the Add Mapped Reads to Repository portion of each bar marked separately.] The test case is run with Bowtie on 16 compute nodes of the ProLiant SL390s G7 Server cluster, with one 12-thread Bowtie process per node.

The mapping time increases slightly in version 8.5 because BAM file sorting moved into the mapping components. Add Mapped Reads to Repository contains a number of nonparallelizable tasks for the generic preparation of BAM files, so the mapping experiments that follow omit this component; the serial task of merging the mapped reads would take the same time regardless of the number of servers in the cluster, unless the dataset changes.
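As a concrete picture of the serial tail that remains in version 8.5, the merge-and-index step corresponds to standard SAMtools operations over the per-job sorted BAM files. This is a minimal sketch of those generic commands, not the Pipeline Pilot component itself; the file names are placeholders:

```python
import glob
import subprocess

# Per-job sorted BAM files produced by the mapping components (placeholder names).
sorted_parts = sorted(glob.glob("mapped/part_*.sorted.bam"))

# Merging the sorted parts into one coordinate-sorted BAM and indexing it are
# the steps that remain serial: a single process walks all inputs in order.
subprocess.run(["samtools", "merge", "merged.bam", *sorted_parts], check=True)
subprocess.run(["samtools", "index", "merged.bam"], check=True)
```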

The combination of the parallel processes that Pipeline Pilot uses to achieve parallelism and the threads used by Bowtie allows many ways to utilize a node. Threaded applications tend to have significant nonparallel overhead, causing them to scale poorly, while the process parallelism implemented with Pipeline Pilot is sensitive to load imbalance. Best performance should be achieved when the number of processes times the number of threads per process equals the number of cores on a node. Figure 3 shows the time to map the SRX000600 test case on 16 nodes of the ProLiant SL390s G7 Server; four 3-thread processes per node is the fastest way to run. This is 16 percent faster than running 12 single-thread processes and more than twice as fast as running a single 12-way parallel process. It is a mistake to rely exclusively on either thread or process parallelism.

Figure 3. The time to map the SRX000600 test case with Bowtie on a cluster of 16 ProLiant SL390s G7 Server nodes, as a function of the number of processes run on each node. [Bar chart: elapsed time (y-axis, 0 to 2.5) for 12 1-thread, 6 2-thread, 4 3-thread, 3 4-thread, 2 6-thread, and 1 12-thread processes.]

It is common practice to use clusters of computers to perform mapping. Figure 4 shows that the SRX000600 test case scales well as we double and quadruple the number of computers used to solve the problem, from 4 to 8 to 16. The solid lines show observed scaling and the dotted lines show ideal scaling; the scaling is good but far from ideal. The Intel Westmere-based ProLiant SL390s G7 Server cluster gives the fastest time to completion, despite having half as many cores as the AMD Magny-Cours-based ProLiant SL165s G7 Server. Four of the Intel-based compute nodes are 11 percent faster than the same number of AMD-based nodes; at 16 nodes the advantage is 15 percent.

Figure 4. The time to map the SRX000600 test case with Bowtie on clusters of ProLiant SL165s G7 and SL390s G7 Server nodes, as a function of the number of nodes. [Line chart: elapsed time versus 4, 8, and 16 compute nodes, for one 24-thread process per SL165s G7 node and one 12-thread process per SL390s G7 node.] One process is run on each node; each process uses as many threads as the node has cores. Actual performance is represented by the solid lines; ideal scaling from the 4-node time is shown by the dashed lines.
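The gap between the solid and dashed lines in such plots can be summarized as parallel efficiency: observed speedup divided by ideal speedup relative to the smallest node count. The timings below are hypothetical placeholders for illustration only, since the exact values behind the figures are not reproduced here:

```python
# Parallel efficiency relative to the smallest node count:
#   speedup(n)    = t_base / t(n)
#   efficiency(n) = speedup(n) / (n / n_base)
# The elapsed times below are HYPOTHETICAL placeholders, not the paper's data.
timings = {4: 4.0, 8: 2.2, 16: 1.3}   # nodes -> elapsed time (arbitrary units)
n_base = min(timings)
t_base = timings[n_base]

for n, t in sorted(timings.items()):
    speedup = t_base / t
    efficiency = speedup / (n / n_base)
    print(f"{n:2d} nodes: speedup {speedup:4.2f}x, efficiency {efficiency:.0%}")
```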

BWA

Burrows-Wheeler Aligner (BWA) is another threaded NGS program that aligns short nucleotide sequences against a long reference sequence. It is slower than Bowtie but allows indels in the alignment. For query sequences shorter than 200 base pairs (bp), BWA uses the BWA-short algorithm, which supports paired-end reads and is computationally efficient; for longer sequences, up to around 100 kbp, it uses the BWA-SW algorithm to perform a heuristic Smith-Waterman (SW)-like alignment. Both algorithms perform gapped sequence alignment.

For both BWA algorithms, the database file must first be indexed. This took 3.5 hours with this dataset on a single core of the ProLiant SL390s G7 Server cluster; the time taken to index the reference genome is excluded when comparing performance. Except for choosing a subset of the reads from SRX000600, no other filtering or trimming is performed on the dataset. A gap opening penalty of 11 and a mismatch penalty of 3 are used, and only one gap opening is allowed; the k-difference method is used to disallow longer gap extensions.

The BWA protocol is shown in figure 5. It uses two components of the NGS Collection for Pipeline Pilot and is very similar to the pipeline used in the Bowtie protocol. The FASTQ Directory Reader acts as a filter, selecting only paired reads and identifying the mate pairs by the suffixes 1 and 2. The component is configured to output groups of reads instead of individual reads, and the resulting paths to the reads are passed to the subsequent components. The BWA Short Read Mapper component of the NGS Collection contains a total of eight pipelines. It automates the complete workflow of the mapping experiment, including creating or reusing BWA indexes of the reference genome, decompressing short-read sequences if required, running BWA itself to map the sequences to the reference genome, and some housekeeping activities.
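As with Bowtie, one per-job unit of work can be pictured as a standalone BWA-short run. The sketch below is an assumed equivalent of a single job using bwa aln/sampe with the penalties stated above; the reference and file names are placeholders, and options not mentioned in the text are left at their defaults:

```python
import subprocess

ref = "hg37.fa"                          # indexed reference genome (placeholder)
mate1, mate2 = "run_1.fastq", "run_2.fastq"

# bwa aln options matching the stated parameters:
#   -t 3   three threads per process (the fastest combination found below)
#   -O 11  gap opening penalty
#   -M 3   mismatch penalty
#   -o 1   allow at most one gap opening per alignment
def aln(reads, sai):
    with open(sai, "wb") as out:
        subprocess.run(["bwa", "aln", "-t", "3", "-O", "11", "-M", "3",
                        "-o", "1", ref, reads], stdout=out, check=True)

aln(mate1, "run_1.sai")
aln(mate2, "run_2.sai")

# Pair the two alignments into SAM output.
with open("run.sam", "wb") as out:
    subprocess.run(["bwa", "sampe", ref, "run_1.sai", "run_2.sai",
                    mate1, mate2], stdout=out, check=True)
```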

Figure 5. A screenshot of the Pipeline Pilot protocol used when processing the subset of the SRX000600 experiment with BWA. This is a modification of the protocol used with Bowtie.

The native job-leveling algorithm of Pipeline Pilot is used to run the protocol in parallel on the cluster. BWA is run in parallel with a batch size of 1, which causes every pair of FASTQ files to be run as an independent job. The job-leveling algorithm assesses the resource utilization of the servers in the cluster and schedules each subsequent job to the least loaded server. Pipeline Pilot uses processes to achieve parallelism and BWA uses threads, again allowing many ways to utilize a node. Figure 6 shows the elapsed time for 16 computers running combinations of processes and threads that use all of the cores on each system. Again, a hybrid of the two parallelization methods, using three threads per process, is the best combination. This is 18 percent faster than running 12 single-thread processes and more than twice as fast as running a single 12-way parallel process.

Figure 6. The time to map the SRX000600 test case with BWA on a cluster of 16 ProLiant SL390s G7 Server nodes, as a function of the number of processes run on each node. [Bar chart: elapsed time (y-axis, 0 to 3.5) for 12 1-thread, 6 2-thread, 4 3-thread, 3 4-thread, 2 6-thread, and 1 12-thread processes.]

The parallel scaling achieved with BWA is shown in figure 7. The solid lines show observed scaling and the dotted lines show ideal scaling. While the scaling of elapsed time from 4 to 16 computers is linear, it is not ideal. The Intel Westmere-based ProLiant SL390s G7 Server cluster again completes faster than the AMD Magny-Cours-based ProLiant SL165s G7 Server.

Figure 7. The time to map the SRX000600 test case with BWA on clusters of ProLiant SL165s G7 and SL390s G7 Server nodes, as a function of the number of nodes. [Line chart: elapsed time versus 4, 8, and 16 compute nodes, for 4 6-thread processes per SL165s G7 node and 4 3-thread processes per SL390s G7 node.] Four processes are run on each node; the number of threads per process is a quarter of the number of cores on the node. Actual performance is represented by the solid lines; ideal scaling from the 4-node time is shown by the dashed lines.

Comparing these times with the Bowtie runs (see figure 8) shows that Bowtie scales a little better than BWA, because BWA allows for indels in the alignment while Bowtie does not. The difference in run time is only 3 percent at 4 compute nodes, but it reaches 13 percent at 16 nodes.

Figure 8. The times to map the SRX000600 test case on clusters of ProLiant SL390s G7 Server nodes, as a function of the number of compute nodes used. [Line chart: elapsed time versus 4, 8, and 16 compute nodes for Bowtie and BWA.] One process that uses 12 threads is run on each computer; times for Bowtie are shown in green, and times for BWA in orange.

Conclusion

The Next Generation Sequencing (NGS) Collection for Pipeline Pilot and HP ProLiant servers combine to give you an easy-to-use, efficient way to map reads to a repository. The Intel Westmere-based HP ProLiant nodes perform significantly better than the AMD Magny-Cours-based ProLiant nodes.

The alignment step benefits from parallelism. The alignment programs support thread-based parallelism, and using three threads per process yields optimal performance. While this step benefits from the use of many HP ProLiant nodes, the scaling is less than perfect.

In this paper we have shown that the NGS Collection for Pipeline Pilot makes process parallelism easy to use on HP ProLiant servers, with examples using both Bowtie and BWA for the time-consuming alignment step. Further studies will focus on genome assembly using the NGS Collection for Pipeline Pilot and the large-memory ProLiant DL980 Server. The combination of the NGS Collection for Pipeline Pilot with a cluster of Intel-based HP ProLiant nodes and the ProLiant DL980 Server forms the complete solution required for processing NGS data.

Drive your Accelrys Pipeline Pilot NGS projects rapidly on the HP UCP of servers, clusters, and blade systems. To learn more about Accelrys Pipeline Pilot, visit www.accelrys.com. To find out more about HP High Performance Computing and the Unified Cluster Portfolio, visit http://h20311.www2.hp.com/hpc/us/en/hpc-index.html.

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

AMD is a trademark of Advanced Micro Devices, Inc. Intel and Intel Xeon are trademarks of Intel Corporation in the U.S. and other countries. Windows is a U.S. registered trademark of Microsoft Corporation.

4AA3-9456ENW, Created February 2012