Maximizing NFS Scalability on Dell Servers and Storage in High-Performance Computing Environments

Popular because of its maturity and ease of use, the Network File System (NFS) can be used in high-performance computing cluster environments. This article discusses how administrators can improve performance and obtain maximum scalability from a Dell NFS server by adjusting parameters for the server and related systems, including I/O subsystems, back-end storage, the NFS service, and the network interconnect.

BY ANDREW BACHLER AND JENWEI HSIEH, PH.D.
Reprinted from Dell Power Solutions, October 2004. Copyright 2004 Dell Inc. All rights reserved.

Since the early 1980s, the Network File System (NFS) distributed file system has been deployed widely in many environments. NFS remains popular in today's enterprises because it is easy to use and simple to manage. One growing application for NFS is high-performance computing (HPC) clusters. This article describes how administrators can deploy a Dell NFS server in an HPC cluster environment and tune its system performance.

Scalability has long been considered the main disadvantage of NFS in HPC environments. NFS does not scale linearly because the protocol relies on a single server to process all client requests. As a result, in large deployments both the NFS server and the network path to the server can become bottlenecks. However, an NFS server can be highly effective in HPC environments depending on the type of traffic, back-end storage, NFS service configuration, and network interconnect used.

As shown in Figure 1, the NFS architecture comprises several components, and each component affects overall performance. For example, the method by which the NFS server exports the file system to the clients and the type of back-end storage the server uses can significantly influence performance. By adjusting the system components that most affect performance and following best practices, administrators can optimize the scalability of NFS servers.

Figure 1. NFS architecture (application, mount, network, export, NFS service, NFS server, local file system, storage)

Configuring the test environment

In June 2004, Dell engineers conducted a study to compare the effect of performance tuning on an experimental NFS HPC cluster configuration. The cluster comprised a Dell PowerEdge 1850 server for NFS file-sharing functions, configured with 2 GB of RAM and connected to a Dell PowerVault 220S storage system through a PowerEdge Expandable RAID Controller 4, Dual Channel (PERC 4/DC). The NFS server's operating system (OS) was Red Hat Enterprise Linux AS 3 using Intel Extended Memory 64 Technology (EM64T). The Dell engineers configured 32 client nodes in the cluster; each client was a PowerEdge 1750 server running Red Hat Enterprise Linux WS 3. An Extreme Networks BlackDiamond 6808 Gigabit Ethernet switch formed the network backbone.

To evaluate NFS performance in this study, Dell engineers used a parallel I/O program called ior_mpiio. (The name ior_mpiio stands for Interleaved or Random Message Passing Interface I/O.)[1]

This parallel program was developed by the Scalable I/O Project (SIOP) at Lawrence Livermore National Laboratory. It performs parallel writes and reads to and from a single file and reports the throughput rates. The ior_mpiio program uses the Message Passing Interface (MPI) for process synchronization. Data is written and read using independent parallel transfers of equal-sized blocks of contiguous bytes that cover the file with no gaps and do not overlap each other. Unless testers request otherwise by adjusting environment variables such as file size, record size, and file data layout, each ior_mpiio test consists of creating a new file, writing data to that file, and then reading the data back from the file. The data is written as C integers.

In conducting the tests for this article, the Dell team ran the ior_mpiio program 10 times for each test, discarded the best and the worst results, and took the average of the remaining results. Because the NFS server was configured with 2 GB of memory, the total amount of data written to the server in all tests was 4 GB, divided across the clients in equal chunks. Each client wrote and read 1 MB records at a time.

The test team tuned NFS service parameters for the Dell NFS server and clients, as well as the I/O subsystems, back-end storage, and network interconnect. Based on ior_mpiio benchmark results for the experimental NFS HPC cluster configuration described in this article, Dell engineers achieved significant performance increases by tuning these parameters instead of using their standard, default settings. The following sections describe how the Dell team obtained this performance improvement by adjusting settings for the I/O subsystems, back-end storage, NFS server and client parameters, and network interconnect.

Adjusting I/O based on application access patterns and needs

The most important considerations in designing any I/O subsystem are the needs and access patterns of the application that will use the system. Tuning choices depend on specific application characteristics such as the following: whether the application is read or write intensive, what the typical file size is, whether reads and writes are sequential or random, and whether files are accessed in large or small blocks. For instance, for small-file access, especially access that involves a significant amount of metadata operations such as deleting files, the Reiser file system is most likely the best choice of local file system for the NFS server.[2]

Another factor that can enhance performance is the data layout. If system administrators can control how data is stored on the disk, best practices suggest avoiding parallel writes as much as possible. Minimizing parallel writes enables the NFS server to take advantage of different caching options. Consider, for instance, an application that is distributed across multiple nodes, with each node writing its own file to the NFS server in parallel and later reading those files. These parallel writes create a noncontiguous allocation of disk blocks for each file, which later results in a random data distribution and, in turn, random disk access when systems attempt to read the files.
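
The effect can be illustrated outside the benchmark with a rough shell sketch along the following lines; the mount point /mnt/nfs, the file count, and the file sizes are assumptions for the example, not the ior_mpiio settings used in the Dell study.

    #!/bin/sh
    # Rough sketch: compare read throughput after parallel versus sequential
    # file creation on an NFS mount. Paths and sizes are illustrative only.
    NFSDIR=/mnt/nfs/layout-test
    mkdir -p $NFSDIR

    # Parallel layout: four writers run at once, so the server's local file
    # system tends to allocate their blocks interleaved on disk.
    for i in 1 2 3 4; do
        dd if=/dev/zero of=$NFSDIR/parallel.$i bs=1M count=256 &
    done
    wait

    # Sequential layout: the same amount of data written one file at a time.
    for i in 1 2 3 4; do
        dd if=/dev/zero of=$NFSDIR/serial.$i bs=1M count=256
    done

    # Remount to drop the client-side cache (assumes /mnt/nfs is in /etc/fstab),
    # then time reading each set back from the server.
    umount /mnt/nfs && mount /mnt/nfs
    time cat $NFSDIR/parallel.* > /dev/null
    time cat $NFSDIR/serial.* > /dev/null

Because the parallel writers all go through the same NFS server, their blocks tend to be interleaved in the server's local file system, which is what makes the later reads effectively random.
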
In this study, the test results indicated that when files were created contiguously, through sequential writes, client reads were significantly faster than reads of files that had been created in parallel and therefore required random reads. Nonparallel writes made more effective use of caching, both in the file system and on the storage controller. Note that for this study, the Dell team ran 32 instances of ior_mpiio, one per client, to ensure that each client had its own file to write and read. Figure 2 shows that read bandwidth was more than 2.5 times higher when data had been written sequentially than when data had been written in parallel.

Figure 2. Effect of sequential versus parallel data layout on read bandwidth (relative read bandwidth with 32 clients, files written in parallel versus files written sequentially)
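
For reference, launching the 32 per-client ior_mpiio instances described above might look like the following sketch, assuming an MPICH-style mpirun and a host file named ./clients; the benchmark's own command-line options are omitted because they depend on the version in use (see note 1).

    # Hypothetical launch of one ior_mpiio process on each of the 32 client
    # nodes listed in ./clients (MPICH-style mpirun assumed). The benchmark's
    # own options (file size, transfer size, target file) are omitted here;
    # see its README, referenced in note 1, for the exact flags.
    mpirun -np 32 -machinefile ./clients ./ior_mpiio
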

Tuning back-end storage to improve NFS server performance

Because storage is local to the NFS server, administrators can promote better overall server performance by tuning NFS storage for optimal performance. However, any adjustments to local back-end storage must take into consideration the I/O traffic patterns of the application or applications that will use the NFS server.

SCSI or Fibre Channel storage. Dell offers several SCSI and Fibre Channel storage enclosures. The PowerVault 220S is the most commonly used SCSI enclosure in combination with a PERC 4/DC. Fibre Channel products, however, are designed to be more efficient than SCSI at handling larger data blocks.

RAID level. The RAID level also influences system performance and reliability. Although RAID-0 offers outstanding throughput, RAID-5 is a common choice because it is designed to protect data if a hard drive fails by storing parity information on each disk in the logical storage unit. When feasible, administrators can create the RAID device across multiple channels of the controller to utilize the available bandwidth. This approach can be particularly helpful for RAID-0 configurations because adding more spindles can help improve RAID-0 performance. Conversely, adding more spindles to a RAID-5 configuration eventually reduces performance because RAID-5 overhead is directly proportional to the number of drives (parity is read from each drive).

File system characteristics. The local file system on the NFS server can be another critical factor affecting storage performance. Every file system exhibits different characteristics and targets distinct access patterns. For example, the Reiser file system can be an excellent choice for access patterns that consist mainly of metadata operations or small-file operations. Both the Reiser and ext3 file systems are journaling file systems that are available in standard Red Hat distributions. Journaling file systems are designed to require less time than non-journaling file systems to bring the server back to a stable state after a crash, but the overhead incurred by journaling can reduce system performance. However, the Reiser and ext3 file systems allow administrators to store the journal information on a separate device, such as a flash card, which can help improve file system performance if a very fast device is used. Despite their performance penalty, journaling file systems have become standard thanks to advances in file system design. Nonetheless, when administrators require top performance from an NFS configuration, ext2, a non-journaling file system, may be a better option.
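
As a minimal sketch of the external-journal option mentioned above, the following commands create an ext3 file system whose journal lives on a separate, faster device; the device names /dev/sdb1 (data) and /dev/sdc1 (journal) are assumptions for the example.

    # Create a dedicated external journal on the fast device (assumed /dev/sdc1).
    mke2fs -O journal_dev /dev/sdc1

    # Create the ext3 data file system (assumed /dev/sdb1) that uses that journal.
    mke2fs -j -J device=/dev/sdc1 /dev/sdb1

    # Or move the journal of an existing, unmounted ext3 file system:
    #   tune2fs -O ^has_journal /dev/sdb1
    #   tune2fs -j -J device=/dev/sdc1 /dev/sdb1

The external journal generally must use the same block size as the data file system; the mke2fs and tune2fs man pages describe the exact constraints for the installed e2fsprogs version.
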
Adjusting the NFS service to achieve greater scalability

After administrators tune the back-end storage on the server, the next step is to adjust the NFS service itself. Several important factors influence the capability of the NFS server to handle client requests, such as the number of nfsd threads and the parameters assigned to the export command. The choice of an appropriate server is crucial as well.

Choice of server. The Dell tests in this study indicate that the PowerEdge 1850 server can be an excellent platform for NFS file serving. The PowerEdge 1850 server is an EM64T-capable platform whose 64-bit extension OS is designed to allow better memory utilization than a 32-bit OS. Furthermore, the enterprise kernel in Red Hat Enterprise Linux AS 3, the OS installed on the PowerEdge 1850 server used in this study, is designed to enable administrators to address all the physical memory of the server. The PowerEdge 1850 server also includes a choice of riser cards based on Peripheral Component Interconnect Extended (PCI-X) or PCI Express technology.

Number of nfsd threads. Because the NFS service is a multithreaded application on the server side, the number of nfsd threads that the server launches can affect the scalability of the NFS server. By default, the PowerEdge 1850 server launches eight nfsd threads. However, the results of the Dell study described in this article indicate that performance can be improved by increasing the number of nfsd threads as the number of clients exercising the server increases.

One way to check nfsd thread usage is to observe the last few entries on the th line in the /proc/net/rpc/nfsd file. The last 10 numbers on the th line represent the number of seconds that thread usage was at a given percentage of the maximum, in 10 percent increments. For instance, if the ninth number is 6, thread usage was at roughly 90 percent of the maximum for 6 seconds. High numbers in the last three or four places indicate that the system might benefit from an increased thread count. To change the thread count, administrators can modify the value of the RPCNFSDCOUNT variable in /etc/init.d/nfs and restart the NFS service.

By increasing the nfsd thread count, the Dell team observed performance improvements in this study, particularly on reads. The baseline for the testing in this study was the performance of 32 clients using eight nfsd threads. Figure 3 shows that the performance improvements were more significant as the number of nodes increased, which is typical in environments such as HPC clusters. The improved bandwidth can be attributed to the increase in the number of threads. In this study, doubling the number of nfsd threads from 8 to 16 helped speed up the read performance of incoming NFS requests from many clients.

Figure 3. Effect of the number of nfsd threads on read performance (relative read performance for 4, 16, and 32 clients with 8 versus 16 nfsd threads)
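
A minimal sketch of this check-and-adjust procedure follows; the thread count of 16 is only an example, and on some Red Hat releases the RPCNFSDCOUNT variable is set in /etc/sysconfig/nfs rather than edited directly in the init script.

    # Inspect nfsd thread usage; the th line ends with the ten histogram
    # buckets described above.
    grep ^th /proc/net/rpc/nfsd

    # Raise the thread count (16 is only an example) by editing the variable
    # the init script uses, then restart the NFS service so it takes effect.
    #   In /etc/init.d/nfs (or /etc/sysconfig/nfs on some releases):
    #   RPCNFSDCOUNT=16
    /etc/init.d/nfs restart

    # Verify how many nfsd threads are now running.
    ps ax | grep -c '[n]fsd'
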

Export command parameters. Another component of the NFS service that administrators should consider is how the file system is made available, or exported, to the clients. The export command has several parameters. The one that most affects performance is the sync or async option. Application performance can benefit greatly from the asynchronous behavior of NFS. In this test, write performance increased by more than 50 percent when Dell engineers used the async option instead of the sync option. (See Figure 4, which shows the relative performance improvement for writes using asynchronous I/O.) However, it is very important for system administrators to weigh the risks and benefits of using the async option and to understand that the greatly improved write throughput of async I/O comes with a higher risk of data loss if the server crashes before committing data to disk. Also note that the default export option differs among versions of the NFS server's OS.[3]

Figure 4. Write performance improvement using asynchronous I/O (relative write performance, synchronous versus asynchronous I/O)

Using client parameters to specify the size of data blocks

The most commonly tuned parameters for NFS clients are the rsize and wsize options to the mount command. The values of rsize and wsize specify the amount of read and write data, respectively, that is transferred in each NFS request. The limit on the values of rsize and wsize in NFS version 2 is 8 KB each, and the two values need not be the same. In version 3 of the NFS protocol, the maximum values of rsize and wsize are defined by the NFSSVC_MAXBLKSIZE variable in the include/linux/nfsd/const.h file. Increasing these values and recompiling the kernel may benefit IT departments that seek to achieve high performance over a non-lossy network.

Note: Although User Datagram Protocol (UDP) is the default transport protocol for NFS, it can be unreliable. For example, when large chunks of data are transferred using UDP, a single lost packet results in the retransmission of the entire request. This can make UDP an untenable option for lossy networks.
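
The following sketch ties the server-side export option and the client-side mount options together; the export path /data, the client subnet, the server name nfsserver, and the 8 KB rsize and wsize values are assumptions for illustration, and, as noted above, async trades data safety for write throughput.

    # --- On the NFS server: /etc/exports ---
    # Export /data to the cluster subnet with asynchronous writes (faster, but
    # data can be lost if the server crashes before it reaches disk):
    #   /data  192.168.1.0/255.255.255.0(rw,async)
    # Re-export after editing the file:
    exportfs -ra

    # --- On an NFS client ---
    # Mount with 8 KB read and write request sizes; "tcp" is one way to avoid
    # the UDP retransmission problem described above on lossy networks.
    mount -t nfs -o rw,rsize=8192,wsize=8192,tcp nfsserver:/data /mnt/data

The tcp mount option is shown only as a way to address the UDP caveat above; the article does not state which transport the Dell tests used.
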
Choosing optimal system components

The choice of individual system components that affect the baseline storage performance can be very important in tuning the NFS server. For example, the PowerEdge 1850 server used in this study provides a choice of two different riser cards based on PCI-X and PCI Express technology. PCI Express is the new-generation I/O architecture that Intel plans to feature across all Intel platforms for workstations, desktops, and servers.[4] PCI Express is based on a type of serial communications technology equivalent to that used by the Universal Serial Bus (USB) and Serial ATA (SATA) hard drives. PCI Express accommodates board connectors in x1, x4, x8, x16, and x32 widths. A single lane, referred to as an x1 link (pronounced "by one" link), can help support up to 500 MB/sec in both directions (dual simplex). Lanes can be aggregated to create higher-bandwidth links. For example, at an x32 width, the PCI Express specification is designed to offer a potential peak bandwidth of up to 16 GB/sec dual simplex while remaining compatible with PCI software.

Additional PCI Express features are designed to provide the following advantages: compatibility with existing PCI drivers and software as well as many operating systems; high bandwidth per pin, low overhead, and low latency; a point-to-point connection, which enables each device to have a dedicated connection without sharing bandwidth; power consumption and power management features; hot-swap and hot-plug support for devices; and coexistence with current PCI hardware.

Dell engineers installed a PCI Express PERC 4 Express/DC (PERC 4e/DC) card in a PowerEdge 1850 NFS server that was connected to an external storage system containing fourteen 146 GB SCSI drives in a RAID-0 configuration.

Compared to the PERC 4/DC option, which was tested in the same RAID-0 configuration for this study, the PCI Express bus and PERC 4e/DC provided a 60 percent to 70 percent increase in NFS performance over their PCI-X equivalents (see Figure 5).[5][6][7]

Figure 5. Effect of the server's peripheral bus and RAID controller on NFS performance (relative write bandwidth with 32 clients, PCI-X RAID controller versus PCI Express RAID controller)

Using NIC teaming to alleviate network bottlenecks

If the network becomes the bottleneck in an NFS setup, then network interface card (NIC) teaming in the NFS server can help increase the bandwidth available to the server. The NIC teaming method uses a driver to combine two or more NICs into a single logical network link. NIC teaming can work well with UDP traffic,[8] making it appropriate for use in an NFS server. However, NIC teaming benefits NFS environments only when the bandwidth to the back-end storage exceeds the bandwidth available through a single network link.
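
A minimal sketch of NIC teaming on a Red Hat system of this era follows, using the Linux bonding driver; the interface names, addresses, and round-robin mode are assumptions for the example, and the article does not state which teaming driver or mode the Dell study used.

    # /etc/modules.conf (RHEL 3 era; /etc/modprobe.conf on later releases):
    #   alias bond0 bonding
    #   options bonding mode=0 miimon=100    # mode 0 is round-robin

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    #   DEVICE=bond0
    #   IPADDR=192.168.1.10
    #   NETMASK=255.255.255.0
    #   ONBOOT=yes
    #   BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1)
    #   DEVICE=eth0
    #   MASTER=bond0
    #   SLAVE=yes
    #   ONBOOT=yes
    #   BOOTPROTO=none

    # Apply the configuration and check the state of the team.
    service network restart
    cat /proc/net/bonding/bond0

Depending on the bonding mode, the network switch may also require a matching link-aggregation configuration.
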
Boosting NFS application and cluster performance

The NFS distributed file system can operate effectively in many HPC installations. Tests performed for the study described in this article indicate that tuning the NFS service to perform optimally in a specific environment can help increase its performance and, consequently, the performance of an application using NFS services. No single method exists to optimize NFS performance. Tuning choices are not universal and should be determined by the application needs of each cluster. However, administrators can adjust the many components of an NFS server and the systems that affect the server to help improve the speed and throughput of both the application and the cluster as a whole. Administrators can choose the local file system for the NFS server based on the application that will use the server, and optimize the data layout accordingly. In addition, adjusting the local back-end storage, server parameters, and network configuration can contribute to performance gains. For example, using the parameters and options described in this article, Dell engineers achieved significant improvements in NFS performance for different application patterns on a cluster of 32 Dell servers, compared to the same configuration using default NFS service parameters and standard settings.

Acknowledgments

The authors would like to thank Rizwan Ali, Baris Guler, and Amina Saify for their assistance in conducting the tests and preparing this article.

Andrew Bachler is a systems engineer in the Scalable Systems Group at Dell. He has an associate's degree in Electronic Engineering and 12 years of UNIX and Linux experience.

Jenwei Hsieh, Ph.D., is an engineering manager in the Scalable Systems Group at Dell, where he is responsible for developing high-performance clusters. His work in the areas of multimedia computing and communications, high-speed networking, serial storage interfaces, and distributed network computing has been published extensively. Jenwei has a Ph.D. in Computer Science from the University of Minnesota and a B.E. from Tamkang University in Taiwan.

FOR MORE INFORMATION

Dell storage products: http://www.dell.com/storage
Dell HPC cluster products: http://www.dell.com/hpcc

Notes

1. For more information about ior_mpiio, visit http://www.llnl.gov/asci/purple/benchmarks/limited/ior/ior.mpiio.readme.html.
2. For more information about the Reiser file system, visit http://www.namesys.com.
3. The exportfs program, which is responsible for exporting the file systems, is installed from the nfs-utils RPM (Red Hat Package Manager). Version 1.0.1 of the nfs-utils utility, installed with Red Hat Linux 9.0, exports file systems with the sync option by default. Earlier versions of the utility, including the one installed with Red Hat Enterprise Linux AS 2.1, default to async export mode.
4. For more information about PCI Express, see "Exploring the Advantages of PCI Express" by Chris Croteau and Paul Luse in Dell Power Solutions, June 2004.
5. This test was run locally on the NFS server, without utilizing the network.
6. Aside from running at a faster clock rate, the PERC 4e/DC was designed with other advantages over the PERC 4/DC. For example, the PERC 4e/DC uses a 200 MHz cache with 256 MB of double data rate (DDR) memory, versus a 100 MHz cache with 128 MB of synchronous dynamic RAM (SDRAM) on the PERC 4/DC. This study tested each controller's standard, default feature set.
7. Unless otherwise indicated, all tests in this study were performed using PCI-X technology.
8. NIC teaming works well with UDP traffic because UDP packets are not sequenced; therefore, the order in which they arrive is irrelevant. (When using TCP, the CPU must perform additional work to properly sequence packets arriving on two separate interfaces.)