ConnectX VPI Single/Dual-Port Adapters with Virtual Protocol Interconnect
ADAPTER CARDS

ConnectX adapter cards with Virtual Protocol Interconnect (VPI) provide the highest performing and most flexible interconnect solution for Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. ConnectX with VPI also simplifies network deployment by consolidating cables and enhancing performance in virtualized server environments.

Virtual Protocol Interconnect
VPI-enabled adapters make it possible for any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. With auto-sense capability, each ConnectX port can identify and operate on InfiniBand, Ethernet, or Data Center Ethernet (DCE) fabrics. ConnectX with VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

World-Class Performance Over InfiniBand
ConnectX delivers low latency and high bandwidth for performance-driven server and storage clustering applications. These applications benefit from the reliable transport connections and advanced multicast support offered by ConnectX. Network protocol processing and data movement overhead such as InfiniBand RDMA and Send/Receive semantics are completed in the adapter without CPU intervention. Servers supporting PCI Express 2.0 with 5GT/s will be able to take advantage of 40Gb/s InfiniBand, balancing the I/O requirements of these high-end servers.

World-Class Performance Over Data Center Ethernet
ConnectX delivers similar low-latency and high-bandwidth performance leveraging Ethernet with DCE support.
Low Latency Ethernet (LLE) provides efficient RDMA transport over Layer 2 Ethernet with IEEE 802.1Q VLAN tagging and Ethernet Priority-based Flow Control (PFC). The LLE software stack maintains existing and future application compatibility for bandwidth- and latency-sensitive clustering applications. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.

TCP/UDP/IP Acceleration
Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over InfiniBand or 10GigE. The hardware-based stateless offload engines in ConnectX reduce the CPU overhead of IP packet transport, freeing more processor cycles to work on the application.

I/O Virtualization
ConnectX support for hardware-based I/O virtualization provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.

ConnectX Adapter Cards: Single- and Dual-Port QSFP Connectors Shown

BENEFITS
- One adapter for InfiniBand, 10Gig Ethernet, or Data Center Ethernet fabrics
- World-class cluster performance
- High-performance networking and storage access
- Guaranteed bandwidth and low-latency services
- Reliable transport
- I/O consolidation
- Virtualization acceleration
- Scales to tens-of-thousands of nodes

KEY FEATURES
- Virtual Protocol Interconnect
- 1us MPI ping latency
- Selectable 10, 20, or 40Gb/s InfiniBand or 10GigE per port
- Single- and dual-port options available
- PCI Express 2.0 (up to 5GT/s)
- CPU offload of transport operations
- End-to-end QoS and congestion control
- Hardware-based I/O virtualization
- TCP/UDP/IP stateless offload
- Fibre Channel encapsulation (FCoIB or FCoE)

Storage Accelerated
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Fibre Channel frame encapsulation (FCoIB or FCoE) and hardware offloads enable simple connectivity to Fibre Channel SANs.

Software Support
All Mellanox adapter cards are compatible with TCP/IP and OpenFabrics-based RDMA protocols and software. They are also compatible with InfiniBand and cluster management software available from OEMs. The adapter cards are compatible with major operating system distributions.

FEATURE SUMMARY

INFINIBAND
- IBTA Specification compliant
- 10, 20, or 40Gb/s per port
- RDMA, Send/Receive semantics
- Hardware-based congestion control
- Atomic operations
- 16 million I/O channels
- 256 to 4Kbyte MTU, 1Gbyte messages
- 9 virtual lanes: 8 data + 1 management

ENHANCED INFINIBAND
- Hardware-based reliable transport
- Hardware-based reliable multicast
- Extended Reliable Connected transport
- Enhanced Atomic operations
- Fine-grained end-to-end QoS

ETHERNET
- IEEE 802.3ae 10 Gigabit Ethernet
- IEEE 802.3ak 10GBASE-CX4
- IEEE 802.3ad Link Aggregation and Failover
- IEEE 802.3x Pause
- IEEE 802.1Q, .1p VLAN tags and priority
- Multicast
- Jumbo frame support (10KB)
- 128 MAC/VLAN addresses per port
- Class Based Flow Control / Per Priority Pause

HARDWARE-BASED I/O VIRTUALIZATION
- Single Root IOV
- Address translation and protection
- Multiple queues per virtual machine
- VMware NetQueue support
- PCISIG IOV compliant

ADDITIONAL CPU OFFLOADS
- TCP/UDP/IP stateless offload
- Intelligent interrupt coalescence
- Compliant to Microsoft RSS and NetDMA

STORAGE SUPPORT
- T10-compliant Data Integrity Field support
- Fibre Channel over InfiniBand or Ethernet

COMPATIBILITY

CONNECTIVITY
- Interoperable with InfiniBand or 10GigE switches
- microGiGaCN or QSFP connectors
- 20m+ (10Gb/s), 10m+ (20Gb/s), or 7m+ (40Gb/s) of passive copper cable
- External optical media adapter and active cable support

MANAGEMENT AND TOOLS
InfiniBand:
- OpenSM
- Interoperable with third-party subnet managers
- Firmware and debug tools (MFT, IBDIAG)
Ethernet:
- MIB, MIB-II, MIB-II Extensions, RMON, RMON 2
- Configuration and diagnostic tools

OPERATING SYSTEMS/DISTRIBUTIONS
- Novell SLES, Red Hat Enterprise Linux (RHEL), Fedora, and other Linux distributions
- Microsoft Windows Server 2003/2008/CCS 2003
- OpenFabrics Enterprise Distribution (OFED)
- OpenFabrics Windows Distribution (WinOF)
- VMware ESX Server 3.5, Citrix XenServer 4.1

PROTOCOL SUPPORT
- Open MPI, OSU MVAPICH, HP MPI, Intel MPI, MS MPI, Scali MPI
- TCP/UDP, IPoIB, SDP, RDS
- SRP, iSER, NFS RDMA, FCoIB, FCoE
- uDAPL

ORDERING INFORMATION

Part Number   Ports                                 Host Bus   Power (Typ)   Dimensions w/o Brackets
MHGH28-XTC    Dual 4X 20Gb/s InfiniBand or 10GigE   PCIe 2.0   11.0W         13.6cm x 6.4cm
MHGH29-XTC    Dual 4X 20Gb/s InfiniBand or 10GigE   PCIe 2.0   11.6W         13.6cm x 6.4cm
MHJH29-XTC    Dual 4X 40Gb/s InfiniBand or 10GigE   PCIe 2.0   12.2W         13.6cm x 6.4cm
MHRH29-XTC    Dual 4X QSFP 20Gb/s InfiniBand        PCIe 2.0   11.6W         16.8cm x 6.4cm
MHQH29-XTC    Dual 4X QSFP 40Gb/s InfiniBand        PCIe 2.0   12.2W         16.8cm x 6.4cm

Single-port options also available.

350 Oakmead Parkway, Suite 100, Sunnyvale, CA
Copyright Mellanox Technologies. All rights reserved. Mellanox, ConnectX, InfiniBlast, InfiniBridge, InfiniHost, InfiniRISC, InfiniScale, and InfiniPCI are registered trademarks of Mellanox Technologies, Ltd. Virtual Protocol Interconnect and BridgeX are trademarks of Mellanox Technologies, Ltd. All other trademarks are property of their respective owners. 2770PB Rev 2.5
CABLES

Copper Cables Supporting Data Rates up to 40Gb/s

Mellanox cables are a cost-effective solution for connecting InfiniBand adapters, extending the benefits of Mellanox's high-performance InfiniBand adapters throughout the network. These cables meet or exceed IBTA and IEEE standards, and have been rigorously tested with all Mellanox products. Mellanox passive copper cables provide robust connections up to seven meters in length and require no power from the connector. Mellanox active cables are a thinner alternative to the passive six- and eight-meter cables, and are available up to 12 meters in length. All cables have easy-to-read serial numbers printed at each end to facilitate installation and network debug. Enterprise Data Centers, High-Performance Computing, and Embedded installations are assured the highest interconnect performance when using Mellanox adapters and cables.

CABLES ORDERING INFORMATION

Part Number     Cable Type
MCC4Q30C-00A    QSFP
MCC4Q30C        QSFP
MCC4Q26C        QSFP (multiple lengths)
MCC4N26C        Hybrid QSFP-CX4 (multiple lengths)
MCC4R26C        Active QSFP (multiple lengths)
MCC4R28C        Active QSFP

BENEFITS
- Industry-leading price/performance
- Certified performance on Mellanox-based products

KEY FEATURES
- Compliant to the InfiniBand Architecture specification
- Supports up to 40Gb/s bandwidth
- Superior signal integrity
- Low Bit Error Rate (BER)
- Low crosstalk
- Low insertion loss
- Matched impedance
- Secure latching mechanism
- Serial numbers printed at each end
- RoHS-5 compliant
- 1-year warranty

SPECIFICATIONS
Passive cables:
- Maximum reach: 7 meters (24 AWG)
Active cables:
- Maximum reach: 12 meters (26 AWG)
- Maximum current: 400mA
- Power supply: 3.3 Volts

350 Oakmead Parkway, Suite 100, Sunnyvale, CA
Copyright Mellanox Technologies. All rights reserved.
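The active-cable specifications above pin down a power budget: multiplying the maximum current by the supply voltage gives the worst-case draw per powered connector. A quick check, using only the figures quoted above:

```python
# Worst-case power drawn by an active cable connector, from the specs above:
# 3.3V supply, 400mA maximum current.
SUPPLY_VOLTS = 3.3
MAX_CURRENT_AMPS = 0.400

max_power_watts = SUPPLY_VOLTS * MAX_CURRENT_AMPS
print(round(max_power_watts, 2))  # 1.32 (watts per powered end)
```

At roughly 1.3W, an active cable stays well inside typical per-port power allowances on switches and adapters, which is what makes the longer 12-meter reach practical.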
2961PB Rev 1.1
SWITCH SYSTEM

MTS3600 36-Port 20 and 40Gb/s InfiniBand Switch System

MTS3600 switch systems provide the highest-performing fabric solution by delivering high bandwidth and low latency to Enterprise Data Centers, High-Performance Computing, and Embedded environments. Networks built with MTS3600 systems can carry converged traffic with the combination of assured bandwidth and granular quality of service.

World-Class Density and Scalability
Built with Mellanox's 4th-generation InfiniScale IV InfiniBand switch device, the MTS3600 provides up to 40Gb/s full bisectional bandwidth per port. With 36 ports, these systems are among the densest switching systems available. These stand-alone switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium sized clusters.

Sustained Network Performance
The MTS3600 supports adaptive routing as well as static routing to reduce or eliminate congestion. Adaptive routing dynamically and automatically re-routes traffic to alleviate congested ports. Static routing, meanwhile, has been shown to produce superior results in networks where traffic patterns are more predictable. In situations where switching contention is unavoidable, such as many-to-one traffic patterns, the MTS3600 supports IBTA 1.2 congestion control mechanisms. The switch systems work in conjunction with ConnectX InfiniBand HCAs to restrict the traffic causing congestion, ensuring high bandwidth and low latency for all other flows. Whether used for parallel computation or as a converged fabric, the combination of high bandwidth, adaptive routing, and congestion control provides the industry's best traffic-carrying capacity.

Utility Computing
Virtual partitioning of a cluster enables efficient use of all of its computing resources. Allocating only the compute power that each client needs enables more clients on the cluster at one time.
Clusters built on MTS3600 systems can run up to six virtual subnets, securely segregating client processes while ensuring the highest productivity of the cluster.

High Availability
The MTS3600 systems deliver the director-class availability required for mission-critical application environments. These systems support an optional redundant hot-swappable power supply and a hot-swappable fan unit to help eliminate downtime.

Easy Management
The MTS3600 systems are easily managed through any IBTA-compliant subnet manager. Port configuration and data paths can be set up automatically or customized to meet the needs of the application. Firmware can be updated in-band for simple network maintenance.

1U 36-port InfiniBand Switch with QSFP connectors

BENEFITS
- Industry-leading stand-alone switch platform in performance, power, and density
- Wire-speed InfiniBand switch platform up to 40Gb/s per port
- High-bandwidth, low-latency fabric for compute-intensive applications

KEY FEATURES
- 2.88Tb/s switching capacity
- Adaptive routing
- Congestion control
- Quality of Service enforcement
- Up to 6 virtual subnets
- Temperature sensors and voltage monitors

INFINIBAND
- IBTA Specification 1.2 compliant
- Integrated subnet manager agent
- Hardware-based congestion control
- Adaptive routing
- Fine-grained end-to-end QoS
- 2.88Tb/s total switching bandwidth
- 256 to 4Kbyte MTU
- 9 virtual lanes: 8 data + 1 management
- 48K-entry linear forwarding database

MANAGEMENT
- Supports OpenSM or third-party subnet managers
- Up to 6 virtual subnets with dynamic port allocation
- Port mirroring
- Diagnostic and debug tools
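The subnet segregation described above rests on InfiniBand partitioning: every packet carries a 16-bit partition key (P_Key) whose low 15 bits identify the partition and whose top bit marks full versus limited membership. The switch and subnet-manager mechanics are product-specific, but the P_Key layout below comes from the InfiniBand Architecture specification; the helper functions are an illustrative sketch, not Mellanox APIs:

```python
# InfiniBand partition keys (P_Keys): 16-bit values where bit 15 is the
# membership bit (1 = full member, 0 = limited member) and bits 0-14 name
# the partition. Limited members may talk to full members, but two limited
# members of the same partition cannot talk to each other.

FULL_MEMBER_BIT = 0x8000

def partition_id(pkey: int) -> int:
    """Low 15 bits: which partition this key belongs to."""
    return pkey & 0x7FFF

def is_full_member(pkey: int) -> bool:
    return bool(pkey & FULL_MEMBER_BIT)

def can_communicate(a: int, b: int) -> bool:
    """Same partition, and at least one endpoint is a full member."""
    return partition_id(a) == partition_id(b) and (is_full_member(a) or is_full_member(b))

default_pkey = 0xFFFF     # the default partition, full membership
limited_default = 0x7FFF  # same partition, limited membership

assert can_communicate(default_pkey, limited_default)         # full <-> limited: allowed
assert not can_communicate(limited_default, limited_default)  # limited <-> limited: blocked
```

This is how client processes can share one physical fabric while remaining securely segregated: the subnet manager assigns each node group its own P_Key table, and the hardware drops packets whose keys do not match.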
Mellanox Advantage
Mellanox is the leading supplier of industry-standard InfiniBand adapters and switch products. Our products have been deployed in clusters scaling to thousands of nodes and are being deployed end-to-end in data centers and Top500 systems around the world.

SPECIFICATIONS
- 36 InfiniBand ports
  - Up to 40Gb/s per port
  - Up to 3.5W per port for active cable or fiber module support
- Full bisectional bandwidth to all ports
- IBTA 1.2 compliant
- Dual redundant auto-sensing 110/220VAC power supplies
- Hot-swappable fan trays
- Port and system status LED indicators
- RoHS-5 compliant
- Rack-mountable, 1U
- 1-year warranty

HARDWARE

InfiniBand Switch:
- 36 ports
- Up to 40Gb/s per port
- Fully non-blocking architecture

Connectors and Cabling:
- QSFP connectors
- Passive copper cable
- Active copper and fiber cables
- Fiber media adapters

Indicators:
- Per-port status LEDs: Link, Activity
- System status LEDs: System, fans, power supplies

Dimensions and Weight:
- 1.73"H x 17.5"W x 23.8"D
- 22 lbs (1 power supply); 24.5 lbs (2 power supplies)

Power Supply:
- Dual redundant slots
- Hot-plug operation
- Input range: 110/220VAC
- Frequency: 47-63Hz, single-phase AC

Maximum Power Consumption:
- 316W (MTS3600R-1UNC): dissipated power 190W, power through QSFP 126W
- 326W (MTS3600Q-1UNC): dissipated power 200W, power through QSFP 126W

COMPLIANCE

Safety:
- US/Canada: cTUVus
- EU: IEC60950
- International: CB

EMC (Emissions):
- USA: FCC, Class A
- Canada: ICES, Class A
- EU: EN55022, Class A
- EU: EN55024, Class A
- Japan: VCCI, Class A

Environmental:
- EU: IEC: Random Vibration
- EU: IEC: Shocks, Type I / II
- EU: IEC: Fall Test

Acoustic:
- ISO 7779
- ETS

Operating Conditions:
- Operating temperature: 10 to 45 C
- Humidity: 10-90% non-condensing

ORDERING INFORMATION
Part number format (example: MTS3600Q-1UNC):
- RoHS: R5
- Reserved field
- Optional management: U = unmanaged, A = Celeron-based management
- Number of power supplies
- InfiniBand port speed and port connector: R = 10, 20Gb/s QSFP; Q = 10, 20, 40Gb/s QSFP
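Two of the headline numbers above follow directly from the per-port figures: 2.88Tb/s is 36 ports at 40Gb/s counted in both directions, and each quoted maximum power figure is the dissipated power plus the power delivered through the QSFP cages. As arithmetic:

```python
# Aggregate switching capacity: 36 ports x 40Gb/s, counted full duplex.
ports, gbps_per_port = 36, 40
capacity_tbps = ports * gbps_per_port * 2 / 1000  # both directions, Gb/s -> Tb/s
print(capacity_tbps)  # 2.88

# Maximum power consumption = dissipated power + power fed to the QSFP ports.
qsfp_power_w = 126
assert 190 + qsfp_power_w == 316  # MTS3600R-1UNC
assert 200 + qsfp_power_w == 326  # MTS3600Q-1UNC

# The QSFP budget is consistent with up to 3.5W per port for active cables
# or fiber modules: 36 ports x 3.5W = 126W.
assert ports * 3.5 == qsfp_power_w
```

The per-port power line in the specifications is therefore not an independent figure; it is exactly the QSFP share of the total power budget spread across all 36 ports.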
InfiniScale IV 36-port Switch System
2938PB Rev 1.0
Platform Cluster Manager

Clusters Made Simple: A Better Way to Manage Open Clusters

Benefits:
- Easily deploy open clusters
- Simplify cluster management
- Update compute nodes without re-installing
- Perform risk-free, trial software installations
- Change node personalities with ease
- Deploy your choice of Linux OS environments

Highlights:
- The most advanced cluster management tools
- Integrated workload management
- Intuitive web interface
- Optimized for performance
- Certified & supported

Key capabilities:
- Clusters installed quickly and easily
- Easily install add-on software
- Available NVIDIA CUDA SDK
- Easy-to-use web interface
- Update software with no re-installing
- Perform repository snapshots
- Open source solution
- Change node personalities any time
- Integrated workload manager
- User-friendly job submission portal
- Built-in monitoring, alerting & reporting
- Multiple OS environments
- Vendor optimized
- 24/7 Enterprise support

Platform Cluster Manager
Platform Cluster Manager (PCM) represents a re-think of how open source clusters are deployed and managed. Based on the open source Project Kusu, Platform Cluster Manager is a pre-integrated, vendor-certified solution enabling the consistent delivery of scaled-out application clusters. Built using open source components, it includes all the tools required to deploy, run, and manage clusters with unprecedented ease. Using Platform Cluster Manager, cluster administrators can easily perform operations that are simply not possible with less advanced cluster management solutions.

Easy-to-use web interface
The web interface makes operations like installing and maintaining clusters a snap. Cluster administrators will appreciate the built-in alerting, reporting, and management features and the ease with which software configurations can be managed and deployed without ever leaving the Platform Management Console (PMC) graphical interface.
Cluster users can take advantage of the web portal and the open workload manager, Platform Lava (Platform LSF compatible), which allows them to submit, monitor, and manage their own cluster workloads without ever touching the command line. Ideal for academic and commercial applications alike, the intuitive web interface makes job submission self-documenting, reducing training requirements and error rates and lightening the support burden on busy cluster administrators.
Platform Cluster Manager provides an intuitive, extensible web interface allowing administrators to manage the cluster without learning complex command-line syntax. Cluster users will appreciate the easy-to-use web interface for submitting and monitoring simulations and cluster applications.

Manage cluster software as installable components

Update compute nodes on the fly
As clusters inevitably get busy and user communities grow, scheduling downtime even in small environments can be a challenge. This is especially true in environments with multiple users, or those supporting long-running or parallel workloads. While other cluster management solutions require that nodes be re-installed every time a minor software update is made, Platform Cluster Manager provides a powerful cluster file manager (CFM). For common activities such as package and patch installations, it can transparently synchronize files to cluster nodes without any downtime or re-installation requirement. Updates are properly reflected in the Platform Cluster Manager repository so that changes will be retained the next time nodes are provisioned. For updates that do require a reboot, the web interface makes this apparent to cluster administrators, allowing them to defer the re-installation of nodes according to their own schedule.

Perform risk-free, trial software installations
Platform Cluster Manager employs a unique and flexible approach to software management, supporting multiple repositories that in turn contain different operating systems and software components. Performing software upgrades becomes simple and risk-free. Administrators can snapshot their known-good repository and perform upgrades against a repository copy. If something goes wrong after deploying the upgraded software, node group definitions can simply be linked to the previous repository version, allowing the cluster to be quickly restored to its original state.
This unique ability of Platform Cluster Manager to create restore points for clusters, together with the ability to maintain libraries of installable software components, is ideal for agencies or labs that may need to rapidly redeploy earlier software stacks owing to regulatory requirements, or reliably demonstrate the repeatability of prior calculations.

Flexible provisioning
While Platform Cluster Manager, by default, employs the same provisioning approach as other cluster managers, it also offers enhanced capabilities. In addition to performing simple package-based installations, Platform Cluster Manager supports image-based installs, allowing cluster nodes to be rapidly cloned, and even provides support for diskless nodes. Multiple operating systems and versions can be deployed concurrently to the same cluster, providing cluster administrators with the flexibility to support special requests from their user communities and accommodate unanticipated changes.

The most advanced HPC tools
Since performance is the whole reason for HPC clusters, Platform Cluster Manager provides the most advanced suite of HPC infrastructure components, including libraries pre-optimized for vendor hardware and network components. Standard benchmark tests are included to ensure your cluster will deliver the best performance possible out of the box. Platform Cluster Manager includes a range of industry-standard, pre-tuned MPI implementations, making it easy to get your parallel applications up and running quickly. For organizations requiring enhanced management of parallel workloads, Platform LSF, available separately, can be easily loaded as a kit and deployed to cluster nodes automatically.

Exploit the power of GPUs in your HPC environment
Available as a free download for Platform Cluster Manager, the NVIDIA CUDA kit allows customers to easily harness the tremendous floating-point capability of modern graphics processors (GPUs) for general-purpose computations.
The NVIDIA kit automatically deploys a full CUDA 2.2 environment including all the drivers, tools, SDKs and debuggers needed to deliver breakthrough performance at low cost for a variety of widely used HPC algorithms optimized for NVIDIA GPUs.
Platform Cluster Manager Components

Everything you need is here

Vendor optimized
Unlike other cluster deployment tools that may leave you scrambling for optimized drivers or specialty software components, Platform Cluster Manager is complete. It provides all the required software components so that open clusters are fully integrated out of the box. Platform Cluster Manager also includes a number of vendor-specific enhancements, including:
- The Intel Cluster Checker: available on select configurations only, used to ensure Intel Cluster Ready (ICR) compliance
- Vendor-specific MPI run-time environments
- Enhanced InfiniBand support and optimized MPI libraries for selected switch manufacturers
- Enhanced device support for Mellanox, Myrinet, and QLogic
- Vendor kits with enhanced drivers, tools, and utilities specific to particular hardware vendors

Fully supported by Platform
Platform Cluster Manager is the only open Linux cluster solution fully supported by Platform Computing, the world leader in HPC management solutions. Unlike competing cluster toolkits, which are little more than thrown-together collections of open-source software, Platform Cluster Manager is the result of a close and long-standing engineering collaboration with our system partners. With a focus on quality and platform certification, Platform Computing and our systems partners jointly provide outstanding support on a 24/7 basis. When dealing with Platform, you're dealing with the organization that actually develops and builds the key software components comprising the cluster, ensuring top-notch support and speedy problem resolution. Platform Cluster Manager includes everything you need to have your HPC cluster up and running quickly. For ease of installation and management, software is packaged as easy-to-deploy, pre-built kits.
Of course, the Platform Cluster Manager cluster remains fully open, and administrators can still easily install their own software using more traditional approaches if they choose. Kits are easy to build, and sites may choose to package their own software as kits, or take advantage of a growing library of contributed kits from Platform Computing and others. Among the pre-integrated kits included in Platform Cluster Manager are:
- Base: contains the tools and applications for managing the Platform Cluster Manager cluster
- Web GUI: a complete web-based management interface and application-centric portal
- Cacti*: an open source reporting tool used to collect and graph various node metrics
- Ganglia: a powerful open-source resource monitoring solution
- HPC: a comprehensive collection of tools, libraries, and utilities specific to HPC clusters
- Platform Lava: a powerful, open-source workload scheduler 100% compatible with Platform LSF
- Nagios: a powerful open source host, service, and network monitoring facility
- NTOP: a tool to monitor network bandwidth and analyze traffic
- Java JRE: Java 2 Platform Standard Edition Runtime Environment
- OFED: a collection of drivers and libraries for InfiniBand

* Note: Cacti is an open source RRDtool-based graphing solution. Copyright The Cacti Group.
Built for Growth
Most clusters grow, not always in terms of nodes, but almost certainly in terms of applications, users, and complexity as clusters are augmented with new hardware, network, and storage devices. Platform Cluster Manager ensures that your cluster is future-proof: not only can you easily grow and scale, but you can support different operating systems and provisioning approaches as your needs evolve. With a unique driver-patching capability, your cluster can even accommodate the addition of hardware not yet invented, avoiding the need to completely re-install or upgrade the cluster to support new devices. Should your requirements outgrow the open source components included, a range of optional commercial packages are available that seamlessly integrate with the Platform Cluster Manager cluster, providing complete investment protection.

Supported Operating Systems
Platform Cluster Manager provides support for popular Linux environments including Red Hat Enterprise Linux, SUSE Linux, and CentOS. The latest set of supported hardware platforms and operating environments is available on the Platform Computing website.

For more information, contact Platform sales support: partnersales@platform.com

Platform Computing is the leader in grid and cloud computing software that dynamically connects IT resources to workload demand according to business policies. Over 2,000 of the world's largest organizations rely on our solutions to improve IT productivity and reduce data center costs. Platform has strategic relationships with Cray, Dell, HP, IBM, Intel, Microsoft, Red Hat, and SAS. Building on 16 years of market leadership, Platform continues to help data centers be more efficient, responsive, and dynamic.
World Headquarters: Platform Computing Inc., Markham, Ontario L3R 3T7, Canada (info@platform.com)
Sales contacts: North America (New York, San Jose, Detroit); Europe (Basingstoke, London, Paris, Düsseldorf, Munich: info-europe@platform.com); Asia-Pacific (Beijing, Xi'an: asia@platform.com; Tokyo: info-japan@platform.com; Singapore: wliaw@platform.com)

Copyright 2009 Platform Computing Corporation. The symbols designate trademarks of Platform Computing Corporation or identified third parties. All other logos and product names are the trademarks of their respective owners, errors and omissions excepted. Printed in Canada. Platform and Platform Computing refer to Platform Computing Corporation and each of its subsidiaries.
More informationQuickSpecs. HP InfiniBand Options for HP BladeSystems c-class. Overview
Overview HP supports 40Gbps (QDR) and 20Gbps (DDR) InfiniBand products that include mezzanine Host Channel Adapters (HCA) for server blades, switch blades for c-class enclosures, and rack switches and
More informationFlex System IB port FDR InfiniBand Adapter Lenovo Press Product Guide
Flex System IB6132 2-port FDR InfiniBand Adapter Lenovo Press Product Guide The Flex System IB6132 2-port FDR InfiniBand Adapter delivers low latency and high bandwidth for performance-driven server clustering
More informationQuickSpecs. HP Z 10GbE Dual Port Module. Models
Overview Models Part Number: 1Ql49AA Introduction The is a 10GBASE-T adapter utilizing the Intel X722 MAC and X557-AT2 PHY pairing to deliver full line-rate performance, utilizing CAT 6A UTP cabling (or
More informationQuickSpecs. HP NC6170 PCI-X Dual Port 1000SX Gigabit Server Adapter. Overview. Retired
The is a dual port fiber Gigabit server adapter that runs over multimode fiber cable. It is the first HP server adapter to combine dual port Gigabit Ethernet speed with PCI-X bus technology for fiber-optic
More informationQuickSpecs. NC7771 PCI-X 1000T Gigabit Server Adapter. HP NC7771 PCI-X 1000T Gigabit Server Adapter. Overview
Overview The NC7771 supports 10/100/1000Mbps Ethernet speeds as well as a PCI-X 64-bit/133MHz data path and it is backwards compatible with existing PCI bus architectures. Additionally, the NC7771 ships
More informationHP NC7771 PCI-X 1000T
Overview The NC7771 supports 10/100/1000Mbps Ethernet speeds as well as a PCI-X 64-bit/133MHz data path and it is backwards compatible with existing PCI bus architectures. This range of features enables
More informationHP InfiniBand Options for HP ProLiant and Integrity Servers Overview
Overview HP supports 40Gbps 4x Quad Data Rate (QDR) and 20Gbps 4X Double Data Rate (DDR) InfiniBand products that include Host Channel Adapters (HCA), switches, and cables for HP ProLiant servers, and
More informationIBM Flex System EN port 10Gb Ethernet Adapter IBM Redbooks Product Guide
IBM Flex System EN4132 2-port 10Gb Ethernet Adapter IBM Redbooks Product Guide The IBM Flex System EN4132 2-port 10Gb Ethernet Adapter delivers high-bandwidth and industry-leading Ethernet connectivity
More informationEthernet. High-Performance Ethernet Adapter Cards
High-Performance Ethernet Adapter Cards Supporting Virtualization, Overlay Networks, CPU Offloads and RDMA over Converged Ethernet (RoCE), and Enabling Data Center Efficiency and Scalability Ethernet Mellanox
More informationQuickSpecs. Models. HP NC380T PCI Express Dual Port Multifunction Gigabit Server Adapter. Overview
Overview The HP NC380T server adapter is the industry's first PCI Express dual port multifunction network adapter supporting TOE (TCP/IP Offload Engine) for Windows, iscsi (Internet Small Computer System
More informationModels HP NC320T PCI Express Gigabit Server Adapter B21
Overview The HP NC320T server adapter is a single port Gigabit Ethernet network adapter for Local Area Networks using copper cabling. The NC320T is a single lane (x1), serial device designed for PCI Express
More informationSun Dual Port 10GbE SFP+ PCIe 2.0 Networking Cards with Intel GbE Controller
Sun Dual Port 10GbE SFP+ PCIe 2.0 Networking Cards with Intel 82599 10GbE Controller Oracle's Sun Dual Port 10 GbE PCIe 2.0 Networking Cards with SFP+ pluggable transceivers, which incorporate the Intel
More informationCreating an agile infrastructure with Virtualized I/O
etrading & Market Data Agile infrastructure Telecoms Data Center Grid Creating an agile infrastructure with Virtualized I/O Richard Croucher May 2009 Smart Infrastructure Solutions London New York Singapore
More informationSUN BLADE 6000 VIRTUALIZED 40 GbE NETWORK EXPRESS MODULE
SUN BLADE 6000 VIRTUALIZED 40 GbE NETWORK EXPRESS MODULE IN-CHASSIS VIRTUALIZED 40 GBE ACCESS KEY FEATURES Virtualized 1 port of 40 GbE or 2 ports of 10 GbE uplinks shared by all server modules Support
More informationCisco UCS Virtual Interface Card 1225
Data Sheet Cisco UCS Virtual Interface Card 1225 Cisco Unified Computing System Overview The Cisco Unified Computing System (Cisco UCS ) is a next-generation data center platform that unites compute, networking,
More informationAccelerating Hadoop Applications with the MapR Distribution Using Flash Storage and High-Speed Ethernet
WHITE PAPER Accelerating Hadoop Applications with the MapR Distribution Using Flash Storage and High-Speed Ethernet Contents Background... 2 The MapR Distribution... 2 Mellanox Ethernet Solution... 3 Test
More informationRoCE vs. iwarp Competitive Analysis
WHITE PAPER February 217 RoCE vs. iwarp Competitive Analysis Executive Summary...1 RoCE s Advantages over iwarp...1 Performance and Benchmark Examples...3 Best Performance for Virtualization...5 Summary...6
More informationQuickSpecs. Models. HP NC110T PCI Express Gigabit Server Adapter. Overview. Retired
Overview The HP NC110T is a cost-effective Gigabit Ethernet server adapter that features single-port, copper, single lane (x1) PCI Express capability, with 48KB onboard memory that provides 10/100/1000T
More informationCONNECTRIX MDS-9132T, MDS-9396S AND MDS-9148S SWITCHES
SPECIFICATION SHEET Connectrix MDS Fibre Channel Switch Models CONNECTRIX MDS-9132T, MDS-9396S AND SWITCHES The Dell EMC Connecrix MDS 9000 Switch series support up to 32Gigabit per second (Gb/s) Fibre
More informationRetired. HP NC7771 PCI-X 1000T Gigabit Server Adapter Overview
Overview The NC7771 supports 10/100/1000Mbps Ethernet speeds as well as a PCI-X 64-bit/133MHz data path and it is backwards compatible with existing PCI bus architectures. Additionally, the NC7771 ships
More informationIntroduction to High-Speed InfiniBand Interconnect
Introduction to High-Speed InfiniBand Interconnect 2 What is InfiniBand? Industry standard defined by the InfiniBand Trade Association Originated in 1999 InfiniBand specification defines an input/output
More informationCavium FastLinQ 25GbE Intelligent Ethernet Adapters vs. Mellanox Adapters
Cavium FastLinQ 25GbE Intelligent Ethernet Adapters vs. Mellanox Adapters Cavium FastLinQ QL45000 25GbE adapters provide maximum performance and flexible bandwidth management to optimize virtualized servers
More informationMellanox Virtual Modular Switch
WHITE PAPER July 2015 Mellanox Virtual Modular Switch Introduction...1 Considerations for Data Center Aggregation Switching...1 Virtual Modular Switch Architecture - Dual-Tier 40/56/100GbE Aggregation...2
More informationMellanox Technologies Delivers The First InfiniBand Server Blade Reference Design
Mellanox Technologies Delivers The First InfiniBand Server Blade Reference Design Complete InfiniBand Reference Design Provides a Data Center in a Box SANTA CLARA, CALIFORNIA and YOKNEAM, ISRAEL, Jan 21,
More informationCisco UCS 6324 Fabric Interconnect
Data Sheet Cisco UCS 6324 Fabric Interconnect Cisco Unified Computing System Overview The Cisco Unified Computing System (Cisco UCS ) is a next-generation data center platform that unites computing, networking,
More informationOCP3. 0. ConnectX Ethernet Adapter Cards for OCP Spec 3.0
OCP3. 0 ConnectX Ethernet Adapter Cards for OCP Spec 3.0 High Performance 10/25/40/50/100/200 GbE Ethernet Adapter Cards in the Open Compute Project Spec 3.0 Form Factor For illustration only. Actual products
More informationMellanox ConnectX-4 NATIVE ESX Driver for VMware vsphere 5.5/6.0 Release Notes
Mellanox ConnectX-4 NATIVE ESX Driver for VMware vsphere 5.5/6.0 Release Notes www.mellanox.com Rev 4.5.2.0/ NOTE: THIS HARDWARE, SOFTWARE OR TEST SUITE PRODUCT ( PRODUCT(S) ) AND ITS RELATED DOCUMENTATION
More informationReadme for Platform Open Cluster Stack (OCS)
Readme for Platform Open Cluster Stack (OCS) Version 4.1.1-2.0 October 25 2006 Platform Computing Contents What is Platform OCS? What's New in Platform OCS 4.1.1-2.0? Supported Architecture Distribution
More informationEmulex OCe11102-N and Mellanox ConnectX-3 EN on Windows Server 2008 R2
Competitive Benchmark Emulex OCe11102-N and Mellanox ConnectX-3 EN on Windows Server 2008 R2 Performance Tests Simulating Real-World Workloads Table of contents At a Glance... 3 Introduction... 3 Performance...
More informationScalar i500. The Intelligent Midrange Tape Library Platform FEATURES AND BENEFITS
AUTOMATION 1 TO 18 DRIVES Scalar i500 The Intelligent Midrange Tape Library Platform 41 TO 409 CARTRIDGES MODULAR GROWTH, CONTINUOUS ROBOTICS CAPACITY-ON-DEMAND INTEGRATED ilayer MANAGEMENT SOFTWARE FEATURES
More informationVirtualWisdom SAN Performance Probe Family Models: ProbeFC8-HD, ProbeFC8-HD48, and ProbeFC16-24
DATASHEET VirtualWisdom SAN Performance Probe Family Models: ProbeFC8-HD, ProbeFC8-HD48, and ProbeFC16-24 Industry s only Fibre Channel monitoring probes enable comprehensive Infrastructure Performance
More informationEonStor DS - SAS Series High-Availability Solutions Delivering Excellent Storage Performance
EonStor DS EonStor DS - SAS Series High-Availability Solutions Delivering Excellent Storage Performance Reliability and Availability and volume copy/ mirror functionality enables quick data recovery after
More informationCisco UCS Virtual Interface Card 1227
Data Sheet Cisco UCS Virtual Interface Card 1227 Cisco Unified Computing System Overview The Cisco Unified Computing System (Cisco UCS ) is a next-generation data center platform that unites computing,
More information2 to 4 Intel Xeon Processor E v3 Family CPUs. Up to 12 SFF Disk Drives for Appliance Model. Up to 6 TB of Main Memory (with GB LRDIMMs)
Based on Cisco UCS C460 M4 Rack Servers Solution Brief May 2015 With Intelligent Intel Xeon Processors Highlights Integrate with Your Existing Data Center Our SAP HANA appliances help you get up and running
More informationQLogic FastLinQ QL45462HLCU 40GbE Converged Network Adapter
QLogic FastLinQ QL45462HLCU 40GbE Converged Network Adapter Fully featured 40GbE adapter delivers the best price per performance ratio versus 10GbE Increase VM density and accelerate multitenant networks
More informationIBM WebSphere MQ Low Latency Messaging Software Tested With Arista 10 Gigabit Ethernet Switch and Mellanox ConnectX
IBM WebSphere MQ Low Latency Messaging Software Tested With Arista 10 Gigabit Ethernet Switch and Mellanox ConnectX -2 EN with RoCE Adapter Delivers Reliable Multicast Messaging With Ultra Low Latency
More informationOracle Database Appliance X6-2-HA
ORACLE DATA SHEET Oracle Database Appliance X6-2-HA The Oracle Database Appliance X6-2-HA saves time and money by simplifying deployment, maintenance, and support of high availability database solutions.
More informationCisco UCS 6100 Series Fabric Interconnects
Cisco UCS 6100 Series Fabric Interconnects Cisco Unified Computing System Overview The Cisco Unified Computing System is a next-generation data center platform that unites compute, network, storage access,
More informationQLogic TrueScale InfiniBand and Teraflop Simulations
WHITE Paper QLogic TrueScale InfiniBand and Teraflop Simulations For ANSYS Mechanical v12 High Performance Interconnect for ANSYS Computer Aided Engineering Solutions Executive Summary Today s challenging
More informationi Scalar 2000 The Intelligent Library for the Enterprise FEATURES AND BENEFITS
AUTOMATION i Scalar 2000 The Intelligent Library for the Enterprise 1 TO 96 DRIVES 100 TO 3,492 CARTRIDGES CAPACITY-ON-DEMAND INTEGRATED iplatform ARCHITECTURE ilayer MANAGEMENT SOFTWARE FEATURES AND BENEFITS
More informationMicrosoft Office SharePoint Server 2007
Microsoft Office SharePoint Server 2007 Enabled by EMC Celerra Unified Storage and Microsoft Hyper-V Reference Architecture Copyright 2010 EMC Corporation. All rights reserved. Published May, 2010 EMC
More informationVM Migration Acceleration over 40GigE Meet SLA & Maximize ROI
VM Migration Acceleration over 40GigE Meet SLA & Maximize ROI Mellanox Technologies Inc. Motti Beck, Director Marketing Motti@mellanox.com Topics Introduction to Mellanox Technologies Inc. Why Cloud SLA
More informationMellanox ConnectX-3 and ConnectX-3 Pro Adapters for IBM System x IBM Redbooks Product Guide
Mellanox ConnectX-3 and ConnectX-3 Pro Adapters for IBM System x IBM Redbooks Product Guide High-performance computing (HPC) solutions require high bandwidth, low latency components with CPU offloads to
More information2014 LENOVO INTERNAL. ALL RIGHTS RESERVED.
2014 LENOVO INTERNAL. ALL RIGHTS RESERVED. Connectivity Categories and Selection Considerations NIC HBA CNA Primary Purpose Basic Ethernet Connectivity Connection to SAN/DAS Converged Network and SAN connectivity
More informationQuickSpecs. StorageWorks Fibre Channel SAN Switch 16-EL by Compaq M ODELS O VERVIEW
M ODELS Fibre Channel Switch 16-EL 212776-B21 253740-001 Fibre Channel Switch 16-EL - Bundle What is New This model is the only Compaq Switch that comes with a set of rack mounting rails, allowing the
More informationOracle Database Appliance X6-2S / X6-2M
Oracle Database Appliance X6-2S / X6-2M The Oracle Database Appliance saves time and money by simplifying deployment, maintenance, and support of database solutions for organizations of every size. Optimized
More informationORACLE DATABASE APPLIANCE X3 2
ORACLE DATABASE APPLIANCE X3 2 FEATURES Fully integrated database appliance Simple one button installation, patching, and diagnostics Oracle Database, Enterprise Edition Oracle Real Application Clusters
More informationOracle Database Appliance X6-2L
Oracle Database Appliance X6-2L The Oracle Database Appliance saves time and money by simplifying deployment, maintenance, and support of database solutions for organizations of every size. Optimized for
More informationCisco UCS C210 M2 General-Purpose Rack-Mount Server
Cisco UCS C210 M2 General-Purpose Rack-Mount Server Product Overview Cisco UCS C-Series Rack-Mount Servers extend unified computing innovations to an industry-standard form factor to help reduce total
More informationIBM TotalStorage SAN Switch M12
High availability director supports highly scalable fabrics for large enterprise SANs IBM TotalStorage SAN Switch M12 High port density packaging saves space Highlights Enterprise-level scalability and
More informationEMC SYMMETRIX VMAX 40K SYSTEM
EMC SYMMETRIX VMAX 40K SYSTEM The EMC Symmetrix VMAX 40K storage system delivers unmatched scalability and high availability for the enterprise while providing market-leading functionality to accelerate
More informationEMC SYMMETRIX VMAX 40K STORAGE SYSTEM
EMC SYMMETRIX VMAX 40K STORAGE SYSTEM The EMC Symmetrix VMAX 40K storage system delivers unmatched scalability and high availability for the enterprise while providing market-leading functionality to accelerate
More informationThe following InfiniBand products based on Mellanox technology are available for the HP BladeSystem c-class from HP:
Overview HP supports 56 Gbps Fourteen Data Rate (FDR) and 40Gbps 4X Quad Data Rate (QDR) InfiniBand (IB) products that include mezzanine Host Channel Adapters (HCA) for server blades, dual mode InfiniBand
More informationBenefits of Offloading I/O Processing to the Adapter
Benefits of Offloading I/O Processing to the Adapter FCoE and iscsi Protocol Offload Delivers Enterpriseclass Performance, Reliability, and Scalability Hewlett Packard Enterprise (HPE) and Cavium have
More informationRetired. Models HP NC340T PCI-X 4-port 1000T Gigabit Server Adapter B21
Overview The features four 10/100/1000T Gigabit Ethernet ports on a single card, saving valuable server I/O slots for other purposes. The four gigabit ports make the NC340T provides the greatest bandwidth
More informationCisco Meraki MS400 Series Cloud-Managed Aggregation Switches
Datasheet MS400 Series Cisco Meraki MS400 Series Cloud-Managed Aggregation Switches OVERVIEW The Cisco Meraki MS400 Series brings powerful cloud-managed switching to the aggregation layer. This range of
More informationEonStor DS - iscsi Series High-Availability Solutions Delivering Excellent Storage Performance
EonStor DS EonStor DS - iscsi Series High-Availability Solutions Delivering Excellent Storage Performance Reliability and Availability Local and remote replication functionality enables quick data recovery
More informationLossless 10 Gigabit Ethernet: The Unifying Infrastructure for SAN and LAN Consolidation
. White Paper Lossless 10 Gigabit Ethernet: The Unifying Infrastructure for SAN and LAN Consolidation Introduction As organizations increasingly rely on IT to help enable, and even change, their business
More informationQuickSpecs. Models. Standard Features Server Support. HP Integrity PCI-e 2-port 10GbE Cu Adapter. HP Integrity PCI-e 2-port 10GbE LR Adapter.
Overview The is an eight lane (x8) PCI Express (PCIe) 10 Gigabit network solution offering optimal throughput. This PCI Express Gen 2 adapter ships with two SFP+ (Small Form-factor Pluggable) cages suitable
More informationIBM Storwize V5000 disk system
IBM Storwize V5000 disk system Latest addition to IBM Storwize family delivers outstanding benefits with greater flexibility Highlights Simplify management with industryleading graphical user interface
More informationOver 70% of servers within a data center are not connected to Fibre Channel SANs for any of the following reasons:
Overview The provides modular multi-protocol SAN designs with increased scalability, stability and ROI on storage infrastructure. Over 70% of servers within a data center are not connected to Fibre Channel
More informationCisco UCS B460 M4 Blade Server
Data Sheet Cisco UCS B460 M4 Blade Server Product Overview The new Cisco UCS B460 M4 Blade Server uses the power of the latest Intel Xeon processor E7 v3 product family to add new levels of performance
More informationData Sheet Fujitsu ETERNUS DX200 S3 Disk Storage System
Data Sheet Fujitsu ETERNUS DX200 S3 Disk Storage System The all-in-one storage system for SMBs or subsidiaries ETERNUS DX - Business-centric Storage ETERNUS DX200 S3 Combining leading performance architecture
More informationSUN SERVER X2-8 SYSTEM
SUN SERVER X2-8 SYSTEM KEY FEATURES Compact design enterprise class server in 5U Powered by four or eight Intel Xeon processor E7-8800 product family Up to 4 TB of low voltage memory with128 DIMMs Eight
More informationHP BladeSystem c-class Ethernet network adapters
HP BladeSystem c-class Ethernet network adapters Family data sheet HP NC552m 10 Gb Dual Port Flex-10 Ethernet Adapter HP NC551m Dual Port FlexFabric 10 Gb Converged Network Adapter HP NC550m 10 Gb Dual
More informationWho says world-class high performance computing (HPC) should be reserved for large research centers? The Cray CX1 supercomputer makes HPC performance
Who says world-class high performance computing (HPC) should be reserved for large research centers? The Cray CX1 supercomputer makes HPC performance available to everyone, combining the power of a high
More informationChoosing the Best Network Interface Card for Cloud Mellanox ConnectX -3 Pro EN vs. Intel XL710
COMPETITIVE BRIEF April 5 Choosing the Best Network Interface Card for Cloud Mellanox ConnectX -3 Pro EN vs. Intel XL7 Introduction: How to Choose a Network Interface Card... Comparison: Mellanox ConnectX
More informationQuickSpecs. Models HP NC364T PCI Express Quad Port Gigabit Server Adapter B21. HP NC364T PCI Express Quad Port Gigabit Server Adapter.
DA - 12701 Worldwide Version 1 3.26.2007 Page 1 Overview The HP NC364T PCI Express Quad-Port Gigabit Server features four 10/100/1000T Gigabit Ethernet ports on a single card, saving valuable server I/O
More informationMS425 SERIES. 40G fiber aggregation switches designed for large enterprise and campus networks. Datasheet MS425 Series
Datasheet MS425 Series MS425 SERIES 40G fiber aggregation switches designed for large enterprise and campus networks AGGREGATION SWITCHING WITH MERAKI The Cisco Meraki 425 series extends cloud management
More informationIBM System p5 550 and 550Q Express servers
The right solutions for consolidating multiple applications on a single system IBM System p5 550 and 550Q Express servers Highlights Up to 8-core scalability using Quad-Core Module technology Point, click
More informationEMC Business Continuity for Microsoft Applications
EMC Business Continuity for Microsoft Applications Enabled by EMC Celerra, EMC MirrorView/A, EMC Celerra Replicator, VMware Site Recovery Manager, and VMware vsphere 4 Copyright 2009 EMC Corporation. All
More informationCisco UCS C210 M1 General-Purpose Rack-Mount Server
Cisco UCS C210 M1 General-Purpose Rack-Mount Server Product Overview Cisco UCS C-Series Rack-Mount Servers extend unified computing innovations to an industry-standard form factor to help reduce total
More informationData Sheet FUJITSU Storage ETERNUS DX200F All-Flash-Array
Data Sheet FUJITSU Storage ETERNUS DX200F All-Flash-Array Data Sheet FUJITSU Storage ETERNUS DX200F All-Flash-Array Superior performance at reasonable cost ETERNUS DX - Business-centric Storage ETERNUS
More informationHP NC364T PCI Express Quad Port Gigabit Server Adapter Overview
Overview The HP NC364T PCI Express Quad-Port Gigabit Server features four 10/100/1000T Gigabit Ethernet ports on a single card, saving valuable server I/O slots for other purposes. The four ports provide
More informationNST6000 UNIFIED HYBRID STORAGE. Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment
DATASHEET TM NST6000 UNIFIED HYBRID STORAGE Performance, Availability and Scale for Any SAN and NAS Workload in Your Environment UNIFIED The Nexsan NST6000 unified hybrid storage system is ideal for organizations
More informationQuickSpecs. Integrated NC7782 Gigabit Dual Port PCI-X LOM. Overview
Overview The integrated NC7782 dual port LOM incorporates a variety of features on a single chip for faster throughput than previous 10/100 solutions using Category 5 (or better) twisted-pair cabling,
More informationData Sheet Fujitsu ETERNUS DX80 S2 Disk Storage System
Data Sheet Fujitsu ETERNUS DX80 S2 Disk Storage System The Flexible Data Safe for Dynamic Infrastructures eternus dx s2 Disk Storage Systems The second generation of ETERNUS DX disk storage systems from
More informationFastLinQ QL41162HLRJ. 8th Generation 10Gb Converged Network Adapter with iscsi, FCoE, and Universal RDMA. Product Brief OVERVIEW
FastLinQ QL41162HLRJ 8th Generation 10Gb Converged Network Adapter with iscsi, FCoE, and Universal RDMA Delivers full line-rate 10GbE performance across both ports Universal RDMA Delivers choice and flexibility
More informationScaling to Petaflop. Ola Torudbakken Distinguished Engineer. Sun Microsystems, Inc
Scaling to Petaflop Ola Torudbakken Distinguished Engineer Sun Microsystems, Inc HPC Market growth is strong CAGR increased from 9.2% (2006) to 15.5% (2007) Market in 2007 doubled from 2003 (Source: IDC
More informationSUN STORAGETEK DUAL 8 Gb FIBRE CHANNEL DUAL GbE EXPRESSMODULE HOST BUS ADAPTER
SUN STORAGETEK DUAL 8 Gb FIBRE CHANNEL DUAL GbE EXPRESSMODULE HOST BUS ADAPTER KEY FEATURES MULTIPROTOCOL NETWORK HOST BUS ADAPTER FOR SUN BLADE SERVERS Comprehensive virtualization capabilities with support
More information