In partnership with NVIDIA

VelocityAI REFERENCE ARCHITECTURE WHITE PAPER


JULY 2018

Contents

Introduction 01
Challenges with Existing AI/ML/DL Solutions 01
Accelerate AI/ML/DL Workloads with Vexata VelocityAI 02
VelocityAI Reference Architecture 03
VelocityAI Network Configuration 04
DGX-1 NFS Client Configuration 04
VelocityAI Network Configuration - File Heads 04
VelocityAI Network Configuration - Storage 04
Imagenet Benchmark Configuration 05
VelocityAI Filesystem Configuration 05
VelocityAI Imagenet Benchmark Tests 06
Imagenet Benchmark Results for 150 KB and 1 MB File Size 07
Storage Analytics for Small and Large File Size 08
Conclusion 08

Introduction

ENABLING PREDICTIVE AND COGNITIVE ANALYTICS

Machine Learning (ML) and Deep Learning (DL) workloads are increasing in volume and complexity as organizations look to reduce training and operational timelines for artificial intelligence (AI) use cases. This has given rise to massively parallel GPU servers such as the NVIDIA DGX-1, which deliver massive compute power to run these machine learning frameworks. IDC* predicts that by 2019, 40% of Data Transformation initiatives will use AI services; by 2021, 75% of commercial enterprise apps will use AI, over 90% of consumers will interact with customer support bots, and over 50% of new industrial robots will leverage AI.

To accelerate training and operational cycles, the storage systems that power these AI/ML/DL pipelines must maintain ultra-low latency, massive ingest bandwidth, and heavy mixed random and sequential read/write handling. Architectures using direct attached storage (DAS) limit performance and data mobility, while existing all-flash arrays lack the sustained performance to deliver timely insights at scale.

NVIDIA and Vexata have teamed up to deliver an industry best-of-breed solution for customers moving to predictive, prescriptive and cognitive analytics. The joint solution comprises two or four NVIDIA DGX-1 servers with the Vexata VX-100FS file storage system. The VX-100FS is pre-configured and tuned, and deploys seamlessly in existing or new NFS environments. It can be configured with two or four Network Attached Storage (NAS) file heads and an accelerated storage system.

This paper discusses the joint solution and the reference architecture. The objective of this document is to outline the configuration and deployment details of the Vexata file storage system and to capture the performance benchmarks obtained by running a series of synthetic workloads. The joint solution will be offered directly to end customers or through qualified business partners.

Challenges with Existing AI/ML/DL Solutions

Existing AI/ML/DL solutions are based on sharding over direct attached storage architectures, where data locality is a concern:

- Poor utilization of expensive GPU cycles, as storage I/O is not fast enough to keep the GPUs fed
- Slower model training and inferencing elapsed time is opportunity lost for businesses
- Massively parallel performance is needed for all pipeline stages to keep up with GPU parallelism
- Current deployments based on direct attached storage, sharding and bringing compute closer to data lead to complexity
- DAS architectures force data to be staged first before computing
- Compute and storage cannot be scaled independently
- Three-way replication based protection leads to poor storage efficiency

* Source: IDC Technology Spotlight - Accelerate and Operationalize AI Deployments Using AI Optimized Infrastructure

Accelerate AI/ML/DL Workloads with Vexata VelocityAI

Vexata VX-100FS, with its transformative VX-OS, is purpose-built to overcome these machine learning challenges:

- Reduce training and inferencing time from days to hours, improving data scientist productivity
  - Accelerated data path with deterministic low-latency performance for better GPU utilization
  - Faster storage eliminates data locality concerns
- Access large training and inferencing data-sets
  - Accelerated, non-blocking access to NVMe media for large data ingest with low-latency I/O performance
- Consolidate and eliminate data movement between pipeline stages
  - Shared storage handles all data pipeline stages without performance degradation
  - Simultaneously supports small-block random I/O, large-block sequential I/O, and mixed read/write I/O
  - In-place data analytics with flexibility of ingest protocols (FC, NVMe-oF, NFS, SMB, S3)
- Storage security, protection and efficiency
  - RAID5/6 protection eliminates three-way copies; compression and always-on encryption

[Figure: AI data pipeline. Stage 1 - Ingest: many data types, small and large files, massive bandwidth, from sources such as financial systems and sensors. Stage 2 - Spark-based ETL: decode, augment tensors, label. Stage 3 - Build models using neural nets on the GPU cluster: train, infer, predict, iterate on large data-sets. Stage 4 - Predictive, prescriptive and cognitive analytics. Use cases: fraud analytics, quant trading, SAS analytics, kdb+, computer vision, speech recognition, hyperspectrometry, autonomous vehicles, biomedical cancer detection.]

VelocityAI Reference Architecture

Compute
- Four DGX-1 systems (8 Tesla V100 GPUs, 2x Intel Xeon E5 v4 CPUs each)
- The four 100 Gb ports on each DGX-1 are configured to run 100 GbE Ethernet
- 4 PFLOPS of deep learning performance
- Container-based NVIDIA GPU Cloud deep learning stack with machine learning frameworks

Networking
- Mellanox SN2700 100 GbE x 32-port switch (2 switches)

Storage
- Vexata VX-100FS NVMe-oF scale-out storage system
- 430 TB of fast file tier
- 50 GB/s of bandwidth

Scale
- Add DGX-1 servers, add file head nodes, add storage arrays

[Figure: Network topology. DGX-A, DGX-B, DGX-C and DGX-D connect through two Mellanox SN2700 switches (SW1, SW2) to controllers C1 and C2 of the Vexata VX-100FS, with 8x 100 GbE link groups shown on the diagram.]

VEXATA VX-100FS FILE STORAGE SYSTEM

4 File Heads
- Processor: Intel(R) Xeon(R) Gold
- Sockets: 2
- Cores per socket: 18
- Threads per core: 2
- Memory: 512 GB
- OS: CentOS v7.4

Accelerated Storage Node
- Brand: Vexata
- Model: VX-100F
- OS: Vexata OS release
- Usable Capacity: 430 TB
- Storage Modules (ESM): 16

VelocityAI Network Configuration

DGX-1 NFS CLIENT CONFIGURATION

For load balancing using DNS round robin, the following records need to be added on the DNS server:

    vx-nfs IN A <100GbE interface IP of node1>
    vx-nfs IN A <100GbE interface IP of node2>
    vx-nfs IN A <100GbE interface IP of node3>
    vx-nfs IN A <100GbE interface IP of node4>

NVIDIA DGX-1 NFS Client Access

The file system is accessible to the DGX-1 through a single mount point, and the DGX-1 clients can do a simple NFS mount to access it. For example (a persistent /etc/fstab form of this mount is sketched at the end of this section):

    mount.nfs -o rw,tcp,hard,intr,rsize=32768,wsize=32768,retry=10000,timeo=600,retrans=5,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,nfsvers=3,sloppy vx-nfs:/tmp/nfs1 /tmp/n1

VelocityAI NETWORK CONFIGURATION - FILE HEADS

The Vexata file storage system has four pre-configured NAS head nodes. These nodes require 16 Ethernet connections and 12 IP addresses: four IPs for management, four IPs for IPMI, and four client-facing IPs (two bonded 100 GbE connections per NAS head).

    NODE        ROLE        # OF ETHERNET PORTS   SPEED     IP ADDRESS   NETMASK   GATEWAY
    NAS Head 1  Management  1                     Auto
    NAS Head 1  IPMI        1                     Auto
    NAS Head 1  NFS         2                     100 GbE
    NAS Head 2  Management  1                     Auto
    NAS Head 2  IPMI        1                     Auto
    NAS Head 2  NFS         2                     100 GbE
    NAS Head 3  Management  1                     Auto
    NAS Head 3  IPMI        1                     Auto
    NAS Head 3  NFS         2                     100 GbE
    NAS Head 4  Management  1                     Auto
    NAS Head 4  IPMI        1                     Auto
    NAS Head 4  NFS         2                     100 GbE

(The IP address, netmask and gateway columns are filled in per deployment.)

VelocityAI NETWORK CONFIGURATION - STORAGE

The storage node requires six Ethernet connections (four management ports, two IPMI ports) and five IP addresses. The IP addresses can all be on the same subnet and can be either static or DHCP assigned.
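To make the DGX-1 client mount shown above persistent across reboots, the same options can go into /etc/fstab. This is a minimal sketch using the placeholder export path and mount point from the example; a real deployment would substitute its own paths:

    # /etc/fstab entry on a DGX-1 client (illustrative; vx-nfs:/tmp/nfs1 and /tmp/n1
    # are the placeholder export and mount point from the example above)
    vx-nfs:/tmp/nfs1  /tmp/n1  nfs  rw,tcp,hard,intr,rsize=32768,wsize=32768,retry=10000,timeo=600,retrans=5,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,nfsvers=3,sloppy  0  0

Running mount -a on the client then applies the entry and resolves vx-nfs through the DNS round-robin records above.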

    INTERFACE ROLE                 # OF ETHERNET PORTS   SPEED   IP ADDRESS   NETMASK   GATEWAY
    Primary Management Virtual IP  0                     Auto
    Management Controller 0        1                     Auto
    Management Controller 1        1                     Auto
    IPMI Controller 0              1                     Auto
    IPMI Controller 1              1                     Auto

(The IP address, netmask and gateway columns are filled in per deployment.)

IMAGENET BENCHMARK CONFIGURATION

- VelocityAI bandwidth is equally divided between training/inferencing and ingest/ETL/build
- ImageNet pre-trained models are used for the benchmark; AlexNet is used because it is storage-I/O heavy
- Inception V3, ResNet-50, ResNet-152, AlexNet and VGG16 container images
- Supervised learning with 1.28M labelled images in 1,000 categories used as the dataset
- Standard Docker image: nvcr.io/nvidia/tensorflow:18.04-py2
- Batch_size = 64

(A hedged sketch of a possible benchmark invocation appears at the end of this section.)

VelocityAI FILESYSTEM CONFIGURATION

The following command output shows the cluster configuration with the pagepool size and the name of the configured filesystem, the 16 volumes assigned to the 4 file heads, and the state of the 4 file heads. A further command shows the filesystem attributes with the associated inodes and the block size. [Command output screenshots omitted.]
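The paper does not show the exact benchmark invocation for the Imagenet benchmark configuration above. As a hedged sketch, the named NGC TensorFlow container could be launched with the dataset directory on the VX-100FS NFS mount and the standard tf_cnn_benchmarks suite run against AlexNet with a batch size of 64; the dataset path and the use of tf_cnn_benchmarks are assumptions, not statements from the paper:

    # Launch the NGC TensorFlow container named in the configuration above,
    # bind-mounting a hypothetical ImageNet directory from the NFS mount.
    nvidia-docker run --rm -it \
        -v /tmp/n1/imagenet:/data/imagenet \
        nvcr.io/nvidia/tensorflow:18.04-py2 bash

    # Inside the container: fetch the standard tf_cnn_benchmarks suite
    # (github.com/tensorflow/benchmarks) and run AlexNet with the stated batch size.
    # A branch matching the container's TensorFlow version may need to be checked out.
    git clone https://github.com/tensorflow/benchmarks.git
    python benchmarks/scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py \
        --model=alexnet \
        --batch_size=64 \
        --num_gpus=8 \
        --data_dir=/data/imagenet \
        --data_name=imagenet

The suite reports throughput in images/sec, which is the metric used in the results below.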

VelocityAI Imagenet Benchmark Tests

In filesystem performance testing, the 143 GB ImageNet dataset took 165 seconds on average to load into memory.
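A rough way to reproduce that kind of load-time measurement is to drop the client page cache and time a cold read of the dataset from the NFS mount. This is a sketch only, not the paper's actual procedure, and the dataset path is the hypothetical one used earlier:

    # Drop the client-side page cache so the read goes to the storage system,
    # then time a cold sequential read of the dataset.
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches
    time cat /tmp/n1/imagenet/train/* > /dev/null
    # 143 GB in ~165 s corresponds to roughly 0.87 GB/s.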

IMAGENET BENCHMARK RESULTS FOR 150 KB AND 1 MB FILE SIZE

TensorFlow benchmarks were run against ImageNet, a large visual database designed for visual object recognition research. The database comprises 1.28M labelled images for testing supervised learning. Pre-trained convolutional network models were used with the ImageNet dataset to measure the storage I/O performance and its ability to keep the GPUs fed during the training phase. AlexNet was used because of its ability to exercise the storage I/O stack. Performance is measured in terms of images/sec.

To emulate a real-world use case comprising ingest, ETL, modeling, training and inferencing, only half of the available bandwidth of the storage system is used for the training phase. This allows the remaining bandwidth to be used for the other phases, making the DGX cluster a real-world deep learning solution: ingest, ETL, modeling, training and inferencing can now run on the same solution. This is a unique advantage of VelocityAI, due to the transformative VX-OS.

Testing was also conducted on small images (150 KB), to emulate real-world sensor data, in addition to the large images (1 MB). VX-OS again provides the same bandwidth whether the I/O is small block or large block, and when mixed read/write I/O is happening at the same time, across all the deep learning phases.

Test configuration:
- Bandwidth equally divided between training/inferencing and ingest/ETL/build
- ImageNet pre-trained models; AlexNet used because it is storage-I/O heavy
- Inception V3, ResNet-50, ResNet-152, AlexNet and VGG16 container images
- Supervised learning, 1.28M labelled images, 1,000 categories
- Standard Docker image: nvcr.io/nvidia/tensorflow:18.04-py2
- Batch_size = 64
- Horovod
- VelocityAI - 1 DGX server, 2 file heads, 4 storage blades
- VelocityAI - 2 DGX servers, 2 file heads, 8 storage blades
- VelocityAI - 4 DGX servers, 4 file heads, 16 storage blades

    VEXATA NVIDIA SOLUTION                    FILE SIZE   B/W FOR TRAINING/INFERENCE   IMAGES/SEC   REMAINING B/W
    1 DGX server, 4 blades, 2 file heads      150 KB      6.25 GB/s                    41K          6.25 GB/s
    1 DGX server, 4 blades, 2 file heads      1 MB        6.25 GB/s                    6.25K        6.25 GB/s
    2 DGX servers, 8 blades, 2 file heads     150 KB      12.5 GB/s                    83K          12.5 GB/s
    2 DGX servers, 8 blades, 2 file heads     1 MB        12.5 GB/s                    12.5K        12.5 GB/s
    4 DGX servers, 16 blades, 4 file heads    150 KB      25 GB/s                      166K         25 GB/s
    4 DGX servers, 16 blades, 4 file heads    1 MB        25 GB/s                      25K          25 GB/s
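The images/sec figures follow directly from the bandwidth budget divided by the image size. As a sanity check on the single-DGX column (our arithmetic, not stated in the paper):

    6.25 GB/s / 150 KB per image ≈ 41,700 images/sec   (reported as 41K)
    6.25 GB/s / 1 MB per image   =  6,250 images/sec   (reported as 6.25K)

The two- and four-DGX columns scale these figures by 2x and 4x as the available bandwidth doubles and quadruples.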

STORAGE ANALYTICS FOR SMALL AND LARGE FILE SIZE

[Storage analytics charts for the small and large file size runs omitted.]

Conclusion

The VelocityAI solution clearly demonstrates price/performance leadership when industry best-of-breed systems are combined to jump-start a customer's digital transformation journey. It provides a single, easy-to-use solution that Data Scientists, Chief Data Officers and Chief Analytics Officers can leverage to accelerate their data pipelines and host a multitude of data-driven applications. Data-driven applications run on large data-sets that are continuously accessed by the compute layer for training and inferencing, and a neural network's predictions are only as good as the training data-set behind them. Data is the new oil, and the GPU compute layer needs to be kept well fed, eliminating hot spots and I/O bottlenecks at the storage layer. The NVIDIA DGX-1 provides massive parallelism (1 PFLOPS per DGX-1) and consolidation at the compute layer, and Vexata, with its transformative and unique VX-OS and FPGA acceleration, presents the same massive parallelism at the storage layer. The Mellanox 100 GbE fabric removes all data locality concerns. With this, VelocityAI is uniquely able to provide the highest throughput at deterministic latencies across all the deep learning phases, with all unstructured data types and for mixed workloads.

Acknowledgments

We would like to take this opportunity to sincerely thank our esteemed friends at NVIDIA - Darrin Johnson, James Mauro, Tony Paikeday and Richard Salazar - who spent cycles reviewing this document.

ABOUT VEXATA: Founded on the premise that every business is challenged to deliver cognitive, data-intensive applications, Vexata delivers 10x performance and efficiency improvements at a fraction of the cost of existing all-flash storage solutions. Learn more at vexata.com.

Contact Vexata: info@vexata.com

Vexata. All Rights Reserved. All third-party trademarks are the property of their respective companies or their subsidiaries in the U.S. and/or other countries.
