Jian Zhang, Senior Software Engineer. Jack Zhang, Senior Enterprise Architect.
1 Jian Zhang, Senior Software Engineer; Jack Zhang, Senior Enterprise Architect. April 2016
2 Agenda
- A brief introduction
- Ceph in PRC
- The stability challenges and optimizations
- The performance challenges and optimizations
- The opportunities: Ceph with future 3D NAND and 3D XPoint technology
- Summary
4 A brief introduction
Intel Cloud Storage Engineering team: we deliver optimized open-source storage solutions on Intel platforms.
What we do on Ceph:
- Ceph performance analysis, development, and optimization
- Tools and BKMs (best-known methods): CeTune, Virtual Storage Manager (VSM), and COSBench (Cloud Object Storage Benchmark) tools; performance tunings and BKMs
- Customer engagements: working with 35+ customers to help them enable Ceph-based storage solutions
6 Ceph in PRC
Ceph is much hotter in PRC with the growth of OpenStack:
- Redevelopment based on the upstream code; more and more companies are moving to open-source storage solutions
- Intel/Red Hat held two Ceph Days, in Beijing and Shanghai: 350+ attendees from 250+ companies, with self-media reports and re-clippings
- More and more PRC code contributors: ZTE, XSKY, H3C, LETV, UnitedStack, AliYun, eBay...
7 Ceph in PRC
An active and spontaneous local community: ceph.org.cn
- Growing the Ceph ecosystem: three meetups in Beijing and Shanghai with ~100 attendees on average; more cities (Zhengzhou) next
- Technical problem discussion: 1094 BBS users, 1025 WeChat group members, 2265 WeChat public-account subscribers
- Ceph localization: Ceph document localization and local Ceph mirrors
Material from
8 Who is using Ceph?
- Telecom: with system scale and data growth, performance requirements keep rising and become a new burden and bottleneck for the whole IT infrastructure; storage cost is high (30%-50% of total IT cost); the limited scalability of traditional disk arrays makes operation difficult. Searchable example: China Unicom
- CSP/IPDC: non-BAT IPDC customers are moving away from SAN/NAS solutions to open-source scale-out storage such as Ceph; the service works functionally, but performance is not up to par. Tier-2 CSPs are building their storage services with Ceph. Searchable examples: LeTV, Ctrip, PLCloud
- OEM/ODM: building Ceph-based storage solutions. Searchable examples: H3C, Quanta Cloud Technology
- Enterprise & research institutes: building Ceph-based storage products. Searchable examples:
9 The feedback
From customers, developers, and end users at CSPs, IPDCs, telecoms, enterprises, and research institutes:
- "Ceph is awesome! Performance is acceptable. It never lost data."
- "It's awesome that we can deliver three types of services on one single platform."
Beyond performance:
- "Can you deliver a training to us on the architecture/code?"
- "Is there any tool to deploy/test/manage a Ceph cluster?"
- "Hey, can you help? Our Ceph cluster has some performance problems. The cluster is complaining about slow requests."
- "Why does my OSD go up and down again and again?"
Top current Ceph problems (survey pie chart; shares of 6%, 9%, 16%, 17%, 26%, and 27%): performance, documentation, stability, code complexity, ecosystem, other.
"With Ceph, Perfect World is able to build a high-performance, highly scalable, and reliable software-defined storage solution to provide KVM private cloud services to many high-load, critical applications. In the deployment process, Ceph proved that software-defined storage is capable of delivering high reliability as well as high performance." Zhang Xiaoyi, VP, Perfect World
11 The stability problems: motivation
Ceph performance is good when the cluster is stable, but there are some stability problems:
- OSD flapping
- Slow requests
Sometimes it is more efficient to handle these with third-party software.
12 The stability problems: OSD flapping
Reported by customer 1: 3 nodes, each node running 1 monitor and 9 OSDs; the cluster network and public network are on separate networks.
Reproduce steps:
1. Cut off node 2's cluster network.
2. OSDs become unstable and start flapping, finally reaching a stable state after 40 minutes (OSDs on nodes 1 and 3 are up, while those on node 2 are down).
Log:
MON: osd heartbeat_check: no reply from osd.1 since back ... front ... (cutoff ...)
OSD: log_channel(cluster) log [WRN] : map e1807 wrongly marked me down
(Diagram: MON and OSDs on each node, with separate public and cluster networks.)
13 The stability problems: optimizations for OSD flapping
Smart failure detection:
- mon: support min_down_reporters counted by host (#6709). Today we use osd_min_down_reporters as the threshold to mark an OSD down. We extend the semantics so reporters can be counted at any CRUSH level (host, rack, etc.); a user can then require failure reports from at least two nodes before an OSD is marked down, which prevents an isolated host from making trouble for the cluster. In general, this PR prevents the OSDs on an isolated node from marking other healthy OSDs down.
- osd/OSD.cc: shut down after flapping a certain number of times (#6708). Shutting the OSD down after a certain number of reboots (flaps) speeds up cluster convergence.
- Do not try to boot up if we have no available peers (#6645): we are probably isolated in this case, so do not boot.
- Ignore reports from an OSD in the down state (WIP).
Better failure reports: aggregate the failure messages so an admin can identify connectivity issues easily, e.g.:
  cannot reach _% of peers outside of my $crushlevel $foo [on front|back]
  cannot reach _% of hosts in $crushlevel $foo [on front|back]
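As a hedged sketch of how the reporter-counting behavior above is configured (option names as they landed upstream around this timeframe; verify them against your Ceph release), the relevant ceph.conf settings look like:

```ini
[mon]
# Require failure reports from at least this many distinct reporters
# before marking an OSD down.
mon_osd_min_down_reporters = 2
# Count reporters per CRUSH subtree (e.g. per host) rather than per OSD,
# so a single isolated host cannot mark healthy peers down by itself.
mon_osd_reporter_subtree_level = host
```

With the subtree level set to host, the two reports must come from OSDs on different hosts.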
14 The stability problems: slow requests
Slow requests are a phenomenon that usually comes along with performance issues. If a ceph-osd daemon is slow to respond to a request, it generates log messages complaining about requests that are taking too long. The warning threshold defaults to 30 seconds and is configurable via the osd op complaint time option. When this happens, the cluster log receives messages: OpTracker records the time each request was initiated and reports the slow requests.
  log_channel(cluster) log [WRN] : 16 slow requests, 16 included below; oldest blocked for > ... secs
  log_channel(cluster) log [WRN] : slow request ... seconds old, received at ...: osd_op(client...: rbd_data.85642ae8944a... [set-alloc-hint object_size ... write_size ..., write ...~524288] 7.eced69f0 RETRY=6 ack+ondisk+retry+write+known_if_redirected e1916) currently reached_pg
Possible causes: a bad drive (check dmesg output); a kernel file system bug (check dmesg output); an overloaded cluster (check system load, iostat, etc.); a bug in the ceph-osd daemon.
Possible solutions: remove VMs/cloud solutions from Ceph hosts; upgrade the kernel; upgrade Ceph; restart OSDs.
Opportunities: better error handling; smart failure reporting/detection. We need a better mechanism, beyond OpTracker's scope, to detect what caused the slow requests.
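The complaint threshold above maps to a ceph.conf option, shown here as a hedged fragment (the value shown is the documented default for this era):

```ini
[osd]
# Warn in the cluster log when a request has been blocked for longer
# than this many seconds (default 30).
osd op complaint time = 30
```

OpTracker's view of the requests it is tracking can be inspected through the OSD admin socket, e.g. `ceph daemon osd.0 dump_ops_in_flight` and `ceph daemon osd.0 dump_historic_ops`.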
15 The stability problems: open questions
- How to detect a failure in a fixed (or short) time? End users are unlikely to wait and want to get involved to fix problems as soon as possible, and timeout-based solutions are hard to control.
- How to shorten the cluster-state convergence time?
- Better nested failure handling.
- Smart device failure detection, e.g., using SMART info or the perf counters of an OSD instance.
- How to expose more information about failure types (hints) from the log?
- How to troubleshoot the problems?
17 The performance problems: Ceph on all-flash array, motivation
Storage providers are struggling to achieve the required high performance:
- There is a growing trend for cloud providers to adopt SSDs: CSPs want to build EBS-like services for their OpenStack-based public/private clouds.
- There is strong demand to run enterprise applications: OLTP workloads running on Ceph; a high-performance, multi-purpose Ceph cluster is the key advantage.
- Performance is still an important factor, and SSD prices continue to decrease.
18 The performance problems: Ceph on all-flash array, configuration
Test environment:
- 5x client nodes (CLIENT 1-5, running FIO): Intel Xeon processor E GHz, 64GB memory, 10Gb NIC
- 5x storage nodes (CEPH1-CEPH5, 8 OSDs each, one node also hosting the MON): Intel Xeon processor E GHz, 128GB memory; 1x 1TB HDD for the OS; 1x Intel SSD DC P3700 (U.2) for the journal; 4x 1.6TB Intel SSD DC S3510 as data drives, with 2 OSD instances on each S3510 SSD
- Network: 1x 10Gb NIC for the public network, 2x 10Gb NIC for the cluster network
Note: refer to the backup for the detailed test configuration (hardware, Ceph, and testing scripts).
19 The performance problems: Ceph on all-flash array, observations and analysis
- LevelDB became the bottleneck: single-threaded LevelDB pushed one core to 100% utilization.
- Omap overhead: among the threads, on average ~47 threads are running; ~10 pipe threads and ~9 OSD op threads are running, and most OSD op threads are asleep (top -H). The OSD op threads are waiting for the FileStore throttle to be released. Disabling omap operations speeds up the release of the FileStore throttle, which puts more OSD op threads into the running state (on average ~105 threads running); throughput improved 63%.
- High CPU consumption: 70% CPU utilization of two high-end Xeon E5 v3 processors (36 cores) with 4 S3510s. perf showed that the most CPU-intensive functions are malloc, free, and other system calls.
*Bypass omap: ignore object_map->set_keys in FileStore::_omap_setkeys; for tests only.
20 The performance problems: Ceph on all-flash array, tuning and optimization efforts (normalized 4K random read/write)
4K random read tunings (each builds on the previous):
- Default: single OSD
- Tuning-1: 2 OSD instances per SSD
- Tuning-2: Tuning-1 + debug=0
- Tuning-3: Tuning-2 + jemalloc cache
- Tuning-4: Tuning-3 + read_ahead_size=16
- Tuning-5: Tuning-4 + osd_op_thread=32
- Tuning-6: Tuning-5 + rbd_op_thread=4
4K random write tunings (each builds on the previous):
- Default: single OSD
- Tuning-1: 2 OSD instances per SSD
- Tuning-2: Tuning-1 + debug=0
- Tuning-3: Tuning-2 + op_tracker off, fd tuning
- Tuning-4: Tuning-3 + jemalloc
- Tuning-5: Tuning-4 + RocksDB to store omap
- Tuning-6: N/A
Up to 16x performance improvement for 4K random read (peak throughput 1.08M IOPS) and up to 7.6x for 4K random write (140K IOPS).
21 The performance problems: Ceph on all-flash array, RBD scale test
Random read performance (IOPS @ average latency):
- 4K: 1.08M @ 3.4 ms
- 8K: 500K @ 8.8 ms
- 16K: ... @ 10 ms
- 64K: 63K @ 40 ms
Random write performance (IOPS @ average latency):
- 4K: 144K @ 4.3 ms
- 8K: 132K @ 4.1 ms
- 16K: 88K @ 2.7 ms
- 64K: ... @ 2.6 ms
1.08M IOPS for 4K random read and 144K IOPS for 4K random write with tunings and optimizations: excellent random read performance and acceptable random write performance.
22 The performance problems: Ceph* SSD cluster vs. HDD cluster performance comparison
Both clusters keep journals on PCI Express*/NVM Express* SSDs.
- 4K random write: an HDD cluster would need ~58x the hardware (~2320 HDDs) to match the SSD cluster's performance.
- 4K random read: ~175x (~7024 HDDs) to match.
Client nodes: 5 nodes with Intel Xeon processor E GHz, 64GB memory; OS: Ubuntu* Trusty.
Storage nodes: 5 nodes with Intel Xeon processor E GHz, 128GB memory; Ceph* version 9.2.0; OS: Ubuntu Trusty; 1x P3700 SSD for the journal per node.
Cluster difference: the SSD cluster has 4x S3510 1.6TB per node for OSDs; the HDD cluster has 10x SATA 7200 RPM HDDs per node for OSDs.
All-SSD Ceph helps deliver excellent TCO (both CapEx and OpEx): not only performance, but also space, power, failure rate, etc.
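The HDD-equivalence claim above can be sanity-checked with a little arithmetic; the per-HDD figures derived below are implied by the slide's counts, not measured values:

```python
# Sanity check of the slide's HDD-equivalence claim. The per-HDD numbers
# derived here are implied by the slide's drive counts, not measurements.
SSD_4K_WRITE_IOPS = 144_000    # 5-node all-flash cluster, 4K random write
SSD_4K_READ_IOPS = 1_080_000   # 5-node all-flash cluster, 4K random read
HDDS_FOR_WRITE = 2320          # "~58x HDD cluster (~2320 HDDs)"
HDDS_FOR_READ = 7024           # "~175x HDD cluster (~7024 HDDs)"

# Implied client-visible Ceph throughput per HDD:
write_per_hdd = SSD_4K_WRITE_IOPS / HDDS_FOR_WRITE
read_per_hdd = SSD_4K_READ_IOPS / HDDS_FOR_READ
print(f"~{write_per_hdd:.0f} write IOPS/HDD, ~{read_per_hdd:.0f} read IOPS/HDD")
```

This works out to roughly 62 write and 154 read IOPS per HDD through Ceph, which is plausible for 7200 RPM drives doing 4K random I/O.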
24 Opportunities: Ceph with 3D NAND and 3D XPoint technology
3D NAND and 3D XPoint technology is emerging. Can we solve today's performance problems with Intel 3D XPoint technology?
- Persistent-memory-based KV database
- Persistent-memory-based BlueStore backend
- Client-side cache
- Hyper-converged solutions
And how can we build cost-effective Ceph storage solutions with 3D NAND solid-state drives?
25 NAND flash vs. 3D XPoint technology for Ceph tomorrow
- 3D MLC and TLC NAND: enable higher-capacity OSDs at a lower price
- 3D XPoint technology: higher performance, opening up new use cases (DRAM extension, key/value)
26 Intel continues to drive technology: accelerating solid-state storage in computing platforms (2D NAND to 3D NAND, 32 tiers)
- Capacity: enables high-density flash devices
- Cost: achieves a lower cost per gigabyte than 2D NAND at maturity
- Confidence: the 3D architecture increases performance and endurance
27 3D XPoint technology: a new class of non-volatile memory media
- 1000X faster than NAND(1)
- ...X the endurance of NAND(1)
- 10X denser than DRAM(1)
(1) Technology claims are based on comparisons of latency, density, and write-cycling metrics among memory technologies, recorded on published specifications of in-market memory products, against internal Intel specifications.
NAND-like densities and DRAM-like speeds.
28 3D XPoint technology breaks the memory/storage barrier
- SRAM (memory): latency 1X; size of data 1X
- DRAM (memory): latency ~10X; size of data ~100X
- 3D XPoint memory media (storage): latency ~100X; size of data ~1,000X
- NAND SSD (storage): latency ~100,000X; size of data ~1,000X
- HDD (storage): latency ~10 millionX; size of data ~10,000X
Technology claims are based on comparisons of latency, density, and write-cycling metrics among memory technologies, recorded on published specifications of in-market memory products, against internal Intel specifications.
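To make the ratio ladder concrete, the sketch below converts the relative latencies into approximate absolute numbers, assuming an SRAM reference of about 1 nanosecond (the baseline is our assumption; the slide only gives ratios):

```python
# Relative latency multipliers from the slide, scaled from an assumed
# SRAM baseline of ~1 ns (assumption; the slide gives only ratios).
SRAM_NS = 1.0
ladder = {
    "SRAM": 1,
    "DRAM": 10,
    "3D XPoint": 100,
    "NAND SSD": 100_000,
    "HDD": 10_000_000,
}
for tier, mult in ladder.items():
    ns = SRAM_NS * mult
    # Pick a readable unit for each tier.
    if ns < 1_000:
        print(f"{tier}: ~{ns:.0f} ns")
    elif ns < 1_000_000:
        print(f"{tier}: ~{ns/1_000:.0f} us")
    else:
        print(f"{tier}: ~{ns/1_000_000:.0f} ms")
```

With that baseline, the ladder lands near familiar figures: DRAM ~10 ns, 3D XPoint ~100 ns, NAND SSD ~100 us, HDD ~10 ms; note the 1000x gap between 3D XPoint and NAND that the slide highlights.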
29 Intel Optane (prototype) vs. Intel SSD DC P3700 Series at QD=1
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase.
Server configuration: 2x Intel Xeon E5 v3; NAND-based NVM Express* (NVMe) SSD: Intel DC P3700; 3D XPoint-based SSD: Intel Optane NVMe (prototype); OS: Red Hat* 7.1
30 Storage hierarchy tomorrow
- Hot tier (server side and/or AFA; business processing, high-performance/in-memory analytics, scientific, cloud, web/search/graph): DRAM (10GB/s per channel, ~100 ns latency); 3D XPoint DIMMs (~6GB/s per channel, ~250 ns latency); NVM Express* (NVMe) 3D XPoint SSDs (PCI Express* (PCIe*) 3.0 x4 link, ~3.2 GB/s, <10 microsecond latency)
- Warm tier (big data analytics (Hadoop*), object store/active-archive: Swift, lambert, HDFS, Ceph*): NVMe 3D NAND SSDs (PCIe 3.0 x4/x2 link, <100 microsecond latency)
- Cold tier (low-cost archive): NVMe 3D NAND SSDs; SATA or SAS HDDs (SATA* 6Gbps); minutes, offline
Comparisons between memory technologies based on in-market product specifications and internal Intel specifications.
31 3D XPoint opportunities: KV database
A new key-value DB on persistent memory, as an alternative to LevelDB or RocksDB for BlueStore metadata in Ceph, to speed up metadata operations:
- Implemented with the Intel NVM Library (libpmemobj, libpmemlib, libpmemblk)
- Bw-tree as the internal index: a mapping table virtualizes both the location and the size of pages, and delta updates maximize the CPU cache hit ratio
- libpmemobj transactions support KV transactions, and the libpmemobj library manages the disk space
- Runs over a DAX-enabled file system (file API) or directly on a PMEM device (load/store via mmap)
(Diagram: BlueStore with metadata in the persistent-memory KV store and data on PMEMDevices.)
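As a hedged illustration of the Bw-tree-style delta update mentioned above (the class, method names, and consolidation threshold are invented for illustration; this is not the team's actual code), the core idea is that updates are prepended as delta records against a mapping-table entry instead of rewriting the page in place:

```python
# Illustrative sketch of Bw-tree-style delta updates: a mapping table
# indirects page IDs to page state, and writes prepend delta records
# rather than modifying the consolidated page image in place.
class DeltaPage:
    def __init__(self, base):
        self.base = dict(base)   # last consolidated page image
        self.deltas = []         # newest-first list of (key, value) updates

    def put(self, key, value):
        # Prepend a delta instead of updating the base page in place.
        self.deltas.insert(0, (key, value))
        if len(self.deltas) > 8:          # consolidation threshold (arbitrary)
            self.consolidate()

    def get(self, key):
        for k, v in self.deltas:          # newest delta wins
            if k == key:
                return v
        return self.base.get(key)

    def consolidate(self):
        # Fold deltas (oldest first) into a fresh consolidated image.
        for k, v in reversed(self.deltas):
            self.base[k] = v
        self.deltas.clear()

# Mapping table: page ID -> page, so a page can move or resize without
# touching any parent pointers.
mapping_table = {0: DeltaPage({"a": 1})}
mapping_table[0].put("b", 2)
mapping_table[0].put("a", 3)
print(mapping_table[0].get("a"))  # 3: the delta shadows the base image
```

Because recent updates cluster in a short delta chain, lookups touch a small, cache-resident structure, which is the CPU-cache-hit argument the slide makes.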
32 3D XPoint opportunities: BlueStore backend
Three usages for a PMEM device in BlueStore:
- Backend of BlueStore data: a raw PMEM block device, or a file on a DAX-enabled FS
- Backend of RocksDB (metadata, via BlueFS): a raw PMEM block device, or a file on a DAX-enabled FS
- Backend of RocksDB's WAL: a raw PMEM block device, or a file on a DAX-enabled FS
Two methods for accessing PMEM devices:
- libpmemblk
- mmap + libpmemlib
(Diagram: BlueStore with RocksDB/BlueFS for metadata and data over PMEMDevices.)
33 3D XPoint opportunities: client-side cache
Overview:
- Client-side cache: caching on the compute node; a local read cache and a distributed write cache; an independent cache layer between RBD and RADOS
Extensible framework:
- Pluggable design and cache policies; general caching interfaces (memcached-like API)
Data services:
- Deduplication and compression when flushing to HDD
Value-add feature designed for 3D XPoint devices:
- Log-structured object store for the write cache
(Diagram: compute nodes with VMs and PM-backed read/write caches forming a caching tier over an HDD capacity tier of OSDs, with dedup/compression on flush.)
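A minimal, hedged sketch of the caching layer described above (the class, its interface, and the dict standing in for the capacity tier are invented for illustration; this is not the Ceph or RADOS API): a local LRU read cache plus a log-structured write cache that deduplicates on flush.

```python
# Minimal illustrative client-side cache: LRU read cache plus a
# log-structured write cache flushed to a backing "capacity tier".
from collections import OrderedDict

class ClientSideCache:
    def __init__(self, backend, capacity=4):
        self.backend = backend            # dict standing in for the capacity tier
        self.read_cache = OrderedDict()   # LRU read cache
        self.capacity = capacity
        self.write_log = []               # append-only (log-structured) writes

    def read(self, key):
        if key in self.read_cache:
            self.read_cache.move_to_end(key)      # LRU hit: mark most recent
            return self.read_cache[key]
        value = self.backend.get(key)             # miss: fetch from capacity tier
        self.read_cache[key] = value
        if len(self.read_cache) > self.capacity:
            self.read_cache.popitem(last=False)   # evict least recently used
        return value

    def write(self, key, value):
        # Append to the write log; the backend is only updated on flush.
        self.write_log.append((key, value))
        self.read_cache[key] = value

    def flush(self):
        # Dedup: only the newest value per key reaches the capacity tier.
        latest = dict(self.write_log)
        self.backend.update(latest)
        self.write_log.clear()

store = {"obj1": b"old"}
cache = ClientSideCache(store)
cache.write("obj1", b"new")
print(cache.read("obj1"))   # b'new' served from cache; store still holds b'old'
cache.flush()
print(store["obj1"])        # b'new' after flush
```

On persistent memory the write log would survive a crash, which is what makes a write-back (rather than write-through) design safe to contemplate here.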
34 3D NAND: a cost-effective Ceph solution
An enterprise-class, highly reliable, feature-rich, and cost-effective AFA solution:
- The NVMe SSD is today's SSD, and the 3D NAND or TLC SSD is today's HDD: NVMe as the journal, high-capacity SATA SSD or 3D NAND SSD as the data store.
- Ceph node today: 1x P3700 M.2 800GB NVMe SSD for the journal, 4x S3510 SATA SSDs for data.
- Ceph node tomorrow: P3700 & 3D XPoint SSDs for the journal, 5x P3520 4TB 3D NAND SSDs for data.
- Provides high performance and high capacity in a more cost-effective solution: 1M 4K random read IOPS delivered by 5 Ceph nodes; cost-effective, as it would take ~1000 HDD Ceph nodes (10K HDDs) to deliver the same throughput; high capacity, with 100TB in 5 nodes; with special software optimization on the FileStore and BlueStore backends.
36 Summary
- Ceph is becoming more popular with the growth of OpenStack in China.
- Challenges still exist with stability and performance.
- Intel is working with the community to improve Ceph performance.
37 Acknowledgements
This is joint team work. Thanks for the contributions of Chendi Xue, Jianpeng Ma, Xinxin Shu, and Yuan Zhou. Thanks for the contributions of ceph.org.cn from Hang
38 Legal Notices and Disclaimers
Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Learn more at intel.com, or from the OEM or retailer. No computer system can be absolutely secure.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit
Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.
This document contains information on products, services, and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications, and roadmaps.
Statements in this document that refer to Intel's plans and expectations for the quarter, the year, and the future are forward-looking statements that involve a number of risks and uncertainties. A detailed discussion of the factors that could affect Intel's results and plans is included in Intel's SEC filings, including the annual report on Form 10-K.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web sites and confirm whether referenced data are accurate.
Intel, Xeon, and the Intel logo are trademarks of Intel Corporation in the United States and other countries. *Other names and brands may be claimed as the property of others. Intel Corporation.
39 Backup
40 NVM Library (source: Typical NVDIMM Software Architecture)
41 Ceph all-flash tunings (ceph.conf [global]; numeric values lost in transcription are left blank)
debug <subsystem> = 0/0 for all subsystems (paxos, journal, mds_balancer, mds, lockdep, auth, mds_log, mon, perfcounter, monc, rbd, throttle, mds_migrator, client, rgw, finisher, journaler, ms, hadoop, mds_locker, tp, context, osd, bluestore, objclass, objecter, filer, mds_log_expire, crush, optracker, rados, heartbeatmap, buffer, asok, objectcacher, filestore, timer); debug log = 0
mon_pg_warn_max_per_osd =
mutex_perf_counter = True
rbd_cache = False
ms_crc_header = False
ms_crc_data = False
osd_pool_default_pgp_num =
osd_pool_default_size = 2
rbd_op_threads = 4
cephx require signatures = False
cephx sign messages = False
osd_pool_default_pg_num =
throttler_perf_counter = False
auth_service_required = none
auth_cluster_required = none
auth_client_required = none
osd_mount_options_xfs = rw,noatime,inode64,logbsize=256k,delaylog
osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f -i size=2048
filestore_queue_max_ops = 5000
osd_client_message_size_cap = 0
objecter_infilght_op_bytes =
ms_dispatch_throttle_bytes =
filestore_wbthrottle_enable = True
filestore_fd_cache_shards = 64
objecter_inflight_ops =
filestore_queue_committing_max_bytes =
osd_op_num_threads_per_shard = 2
filestore_queue_max_bytes =
osd_op_threads = 32
osd_op_num_shards = 16
filestore_max_sync_interval = 10
filestore_op_threads = 16
osd_pg_object_context_cache_count =
journal_queue_max_ops = 3000
journal_queue_max_bytes =
journal_max_write_entries = 1000
filestore_queue_committing_max_ops = 5000
journal_max_write_bytes =
osd_enable_op_tracker = False
filestore_fd_cache_size =
osd_client_message_cap = 0
white paper FlashGrid Software Intel SSD DC P3700/P3600/P3500 Topic: Hyper-converged Database/Storage FlashGrid Software Enables Converged and Hyper-Converged Appliances for Oracle* RAC Abstract FlashGrid
More informationInnovator, Disruptor or Laggard, Where will your storage applications live? Next generation storage
Innovator, Disruptor or Laggard, Where will your storage applications live? Next generation storage Bev Crair, Vice President and General Manager, Storage Group Intel The world is changing Information
More informationHewlett Packard Enterprise HPE GEN10 PERSISTENT MEMORY PERFORMANCE THROUGH PERSISTENCE
Hewlett Packard Enterprise HPE GEN10 PERSISTENT MEMORY PERFORMANCE THROUGH PERSISTENCE Digital transformation is taking place in businesses of all sizes Big Data and Analytics Mobility Internet of Things
More informationTechnology Advancement in SSDs and Related Ecosystem Changes
Technology Advancement in SSDs and Related Ecosystem Changes Sanjeev Kumar/ Ravish Sharma Software Product Engineering, HiTech, Tata Consultancy Services 27 May 2016 1 SDC India 2016 Agenda Disruptive
More informationData-Centric Innovation Summit ALPER ILKBAHAR VICE PRESIDENT & GENERAL MANAGER MEMORY & STORAGE SOLUTIONS, DATA CENTER GROUP
Data-Centric Innovation Summit ALPER ILKBAHAR VICE PRESIDENT & GENERAL MANAGER MEMORY & STORAGE SOLUTIONS, DATA CENTER GROUP tapping data value, real time MOUNTAINS OF UNDERUTILIZED DATA Challenge Shifting
More informationVMware vsphere Virtualization of PMEM (PM) Richard A. Brunner, VMware
VMware vsphere Virtualization of PMEM (PM) Richard A. Brunner, VMware Disclaimer This presentation may contain product features that are currently under development. This overview of new technology represents
More informationThe Impact of SSD Selection on SQL Server Performance. Solution Brief. Understanding the differences in NVMe and SATA SSD throughput
Solution Brief The Impact of SSD Selection on SQL Server Performance Understanding the differences in NVMe and SATA SSD throughput 2018, Cloud Evolutions Data gathered by Cloud Evolutions. All product
More informationIntel optane memory as platform accelerator. Vladimir Knyazkin
Intel optane memory as platform accelerator Vladimir Knyazkin 2 Legal Disclaimers Intel technologies features and benefits depend on system configuration and may require enabled hardware, software or service
More informationAn Exploration into Object Storage for Exascale Supercomputers. Raghu Chandrasekar
An Exploration into Object Storage for Exascale Supercomputers Raghu Chandrasekar Agenda Introduction Trends and Challenges Design and Implementation of SAROJA Preliminary evaluations Summary and Conclusion
More informationAccelerating Real-Time Big Data. Breaking the limitations of captive NVMe storage
Accelerating Real-Time Big Data Breaking the limitations of captive NVMe storage 18M IOPs in 2u Agenda Everything related to storage is changing! The 3rd Platform NVM Express architected for solid state
More information3D Xpoint Status and Forecast 2017
3D Xpoint Status and Forecast 2017 Mark Webb MKW 1 Ventures Consulting, LLC Memory Technologies Latency Density Cost HVM ready DRAM ***** *** *** ***** NAND * ***** ***** ***** MRAM ***** * * *** 3DXP
More informationOracle Exadata: Strategy and Roadmap
Oracle Exadata: Strategy and Roadmap - New Technologies, Cloud, and On-Premises Juan Loaiza Senior Vice President, Database Systems Technologies, Oracle Safe Harbor Statement The following is intended
More informationDisclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme
STO1053BES Redefine vsan Deployments with Next Generation Intel Xeon processor, Intel Optane and Intel 3D NAND SSDs VMworld 2017 Kathryn Vandiver, vsan ReadyLabs Engineering, VMware Vivek Sarathy, Non-Volatile
More informationENVISION TECHNOLOGY CONFERENCE. Functional intel (ia) BLA PARTHAS, INTEL PLATFORM ARCHITECT
ENVISION TECHNOLOGY CONFERENCE Functional Safety @ intel (ia) BLA PARTHAS, INTEL PLATFORM ARCHITECT Legal Notices & Disclaimers This document contains information on products, services and/or processes
More informationVirtuozzo Hyperconverged Platform Uses Intel Optane SSDs to Accelerate Performance for Containers and VMs
Solution brief Software-Defined Data Center (SDDC) Hyperconverged Platforms Virtuozzo Hyperconverged Platform Uses Intel Optane SSDs to Accelerate Performance for Containers and VMs Virtuozzo benchmark
More informationEntry-level Intel RAID RS3 Controller Family
PRODUCT Brief Entry-Level Intel RAID RS3 Controller Portfolio Entry-level Intel RAID RS3 Controller Family 12Gb/s connectivity and basic data protection RAID matters. Rely on Intel RAID. Cost-effective
More informationThe Transition to PCI Express* for Client SSDs
The Transition to PCI Express* for Client SSDs Amber Huffman Senior Principal Engineer Intel Santa Clara, CA 1 *Other names and brands may be claimed as the property of others. Legal Notices and Disclaimers
More informationWindows Support for PM. Tom Talpey, Microsoft
Windows Support for PM Tom Talpey, Microsoft Agenda Windows and Windows Server PM Industry Standards Support PMDK Support Hyper-V PM Support SQL Server PM Support Storage Spaces Direct PM Support SMB3
More informationThe Comparison of Ceph and Commercial Server SAN. Yuting Wu AWcloud
The Comparison of Ceph and Commercial Server SAN Yuting Wu wuyuting@awcloud.com AWcloud Agenda Introduction to AWcloud Introduction to Ceph Storage Introduction to ScaleIO and SolidFire Comparison of Ceph
More informationA fields' Introduction to SUSE Enterprise Storage TUT91098
A fields' Introduction to SUSE Enterprise Storage TUT91098 Robert Grosschopff Senior Systems Engineer robert.grosschopff@suse.com Martin Weiss Senior Consultant martin.weiss@suse.com Joao Luis Senior Software
More informationLow-Overhead Flash Disaggregation via NVMe-over-Fabrics Vijay Balakrishnan Memory Solutions Lab. Samsung Semiconductor, Inc.
Low-Overhead Flash Disaggregation via NVMe-over-Fabrics Vijay Balakrishnan Memory Solutions Lab. Samsung Semiconductor, Inc. 1 DISCLAIMER This presentation and/or accompanying oral statements by Samsung
More informationIntroducing SUSE Enterprise Storage 5
Introducing SUSE Enterprise Storage 5 1 SUSE Enterprise Storage 5 SUSE Enterprise Storage 5 is the ideal solution for Compliance, Archive, Backup and Large Data. Customers can simplify and scale the storage
More informationWindows Support for PM. Tom Talpey, Microsoft
Windows Support for PM Tom Talpey, Microsoft Agenda Industry Standards Support PMDK Open Source Support Hyper-V Support SQL Server Support Storage Spaces Direct Support SMB3 and RDMA Support 2 Windows
More informationProvisioning Intel Rack Scale Design Bare Metal Resources in the OpenStack Environment
Implementation guide Data Center Rack Scale Design Provisioning Intel Rack Scale Design Bare Metal Resources in the OpenStack Environment NOTE: If you are familiar with Intel Rack Scale Design and OpenStack*
More informationRed Hat Ceph Storage and Samsung NVMe SSDs for intensive workloads
Red Hat Ceph Storage and Samsung NVMe SSDs for intensive workloads Power emerging OpenStack use cases with high-performance Samsung/ Red Hat Ceph reference architecture Optimize storage cluster performance
More informationEnterprise Architectures The Pace Accelerates Camberley Bates Managing Partner & Analyst
Enterprise Architectures The Pace Accelerates Camberley Bates Managing Partner & Analyst Change is constant in IT.But some changes alter forever the way we do things Inflections & Architectures Solid State
More informationSupermicro All-Flash NVMe Solution for Ceph Storage Cluster
Table of Contents 2 Powering Ceph Storage Cluster with Supermicro All-Flash NVMe Storage Solutions 4 Supermicro Ceph OSD Ready All-Flash NVMe Reference Architecture Planning Consideration Supermicro NVMe
More informationAdrian Proctor Vice President, Marketing Viking Technology
Storage PRESENTATION in the TITLE DIMM GOES HERE Socket Adrian Proctor Vice President, Marketing Viking Technology SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless
More informationPerformance Benefits of Running RocksDB on Samsung NVMe SSDs
Performance Benefits of Running RocksDB on Samsung NVMe SSDs A Detailed Analysis 25 Samsung Semiconductor Inc. Executive Summary The industry has been experiencing an exponential data explosion over the
More informationREFERENCE ARCHITECTURE Microsoft SQL Server 2016 Data Warehouse Fast Track. FlashStack 70TB Solution with Cisco UCS and Pure Storage FlashArray//X
REFERENCE ARCHITECTURE Microsoft SQL Server 2016 Data Warehouse Fast Track FlashStack 70TB Solution with Cisco UCS and Pure Storage FlashArray//X FLASHSTACK REFERENCE ARCHITECTURE September 2018 TABLE
More informationArchitected for Performance. NVMe over Fabrics. September 20 th, Brandon Hoff, Broadcom.
Architected for Performance NVMe over Fabrics September 20 th, 2017 Brandon Hoff, Broadcom Brandon.Hoff@Broadcom.com Agenda NVMe over Fabrics Update Market Roadmap NVMe-TCP The benefits of NVMe over Fabrics
More informationEvaluation Report: HP StoreFabric SN1000E 16Gb Fibre Channel HBA
Evaluation Report: HP StoreFabric SN1000E 16Gb Fibre Channel HBA Evaluation report prepared under contract with HP Executive Summary The computing industry is experiencing an increasing demand for storage
More informationSPDK Blobstore: A Look Inside the NVM Optimized Allocator
SPDK Blobstore: A Look Inside the NVM Optimized Allocator Paul Luse, Principal Engineer, Intel Vishal Verma, Performance Engineer, Intel 1 Outline Storage Performance Development Kit What, Why, How? Blobstore
More informationFuture of datacenter STORAGE. Carol Wilder, Niels Reimers,
Future of datacenter STORAGE Carol Wilder, carol.a.wilder@intel.com Niels Reimers, niels.reimers@intel.com Legal Notices/disclaimer Intel technologies features and benefits depend on system configuration
More informationImplementing SQL Server 2016 with Microsoft Storage Spaces Direct on Dell EMC PowerEdge R730xd
Implementing SQL Server 2016 with Microsoft Storage Spaces Direct on Dell EMC PowerEdge R730xd Performance Study Dell EMC Engineering October 2017 A Dell EMC Performance Study Revisions Date October 2017
More informationNFV Platform Service Assurance Intel Infrastructure Management Technologies
NFV Platform Service Assurance Intel Infrastructure Management Technologies Meeting the service assurance challenge to nfv (Part 1) Virtualizing and Automating the Network NFV Changes the Game for Service
More informationAll-Flash High-Performance SAN/NAS Solutions for Virtualization & OLTP
All-Flash High-Performance SAN/NAS Solutions for Virtualization & OLTP All-flash configurations are designed to deliver maximum IOPS and throughput numbers for mission critical workloads and applicati
More informationScott Oaks, Oracle Sunil Raghavan, Intel Daniel Verkamp, Intel 03-Oct :45 p.m. - 4:30 p.m. Moscone West - Room 3020
Scott Oaks, Oracle Sunil Raghavan, Intel Daniel Verkamp, Intel 03-Oct-2017 3:45 p.m. - 4:30 p.m. Moscone West - Room 3020 Big Data Talk Exploring New SSD Usage Models to Accelerate Cloud Performance 03-Oct-2017,
More informationAnalysts Weigh In On Persistent Memory...
Analysts Weigh In On Persistent Memory... Your Experts Today Jeff Janukowicz, IDC Tom Coughlin, Coughlin Associates Jim Handy, Objective Analysis 2 Perspective on the Market and Persistent Memory Jeff
More informationA Gentle Introduction to Ceph
A Gentle Introduction to Ceph Narrated by Tim Serong tserong@suse.com Adapted from a longer work by Lars Marowsky-Brée lmb@suse.com Once upon a time there was a Free and Open Source distributed storage
More informationLEVERAGING FLASH MEMORY in ENTERPRISE STORAGE
LEVERAGING FLASH MEMORY in ENTERPRISE STORAGE Luanne Dauber, Pure Storage Author: Matt Kixmoeller, Pure Storage SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless
More informationA U G U S T 8, S A N T A C L A R A, C A
A U G U S T 8, 2 0 1 8 S A N T A C L A R A, C A Data-Centric Innovation Summit LISA SPELMAN VICE PRESIDENT & GENERAL MANAGER INTEL XEON PRODUCTS AND DATA CENTER MARKETING Increased integration and optimization
More informationBlock Storage Service: Status and Performance
Block Storage Service: Status and Performance Dan van der Ster, IT-DSS, 6 June 2014 Summary This memo summarizes the current status of the Ceph block storage service as it is used for OpenStack Cinder
More informationTHE IN-PLACE WORKING STORAGE TIER OPPORTUNITIES FOR SOFTWARE INNOVATORS KEN GIBSON, INTEL, DIRECTOR MEMORY SW ARCHITECTURE
THE IN-PLACE WORKING STORAGE TIER OPPORTUNITIES FOR SOFTWARE INNOVATORS KEN GIBSON, INTEL, DIRECTOR MEMORY SW ARCHITECTURE I/O LATENCY WILL SOON EXCEED MEDIA LATENCY 30 NVM Tread 25 NVM xfer Controller
More informationBen Walker Data Center Group Intel Corporation
Ben Walker Data Center Group Intel Corporation Notices and Disclaimers Intel technologies features and benefits depend on system configuration and may require enabled hardware, software or service activation.
More informationLow-Overhead Flash Disaggregation via NVMe-over-Fabrics
Low-Overhead Flash Disaggregation via NVMe-over-Fabrics Vijay Balakrishnan Memory Solutions Lab. Samsung Semiconductor, Inc. August 2017 1 DISCLAIMER This presentation and/or accompanying oral statements
More informationFlash Memory Summit Persistent Memory - NVDIMMs
Flash Memory Summit 2018 Persistent Memory - NVDIMMs Contents Persistent Memory Overview NVDIMM Conclusions 2 Persistent Memory Memory & Storage Convergence Today Volatile and non-volatile technologies
More informationAccessing NVM Locally and over RDMA Challenges and Opportunities
Accessing NVM Locally and over RDMA Challenges and Opportunities Wendy Elsasser Megan Grodowitz William Wang MSST - May 2018 Emerging NVM A wide variety of technologies with varied characteristics Address
More informationAt-Scale Data Centers & Demand for New Architectures
Allen Samuels At-Scale Data Centers & Demand for New Architectures Software Architect, Software and Systems Solutions June 4, 2015 1 Forward-Looking Statements During our meeting today we may make forward-looking
More informationJim Harris. Principal Software Engineer. Data Center Group
Jim Harris Principal Software Engineer Data Center Group Notices and Disclaimers Intel technologies features and benefits depend on system configuration and may require enabled hardware, software or service
More informationThe Benefits of Solid State in Enterprise Storage Systems. David Dale, NetApp
The Benefits of Solid State in Enterprise Storage Systems David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies
More informationTHE CEPH POWER SHOW. Episode 2 : The Jewel Story. Daniel Messer Technical Marketing Red Hat Storage. Karan Singh Sr. Storage Architect Red Hat Storage
THE CEPH POWER SHOW Episode 2 : The Jewel Story Karan Singh Sr. Storage Architect Red Hat Storage Daniel Messer Technical Marketing Red Hat Storage Kyle Bader Sr. Storage Architect Red Hat Storage AGENDA
More informationUsing persistent memory and RDMA for Ceph client write-back caching Scott Peterson, Senior Software Engineer Intel
Using persistent memory and RDMA for Ceph client write-back caching Scott Peterson, Senior Software Engineer Intel 2018 Storage Developer Conference. Intel Corporation. All Rights Reserved. 1 Ceph Concepts
More informationThe next step in Software-Defined Storage with Virtual SAN
The next step in Software-Defined Storage with Virtual SAN Osama I. Al-Dosary VMware vforum, 2014 2014 VMware Inc. All rights reserved. Agenda Virtual SAN s Place in the SDDC Overview Features and Benefits
More informationSPDK China Summit Ziye Yang. Senior Software Engineer. Network Platforms Group, Intel Corporation
SPDK China Summit 2018 Ziye Yang Senior Software Engineer Network Platforms Group, Intel Corporation Agenda SPDK programming framework Accelerated NVMe-oF via SPDK Conclusion 2 Agenda SPDK programming
More informationMySQL and Ceph. A tale of two friends
ysql and Ceph A tale of two friends Karan Singh Sr. Storage Architect Red Hat Taco Scargo Sr. Solution Architect Red Hat Agenda Ceph Introduction and Architecture Why ysql on Ceph ysql and Ceph Performance
More informationCeph Optimizations for NVMe
Ceph Optimizations for NVMe Chunmei Liu, Intel Corporation Contributions: Tushar Gohad, Xiaoyan Li, Ganesh Mahalingam, Yingxin Cheng,Mahati Chamarthy Table of Contents Hardware vs Software roles conversion
More informationUpgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure
Upgrade to Microsoft SQL Server 2016 with Dell EMC Infrastructure Generational Comparison Study of Microsoft SQL Server Dell Engineering February 2017 Revisions Date Description February 2017 Version 1.0
More informationAndreas Schneider. Markus Leberecht. Senior Cloud Solution Architect, Intel Deutschland. Distribution Sales Manager, Intel Deutschland
Markus Leberecht Senior Cloud Solution Architect, Intel Deutschland Andreas Schneider Distribution Sales Manager, Intel Deutschland Legal Disclaimers 2016 Intel Corporation. Intel, the Intel logo, Xeon
More informationNext-Generation NVMe-Native Parallel Filesystem for Accelerating HPC Workloads
Next-Generation NVMe-Native Parallel Filesystem for Accelerating HPC Workloads Liran Zvibel CEO, Co-founder WekaIO @liranzvibel 1 WekaIO Matrix: Full-featured and Flexible Public or Private S3 Compatible
More informationVMware Virtual SAN Technology
VMware Virtual SAN Technology Today s Agenda 1 Hyper-Converged Infrastructure Architecture & Vmware Virtual SAN Overview 2 Why VMware Hyper-Converged Software? 3 VMware Virtual SAN Advantage Today s Agenda
More informationColin Cunningham, Intel Kumaran Siva, Intel Sandeep Mahajan, Oracle 03-Oct :45 p.m. - 5:30 p.m. Moscone West - Room 3020
Colin Cunningham, Intel Kumaran Siva, Intel Sandeep Mahajan, Oracle 03-Oct-2017 4:45 p.m. - 5:30 p.m. Moscone West - Room 3020 Big Data Talk Exploring New SSD Usage Models to Accelerate Cloud Performance
More informationData and Intelligence in Storage Carol Wilder Intel Corporation
Data and Intelligence in Storage Carol Wilder carol.a.wilder@intel.com Intel Corporation 1 Legal Notices/Disclaimer Intel technologies features and benefits depend on system configuration and may require
More information