SUSE Enterprise Storage 5 and iSCSI


Storage Whitepaper
www.suse.com

Table of Contents

- Introduction
- The Ceph Storage System
- iSCSI Block Storage
- lrbd
- Deployment Considerations
- Installing and Configuring SUSE Enterprise Storage
- Creating RBD Images
- Connecting to Your iSCSI Targets
- Conclusion

Executive Summary

SUSE Enterprise Storage is a distributed storage management solution that enables you to readily build and deploy a distributed, highly available, open source Ceph storage cluster. Among the strengths of SUSE Enterprise Storage is its ability to make file, object and block storage available to all your new and legacy workloads running on Linux, Microsoft Windows, VMware and pretty much any platform you can name. Its flexibility and resiliency are rooted in Ceph, the popular open-source storage technology, which makes it a good choice for anyone looking for a robust storage solution that can scale without breaking the bank.

Traditional SAN solutions deliver solid block storage, but they are often expensive and difficult to scale, especially if you are locked into a single vendor. With SUSE Enterprise Storage, iSCSI and openATTIC, you get an open-source solution that can deliver reliable, resilient, highly available block storage volumes for just about everything in your data center, from your OpenStack and Kubernetes deployments to your database servers, and save you a bundle while you are at it.

Introduction

This whitepaper provides an overview of the following technologies:

- The Ceph distributed storage system
- The iSCSI network block-storage protocol and its Linux kernel implementation
- The SUSE Enterprise Storage openATTIC Ceph management tool
- lrbd, a program used to create iSCSI targets and make Ceph RBD images available across your network

The Ceph Storage System

SUSE Enterprise Storage is based on Ceph, the distributed, highly available, open-source storage software originally conceived at the University of California, Santa Cruz, in 2003 that has become the de facto standard for cloud storage. It is versatile software-defined storage that relies on commodity hardware, open protocols, a robust API and an active development community to provide highly scalable storage to clients running workloads anywhere in your data center. Applications can use Ceph storage directly or through intermediate abstraction layers that provide a distributed POSIX file system, a RESTful object store akin to OpenStack Swift and Amazon S3, and more. Like Ceph, SUSE Enterprise Storage runs on Linux and offers storage to clients running nearly all modern operating systems.

SUSE has long been a key contributor to the Ceph community. In 2018, it became one of the founding members of the Ceph Foundation, which aims to galvanize rapid adoption, training and collaboration across the Ceph ecosystem.

Reliable Autonomic Distributed Object Store (RADOS)

At the heart of Ceph and SUSE Enterprise Storage is the Reliable Autonomic Distributed Object Store (RADOS), which provides highly distributed, scalable, self-healing, auto-balancing and highly available storage. Any chunk of data in RADOS is stored as an object, and the Controlled Replication Under Scalable Hashing (CRUSH) algorithm determines how to store and retrieve data from physical storage devices. Instead of a centralized data look-up service, which is often a single point of failure that prevents scaling in other systems, RADOS uses a hierarchy organized by physical nodes, racks, aisles, rooms, data centers or any factor you want.
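As a quick illustration of that hierarchy, the following commands (a minimal sketch, run on any node with admin credentials; the output naturally depends on your cluster) let you inspect cluster health and the tree CRUSH uses to place data:

# ceph -s                  # overall cluster health, monitor quorum and OSD status
# ceph osd tree            # the CRUSH hierarchy: root, racks/hosts and their OSDs
# ceph osd crush rule ls   # the CRUSH rules available for pools to use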

RADOS can be broken down into logical subdivisions, known as pools, that you can define to suit your needs. You can then define replication and availability policies by pool and define storage rules tuned to different risks and needs. Data replicas can be confined to a single rack or distributed across data centers, whatever you need.

RADOS Block Devices

Although SUSE Enterprise Storage (and Ceph) can be configured to serve up storage as file and object stores, this whitepaper focuses on block devices, which can be made available via iSCSI when necessary, using the same iSCSI protocol used by traditional SANs.

Block devices are a simple and ubiquitous storage abstraction. These devices have a defined length (the maximum capacity they can store) and a sector size (the smallest increment that can be used to read and write data). As a result, the input/output (I/O) interface on a block device is simple: either a memory buffer of a given length is written onto the device at a given location, or data at a given location is read into a memory buffer.

RADOS Block Devices (RBDs) provide a versatile client-side abstraction layer on top of RADOS that enables users to interact with the storage cluster as if it were a conventional block storage device. With RBDs, users define images (volumes) and use the RBD client libraries available for Linux, z/Architecture and scores of other systems to interact with these devices as if they were traditional attached block devices. Every block read-and-write operation automatically translates into a RADOS object operation that is transparent to the user and allows RBDs to seamlessly inherit RADOS distribution, replication, scalability and high availability.
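For Linux clients that can speak the Ceph protocol natively, the RBD workflow looks roughly like the sketch below (assuming a default pool named rbd, admin credentials on the client and the rbd kernel module; the image name demo is just a placeholder). The iSCSI gateway described in the rest of this paper exists precisely for clients that cannot do this.

# rbd --pool rbd create --size=1024 demo   # create a 1 GB image in the rbd pool
# rbd map rbd/demo                         # map it through the kernel RBD driver, e.g. as /dev/rbd0
# rbd showmapped                           # list mapped images and their block devices
# rbd unmap /dev/rbd0                      # unmap the device when done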

iSCSI Block Storage

iSCSI is an implementation of the Small Computer System Interface (SCSI) command set that runs over the well-established Internet Protocol. An iSCSI connection enables a client (the initiator) to talk to a server (the target) via a session on TCP port 3260. An iSCSI target's IP address and port are called an iSCSI portal. A target can be exposed through one or more portals, providing built-in redundancy right out of the gate. The combination of a target and one or more portals is called the target portal group.

The Linux Kernel iSCSI Target

The Linux kernel iSCSI target was originally named LIO for linux-iscsi.org, the project's original domain and website. For some time, no fewer than four competing iSCSI target implementations were available for the Linux platform, but LIO ultimately prevailed as the single iSCSI reference target. The mainline kernel code for LIO uses the simple but somewhat ambiguous name "target", distinguishing between the target core and a variety of front-end and back-end target modules. The most commonly used front-end module is arguably iSCSI, but LIO also supports Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) and several other protocols. At this time, only the iSCSI protocol has been validated with SUSE Enterprise Storage.

iSCSI Initiators

LINUX
The standard initiator on the Linux platform is open-iscsi, an open source package available for most distributions, including SUSE Linux Enterprise Server. It launches a daemon, iscsid, which allows you to discover iSCSI targets on any given portal, log in to those targets and map iSCSI volumes. iscsid communicates with the iSCSI mid-layer to create in-kernel block devices that the kernel can then treat like any other SCSI block device on the system, such as /dev/sda or /dev/sdb. The open-iscsi initiator can be deployed in conjunction with the Device Mapper Multipath (dm-multipath) facility to provide a highly available iSCSI block device.

MICROSOFT WINDOWS AND HYPER-V
The default iSCSI initiator for Windows is the Microsoft iSCSI Initiator. The iSCSI service can be configured via a graphical interface and supports multipath I/O for high availability.

VMWARE
The default iSCSI initiator for VMware vSphere and ESX is the VMware iSCSI initiator, vmkiscsi. Once enabled, it can be configured from the vSphere client console or with the vmkiscsi-tool command. You can then format connected storage volumes with VMFS and use them like any other VM storage device. The VMware initiator also supports multipath I/O for high availability.

We will discuss how to attach volumes using all three of these platforms.

lrbd

The lrbd utility simplifies iSCSI management of RBD images by centrally storing configurations in Ceph objects and executing the necessary rbd and targetcli commands. It enables any application to make use of Ceph block storage even if it doesn't speak any Ceph client protocol. In SUSE Enterprise Storage, lrbd is abstracted by the openATTIC management console.

Figure 1: The iSCSI gateway enables a wide variety of systems to communicate with the Ceph datastore.

lrbd is highly available by design and supports multipath operations, which means it can provide multiple network routes to the data objects. When communicating with an iSCSI target configured with more than one gateway, initiators can load-balance iSCSI requests on the client side. If a gateway fails, I/O transparently continues via another node.

Figure 2: With lrbd, storage targets become highly available across different OSD nodes, providing simultaneous connections.

Deployment Considerations

A minimum SUSE Enterprise Storage configuration consists of:

- A cluster with at least four physical servers hosting at least eight object storage daemons (OSDs) each. In such a configuration, three OSD nodes double as monitor (MON) nodes.
- An iSCSI target server running the LIO iSCSI target, configured via openATTIC or lrbd.
- An iSCSI initiator host, such as open-iscsi on Linux, the Microsoft iSCSI Initiator on Windows or the VMware iSCSI Software initiator in vSphere.

A recommended production SUSE Enterprise Storage configuration consists of:

- A cluster with 10 or more physical servers, each typically running 10 to 12 object storage daemons (OSDs), with no fewer than three dedicated monitor (MON) nodes. All should be running the same version of SUSE Linux Enterprise Server.
- Several iSCSI target servers running the LIO iSCSI target, configured via openATTIC or lrbd. For iSCSI failover and load-balancing, these servers must run a kernel supporting the target_core_rbd module. This also requires the latest available update of the kernel-default package. (Update packages are available from the SUSE Linux Enterprise Server maintenance channel.)
- An iSCSI initiator host, such as open-iscsi on Linux, the Microsoft iSCSI Initiator on Windows or the VMware iSCSI Software initiator in vSphere.
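Before configuring failover, it can be worth confirming that a prospective gateway node actually ships the target_core_rbd module mentioned above. A minimal check (a sketch; the module and package names are those referenced in this paper, and the output depends on your kernel version):

# modinfo target_core_rbd     # should print module information rather than an error
# zypper info kernel-default  # confirm the installed kernel-default version and update level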

Installing and Configuring SUSE Enterprise Storage

Adding the SUSE Enterprise Storage Installation Media

Use YaST to configure SUSE Enterprise Storage as an add-on product to SUSE Linux Enterprise Server. You can use a local ISO, a SUSE Customer Center license or a local Subscription Management Tool (SMT) server. Using SMT is recommended for speed and security, and it is a particularly good choice if you are deploying behind a firewall. Information on how to add the SMT pattern to a new or existing SUSE Linux Enterprise Server is available at https://www.suse.com/documentation/sles-12/book_smt/data/smt_installation.html.

Deploying a Ceph Cluster

The SUSE Enterprise Storage deployment is managed by Salt and DeepSea, which automate the installation steps, including setting up the policies used to assign monitor, management, services and data nodes. Even a basic DeepSea deployment can automate the installation of everything you need to use and manage the RADOS Gateway, the iSCSI Gateway, NFS Ganesha (if you plan to also deploy NFS) and openATTIC (SUSE's complete Ceph management application). Complete details are available at https://www.suse.com/documentation/suse-enterprise-storage-5/singlehtml/book_storage_deployment/book_storage_deployment.html.

Figure 3: SUSE Enterprise Storage features openATTIC, which makes it easy to set up and deploy RBDs, pools and gateways.
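As a rough sketch of what a DeepSea-driven deployment looks like from the Salt master: between stages 1 and 2 you edit the DeepSea policy.cfg (conventionally under /srv/pillar/ceph/proposals/ on the master) to assign roles, including the igw role for iSCSI gateway nodes. Paths and role names follow DeepSea conventions and may differ slightly between releases.

# salt-run state.orch ceph.stage.0   # prep: update and prepare all minions
# salt-run state.orch ceph.stage.1   # discovery: collect hardware profiles and create proposals
# salt-run state.orch ceph.stage.2   # configure: build the Salt pillar from policy.cfg
# salt-run state.orch ceph.stage.3   # deploy: core cluster daemons (MON, MGR, OSD)
# salt-run state.orch ceph.stage.4   # services: iSCSI Gateway, RADOS Gateway, CephFS, openATTIC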

Creating RBD Images

RBD images, your storage volumes, are created in the Ceph object store and subsequently exported to iSCSI. We recommend that you use a dedicated RADOS pool for your RBD images. openATTIC makes this easy, but you can also create a volume by logging into any host that can connect to your storage cluster and has the Ceph rbd command-line utility installed. This requires the client to have at least a minimal ceph.conf configuration file and appropriate CephX authentication credentials, all of which are taken care of for you when deploying with Salt and DeepSea.

We recommend using openATTIC, first to create a pool and then an RBD image. Start by clicking the Pool menu and clicking Add:

Figure 4: Create a new pool in the openATTIC management console.
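Whether the pool is created in openATTIC or on the command line, you can confirm it afterwards from any admin node (a sketch; pool names, replica counts and placement group numbers depend on your cluster):

# ceph osd pool ls detail   # list pools with replica count, pg_num and flags
# ceph df                   # per-pool capacity and usage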

Next, create a testvol RBD image. Simply click the RBD menu item and click Add:

Figure 5: Create an RBD image using openATTIC.

Alternatively, you can manually create a new volume and a pool in which to store it. For example, use the ceph osd tool to create a pool named iscsi with 128 placement groups:

# ceph osd pool create iscsi 128

Then use the rbd create command to create the volume, specifying the volume size in megabytes. For example, in order to create a 100 GB volume named testvol in the newly created pool named iscsi, run:

# rbd --pool iscsi create --size=102400 testvol

This will create an RBD format 1 volume. In order to enable layered snapshots and cloning, you might want to create a format 2 volume instead:

# rbd --pool iscsi create --image-format 2 --size=102400 testvol
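Either way, a quick look at the image from the command line confirms what was created (a sketch; the features listed will vary with the image format and Ceph release):

# rbd --pool iscsi ls             # list images in the iscsi pool
# rbd --pool iscsi info testvol   # show size, object size, format and features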

To always enable format 2 on all newly created images, add the following configuration option to /etc/ceph/ceph.conf:

[client]
rbd default format = 2

Exporting RBD Images via iSCSI

To export RBD images via iSCSI, you can again use openATTIC or the lrbd utility. Either approach enables you to create, review and modify the iSCSI target configuration. openATTIC creates the necessary JSON for you and automatically adds it to /srv/salt/ceph/igw/cache/lrbd.conf on your SUSE Enterprise Storage admin server.

Using openATTIC, click the iSCSI menu item and click Add. This will auto-generate a target IQN and enable you to choose portals (nodes available in your cluster) and a target image (iscsi:testvol, in this example):

Figure 6: Export an RBD image to an iSCSI gateway using openATTIC.

In order to manually edit the configuration using lrbd, use lrbd -e or lrbd --edit. This will invoke your system's default editor and enable you to enter raw JSON content. We strongly recommend that you avoid manually editing the JSON file when using openATTIC, but if you are curious, below is an lrbd JSON example configuration featuring the following:

1. Two iSCSI gateway hosts named iscsi1.example.com and iscsi2.example.com
2. A single iSCSI target with an iSCSI Qualified Name (IQN) of iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol
3. A single iSCSI Logical Unit (LU)
4. An RBD image named testvol in the RADOS pool rbd
5. The target exported via two portals named east and west

{
    "auth": [
        {
            "target": "iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol",
            "authentication": "none"
        }
    ],
    "targets": [
        {
            "target": "iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol",
            "hosts": [
                { "host": "iscsi1.example.com", "portal": "east" },
                { "host": "iscsi2.example.com", "portal": "west" }
            ]
        }
    ],
    "portals": [
        { "name": "east", "addresses": [ "192.168.124.104" ] },
        { "name": "west", "addresses": [ "192.168.124.105" ] }
    ],
    "pools": [
        {
            "pool": "rbd",
            "gateways": [
                {
                    "target": "iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol",
                    "tpg": [
                        { "image": "testvol" }
                    ]
                }
            ]
        }
    ]
}

Note that whenever you refer to a hostname in the configuration, that hostname must match the output of uname -n on the iSCSI gateway. This JSON object is available to the gateway host where the JSON is edited (in this case, on the SUSE Enterprise Storage admin host in /srv/salt/ceph/igw/cache/lrbd.conf) and to all gateway hosts connected to the same Ceph cluster. Any additional gateways created using openATTIC can be viewed in the dashboard.

All openATTIC and any manually created lrbd entries are stored in the same lrbd.conf file, which is why we strongly recommend using openATTIC and not switching back and forth between manual lrbd.conf edits. Manual edits can introduce errors that can cause problems.

Once the configuration is created and saved via openATTIC, it is automatically activated. To manually activate the configuration using lrbd, run the following from the command line:

# lrbd                            # run lrbd without additional options
# systemctl restart lrbd.service

If you plan to run lrbd manually, and if you haven't installed openATTIC to manage it for you, you should also enable lrbd to auto-configure on system start-up:

# systemctl enable lrbd.service

The iSCSI gateway sample above is basic, but you can find an extensive set of configuration examples in /usr/share/doc/packages/lrbd/samples. The samples are also available on GitHub at https://github.com/suse/lrbd/tree/master/samples.

Connecting to Your iSCSI Targets

Linux (open-iscsi)

Once you have created iSCSI targets, remotely accessing them from Linux hosts is a two-step process. First, the remote host, known as the client, must discover the iSCSI targets available on the gateway host and then map the available logical units (LUs). Both steps require the open-iscsi daemon to be running on the Linux client host (that is, the host to which you want to attach the storage). Starting and enabling the open-iscsi daemon is straightforward:

# systemctl start open-iscsi

To enable it on system start, additionally run:

# systemctl enable open-iscsi

If your initiator host runs SUSE Linux Enterprise Server, refer to www.suse.com/documentation/sles-12/stor_admin/data/sec_iscsi_initiator.html for details on how to connect to an iSCSI target using YaST.
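One small usage note: each open-iscsi client identifies itself with an initiator IQN stored in /etc/iscsi/initiatorname.iscsi. It is worth knowing where this lives if you later add CHAP authentication or per-initiator access control on the gateway (a sketch; the IQN shown is whatever your installation generated):

# cat /etc/iscsi/initiatorname.iscsi   # prints a line of the form InitiatorName=iqn....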

For any other Linux distribution supporting open-iscsi, you can discover your iSCSI gateway targets with a simple command using the open-iscsi tool iscsiadm. This example uses iscsi1.example.com as the portal address and returns information about all the available target IQNs:

# iscsiadm -m discovery -t sendtargets -p iscsi1.example.com
192.168.124.104:3260,1 iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol
192.168.124.105:3260,1 iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol

The final step is to log in to the portal. You have a couple of options. The first example logs you in to all available iSCSI targets:

# iscsiadm -m node -p iscsi1.example.com --login
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260] (multiple)
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260] successful.

If you want to log in to a single specific iSCSI target, you can use the target name and its IP address or domain name:

# iscsiadm -m node --login -T iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol -p iscsi1.example.com
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260]
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260] successful.

Once logged in, the block storage immediately becomes available and visible on the system, shown here as /dev/sdb:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   32G  0 disk
  sda1   8:1    0    8M  0 part
  sda2   8:2    0   30G  0 part /var/lib/docker/btrfs
  sda3   8:3    0    2G  0 part [SWAP]
sdb      8:16   0   49G  0 disk
sr0     11:0    1  3.7G  0 rom

If your system has the lsscsi utility installed, use it to see the iSCSI device(s) on your system:

# lsscsi
[0:0:0:0]  disk    QEMU  HARDDISK  2.5+  /dev/sda
[2:0:0:0]  cd/dvd  QEMU  DVD-ROM   2.5+  /dev/sr0
[3:0:0:0]  disk    SUSE  RBD       4.0   /dev/sdb
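By default, open-iscsi does not necessarily re-establish these sessions after a reboot. If you want the node to log in automatically at boot, you can mark the discovered node record accordingly (a sketch using the target and portal from the example above):

# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol -p 192.168.124.104:3260 --op update -n node.startup -v automatic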

In a multipath configuration (where two connected iSCSI devices represent one and the same LU), you can also examine the device state with the multipath utility:

# multipath -ll
360014050cf9dcfcb2603933ac3298dca dm-9 SUSE,RBD
size=49G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 2:0:0:0 sdb 8:64 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
  `- 3:0:0:0 sdf 8:80 active ready running

You can now use this iSCSI device as you would any other physically attached block device. For example, you can use it as a physical volume for Linux Logical Volume Management (LVM), or you can simply create a file system on it. The example below demonstrates how to create an XFS file system on the newly connected multipath iSCSI volume:

# mkfs.xfs /dev/mapper/360014050cf9dcfcb2603933ac3298dca
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/mapper/360014050cf9dcfcb2603933ac3298dca isize=256  agcount=17, agsize=799744 blks
         =                     sectsz=512   attr=2, projid32bit=1
         =                     crc=0        finobt=0
data     =                     bsize=4096   blocks=12800000, imaxpct=25
         =                     sunit=1024   swidth=1024 blks
naming   =version 2            bsize=4096   ascii-ci=0 ftype=0
log      =internal log         bsize=4096   blocks=6256, version=2
         =                     sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0

Note that because XFS is a non-clustered file system, you can only connect it to a single iSCSI initiator node at any given time. Remember that iSCSI volumes are block storage devices, not file storage devices. Connecting the same target to separate systems shares just the raw block volume, not the files. Double-mounting will lead to data loss!

If you want to stop using the iSCSI LUs associated with a target, simply log out by running the following command. Be sure to unmount the device first to avoid data corruption:

# iscsiadm -m node -p iscsi1.example.com --logout
Logging out of session [sid: 18, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260]
Logout of [sid: 18, target: iqn.2003-01.org.linux-iscsi.iscsi.x86:testvol, portal: 192.168.124.104,3260] successful.
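A final note on the Linux workflow: if you instead want the XFS file system from the earlier example mounted automatically at boot, the mount has to wait for the network and the iSCSI session to come up. A minimal /etc/fstab sketch (the multipath device is from the example above, and the mount point /mnt/iscsi is just a placeholder):

/dev/mapper/360014050cf9dcfcb2603933ac3298dca  /mnt/iscsi  xfs  _netdev,nofail  0 0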

Microsoft Windows (Microsoft iSCSI Initiator)

Taking advantage of RBD-backed storage via iSCSI is not limited to Linux hosts. Ceph volumes can be readily attached to Windows machines using the Microsoft iSCSI Initiator, available from Server Manager > Tools > iSCSI Initiator or simply by searching for "iSCSI Initiator". When it opens, select the Discovery tab.

In the Discover Target Portal dialog, enter the target's hostname or IP address in the Target field and click OK:

Repeat this process for all other gateway hostnames or IP addresses. Once complete, review the Target Portals list.

Next, switch to the Targets tab and review your discovered target(s):

Click Connect in the Targets tab. When the Connect To Target dialog appears, select the Enable multi-path checkbox to enable multipath I/O (MPIO); then click OK:

Once the Connect To Target dialog closes, select Properties to review the target's properties:

Select Devices:

Next, click MPIO to review the multipath I/O configuration:

The default Load Balance policy is Round Robin With Subset. If you prefer a pure fail-over configuration, change it to Fail Over Only.

That's it! The iSCSI volumes are now available just like any other local disks and can be initialized for use as volumes and drives. Click OK to close the iSCSI Initiator Properties dialog and proceed with the File and Storage Services role from the Server Manager dashboard:

If the new volume does not appear immediately, select Rescan Storage from the Tasks drop-down to rescan the iSCSI bus. Right-click the iSCSI volume and select New Volume from the context menu:

The New Volume wizard appears. Click Next to begin:

Highlight the newly connected iSCSI volume and click Next:

Initially, the device is empty and does not contain a partition table. When prompted, confirm the dialog indicating that the volume will be initialized with a GPT partition table:

Next, select the volume size. Typically, you would use the device's full capacity:

Then, assign a drive letter or folder name where you want the newly created volume to become available:

Select a file system to create on the new volume:

Finally, confirm your selection and click the Create button to finish creating the volume:

When the process finishes, review the results and click Close to complete the drive initialization. Once initialization is complete, the volume (and its NTFS file system) is available just like any newly initialized local drive.

VMware

To connect iSCSI volumes to VMs running on vSphere, you need a configured iSCSI software adapter. If no such adapter is available in your vSphere configuration, create one by selecting Configuration > Storage Adapters > Add > iSCSI Software Initiator. Once available, select the adapter's properties by right-clicking it and selecting Properties from the context menu:

In the iSCSI Software Initiator dialog, click the Configure button:

Go to the Dynamic Discovery tab and select Add:

Enter the IP address or hostname of your iSCSI gateway. If you run multiple iSCSI gateways in a failover configuration, repeat this step for as many gateways as you operate:

Once you have entered all iSCSI gateways, click Yes in the dialog box to initiate a rescan of the iSCSI adapter:

Once the rescan is complete, the new iSCSI device appears below the Storage Adapters list in the Details pane.
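If you prefer the ESXi command line to the vSphere client, roughly the same steps can be performed with esxcli. This is only a sketch: the adapter name vmhba64 is a placeholder for whatever name your software iSCSI adapter receives, and the address is the gateway portal from the earlier examples.

# esxcli iscsi software set --enabled=true                 # enable the software iSCSI adapter
# esxcli iscsi adapter list                                # find the adapter name (e.g. vmhba64)
# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.124.104
# esxcli storage core adapter rescan --adapter=vmhba64     # rescan for the new device
# esxcli storage core path list                            # verify the paths to the LUN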

For multipath devices, you can now right-click the adapter and select Manage Paths from the context menu:

You should now see all paths with a green light under Status. One of your paths should be marked "Active (I/O)" and all others simply "Active":

You can now switch from Storage Adapters to the item labeled simply Storage:

Select Add Storage in the top-right corner of the pane to bring up the Add Storage dialog. Then select Disk/LUN and click Next:

The newly added iSCSI device appears in the Select Disk/LUN list. Select it and then click Next to proceed:

Click Next to accept the default disk layout:

In the Properties pane, assign a name to the new datastore. Then click Next:

Accept the default setting to use the volume's entire space for the datastore, or select Custom Space Setting for a smaller datastore:

Finally, click Finish to complete the datastore creation:

The new datastore now appears in the datastore list, and you can select it to retrieve details. You can now use the iSCSI volume like any other vSphere datastore.

Conclusion

The ability to rapidly deploy iSCSI gateway targets is a key component of SUSE Enterprise Storage that enables ready access to distributed, highly available block storage on any server or client capable of speaking the iSCSI protocol. That gives you all the advantages of Ceph: redundancy, high availability and ready load balancing across your data center. You can quickly scale up or down to suit your needs with resilient, self-healing enterprise storage technology that does not lock you into any one vendor, while enabling you to employ commodity hardware and an entirely open source platform.

Disclaimer: The findings detailed in this whitepaper are developed using SUSE's long-term technical expertise and to the best of SUSE's knowledge and abilities, but on an "as is" basis and without warranty of any kind, to the extent permitted by applicable law.

262-002523-004 02/19 © 2019 SUSE LLC. All rights reserved. SUSE, the SUSE logo and YaST are registered trademarks, and SUSE Enterprise Storage is a trademark of SUSE LLC in the United States and other countries. All third-party trademarks are the property of their respective owners.