
Configure iSCSI Target & Initiator on CentOS 7 / RHEL 7

iSCSI (Internet Small Computer Systems Interface) is an IP-based storage protocol that carries SCSI commands over an IP network. It transports block-level data between an iSCSI initiator on a client machine and an iSCSI target on a storage device (server). iSCSI storage is used as shared storage in Red Hat clusters, VMware vSphere, Red Hat Enterprise Virtualization Manager, oVirt, and similar platforms.

Environment

Server: server.itzgeek.local
IP Address: 192.168.12.20
OS: CentOS Linux release 7.4.1708 (Core)

Client: node1.itzgeek.local
IP Address: 192.168.12.11
OS: CentOS Linux release 7.4.1708 (Core)

Storage Configuration

Here, we will create a 5 GB LVM disk on the target server to use as shared storage for clients. Let's list the disks attached to the target server using the below command. If you want to use the whole disk for LVM, skip the disk partitioning step.

[root@server ~]# fdisk -l | grep -i sd
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors

/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 209715199 104344576 8e Linux LVM
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors

From the above output, you can see that the system has a 10 GB disk (/dev/sdb). We will create a 5 GB partition on it and use it for LVM.

[root@server ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x173dfa4d.

Command (m for help): n                             --> New partition
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p                               --> Primary partition
Partition number (1-4, default 1): 1                --> Partition number
First sector (2048-20971519, default 2048):         --> Just press Enter
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +5G   --> Enter the size
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): t                             --> Change partition type
Selected partition 1
Hex code (type L to list all codes): 8e             --> Set the Linux LVM type
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): w                             --> Write and exit
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

Create an LVM logical volume on the /dev/sdb1 partition (replace /dev/sdb1 with your partition name).

[root@server ~]# pvcreate /dev/sdb1
[root@server ~]# vgcreate vg_iscsi /dev/sdb1
[root@server ~]# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi
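The sizes in the fdisk session above are plain sector arithmetic, which is worth verifying when sizing partitions. A minimal shell sanity check using only the figures from that output (nothing here touches a real disk):

```shell
# Sector arithmetic behind the fdisk output above; all figures are
# copied from that session, and nothing here touches a real disk.
SECTOR_SIZE=512

# Whole disk: 20971520 sectors * 512 bytes/sector
disk_bytes=$(( 20971520 * SECTOR_SIZE ))
echo "disk bytes: ${disk_bytes}"           # prints 10737418240, the 10.7 GB fdisk reports

# The +5G partition: 5 GiB expressed in 512-byte sectors
part_sectors=$(( 5 * 1024 * 1024 * 1024 / SECTOR_SIZE ))
echo "partition sectors: ${part_sectors}"  # prints 10485760
```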

Configure iSCSI Target

Now you have the option of creating the target either with or without authentication. In this article, you can find steps for both scenarios; it is up to you to decide which one is suitable for your environment. Here, we will configure the iSCSI target without CHAP authentication.

Install the targetcli package on the server.

[root@server ~]# yum install targetcli -y

Once the package is installed, enter the below command to get an interactive iSCSI CLI prompt.

[root@server ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/>

Now use the existing logical volume (/dev/vg_iscsi/lv_iscsi) as a block-type backing store for the storage object scsi_disk1_server.

/> cd backstores/block
/backstores/block> create scsi_disk1_server /dev/vg_iscsi/lv_iscsi
Created block storage object scsi_disk1_server using /dev/vg_iscsi/lv_iscsi.

Create a target.

/backstores/block> cd /iscsi
/iscsi> create iqn.2016-02.local.itzgeek.server:disk1
Created target iqn.2016-02.local.itzgeek.server:disk1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi>

Create an ACL for the client machine (this is the IQN the client will use to connect).

/iscsi> cd /iscsi/iqn.2016-02.local.itzgeek.server:disk1/tpg1/acls
/iscsi/iqn.20...sk1/tpg1/acls> create iqn.2016-02.local.itzgeek.server:node1node2
Created Node ACL for iqn.2016-02.local.itzgeek.server:node1node2

Create a LUN under the target. The LUN should use the previously created backing storage object named scsi_disk1_server.

/iscsi/iqn.20...er:disk1/tpg1> cd /iscsi/iqn.2016-02.local.itzgeek.server:disk1/tpg1/luns
/iscsi/iqn.20...sk1/tpg1/luns> create /backstores/block/scsi_disk1_server
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2016-02.local.itzgeek.server:node1node2

Verify the target server configuration.

/iscsi/iqn.20.../tpg1/portals> cd /
/> ls
o- / ........................................................................ [...]
  o- backstores ............................................................. [...]
  | o- block ................................................. [Storage Objects: 1]
  | | o- scsi_disk1_server .. [/dev/vg_iscsi/lv_iscsi (5.0GiB) write-thru activated]
  | o- fileio ................................................ [Storage Objects: 0]
  | o- pscsi ................................................. [Storage Objects: 0]
  | o- ramdisk ............................................... [Storage Objects: 0]
  o- iscsi ........................................................... [Targets: 1]
  | o- iqn.2016-02.local.itzgeek.server:disk1 ............................ [TPGs: 1]
  |   o- tpg1 .............................................. [gen-acls, no-auth]
  |     o- acls ......................................................... [ACLs: 1]
  |     | o- iqn.2016-02.local.itzgeek.server:node1node2 ......... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ..................... [lun0 block/scsi_disk1_server (rw)]
  |     o- luns ......................................................... [LUNs: 1]
  |     | o- lun0 ............. [block/scsi_disk1_server (/dev/vg_iscsi/lv_iscsi)]
  |     o- portals ................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ................................................... [OK]
  o- loopback ........................................................ [Targets: 0]

Save the configuration and exit the targetcli shell.

/> saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json

Enable and restart the target service.

[root@server ~]# systemctl enable target.service
[root@server ~]# systemctl restart target.service

Configure the firewall to allow iSCSI traffic.

[root@server ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@server ~]# firewall-cmd --reload
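Both names used on the target side (the target IQN and the client IQN in the ACL) follow the iqn.&lt;year-month&gt;.&lt;reversed-domain&gt;:&lt;identifier&gt; convention from RFC 3720. A small illustrative checker; the `is_iqn` helper is just for this sketch and is not part of targetcli or iscsiadm:

```shell
# Illustrative format check for the IQNs used in this article.
# is_iqn is a throwaway helper, not part of any iSCSI tooling.
is_iqn() {
  echo "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+:[A-Za-z0-9._-]+$'
}

is_iqn "iqn.2016-02.local.itzgeek.server:disk1"      && echo "target IQN ok"
is_iqn "iqn.2016-02.local.itzgeek.server:node1node2" && echo "initiator IQN ok"
```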

Configure Initiator

Now it is time to configure a client machine to use the created target as storage. Install the below package on the client machine (node1).

[root@node1 ~]# yum install iscsi-initiator-utils -y

Edit the initiatorname.iscsi file.

[root@node1 ~]# vi /etc/iscsi/initiatorname.iscsi

Add the iSCSI initiator name (this must match the ACL created on the target).

InitiatorName=iqn.2016-02.local.itzgeek.server:node1node2

Discover the target using the below command.

[root@node1 ~]# iscsiadm -m discovery -t st -p 192.168.12.20
192.168.12.20:3260,1 iqn.2016-02.local.itzgeek.server:disk1

Restart and enable the initiator service.

[root@node1 ~]# systemctl restart iscsid.service
[root@node1 ~]# systemctl enable iscsid.service

Log in to the discovered target.

[root@node1 ~]# iscsiadm -m node -T iqn.2016-02.local.itzgeek.server:disk1 -p 192.168.12.20 -l
Logging in to [iface: default, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] (multiple)
Login to [iface: default, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] successful.
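The discovery output above has the shape "portal,tpgt target-iqn", which is easy to split with plain shell parameter expansion if you want to script the login. The sample line below is copied from the iscsiadm output in this article:

```shell
# Split a discovery record into portal and target name.
# The sample line is the exact discovery output shown above.
line='192.168.12.20:3260,1 iqn.2016-02.local.itzgeek.server:disk1'

portal=${line%%,*}   # drop ",<tpgt> <iqn>"  -> 192.168.12.20:3260
target=${line##* }   # keep the last field   -> the target IQN
echo "portal: ${portal}"
echo "target: ${target}"
```

The two variables could then drive a scripted login along the lines of the manual command shown here, e.g. something like `iscsiadm -m node -T "$target" -p "$portal" -l`.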

Create a File System on the iSCSI Disk

After logging in to the discovered target, have a look at the messages file. You will find output similar to the below, from which you can get the name of the new disk.

[root@node1 ~]# cat /var/log/messages
Feb 23 14:54:47 node2 kernel: sd 34:0:0:0: [sdb] 10477568 512-byte logical blocks: (5.36 GB/4.99 GiB)
Feb 23 14:54:47 node2 kernel: sd 34:0:0:0: [sdb] Write Protect is off
Feb 23 14:54:47 node2 kernel: sd 34:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Feb 23 14:54:48 node2 kernel: sdb: unknown partition table
Feb 23 14:54:48 node2 kernel: sd 34:0:0:0: [sdb] Attached SCSI disk
Feb 23 14:54:48 node2 iscsid: Could not set session2 priority. READ/WRITE throughout and latency could be affected.
Feb 23 14:54:48 node2 iscsid: Connection2:0 to [target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] through [iface: default] is operational now

List the attached disks.

[root@node1 ~]# cat /proc/partitions
major minor  #blocks  name
   8        0  104857600 sda
   8        1     512000 sda1
   8        2  104344576 sda2
  11        0    1048575 sr0
 253        0    2113536 dm-0
 253        1   52428800 dm-1
 253        2   49799168 dm-2
   8       16    5238784 sdb

Format the new disk (for the sake of this article, I have formatted the whole disk instead of creating a partition).

[root@node1 ~]# mkfs.xfs /dev/sdb

meta-data=/dev/sdb               isize=256    agcount=8, agsize=163712 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=1309696, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount the disk.

[root@node1 ~]# mount /dev/sdb /mnt

Verify the disk is mounted using the below command.

[root@node1 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        50G  955M   50G   2% /
devtmpfs                devtmpfs  908M     0  908M   0% /dev
tmpfs                   tmpfs     914M   54M  861M   6% /dev/shm
tmpfs                   tmpfs     914M  8.5M  905M   1% /run
tmpfs                   tmpfs     914M     0  914M   0% /sys/fs/cgroup
/dev/mapper/centos-home xfs        48G   33M   48G   1% /home
/dev/sda1               xfs       497M   97M  401M  20% /boot
/dev/sdb                xfs       5.0G   33M  5.0G   1% /mnt

Automount iSCSI Storage

To automount the iSCSI storage on every reboot, you need to make an entry in the /etc/fstab file.
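One quick sanity check before setting up the automount: the mkfs.xfs figures shown earlier should agree with the partition size the kernel exported in /proc/partitions, since we formatted the whole device. All numbers below are copied from those outputs:

```shell
# Cross-check: XFS data blocks * block size vs. the kernel's view of the disk.
fs_bytes=$(( 1309696 * 4096 ))    # blocks * bsize from the mkfs.xfs output
disk_bytes=$(( 5238784 * 1024 ))  # sdb's #blocks from /proc/partitions (1 KB units)
echo "filesystem: ${fs_bytes} bytes"
echo "disk:       ${disk_bytes} bytes"
[ "$fs_bytes" -eq "$disk_bytes" ] && echo "sizes match"
```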

Before updating the /etc/fstab file, get the UUID of the iSCSI disk using the following command. Replace /dev/sdb with your iSCSI disk name.

[root@node1 ~]# blkid /dev/sdb
/dev/sdb: UUID="c7469f92-75ec-48ac-b42d-d5b89ab75b39" TYPE="xfs"

Now, edit the /etc/fstab file.

[root@node1 ~]# vi /etc/fstab

Make an entry something like below. The _netdev option ensures the mount is attempted only after the network, and hence the iSCSI session, is up.

#
# /etc/fstab
# Created by anaconda on Tue Jan 30 02:14:21 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=9df472f4-1b0f-41c0-a6eb-89574d2caee3 /    xfs defaults 0 0
UUID=c7469f92-75ec-48ac-b42d-d5b89ab75b39 /mnt xfs _netdev  0 0

Remove iSCSI Storage

If you want to detach the added disk, follow the procedure below (unmount, then log out of the target).

[root@node1 ~]# umount /mnt/
[root@node1 ~]# iscsiadm -m node -T iqn.2016-02.local.itzgeek.server:disk1 -p 192.168.12.20 -u
Logging out of session [sid: 1, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260]
Logout of [sid: 1, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] successful.

That's all.
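Extracting the UUID from blkid output can be scripted rather than copied by hand. A small sketch using the UUID this article's fstab entry relies on; the sed expression simply pulls out the quoted UUID value:

```shell
# Build the /etc/fstab line for the iSCSI disk from a blkid record.
# The sample line uses the UUID from this article's fstab entry.
blkid_line='/dev/sdb: UUID="c7469f92-75ec-48ac-b42d-d5b89ab75b39" TYPE="xfs"'

uuid=$(echo "$blkid_line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
printf 'UUID=%s /mnt xfs _netdev 0 0\n' "$uuid"
```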