Using Network Attached Storage as a Gateway to Storage Area Networks


Dave Rosenberg
Hewlett-Packard Company
825 14th Street SW
Loveland, CO 80537
dave_rosenberg@hp.com

Contents

Overview
NAS Architecture
SAN Architecture
NAS and SAN Fusion
How to Incorporate NAS into an Existing SAN
The Future of NAS/SAN Fusion

Network Attached Storage (NAS) is defined as a purpose-built device (appliance) that presents one or more file systems to client systems via a network. A Storage Area Network (SAN) is defined as a dedicated network of interconnected storage and servers that offers any-to-any communication between these devices and allows multiple servers to access the same storage device independently. A SAN can be a very large pool of storage for running enterprise-level applications. Can you connect a NAS device into a SAN so network clients get efficient file system access to all that SAN storage? Absolutely!

Overview

As the need for storage continues to grow at an unprecedented rate, the question of how to meet that need while keeping IT budgets under control remains at the forefront for most enterprise system managers. NAS and SAN are two recent technologies helping to address this problem.

NAS

NAS devices have been around for a few years and are rapidly growing in popularity. Often called filers, NAS devices represent a simple and quick method for adding storage to a network. NAS devices are essentially dedicated file servers consisting of a processor, network interface, disk storage and NAS software. Most NAS boxes use operating system and file system software specifically tuned to provide efficient network file access. NAS boxes do not run databases; they do not run third-party applications; and they do not provide print serving. NAS boxes only serve files, so they are made to serve them very well. Most NAS devices are heterogeneous, meaning they can provide file access to both Windows (SMB/CIFS) and UNIX (NFS) based clients. Some can also be used as NetWare filers.

NAS boxes are known for their ease of installation on the network. Most require only a Local Area Network (LAN) connection and power. A web browser running on a network client is used to configure the NAS. NAS boxes can be installed and file storage access made available to clients without ever taking down the enterprise. That makes them extremely flexible to use.

NAS is often described in terms of a head and storage. The NAS head consists of the dedicated processor, network interface and NAS software. The storage part is the disk controllers and disk drives.

Benefits of NAS technology include:

Multiple client file system access - This capability is not available with SANs. While SANs do offer multiple server access to devices, most applications expect multiple client access to files. This is one of the unique features of NAS.

Ease of understanding - While Fibre Channel has matured in its expanded use, some still regard it as less understood than the IP technologies that users have experienced for years. As such, users understand the technical foundation of NAS; this understanding provides high confidence in its use.

Ease of maintenance - NAS systems are comprised of elements from one manufacturer and are well integrated. Many SANs are integrated with multiple components from multiple vendors, thus creating greater complexity for customers who attempt to maintain them. Component failures on the NAS can be fixed without application server downtime.

Inexpensive implementation - NAS systems allow users to leverage their existing network infrastructures, and there is no need to build a back-end SAN to implement NAS.

Protection against failures - Many NAS systems can automatically replicate their files to other disk storage in clustered configurations, thus providing alternative storage to draw from if the primary storage goes down or becomes corrupted.

Virus protection - Some NAS systems include anti-virus software to prevent hostile attacks against files.

SAN

SANs provide an efficient method for linking storage systems with servers and represent a significant improvement over traditional Direct Attached Storage (DAS). With DAS, fixed storage systems are hard-wired to a single server. If a server needs additional storage capacity, the only option is to upgrade the system. IT budgets are, of course, limited, now more than ever, so not all servers in an enterprise can be upgraded to meet user need. Furthermore, once upgraded, a server is seldom downgraded. As a result, the additional storage remains on the system forever, whether or not the server's users still need it. This creates a mismatch between available system resources and actual user requirements. The enterprise may have the necessary storage, but it's not on the server where users need it.

SANs can reduce this resource mismatch problem, providing significant cost savings. SANs create a pool from one or more disks or storage systems and allow IT managers to allocate necessary storage from the pool to individual servers as needed. Unlike a NAS device, servers in a SAN are often general-purpose. A SAN server can be running a database, email or any other application. Of course, a server on a SAN can also be a file server. SANs allow scaling of storage by decoupling the direct link of storage and servers. SANs allow system management to shift focus from device management to management of the entire storage environment. SANs also allow multi-vendor storage to be managed as one coherent pool.

SAN benefits include:

Serving raw devices - A SAN provides raw block-level storage to its servers, which can run any type of application. NAS cannot serve raw blocks. If your application requires access to a raw device, NAS is not an option.

Flexibility and scalability - SAN storage can grow by simply attaching additional storage capacity to the SAN.

Easier data backup - The cross-sectional bandwidth of a SAN makes large-scale backup and recovery much easier and faster. With NAS, performing a full backup of the disk space used for the file system can task the I/O system much more than any other application.

NAS and SAN - Together

Because of increasing volume requirements, some applications for NAS are pushing the bandwidth and performance limits of traditional NAS systems. This trend has created a requirement for dramatically greater NAS storage to meet these increased data needs. The net effect of these additional requirements is a convergence of NAS and SAN technologies, referred to as NAS/SAN fusion. Combining NAS and SAN technologies allows IT managers to capitalize on the advantages of both networked storage approaches. It also enables corporations to better cope with the unrelenting pressures of data storage growth and limited IT resource investment funds.

NAS Architecture

The purpose of a NAS device is to serve files to client workstations. A NAS system physically resembles a general-purpose server, consisting of a processor, operating system, network interface and storage. However, because NAS is dedicated to file serving, the software that operates on a NAS platform is limited to providing file serving.
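The block-versus-file distinction above can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's API or the SCSI/NFS protocols themselves: a SAN LUN hands out raw, fixed-size blocks addressed by number, while a NAS hands out whole files addressed by path.

```python
# Illustrative sketch: block-level (SAN-style) vs file-level (NAS-style) access.
# All class and method names here are hypothetical.

class BlockDevice:
    """A SAN LUN looks like this to a server: numbered, fixed-size blocks."""
    def __init__(self, num_blocks, block_size=512):
        self.block_size = block_size
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

    def read(self, lba):            # address by Logical Block Address
        return self.blocks[lba]

    def write(self, lba, data):
        assert len(data) == self.block_size
        self.blocks[lba] = data

class FileServer:
    """A NAS hides the blocks and exposes only named files."""
    def __init__(self):
        self.files = {}             # path -> contents

    def read(self, path):           # address by path, not block number
        return self.files[path]

    def write(self, path, data):
        self.files[path] = data

lun = BlockDevice(num_blocks=8)
lun.write(0, b"x" * 512)            # a database could lay out its own structures

nas = FileServer()
nas.write("/shares/payroll/jan.csv", b"name,amount\n")
```

A database or other raw-device application needs the first interface; a file-serving client only ever sees the second.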

Figure 1 NAS Architecture

NAS Operating System

The NAS operating system can be a standard operating system such as Windows or UNIX. A NAS operating system can also be proprietary. Proprietary NAS operating systems can be made specifically to speed file serving. This specialization gives them a performance advantage over industry-standard operating systems that are designed to support general-purpose applications. Most NAS operating systems support both Windows and UNIX file structures, so that they can serve files to both Windows and UNIX clients. Disk caching and other techniques often are used to speed file serving.

NAS Head

The NAS head primarily consists of the server hardware platform, network interface cards (NICs) and Fibre Channel or SCSI Host Bus Adapters (HBAs). The server platform can be custom for the NAS but typically is based on a manufacturer's standard server. A standard Ethernet NIC provides connection to the network and client workstations. The HBA(s) provides connection to the storage components. Often multiple HBAs will be used to improve performance by increasing processor-to-storage-system bandwidth.

Storage

The storage component of a NAS can vary greatly. Some NAS systems, especially at the low end, are packaged such that the storage resides with the head in a single chassis. In the middle to upper performance and capacity range, the storage component is separated from the NAS head. This provides for greater storage options such as increased capacity, RAID, redundant storage networking, etc.

Management Interface

The management interface is used to administer the NAS, including setting an IP address, defining users, creating storage shares and exports, etc. The user interface for a NAS can be either command-line driven or graphical. Most GUI management interfaces are accessible over the web. Command line interfaces are managed via telnet.

NAS in the Network

The figure below shows a NAS system configured into a network. Files are shared as directories over the network. Windows clients see the shared storage as directories that can be mapped as local disk drives. UNIX systems see the storage as exported file systems that can be mounted.

Figure 2 Traditional NAS system on a LAN (racked NAS head and storage arrays)

SAN Architecture

As defined earlier, a SAN is a dedicated network of interconnected storage and servers that offers any-to-any communication. A network of switches and high-speed fiber provides the any-to-any connection. Data is transferred between servers and disks (or disk arrays) at the block level. The switches determine data block routing between a particular server and a particular disk device. Together, the high-speed fiber and the switches are referred to as a fabric. Servers connect to the fabric using a Host Bus Adapter (HBA). Although the fabric can allow any server in the SAN to communicate with any disk system on the SAN, in reality, block access must be very well controlled. If two servers were to access the same data, corruption could occur. Specific software must be used when multiple servers are to share the same SAN storage. This special software is usually used by systems designed for fail-over or data mirroring.
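The warning above about two servers corrupting shared data can be made concrete with a small Python sketch (purely illustrative; on a real SAN the same race happens at the level of file system metadata blocks). Each server caches a block, modifies its own copy, and writes it back; whichever write lands last silently erases the other server's update:

```python
# Illustrative sketch: why uncoordinated block access to shared SAN storage
# corrupts data. Two "servers" do read-modify-write on the same block with
# no locking, so the last writer wins and one update is lost.

disk_block = {"alice": 100, "bob": 100}   # shared block on the SAN array

# Server 1 and Server 2 each read (cache) the block independently.
server1_cache = dict(disk_block)
server2_cache = dict(disk_block)

server1_cache["alice"] += 50              # server 1 updates alice
server2_cache["bob"] -= 30                # server 2 updates bob

disk_block = server1_cache                # server 1 writes back
disk_block = server2_cache                # server 2 writes back, clobbering it

# bob's change survived, but alice's update from server 1 is gone.
print(disk_block)                         # {'alice': 100, 'bob': 70}
```

This is exactly the failure that the fail-over and mirroring software mentioned above must coordinate around, and what the zoning and LUN-masking controls in the next section prevent by default.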

Figure 3 Storage Area Network with Windows and UNIX Servers

To prevent one server on a SAN from corrupting another server's data on the same SAN, storage on a SAN is partitioned and allocated to individual servers. Partitioning of disk system storage is based on Logical Unit Numbers (LUNs). A LUN is the portion of a storage array's capacity that appears as a physical disk. A RAID manager creates LUNs and can present multiple LUNs via a single port. A JBOD (just a bunch of disks) is typically a single LUN. Only storage capacity that has been assigned a LUN is available to operating systems.

There are three primary methods for ensuring allocated storage remains accessible to only a single server:

Host-level control
Control access within the network
Control access within the storage array

Host-level control requires special software on each server on the SAN, essentially delegating access control to each server, making the server itself responsible for accessing storage assigned to it. Host-level control allows storage sharing at a block level, controlling multiple OS access to LUNs. Host-level control allows assignment and reassignment of storage from one server to another without the need to reboot servers. Host control can offer simplified management through automatic server and storage discovery.

Control access within the network uses a system called zoning. Zoning can be either hard or soft. Hard zoning specifies which ports within a switch can communicate with the other ports in the switch. Hard zoning is very secure and often used to demonstrate a physical separation between storage systems shared between multiple users. Zoning can be somewhat inflexible, however. Because it uses physical port connections to control access, all the LUNs routed to a particular port are accessible to all other ports permitted to communicate with it. Soft zoning is similarly limited; however, it uses a mapping of worldwide names to specify which server can access which port on a storage device. A worldwide name is a unique hardware address within Fibre Channel that identifies each HBA in servers. It is similar to the MAC address of a network card used in Local Area Network topologies.

Example: A Windows NT server and a UNIX-based server are connected via fabric in a SAN environment. The Windows NT host desires exclusive access to LUN 0 and LUN 1 on Array 1 and shares its associated file systems to client workstations. A UNIX server is on the same SAN; it desires access to LUN 2 and LUN 3 on Array 2.

Figure 4 Storage Area Network with Zoned LUN Allocation

Control access within the storage array places the responsibility of access control with the storage device using a method called LUN masking. Because access control resides with the storage device, this method provides very secure access control. In LUN masking, the RAID controller within the storage system makes LUNs available only to server HBAs with specific worldwide names. LUNs that are not to be accessed by a particular server are hidden from view, or masked. This makes it impossible for any server to access a LUN unless the storage device has permitted it. Information about which server HBA can access the storage device is kept in a LUN masking table in the RAID controller of each array. Since connection tables must be maintained within the storage devices, this method does have the drawback that control must be managed device by device rather than centrally.
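Soft zoning and LUN masking are both, in effect, table lookups keyed by worldwide name, and their combination determines which LUNs a server can actually reach. A minimal Python sketch (the WWNs, zone table and masking tables are hypothetical examples, not any switch or array's real interface) might look like:

```python
# Illustrative sketch of soft zoning + LUN masking. The WWNs and tables
# below are invented examples, not a real vendor interface.

# Soft zoning (in the fabric): which array ports each HBA WWN may reach.
zone_table = {
    "0x000011110000ABF0": {"array1_port"},            # Windows NT server
    "0x000011110000ABF1": {"array2_port"},            # UNIX server
}

# LUN masking (in each array's RAID controller): WWN -> permitted LUNs.
masking_tables = {
    "array1_port": {"0x000011110000ABF0": [0, 1]},
    "array2_port": {"0x000011110000ABF1": [2, 3]},
}

def visible_luns(wwn):
    """LUNs a server sees: it must be zoned to a port AND unmasked there."""
    luns = {}
    for port in zone_table.get(wwn, set()):
        luns[port] = masking_tables.get(port, {}).get(wwn, [])
    return luns

print(visible_luns("0x000011110000ABF0"))   # {'array1_port': [0, 1]}
print(visible_luns("0x000011110000ABF1"))   # {'array2_port': [2, 3]}
```

Note that an unknown WWN sees nothing at all, which is the default-deny behavior that makes masking secure: a server gets access only when both tables explicitly grant it.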

Example: A Windows NT server and a UNIX-based server are connected via fabric in a SAN environment. The Windows NT host desires exclusive access to LUN 0 on Array 1 and shares its associated file systems to client workstations. A UNIX server is on the same SAN; it desires access to LUN 2 and LUN 3 on Array 2 and shares its associated file systems to client workstations. LUN 1 is not allocated to any server.

Windows server HBA WWN: 0x000011110000ABF0 - Visible LUNs: LUN 0
UNIX server HBA WWN: 0x000011110000ABF1 - Visible LUNs: LUN 2, LUN 3

Array 1 LUN Masking Table: 0x000011110000ABF0, LUN 0, WRITE
Array 2 LUN Masking Table: 0x000011110000ABF1, LUN 2, WRITE; 0x000011110000ABF1, LUN 3, WRITE

Figure 5 Storage Area Network with LUN Masking

NAS and SAN Fusion

Combining NAS and SAN is a union of two technologies, resulting in significant benefits. When using NAS in combination with a SAN, the NAS device is used as a dedicated file server, providing clients with file access to SAN storage. Only the head of the NAS is used in a SAN configuration. Storage for the NAS head is then allocated to the NAS head from the storage available within the SAN. Thus, the NAS head becomes a file server gateway into the SAN, and the SAN becomes a pool of storage for use by the NAS head. Storage allocated to the NAS can be increased or decreased as user demand varies.
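The pooling idea above, with the NAS head drawing a variable amount of storage from the SAN alongside general-purpose servers, can be sketched as a simple allocator. This is a toy model with made-up names and units, not a real storage management tool:

```python
# Illustrative sketch: a SAN as a pool of capacity that can be grown or
# shrunk per consumer (here, a NAS head). All names are hypothetical.

class SanPool:
    def __init__(self, total_gb):
        self.total_gb = total_gb
        self.allocations = {}                 # consumer -> GB allocated

    def free_gb(self):
        return self.total_gb - sum(self.allocations.values())

    def resize(self, consumer, new_gb):
        """Grow or shrink a consumer's allocation as demand varies."""
        current = self.allocations.get(consumer, 0)
        if new_gb - current > self.free_gb():
            raise ValueError("not enough free capacity in the SAN pool")
        self.allocations[consumer] = new_gb

pool = SanPool(total_gb=2000)
pool.resize("nas_head", 500)      # initial allocation to the NAS gateway
pool.resize("db_server", 800)     # general-purpose servers share the pool
pool.resize("nas_head", 700)      # user demand grew; expand the NAS share
print(pool.free_gb())             # 500
```

The point of the sketch is the last line: with DAS, growing the NAS share by 200 GB would mean a hardware upgrade; with a pooled SAN it is a reallocation.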

Using NAS as a gateway to SAN storage has another advantage. General-purpose servers placed on a SAN are either Windows file system based (CIFS) or UNIX based (NFS). But most NAS devices are heterogeneous, and can provide both CIFS shares and NFS exports of the same data. Therefore, using NAS as a gateway to a SAN can provide both Windows and UNIX based workstations access to the same files. Networks with platforms or protocols in transition or with planned migration will benefit from a NAS-SAN gateway. Plans to migrate from Microsoft Windows to Linux are common scenarios today. Such a migration is easily facilitated with a NAS-SAN gateway that can handle CIFS and NFS concurrently. NAS-SAN fusion provides the substantive advantages of greater storage consolidation and storage management.

Improved NAS Backup

The complications of backing up NAS data from traditional file serving appliances are numerous. And while some clever NDMP methods have been developed to ease this complexity, the task of backing up file data from a NAS appliance remains challenging, and certainly less reliable than the comparable experience of running backups from SAN-attached disk to tape. As such, the consolidation and convenience that NAS/SAN fusion provides with regard to NAS data backup is significant. Although separate LUNs are created and used for NAS and SAN data, the backup of all SAN-resident data is collectively controlled. This greatly simplifies the backup of NAS data when compared to traditional standalone NAS integrated appliances.

How to Incorporate NAS into an Existing SAN

Incorporating NAS into an existing SAN is not very different from adding any other server into a SAN. A NAS device (head only) is basically another server on the SAN. It just happens to be dedicated to file serving. The NAS head to be connected to the SAN must support Fibre Channel. If you purchased a NAS head specifically to integrate with a SAN, it should have at least one Fibre Channel HBA. If you are converting a standalone NAS to operate on a SAN, you may need to add a Fibre Channel HBA to your NAS head. Check with the NAS manufacturer to make sure the appropriate drivers, etc. are available to support Fibre Channel.

Figure 6 NAS with an Existing SAN

Basic Procedures

Before connecting the NAS to the SAN, you should identify the LUNs on the SAN that will be dedicated to NAS. Only two of the LUN allocation methods discussed earlier will work with NAS: 1) network control using zoning, and 2) storage array control by LUN masking. Host control will not typically work because it requires special software to be loaded onto the NAS device, and NAS devices do not allow third-party agents to be loaded onto them. Changing zoning in switches or LUN masking tables in storage arrays is tedious, but if you've done it before for the existing SAN, the process should be familiar. When LUN allocation on the SAN is complete, make sure all current servers still have access to their data. A mistake in the LUN allocation could leave a server without access to its data. Most users will not be very happy.

The next steps are to connect the NAS to the SAN, then to the LAN, and turn it on. Follow the NAS head manufacturer's instructions for configuring the NAS on your IP network. Next, you will need to configure storage on the NAS. The NAS should automatically discover any LUNs on the SAN that have been allocated to it. If this does not happen, check either the zoning (if zoning was used) or worldwide names (if LUN masking was used). The worldwide name of the NAS HBA must be in the LUN masking table of the array to be accessed by the NAS. Follow the NAS head manufacturer's procedure for creating volumes and file systems associated with each LUN.
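The verification step above, confirming that every existing server still sees its LUNs after the allocation change, amounts to comparing expected access against what the new masking tables grant. A hypothetical sketch (invented server names and table layout, not a real array's interface):

```python
# Illustrative sketch: after changing masking for the new NAS head, verify
# that no existing server lost access to its LUNs. Data model is invented.

expected = {                        # what each server relied on before the change
    "db_server":   {("array1", 0), ("array1", 1)},
    "mail_server": {("array2", 2)},
}

new_masking = {                     # tables after granting LUN 3 to the NAS head
    "array1": {("db_server", 0), ("db_server", 1)},
    "array2": {("mail_server", 2), ("nas_head", 3)},
}

def reachable_luns(server, masking):
    """LUNs a server can still reach under the new masking tables."""
    return {(array, lun)
            for array, table in masking.items()
            for srv, lun in table if srv == server}

def check_allocation(masking):
    problems = []
    for server, luns in expected.items():
        missing = luns - reachable_luns(server, masking)
        if missing:
            problems.append((server, missing))
    return problems                 # an empty list means no server lost access

print(check_allocation(new_masking))   # []
```

If the list comes back non-empty, fix the zoning or masking before connecting the NAS; an empty list means the reallocation left existing servers untouched.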
Finally, create CIFS shares and NFS exports of directories for client users.

There are some things to watch out for: A SAN needs to be managed, and several SAN management tools have recently emerged. These products, called Storage Area Managers (SAMs), make it easier for IT managers to configure and monitor SANs. However, many of the capabilities of these products require special software (agents) to be placed on SAN servers. Most NAS devices are closed systems; that is, they do not allow third-party software to be added. That limits the features that can be supported by SAM products, since they cannot load their specific agents on the

NAS. A few NAS manufacturers, who happen to also be developers of SAM software, have started providing integrated NAS-SAM solutions. However, these solutions still remain limited.

The Future of NAS/SAN Fusion

For the most part, the management tasks for NAS and SAN storage, NAS file services, NAS processor state-of-health and SAN quality-of-service are completely segregated, requiring separate human resource oversight for each of these activities. Market demand for an open SAN environment, where devices from multiple manufacturers can co-exist and be managed by centralized SAMs, is forcing the industry to set standards for device management.

Storage Networking Industry Association

The Storage Networking Industry Association (SNIA) was formed in 1997 with the mission to ensure that storage networks become efficient, complete, and trusted solutions across the IT community. Working through the SNIA, developers of storage and networking products, in addition to system integrators, application vendors, and service providers, are working to establish standards for storage device managers. These standards will allow NAS and SAN device managers to interoperate, thus simplifying the overall management of both.

Figure 7 Multi-vendor Open SAN with NAS

Common Information Model Based Management

Several developers are moving towards achieving this interoperability based on the Common Information Model (CIM). CIM is a common data model of an implementation-neutral schema for describing overall management information in a network/enterprise environment. CIM is based on an object-oriented model. Using CIM as a common base, NAS and SAN will one day be able to communicate with each other such that both can be managed from a central user interface. The IT manager will not need to use a SAM tool to allocate LUNs on the SAN, then go to the NAS management program to create volumes, then file systems, and finally expose them as shares for a client workstation to use. Rather, the IT manager will tell the SAN that 150 GB of Windows file system space called Payroll is needed. And it will be there.
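The end state described above, where one request drives LUN allocation, volume and file system creation, and share export, can be sketched as a single orchestration call. Everything below is hypothetical (no real CIM provider or vendor API is used); it only illustrates the chaining that a CIM-based manager would hide from the administrator:

```python
# Illustrative sketch of unified, CIM-style provisioning: one request fans out
# into the SAM and NAS steps that are manual today. All names are invented.

class UnifiedStorageManager:
    def __init__(self):
        self.log = []

    # Each private step stands in for a call to a different management tool.
    def _allocate_lun(self, size_gb):
        self.log.append(f"SAM: allocate {size_gb} GB LUN on SAN")
        return "LUN_42"                      # hypothetical identifier

    def _create_volume_and_fs(self, lun, fs_type):
        self.log.append(f"NAS: create {fs_type} file system on {lun}")

    def _export_share(self, name):
        self.log.append(f"NAS: expose CIFS share '{name}'")

    def provision_share(self, name, size_gb, fs_type="NTFS"):
        """The administrator's single request: '150 GB called Payroll'."""
        lun = self._allocate_lun(size_gb)
        self._create_volume_and_fs(lun, fs_type)
        self._export_share(name)
        return self.log

mgr = UnifiedStorageManager()
for step in mgr.provision_share("Payroll", 150):
    print(step)
```

The three log lines correspond one-to-one with the SAM tool, NAS management program, and share-creation steps the text says an IT manager must perform by hand today; a common model like CIM is what would let one interface drive all three.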