UNAM TIER-1 VCE for Grid Computing Environment
Roberto Castro, EMC, roberto.castro@emc.com
Luis Perez, Cisco Systems, luperez@cisco.com
AGENDA
Tier-1 Challenges Overview
Tier-1 Strategy
VCE Architecture for UNAM Tier-1
Q&A
TIER-1 CHALLENGES OVERVIEW
High bandwidth requirements
Huge storage demand (PB scale)
ALICE/Grid collaboration
Computing power
Technology management
Staff/human resources
Processes/automation
Scalability
SLAs/availability
THE VISION FOR UNAM TIER-1
[Diagram: NOC/SOC and the Tier-1 DC attached to the UNAM backbone; links 10G -> 40G, 10G, and 10/40G]
Computing power
High-capacity storage
High bandwidth
Simple, scalable and flexible
Guaranteed SLAs
Cloud services for other applications
End-to-end solution: QoS, security, SLAs
TIER-1 STRATEGY
Standards
Best Practices
Reference Architectures
VCE OVERVIEW
Cisco, EMC, Intel and VMware formed an alliance to accelerate the adoption of utility/cloud-based models while reducing both risk and operational complexity.
Applications
Operating Systems
Virtualization SW
Computing
Information Infrastructure and Services
Storage and Smart Data Management
PLATFORM ARCHITECTURAL PRINCIPLES
With VCE, the Tier-1 platform achieves synergy among components instead of mere interoperability:
Higher performance
High availability and converged networking (unified fabric)
Flexibility and scalability
Orchestration and automation to simplify management
Proposed units of IT infrastructure with known operational characteristics
Known deployment characteristics: power, space and cooling
Non-stop operation
Efficient use of I/O and storage
Holistic approach
THE DATA CENTER BRIDGING (DCB) TASK GROUP
The charter of the DCB TG is to provide enhancements to existing 802.1 bridge specifications to satisfy the requirements of protocols and applications in the data center. Existing high-performance data centers typically comprise multiple application-specific networks running on different link-layer technologies: Fibre Channel for storage, InfiniBand for high-performance computing, and Ethernet for network management and LAN connectivity. The specifications from this TG will enable 802.1 bridges to be used to deploy a converged network in which all applications run over a single physical infrastructure.
CASE FOR A UNIFIED DATA CENTER FABRIC
[Diagram: separate primary and secondary networks (complexity, cost, power) collapse into a single unified fabric]
Universal I/O, ubiquitous connectivity
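The consolidation case above can be illustrated with a back-of-the-envelope port count. A minimal sketch, assuming per-server adapter counts (2 LAN NIC ports + 2 FC HBA ports versus 2 converged ports) and a hypothetical farm size that are not actual UNAM Tier-1 figures:

```python
# Rough, illustrative model of the cabling/adapter savings from a
# unified fabric. Per-server port counts and the farm size are
# assumptions for this sketch, not measured Tier-1 numbers.

def adapters_and_cables(servers, ports_per_server):
    """Each server port needs one adapter port and one cable run."""
    ports = servers * ports_per_server
    return {"adapter_ports": ports, "cables": ports}

servers = 200  # hypothetical compute farm size

# Separate fabrics: redundant Ethernet (2 NIC ports) + redundant FC (2 HBA ports)
separate = adapters_and_cables(servers, ports_per_server=4)

# Unified fabric: two Converged Network Adapter (CNA) ports carry both LAN and SAN
unified = adapters_and_cables(servers, ports_per_server=2)

savings = 1 - unified["cables"] / separate["cables"]
print(f"Separate fabrics: {separate['cables']} cables")
print(f"Unified fabric:   {unified['cables']} cables ({savings:.0%} fewer)")
```

Halving adapter ports also halves switch ports, transceivers, and the power and cooling they draw, which is the complexity/cost/power argument of the slide.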
A LARGER PICTURE
IEEE 802
Evolution of Ethernet (10 GE, 40 GE, 100 GE, copper and fiber)
Evolution of switching (Priority Flow Control, Enhanced Transmission Selection, Data Center Bridging eXchange, others)
INCITS/T11 (InterNational Committee for Information Technology Standards)
On June 3rd, 2009, the FC-BB-5 working group of T11 completed its work and unanimously approved a final standard, forwarded for further processing as an ANSI (American National Standards Institute) standard
IETF
TRILL (Transparent Interconnection of Lots of Links)
InfiniBand Trade Association
Announced RDMA over Converged Ethernet (RoCE), April 2010
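Of the switching enhancements listed above, Priority Flow Control (IEEE 802.1Qbb) is what allows lossless storage traffic to share Ethernet with ordinary LAN traffic: a congested receiver pauses only the lossless priority instead of the whole link. As a rough illustration of the mechanism, the sketch below assembles the on-wire PFC MAC control frame; the helper name and the choice of priority 3 for FCoE are assumptions for the example.

```python
import struct

def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """Build an IEEE 802.1Qbb Priority Flow Control (PFC) MAC control frame.

    pause_quanta maps priority (0-7) -> pause time in 512-bit-time quanta.
    0xFFFF asks the peer to stop that priority for the maximum time;
    0 resumes it. Other priorities keep flowing untouched.
    """
    dst_mac = bytes.fromhex("0180c2000001")   # reserved MAC control address
    ethertype = struct.pack("!H", 0x8808)     # MAC Control EtherType
    opcode = struct.pack("!H", 0x0101)        # PFC opcode (plain PAUSE is 0x0001)
    enable_vector = 0                         # bit per priority being paused/resumed
    quanta = []
    for prio in range(8):
        if prio in pause_quanta:
            enable_vector |= 1 << prio
        quanta.append(struct.pack("!H", pause_quanta.get(prio, 0)))
    payload = struct.pack("!H", enable_vector) + b"".join(quanta)
    frame = dst_mac + src_mac + ethertype + opcode + payload
    return frame.ljust(64, b"\x00")           # pad toward the Ethernet minimum (FCS not modeled)

# Pause the lossless FCoE priority (commonly priority 3) for the maximum time:
frame = build_pfc_frame(bytes(6), {3: 0xFFFF})
```

This per-priority pause, combined with Enhanced Transmission Selection for bandwidth sharing and DCBX for capability negotiation, is what makes the converged fabric of the previous slides viable for Fibre Channel traffic.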
FLEXIBLE INFORMATION INFRASTRUCTURE
Multiprotocol storage environment: FCoE, Fibre Channel, iSCSI, CIFS/NFS, RDMAoE
Storage platform: data protection, ease of management, FS layer, maximum response
Tiered media: SSD, FC, SAS, SATA
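One way the tiered media on this slide (SSD, FC, SAS, SATA) could be driven is a placement policy keyed to each workload's IOPS demand, so maximum-response data lands on SSD and cold data on SATA. The thresholds and workload names below are invented for illustration, not part of the UNAM design.

```python
# Illustrative tier-placement policy for the tiered media above.
# The tier names match the slide; the IOPS floors are assumptions
# made up for this sketch.

TIERS = [
    # (tier name, minimum IOPS demand that justifies the tier)
    ("SSD", 10_000),   # maximum-response workloads
    ("FC", 2_000),
    ("SAS", 500),
    ("SATA", 0),       # capacity/archive tier
]

def place_workload(iops_demand: int) -> str:
    """Return the first (fastest) tier whose IOPS floor the demand reaches."""
    for name, min_iops in TIERS:
        if iops_demand >= min_iops:
            return name
    return "SATA"

for name, iops in [("ALICE event store", 15_000),
                   ("Grid scratch space", 3_000),
                   ("Tape staging area", 100)]:
    print(f"{name:18s} -> {place_workload(iops)}")
```

A real array would make this decision automatically per extent rather than per workload, but the ordering logic, fastest tier first, capacity tier as the catch-all, is the same.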
SUMMARY
An integrated platform for Tier-1 requirements
A proven platform to scale your applications
Greatly improved service levels!
The platform includes computing power, storage capacity and converged networking
All with known operational parameters
Known I/O processing and efficiency
Known scalability
Eliminates trial and error and pre-staging
VCE SEAMLESS SUPPORT EXPERIENCE
Enabling pervasive virtualization and private cloud
Unified inter-company collaboration tool
Joint problem re-creation labs
Single experience for onsite and remote support
Cross-company, cross-product-trained support experts
Cooperative engineering groups
Common metrics and alignment
Shared problem resolution and escalation processes
Documented processes via best-practice Support Implementation Plan
THANK YOU
Q&A