CHAMELEON: A LARGE-SCALE, RECONFIGURABLE EXPERIMENTAL ENVIRONMENT FOR CLOUD RESEARCH


CHAMELEON: A LARGE-SCALE, RECONFIGURABLE EXPERIMENTAL ENVIRONMENT FOR CLOUD RESEARCH
Principal Investigator: Kate Keahey
Co-PIs: J. Mambretti, D. K. Panda, P. Rad, W. Smith, D. Stanzione
NB: With an additional brief description of CloudLab
14th Annual Global LambdaGrid Workshop, Queenstown, New Zealand, September 30 - October 1, 2014

SOLVING TODAY'S RESEARCH CHALLENGES
- Big Data: data volume, velocity, and variety
- Large scale: programmable networks, cheap and ubiquitous sensors, and other emergent trends
- Big Instruments: cyber-physical systems, observatories
- Big Compute: a wide range of data analytics
Cross-cutting themes: engagement, reconfigurability, connectedness

NATIONAL SCIENCE FOUNDATION COMPUTER AND INFORMATION SCIENCE AND ENGINEERING (CISE) RESEARCH INFRASTRUCTURE PROGRAM
The CISE Research Infrastructure (CRI) program drives discovery and learning in the computing disciplines by supporting the creation and enhancement of world-class computing research infrastructure. This infrastructure enables CISE researchers to advance the frontiers of CISE research. Through the CRI program, CISE seeks to ensure that individuals from a diverse range of academic institutions have access to such infrastructure.
The Mid-Scale Infrastructure - NSFCloud solicitation constitutes a track within the CRI program specifically supporting research infrastructure that enables the academic research community to develop and experiment with novel cloud architectures addressing emerging challenges, including real-time and high-confidence systems. Phase I of NSFCloud supports required infrastructure design and ramp-up activities, as well as demonstration of readiness for full-fledged execution. Phase II will enable the infrastructure to be fully staffed and operational, fulfilling the proposed mission of serving as a testbed that is used extensively by the research community.

CHAMELEON: A POWERFUL AND FLEXIBLE EXPERIMENTAL INSTRUMENT
- Large-scale instrument: targeting Big Data, Big Compute, and Big Instrument research; more than 650 nodes and 5 PB of disk over two sites, with a 100G network
- Reconfigurable instrument: bare-metal reconfiguration, operated as a single instrument, with a graduated approach for ease of use
- Connected instrument: Workload and Trace Archive; partnerships with production clouds (CERN, OSDC, Rackspace, Google, and others); partnerships with users
- Complementary instrument: complementing GENI, Grid'5000, and other testbeds
- Sustainable instrument: strong connections to industry

CHAMELEON HARDWARE
- 12 Standard Cloud Units (SCUs) across the two sites (Chicago and Austin): 10 at one site and 2 at the other, each with its own switch, 42 compute nodes, and 4 storage nodes; the SCUs connect to the core network and are fully connected to each other
- Heterogeneous Cloud Units with alternate processors and networks
- Core services at each site: front-end and data-mover nodes; 3 PB of central file systems
- Chameleon core network: 100 Gbps uplink to the public network at each site; links to UTSA, GENI, and future partners
- Totals: 504 x86 compute servers (12 SCUs x 42 nodes), 48 distributed storage servers, 102 heterogeneous servers, and 16 management and storage nodes

CAPABILITIES AND SUPPORTED RESEARCH
The instrument offers a graduated set of access levels, each supporting a class of research:
- Persistent, reliable, shared clouds: development of new models, algorithms, platforms, auto-scaling, high availability, etc., as well as innovative application and educational uses (see the sketch below)
- Isolated partitions with pre-configured images and reconfiguration: repeatable experiments in new models, algorithms, platforms, auto-scaling, high availability, cloud federation, etc.
- Isolated partitions with full bare-metal reconfiguration: virtualization technology (e.g., SR-IOV, accelerators), systems, networking, infrastructure-level resource management, etc.
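The first access level is an ordinary OpenStack cloud (per the project schedule below, the FutureGrid resources at UC and TACC come online as OpenStack clouds in Fall 2014). As a minimal sketch of what using that level looks like, the Python snippet below launches a single instance with the python-novaclient library; the credentials, endpoint, image name, and flavor name are hypothetical placeholders, not actual Chameleon values.

```python
# Minimal sketch: launching an instance on an OpenStack cloud with
# python-novaclient. All names and the endpoint below are hypothetical
# placeholders, not real Chameleon credentials or resources.
from novaclient import client

nova = client.Client(
    "2",                                    # Compute API version
    "demo_user", "demo_password",           # placeholder credentials
    "demo_project",                         # placeholder project/tenant
    "https://cloud.example.org:5000/v2.0",  # placeholder Keystone endpoint
)

# Look up an image and a flavor by name, then boot one instance.
image = nova.images.find(name="ubuntu-14.04")
flavor = nova.flavors.find(name="m1.small")
server = nova.servers.create(name="experiment-node",
                             image=image, flavor=flavor)
print("Launched instance %s (status: %s)" % (server.id, server.status))
```

The other two access levels trade this convenience for stronger isolation and control, down to reimaging the bare-metal node itself.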

OUTREACH AND ENGAGEMENT
- Early User Program: committed users driving and testing new capabilities, with an enhanced level of support
- Chameleon Workshop: annual workshop to inform, share experimental techniques, solutions, and platforms, discuss upcoming requirements, and showcase research
- Advisory bodies:
  - Research Steering Committee: advises on the capabilities needed to investigate upcoming research challenges
  - Industry Advisory Board: provides synergy between industry and academia

PROJECT SCHEDULE
- Fall 2014: Hit the ground running: FutureGrid resources at UC and TACC available as OpenStack clouds
- Spring 2015: Maintain the momentum: initial bare-metal reconfiguration available on FutureGrid UC and TACC resources for Early Users
- Summer 2015: New hardware: large-scale homogeneous partitions available to Early Users
- Fall 2015: Large-scale homogeneous partitions generally available
- 2015/2016: Refinements to experiment management capabilities
- Fall 2016: Heterogeneous hardware available

INTERNATIONAL PARTNERSHIPS
Multiple cloud testbeds are being implemented around the world. This project will engage with the communities developing those testbeds, with a special focus on interoperability among them. These processes will build on existing international research testbeds, such as the international SDN/OpenFlow testbed based on the GLIF.

TEAM
- Kate Keahey: Chameleon PI, Science Director
- Joe Mambretti: Programmable Networks
- D. K. Panda: High-Performance Networks
- Paul Rad: Industry Liaison
- Warren Smith: Director of Operations
- Dan Stanzione: Facilities Director

CLOUDLAB: A SECOND NSFCLOUD PROJECT
CloudLab is a large-scale distributed infrastructure based at the University of Utah, Clemson University, and the University of Wisconsin, on top of which researchers will be able to construct many different types of clouds. Each site will have unique hardware, architecture, and storage features, and will connect to the others via 100 gigabit-per-second connections on Internet2's advanced platform, supporting OpenFlow (an open standard that enables researchers to run experimental protocols in campus networks) and other software-defined networking technologies. CloudLab will be a facility where researchers can build their own clouds and experiment with new ideas with complete control, visibility, and scientific fidelity, helping researchers develop clouds that enable new applications with direct benefit to the public in areas of national priority.
In total, CloudLab will provide approximately 15,000 processing cores and in excess of 1 petabyte of storage at its three data centers. Each center will comprise different hardware, facilitating additional experimentation; to that end, the team is partnering with three vendors, HP, Cisco, and Dell, to provide diverse, cutting-edge platforms for research. Like Chameleon, CloudLab will feature bare-metal access. Over its lifetime, CloudLab is expected to run dozens of virtual experiments simultaneously and to support thousands of researchers. Other partners on CloudLab include Raytheon BBN Technologies, the University of Massachusetts Amherst, and US Ignite, Inc.

PARTING THOUGHTS
- Chameleon is a large-scale, responsive experimental testbed: targeting critical research problems at scale and evolving with community input
- Reconfigurable environment: supporting use cases from bare metal to production clouds, with support for repeatable and reproducible experiments
- One-stop shopping for experimental needs: Trace and Workload Archive, user contributions, requirement discussions
- Engaging the community: a network of partnerships and connections with scientific production testbeds and industry; partnerships with existing testbeds; outreach activities, including to the international community
Come visit us at www.chameleoncloud.org!