Executive Brief

Cut Costs, Reduce Complexity, and Drive Availability for the Always-On Enterprise

Sponsored by: Veeam
Carla Arend
March 2016, IDC #CEMA41059416

IDC Opinion

Businesses are now operating in a connected world, where customers, partners, and employees require constant access to data and applications through a wide variety of devices and online portals. Many businesses are currently executing digital transformation strategies to satisfy customer and employee demands for constant data access and availability. While they are modernizing their business processes through the use of IT, they find that they also need to modernize their datacenters to ensure the speed and reliability required to consistently deliver a great end-user experience.

Understanding cost and complexity drivers is important when evaluating options for data protection and availability solutions, as cost has several dimensions beyond mere acquisition. Staff costs, operational costs, migration costs, licensing models, and, most importantly, the cost of doing nothing all have to be considered when choosing a solution.

In This Executive Brief

This IDC Executive Brief discusses the cost and complexity challenges stemming from legacy backup infrastructure, as well as the cost drivers and best practices that IT managers need to consider when choosing a modern data protection solution.

Availability Is Key to Business Success Today

Businesses are currently executing digital transformation strategies to satisfy customer, partner, and employee demands for constant data access and availability. As they modernize their business processes through the use of IT, they find that they also need to modernize their datacenters to ensure the speed and reliability required to consistently deliver a great end-user experience.
Datacenter modernization efforts are focused on 1) storage, to cater to the increased need for performance and availability while containing costs; 2) virtualization, to drive greater efficiency and automation; and 3) cloud, to create a more flexible and dynamic infrastructure. The overarching goal of datacenter modernization is to ensure the constant availability of all data and applications, which have become vital to business success. Ensuring availability is a key challenge for the following reasons:

Data Explosion: Accelerated innovation based on new operational models (e.g., connected manufacturing), the Internet of Things, new communication and marketing channels based on the use of social media, and big data and analytics applications are driving a data explosion in most businesses. Making this vast and growing volume of data available to
customers, partners, and employees for informed decision making and continuous business operations is paramount to success in this connected world.

The Cost of Downtime Skyrocketing: For businesses operating in this highly connected environment, downtime has become detrimental to business success, and the cost of downtime has skyrocketed. IDC estimates that the mean cost of one hour of downtime for an organization with between 1,000 and 4,999 employees is approximately $225,000. Consequently, businesses cannot tolerate the levels of planned and unplanned downtime that they could before they started on their digital transformation journeys, and, for many businesses, the window for downtime is close to zero.

Dependency on Applications for Business Success: Modern businesses communicate with their customers mainly through digital channels, and they have started to blend the physical and the virtual customer experience. This has vastly increased their dependency on applications to deliver the right information to customers and employees, and heightened the necessity of ensuring that these applications and the corresponding data are available at all times. While transactional systems once represented the business-critical aspects of an organization, in the modern digital era, systems of engagement are equally important, and employees need to be able to access their productivity tools (email, file shares, and analytics), transactional systems, and communication channels with customers and partners to create value and support success. Tiering applications into business-critical and non-business-critical used to be important, because providing availability to all applications was impossible. With modern availability solutions optimized for modern storage, virtual, and cloud environments, the question is not how to provide availability for business-critical applications, but rather how much availability every application needs.
Consequently, IT managers can provide just the right level of availability, even as business requirements change.

While availability has been established as key to business success, it is not easy to achieve, and most businesses are investing heavily in IT without reaching their availability goals. The next section looks at what is holding them back.

The Enemies of Availability

The cost and complexity of IT infrastructure are the most common challenges to availability. Years of evolutionary investment in IT infrastructure have often led to complex, heterogeneous IT environments that require many manual interventions to keep operational. The typical drivers of cost and complexity are as follows:

Heterogeneous IT: The new digital era of complex heterogeneous IT infrastructure features a mix of on-premises IT, hybrid and public cloud services, and complex, multitier applications with dynamic requirements. Investments in server virtualization were aimed at remedying this situation, but have frequently created a new level of complexity, as the traditional means for ensuring data availability and disaster recovery were not applicable to the new virtual IT paradigm. To achieve the expected service levels for the protection and continuous, consistent delivery of datacenter services, businesses have to invest in new technologies and processes. Ensuring availability requires integration with all major storage vendors to take advantage of their specific features, especially snapshots. Staying current with the newest storage technologies is essential; flash storage, for example, has seen rapid adoption and now accounts for more than 50% of the storage market. Cloud architectures are moving into the mainstream, with more than 56% of organizations' annual IT spending
being directed to cloud services in 2016, including on-premises private cloud, hosted private cloud, and public cloud services.

Fragmentation of Data and Applications: The fragmentation of data across physical and virtual servers, multiple hypervisors, and cloud infrastructures, as well as the need to store, manage, and analyze both structured and unstructured data, adds complexity and renders traditional data protection tools incapable of fulfilling complete datacenter requirements efficiently and cost effectively. To achieve cost-efficient and operationally efficient availability, it is important to tier backups by utilizing a combination of storage snapshots, nearline storage, and archival storage such as tape or cloud.

Outdated Data Protection Tools: IT managers are under pressure to ensure high-speed recovery, prevent data-loss scenarios, and have full visibility of their datacenter performance to identify bottlenecks before they affect the customer experience. Underinvestment in data protection tools, or continued use of outdated legacy data protection, will impair IT managers' ability to meet availability requirements for data and applications.

Cost of Doing Nothing: When faced with investing in backup and availability software, organizations sometimes adopt a "do not touch a running system" approach, unaware that the cost of doing nothing is actually higher than the total cost of a new solution. Technology is changing quickly; the underlying storage layer is increasingly flash-based, and cloud is moving into mainstream adoption.
The cost of doing nothing is a function of: 1) an organization's inability to take advantage of the newest efficiency and management features, and, consequently, its continued reliance on error-prone and time-consuming manual processes and the resulting risk of unplanned downtime; 2) spiraling storage costs, as capacity grows unabated and drives up associated software licensing costs; and 3) damage to an organization's reputation due to its inability to fulfill modern customer expectations, expressed as recovery time objectives (RTOs) and recovery point objectives (RPOs).

As illustrated in Figure 1, when considering investment in a new availability solution, acquisition cost is the tip of the iceberg and the most visible item in initial cost calculations, but operational and ongoing licensing costs are the most significant drivers of cost and complexity over the longer term. Consequently, choosing to do nothing might be a significantly more expensive decision for your business.
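The downtime component of this calculation can be roughed out from the IDC figures cited in this brief (approximately $225,000 per hour of downtime for a 1,000-4,999 employee organization, and 15-18 hours of downtime per year for a medium-sized one). The sketch below is purely illustrative; the new-solution cost and the downtime-reduction factor are assumed numbers, not IDC data or any vendor's pricing:

```python
# Back-of-the-envelope comparison of "do nothing" vs. modernizing, using the
# downtime figures cited in this brief. Solution cost and reduction factor
# below are hypothetical assumptions for illustration only.

HOURLY_DOWNTIME_COST = 225_000       # USD/hour (IDC estimate)
DOWNTIME_HOURS_PER_YEAR = (15, 18)   # IDC range for a medium-sized org

def annual_downtime_cost(hours, hourly_cost=HOURLY_DOWNTIME_COST):
    """Annual cost attributable to unplanned downtime."""
    return hours * hourly_cost

low, high = (annual_downtime_cost(h) for h in DOWNTIME_HOURS_PER_YEAR)
print(f"Annual downtime cost: ${low:,.0f} - ${high:,.0f}")

# Hypothetical modern solution: assume it cuts downtime by 60% and costs
# $500,000/year in licensing and operations (assumed, not real pricing).
ASSUMED_REDUCTION = 0.60
ASSUMED_SOLUTION_COST = 500_000

for hours in DOWNTIME_HOURS_PER_YEAR:
    do_nothing = annual_downtime_cost(hours)
    modernize = annual_downtime_cost(hours * (1 - ASSUMED_REDUCTION)) + ASSUMED_SOLUTION_COST
    print(f"{hours}h/yr: do nothing ${do_nothing:,.0f} vs. modernize ${modernize:,.0f}")
```

Even under these rough assumptions, the avoided downtime alone dwarfs the assumed acquisition and operating cost, which is the point the iceberg metaphor makes.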
FIGURE 1

Cost and Complexity Drivers

Acquisition Cost:
- SW license cost
- HW cost (use of existing hardware vs. purchase)
- Implementation service cost

Operational Cost:
- Maintenance cost
- Staff cost (manual vs. automated processes)
- Cost of additional hardware to accommodate data growth
- Troubleshooting
- Recovery/downtime

Licensing Cost:
- Capacity licensing

Source: IDC, 2016

Availability Challenges

IT managers are struggling to meet the challenging requirements that business leaders, employees, partners, and customers place on them, because their traditional data protection tools were designed for a different IT infrastructure than is common today. Optimization for virtualized server environments, integration with cloud services, and easy integration with new storage technologies are modern design criteria that were not widely considered when most data protection tools currently in use were created. In addition, because backup is seen as a cost and an insurance policy rather than as a business enabler, many organizations have underinvested in it for years and are running legacy backup solutions that are not designed to fulfill the service-level requirements of the digital era. As a result, IT managers struggle to close the gap between the service-level agreements (SLAs) they have with their business users and their ability to deliver the required levels of availability. The commonly observed challenges are summarized in Figure 2:

FIGURE 2

Top 5 Data Protection Challenges

- Restore critical workloads in minutes, not hours: 39%
- Improve backup performance/shorten backup window: 38%
- Protect data at remote/branch office: 31%
- Redesign disaster recovery solution: 30%
- Protect virtual machine data without agents: 29%

Source: IDC, 2016

Unplanned Downtime: Businesses are suffering from increased downtime, which results in lost sales through digital channels and negatively impacts the customer experience. IDC research shows that a medium-sized organization experiences, on average, 15-18 business hours of network, system, or application downtime per year, causing major disruption to the business and reputational damage, as customers are unforgiving of a bad experience with a company.

Missed RTOs and RPOs: Business leaders and business users have increased the pressure on IT managers to deliver ever tighter RTOs and RPOs, because reliance on data for business decision making has increased dramatically and data loss has a detrimental impact on business performance. In a recent IDC survey, 39% of respondents said they now need to restore critical workloads in minutes, not hours. Meeting this requirement is virtually impossible with outdated data protection methods.

Shortening Backup Windows: Many backup jobs are not completed within the allotted timeframe, making it impossible for IT staff to meet their SLAs. In a recent IDC survey, 38% of IT managers reported shortening backup windows as a major challenge they need to solve this year.

Unproductive Data: Businesses store data for protection purposes and for restoration in the event of a mistake or disaster.
However, simply storing vast amounts of data as an insurance policy is an expensive strategy when the data could be used productively: for example, in test and development environments, or for analytics and informed business decision making.

Verified Recoverability: Organizations have limited visibility into their backup environments and are taken by surprise when backups (or, even worse, restores) fail. This puts IT managers into a reactive "firefighting" mode and prevents them from meeting RTOs and RPOs. Modern data protection solutions provide automated testing and assurance capabilities that give visibility into an organization's ability to recover. Less than one-third (30%) of organizations are currently redesigning their disaster recovery solutions to achieve
higher levels of application and data availability and to demonstrate their improved capabilities to management through better reporting tools. To achieve verified recoverability, organizations need to monitor, receive reporting on, and perform capacity planning for their backup infrastructure, so that they can see in real time what is happening in their environments, whether any VMs have not been backed up, and whether storage is reaching capacity, and can address issues before they cause outages.

Reliance on Manual Tasks: In this patchwork of disaster recovery (DR), high availability (HA), and data protection technologies, IT staff still need to perform many tasks manually, which limits their ability to scale with data growth. Legacy backup solutions can be complicated and can take too much time to deploy and manage, consuming resources that could be spent on more productive tasks that drive the business. Automating manual processes frees up staff time, ensures higher levels of application and data availability, and enables better communication with line-of-business managers about meeting SLAs.

Lack of Skills: Skilled staff with the necessary technical knowledge of DR, HA, and data protection are becoming a scarce resource, and data protection is increasingly the responsibility of the application owner (e.g., the database administrator [DBA] or the virtual environment engineer). Ease of use is a key design criterion that data protection technologies must meet to enable non-storage staff to handle backups successfully. In addition, IT managers are tasked with handing over HA, DR, and backup-related tasks to lower-cost personnel, rather than tying up expensive IT resources. Solutions that enable IT managers to delegate rights to support staff or help desk resources help solve the skills shortage challenge.
Future Outlook

Best Practices to Create an Available Datacenter

Availability is of paramount importance in the digital era, and IDC has observed best practices for satisfying the demands of data-hungry employees, customers, partners, and applications.

Focus on Recovery Time: Speed of recovery is key to availability, and successful businesses ensure that they can recover in minutes, not hours. Continuously testing backups to verify that they completed successfully and that any lost data can be recovered is a prerequisite for fast recovery. Snapshot-based recovery is another function that accelerates recovery.

Virtual First: Most business-critical applications now run in a virtual environment, as virtualization has become a de facto infrastructure standard. Consequently, data protection solutions should be designed and optimized for virtual environments, and should be able to handle heterogeneous virtual environments.

Automate: The need for automation has increased, both to cope with larger data volumes and to provide the flexibility and agility required by a highly available datacenter. Automation covers several aspects (e.g., automated testing of backups, automated restore processes, and automated monitoring of backup infrastructure to ensure optimal operation).

Modernize: Backup infrastructure must be modernized to return more value from stored data to the business; to improve performance through better recovery time and recovery point objectives (RTPOs), storage management, and deduplication; and to help IT adhere to SLAs. Modern data protection products enable IT managers to make stored data available to the business for development and test processes, as well as for analytics.

Achieve Visibility: An overview of the health of the backup infrastructure, including success rates and emerging problems, is a key requirement for a modern backup product, ensuring smooth operation, compliance with SLAs, and the ability to meet business requirements.
Constant (24x7), real-time monitoring and alerting capabilities will allow organizations to
identify performance and backup issues across their full virtual and cloud infrastructure and resolve them before they become an operational problem, ensuring the continuous availability of datacenter services.

Usage of Cloud Services: The use of cloud services, either as a secondary site for disaster recovery or for long-term storage of backups, is increasingly popular. When considering cloud as part of a backup and disaster recovery process, it is important to find a solution that integrates with a wide range of cloud service providers, so that the organization can choose whichever provider, local or regional, meets its requirements for data location, service levels, and cost structure.

Recommendations

The key question remains: How does an organization achieve 24/7/365 availability and reduce complexity while cutting infrastructure and administrative costs? Taking a fresh look at the current backup infrastructure is the starting point for understanding whether the current products and processes are still usable in the digital age. Most likely, they are in serious need of modernization if this area of IT was neglected during times when cost cutting was imperative. Now that business is focusing on innovation and the customer experience, applications and data have become the lifeblood of the organization and need to be protected and made available with modern tools and updated processes. IDC therefore recommends:

Understanding the True Cost of Modernizing the Solution: The up-front investment in licenses and hardware is only the tip of the iceberg; differences in licensing models (capacity-based versus CPU-based licensing), operating costs, and continuing maintenance costs can create significant differences in a solution's total cost of ownership.
The cost of doing nothing also needs to be taken into account, as the operational savings from an investment in a new availability solution often outweigh the initial investment costs.

Choosing a Provider Carefully: In addition to selecting a provider with a proven track record of innovation and customer satisfaction, it is important to make sure that the new solution will remain current with technology advances and will be able to meet emerging requirements.

Considering the Ecosystem: It is increasingly important to choose a data protection vendor with a strong ecosystem spanning storage hardware, virtualization, and service providers, to ensure that the chosen solution fits the broader IT infrastructure and can be extended to the service provider network if necessary. To gain the benefits of integration while avoiding lock-in, organizations should seek vendors that take an agnostic approach and integrate with many different hardware and cloud providers, rather than tying them to a specific hardware/software combination.

Classifying Application Workloads and Business Data: This is a necessary exercise to determine the appropriate level of DR, HA, and data protection. A one-size-fits-all approach will be too expensive and is likely to leave important applications under-protected and less critical applications over-protected. The applications deemed business critical change over time: organizations now consider collaboration and file-sharing solutions business critical, in addition to the traditional transactional systems, databases, websites, and email systems.
Availability solutions should be chosen on the basis of their ability to scale the availability levels of business applications according to their current importance to operations and changing business requirements.
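The classification exercise described above amounts to mapping each application to a protection tier with explicit RTO/RPO targets, rather than protecting everything identically. A minimal sketch, in which the tier names, targets, and example inventory are all illustrative assumptions rather than IDC guidance:

```python
# Sketch of workload classification: each application tier gets its own
# RTO/RPO targets and protection method. All tiers, targets, methods, and
# the example applications are hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectionPolicy:
    rto_minutes: int   # target recovery time objective
    rpo_minutes: int   # target recovery point objective
    method: str        # e.g., replication, snapshot + backup, nightly backup

POLICIES = {
    "tier-1": ProtectionPolicy(rto_minutes=15,   rpo_minutes=5,    method="replication"),
    "tier-2": ProtectionPolicy(rto_minutes=240,  rpo_minutes=60,   method="snapshot + backup"),
    "tier-3": ProtectionPolicy(rto_minutes=1440, rpo_minutes=1440, method="nightly backup"),
}

# Example classification. Note that collaboration/file sharing sits in
# tier 1 alongside the traditional transactional systems, reflecting the
# shift described in the text above.
APPLICATIONS = {
    "order-processing-db": "tier-1",
    "email-and-file-sharing": "tier-1",
    "bi-analytics": "tier-2",
    "dev-test-sandbox": "tier-3",
}

for app, tier in APPLICATIONS.items():
    p = POLICIES[tier]
    print(f"{app}: {tier} -> RTO {p.rto_minutes} min, RPO {p.rpo_minutes} min ({p.method})")
```

Because tier assignment is just a mapping, reclassifying an application as its business importance changes is a one-line edit, which is exactly the scalability the recommendation calls for.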
About IDC

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. IDC helps IT professionals, business executives, and the investment community make fact-based decisions on technology purchases and business strategy. More than 1,100 IDC analysts provide global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries worldwide. For 50 years, IDC has provided strategic insights to help our clients achieve their key business objectives. IDC is a subsidiary of IDG, the world's leading technology media, research, and events company.

IDC UK
5th Floor, Ealing Cross, 85 Uxbridge Road
London W5 5TH, United Kingdom
+44.208.987.7100
Twitter: @IDC
idc-insights-community.com
www.idc.com

Copyright Notice

This IDC research document was published as part of an IDC continuous intelligence service, providing written research, analyst interactions, telebriefings, and conferences. Visit www.idc.com to learn more about IDC subscription and consulting services. To view a list of IDC offices worldwide, visit www.idc.com/offices. Please contact the IDC Hotline at 800.343.4952, ext. 7988 (or +1.508.988.7988) or sales@idc.com for information on applying the price of this document toward the purchase of an IDC service or for information on additional copies or Web rights.

Copyright 2016 IDC. Reproduction is forbidden unless authorized. All rights reserved.