To Cloud or Not to Cloud?

Transformational metrics and analytics for managing "Infrastructure Anywhere"

Today's IT infrastructure is in the midst of a major transformation. In many ways, the data center is a victim of its own success. The growing number of technologies and applications residing in the data center has spawned increasing complexity, which makes IT as a whole less responsive and agile. While businesses are focused on moving faster than ever, large and complex infrastructure is inherently rigid and inefficient.

As a result, IT is moving outside the traditional data center into colocation facilities and cloud infrastructures - essentially Infrastructure Anywhere. The move to Infrastructure Anywhere is driven by the core objective of improving responsiveness and agility and reducing costs. For example, you can scale up resources through the cloud in minutes, not months. But for all of its benefits, this new Infrastructure Anywhere model presents critical challenges.

To make smart decisions about where to run applications and what kind of resources you need, you must first understand your workload: utilization, capacity, and cost. Gaining unified visibility is difficult when your application workloads are distributed across data centers and colocation facilities in different parts of the country or around the world. With limited visibility, how do you accurately align resources and capacity with workloads to process efficiently, control costs, and - more important - achieve the full business value of your IT investment?

It's All About Agility
According to Sentilla's recent survey of data center professionals about their cloud plans, agility and flexibility are the top drivers behind enterprise IT transformation initiatives such as cloud deployments - followed closely by capacity and cost.

Figure 1: Key drivers for cloud computing initiatives

While agility is the prime motivating factor, the importance of cost as a factor should not be ignored. According to the survey, the major resource limitation experienced by respondents - for all infrastructure initiatives - is budget.

Figure 2: Resource limitations

Note that several of the reported constraints (personnel, storage capacity) are related to the broader issue of budget. In this sense, cost is overwhelmingly the most important constraint on IT initiatives - including cloud initiatives.

2013 Is for Planning, 2014 for Deployment
Of the organizations surveyed, nearly 50 percent plan to deploy cloud initiatives in 2014, and many are still in the planning phase. Overall, we can expect cloud computing deployments to increase by 70 percent within 12 months:

Figure 3: Data center cloud initiatives, by year

Similarly, those surveyed expect to gradually migrate more workloads to cloud platforms in the coming years - with 28 percent planning to run more than half of their applications in the cloud by 2014. The barriers to cloud migration are falling.

Figure 4: Percentage of applications planned to move to the cloud, by year

The Cloud Isn't a Homogeneous Place
Cloud computing can refer to several different deployment models. At a high level, cloud infrastructure alternatives are defined by how they are shared among different organizations.

Figure 5: Where respondents planned to deploy cloud initiatives

Private clouds offer the flexibility of elastic infrastructure shared among different applications but never shared with other organizations. Hosted on dedicated equipment either on-premises or at a colocation provider, a private cloud is the most secure but least cost-effective cloud model.

Public cloud infrastructure offers similar elasticity and scalability and is shared among many organizations. This model is best suited for businesses that need to manage load spikes and scale to a large number of users without a large capital investment. Amazon Web Services (AWS) is perhaps the most widely deployed example of public cloud infrastructure as a service.

Hybrid cloud offers the dual advantages of hosting sensitive applications and data on a private cloud while keeping sharable applications and data on the more cost-effective public cloud. This model is often used for cloud bursting - the migration of workloads between public and private hosting to handle load spikes.

Community cloud is an emerging category in which different organizations with similar needs use a shared cloud computing environment. This new model is taking hold in environments with common regulatory requirements, including healthcare, financial services and government.

The research showed that organizations are evaluating a broad range of different cloud solutions, including Amazon AWS, Microsoft Azure, Google Cloud Platform, and Red Hat Cloud Computing, as well as many solutions based on OpenStack, the open source cloud computing software.

Without planning, ad hoc cloud deployments combined with islands of virtualization will only add complexity to the existing data center infrastructure. The resulting environment is one of physical, virtual and cloud silos with fragmented visibility. While individual tools may deliver insight into specific parts of the infrastructure puzzle (physical infrastructure, server virtualization with VMware, specific infrastructure in a specific cloud provider), IT organizations have little visibility into the total picture. This lack of visibility can impede the IT organization's ability to align infrastructure investments with business needs and cost constraints.

Infrastructure Complexity Is the New Normal
While it aims to bring agility to IT, the process of cloud transformation will only increase infrastructure complexity in the near term. IT organizations must manage a combination of legacy systems with islands of virtualization and cloud technologies.

When asked about where cloud infrastructure will reside, survey respondents indicated that they will be managing a blend of on-premises and outsourced infrastructure, with the balance shifting dramatically from 2013-2014.

Figure 6: Where cloud infrastructure will reside, by year

The Need for Unified Visibility into Complex Infrastructure
As you plan your own cloud initiatives, you must prepare for multiple phases of transformation:

  • Deploying new applications to the cloud as part of the broader application portfolio
  • Migrating existing applications to cloud infrastructure where possible and appropriate
  • Managing the hybrid "Infrastructure Anywhere" environment during the transition and beyond

To support these phases, you need visibility into workloads and capacity across essential application infrastructure - no matter where it resides. From physical and virtual resources up through applications and services, you will need insight so you can align IT with business objectives.

Figure 7: The need for infrastructure insight at all levels

Essential Infrastructure Metrics for Right-Sizing Infrastructure
Decisions about which applications to deploy to the cloud and where to deploy them will require visibility into:

  • Historical, current and predicted application workload
  • Current and predicted capacity requirements of the workload
  • Comparative cost of providing that capacity and infrastructure on different platforms

For application migration scenarios, you will need to understand the actual resource consumption of the existing application. Whether it's a new application or a migrated one, you will need to "right-size" the cloud infrastructure to avoid the twin dangers of over-provisioning (wasting financial resources) and under-provisioning (risking outages or performance slowdowns). You will need insight into the following, as in the sketch after this list:

  • Memory utilization
  • CPU utilization
  • Data transfer/bandwidth
  • Storage requirements
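
As a rough illustration, the sketch below samples these four metrics on a single host using the open source psutil library for Python; the library choice is an assumption on our part, and a real collector would gather samples continuously across every host and VM and ship them to a central store for trending.

```python
# Minimal sketch: sample the four right-sizing metrics on one host.
# Assumes the third-party psutil library; a production collector would
# run continuously and forward samples to a central store for analysis.
import psutil

def sample_host_metrics():
    mem = psutil.virtual_memory()
    net = psutil.net_io_counters()
    disk = psutil.disk_usage("/")
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),  # CPU utilization
        "memory_percent": mem.percent,                  # memory utilization
        "bytes_sent": net.bytes_sent,                   # data transfer out
        "bytes_recv": net.bytes_recv,                   # data transfer in
        "disk_used_gb": disk.used / 1024 ** 3,          # storage requirements
    }

if __name__ == "__main__":
    print(sample_host_metrics())
```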

You will also need good metrics on the cost of running the application in your existing data center, as well as the predicted cost of running that same application on various platforms. These metrics need to factor in the total cost of the application, such as the items below (rolled up in the sketch after this list):

  • Personnel for supporting the application
  • Operating system
  • Management software
  • Cooling and power
  • Leased cloud
  • Server and storage hardware

To accurately predict the cost of running the application on cloud-based infrastructure, you will need accurate metrics on the application's actual, historical resource consumption (storage, memory, CPU, etc.) as it maps to the provider's billable units. By understanding actual consumption, you can avoid over-provisioning and overpaying for resources from external providers.

Infrastructure Analytics for Cloud Migration
For any given application that is a candidate for the cloud, you want to be able to compare the total cost of the resources required across the different options (public, private, and on-premises).

While you could try to crunch these numbers manually in a spreadsheet, the computations are not trivial. These are decisions you will need to make repeatedly, for each application, and revisit whenever an infrastructure provider changes its cost model or fee structure. For that reason, you'll want a tool that gives you an accurate, continuous view of current costs and lets you model "what-if" scenarios for different deployments.
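
Here is a hedged sketch of the kind of what-if computation such a tool automates: map the application's observed peak consumption onto the smallest billable unit that fits at each provider and compare monthly costs against the on-premises TCO. The provider names, instance shapes, and prices are hypothetical placeholders, not real price lists.

```python
# Illustrative what-if comparison. Instance shapes and hourly prices are
# hypothetical placeholders; a real tool would pull live provider pricing.
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    vcpus: int
    memory_gb: float
    hourly_price: float

# Hypothetical price lists for two public cloud options.
CATALOG = {
    "public-cloud-a": [
        InstanceType("small", 2, 4, 0.045),
        InstanceType("medium", 4, 16, 0.18),
        InstanceType("large", 8, 32, 0.36),
    ],
    "public-cloud-b": [
        InstanceType("s1", 2, 8, 0.05),
        InstanceType("s2", 4, 16, 0.17),
    ],
}

HOURS_PER_MONTH = 730
ON_PREM_MONTHLY_TCO = 4030.0  # from the hypothetical TCO roll-up above

def cheapest_fit(observed_vcpus: int, observed_memory_gb: float):
    """For each option, pick the cheapest instance that covers observed peak demand."""
    results = {"on-premises": ON_PREM_MONTHLY_TCO}
    for provider, instances in CATALOG.items():
        fits = [i for i in instances
                if i.vcpus >= observed_vcpus and i.memory_gb >= observed_memory_gb]
        if fits:
            best = min(fits, key=lambda i: i.hourly_price)
            results[provider] = best.hourly_price * HOURS_PER_MONTH
    return results

# What-if: the application peaks at 3 vCPUs and 10 GB of memory.
for option, cost in sorted(cheapest_fit(3, 10).items(), key=lambda kv: kv[1]):
    print(f"{option:15s} ~${cost:,.2f}/month")
```

With the price lists held as data, rerunning the comparison when a provider changes its fee structure becomes a data refresh rather than a spreadsheet rebuild.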

Continuous Analysis for Continuous Improvement of the "Infrastructure Anywhere" Environment
Before deploying an application, what-if scenarios help you make sound resource decisions and right-size applications. After deploying, continuous analysis is key to ensuring that you are optimizing capacity and using resources most efficiently.

While individual tools may already give you slices of the necessary information, you need integrated insight into the complete infrastructure environment. Again, emerging infrastructure intelligence can assemble the necessary information from applications and assets on and off your premises, virtualized or not, across different platforms and locations.
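
A minimal sketch of the kind of continuous check this implies: compare each application's observed peak utilization against its provisioned capacity and flag over- and under-provisioning candidates. The records and thresholds here are illustrative assumptions, not any product's behavior.

```python
# Illustrative continuous right-sizing check. Records and thresholds are
# assumptions; a real platform would feed this from live telemetry.
apps = [
    # name, provisioned vCPUs, peak CPU %, provisioned memory GB, peak memory %
    {"name": "billing-api", "vcpus": 8, "peak_cpu": 22, "mem_gb": 32, "peak_mem": 30},
    {"name": "etl-batch",   "vcpus": 4, "peak_cpu": 95, "mem_gb": 16, "peak_mem": 88},
]

OVER, UNDER = 40, 85  # utilization % thresholds (hypothetical)

for app in apps:
    peak = max(app["peak_cpu"], app["peak_mem"])
    if peak < OVER:
        verdict = "over-provisioned: candidate for a smaller footprint"
    elif peak > UNDER:
        verdict = "under-provisioned: risk of outages or slowdowns"
    else:
        verdict = "right-sized"
    print(f"{app['name']:12s} peak {peak}% -> {verdict}")
```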

Figure 8: Transformational analytics

The software can provide "single pane of glass" visibility into assets and applications throughout the physical, virtual, and cloud infrastructure, including (illustrated in the sketch after this list):

  • Application cost/utilization spanning different locations
  • True resource requirements of apps (for more accurate provisioning in cloud infrastructure)
  • CPU and memory utilization of apps, wherever they reside
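
As an illustration of what that single-pane view means in practice, the sketch below merges hypothetical per-location metric records into one view keyed by application; the record layout is an assumption, not a description of any particular product.

```python
# Illustrative aggregation into a single view keyed by application.
# The per-location records are hypothetical; a real platform would pull
# them from collectors in each data center, colo facility, and cloud account.
from collections import defaultdict

records = [
    {"app": "billing-api", "location": "on-prem-dc1",  "cpu": 22, "mem": 30, "cost": 1800.0},
    {"app": "billing-api", "location": "public-cloud", "cpu": 35, "mem": 41, "cost": 650.0},
    {"app": "etl-batch",   "location": "colo-east",    "cpu": 95, "mem": 88, "cost": 1200.0},
]

unified = defaultdict(lambda: {"locations": [], "cost": 0.0, "peak_cpu": 0, "peak_mem": 0})
for r in records:
    view = unified[r["app"]]
    view["locations"].append(r["location"])
    view["cost"] += r["cost"]
    view["peak_cpu"] = max(view["peak_cpu"], r["cpu"])
    view["peak_mem"] = max(view["peak_mem"], r["mem"])

for app, view in unified.items():
    print(f"{app}: ${view['cost']:,.2f}/month across {', '.join(view['locations'])} "
          f"(peak CPU {view['peak_cpu']}%, peak memory {view['peak_mem']}%)")
```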

Summary
By 2014, enterprise computing will look quite different than it does today, yet many legacy systems and infrastructure will still be with us. IT operations, business units, and application architects will need to manage applications that reside in infrastructure spanning on-premises and offsite locations, with public, private, hybrid, and community cloud infrastructure. Data centers will be just one part of the total pool of infrastructure that IT manages on behalf of the business.

To manage this transformation, you will need to make smart decisions about where workloads should reside based on specific application and business needs. As these changes roll out, you will need to manage the transforming and hybrid application infrastructure to deliver the necessary performance and service levels, no matter where applications reside.

IT organizations need the insight to make fast, smart, and informed decisions about where workloads and data should reside and how to deploy new applications. Rather than isolated silos of metrics, capacity, and utilization data, IT needs unified visibility into infrastructure across the physical, virtual, and cloud computing environment - both on-premises and off. And it needs the metrics and continuous analysis to manage the evolving infrastructure in a manner aligned with business objectives.

An emerging category of infrastructure intelligence can provide the continuous, unified analytics necessary to understand and compare your options and to manage the data center during the transformation. With broad infrastructure insight, you can align cloud platforms with business needs and cost requirements - delivering the agility to realize new revenue opportunities along with the insight to contain the costs of existing applications.

More Stories By Ranvir Wadera

Ranvir Wadera is senior vice president of product development at Sentilla Corporation, an Infrastructure Intelligence software platform provider based in Redwood City, California. He has more than 20 years of product development and management experience in both entrepreneurial and large company environments. Before joining Sentilla, he was founder and CEO of Rjenda, a SaaS solution for the education market. Prior to Rjenda, he was vice president of product development for query and reporting tools at Hyperion/Oracle. Prior to Hyperion, Ranvir was vice president of product development at Business Objects. Early in his career, he also held product development positions at Oracle and developer and manager roles in India.

Ranvir received his undergraduate and postgraduate education in India from the University of Delhi and the International Management Institute, with follow-up technical education from the University of California, Berkeley.
