Five Best Practices in Real-Time Virtualization [#SDN #Virtualization]

Virtualization is a critical infrastructure component in most modern data centers

Virtualization technology has matured into a critical infrastructure component in most modern data centers and a trusted part of production environments. As our use of, and reliance on, virtualization continues to grow, it is important that we manage the technology according to best practices. Here are five areas of focus to help ensure success in leveraging virtualization technology.

1. Monitor the Entire Infrastructure Stack
Virtualization touches almost all aspects of the data center, including servers, storage, and networking. While most virtualization vendors provide tools to monitor and manage their technology, those tools rarely give an IT organization a comprehensive view into the entire stack that virtualization touches. For example, understanding network traffic at a physical port, or being alerted to drive failures on a SAN, might be important to the performance of your virtualized environment. These are just a couple of examples of things that affect the full virtualization stack but typically fall outside the purview of dedicated virtualization monitoring tools. Ensure your tooling gives you visibility into every part of your infrastructure stack.
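As a concrete illustration, here is a minimal sketch (in Python, not any particular vendor's API) of rolling per-layer health checks into one stack-wide view. The three check functions are hypothetical placeholders; in a real deployment each would query the relevant system, such as SNMP counters on a switch port or the SAN's management API.

```python
# A minimal sketch of aggregating health checks across the layers a
# virtualized workload depends on. The check functions are hypothetical
# placeholders standing in for real queries against each system.

def check_switch_port_errors() -> bool:
    """Placeholder: poll the physical switch port error counters."""
    return True  # assume healthy for the sketch

def check_san_drive_status() -> bool:
    """Placeholder: query the SAN for failed or degraded drives."""
    return True

def check_hypervisor_host() -> bool:
    """Placeholder: query the hypervisor for host-level alarms."""
    return True

STACK_CHECKS = {
    "network: uplink port errors": check_switch_port_errors,
    "storage: SAN drive health": check_san_drive_status,
    "compute: hypervisor host": check_hypervisor_host,
}

def stack_health() -> list[str]:
    """Return the names of any layers that are currently unhealthy."""
    return [name for name, check in STACK_CHECKS.items() if not check()]

if __name__ == "__main__":
    failures = stack_health()
    print("stack healthy" if not failures else f"degraded layers: {failures}")
```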

2. Monitor Inside Virtual Machines
Understanding the performance of a VM in terms of the physical resources it consumes is critical. However, it is just as critical to monitor what is happening inside the VM itself. In particular, monitoring the applications inside the VMs is the only way to really know whether everything is running as expected. To provide a good end-user experience, you need to know when critical services have stopped or when a database can't keep up with the volume of data being read and written. Remember that a healthy VM is a means to an end; what matters is whether services and applications are being delivered.
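For instance, a simple in-guest probe can catch a stopped service that host-level metrics would never surface. The sketch below assumes the application answers on a local TCP port; the host and port are illustrative, and a real check would also cover things like database throughput.

```python
# A minimal in-guest service check, assuming the application listens on
# a TCP port. Host and port are hypothetical; substitute your own.
import socket

def service_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # e.g., is the application's API answering inside this VM?
    ok = service_listening("127.0.0.1", 8080)
    print("service up" if ok else "service DOWN - VM health alone won't catch this")
```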

3. Use the Right Metrics
Hypervisor vendors have learned they need to help users figure out how to measure performance in the right dimensions. To that end, they publish papers that identify the key metrics that point to performance issues. Many of these metrics are ones we have used for years - CPU usage, memory usage, disk IOPS, network IOPS, and so on. Watching these standard metrics on both physical servers and VMs is a must.

In addition to the standard metrics we have used for years, there are virtualization-specific metrics that can help us home in on issues unique to virtualization. VMware, in particular, does a good job of surfacing metrics that point to specific issues. Some of VMware's key metrics are CPU Ready, Memory Ballooning, and CPU Co-stop. For example, CPU Ready indicates whether a physical host is able to keep up with the CPU demand from the virtual machines running on it. Include these metrics in your monitoring strategy as well.
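To make CPU Ready actionable, the raw counter usually needs converting to a percentage. The sketch below applies the commonly cited conversion (VMware documents it in KB 2002181): ready time in milliseconds divided by the sampling interval, where 20 seconds is the default real-time interval. Treat the 5% rule of thumb as a starting point, not a hard limit.

```python
# Convert the raw CPU Ready "summation" counter (milliseconds of ready
# time per sampling interval) to a percentage. The 20-second interval is
# the default real-time sampling interval; adjust it for rolled-up stats.

def cpu_ready_pct(ready_ms: float, interval_s: int = 20) -> float:
    """Percentage of the interval a vCPU spent ready but unscheduled."""
    return ready_ms / (interval_s * 1000.0) * 100.0

# A VM reporting 1,000 ms of ready time in a 20 s sample spent 5% of the
# interval waiting on a physical CPU; sustained values above roughly 5%
# per vCPU are often treated as a sign the host can't keep up.
print(f"{cpu_ready_pct(1000):.1f}%")  # -> 5.0%
```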

4. Keep an Eye on the Future
Virtualization has greatly increased our agility in the data center. In many environments, the time it takes to provide requested resources to satisfy project demand has decreased from months to minutes. A side effect of this agility is that we are running many more workloads (operating systems) today than we were before virtualization. The positive side is that companies can quickly execute on new projects. The negative is that companies are consuming more and more compute, storage, and networking resources. Today's IT organizations are being asked to run as lean and agile as possible while still providing enough resources to execute on future demands. Purchasing too many resources means waste and contradicts the desire to be agile; supplying too few means either a reduced ability to execute on new projects or an IT environment that is less reliable and stable.

To the extent that past trends can help predict the future, they can be used to guide a company's hardware requirements. There are four primary metrics that companies should plan around - CPU utilization, memory utilization, disk utilization, and disk IOPS. By projecting the past usage of these key metrics forward against the capacity available, an organization can ballpark when it will need to purchase more hardware to satisfy future demand. A detailed understanding of utilization versus capacity can also help an organization keep these resources in balance.
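As a rough illustration of that projection, the sketch below fits a linear trend to a few months of utilization history and estimates when it crosses a capacity threshold. The data and the 85% threshold are made up for the example; real planning should also weigh seasonality and known upcoming projects.

```python
# Fit a linear trend to past utilization and estimate when it crosses
# a capacity ceiling. Requires Python 3.10+ for linear_regression.
from statistics import linear_regression

months = [0, 1, 2, 3, 4, 5]                       # past six months
mem_util = [52.0, 55.5, 58.0, 61.5, 63.0, 66.5]   # % of cluster memory used

slope, intercept = linear_regression(months, mem_util)

CAPACITY_PCT = 85.0  # threshold at which you'd want new hardware in place
if slope > 0:
    months_until_full = (CAPACITY_PCT - intercept) / slope - months[-1]
    print(f"~{months_until_full:.1f} months until memory crosses {CAPACITY_PCT}%")
else:
    print("usage is flat or declining; no purchase trigger from this metric")
```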

There is another related side effect of virtualization - waste. IT organizations need ways to track this waste so they can reclaim resources that are no longer being used. For IT organizations operating as private clouds, a second technique to control waste should be evaluated - over-subscription. While most of us don't appreciate the over-subscription practices of airlines, car rental companies, or even our doctors, they do help those organizations drive down costs.
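A simple way to start tracking both waste and over-subscription is to fold per-VM inventory and long-term utilization into a couple of summary numbers. The sketch below assumes you can export vCPU counts and 30-day average CPU utilization from your monitoring tool; the inventory and the 5% idle threshold are illustrative.

```python
# Compute the cluster's vCPU:pCPU over-subscription ratio and flag
# long-idle VMs as reclamation candidates. All figures are illustrative.

PHYSICAL_CORES = 16  # total pCPU cores in the cluster

vms = [
    # (name, vCPUs, 30-day average CPU utilization %)
    ("web-01",   4, 38.0),
    ("db-01",    8, 61.0),
    ("test-old", 4,  0.7),   # likely abandoned
    ("batch-02", 8,  2.1),
]

total_vcpus = sum(vcpus for _, vcpus, _ in vms)
print(f"over-subscription ratio: {total_vcpus / PHYSICAL_CORES:.2f}:1 vCPU:pCPU")

IDLE_PCT = 5.0
for name, vcpus, avg_util in vms:
    if avg_util < IDLE_PCT:
        print(f"reclaim candidate: {name} ({vcpus} vCPU, {avg_util}% avg CPU)")
```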

5. Take Advantage of All the Virtualization Technology
Virtualization technology has evolved beyond just the hypervisor. Most virtualization vendors are building software-defined storage and software-defined networking technologies. These technologies create the same kind of abstraction layer for storage and networking that the hypervisor provides for the server, and they offer similar opportunities to improve the management, utilization, and cost effectiveness of those resources. The investment made in learning, understanding, and using these new technologies will almost certainly pay off in the long run.

In addition to technologies that abstract hardware from applications, virtualization vendors have created new technologies that can make the data center much more efficient and reliable. One of the key challenges with virtualization stems from multiple VMs running on a single physical system. Virtualization vendors provide technology to recover VMs on other hosts if a host fails. They provide technology that can automatically balance resources, including powering off unused hosts. There are many other ancillary technologies that complement virtualization, from new and improved approaches to disaster recovery and backups to running and managing desktops.

Virtualization revolutionized the data center. Data centers today are more agile, more cost-effective, and better utilized. By focusing on these five best practices, we can leverage virtualization to its full potential.

More Stories By Peter Dyer

Peter Dyer, product manager for Uptime Software, has been involved in the IT industry for more than 15 years. His roles have ranged from software engineer to product manager. His work has centered on software solutions designed to work with virtualization at both small and large companies.

Peter holds a degree in Electrical Engineering along with an MBA from Dalhousie University.
