Infrastructure 101 - Part 2 | @CloudExpo #Cloud #SDN #SDS #DataCenter

IT Infrastructure Fundamentals

Server virtualization has transformed the way we manage server workloads, but hypervisors were not the endgame of data center management. What is the role of server virtualization and hypervisors in the new age of cloud, containers, and, more importantly, hyperconvergence?

I covered SAN technology in my last Infrastructure 101 article, so for today I'm going to cover server virtualization and maybe delve into containers and cloud.

Server virtualization as we know it now is based on hypervisor technology. A hypervisor is a layer of software (often a specialized operating system) that shares physical computing resources such as networking, CPU, RAM, and storage among multiple virtual machines (sometimes called virtual servers). Virtual machines replaced traditional physical servers, each of which had its own chassis with storage, RAM, networking, and CPU. To understand the importance of hypervisors, let's look at a bit of history.
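To make that resource sharing concrete, here is a minimal sketch (not from the original article) using the open-source libvirt Python bindings against a KVM host: it carves two vCPUs, 2 GiB of RAM, a virtual disk, and a virtual NIC out of the shared physical hardware and hands them to one virtual machine. The VM name, disk path, and sizes are hypothetical.

```python
import libvirt

# Minimal domain definition: a slice of the host's CPU, RAM, storage,
# and networking assigned to one virtual machine (all values hypothetical).
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')  # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the VM with the hypervisor
dom.create()                           # power it on
print(dom.name(), dom.info())          # (state, max RAM KiB, RAM KiB, vCPUs, CPU time)
conn.close()
```

The hypervisor then schedules those virtual CPUs and memory pages onto the shared physical hardware alongside every other VM on the host, which is exactly the sharing described above.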

Early on, computing was primarily done on mainframes, which were monolithic machines designed to provide all of the computing an organization needed. They were designed to share resources among parallel processes to accommodate multiple users. As computing needs grew, organizations began to move away from the monolithic architecture of the mainframe toward multiple physical servers that were less expensive and that each ran one or more applications for multiple users. Physical servers ranged in size and capacity from very large, rivaling mainframes, down to very small, resembling personal computers.

While mainframes never disappeared completely, the flexibility in cost and capacity of physical servers made them the infrastructure of choice across all industries. Unfortunately, as computing needs continued to grow, organizations needed more and more servers, and more administrators to manage them. The size of server rooms, along with their power and cooling demands, was honestly becoming ridiculous.

A number of technologies emerged that resembled what we now call server virtualization, allowing the compute and storage resources of a single physical box to be divided among different virtualized servers, but none of them became mainstream. Virtualization didn't really take off until hypervisor technology arrived for the x86 platform, just as other platforms were declining in the server market.

Initially, virtualization was not adopted for production servers because it lacked some of the performance and stability they required; instead it was used extensively for testing and development. That widespread use for test and dev eventually led to improvements that made administrators confident running it in production. Performance improvements, combined with clustering to provide high availability for virtual machines, opened the door to widespread adoption for production servers.

The transition to virtualization was dramatic, reducing server rooms that once housed dozens and dozens of racks to only a handful of racks of host servers and storage on which all of the same workloads ran. It is now difficult to find an IT shop that still uses physical servers as its primary infrastructure.

While many hypervisors battled to become the de facto solution, a handful were widely adopted, including Xen and KVM (both open source), Hyper-V, and VMware ESX/ESXi, which took the lion's share of the market. Those hypervisors or their derivatives continue to battle for market share today, more than a decade later. Cloud platforms have risen on top of each of them, adding to the question of whether a de facto hypervisor will ever emerge. But maybe it no longer matters.

Virtualization has now become a commodity technology. It may not seem so to VMware customers who are still weighing various licensing options, but server virtualization is pretty well baked, and innovation has shifted to hyperconvergence, cloud, and container technologies. The differences between hypervisors are few enough that buying decisions are now often based more on price and support than on technology.

This commoditization does not necessarily signal a decline in server virtualization anytime soon, but rather a shift in thinking away from traditional virtualization architectures. While cloud is driving innovation in multi-tenancy and self-service, hyperconvergence is fueling innovation in how hardware and storage can be designed and used more efficiently by virtual machines (as discussed in my previous post about storage technologies).

IT departments are beginning to wonder whether the baggage of training and management infrastructure for server virtualization is still a requirement or whether, as a commodity, server virtualization should no longer be so complex. Is being a virtualization expert still a badge of honor, or is it now a default expectation for IT administrators? And with hyperconvergence and cloud technologies simplifying virtual machine management, what level of expertise is really still required?

I think the main takeaway from the commoditization of server virtualization is that as you move to hyperconvergence and cloud platforms, you shouldn't need to know what the underlying hypervisor is, nor should you care, and you definitely shouldn't have to worry about licensing it separately. They say you don't understand something unless you can explain it to a 5-year-old. It is time for server virtualization to be easy enough that a 5-year-old can provision virtual machines instead of requiring a full-time, certified virtualization expert. Or maybe even a 4-year-old.
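As an illustration of that "you shouldn't need to know the hypervisor" point, here is another hedged sketch, again assuming the libvirt Python bindings rather than anything from the article: the same few lines of inventory code work whether the platform underneath is KVM, Xen, or ESX, and only the connection URI changes. The URIs, hostname, and credentials below are hypothetical.

```python
import libvirt

# The same management code runs against KVM, Xen, or ESX; only the
# connection URI differs (hosts and credentials are hypothetical).
for uri in ('qemu:///system', 'xen:///system', 'esx://admin@esx-host/?no_verify=1'):
    try:
        conn = libvirt.open(uri)
    except libvirt.libvirtError:
        continue  # that driver/host isn't reachable from this machine
    for dom in conn.listAllDomains():
        state, max_mem_kib, _, vcpus, _ = dom.info()
        print(f'{uri}: {dom.name()} vCPUs={vcpus} RAM={max_mem_kib // 1024} MiB')
    conn.close()
```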

More Stories By David Paquette

Starting with a degree in writing and a family history of software development, David entered the industry on the consumer end, providing tech support for dial-up internet users before moving into software development as a software tester in 1999. With 16 years of software development experience, moving from testing to systems engineering to product marketing and product management, David lived the startup and IPO experience, with expertise in disaster recovery, server migration, and data center infrastructure. Now at Scale Computing as the Product Marketing Manager, David is leading the messaging efforts for hyperconverged infrastructure adoption.
