Service Virtualization Helps Localize Impact of Elastic Scalability

Service Virtualization is the Opposite of – and Complementary Implementation to – Server Virtualization

One of the biggest challenges with any implementation of elastic scalability in virtualized and cloud computing environments is managing that scalability at run time and at design (configuration) time. The goal is to transparently scale out a service – network or application – in such a way as to eliminate the operational disruption often associated with scaling out (and back in).

Service virtualization allows virtually any service to be transparently scaled out with no negative impact on the service itself and, perhaps more importantly, on the applications and other services that rely upon it.


A QUICK PRIMER ON SERVER versus SERVICE VIRTUALIZATION

Service virtualization is the logical opposite of server virtualization. Server virtualization allows one resource to appear to be n resources, while service virtualization presents n resources so that they appear to be one resource. This is the basic premise upon which load balancing is based, and upon which elastic scalability in cloud computing environments is architected. Service virtualization is necessary to achieve the desired level of transparency in dynamically scaling environments.

One of the side-effects of elastic scalability is a constantly changing network infrastructure, at least from an IP routing point of view. Every device and application – whether virtual or physical – is assigned its own IP address because, well, that’s how networks operate. At least for now. Horizontal scalability is the most common means of achieving elastic scalability (and many, including me, would argue it is the most efficient), but it implies that there exist as many instances of a solution as are required to service demand. Each instance has its own IP address. An environment that leverages horizontal scalability without service virtualization would quickly find itself overwhelmed managing hundreds or thousands of IP addresses and all the associated network-layer protocol configuration required to properly route a request from client to server.

This is part of the “diseconomy of scale” that Greg Ness often mentions as part of a growing IPAM (IP Address Management) cost-value problem. It is simply not efficient, nor affordable, nor scalable on a human-capital level to continually update routing tables on routers, switches, and intermediate devices to keep up with an environment that scales in real time.
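
To make the diseconomy concrete, consider what every consumer of a horizontally scaled service must carry around when there is no service virtualization: the full, ever-changing list of instance addresses. A minimal illustration (all addresses are made up):

```python
# Without service virtualization, every client, router, and upstream
# service must track the real instance list directly...
crm_backends = ["10.0.1.1", "10.0.1.2"]  # the pool today

# ...and every scale event forces an edit to every copy of that list,
# wherever it lives: client configs, routing tables, firewall rules.
crm_backends = ["10.0.1.1", "10.0.1.2", "10.0.1.3", "10.0.1.4"]
```

Multiply that edit across hundreds of instances and dozens of consumers, and the human-capital cost becomes obvious.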

This is where service virtualization comes in and addresses most of the challenges associated with elastic scalability.


SCALABILITY DOMAINS

What service virtualization provides is a constant interface to a given application or network service. Whether that service is a firewall or a CRM deployment is irrelevant; if it can be addressed via an IP address, it is almost certainly capable of being horizontally scaled and managed via service virtualization. Service virtualization allows a service to be transparently scaled because clients, other infrastructure services, and applications see only one service and one IP address, and that address is constant. Behind the service virtualization solution – usually a load balancer or advanced application delivery controller – there will be a variable number of service instances at any given time, as demand requires. There may be only one instance to begin with, but as demand increases so will the number of instances being virtualized by the service virtualization solution. Applications, clients, and other infrastructure services have no need to know how many instances are being virtualized, nor should they. Every other service in the datacenter needs to know only about the virtual service, which shields them from any amount of volatility that may be occurring “behind the scenes” to ensure capacity meets demand.
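
To make the mechanism concrete, here is a minimal Python sketch of the idea (the class and method names are illustrative assumptions, not any vendor’s API): clients hold one constant virtual address, and the virtualization layer picks a real instance per request from whatever pool exists at that moment.

```python
import threading

class VirtualService:
    """One stable virtual address fronting a variable pool of real instances."""

    def __init__(self, virtual_address, instances=()):
        self.virtual_address = virtual_address  # the one constant clients ever see
        self._instances = list(instances)       # real backends; free to change
        self._cursor = 0
        self._lock = threading.Lock()

    def add_instance(self, address):
        """Scale out: a new instance joins the pool, invisibly to clients."""
        with self._lock:
            self._instances.append(address)

    def remove_instance(self, address):
        """Scale in: an instance leaves the pool, again invisibly to clients."""
        with self._lock:
            self._instances.remove(address)

    def route(self):
        """Pick a backend round-robin; a real proxy would forward the request."""
        with self._lock:
            if not self._instances:
                raise RuntimeError(f"no instances behind {self.virtual_address}")
            backend = self._instances[self._cursor % len(self._instances)]
            self._cursor += 1
            return backend
```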

This localizes the impact of elastic scalability on the entire infrastructure, constraining the side-effects of dynamism – such as ARP storms – within “scalability domains”. Each scalability domain is itself shielded from the side-effects of dynamism in other scalability domains because its services always communicate with the virtual service. Scalability domains implemented both logically and physically, on a separate network, can further reduce the potential impact of dynamism on application and network performance by walling off the increasing amount of network communication required to maintain high availability amidst a rapidly changing network-layer landscape. The network traffic required to support dynamism can (and probably should) be confined within a scalability domain.
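
Continuing the sketch above, the localized impact falls out directly: the pool inside the scalability domain can churn as much as demand requires, while nothing a client holds ever changes.

```python
crm = VirtualService("10.0.0.100", instances=["10.0.1.1"])

# Demand spikes: two more instances come online inside the scalability domain.
crm.add_instance("10.0.1.2")
crm.add_instance("10.0.1.3")

# Demand recedes: the original instance is retired.
crm.remove_instance("10.0.1.1")

# Through all of that churn, every client still addresses 10.0.0.100.
assert crm.virtual_address == "10.0.0.100"
print(crm.route())  # prints whichever backend is next in rotation, here 10.0.1.2
```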

This is, in a nutshell, a service-oriented architecture applied to infrastructure. The difference between an SOI (Service-Oriented Infrastructure) and an SOA (Service-Oriented Architecture) lies largely in what type of change is being hidden behind the interface. In the case of SOA, the interface is not supposed to change even though the actual implementation (code) might be extremely volatile. In an SOI, the interface (the virtual service) does not change even though the implementation (the number of service instances) does.

A forward-looking datacenter architecture strategy will employ service virtualization even if it is not strictly necessary right now. Perhaps one instance of an application or service is all that is required to meet demand. It is still a good idea to encapsulate the service within a scalability domain to avoid a highly disruptive change to the architecture later on, when it becomes necessary to scale out to meet increasing demand. By employing service virtualization in the initial architectural strategy, organizations can eliminate service disruptions, because a scalability domain can transparently scale out or in as necessary.
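
In the sketch’s terms, employing service virtualization before it is strictly needed just means standing up the virtual address with a pool of one, so that scaling out later is an add_instance() call rather than a re-architecture:

```python
# Day one: a single instance meets demand, but it already sits behind a
# virtual address, so scaling out later changes nothing for its consumers.
billing = VirtualService("10.0.0.200", instances=["10.0.2.1"])
```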

Architecting scalability domains also has the added benefit of creating strategic points of control within the datacenter that allow specific policies to be enforced across all instances of an application or service at an aggregation layer. Applying security, access, acceleration, and optimization policies at a strategic point of control ensures that such policies are consistently applied across all applications and services. This approach is also more flexible: it is much easier to make a single change to a given policy and apply it once than to apply it hundreds of times across all services, especially in a dynamic environment in which it is easy to “miss” a single application instance.
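
A strategic point of control can be sketched the same way (a hypothetical policy hook, not any product’s feature): a policy attached once to the virtual service is enforced for every instance behind it, present and future.

```python
class PolicedVirtualService(VirtualService):
    """A virtual service that runs shared policies before routing."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.policies = []  # checks applied to every request, pool-wide

    def route(self, request):
        for policy in self.policies:
            policy(request)  # one policy definition covers all instances
        return super().route()

def require_tls(request):
    # Hypothetical policy: refuse plaintext at the aggregation layer.
    if not request.get("tls"):
        raise PermissionError("TLS required")

storefront = PolicedVirtualService("10.0.0.150", instances=["10.0.3.1", "10.0.3.2"])
storefront.policies.append(require_tls)
storefront.route({"tls": True})    # allowed, routed to a backend
# storefront.route({"tls": False})  # would raise PermissionError, uniformly
```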

Scalability domains should be an integral component of any datacenter moving forward. Their service virtualization capabilities provide a foundation upon which dynamic scalability and consistent organizational policy enforcement can be implemented with minimal disruption to services and without reliance on individual teams, projects, admins, or developers to ensure policy deployment and usage. Service virtualization is a natural complement to server virtualization and, combined with a service-oriented architectural approach, can provide a strong yet flexible foundation for future growth.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
