Service Virtualization Helps Localize Impact of Elastic Scalability

Service Virtualization is the Opposite of – and Complementary Implementation to – Server Virtualization

One of the biggest challenges with any implementation of elastic scalability as it relates to virtualization and cloud computing is managing that scalability at run-time and at design (configuration) time. The goal is to transparently scale out some service – network or application – in such a way as to eliminate the operational disruption often associated with scaling up (and down) efforts.

Service virtualization allows virtually any service to be transparently scaled out with no negative impact to the service and, perhaps more importantly, to the applications and other services which rely upon that service.


A QUICK PRIMER ON SERVER versus SERVICE VIRTUALIZATION

Service virtualization is the logical opposite of server virtualization. Server virtualization allows one resource to appear to be n resources, while service virtualization presents those n resources so that they appear as one resource. This is the basic premise upon which load balancing is based, and upon which the elastic scalability in cloud computing environments is architected. Service virtualization is necessary to achieve the desired level of transparency in dynamically scaling environments.
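The n-to-one mapping can be sketched in a few lines. This is a minimal, hypothetical illustration (the names `vip`, `pool`, and `route` are illustrative, not any vendor's API): clients address a single virtual IP, and the virtual service distributes requests across n real instances behind it.

```python
import itertools

class VirtualService:
    """One stable address (the VIP) fronting n real instances.

    Minimal round-robin sketch of service virtualization; the class
    and method names are hypothetical, not a product API.
    """
    def __init__(self, vip, pool):
        self.vip = vip                        # the single address clients see
        self._cycle = itertools.cycle(pool)   # the n backends behind it

    def route(self, request):
        backend = next(self._cycle)           # round-robin backend selection
        return f"{request} -> {backend}"

# Clients only ever address 10.0.0.1; the three instances are invisible to them.
vs = VirtualService("10.0.0.1", ["10.1.0.11", "10.1.0.12", "10.1.0.13"])
print(vs.route("GET /"))
```

The same sketch works in reverse to describe server virtualization: there, one physical resource is carved into n virtual ones, each with its own address.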

One of the side-effects of elastic scalability is a constantly changing network infrastructure, at least from an IP routing point of view. Every device and application – whether virtual or physical – is assigned its own IP address because, well, that’s how networks operate. At least for now. Horizontal scalability is the most common means of achieving elastic scalability (and many, including me, would argue it is the most efficient), but that implies there exist as many instances of a solution as are required to service demand. Each instance has its own IP address. An environment that leverages horizontal scalability without service virtualization would quickly find itself overwhelmed managing hundreds or thousands of IP addresses and all the associated network-layer protocol configuration required to properly route a request from client to server.

This is part of the “diseconomy of scale” that Greg Ness often mentions as part of a growing IPAM (IP Address Management) cost-value problem. It is simply not efficient, affordable, or scalable on a human-capital level to continually update routing tables on routers, switches, and intermediate devices to keep up with an environment that scales in real time.

This is where service virtualization comes in and addresses most of the challenges associated with elastic scalability.


SCALABILITY DOMAINS

What service virtualization provides is a constant interface to a given application or network service. Whether that’s a firewall or a CRM deployment is irrelevant; if it can be addressed by an IP address, it is almost certainly capable of being horizontally scaled and managed via service virtualization. Service virtualization allows a service to be transparently scaled because clients, other infrastructure services, and applications see only one service, one IP address – and that address is a constant. Behind the service virtualization solution – usually a load balancer or advanced application delivery controller – there will be a variable number of service instances at any given time, as demand requires. There may be only one instance to begin with, but as demand increases so will the number of instances being virtualized by the service virtualization solution. Applications, clients, and other infrastructure services have no need to know how many instances are being virtualized, nor should they. Every other service in the datacenter needs only know about the virtual service, which shields them from any amount of volatility that may be occurring “behind the scenes” to ensure capacity meets demand.
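The transparency described above comes from the fact that the pool behind the virtual address can grow and shrink at runtime while the address itself never changes. A hedged sketch, with entirely hypothetical names (`ElasticPool`, `scale_out`, `scale_in` are illustrative, not a vendor API):

```python
class ElasticPool:
    """Backends join and leave at runtime; the virtual address is constant.

    Hypothetical sketch of an elastic service virtualization pool.
    """
    def __init__(self, vip):
        self.vip = vip        # never changes, regardless of pool size
        self.members = []     # current real instances, varies with demand
        self._i = 0           # round-robin cursor

    def scale_out(self, addr):
        self.members.append(addr)

    def scale_in(self, addr):
        self.members.remove(addr)

    def route(self, request):
        if not self.members:
            raise RuntimeError(f"no capacity behind {self.vip}")
        backend = self.members[self._i % len(self.members)]
        self._i += 1
        return backend

pool = ElasticPool("10.0.0.1")
pool.scale_out("10.1.0.11")   # one instance is enough at first
pool.scale_out("10.1.0.12")   # demand grows: add capacity
pool.scale_in("10.1.0.11")    # demand falls: remove it again
# Throughout all of this churn, clients addressed only 10.0.0.1.
```

The point of the sketch is that `scale_out` and `scale_in` are invisible to anything upstream of the VIP, which is exactly the localization of impact the article describes.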

This localizes the impact of elastic scalability on the entire infrastructure, constraining the side-effects of dynamism – such as ARP storms – within “scalability domains”. Each scalability domain is itself shielded from the side-effects of dynamism in other scalability domains because its services always communicate with the virtual service. Scalability domains, when implemented both logically and physically – with a separate network – can further reduce the potential impact of dynamism on application and network performance by walling off the increasing amount of network communication that must occur to maintain high availability amidst a rapidly changing network-layer landscape. The network traffic required to support dynamism can (and probably should) be confined within a scalability domain.
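The confinement idea can be modeled as a simple publish/subscribe boundary: membership churn is broadcast only to components inside the domain, while everything outside sees nothing but the stable virtual address. This is a conceptual sketch under assumed names (`ScalabilityDomain`, `external_view` are hypothetical), not a description of any real control-plane protocol:

```python
class ScalabilityDomain:
    """Churn notifications stay inside the domain; outsiders see only the VIP.

    Hypothetical model of the event-confinement property of a
    scalability domain.
    """
    def __init__(self, name, vip):
        self.name = name
        self.vip = vip
        self.members = set()
        self._subscribers = []   # in-domain components only

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def join(self, addr):
        self.members.add(addr)
        for cb in self._subscribers:   # churn is broadcast locally only
            cb("join", addr)

    def external_view(self):
        return self.vip                # outsiders never see members

events = []
dom = ScalabilityDomain("web", "10.0.0.1")
dom.subscribe(lambda kind, addr: events.append((kind, addr)))
dom.join("10.1.0.11")   # recorded inside the domain, invisible outside it
```

In a real deployment the "subscribers" would be the health-monitoring and high-availability traffic the article mentions, kept off the shared network by the domain boundary.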

This is, in a nutshell, a service-oriented architecture applied to infrastructure. The difference between a SOI (Service-Oriented Infrastructure) and a SOA (Service-Oriented Architecture) is largely in what type of change is being obfuscated by the interface. In the case of SOA the interface is not supposed to change even though the actual implementation (code) might be extremely volatile. In an SOI the interface (virtual service) does not change even though the implementation (number of services instances) does.

A forward-looking datacenter architecture strategy will employ the use of service virtualization even if it’s not necessary right now. Perhaps it’s the case that one instance of that application or service is all that’s required to meet demand. It is still a good idea to encapsulate the service within a scalability domain to avoid a highly disruptive change in the architecture later on when it becomes necessary to scale out to meet increasing demand. By employing the concept of service virtualization in the initial architectural strategy organizations can eliminate service disruptions because a scalability domain can transparently scale out or in as necessary.

Architecting scalability domains also has the added benefit of creating strategic points of control within the datacenter that allow specific policies to be enforced across all instances of an application or service at an aggregation layer. Applying security, access, acceleration and optimization policies at a strategic point of control ensures that such policies are consistently applied across all applications and services. This further has the advantage of being more flexible, as it is much easier to make a single change to a given policy and apply it once than it is to apply it hundreds of times across all services, especially in a dynamic environment in which it may be easy to “miss” a single application instance.
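The "apply once at the aggregation layer" idea can be sketched as a small pipeline: every request passes through the same ordered set of policies at the strategic point of control before any instance sees it, so no individual application instance can be "missed". The policy names below are illustrative assumptions, not real product policy types:

```python
def enforce(policies, request):
    """Apply every policy once, at the aggregation layer, before any
    backend instance sees the request. Hypothetical sketch."""
    for policy in policies:
        request = policy(request)
    return request

# One change to this list applies to every instance behind the virtual
# service -- there is no per-instance configuration to keep in sync.
policies = [
    # access policy: mark the request authenticated (illustrative check)
    lambda r: {**r, "authenticated": r.get("token") == "secret"},
    # optimization policy: flag the response for compression
    lambda r: {**r, "compressed": True},
]

req = enforce(policies, {"path": "/app", "token": "secret"})
```

Changing a policy means editing one entry in one list at one enforcement point, which is the flexibility advantage the paragraph above describes.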

Scalability domains should be an integral component of any datacenter moving forward. Their service virtualization capabilities provide a foundation upon which dynamic scalability and consistent organizational policy enforcement can be implemented with minimal disruption to services, and without reliance on individual teams, projects, admins, or developers to ensure policy deployment and usage. Service virtualization is a natural complement to server virtualization and, combined with a service-oriented architectural approach, can provide a strong yet flexible foundation for future growth.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
