
Workload Mobility: Is Your Cloning Strategy Shallow or Deep?

#SDN #cloud How low (into the workload stack) can you go?

One of the more interesting sentiments expressed by attendees at a roundtable session at Gartner Data Center earlier this month was the notion that to them, "SDN is packaging app as code, server, and network and deploying where I need it". This is intimately tied to the idea of workload mobility, at least for enterprise customers, because they recognize the relationship between the application and its network infrastructure services as being critical to the success of migration from one environment to another.

Now, I'm not saying I agree with this definition of SDN, but the notion that we need to be able to package applications holistically is not a new one nor is it something that should be ignored.

What these participants were pointing to was the need for a "deep" copy of an application as a means to enable workload mobility. They don't want just the app; they need the whole kit and caboodle to be packaged up neatly and moved elsewhere, presumably the cloud. They want the cloning or packaging process to encompass everything, from top to bottom and bottom to top - not just the shallow upper reaches of the stack, which begin and end with the application itself.
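The shallow-versus-deep distinction here maps neatly onto the programming concept of the same name. A minimal sketch in Python, using an illustrative workload model (the field names are assumptions, not any vendor's actual object schema):

```python
import copy

# A toy workload "stack": the app plus the infrastructure services it
# depends on. Names are illustrative only.
workload = {
    "app": "inventory-service",
    "network_services": {
        "load_balancer": {"algorithm": "least-connections", "persistence": "cookie"},
        "firewall_rules": ["allow tcp/443 from any"],
    },
}

shallow = copy.copy(workload)      # copies the top level; nested services are shared
deep = copy.deepcopy(workload)     # copies the whole stack, top to bottom

# Changing the original's nested config leaks into the shallow clone...
workload["network_services"]["load_balancer"]["algorithm"] = "round-robin"

print(shallow["network_services"]["load_balancer"]["algorithm"])  # round-robin
print(deep["network_services"]["load_balancer"]["algorithm"])     # least-connections
```

The shallow clone looks complete until something underneath it changes - which is exactly the risk of migrating an app without deep-copying the infrastructure services it sits on.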

There are two core reasons this isn't possible today. First, not all infrastructure vendors have a packaging strategy themselves. Application-related rules and policies often cannot be managed as a group of related configuration items. Take an application delivery controller (ADC), for example, because it provides the critical load balancing services required to scale applications in every environment today. There are two approaches to packaging ADC services:

  1. Multi-tenant capable systems group application-related configuration objects and attributes together and enable export/import of that packaging.
  2. The suggested deployment is one ADC (usually a virtual instance) per application so that the configuration of the ADC is assumed to be the application's configuration. Packaging becomes a configuration management exercise.

The latter is more common in the ADC market, as most delivery platforms have not yet made the jump from single-tenant to true multi-tenant support, which means configuration objects are independent of one another and not easily associated in the first place.
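The first approach - grouping application-related configuration objects so they can be exported and imported as a unit - can be sketched in a few lines. This is a hypothetical data model, not any real ADC's API; the point is that grouping only works if objects carry some association (here, an `app` tag) in the first place:

```python
import json

# Hypothetical flat config store: independent objects, as on a
# single-tenant ADC where nothing ties them together by default.
config_objects = [
    {"type": "virtual_server", "name": "vs_app1", "app": "app1", "port": 443},
    {"type": "pool", "name": "pool_app1", "app": "app1", "members": ["10.0.0.1:8080"]},
    {"type": "health_monitor", "name": "mon_app1", "app": "app1", "interval": 5},
    {"type": "virtual_server", "name": "vs_app2", "app": "app2", "port": 80},
]

def export_app_package(objects, app):
    """Gather everything tagged with one application into an exportable package."""
    return json.dumps(
        {"application": app, "objects": [o for o in objects if o["app"] == app]},
        indent=2,
    )

package = export_app_package(config_objects, "app1")
```

On a platform where objects lack that association, this export is impossible without first reverse-engineering which virtual servers, pools, and monitors belong to which application - which is why packaging degenerates into a configuration management exercise.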

Second, even when this is possible, there is no holistic method for packaging an application together with its related infrastructure services and migrating that package to a generic cloud. There is no cross-environment EAR file, in developer parlance. OVF is an attempt at such a beast, but it doesn't cover the full depth of the stack, so some infrastructure services are overlooked.

Applications today are "integrated" with a plethora of infrastructure services: identity federation, persistent load balancing, secure cookie gateways, firewall rules, and URI rewriting are just a few of the services upon which an application may be dependent. These applications are not deployable without them, and thus cannot be migrated to the cloud - or anywhere else, for that matter - without them.
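That dependency gap can be made concrete with a simple set difference: the services the application requires versus the services a target environment actually offers. The service names below come from the list above; the target environment's catalog is an assumption for illustration:

```python
# Services this application depends on (from the examples above).
app_dependencies = {
    "identity_federation",
    "persistent_load_balancing",
    "secure_cookie_gateway",
    "uri_rewriting",
}

# Hypothetical catalog of what a target cloud environment provides.
target_cloud_services = {"persistent_load_balancing", "firewall_rules"}

missing = app_dependencies - target_cloud_services
if missing:
    print("Cannot migrate; missing services:", sorted(missing))
```

Any non-empty `missing` set means the "deep" clone has nowhere to land - the workload is mobile only on paper.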

This is the challenge for providers and vendors - to figure out how to enable workload mobility of applications that are dependent on services that may not be compatible or even exist in a cloud computing environment.




More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
