
What Do Applications Want?

What is it that applications want, and more importantly, what of those desires can the network fulfill?

That's one of the questions SDN has to answer in order to make itself relevant in the big picture that is the software-defined data center. What, other than forwarding packets, routing between hops, and adding a little QoS here and there, can the network offer to applications?

Consider the response of Robert Sherwood, CTO of Big Switch Networks and head of the ONF's Architecture and Framework Working Group (responsible in part for standardizing SDN controller northbound APIs), to Network World Editor in Chief John Dix's question about the role of the northbound API in the SDN architecture:

So the northbound API is how that business application [e.g. Hadoop, OpenStack Nova] talks to the controller to explicitly describe its requirements: I am OpenStack. I want this VM to talk to this other VM but no other VMs can talk to them, etc. But also give me a view of how loaded the network is so I can make an informed decision on where to put new VMs. So those are two examples of northbound APIs that I think are meaningful for people.

Clarifying the role of software-defined networking northbound APIs
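To make the two interactions Sherwood describes concrete, here is a minimal sketch of what such northbound calls might look like, assuming a hypothetical REST-style controller API. The controller URL, endpoints, and payload fields are illustrative, not any specific controller's actual interface.

```python
# A sketch of two northbound interactions: declaring connectivity requirements and
# asking the controller for a view of network load. Endpoints and fields are hypothetical.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # hypothetical controller URL

# 1. Explicitly describe the application's requirements: allow these two VMs to
#    talk to each other and deny everything else.
policy = {
    "tenant": "openstack-nova",
    "allow": [{"src": "vm-web-01", "dst": "vm-db-01", "bidirectional": True}],
    "default": "deny",
}
requests.post(f"{CONTROLLER}/policies", json=policy, timeout=5)

# 2. Ask how loaded the network is so the orchestrator can make an informed
#    decision about where to place a new VM.
load = requests.get(f"{CONTROLLER}/topology/load", timeout=5).json()
least_loaded = min(load["links"], key=lambda link: link["utilization"])
print("Place the new VM near:", least_loaded["dst_switch"])
```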

These are two powerful examples of visibility (monitoring of load and conditions) and security (access control, essentially) that are lacking in today's architectures. While people (ops) clearly have visibility, this data is often shuttled off to an APM (application performance monitoring) system, never to be seen again except in the weekly operations report. Security, of course, is something applications and devops have traditionally accomplished through the use of IP access control lists in the operating system, or through application-specific methods that enable or disable access from specific IP addresses and ranges.

This, of course, is simply not a sustainable method of managing access in a modern, volatile environment. Such models were designed for fixed, static networks in which application servers and systems were assigned an IP address at deployment - and they stayed put. Virtualization and cloud computing break that model and introduce volatility, particularly when elasticity is desired.

Also of importance is the ability to segment out network traffic, to isolate tenants in the parlance of modern cloud architectures. VLAN assignment has traditionally been a very manual process, requiring updates to multiple pieces of network infrastructure along the data path. By enabling a more dynamic and automatic assignment process, tenant traffic can then be assigned specific network performance profiles that aid in meeting service level agreements, as well as routing to services specific to the application such as those providing security at multiple layers of the network stack. This is the concept behind service chaining: dynamically routing traffic through a set of services to provide valuable infrastructure functions on the inbound and outbound data path.
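As a rough illustration of what that dynamic assignment might look like inside a controller application, the sketch below onboards a tenant and attaches a segment, a performance profile, and a service chain in one step. The object model, field names, and services listed are assumptions for illustration, not an actual controller's data model.

```python
from dataclasses import dataclass, field
from itertools import count
from typing import List

_segment_ids = count(100)  # stand-in for the controller's real segment-ID allocator

@dataclass
class TenantSegment:
    tenant: str
    segment_id: int           # e.g. a VLAN ID or overlay segment chosen by the controller
    performance_profile: str  # maps to QoS/queueing treatment on the data path
    service_chain: List[str] = field(default_factory=list)  # ordered services to traverse

def onboard_tenant(tenant: str, sla: str) -> TenantSegment:
    """Assign a segment and profile automatically instead of editing each device by hand."""
    return TenantSegment(
        tenant=tenant,
        segment_id=next(_segment_ids),
        performance_profile="gold" if sla == "premium" else "best-effort",
        service_chain=["firewall", "ids", "load-balancer"],
    )

print(onboard_tenant("tenant-a", sla="premium"))
```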

What this implies is not that the controller or the controller "applications" are necessarily providing higher order functions. The controller applications can also be responsible for routing traffic to the appropriate services that provide those higher order functions. The SDN controller and its applications become the primary means of orchestrating traffic through the network, delegating to services hosted in the network those functions that are appropriate for the application.
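Expanding a service chain into a concrete path is the kind of work such a controller application would do. The sketch below maps an ordered chain onto (switch, port) hops; the switch names, port numbers, and service attachment points are invented for illustration.

```python
# Expand an ordered service chain into the (switch, port) hops traffic should traverse.
# Switch names, ports, and service attachment points are invented for illustration.
SERVICE_LOCATIONS = {
    "firewall":      ("switch-1", 3),
    "cache":         ("switch-2", 7),
    "load-balancer": ("switch-3", 1),
}

def build_path(ingress, chain, egress):
    """Return the ordered hops, steering traffic through each service in the chain."""
    hops = [ingress]
    for service in chain:
        hops.append(SERVICE_LOCATIONS[service])
    hops.append(egress)
    return hops

# Inbound web traffic: firewall first, then the load balancer, then on to the app.
path = build_path(("edge-switch", 1), ["firewall", "load-balancer"], ("app-switch", 9))
for switch, port in path:
    print(f"install flow rule on {switch} -> out port {port}")
```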

BUT THAT'S NOT WHAT APPLICATIONS WANT

What's interesting is that VLAN and default gateway configurations are not really application concerns. They are operating system concerns and network device concerns, but they are not, to use the emerging vernacular, domain concerns: the things the "application" actually wants or even should want. Oh, certainly the application needs an IP address, and security policies may dictate that it exchange data only with certain other systems, but that's not what the application wants. That's what it needs. To really start addressing what applications want, we must start evaluating domain concerns that are specific to the application.

An example of this is moving the functionality provided by WCCP (Web Cache Communication Protocol) to an SDN controller application. The cache application on the SDN controller would not necessarily provide the caching service itself, but rather the ability to determine whether application requests destined for a specific application should be redirected to a caching service deployed atop an SDN-enabled (managed) network fabric. The way a router today uses WCCP to redirect and route network traffic to a stand-alone web cache translates directly to an SDN application. In the SDN model, using the northbound API, an application can inform the network that it desires the services of a caching system. The SDN controller might then orchestrate the flow of traffic appropriately, chaining services to ensure the inclusion of the cache in the data path.
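A controller-side cache application of this kind boils down to a per-flow steering decision. The following is a minimal sketch of that decision, assuming illustrative addresses and a simple flow description; it is not the WCCP protocol itself, only the analogous redirect logic.

```python
# The steering decision a controller-side cache application might make for a new flow.
# Addresses and the flow structure are illustrative only; this is not WCCP itself.
CACHE_SERVICE = {"ip": "10.0.9.10", "port": 80}   # where the caching service lives
CACHED_APPS = {"10.0.1.20"}                        # applications that requested caching

def next_hop(flow):
    """Redirect HTTP requests for cached applications to the cache; forward the rest."""
    if flow["dst_ip"] in CACHED_APPS and flow["dst_port"] == 80:
        return CACHE_SERVICE
    return {"ip": flow["dst_ip"], "port": flow["dst_port"]}

print(next_hop({"src_ip": "172.16.4.2", "dst_ip": "10.0.1.20", "dst_port": 80}))
```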

The interesting thing to watch in the coming months (and probably years, considering the maturation level of SDN in general) will be discovering what "wants" an application has that might be fulfilled using this model. Is it the case that an application will be able to inform an SDN controller it "wants" web application firewall protection for a set of URIs, and that from that information the SDN controller will be able to orchestrate (chain) the appropriate services as well as their configuration?
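Such a "want" might be expressed over a northbound API as nothing more than a declarative description the controller can act on. The payload below is a purely hypothetical sketch of that idea; the field names, URIs, and ruleset label are invented for illustration.

```python
import json

# A declarative "want": web application firewall protection for a set of URIs.
# Field names, URIs, and the ruleset label are invented for illustration.
want = {
    "application": "storefront",
    "wants": [
        {
            "service": "web-application-firewall",
            "scope": {"uris": ["/checkout/*", "/account/*"]},
            "config": {"mode": "blocking", "ruleset": "owasp-top-10"},
        }
    ],
}

# A controller receiving this could chain a WAF into the data path and push the
# URI-specific configuration to it.
print(json.dumps(want, indent=2))
```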

Only time will tell whether this model will mature and turn out to be "the one," but what seems obvious is that the success of this model depends entirely on just how application (domain) aware it will be. Because what applications want are application (domain) services that reside far higher in the stack than what today's SDN models propose to provide and support. Service chaining in conjunction with a robust northbound API seems a feasible means to address that.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.


