What Do Applications Want?

What is it that applications want, and more importantly, what of those desires can the network fulfill?

That's one of the questions SDN has to answer in order to make SDN relevant in the big picture that is the software-defined data center. What, other than forwarding packets, routing between hops, and adding a little QoS here and there, can the network offer to applications?

Consider the response of Robert Sherwood, CTO of Big Switch Networks and head of the ONF's Architecture and Framework Working Group (responsible in part for standardizing SDN controller northbound APIs), to Network World Editor in Chief John Dix's question regarding the role of the northbound API in the SDN architecture:

So the northbound API is how that business application [e.g. Hadoop, OpenStack Nova] talks to the controller to explicitly describe its requirements: I am OpenStack. I want this VM field to talk to this other VM but no other VMs can talk to them, etc. But also give me a view of how loaded the network is so I can make an informed decision on where to put new VMs. So those are two examples of northbound APIs that I think are meaningful for people.

Clarifying the role of software-defined networking northbound APIs

These are two powerful examples of visibility (monitoring of load and conditions) and security (access control, essentially) that are lacking in today's architectures. While people (ops) clearly have visibility, this data is often shuttled off to an APM (application performance monitoring) system, never to be seen again except in the weekly operations report. Security, of course, is something applications and devops have traditionally accomplished through the use of IP access control lists in the operating system, or by using application-specific methods to enable/disable access from specific IP addresses and/or ranges.
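To make those two examples concrete, here is a minimal sketch of what such northbound calls might look like against a hypothetical REST-style controller. The host name, endpoints, and payload fields are illustrative assumptions, not any particular controller's actual API.

```python
import requests

# Hypothetical controller address and endpoints, for illustration only.
CONTROLLER = "http://sdn-controller.example.com:8181"

def get_link_utilization():
    """Visibility: ask the controller how loaded the network is, so an
    orchestrator (e.g. an OpenStack scheduler) can place new VMs intelligently."""
    resp = requests.get(f"{CONTROLLER}/northbound/v1/topology/links/utilization")
    resp.raise_for_status()
    return resp.json()  # e.g. [{"link": "s1-s2", "utilization": 0.42}, ...]

def restrict_vm_traffic(vm_a, vm_b):
    """Security: declare that vm_a may talk to vm_b and to nothing else.
    The controller translates this intent into flow rules on the data path."""
    policy = {
        "name": f"allow-{vm_a}-{vm_b}-only",
        "allow": [{"src": vm_a, "dst": vm_b}],
        "default": "deny",
    }
    resp = requests.post(f"{CONTROLLER}/northbound/v1/policies", json=policy)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_link_utilization())
    print(restrict_vm_traffic("vm-web-01", "vm-db-01"))
```

The point of the sketch is the shape of the conversation, not the syntax: the application (or its orchestrator) states intent northbound, and the controller owns the translation into device configuration.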

Static IP-based access control, of course, is simply not a sustainable method of managing access in a modern, volatile environment. Such models were designed for fixed, static networks wherein application servers and systems were assigned an IP address at deployment - and they stayed put. Virtualization and cloud computing break that model and introduce volatility, particularly when elasticity is desired.

Also of importance is the ability to segment network traffic, to isolate tenants in the parlance of modern cloud architectures. VLAN assignment has traditionally been a very manual process, requiring updates to multiple pieces of network infrastructure along the data path. By enabling a more dynamic and automatic assignment process, tenant traffic can be assigned specific network performance profiles that aid in meeting service level agreements, as well as routed to services specific to the application, such as those providing security at multiple layers of the network stack. This is the concept behind service chaining: dynamically routing traffic through a set of services to provide valuable infrastructure functions on the inbound and outbound data path.
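As a rough illustration of that dynamic assignment, the sketch below shows how an orchestration script might ask a controller to place a tenant on a VLAN, bind a performance profile, and declare the service chain its traffic should traverse. The endpoint and field names are assumptions made for the sake of the example, not a real controller interface.

```python
import requests

# Hypothetical controller address and endpoint, for illustration only.
CONTROLLER = "http://sdn-controller.example.com:8181"

def onboard_tenant(tenant_id, vlan_id, profile, chain):
    """Assign a tenant to a VLAN, bind a performance profile (SLA tier),
    and declare the ordered set of services its traffic must pass through."""
    payload = {
        "tenant": tenant_id,
        "vlan": vlan_id,
        "performance_profile": profile,   # e.g. "gold", "bronze"
        "service_chain": chain,           # ordered list of service instances
    }
    resp = requests.post(f"{CONTROLLER}/northbound/v1/tenants", json=payload)
    resp.raise_for_status()
    return resp.json()

# Traffic for tenant-42 is steered through a firewall and an IPS before it
# reaches the application; the controller programs every device on the path.
onboard_tenant(
    tenant_id="tenant-42",
    vlan_id=1042,
    profile="gold",
    chain=["firewall-01", "ips-01"],
)
```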

None of this implies that the controller or the controller "applications" are necessarily providing higher order functions themselves. The controller applications can instead be responsible for routing traffic to the appropriate services that provide those higher order functions. The SDN controller and its applications become the primary means of orchestrating traffic through the network, delegating to services hosted in the network those functions that are appropriate for the application.

BUT THAT'S NOT WHAT APPLICATIONS WANT

What's interesting is that VLAN and default gateway configurations are not really application concerns. They are operating system concerns and network device concerns, but they are not, in what is becoming the vernacular, domain concerns, and they are not something the "application" wants. Oh, certainly the application needs an IP address, and security policies may dictate that it exchange data only with certain other systems, but that's not what the application wants. That's what it needs. To really start addressing what applications want, we must start evaluating domain concerns that are specific to the application.

An example of this is moving the functionality provided by WCCP (Web Cache Communication Protocol) to an SDN controller application. The cache application on the SDN controller would not necessarily provide the caching service itself, but rather would determine whether requests destined for a specific application should be redirected to a caching service deployed atop an SDN-enabled (managed) network fabric. The way a router today uses WCCP to redirect and route network traffic to a stand-alone web cache translates naturally to an SDN application. In the SDN model, using the northbound API, an application can inform the network that it desires the services of a caching system. The SDN controller might then orchestrate the flow of traffic appropriately, chaining services to ensure the inclusion of the cache in the data path.
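A sketch of what such a cache-redirect controller application might do, assuming an OpenFlow-like southbound interface: match HTTP requests bound for the application and divert them to a cache attached to the fabric. The class names, the flow-rule structure, and the stand-in southbound driver are all hypothetical, intended only to show the shape of the idea.

```python
class FakeSouthbound:
    """Stand-in for a real southbound driver; it only records what would be pushed."""
    def install_flow(self, switch_id, flow):
        print(f"install on {switch_id}: {flow}")


class CacheRedirectApp:
    """Hypothetical controller application playing the role WCCP plays today:
    steer cacheable requests to a web cache deployed on the fabric."""

    def __init__(self, southbound, app_vip, cache_port):
        self.southbound = southbound    # handle used to program switches
        self.app_vip = app_vip          # virtual IP of the application
        self.cache_port = cache_port    # switch port where the cache is attached

    def enable_caching(self, switch_id):
        # Match HTTP requests destined for the application and divert them to
        # the cache; all other traffic keeps its normal forwarding behavior.
        flow = {
            "match": {"ip_dst": self.app_vip, "tcp_dst": 80},
            "actions": [{"output": self.cache_port}],
            "priority": 100,
        }
        self.southbound.install_flow(switch_id, flow)


app = CacheRedirectApp(FakeSouthbound(), app_vip="10.0.0.10", cache_port=7)
app.enable_caching("switch-edge-1")
```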

The interesting thing to watch in the coming months (and probably years, considering the maturation level of SDN in general) will be discovering what "wants" an application has that might be fulfilled using this model. Is it the case that an application will be able to inform an SDN controller it "wants" web application firewall protection for a set of URIs, and that from that information the SDN controller will be able to orchestrate (chain) the appropriate services as well as their configuration?
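If that scenario plays out, the "want" might be expressed as little more than a declarative policy handed to the controller's northbound API, something like the hypothetical sketch below. The endpoint and every field name are invented for illustration; the point is that the application states the outcome it wants and leaves the chaining and configuration of a web application firewall to the controller.

```python
import requests

# Hypothetical controller address and endpoint, for illustration only.
CONTROLLER = "http://sdn-controller.example.com:8181"

# The application declares what it wants -- WAF protection for a set of URIs --
# and leaves the how (which WAF instance, where in the chain, its configuration)
# to the controller and its service-chaining application.
want = {
    "application": "storefront",
    "wants": [
        {
            "service": "web-application-firewall",
            "scope": {"uris": ["/checkout/*", "/account/*"]},
            "policy": "block-owasp-top-10",
        }
    ],
}

resp = requests.post(f"{CONTROLLER}/northbound/v1/application-wants", json=want)
resp.raise_for_status()
print(resp.json())
```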

Only time will tell whether this model will mature and turn out to be "the one," but what seems obvious is that the success of this model depends entirely on just how application (domain) aware it will be. What applications want are application (domain) services that reside far higher in the stack than what today's SDN models propose to provide and support. Service chaining in conjunction with a robust northbound API seems a feasible means of addressing that.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
