Beyond SDN: Creating Focused and Usable Solutions

The real customers and end users want practical and usable solutions, not definitions

Software Defined Networking (SDN) has become a popular paradigm, and the bandwagon of the networking industry today. SDN is primarily considered a methodology or approach to solving some of the well-known problems in the enterprise and service provider networking space, and it is also a tool for creating exciting new features. The term "Software Defined Networking" gives vendors a green-field opportunity to define, promote and customize it in their own way. End users don't care so much about the definition; they are more concerned with its contribution to optimizing and solving real problems.

The initial protocol considered a precursor to SDN is "OpenFlow." The Open Networking Foundation (ONF) defines SDN as a new approach to networking in which network control is decoupled from the data-forwarding function and is directly programmable. OpenFlow allows traditional layer 2 switches to examine headers in a packet or frame and make forwarding decisions on them. OpenFlow-enabled switches examine packet headers up through the transport layer and can match a dozen or more fields spanning layer 2 to layer 4.
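
To make that concrete, here is a minimal sketch of such a layer 2 through layer 4 match, written against the open-source Ryu controller framework; the addresses, ports and priority are illustrative placeholders, not values from the article.

```python
# A minimal sketch of a multi-layer OpenFlow match using the Ryu
# controller framework (OpenFlow 1.3). Addresses and ports are
# illustrative placeholders.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class MultiLayerMatch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        ofp = dp.ofproto

        # Match fields from layer 2 (eth_type) through layer 4
        # (tcp_dst): IPv4/TCP traffic bound for 10.0.0.10:80.
        match = parser.OFPMatch(eth_type=0x0800,   # IPv4
                                ip_proto=6,        # TCP
                                ipv4_dst='10.0.0.10',
                                tcp_dst=80)
        actions = [parser.OFPActionOutput(2)]      # hypothetical port
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```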

How Exactly Is It Going to Be Useful?
There are some interesting use cases defined by various vendors that use IP and TCP header lookups to make forwarding decisions. Even though these use cases are not fully established, they may be useful for traffic redirection and traffic engineering using switches alone. One practical use of traffic engineering is isolating malicious traffic at the switch level for further analysis and containment. Another is diverting traffic across multiple ISP connections based on the application and the specific computer (user). Many vendors are focusing on establishing these use cases by creating controllers and switches: controllers push rules onto the switches, and switches perform the packet processing and rule lookup and make the forwarding decisions. Many vendors consider OpenFlow controllers and switches to be the two main pieces of SDN. Other software, such as orchestration/automation software, is also being developed and promoted under the SDN umbrella.
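
For instance, the malicious-traffic use case could be expressed as a single high-priority rule that steers a suspect host's traffic to an analysis port. Below is a hedged Ryu-style sketch; quarantine_host, the suspect address and the port numbers are hypothetical:

```python
def quarantine_host(datapath, suspect_ip, analysis_port):
    """Steer all IPv4 traffic from a suspect host to an analysis port
    (Ryu / OpenFlow 1.3); datapath is the switch handle Ryu passes in."""
    parser = datapath.ofproto_parser
    ofp = datapath.ofproto
    match = parser.OFPMatch(eth_type=0x0800, ipv4_src=suspect_ip)
    actions = [parser.OFPActionOutput(analysis_port)]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    # Priority 100 so this rule wins over ordinary forwarding entries.
    datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=100,
                                        match=match, instructions=inst))
```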

Why Do We Need Orchestration/Automation Software?
Orchestration/automation software is primarily considered a component that sits on top of the controller and uses the controller's northbound APIs to execute sets of tasks in sequence, based on events and monitoring. Usually these tasks are performed by scripts that run in either a time-driven or event-driven way, set in place manually by system administrators. Examples include a weekend configuration script, or a flash-crowd-specific network and server configuration script. Orchestration provides the ability to perform scenario-specific, time-specific or business-policy-specific infrastructure setup and configuration. It brings these scripts under the single umbrella of SDN and hides the error-prone programming from system administrators behind a user-friendly, easy-to-configure and easy-to-monitor graphical user interface.
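
As a toy illustration of the idea, the sketch below runs administrator-written scripts either on a schedule or in response to a monitoring event; the playbook names and script paths are hypothetical:

```python
# Toy orchestration runner: execute administrator-written scripts
# either on a schedule or when a monitoring event fires. Playbook
# names and script paths are hypothetical.
import sched
import subprocess
import time

PLAYBOOKS = {
    'weekend': ['./configure_weekend_network.sh'],
    'flash_crowd': ['./scale_out_servers.sh', './widen_lb_pool.sh'],
}

def run_playbook(event):
    """Run every script bound to an event, in sequence."""
    for script in PLAYBOOKS.get(event, []):
        subprocess.run([script], check=True)

scheduler = sched.scheduler(time.time, time.sleep)
# Time-driven: apply the weekend configuration after 60 seconds (demo).
scheduler.enter(60, 1, run_playbook, argument=('weekend',))
# Event-driven: a monitoring hook would call run_playbook('flash_crowd')
# when it detects a traffic surge.
scheduler.run()
```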

One of the most important uses of orchestration/automation software is in cloud computing. The cloud is in essence a data center that runs services on physical servers directly or on virtual machines that share a single physical server, and that provides a user-friendly interface to manage the services, the virtual machines (VMs), the servers and the whole infrastructure. The main idea behind consolidating VMs on a single physical server is to maximize the utilization of the hardware investment and minimize operational expenses (OPEX) such as energy costs by running the fewest possible physical servers for a given load. As load increases, more VMs must be brought online to balance it and provide optimum service. Hardware virtualization software (hypervisors) makes preserving a running operating system as a snapshot or image easy and automatic. When a snapshot is brought up as a virtual machine, the underlying networking must also be reconfigured automatically. This is where OpenFlow comes into play to enable network virtualization.
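
The consolidation goal can be made concrete with a toy first-fit-decreasing placement, sketched below. Real schedulers weigh many more dimensions (CPU, memory, affinity), but the OPEX intuition, fewer powered-on servers for the same load, is the same:

```python
def place_vms(vm_loads, server_capacity):
    """Pack VM loads (percent of one server) onto as few servers as
    possible using a first-fit-decreasing heuristic."""
    servers = []
    for load in sorted(vm_loads, reverse=True):
        for server in servers:
            if sum(server) + load <= server_capacity:
                server.append(load)    # reuse a powered-on server
                break
        else:
            servers.append([load])     # power on another server
    return servers

# Eight VM loads fit on 3 physical servers instead of 8 hosts.
print(place_vms([50, 40, 30, 30, 20, 20, 10, 10], 100))
```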

Here's how it works. When the VM boots up and sends its first Ethernet frame outbound, the switch captures it, sends the layer 2 through layer 4 header information to the controller and asks where to forward the packets. The controller creates a dynamic "vlan-like" port grouping based on predefined policies using MAC or IP addresses. Without any administrative intervention, the newly created VM is already part of the existing network and of the preconfigured load balancer server pool. This practical and exciting approach makes good use of SDN. The automation is generally done through the hypervisor or management software that runs above it. While this automation seems magical, there are some important points to consider.
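
A hedged sketch of that packet-in moment, again using the Ryu framework; the MAC-prefix policy table is a stand-in for whatever the real controller consults:

```python
# Sketch: a Ryu app that sees a VM's first frame (packet-in) and
# assigns it to a predefined "vlan-like" group. The MAC-prefix
# policy table below is hypothetical.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import ethernet, packet
from ryu.ofproto import ofproto_v1_3

POLICY = {'52:54:00': 'web-tier'}   # MAC prefix -> port group


class VmOnboarding(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        eth = packet.Packet(ev.msg.data).get_protocol(ethernet.ethernet)
        group = POLICY.get(eth.src[:8], 'default')
        # A real app would now push OFPFlowMod rules binding the VM's
        # MAC/IP to its group and load balancer pool; no administrator
        # touches the switch.
        self.logger.info('first frame from %s -> group %s', eth.src, group)
```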

What's the Catch?
Like expert magicians, SDN vendors misdirect users toward the features and opportunities of control and data plane separation while not revealing some important facts. With so much promotional and inaccurate information about SDN in the market, we should learn to look behind the curtain to understand the price paid for the new features. When we look closely, the price of enabling OpenFlow is obvious: performance. Traditional switches are built to look up fixed-length layer 2 headers; OpenFlow switches must also look up variable-length headers such as IP and TCP. While the extra effort of variable-length lookup and parsing is evident, there are good readings that detail the performance penalty of handling variable-length headers compared to fixed-length ones.
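
The asymmetry is visible even in a toy parser: the Ethernet fields sit at fixed offsets, while the IPv4 header length must first be computed from the IHL field before the TCP ports can even be located. A minimal Python sketch:

```python
# Toy contrast between fixed-offset L2 parsing and variable-length
# L3 parsing: the TCP ports cannot be located until the IPv4 IHL
# field has been read and multiplied out.
import struct

def l4_ports(frame: bytes):
    """Return (src, dst) TCP ports of an Ethernet/IPv4/TCP frame."""
    ethertype, = struct.unpack_from('!H', frame, 12)  # fixed offset
    if ethertype != 0x0800:
        return None                    # not IPv4
    ihl = (frame[14] & 0x0F) * 4       # variable: header length nibble
    if frame[14 + 9] != 6:
        return None                    # not TCP (protocol field)
    return struct.unpack_from('!HH', frame, 14 + ihl)
```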

Although OpenFlow switches open up an exciting new approach and bring huge momentum to the networking industry, the illusion that they will replace all layer 2 switches will not hold up when you actually put them to the test and compare results. OpenFlow should complement the existing infrastructure rather than attempt to replace traditional switches, since OpenFlow switches solve a different set of problems. The price we pay to automatically detect a newly created VM or application session is a significant hit to packet/frame forwarding performance. While OpenFlow is still useful as a traffic engineering and flow management tool, it should not be considered a replacement for a layer 2 switch. This is not just a matter of OpenFlow protocol maturity; it follows from the design itself.

Hidden Gem
One important aspect of SDN that does not get much specific attention is the northbound API. While "application-oriented" and "application-defined" software and networking product promotions have been swamping the industry, these mostly amount to engineering application traffic based on TCP port numbers. Correctly implemented northbound APIs, however, can bridge the gap between the application and networking worlds. Industry brilliance should be applied to solving the real age-old problem. Applications ride on TCP, and application developers treat the network as one big pipe with unlimited bandwidth and speed-of-light connectivity; applications have almost no visibility into the underlying networking or server infrastructure. In the SDN world, controller vendors are pondering and developing northbound APIs, but most treat these APIs only as a CLI replacement, or as a southbound interface for yet another network automation or management layer.
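
For the CLI-replacement style of northbound API described here, a client interaction can be as thin as the sketch below; it assumes a controller exposing a REST northbound such as Ryu's ofctl_rest module, and endpoint paths vary by controller:

```python
# Thin client for a controller's REST northbound. Assumes a Ryu
# controller running the ofctl_rest module on its default port;
# endpoint paths differ on other controllers.
import requests

CONTROLLER = 'http://127.0.0.1:8080'

switches = requests.get(f'{CONTROLLER}/stats/switches').json()
flows = requests.get(f'{CONTROLLER}/stats/flow/{switches[0]}').json()
print(flows)   # per-switch flow table, much as a CLI would show it
```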

Let the Application Be the Controller
Think of a gravity hydro-dam. When counties around the state request more water for irrigation, what happens if the dam's controller decides to honor every request in full? Should it open the water gate to its fullest without considering how much the distribution pipes can handle? Although most people would never think of doing this, it is exactly what happens in the software world today.

When an application receives incoming requests, it assumes the network has unlimited capacity and light-speed connectivity to the requester. The application starts creating packets, spending CPU, memory and disk resources. Only later does a network optimization or QoS device discover that the links are overused and start dropping packets to tell the application to slow down. The resources already consumed are not only wasted; the packets in flight add even more congestion to the network. Instead of ancient smoke-signaling approaches like packet drops to inform applications about network congestion, SDN vendors should build robust northbound APIs that give applications real visibility into the network. That would be a paradigm shift in the way applications are developed, and it would address the problem at its source. The promise relies on the simplicity and standardization of the northbound APIs.
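
What that inversion could look like from the application side is sketched below. Everything here, the endpoint, the fields and the helper names, is a hypothetical northbound contract, not an existing API:

```python
# Hypothetical application-side use of a northbound API: ask the
# network for headroom BEFORE spending CPU, memory and disk on a
# response that would only be dropped. Endpoint and fields are an
# assumed contract, not an existing API.
import requests

NB_API = 'http://sdn-controller.example:8080/v1'

def can_serve(client_ip, needed_mbps):
    r = requests.get(f'{NB_API}/path-capacity', params={'dst': client_ip})
    return r.json().get('available_mbps', 0) >= needed_mbps

def handle_request(client_ip):
    if not can_serve(client_ip, needed_mbps=50):
        return 'retry-later'   # back-pressure at the source, no waste
    return 'serve'             # commit resources knowingly
```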

Although northbound APIs are not yet well defined and are left for vendors to implement with their own sets of rules, the power to make SDN succeed lies in the northbound APIs. That is the real disruption in the industry, not the separation of the data and control planes.

Northbound APIs for Policy Plane
Just as the controller exposes a northbound API to the infrastructure below it, the need for northbound APIs at the policy plane is also growing. Policies change all the time to align with business goals, and they drive the infrastructure both directly and indirectly. The policy plane should likewise expose APIs that let applications consume priorities and service level agreements (SLAs), mirroring the relationship that exists today between the forwarding plane and the control plane on the networking side.

Northbound APIs should allow an application to query the system, network and server infrastructure so the network can be optimized globally. The application should also be able to interact with the policy layer to obtain priorities and SLAs before committing any resources. This will maximize the end user's investment in both applications and networking infrastructure; instead of shifting problems onto each other, the two will truly begin to collaborate and complement one another.
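
Continuing the hypothetical contract above, an application would consult the policy plane for its SLA before committing resources; the endpoint and field names are again assumptions:

```python
# Hypothetical policy-plane query: fetch the tenant's SLA before
# committing resources. Endpoint and field names are assumptions.
import requests

POLICY_API = 'http://policy-plane.example:8443/v1'

def admission_decision(tenant, needed_mbps):
    sla = requests.get(f'{POLICY_API}/sla/{tenant}').json()
    # Commit only if the request fits within the contracted rate.
    return needed_mbps <= sla.get('committed_mbps', 0)

if admission_decision('tenant-42', needed_mbps=50):
    print('reserve resources and serve')
else:
    print('defer or degrade gracefully')
```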

Real customers and end users want practical and usable solutions, not definitions. We should think beyond defining the jargon and start creating focused and usable solutions.

References

  1. http://www.cs.cmu.edu/~srini/15-744/F02/readings/McK97.html#3needswitch
  2. https://www.opennetworking.org/about/onf-overview

More Stories By Karthikeyan Subramaniam

Karthikeyan Subramaniam serves as the company's Chief Software Architect and the architect of its Software Defined Networking platform. He led the development of the company's SDN and cloud computing platform work for Verizon, alongside Hewlett Packard and Intel Corp, an industry-leading SDN platform unveiled at the Open Networking Summit, the world's largest SDN summit. He has created and developed the company's platforms in Software Defined Networking and interoperability. He previously worked at Intel Technology India on Intel Server Systems and Intelligent Platform Management, and at Cisco's Offshore Development Center in the Enterprise Management Business Unit (EMBU) on Cisco's voice systems, voice gateways and gatekeepers.
