SDN: Capability or Context?

Does software define software-defined?

Why is it that the definition of SDN continues to get debated?

I think the definition of SDN remains a bit squishy. And while I am not entirely certain that it matters (people shouldn’t be buying SDN; they should be building networks), it is an interesting phenomenon, and understanding it better could help with the education process.

When most people talk about what SDN is, they tend to fall into two camps: principles and protocols. You will frequently hear SDN described as the separation of control and forwarding planes. You probably hear people talking about SDN needing to be “open” (a horribly imprecise term as I have argued before). These are the people who fall on the principles side. They point less to specific instantiations of technologies and more to the guidelines that define SDN.

The other camp will point to specific protocols and technologies. They rally around the OpenFlow banner for sure, but they might include other technologies like BGP-TE, PCE, ALTO, and I2RS. They see SDN as an architecture with specific building blocks, and the presence of those building blocks determines the SDN-ness of a solution.

I actually don’t think that either of these positions is correct.

I was debating last week whether GMPLS was SDN. It certainly focuses on the separation of the control and forwarding planes. It is an open standard. It is absolutely implemented in software. It seems to hit most of the framework criteria for inclusion in the SDN camp. The conclusion of whether GMPLS is SDN or not is less interesting than the discussion that surrounded it.

Does software define software-defined? Claiming something is software-defined because it is implemented in software is probably among the lamest definitional requirements around. The reality is that the vast majority of traditional networking features are all implemented in software. In fact, the major vendors spend north of 80% of their R&D on software-related efforts. By this definition, everything is software-defined.

The real distinction people seem to be trying to make when they talk about software implementations is whether the functionality is resident on a networking device, or whether it sits somewhere on top of the network (as with a controller). But we should be clear about this. Whether some application runs on or off the box is a packaging detail, not some core attribute. Networking devices all have some forwarding ASIC and a general processor. Whether you write something to run natively within the sheet metal or on some server somewhere is irrelevant. Put differently, if your vendor of choice decided to ship their boxes with the central processing card physically separated (it sits a half meter on top, with separate sheet metal, power, and cooling), would you suddenly brand the solution software-defined?

[Special callout to Mike Dvorkin (@dvorkinista) who frequently makes this argument on social channels.]

Is the separation of control and forwarding the meaningful determinant? Network device behavior is all state-driven. Whether that state is determined by persistent configuration or learned through some protocol is secondary. Put more simply: does it matter how state gets onto the device? If you set the state via an on-box CLI or via a controller, does that make the solution any more or less SDN?
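
To make the question concrete, here is a minimal sketch (in Python, with invented names rather than any real vendor API) of the same forwarding state arriving on a device in two different ways:

```python
# Minimal sketch: one piece of desired state, two delivery mechanisms.
# All names here are illustrative, not a real vendor API.

desired_state = {"prefix": "10.0.0.0/24", "next_hop": "192.168.1.1"}

def via_cli(state):
    """Render the state as an on-box CLI directive."""
    return f"ip route {state['prefix']} {state['next_hop']}"

def via_controller(state):
    """Render the same state as a payload a controller might push."""
    return {"op": "add-route", **state}

# Either way, the device ends up holding identical forwarding state;
# only the mechanism that delivered it differs.
print(via_cli(desired_state))
print(via_controller(desired_state))
```

The device cannot tell the difference, which is exactly the point.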

When most people talk about control and forwarding, they are really having a discussion about management planes. Controller-based solutions certainly separate the management plane. But so do policy servers, OSS/BSS solutions, and even well-written Perl scripts that pull information from a version control system as part of device management.
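
The scripts just described are easy to picture. As a rough illustration (in Python rather than Perl, with a hypothetical repository layout and a stubbed-out push step), such a management-plane tool might amount to this:

```python
# Rough illustration of a management-plane script: pull a versioned
# device config from git and push it toward the device. The repository
# path, file layout, and push transport are all hypothetical.
import subprocess

def fetch_config(repo, device):
    """Read the committed configuration for a device from a git repo."""
    result = subprocess.run(
        ["git", "-C", repo, "show", f"HEAD:configs/{device}.cfg"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def push_config(device, config):
    """Stand-in for whatever transport actually delivers the config."""
    print(f"pushing {len(config)} bytes of config to {device}")

for device in ("core1", "core2"):
    push_config(device, fetch_config("/srv/network-configs", device))
```

Nobody would call this SDN, yet it separates the management plane from the devices just as surely as a controller does.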

My point here is not to say that separation is not important, but rather that it is likely not enough by itself to determine the SDN-ness of a particular solution.

Does Open make something SDN? No one will say that merely being open (for whatever definition of open you mean) is enough to make something SDN. The real question is whether something can be SDN and not be open. The answer here gets pretty religious, but that is largely dependent on how people have defined SDN. Can you build a software-implemented, controller-based solution that uses proprietary protocols? Absolutely. If that solution is deployed for 8 years and then the IETF ratifies a standard for the base protocol, has your deployed solution gone from non-SDN to SDN despite the lack of solution changes?

So where did all of this conversation land?

It’s not that I think there are not important principles to be considered before labeling something SDN. I just think that it is less about technology and more about context. It is absolutely conceivable to me that a particular technology can exist in both SDN and non-SDN architectures. How a protocol is used determines whether it is SDN or not. The examples are virtually endless, but I would start with things like BGP, XMPP, NETCONF, YANG, and yes, even GMPLS. Similarly, I think there are controller-based solutions that are non-SDN, just as there might be non-controller-based solutions that could be SDN.

This means the conversation needs to move away from the technological building blocks and toward the contexts that matter. I'll offer up three here:

  • Delegation – OSS/BSS systems have already addressed the management problems inherent in networks built from different devices delivered by different vendors. If SDN were merely about pushing configuration down to however many devices there are, master translators of that sort would be solution enough. It seems to me that SDN is instead about removing the complexity of managing individual elements, and that can only happen through delegation. Central controllers are great, but only if they can pass requirements to individual elements rather than having to manage each of them in detail. The analogy I like here is the modern corporation: imagine how effective your company would be if your CEO had to tell every individual what to do. Delegation matters.
  • Abstraction – And delegation depends on abstraction. If the goal of SDN is to make workflows more manageable and networks better (more easily managed, more responsive to applications, more intelligent, more whatever), then we need to abstract out some of the complexity. We need to work less in device-specific directives (read: configuration knobs) and more in overarching intent. The only way the different parts of the IT infrastructure can ever collaborate is through a common language, and that will require abstraction. Expecting compute, storage, or applications to speak in terms of VLANs and ACLs is no more practical than turning network admins into storage or compute junkies. (A toy sketch of this intent-to-config translation follows the list.)
  • Globality – Centralizing control is not about where software runs; it is about what that software can do. The whole premise of controller-based solutions is that having a global view of the available resources allows more intelligent decisions to be made. If your network behaves exactly the same way with or without OpenFlow (meaning all traffic effectively uses the same paths), then does it even matter whether you call it SDN? We need to be in the business of doing things better, not just differently. And that requires globality.
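
To make the abstraction and delegation points above a bit more tangible, here is a toy sketch. The intent schema, group names, and both vendor syntaxes are invented for illustration and reflect no real product:

```python
# Toy sketch: one high-level intent compiled into device-specific
# directives for two invented vendor dialects. It only illustrates
# the idea of intent-to-config translation.

INTENT = {"allow": [("web", "db", 3306)], "deny_all_else": True}
GROUPS = {"web": ["10.1.0.0/24"], "db": ["10.2.0.0/24"]}

def compile_intent(vendor, intent):
    """Translate abstract intent into the knobs one family of boxes speaks."""
    lines = []
    for src, dst, port in intent["allow"]:
        for s in GROUPS[src]:
            for d in GROUPS[dst]:
                if vendor == "vendor-a":
                    lines.append(f"permit tcp {s} {d} eq {port}")
                else:
                    lines.append(f"set firewall allow src {s} dst {d} port {port}")
    if intent.get("deny_all_else"):
        lines.append("deny ip any any" if vendor == "vendor-a"
                     else "set firewall default deny")
    return lines

# A controller holds the intent once; each element receives only the
# directives it needs, in the dialect it speaks.
for vendor in ("vendor-a", "vendor-b"):
    print(vendor, compile_intent(vendor, INTENT))
```

The intent is stated once, in network-wide terms; the device-specific knobs are derived rather than hand-managed.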

These might not be the only (or even right) contexts to think about, but they at least start to frame the discussion differently. I think it is entirely possible to build open, controller-based systems that fail to deliver against any of the promises of SDN, just as it is possible to use existing technologies in new ways. Ultimately, it is the context – not the capability – that determines whether the promises of SDN are achievable.

[Today's fun fact: A car that shifts manually gets 2 miles more per gallon of gas than a car with automatic shift. Of course all that extra work requires more sustenance, so it's about a wash environmentally.]

More Stories By Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills, having spent 12 years at Juniper Networks where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
