SDN Journal: Blog Feed Post

The SDN Chicken and the Standardization Egg

Which came first?

Because SDN is still very young and not widely adopted, pundits continue to put forth treatise upon treatise as to why organizations should be falling over themselves to go out and get some SDN. Now. Not tomorrow, today.

One of the compelling reasons to adopt is to address a variety of operational issues that arise from complexity and differentiation in network elements. The way SDN addresses this - as do a variety of other options - is standardization. Note this is standardization with a little "s", not the capital "S" that might imply industry-agreed-upon-and-given-an-RFC standardization.

The claim is simple: organizations have a whole lot of standalone appliances in their network serving a variety of purposes. Load balancing, firewalls, IDS/IPS, application acceleration, and even identity and access control. All these individual appliances are (generally) from different vendors, each requiring its own management console, its own policy format, its own... everything.
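To make that concrete, here is a toy sketch (vendor names and field names are invented for illustration) of the same "allow TCP/443" rule expressed in three different appliance policy dialects, plus the per-vendor adapters an operations team ends up writing and maintaining by hand:

```python
# Hypothetical example: three fictional firewall vendors, three dialects for
# one and the same rule. None of these formats belongs to a real product.
vendor_a = {"action": "permit", "proto": "tcp", "dport": "443"}
vendor_b = {"rule": "allow", "protocol": "TCP", "port": 443}
vendor_c = {"decision": "ACCEPT", "l4": "tcp", "service_port": "443"}

def from_vendor_a(p):
    # Adapter: vendor A's dialect -> common schema
    return {"action": "allow" if p["action"] == "permit" else "deny",
            "protocol": p["proto"].lower(), "port": int(p["dport"])}

def from_vendor_b(p):
    # Adapter: vendor B's dialect -> common schema
    return {"action": p["rule"].lower(),
            "protocol": p["protocol"].lower(), "port": int(p["port"])}

def from_vendor_c(p):
    # Adapter: vendor C's dialect -> common schema
    return {"action": "allow" if p["decision"] == "ACCEPT" else "deny",
            "protocol": p["l4"].lower(), "port": int(p["service_port"])}

# Standardization with a little "s": one common schema, however it is reached.
common = from_vendor_a(vendor_a)
assert common == from_vendor_b(vendor_b) == from_vendor_c(vendor_c)
print(common)  # {'action': 'allow', 'protocol': 'tcp', 'port': 443}
```

Every adapter here is pure overhead; a common policy format would make all three of them unnecessary.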

The argument goes that if you take all those services and deploy them as services on a (common) network operating system, you'll reduce complexity and all the associated management (operational) overhead. Rainbows will sprout from the network and unicorns will dance merrily from node to node while sprinkling happy dust on your engineers and operators.

Okay, maybe the rainbows and unicorns aren't part of the promise, but the nirvana implied by this consolidation on a standardized (common) network operating system is.

Now, I'm not saying this isn't true. It could be*. You can get standardization (theoretically, at least) from SDN at the upper layers of the stack, but it isn't SDN itself that elevates the data center to a level closer to operational nirvana, it's the standardization on a common platform. SDN is (theoretically) just one path to achieve that standardization.
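What "services on a common platform" means operationally can be sketched in a few lines (the class and method names here are hypothetical, not any vendor's API): one management surface instead of N consoles.

```python
class NetworkOS:
    """Hypothetical common platform: every L4-7 service registers here,
    so there is one console, one policy store, one audit trail."""
    def __init__(self):
        self.services = {}

    def deploy(self, name, policy):
        # One deployment interface for every service type.
        self.services[name] = policy

    def audit(self):
        # One call surveys every service -- versus logging into N vendor consoles.
        return sorted(self.services)

nos = NetworkOS()
nos.deploy("load-balancer", {"algorithm": "least-connections"})
nos.deploy("firewall", {"default": "deny"})
nos.deploy("ids", {"mode": "inline"})
print(nos.audit())  # ['firewall', 'ids', 'load-balancer']
```

The benefit in this sketch comes entirely from the shared platform, not from anything SDN-specific, which is exactly the point.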

The Layer 4-7 Landscape

The reason achieving this is more difficult than it sounds is that there are far more functions at layer 4-7 that need to be "standardized". Organizations do, in fact, end up with a hodge-podge of appliances deployed to support all the various layer 4-7 services required. Let me illustrate with a diagram (itself incomplete; there are a whole lot of other services that fall into the layer 4-7 category):

[Diagram: the layer 4-7 services landscape]

Are organizations struggling with the proliferation of devices at layer 4-7? Absolutely. I've heard stories of folks with 20 or more different vendors operating at layer 4-7 in their network. Not kidding. And every day, every new challenge brings another new solution to the table. Mobility and cloud - not to mention security - are driving the identification of new problems and challenges which in turn drives the proliferation of new solutions that often (though not always) come in the form of a nice, neat box to deploy in the network.

The way in which this proliferation (and its associated overhead) can be effectively addressed is through standardization. That doesn't require SDN, though SDN may be one way in which such standardization is achieved. Standardization - or, more apropos, the consolidation of application (that's layer 4-7) network services onto a common platform - is what brings that benefit.

* It could be if we ever figure out how to deploy said services in such an architecture without forcing the controller to be part of the data path.
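The distinction in that footnote can be sketched simply (all names here are illustrative, not OpenFlow or any controller's actual API): the controller installs forwarding rules ahead of time, and the switch then forwards every packet by local table lookup, so no packet ever transits the controller.

```python
# Hypothetical sketch of keeping the controller out of the data path.
flow_table = {}  # dst_port -> action, populated by the controller

def controller_install(dst_port, action):
    """Control plane: runs once per policy change, not once per packet."""
    flow_table[dst_port] = action

def switch_forward(packet):
    """Data path: a pure local lookup; the controller is not consulted
    unless the table misses."""
    return flow_table.get(packet["dst_port"], "punt-to-controller")

controller_install(443, "forward:lb-pool")
controller_install(22, "drop")
print(switch_forward({"dst_port": 443}))   # forward:lb-pool
print(switch_forward({"dst_port": 22}))    # drop
print(switch_forward({"dst_port": 8080}))  # punt-to-controller (table miss)
```

The hard part for layer 4-7 services is that many of them (load balancing, application acceleration) must inspect and transform traffic inline, which is precisely what makes keeping the controller out of the data path non-trivial.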

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
