OpenFlow Evolution: Standardized Packet Processing Abstraction Is Hard

OpenFlow is a standardized abstraction of the switch capabilities

With the Open Networking Summit 2014 about to start in Santa Clara next week, I realized I had not done much OpenFlow reading recently. It is no secret that Plexxi does not use OpenFlow as the API between our switches and controller, due to restrictions in what OpenFlow can do (or in some cases could do when we needed to make architecture and design choices). When I saw the ONS announcement, I thought it an opportune moment to sync myself with the latest and greatest in the OpenFlow world.

What started out as a mechanism to program flows into network switches and routers in a standard way is evolving into a full-blown forwarding engine programming and management specification. The latest version of the spec (1.4, released in October 2013) adds the ability to configure properties of optical ports, apply semi-atomic changes across multiple switches, receive table-full notifications, and several more items that step away from basic programming of flow-based forwarding behavior on a network element. In addition, OF-Config exists specifically for configuration and management. Looking at this paper written by some of the OpenFlow principals, the proposed framework for OpenFlow 2.0 would take that to the next level as a generic, abstracted mechanism to program protocol-independent packet processing engines. That same paper lists some of the challenges with the current OpenFlow path, which is what led the authors to put the proposal forward.
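
To make the 1.4-era additions a bit more concrete, here is a minimal sketch, assuming the Ryu controller framework and a switch that actually supports OpenFlow 1.4 bundles, of opening a bundle, staging a flow-mod in it, and committing it atomically on one switch. The bundle id and the staged message are arbitrary, and coordinating such commits across multiple switches is still left to the controller application.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_4


class BundleExample(app_manager.RyuApp):
    # Speak OpenFlow 1.4, the version that introduced bundles.
    OFP_VERSIONS = [ofproto_v1_4.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        bundle_id = 1  # arbitrary identifier for this bundle

        # Open a bundle that must be applied atomically.
        dp.send_msg(parser.OFPBundleCtrlMsg(dp, bundle_id,
                                            ofp.OFPBCT_OPEN_REQUEST,
                                            [ofp.OFPBF_ATOMIC], []))

        # Stage a flow-mod (here: a catch-all, no-action table-miss rule).
        flow_mod = parser.OFPFlowMod(dp, priority=0,
                                     match=parser.OFPMatch(),
                                     instructions=[])
        dp.send_msg(parser.OFPBundleAddMsg(dp, bundle_id,
                                           [ofp.OFPBF_ATOMIC], flow_mod, []))

        # Commit: the switch applies everything in the bundle, or nothing.
        dp.send_msg(parser.OFPBundleCtrlMsg(dp, bundle_id,
                                            ofp.OFPBCT_COMMIT_REQUEST,
                                            [ofp.OFPBF_ATOMIC], []))
```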

There is a lot of value in creating a single standard way to make packet switches do what they need to do. From SNMP to “industry standard” CLI, possibly wrapped in NETCONF, and everything in between, mechanisms to (remotely) configure a switch in a semi-standard way have been around for as long as we have had switches. Anyone who has operated a network will tell you they would love a single standard way to manage their network. But this is nothing new. That desire has been around forever. And the technology to do so is also not new; the functionality in the chipsets has been there in one form or another. So you have to ask the question: can it be successful this time around?
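
As a concrete illustration of that semi-standard configuration path, here is a small sketch using the Python ncclient NETCONF library. The host, credentials, and especially the XML payload are placeholders; the namespaces and element names a given switch actually accepts depend entirely on its vendor or standard data models, which is precisely the "semi" in semi-standard.

```python
from ncclient import manager

# Connect to a NETCONF-capable switch (host and credentials are placeholders).
with manager.connect(host="192.0.2.10", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    # Retrieve the running configuration as XML.
    running = m.get_config(source="running")
    print(running)

    # Push a change. The payload below is illustrative only; the namespace
    # and element names depend on the vendor's (or a standard) YANG model.
    vlan_config = """
    <config>
      <vlans xmlns="urn:example:vendor:vlan">
        <vlan><id>100</id><name>servers</name></vlan>
      </vlans>
    </config>
    """
    m.edit_config(target="running", config=vlan_config)
```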

Inside each Ethernet switch is a packet processor (or a piece of software in a vSwitch that pretends to be a packet processor). This packet processor comes with an abstraction layer that hides the register-by-register configuration of the chip's functionality. The abstraction layer provides functions like enabling a port, changing the port speed, creating a VLAN, adding a MAC address to a forwarding table, configuring ACL-like rules, etc. The amount of functionality provided in these chips is astounding and grows with each technology iteration. Thousands of pages of functional descriptions of the abstraction layers accompany each chip, and even with this abstraction layer, configuring the more advanced versions of these chips to do things beyond basic switching is not at all trivial. Yes, the abstraction layers most certainly could use some work.
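
To give a feel for what such an abstraction layer looks like from the software side, here is a purely hypothetical Python facade. Real switch-chip SDKs are typically C libraries with thousands of calls, and every name below is invented for illustration.

```python
class SwitchChipSDK:
    """Hypothetical facade over a switch ASIC's register-level interface.

    A real SDK exposes far more: counters, buffer profiles, ECMP groups,
    tunnels, mirroring, QoS scheduling, and so on.
    """

    def enable_port(self, port, speed_gbps):
        # Would translate into a series of register writes on the MAC/PHY.
        raise NotImplementedError

    def create_vlan(self, vlan_id, member_ports):
        # Programs the VLAN membership and tagging tables.
        raise NotImplementedError

    def add_mac_entry(self, vlan_id, mac, port):
        # Inserts an entry into the L2 forwarding (FDB) table.
        raise NotImplementedError

    def add_acl_rule(self, priority, match, action):
        # Programs a TCAM entry, e.g. match={"ip_dst": "10.0.0.0/8"}, action="drop".
        raise NotImplementedError
```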

OpenFlow is a standardized abstraction of the switch capabilities. Starting with the ability to direct specific flows into the switch’s forwarding table, it now includes broader capabilities to control the switch’s behavior, just like the chip's own abstraction layer would. For a consumer of switch chipsets, a standardized abstraction is extremely beneficial. Today, switching from one chip vendor to another is a painful exercise. The functionality is slightly different, the abstraction layers are hugely different, and it takes time and effort to adjust from one chipset to another. If all chipsets had the same method of programming, I could choose whatever chipset I wanted from any vendor with little to no software effort.
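
For the basic case described here, steering a specific flow into the switch's forwarding table, a minimal controller-side sketch (again assuming Ryu, with arbitrary example field values) looks roughly like this:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FlowProgrammer(app_manager.RyuApp):
    # OpenFlow 1.3 here; this basic flow-mod works the same way under 1.4.
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_flow(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match IPv4 traffic to 10.0.0.2 arriving on port 1 ...
        match = parser.OFPMatch(in_port=1, eth_type=0x0800,
                                ipv4_dst="10.0.0.2")
        # ... and forward it out port 2.
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        dp.send_msg(parser.OFPFlowMod(dp, priority=100,
                                      match=match, instructions=inst))
```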

The aforementioned paper points out some of the really tough challenges that come with a standard abstraction layer. The abstraction models each of the many protocols the switch supports, with all the fields you would like to control, and as such becomes unwieldy very quickly. A second and important limitation is the fact that switches are different. Different vendors implement functionality differently, provide different capabilities, and there is no easy mechanism to express the differences in these key functions. And these are often exactly the functions that make switch buyers select one vendor over another. Creating a standardized superset of the abstraction layers of all hardware and software switches is very hard. From Plexxi’s perspective, one of the reasons we do not use OpenFlow is our need to abstract application workflows and topologies rather than individual or aggregate flows.

Standards are needed, but standards have a tendency to lag behind. Especially in networking, many of the best solutions, the ones that create differentiation, start with standards but then pile on private extensions to make things faster, more scalable, easier. Almost all vendor implementations of TRILL, SPB, PIM, even IS-IS, OSPF and BGP have proprietary extensions for that purpose. OpenFlow touches perhaps a few percent of a modern chipset’s capabilities. Will OpenFlow ever cover enough of a switch’s capabilities, and keep up with their evolution, to gain enough critical mass that it does not become yet another abstraction layer on top of the ones already there? We completely believe in the underlying movement toward standardization and openness, but OpenFlow may well be too large and complex to provide us at Plexxi, and others in the industry, with all the capabilities we need.

The post OpenFlow Evolution: Standardized Packet Processing Abstraction is Hard appeared first on Plexxi.

More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
