Attention: Overlay Tunnel Construction Ahead

It's good to see folks from VMware and Microsoft trying to come together to create a single tunnel mechanism

A while ago I wrote a few articles describing the various tunnel protocols used for network virtualization between vSwitches on servers, and between vSwitches and physical network gateways. These are the mechanisms that construct overlay networks on top of a physical network. VMware uses STT as the tunneling mechanism between vSwitches on servers and VXLAN to communicate with gateways to the non-virtualized world. NVGRE is used mostly by Microsoft, and is an extension to GRE tunneling, which has been around for a while.

Each of these mechanisms has its pros and cons. They are all pretty much standard, or at least published by a standards organization, and multiple implementations exist of most of them. Outside of my complaint about the stream-like nature of STT, the biggest problem with all of them is that they are fixed in their definition. The header definition is fixed, the fields are fixed, the sizes of the headers are fixed.

The 24 bits used for a Virtual Network Identifier, providing for 16 million different virtual networks, may seem like an amount we can live with for a long time to come. But creative minds will find ways to use those 24 bits to signal all sorts of meta-information between the tunnel endpoints, or to intermediate switches and other services forwarding these tunneled packets. Additional summarized information about the content of a packet is extremely useful for any device or service that makes intelligent decisions about that packet.

And once you start down that thought process, those 24 bits will disappear really quickly. And when 24 bits are no longer enough, the protocols and their implementations need to be updated, which is extremely painful, especially if portions of those protocols are handled in hardware.
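
To put the arithmetic in perspective, here is a small Python sketch. The "traffic class" and "security flag" carve-outs are purely hypothetical, just to illustrate how quickly metadata bits eat into the identifier space; they are not part of any of these protocols.

```python
# Illustrative arithmetic only; the metadata fields below are hypothetical,
# not defined by VXLAN, NVGRE, STT or GENEVE.
VNI_BITS = 24
print(2 ** VNI_BITS)                        # 16777216 virtual networks

TRAFFIC_CLASS_BITS = 4                      # hypothetical carve-out
SECURITY_FLAG_BITS = 2                      # hypothetical carve-out
remaining = VNI_BITS - TRAFFIC_CLASS_BITS - SECURITY_FLAG_BITS
print(2 ** remaining)                       # only 262144 tenant networks left
```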

Enter GENEVE, an Internet Draft published on Valentine’s Day this year, which looks to take a more holistic view of tunneling. GENEVE takes its cue from many other protocols that have shown themselves to have a long life. Protocols like BGP, LLDP, IS-IS and many others have been around for multiple decades and are still as popular as they have ever been. And the reason is simple: they are extensible. They evolve over time with new capabilities, not by revising the base protocols, but by adding new optional capabilities.

All of these protocols have a set of fixed headers, parameters and values, but then leave room for non-defined optional fields. New fields can be added to the protocol by simply defining and publishing them. The protocol is created in such a way that implementations know there may be optional fields that they may or may not understand.

Think of my favorite, BGP. When its fourth version was created in the mid-1990s, it had no ability to carry IPv6 (which wasn’t even called IPv6 at the time). It had no ability to carry multicast routes, Communities or ORFs, or to act as a Route Reflector. BGP has had many optional attributes added over the years, and as a result it is the most powerful routing protocol in existence. It is the result of a solid protocol design practice: we don’t know everything we may want to use this for at the moment we create it, so we design in the ability to simply add new capabilities.

GENEVE Header

Like VXLAN, GENEVE runs on top of UDP. It adds its own header, of which only 8 bytes are fixed, containing that same 24-bit Virtual Network Identifier and a 16-bit Protocol Type as the main fields. After that, the initial definition is remarkably empty: everything else is left open as options that follow a specific format if and when they get defined.
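
As a rough sketch of what that fixed header looks like on the wire, here is how it might be packed and parsed in Python, following the field layout described in the draft: a 2-bit version, a 6-bit option length in 4-byte words, OAM and critical-options flags, the 16-bit Protocol Type, the 24-bit VNI and a reserved byte. The default protocol type of 0x6558 (transparent Ethernet bridging) and the function names are assumptions for illustration, not taken from a real implementation.

```python
import struct

def build_geneve_header(vni: int, protocol_type: int = 0x6558,
                        opt_len_words: int = 0, oam: bool = False,
                        critical: bool = False) -> bytes:
    """Pack the 8-byte fixed GENEVE header (a sketch of the draft layout)."""
    ver_optlen = (0 << 6) | (opt_len_words & 0x3F)   # version 0, option length
    flags = (0x80 if oam else 0) | (0x40 if critical else 0)
    vni_field = (vni & 0xFFFFFF) << 8                # VNI in top 24 bits, reserved byte below
    return struct.pack("!BBHI", ver_optlen, flags, protocol_type, vni_field)

def parse_geneve_header(data: bytes) -> dict:
    """Unpack the fixed header; options (if any) start at byte 8."""
    ver_optlen, flags, proto, vni_field = struct.unpack("!BBHI", data[:8])
    return {
        "version": ver_optlen >> 6,
        "opt_len_bytes": (ver_optlen & 0x3F) * 4,
        "oam": bool(flags & 0x80),
        "critical_options": bool(flags & 0x40),
        "protocol_type": proto,
        "vni": vni_field >> 8,
    }

header = build_geneve_header(vni=5001)
print(parse_geneve_header(header))   # {'version': 0, ..., 'vni': 5001}
```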

And that is the beauty of an extensible protocol: only those implementations that care about a specific option need to implement and act upon it. Everyone else can quietly accept and ignore these options. Backward compatibility by design.
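
A minimal sketch of what that looks like for a receiver, assuming the option format described in the draft (a 16-bit option class, an 8-bit type whose high-order bit marks a critical option, and a 5-bit length in 4-byte words). The handler dictionary is a hypothetical dispatch mechanism for illustration only.

```python
import struct

def walk_geneve_options(opt_bytes: bytes, handlers: dict) -> None:
    """Walk the variable-length options that follow the fixed header.

    Recognised options are handed to a callback; unknown non-critical
    options are quietly skipped, which is exactly the backward
    compatibility described above. Unknown critical options may not be
    ignored, so they are flagged instead.
    """
    offset = 0
    while offset + 4 <= len(opt_bytes):
        opt_class, opt_type, len_byte = struct.unpack_from("!HBB", opt_bytes, offset)
        body_len = (len_byte & 0x1F) * 4               # length in 4-byte words
        body = opt_bytes[offset + 4: offset + 4 + body_len]
        handler = handlers.get((opt_class, opt_type))
        if handler:
            handler(body)                              # an option we understand
        elif opt_type & 0x80:                          # critical bit set
            raise ValueError("unknown critical option, packet must be dropped")
        # otherwise: quietly ignore the unknown option
        offset += 4 + body_len
```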

Along the way, a few more recommended practices are articulated, including one I complained about in my description of STT. While the language could be a bit stronger, it is recommended that each GENEVE-encapsulated packet include the entire header, which means I can actually reconstruct fragmented packets if I need to during debugging: each packet on the network carries the entire original packet and the added GENEVE encapsulation.

Whether or not you like overlays, tunnels and everything that comes along with them, it is good to see folks from VMware and Microsoft trying to come together to create a single tunnel mechanism that is sensible and extensible. It seems to have all the right bits and pieces for a long life, and it certainly provides mechanisms that allow for much better orchestration and cooperation between the overlay and the physical network.

Of course it will take a while before it becomes a complete definition, with real implementations and hardware support. But when it does, we will have a better toolkit as a result.

[Today's fun fact: The city of Geneva (which in French is of course spelled Genève) has the shortest commute time of any major city in the world. How appropriate.]


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
