Network abstractions need equivalent of packet walkthrough

Whenever a new networking platform is evaluated, one of the early sales calls includes a packet walkthrough. In excruciating detail, someone walks the customer through the path a packet takes from ingress port, through the device, across the switching or routing ASIC, and back down to the egress port. The technical deep dive frequently includes internals that not even all of the vendor's own engineers are familiar with.

But why?

Some people will justify the depth by talking about troubleshooting complex systems. Others will pull on random technical details that suggest one platform is better than another in some regard or under some set of circumstances. Still others will parrot some of the vendor's marketing efforts with claims of flexibility, scalability, or extensibility.

While all of these are absolutely valid, they actually miss the biggest reason the packet walkthrough is a ubiquitous part of every selling motion.

Get comfortable

We networking gearheads are a skeptical lot. We learned long ago that listening to someone and taking their word at face value was a short path to operational hell. Their words might have sounded true, but their promises rang hollow. The platform, or even the architecture, did not perform as advertised. And because the result of a network failure is catastrophically worse than any other infrastructure failure, we have collectively vowed to look at every opportunity with a sideways glance from a somewhat disbelieving perspective.

Trust but verify

The real reason we evaluate new platforms and solutions in such detail is not the inherent troubleshooting value of examining the architecture. Nor is it that we can determine with any certainty what the scaling limits are from a cursory glance at the internals of a system. We examine architectures in detail because it allows us to put the vendor under a bit of scrutiny. If they stand up to a few somewhat randomly placed questions (less random if you have had particularly painful issues in the past), then we believe the other claims that are made with a bit more certainty.

I don't mention this because I think this is a bad way to do things, mind you. Rather, I bring this up because the collective psyche of the networking buyer needs to be understood if architectural advances like SDN and abstractions are to bring any real value.

Control freaks and abstraction

Networking has generally operated through meticulous control for decades. Network management via configuration knob puts a ton of power in the hands of the network architect. Behavior can be precisely specified. And when something goes wrong, the network can be queried to surmise the cause.

A shift to abstractions might make things easier in terms of actual physical workload (how much typing there is), but it comes with a gigantic leap of faith. Control freaks might complain about how much effort things take, but they absolutely cringe at the thought of giving any of that work up lest something go wrong.

When behavior is specified by an abstraction (as with an edge policy abstraction), not only must the syntax be correct but so must the translation of that abstraction into underlying behavior. The former is easy to verify, but the latter requires a bit of faith on the part of the user that the vendor has done the right thing under the hood.
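
To make that distinction concrete, here is a minimal sketch in Python. The policy fields, the validator, and the intent itself are all hypothetical; the point is only that checking the syntax of an abstraction is mechanical, while the translation into fabric behavior happens out of sight.

```python
# A minimal sketch, assuming a hypothetical edge-policy "intent" expressed as a
# plain dict. Checking the syntax of the abstraction is mechanical; what the
# controller actually programs onto the wire is not visible from here.

REQUIRED_FIELDS = {"app", "tier", "allow"}

def validate_policy(policy):
    """Return a list of syntax errors; an empty list means the intent parses."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS - policy.keys()]
    for rule in policy.get("allow", []):
        if "port" in rule and not (0 < rule["port"] < 65536):
            errors.append(f"invalid port: {rule['port']}")
    return errors

intent = {
    "app": "billing",
    "tier": "web",
    "allow": [{"from": "lb", "port": 443}],
}

print(validate_policy(intent))  # [] -- the syntax checks out...
# ...but whether the fabric actually enforces this policy as promised is the
# part the user has to take on faith (or probe with a walkthrough).
```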

A peek under the hood

There are already a number of industry efforts around SDN and abstractions, whether vendor-specific (as with Cisco's ACI) or open source (OpenDaylight, for example), that either focus on or include some abstraction as part of the solution. But if our past teaches us anything, it is that network architects are not happy with a basic understanding of what the abstractions do. They require additional information so they have at least some concept of how they do it.

It would seem that people peddling abstractions will ultimately need to provide the equivalent of a packet walkthrough. With platforms, this is easy. Where does the packet physically enter the device, and where does it leave? But with abstractions, the equivalent is a bit harder.

Abstraction walkthroughs

Initially, this dynamic favors abstractions that merely replace well-understood configuration with something less verbose. The abstraction walkthrough for a replacement is essentially an expansion of the abstraction into the underlying configuration knobs, as in the sketch below. Think of this as more indirection than abstraction, more similar to header files than anything else.
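
A minimal sketch of that expansion, using the same hypothetical intent as above. The VLAN number, ACL syntax, and intent fields are illustrative rather than any particular vendor's; the walkthrough for this style of abstraction is simply a dump of the expanded knobs.

```python
# A minimal sketch of "indirection" style abstraction: the intent expands
# directly into familiar configuration lines. Device syntax and VLAN number
# are illustrative only.

def expand(intent, vlan=100):
    """Expand a hypothetical edge-policy intent into familiar config lines."""
    lines = [f"vlan {vlan}", f" name {intent['app']}-{intent['tier']}"]
    for rule in intent["allow"]:
        lines.append(
            f"access-list {intent['app']}-in permit tcp "
            f"{rule['from']} any eq {rule['port']}"
        )
    return lines

intent = {"app": "billing", "tier": "web", "allow": [{"from": "lb", "port": 443}]}

for line in expand(intent):
    print(line)
# vlan 100
#  name billing-web
# access-list billing-in permit tcp lb any eq 443
```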

But if abstractions are about more than saving keystrokes, this type of walkthrough will not suffice for even slightly more complex scenarios. This leaves the abstraction salesperson in a tough spot: how do you demonstrate that something works if you cannot provide a meaningful look at the internals?

Behavior determines success

The long-term answer here is necessarily going to come down to actual behavior. The creators of abstractions will need to show in the affirmative that the network (or the applications) behave appropriately when an abstraction is used. This might seem obvious, but the implications are actually quite profound.

For networks today, there are lots of ways to verify specific state in the network (BGP neighbors, interface stats, and so on). And when there is no network state, the configuration itself serves as the check. But what if that configuration is not there?

In the long term, the infrastructure broadly (including but not limited to the network) will need to be instrumented with meaningful abstractions in mind. If abstractions become common for managing edge policy, there will need to be additional ways to instrument specific applications, tenants, and flows. For example, if abstractions allow network engineers to specify a particular application as PCI compliant, then there might need to be a way to verify PCI compliance with a command.
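
What that verification could look like, sketched in Python under the same hypothetical intent as above. The flow records and the allowed-peer list are invented stand-ins for what fabric telemetry or a controller API would really supply; the point is that the check runs against observed behavior, not against configuration.

```python
# A minimal sketch of checking an abstraction against observed behavior rather
# than against configuration. Flow records and allowed peers are hypothetical.

intent = {"app": "billing", "pci_compliant": True, "allowed_peers": {"lb", "db"}}

observed_flows = [
    {"app": "billing", "peer": "lb"},
    {"app": "billing", "peer": "db"},
    {"app": "billing", "peer": "dev-jumphost"},  # never authorized by the intent
]

def verify(intent, flows):
    """Report peers seen on the wire that the abstraction never authorized."""
    violations = []
    for flow in flows:
        if flow["app"] == intent["app"] and flow["peer"] not in intent["allowed_peers"]:
            violations.append(f"{intent['app']} exchanged traffic with {flow['peer']}")
    return violations

print(verify(intent, observed_flows))
# ['billing exchanged traffic with dev-jumphost']
```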

The bottom line

The abstraction market will initially be focused on reducing keyboard time. That is a fine place to start, and it is easy to verify. But if the real value of abstractions is in removing complexity (not just masking it) and in tighter collaboration across the infrastructure, then abstraction salespeople are going to need to think through the post-sales elements of their products. Those that do this early will certainly find that having an abstraction walkthrough shortens the evaluation time for new solutions. And if no one else has done this, the existence of such a walkthrough could prove a killer element of the product sales cycle.

[Today’s fun fact: Right-handed people tend to chew food on the right side of their mouths, and lefties on the left side.]


