
Networking lessons from high-performance car racing

If you take a high-performance racing class, one of the first things you will experience is a ride around the track in a vehicle that seems ill-equipped for racing. Some classes might take you around the course in an average car like a Kia, while others might be a bit more dramatic and get all the students into the back of an old van. The point of this first exercise is to show you that vehicles are far more capable than the average driver expects.

The punch line of this first lesson in racing is that if you think you are pushing your car to the limits when you approach 100 mph or take a turn a little hard, you don’t really understand what your car can do. And if you want to be a performance racer, you need to get a lot more comfortable as you get out closer to the extremes.

The point here is less about the car and more about human psychology. Without knowing what the limits are, most of us behave conservatively. The fear of calamity is real, and it is enough to keep us operating well within the perceived constraints of the system. Whenever we experience something a little outside the edges of what we are used to, we recoil a bit.

Comfort levels and networking

Now imagine this in the context of SDN. One of the core aspects of SDN is the presence of some sort of central control. Whether that is a completely centralized controller or a distributed application providing some form of system control, the fact that it is at least logically central means that it has a global view of the network as a resource.

This is useful because a global view of the resource allows the controller to do intelligent things with network workloads. For example, workloads can be balanced to optimize overall network performance. Or maybe the controller fans traffic out over more available paths, which could drive up fabric utilization.
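To make the idea concrete, here is a minimal sketch of what "intelligent things" a logically central controller can do with a global view. The function and link names are purely illustrative assumptions, not the API of any real SDN controller: given the current utilization of every link, the controller places a new workload on the candidate path whose busiest link is least utilized, fanning traffic out rather than piling it onto the shortest path.

```python
# Sketch of global-view path selection (illustrative names, not a real
# controller API): pick the candidate path with the least-utilized bottleneck.

def pick_path(paths, link_util):
    """Choose the path whose busiest (bottleneck) link is least utilized."""
    def bottleneck(path):
        return max(link_util[link] for link in path)
    return min(paths, key=bottleneck)

# Current utilization of each link in the fabric, as a fraction of capacity.
link_util = {"a-b": 0.80, "b-d": 0.75, "a-c": 0.30, "c-d": 0.40}

# Two candidate paths between the same pair of endpoints.
paths = [["a-b", "b-d"], ["a-c", "c-d"]]

print(pick_path(paths, link_util))  # the less congested path wins
```

A device-local view can only see its own counters; the min-max choice above is only possible because the controller sees every link at once, which is exactly what drives up overall fabric utilization.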

The challenge

This actually creates a challenge.

Architects and operators are used to running their networks within some constraints. Things like capacity planning take into account total resources. Processes are built around these limits. Even things like purchasing decisions consider the operating assumptions.

Within these constraints, networks are built. And then they are monitored. We look at things like queuing and buffering to get a feel for how traffic is moving across the network. We get accustomed to how counters ought to look. In essence, we familiarize ourselves with the operating parameters of our network.

But what if those limitations were not really the limitations?

If, for example, intelligent load balancing and more sophisticated workload management allowed you to get more out of your network than you were used to, would you feel comfortable extending the operating limits that confine you today?

Intellectually, the answer is likely yes, but there is an education process that has to happen here. Most people are consumers—not producers—of information. The reason best practices are so powerful is that they allow the majority of people to leverage the learnings of the nominally smaller set of people willing to experiment and figure things out.

And because networking is notoriously complex, the dependence on this information is even higher than in other disciplines. It actually keeps most of us from really knowing what our networks are capable of. Not unlike would-be performance drivers, we don’t fully understand what we can do with our network. We operate either well below the limits, or occasionally we do something reckless that ends in disaster.

Creating familiarity

Either case really stems from the same issue: unfamiliarity with where we ought to be.

Getting to familiarity requires a re-examination of how we think about monitoring. When you are driving a car, how do you know where the limit is? It’s not just feeling uncomfortable as the wheel shakes; you need to know the point at which the back end actually slides out from under you in a curve.

In networking parlance, this means we need to be looking at more than just counters and bits per second. We need to know the point at which the network slides out from under us. And in the case where we are making better use of more paths through SDN, we need to be looking at more than just hot links. We will eventually want to know how traffic gets balanced across all available links, and how that impacts application workloads.
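One way to look "beyond hot links," as described above, is to measure how evenly traffic is spread across all available links rather than just watching per-link counters. The sketch below is an illustrative assumption of such a metric, not a standard tool: it summarizes balance with the coefficient of variation of link utilization, which can flag a badly skewed fabric even when average load looks fine.

```python
# Sketch: summarize how evenly traffic is fanned out across links.
# A fabric can have a healthy average load yet be badly imbalanced;
# the coefficient of variation (CV) of link utilization exposes that.
from statistics import mean, pstdev

def utilization_balance(utils):
    """Return (mean utilization, coefficient of variation).
    A CV near 0 means traffic is spread evenly across links."""
    m = mean(utils)
    return m, (pstdev(utils) / m if m else 0.0)

balanced = [0.48, 0.52, 0.50, 0.50]  # even spread across four links
skewed   = [0.95, 0.05, 0.90, 0.10]  # same average, two links running hot

m1, cv1 = utilization_balance(balanced)
m2, cv2 = utilization_balance(skewed)
print(round(m1, 2), round(cv1, 2))  # 0.5 0.03
print(round(m2, 2), round(cv2, 2))  # 0.5 0.85
```

Both fabrics report the same 50% average utilization, which is all a simple counter roll-up would show; only the balance metric reveals that the second one is riding its limits on two links.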

Essentially, we are fast approaching an era where monitoring, planning, and troubleshooting are going to rely on more than simple counters. SDN represents more than just a new architecture. It brings with it the ability to do some pretty clever things. But those clever things will push us beyond our comfort zone. For people for whom performance is not important, maybe it’s ok to stay trapped behind a veil of lowered expectations.

But if SDN is really going to breed a new kind of performance networker, it means that we will collectively have to become a lot more familiar with our cars. The results might be life-changing… or at least network-changing.

[Today’s fun fact: Running links between sites at 99% utilization is possible. Imagine if you didn’t have to be Google to do it.]

The post Networking lessons from high-performance car racing appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
