
Fabric Engineering Is More than Traffic Engineering

Traffic engineering has taken on a very specific meaning, very much related to the way traffic is mapped onto MPLS

It is human nature to try and relate new information and new ways of doing things to something that we know, something we are familiar with. Often when we talk about the way we fit traffic onto a Plexxi mesh network, the reaction is “I know what you mean, you are doing traffic engineering like we (used to) do in MPLS”. The response to that is usually “kinda, but not really”.

In the most basic meaning, everything that has to do with the placement of traffic onto links, the routing and forwarding choices being programmed, etc., would be part of traffic engineering. But like too many words and phrases in our networking dictionary, traffic engineering has taken on a very specific meaning, very much related to the way traffic is mapped onto MPLS and similar networks. For us, it's a bit different. We build Ethernet and IP networks that use packet-by-packet forwarding rules, and while individual flows may have a hop-by-hop crafted path through the network, a very large portion of traffic travels using regular L2 and L3 forwarding tables. We just construct those tables differently than the typical network does. Dare I say we engineer the tables to ensure forwarding occurs the way we calculated it should go.
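To make that concrete, here is a minimal sketch in Python (with entirely hypothetical names; this is not Plexxi's actual API): the switch still performs an ordinary L2/L3 table lookup, but the entries it looks up were installed by a central calculation rather than derived from a local shortest-path run.

```python
# Minimal sketch, hypothetical names only: a forwarding table whose next-hop
# choices come from a central calculation instead of a local SPF computation.

class ForwardingTable:
    def __init__(self):
        # destination (MAC address or IP prefix) -> egress port chosen centrally
        self.entries = {}

    def install(self, destination, egress_port):
        """Install the result of the central path computation."""
        self.entries[destination] = egress_port

    def lookup(self, destination):
        # A regular table lookup; the "engineering" is in what was installed.
        return self.entries.get(destination)

table = ForwardingTable()
table.install("aa:bb:cc:dd:ee:01", egress_port=7)  # not necessarily the fewest hops
print(table.lookup("aa:bb:cc:dd:ee:01"))           # -> 7
```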

Mike Bushong yesterday talked about equal cost paths and shortest path algorithms. We have discussed our views several times in this and other forums, and it is because we have a fundamental belief that it can be done better. A Plexxi network consists of switches that are connected together using a variety of optical technologies with a mesh of 10GbE Ethernet connections. These point-to-point connections between switches form the basis of the L2 and L3 connectivity, on top of which Plexxi Control engineers the forwarding behavior of the fabric (skipping for now the part where Control can actually change those point-to-point connections).
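As a rough illustration (the switch names and the mesh below are made up), the starting point is simply a graph of point-to-point 10GbE links between switches:

```python
# Illustrative only: the fabric modeled as a graph of point-to-point 10GbE
# links between switches. Switch names and the mesh itself are invented here.

FABRIC_LINKS = [
    ("sw1", "sw2"), ("sw1", "sw3"),
    ("sw2", "sw3"), ("sw2", "sw4"), ("sw3", "sw4"),
]
LINK_CAPACITY_GBPS = 10  # each point-to-point connection is 10GbE

# Adjacency view of the mesh; this is the L1/L2 starting point on top of which
# the forwarding behavior gets engineered.
adjacency = {}
for a, b in FABRIC_LINKS:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

print(adjacency["sw2"])  # -> {'sw1', 'sw3', 'sw4'}
```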

So far this still smells like Traffic Engineering. I have a network, I have traffic I need to put on the network. The goal is simple, maximize the available network capacity. Use it as best as you can. All of it. While providing the best possible service for the providers and consumers of the traffic. But the way we do it is just a little different.

Our fabric is engineered based on a set of information sources. The most obvious of those sources is the actual network that has been constructed: how our switches are connected together, and what initial L1/L2 topology (a mesh of point-to-point connections) is created as part of our default connectivity. Our second source of information is the set of Affinities defined for the network: what application relationships are explicitly described, what network behavior is requested for them and, most importantly, where these application components can be found on the network. The latter is learned, of course, not defined by the operator. The third component is actual traffic utilization on the fabric: which links are in use and how heavily.
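A rough way to picture those three inputs (the structure and names below are mine, not Plexxi Control's actual data model):

```python
# Sketch of the three inputs described above, using the same invented mesh:
# the built topology, the declared Affinities, and measured link utilization.

fitting_inputs = {
    # 1. The network as built: point-to-point links between switches.
    "topology": [("sw1", "sw2"), ("sw1", "sw3"),
                 ("sw2", "sw3"), ("sw2", "sw4"), ("sw3", "sw4")],

    # 2. Affinities: explicitly described application relationships, the
    #    behavior requested for them, and (learned, not configured) where
    #    the application components sit on the network.
    "affinities": [
        {"src_group": "web", "dst_group": "db", "requirement": "isolated_path",
         "learned_locations": {"web": "sw1", "db": "sw4"}},
    ],

    # 3. Measured utilization per link, as a fraction of its capacity in use.
    "utilization": {("sw1", "sw2"): 0.65, ("sw1", "sw3"): 0.10,
                    ("sw2", "sw3"): 0.30, ("sw2", "sw4"): 0.40,
                    ("sw3", "sw4"): 0.05},
}
```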

These three inputs go into what we call the fitting engine: heavy-duty graph theory that we have blogged about before. The goal: resolve all Affinity constraints on top of the network infrastructure provided, and provide Non Equal Weighted Egress Based Multipath Trees for all non-Affinitized traffic, to ensure it gets the best service and to spread traffic as well as we can across all available paths between any two points in the fabric. That last one is a mouthful and not really what we call it, but at NFD7 yesterday I used it jokingly as the best descriptive term that articulates all it actually is. Last, a set of backup paths is calculated for link and switch failures; this can easily number in the hundreds of paths. The results of these calculations are passed back to the switches in a multi-phase commit fashion, ensuring that all switches start using the same forwarding directions at the same time. Some of these are used in the switches as explicit flow-like rules (similar, perhaps, to OpenFlow); others are used to populate forwarding tables when new MAC addresses are learned or ARPs are resolved.
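To show the flavor of what such a fitting step produces, here is a toy stand-in (emphatically not Plexxi's actual algorithm): it enumerates the loop-free paths between two switches, scores each by the headroom on its busiest link, spreads traffic over the best few with non-equal weights, and keeps the rest as pre-computed backups.

```python
# Toy stand-in for the fitting step: weighted multipath spread plus backups.

def simple_paths(adjacency, src, dst, path=None):
    """Yield all loop-free paths from src to dst over the adjacency map."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in adjacency[src]:
        if nxt not in path:
            yield from simple_paths(adjacency, nxt, dst, path)

def fit(adjacency, utilization, src, dst, keep=2):
    """Return (weighted primary paths, backup paths) for src -> dst traffic."""
    def headroom(path):
        links = (tuple(sorted(l)) for l in zip(path, path[1:]))
        # Score a path by the free capacity on its most loaded link.
        return min(1.0 - utilization.get(l, 0.0) for l in links)

    paths = sorted(simple_paths(adjacency, src, dst), key=headroom, reverse=True)
    primaries, backups = paths[:keep], paths[keep:]
    total = sum(headroom(p) for p in primaries) or 1.0
    weights = [(p, round(headroom(p) / total, 2)) for p in primaries]  # non-equal weights
    return weights, backups

# Tiny example, reusing the made-up mesh and utilization from the earlier sketches.
adjacency = {"sw1": {"sw2", "sw3"}, "sw2": {"sw1", "sw3", "sw4"},
             "sw3": {"sw1", "sw2", "sw4"}, "sw4": {"sw2", "sw3"}}
utilization = {("sw1", "sw2"): 0.65, ("sw1", "sw3"): 0.10, ("sw2", "sw3"): 0.30,
               ("sw2", "sw4"): 0.40, ("sw3", "sw4"): 0.05}

primaries, backups = fit(adjacency, utilization, "sw1", "sw4")
print(primaries)     # e.g. [(['sw1', 'sw3', 'sw4'], 0.6), (['sw1', 'sw3', 'sw2', 'sw4'], 0.4)]
print(len(backups))  # remaining paths held as failure alternatives
```

The weights in the primary set are deliberately unequal, which is the essential difference from equal-cost multipath: a lightly loaded detour can carry more of the traffic than a congested "shortest" path.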

The result is a very carefully constructed description of connectivity from anywhere to anywhere on the fabric, satisfying the needs of Affinities and balancing the remainder of the traffic across all available bandwidth. And once completed, the next set of traffic statistics entered into the overall traffic modeling will provide an even better-tuned view of reality. When new Affinities are defined, you can run partial computations to simply layer these new requirements on top of the previously calculated topologies.
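A hypothetical illustration of that partial step (again my names, not the product's): layering one new Affinity only touches the entry for its endpoints and reuses everything previously calculated.

```python
# Hypothetical illustration of the partial step: when a new Affinity is
# defined, compute paths only for its endpoints and layer the result onto the
# previously calculated state, leaving every other entry untouched.

def add_affinity(existing_paths, adjacency, utilization, affinity, fit_fn):
    """Layer one new Affinity onto existing results; fit_fn is a path
    computation such as the fit() sketch above."""
    src = affinity["learned_locations"][affinity["src_group"]]
    dst = affinity["learned_locations"][affinity["dst_group"]]
    primaries, backups = fit_fn(adjacency, utilization, src, dst)
    updated = dict(existing_paths)  # previously calculated entries are reused as-is
    updated[(src, dst)] = {"primaries": primaries, "backups": backups}
    return updated
```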

Traffic Engineering has a very '90s meaning attached to it. At least for me it does. Maybe this is Traffic Engineering 3.0 for the data center and data center interconnects (see last week's blog post). Maybe the name does not matter. It's a very mathematically engineered fabric. Mike said it well at NFD7: we are really a math company, masquerading as a software company, masquerading as a hardware company.

[Today's fun fact: Coconuts kill more people than sharks. Each year 150 people die from falling coconuts. Umbrellas make for safer shade than coconut trees.]



More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
