

Plexxi Paths and Topologies Part 2 – Fully Specified Affinity Topologies

Last week I wrote the first installment of what I hope will be a fairly detailed explanation of how a Plexxi network is constructed, how paths are created, and how traffic is placed on the network. This week I will explain the fundamentals of how Plexxi Control determines actual forwarding paths for Affinitized traffic, and how the switches act on what Control calculates.

When Plexxi switches are connected together and powered up, they autonomously create a basic loop-free topology among all switches. This initial topology uses only a small portion of the fabric-side 10GbE mesh connections, but it serves as the topology for initial basic Ethernet operation, including flooding of packets to unknown destinations. It provides full connectivity between all the switches and allows traffic to flow from devices connected to Access Ports to any other place in the network, just as you would expect from any basic Ethernet network.

Once powered up and communicating with other switches, each switch creates a connection to Plexxi Control. After retrieving some basic information about the switch itself, the switch sends the topology described above to Plexxi Control. This topology describes exactly which LightRail-side ports this switch has, whether each is up, and which switch is on the other side of each 10GbE link. In many cases there are several parallel links between a pair of switches; each of these links is treated individually and communicated to Plexxi Control. When all switches have checked in, Control has a full view of every switch in this ring (and it can do this for multiple rings) and exactly how they are connected to each other. Control has no preconceived notion of how switches are connected, which allows us to change how switches are connected together (even using Access Ports). Depending on the size of the network, there are many hundreds of links between switches and thousands of ways to get from one switch to another: many direct, many more indirect with a single switching hop between them, and many more still with multiple switching hops.
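To make the discovery step concrete, here is a minimal sketch of how a controller could assemble its fabric view from per-link reports and enumerate the loop-free paths between two switches. All names and the report format are my own illustration, not Plexxi's actual protocol.

```python
from collections import defaultdict

def build_topology(link_reports):
    """link_reports: iterable of (switch, port, peer_switch) tuples,
    one per fabric link. Parallel links are kept as individual edges."""
    graph = defaultdict(list)
    for switch, port, peer in link_reports:
        graph[switch].append((port, peer))
    return graph

def count_paths(graph, src, dst, max_hops=3):
    """Enumerate loop-free paths from src to dst up to max_hops switches,
    counting each parallel link as a distinct path."""
    paths = []
    def walk(node, visited, path):
        for port, peer in graph[node]:
            if peer == dst:
                paths.append(path + [(node, port, peer)])
            elif peer not in visited and len(path) < max_hops - 1:
                walk(peer, visited | {peer}, path + [(node, port, peer)])
    walk(src, {src}, [])
    return paths

# Three switches in a small ring, with two parallel links between s1 and s2.
reports = [
    ("s1", 1, "s2"), ("s1", 2, "s2"), ("s1", 3, "s3"),
    ("s2", 1, "s1"), ("s2", 2, "s1"), ("s2", 3, "s3"),
    ("s3", 1, "s1"), ("s3", 2, "s2"),
]
g = build_topology(reports)
print(len(count_paths(g, "s1", "s2")))  # 2 direct + 1 via s3 = 3
```

Even in this toy three-switch ring the parallel links multiply the path count; scale that to rings of dozens of switches with many parallel links each and you get the hundreds of links and thousands of paths described above.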

If you are somewhat familiar with Plexxi, you know that we believe conversations between applications should drive network behavior. Applications are what networks are created for, so applications should decide how the network should behave to support them. Affinities are policy expressions between network endpoints, with an articulation of what these conversations want from the network. They may want a lot of bandwidth (storage backup), low latency (database updates), plain preferred treatment, separation from other conversations; whatever a network can provide, the policy is able to express. How these are expressed is a topic for a separate article.
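As a rough illustration of what such a policy expression might look like as data, here is a sketch of an Affinity modeled between endpoint groups. The field names and structure are assumptions for the sake of the example, not Plexxi's actual API or schema.

```python
from dataclasses import dataclass, field

@dataclass
class Affinity:
    """Illustrative shape only: a policy between endpoint groups."""
    name: str
    sources: list          # endpoint identifiers (MAC, IP, VM name, ...)
    destinations: list
    wants: dict = field(default_factory=dict)  # what the conversation needs

backup = Affinity(
    name="nightly-backup",
    sources=["vm-db-01"],
    destinations=["storage-array-a"],
    wants={"bandwidth_gbps": 8},          # a lot of bandwidth
)
oltp = Affinity(
    name="oltp-updates",
    sources=["00:1c:42:aa:bb:01"],        # a MAC-address endpoint
    destinations=["00:1c:42:aa:bb:02"],
    wants={"low_latency": True, "isolated": True},
)
print(oltp.wants["isolated"])  # True
```

Note that nothing in the policy says where `vm-db-01` or the MAC addresses live; that location resolution is Control's job, as described below.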

It is impossible (and unnecessary) to define Affinities for every possible conversation in a data center. There can easily be hundreds of thousands, or millions, and like everything else, certain sets of conversations are more important than others. There is always a subset of conversations that you really care about, that are critical to your business. It is these conversations that we expect to be explicitly defined in Affinities: explicit network endpoints, talking to other explicit endpoints, with specific needs. It is here that Control takes its first step in determining how traffic between the endpoints specified in Affinities should be directed through the network.

First, endpoints used in Affinities are defined by a specific identifier. In the simplest case this is a MAC address (or VMAC), but it could also be an IP address or another higher-level identifier (like a VM name) that Control can translate into a MAC or IP address. They do not, however, contain any location information: the policy does not define where these endpoints are attached to the network, and it applies regardless of where an endpoint is located. As its first step, the Control Fitting Engine finds out where the endpoints are attached to the network, using switch-provided MAC and ARP information. Once resolved, the Fitting Engine places these individual conversations onto the network topology. Conversations that need low latency are placed on the most direct paths between the source and destination switch. Conversations that need to be isolated are placed on paths between source and destination switches, after which those paths are marked so that no other conversation uses them. Conversations that need a lot of bandwidth get a larger "chunk" of a path between source and destination switch, meaning that fewer other conversations are placed onto that same path, or even onto the individual links in that path.
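The fitting step described above can be sketched in a few lines: resolve each endpoint to its attachment switch via learned MAC state, then pick a candidate path according to what the conversation wants. This is a deliberately naive toy of my own, not Plexxi's fitting algorithm; the real engine solves this jointly across thousands of conversations.

```python
mac_table = {  # switch-provided MAC learning: endpoint -> attachment switch
    "00:aa": "s1",
    "00:bb": "s2",
}
# Candidate loop-free paths between switch pairs (as lists of switch hops).
paths = {
    ("s1", "s2"): [["s1", "s2"], ["s1", "s3", "s2"], ["s1", "s4", "s3", "s2"]],
}
reserved = set()  # paths claimed exclusively by isolated conversations

def fit(src_mac, dst_mac, wants):
    """Resolve endpoints to switches, then choose a path for their needs."""
    src, dst = mac_table[src_mac], mac_table[dst_mac]
    candidates = [tuple(p) for p in paths[(src, dst)] if tuple(p) not in reserved]
    if wants.get("low_latency"):
        choice = min(candidates, key=len)   # most direct path
    else:
        choice = max(candidates, key=len)   # keep short paths free for others
    if wants.get("isolated"):
        reserved.add(choice)                # mark it off-limits to everyone else
    return choice

print(fit("00:aa", "00:bb", {"low_latency": True}))  # ('s1', 's2')
print(fit("00:aa", "00:bb", {"isolated": True}))     # longest free path, now reserved
```

Bandwidth "chunks" would work the same way: track remaining capacity per link and exclude paths whose links are already heavily subscribed.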

This may sound fairly straightforward, but it is done for thousands of conversations, trying to satisfy all their needs and requirements. All the complex math and graph theory @mbushong and I have referred to in this blog comes into play to accurately put conversations where they should go. One of the reasons a Plexxi network provides such a tremendous number of possible paths between any two switches is exactly this: more paths give Control more opportunities to differentiate and separate conversations. The results of all these conversations and how they should traverse the network are kept in what we call Fully Specified Affinity Topologies, or FSATs. They are named this way because the end-to-end path is fully defined: for each of these conversations, the source switch is defined, the destination switch is defined, and so is every specific switch in between them, as well as all of the actual links between those switches that must be used.
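To illustrate what "fully specified" means, here is a hypothetical shape for one FSAT entry. This is not Plexxi's wire format, just a picture of the information content: every switch on the path and the exact fabric link out of each one.

```python
fsat = {
    "conversation": ("00:aa", "00:bb"),
    "path": [
        # (switch, egress fabric port) hops in order; naming a specific
        # port pins down which of several parallel links is used.
        ("s1", 3),    # source switch: leave via fabric port 3
        ("s3", 2),    # transit switch: leave via fabric port 2
        ("s2", None), # destination switch: deliver to the access port
    ],
}
# Nothing is left to per-hop decisions: source, destination, every transit
# switch, and the exact link between each pair are all specified.
print(len(fsat["path"]))  # 3
```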

Once calculated, the FSATs are sent to all switches, not as individual flow entries as in OpenFlow, but as descriptive topologies for these specific conversations. Each switch receives entire paths; it is fully aware of the complete end-to-end path. This allows switches to constantly validate these paths against the state of the network, without any support from Control. Each switch implements in hardware those rules for which it is the source switch, the destination switch, or a switch in the path between the two. Once instantiated in hardware, any packet for these conversations is forwarded along the path defined in the FSAT, and this happens without real-time help from Control: the switches have all the information they need for these conversations.
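A minimal sketch of the switch-side selection, under the same assumed FSAT shape as above (again my illustration, not Plexxi's implementation): every switch receives every path, but programs hardware only for the hops that name it.

```python
def rules_for_switch(switch, fsats):
    """Return the hops this switch must program, given full FSAT paths."""
    installed = []
    for fsat in fsats:
        for hop_switch, egress_port in fsat["path"]:
            if hop_switch == switch:
                installed.append((fsat["conversation"], egress_port))
    return installed

fsats = [
    {"conversation": ("00:aa", "00:bb"),
     "path": [("s1", 3), ("s3", 2), ("s2", None)]},
    {"conversation": ("00:cc", "00:dd"),
     "path": [("s1", 1), ("s2", None)]},
]
# s3 programs only the conversation that transits it, yet it still holds
# both complete paths and can validate them against local link state.
print(rules_for_switch("s3", fsats))  # [(('00:aa', '00:bb'), 2)]
```

Because each switch keeps the full paths rather than isolated flow entries, it can notice on its own when a link on one of those paths goes down, without asking Control.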

As you may have figured out, I am taking some creative liberties in describing how packets are forwarded in a Plexxi network, partly for intellectual property reasons, but more importantly because a single end-to-end description of the exact process would run tens of pages and bore you to no end. Above we determined how Affinitized traffic is forwarded, so your next question should be: "what about all the non-Affinitized traffic, how does that get forwarded?" That is the topic for next week's article, along with how Control takes user-defined or measured fabric link traffic into account when figuring out what to do with all the other traffic.

The post Plexxi Paths and Topologies Part 2 – Fully Specified Affinity Topologies appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
