Plexxi Paths and Topologies Part 2 – Fully Specified Affinity Topologies

Last week I wrote the first installment of what I hope is a fairly detailed explanation of how a Plexxi network is constructed, how paths are created and how traffic is placed on the network. This week I will explain the fundamentals of the role of Plexxi Control in determining actual forwarding paths for Affinitized traffic and how the switches act on what Control calculates.

When Plexxi switches are connected together and powered up, they autonomously create a basic loop-free topology between all switches. This initial topology uses only a small portion of the fabric-side 10GbE mesh connections, but it serves as the topology for initial basic Ethernet operation, including flooding of packets to unknown destinations. It provides full connectivity between all the switches and allows traffic to flow from devices connected to Access Ports to any other place in the network, just as you would expect from any basic Ethernet network.
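As a rough illustration (and not Plexxi's actual bootstrap protocol), you can think of this starting point as picking one spanning tree out of the discovered mesh. The sketch below does this with a breadth-first walk over a hypothetical set of fabric links; the switch names and link list are made up for the example.

```python
from collections import deque

# Hypothetical fabric mesh: one (switch_a, switch_b) entry per 10GbE
# LightRail link; parallel links between the same pair are allowed.
links = [
    ("sw1", "sw2"), ("sw1", "sw2"),   # two parallel links
    ("sw2", "sw3"), ("sw3", "sw4"),
    ("sw4", "sw1"), ("sw2", "sw4"),
]

def bootstrap_tree(links, root):
    """Pick one loop-free subset of links (a spanning tree) via BFS.

    Only a fraction of the mesh gets used, but every switch stays
    reachable, which is all the initial flood-and-learn behavior needs."""
    adjacency = {}
    for a, b in links:
        adjacency.setdefault(a, []).append(b)
        adjacency.setdefault(b, []).append(a)

    tree, visited, queue = [], {root}, deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                tree.append((node, neighbor))
                queue.append(neighbor)
    return tree

print(bootstrap_tree(links, "sw1"))
# [('sw1', 'sw2'), ('sw1', 'sw4'), ('sw2', 'sw3')] -- 3 of the 6 links in use
```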

Once powered up and communicating with other switches, each switch creates a connection to Plexxi Control. After Control retrieves some basic information about the switch itself, the switch sends its view of the topology described above to Plexxi Control. This view describes exactly which LightRail-side ports the switch has, whether each one is up, and which switch is on the other side of each 10GbE link. In many cases there are several parallel links between a pair of switches, and each of these links is treated individually and communicated to Plexxi Control. When all switches have checked in, Plexxi Control has a full view of all switches in the ring (and it can do this for multiple rings) and of exactly how they are connected to each other. Control has no preconceived notion of how switches are connected, which allows us to change how switches are connected together (even using Access Ports). Depending on the size of the network, there are many hundreds of links between switches and thousands of ways to get from one switch to another: many direct, many more indirect with a single switching hop in between, and many more still with multiple switching hops.
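To make this concrete, here is a hypothetical sketch of how per-switch link reports could be folded into the controller's graph, and why even a tiny fabric already offers multiple switch-level paths. The report fields, switch names and port names are illustrative assumptions, not Plexxi's actual wire format.

```python
from collections import defaultdict

# Hypothetical per-switch report to Control: one record per LightRail
# port, naming the local port, its state, and the switch on the far side.
reports = {
    "sw1": [("lr1", "up", "sw2"), ("lr2", "up", "sw2"), ("lr3", "up", "sw3")],
    "sw2": [("lr1", "up", "sw1"), ("lr2", "up", "sw1"), ("lr3", "up", "sw3")],
    "sw3": [("lr1", "up", "sw1"), ("lr2", "up", "sw2")],
}

# Control's view: every individual up link becomes its own edge, so
# parallel links between the same pair of switches stay distinct.
graph = defaultdict(list)
for switch, ports in reports.items():
    for port, state, peer in ports:
        if state == "up":
            graph[switch].append((port, peer))

def simple_paths(graph, src, dst, seen=()):
    """Enumerate every loop-free switch-level path from src to dst."""
    if src == dst:
        yield [dst]
        return
    for _, peer in graph[src]:
        if peer not in seen:
            for tail in simple_paths(graph, peer, dst, seen + (src,)):
                yield [src] + tail

# Even this three-switch toy fabric has one direct path and two indirect
# ones (one per parallel link toward sw2); a real ring has vastly more.
for path in simple_paths(graph, "sw1", "sw3"):
    print(path)
```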

If you are somewhat familiar with Plexxi, you know that we believe conversations between applications should drive network behavior. Applications are what networks are built for, so applications should decide how the network behaves to support them. Affinities are policy expressions between network endpoints, with an articulation of what those conversations want from the network. They may want a lot of bandwidth (storage backup), low latency (database updates), simply preferred treatment, separation from other conversations, you name it: whatever a network can provide, the policy is able to express. How these are expressed is a topic for a separate article.
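The exact policy grammar is, as noted, a separate topic, but a hypothetical sketch of the shape such a policy object might take helps make the idea concrete: endpoints plus intent, and nothing about where those endpoints sit. The field names and example values below are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Affinity:
    """Illustrative affinity: a policy between endpoint groups that says
    nothing about where those endpoints attach to the network."""
    name: str
    src_group: List[str]           # endpoint identifiers: MACs, IPs, VM names, ...
    dst_group: List[str]
    requirement: str               # e.g. "bandwidth", "low_latency", "isolation"
    value: Optional[float] = None  # optional quantity, e.g. Gb/s for bandwidth

backup = Affinity("nightly-backup", ["vm-db-01", "vm-db-02"], ["backup-target"],
                  requirement="bandwidth", value=20.0)
trading = Affinity("order-flow", ["00:1c:73:aa:00:01"], ["00:1c:73:aa:00:02"],
                   requirement="low_latency")
print(backup)
print(trading)
```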

It is impossible (and unnecessary) to define Affinities for every possible conversation in a data center. There can easily be hundreds of thousands, or millions, and like everything else, certain sets of conversations are more important than others. There is always a subset of conversations that you really care about, that are critical to your business. It is these conversations that we expect to be explicitly defined in Affinities: explicit network endpoints, talking to other explicit endpoints, with specific needs. It is here that Control takes its first step in determining how traffic between endpoints specified in Affinities should be directed through the network.

First, endpoints used in Affinities are defined by a specific identifier. In the simplest case this is a MAC address (or VMAC), but it could also be an IP address or another higher-level identifier (like a VM name) that Control can translate into a MAC or IP address. They do not, however, contain any location information: the policy does not define where these endpoints are attached to the network, and it applies regardless of where an endpoint is located. As its first step, the Control Fitting Engine finds out where the endpoints are attached to the network, using switch-provided MAC and ARP information. Once resolved, the Fitting Engine places these individual conversations onto the network topology. Conversations that need low latency are placed on the most direct paths between the source and destination switch. Conversations that need to be isolated are placed on paths between source and destination switches, after which those paths are marked so that no other conversation uses them. Conversations that need a lot of bandwidth get a larger "chunk" of a path between source and destination switch, meaning that fewer other conversations are placed onto that same path, or even onto the individual links in that path.
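A deliberately simplified sketch of that placement step is below. The MAC table, fabric graph, greedy one-conversation-at-a-time strategy and all names are illustrative assumptions, not the actual Fitting Engine; the point is only the sequence: resolve endpoint location, pick a path, then account for the conversation's requirement.

```python
from collections import defaultdict, deque

# Hypothetical inputs: where each endpoint was learned (from MAC/ARP data),
# the switch-level fabric graph, and residual capacity per link in Gb/s.
mac_table = {"vm-a": ("sw1", "access3"), "vm-b": ("sw4", "access7")}
graph = {"sw1": ["sw2", "sw3"], "sw2": ["sw1", "sw4"],
         "sw3": ["sw1", "sw4"], "sw4": ["sw2", "sw3"]}
capacity = defaultdict(lambda: 40.0)   # residual Gb/s per (a, b) fabric link
reserved = set()                       # links claimed by isolated conversations

def fewest_hops(src, dst, avoid):
    """BFS for the most direct usable path, skipping reserved links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            link = tuple(sorted((path[-1], nxt)))
            if nxt not in seen and link not in avoid:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def fit(src_ep, dst_ep, need, amount=0.0):
    """Place one affinitized conversation onto the fabric."""
    src, dst = mac_table[src_ep][0], mac_table[dst_ep][0]   # resolve locations
    path = fewest_hops(src, dst, reserved)
    links = [tuple(sorted(hop)) for hop in zip(path, path[1:])]
    if need == "isolation":
        reserved.update(links)          # nobody else may use these links
    elif need == "bandwidth":
        for link in links:
            capacity[link] -= amount    # leave less headroom for other flows
    return path                         # "low_latency" simply keeps the direct path

print(fit("vm-a", "vm-b", "bandwidth", 20.0))   # e.g. ['sw1', 'sw2', 'sw4']
```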

This may sound fairly straightforward, but it is done for thousands of conversations, trying to satisfy all of their needs and requirements. All the complex math and graph theory @mbushong and I have referred to in this blog come into play to accurately put conversations where they should go. One of the reasons a Plexxi network provides such a tremendous number of possible paths between any two switches is exactly this: more paths give Control more opportunities to differentiate and separate conversations. The results for all these conversations, and how they should traverse the network, are kept in what we call Fully Specified Affinity Topologies, or FSATs. They are named this way because the end-to-end path is fully defined: for each of these conversations, the source switch is defined, the destination switch is defined, every specific switch in between, and all the actual links between those switches that must be used.
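The structure below is a hypothetical illustration of what "fully specified" means in practice; the field names and port names are assumptions, not Plexxi's actual encoding. Nothing is left for the switches to work out: every switch on the path and every individual fabric link is named.

```python
# Illustrative shape of one Fully Specified Affinity Topology entry.
fsat = {
    "affinity":    "nightly-backup",
    "src_switch":  "sw1",
    "dst_switch":  "sw4",
    "switch_path": ["sw1", "sw2", "sw4"],
    "links": [     # the exact 10GbE LightRail link chosen for every hop
        {"from": ("sw1", "lightrail2"), "to": ("sw2", "lightrail5")},
        {"from": ("sw2", "lightrail1"), "to": ("sw4", "lightrail3")},
    ],
}
print(fsat["switch_path"])
```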

Once calculated, the FSATs are sent to all switches: not as individual flow entries, as in OpenFlow, but as descriptive topologies for these specific conversations. Each switch receives entire paths; it is fully aware of the complete end-to-end path. This allows switches to constantly validate these paths against the state of the network, without any support from Control. Each switch implements in hardware those rules for which it is the source switch, the destination switch, or a switch on the path between the two. Once instantiated in hardware, any packet for this conversation is forwarded along the path defined in the FSAT. And this is done without real-time help from Control; the switches have all the information they need for these conversations.
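Continuing with the illustrative FSAT shape from the previous sketch, the hypothetical logic below shows the switch-side idea: each switch works out its own role in the path, programs hardware only for that role, and can re-check path validity from its local view of link state rather than asking Control. The function names and the link_is_up callback are assumptions for the example.

```python
# Same illustrative FSAT shape as in the previous sketch.
fsat = {
    "affinity": "nightly-backup",
    "switch_path": ["sw1", "sw2", "sw4"],
    "links": [
        {"from": ("sw1", "lightrail2"), "to": ("sw2", "lightrail5")},
        {"from": ("sw2", "lightrail1"), "to": ("sw4", "lightrail3")},
    ],
}

def role_in_fsat(fsat, me):
    """Which part, if any, of this path this switch must program in hardware."""
    path = fsat["switch_path"]
    if me not in path:
        return None                    # not on the path: install nothing
    if me == path[0]:
        return "source"
    return "destination" if me == path[-1] else "transit"

def install(fsat, me, link_is_up):
    """Program the conversation only while the whole path is valid.

    Because every switch holds the complete end-to-end path, it can redo
    this check on any fabric event without asking Control; link_is_up is a
    stand-in for the switch's local view of fabric link state."""
    role = role_in_fsat(fsat, me)
    if role is None:
        return "ignored"
    if not all(link_is_up(hop["from"], hop["to"]) for hop in fsat["links"]):
        return "path invalid: fall back to non-affinitized forwarding"
    # a real switch would write the actual hardware forwarding entries here
    return "programmed as {} switch for {}".format(role, fsat["affinity"])

print(install(fsat, "sw2", lambda a, b: True))
```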

As you may have figured out, I am taking some creative liberties in describing how packets are forwarded in a Plexxi network, partly for intellectual property reasons, but more importantly because a single end-to-end description of the exact process would run to tens of pages and bore you to no end. We have now covered how Affinitized traffic is forwarded, and your next question should be "what about all the non-Affinitized traffic, how does that get forwarded?" That is the topic of next week's article, along with how Control takes user-defined or measured fabric link traffic into account when figuring out what to do with all the other traffic.

The post Plexxi Paths and Topologies Part 2 – Fully Specified Affinity Topologies appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
