

SDN Journal: Blog Post

The Silence of The Lambdas

Many different applications have different conversation needs

We still see quite a few eyebrows raised when we explain how we use WDM optics in our datacenter solution. The optical infrastructure is often mentioned in descriptions of the Plexxi solution, but it is worth explaining what it actually looks like and why it is part of our solution.

One of the key attributes of the Plexxi solution is the ability to create network topologies at L1, L2 and L3 that meet the needs of the offered workload, calculated from the load on the network and the Affinities created to describe the needs of specific application relationships. As an Ethernet network, changing how traffic flows at L2 and L3 is fairly straightforward. Packet-based networks make forwarding decisions on a packet-by-packet basis, and the lookup tables for each can be fully programmed by the switch control software. With tens to hundreds of thousands of entries each, these tables can send packets in different directions as directed by that software: distributed L2 and L3 routing protocols like OSPF, IS-IS and BGP for legacy networks, forwarding topologies calculated by a controller for a Plexxi network.
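As a rough illustration (all names here are hypothetical, not Plexxi APIs), a programmable forwarding table is conceptually just a per-destination lookup that control software, whether a distributed protocol or a central controller, can overwrite at any time:

```python
# Minimal sketch (hypothetical names): control software programming
# per-destination forwarding entries that every packet lookup consults.

class Switch:
    def __init__(self, name):
        self.name = name
        self.fib = {}  # destination -> egress port, fully programmable

    def program(self, destination, egress_port):
        """Controller (or routing protocol) installs or overwrites one entry."""
        self.fib[destination] = egress_port

    def forward(self, destination):
        """Per-packet lookup against the current table."""
        return self.fib.get(destination, "drop")

sw = Switch("switch-1")
sw.program("10.0.2.0/24", "port-7")   # initial path
print(sw.forward("10.0.2.0/24"))      # port-7
sw.program("10.0.2.0/24", "port-3")   # controller redirects the traffic
print(sw.forward("10.0.2.0/24"))      # port-3
```

Because every packet consults the table at forwarding time, rewriting an entry redirects traffic immediately, which is what makes the L2/L3 topology itself programmable.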

Many different applications have different conversation needs. A network with many different ways to get from one switch to any other switch gives a controller a greater ability to differentiate between conversations and direct traffic onto those paths. Each path has its own characteristics based on its physical connectivity (a direct path has a lower latency than an indirect path), but these characteristics change each time the controller decides to put traffic onto that path.

Recognizing that path diversity is fundamental to our ability to match traffic to the paths available, creating a network that provides many more paths than a legacy network is key. First, we do not use shortest-path protocols to determine where to send traffic, because they will always pick a single best path, or several equal-cost paths. Our controller can pick arbitrary paths at L2 and L3, based on the result of its calculation. But creating that path diversity in the first place is where our optical capabilities really come into play. You could manually construct that physical path diversity, but that would require many different cables from each switch to many other switches. Messy to say the least, and really hard to track and debug.
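To see why shortest-path routing throws diversity away, consider a toy sketch (topology and load figures are made up for illustration): enumerate every loop-free path between two switches and let a "controller" pick the least-loaded one, which need not be the shortest.

```python
# Sketch (hypothetical topology and loads): a controller choosing among all
# diverse paths, rather than the single shortest path, between two switches.

ADJ = {  # a small full mesh of four switches
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
}

def simple_paths(src, dst, path=None):
    """Depth-first enumeration of all loop-free paths from src to dst."""
    if path is None:
        path = [src]
    if src == dst:
        yield list(path)
        return
    for nxt in ADJ[src]:
        if nxt not in path:
            yield from simple_paths(nxt, dst, path + [nxt])

# Hypothetical link utilization; unlisted links are idle.
LOAD = {("A", "D"): 9, ("A", "B"): 1, ("B", "D"): 1}

def path_load(p):
    """Worst-case (bottleneck) load along a path."""
    return max(LOAD.get(tuple(sorted(edge)), 0) for edge in zip(p, p[1:]))

paths = list(simple_paths("A", "D"))
print(len(paths))                   # 5 distinct paths even in this tiny mesh
print(min(paths, key=path_load))    # ['A', 'C', 'D'] - indirect but unloaded
```

A shortest-path protocol would only ever use the direct A-D link here; a controller that sees all five paths can route around the loaded links.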

That created the first component of the Plexxi optical solution. Using WDM allows us to run many different L1 signals across very few cables. Multiple 10GbE signals are muxed together onto a few fibers in cables that simply extend between a switch and its two immediate neighbors. In each switch, some of the WDM signals that arrive on this LightRail cable are terminated and attached to 10GbE ports on the Ethernet switching ASIC; the others are passed through the switch passively on to the next switch. That next switch terminates some waves from this passed-through set, in addition to the ones it terminates from its direct neighbor, and passes a set of waves passively on to the switch after that. Repeating this termination and passive pass-through creates a meshed network where each switch has multiple 10GbE WDM signals connecting it to five switches to the logical East and five switches to the logical West. With our first generation switches, this means that each switch has direct L1 connectivity, with multiple 10GbE connections, to 10 other switches using only 2 cables. Our second generation switch doubles that to connections to 20 switches, each with multiple 10GbE signals.
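The resulting connectivity can be sketched with a deliberately simplified model (the ring abstraction and parameter names are mine, not Plexxi's): each switch terminates the wavelengths addressed to it and passively passes the rest, giving it direct L1 links to a fixed number of neighbors on each side over just two cables.

```python
# Simplified model (hypothetical): which switches a given switch reaches
# directly at L1 when WDM termination/pass-through gives it `reach` neighbors
# to the logical East and `reach` to the logical West on a ring of switches.

def l1_neighbors(switch, n_switches, reach=5):
    """Set of switches with a direct (terminated) WDM connection to `switch`."""
    east = {(switch + k) % n_switches for k in range(1, reach + 1)}
    west = {(switch - k) % n_switches for k in range(1, reach + 1)}
    return (east | west) - {switch}

# First-generation switch: 5 East + 5 West = 10 direct L1 neighbors, 2 cables.
print(len(l1_neighbors(0, n_switches=16, reach=5)))    # 10
# Second-generation switch: double the reach, 20 direct neighbors.
print(len(l1_neighbors(0, n_switches=32, reach=10)))   # 20
```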

While it may not seem like a lot at first, this easily turns into hundreds of diverse paths between even two neighboring switches, and thousands of paths between switches on larger networks. More paths mean more ways to differentiate based on the use of the network and the needs of the applications. That, however, is not the full extent of the optical capabilities in our switches. I described above how each switch takes a set of the 10GbE signals and terminates them onto the Ethernet switching ASIC. An extra piece of optical technology allows us to change where those signals go: instead of terminating a signal, we can pass it along to another switch, or even to an access-side port in our second generation switch. This gives us the ability to actively change the way the switches are connected together, that is, to change the L1 topology. There are several applications for this: taking switch-to-switch connections and running them across long-reach optics plugged into the access ports, or creating modified connectivity between switches to provide better topologies for the offered traffic and defined Affinities.
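Abstractly (again a hypothetical model, not Plexxi's actual control interface), that extra optical stage is a per-wavelength disposition the controller can rewrite: terminate locally, pass through to the next switch, or hand off to an access port, changing the L1 topology without touching a cable.

```python
# Sketch (hypothetical abstraction): a controller re-pointing an incoming
# wavelength, which changes the L1 topology without any re-cabling.

TERMINATE, PASS_THROUGH, ACCESS = "terminate", "pass-through", "access-port"

class OpticalStage:
    def __init__(self):
        self.config = {}  # wavelength -> disposition

    def set_disposition(self, wavelength, disposition):
        if disposition not in (TERMINATE, PASS_THROUGH, ACCESS):
            raise ValueError(disposition)
        self.config[wavelength] = disposition

    def route(self, wavelength):
        # Unconfigured wavelengths are passed along passively.
        return self.config.get(wavelength, PASS_THROUGH)

stage = OpticalStage()
stage.set_disposition("lambda-3", TERMINATE)   # attach to the switching ASIC
print(stage.route("lambda-3"))                 # terminate
stage.set_disposition("lambda-3", ACCESS)      # re-point to long-reach optics
print(stage.route("lambda-3"))                 # access-port
```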

Optical multiplexing and redirection are key components of the Plexxi solution, providing the Controller with path diversity and flexibility. Not having to rely on a statically created and heavily cabled datacenter infrastructure is an important part of our belief that programmatic control of the L1, L2 and L3 topologies is what the new datacenter needs. The optical components in our hardware enable those abilities. And those waves quietly do their work. Our lasers are not safe for the eye, but you are welcome to listen to their silence as much as you want…

[Today's fun facts: The movie "The Silence of the Lambs" was only the third movie to win an Academy Award in all 5 major categories. It was released Valentine's Day 1991 because director Jonathan Demme thought it was a great date movie.]


The post The Silence of The Lambdas appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
