
I (don’t) Like Big Buffers By @PlexxiInc | @CloudExpo [#SDN #Cloud #BigData]

Recently Arista released a white paper arguing that deeper buffers in the network can help alleviate the congestion that arises when a large number of many-to-one connections converge on a single point, also known as the TCP incast problem. They pointedly targeted Hadoop clusters, since incast tends to rear its ugly head when a Hadoop cluster is running MapReduce jobs. The study used an example of 20 servers hanging off of a single ToR switch with 40Gbps of uplink capacity in a leaf/spine network, a 5:1 oversubscription ratio. The same kind of oversubscription shows up in the recently published details of Facebook's data center network, so it's safe to assume these ratios are common in the wild. I know I've run my fair share of oversubscribed networks in the past.
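
As a quick sanity check on those numbers, here is a minimal back-of-the-envelope sketch in Python. It assumes 10GbE server-facing ports, which is my assumption for illustration rather than something stated above.

```python
# Rough oversubscription math for the ToR scenario described above.
# Assumption (not from the post): each of the 20 servers has a 10 Gbps port,
# and the ToR switch has 40 Gbps of total uplink capacity.

def oversubscription_ratio(servers: int, server_gbps: float, uplink_gbps: float) -> float:
    """Ratio of worst-case server-facing demand to available uplink capacity."""
    return (servers * server_gbps) / uplink_gbps

ratio = oversubscription_ratio(servers=20, server_gbps=10.0, uplink_gbps=40.0)
print(f"Oversubscription ratio: {ratio:.0f}:1")  # -> 5:1
```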

Treating the Symptom

This particular study actually prods at the Achilles' heel of the traditional leaf/spine network design. Keeping all nodes within three switch hops (ToR <-> Spine <-> ToR) gives today's network operators predictable pathing, but I posit that this design is another case of treating the symptom instead of curing the disease. Large buffers let the network mask the disease: oversubscription, the congestion that oversubscription creates, and a lack of path diversity. They do not cure it. And their benefit depends entirely on the flows in a given network being short and bursty. If the flows are larger and more sustained, bigger buffers at best add latency to the path rather than improving performance.

With sustained flows, Little's Law takes over: in steady state, the rate at which the 'front' of the buffer drains equals the rate at which the 'rear' of the buffer is being filled. When that traffic pattern holds, the main thing we introduce into the system is added latency. The frame is copied into memory, a pointer is created and dropped into a queue, that pointer works its way to the front of the queue, it is dequeued and dereferenced back to the frame in memory, and the frame is serialized onto the PHY interface and transmitted over the wire. That whole process adds latency. And again, if it's a sustained flow, the best we're doing is adding latency to the path.
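
To make that concrete, here is a minimal sketch of Little's Law applied to a standing queue on a switch port. The buffer occupancies and port speed below are illustrative assumptions, not figures from the Arista study.

```python
# Little's Law: L = lambda * W, so the time a frame waits behind a standing
# queue is W = L / lambda, i.e., queued bytes divided by the rate the queue drains.
# The numbers here are illustrative assumptions only.

def added_latency_us(queued_bytes: float, drain_rate_gbps: float) -> float:
    """Microseconds a frame spends waiting behind a standing queue on one port."""
    bytes_per_us = drain_rate_gbps * 1e9 / 8 / 1e6  # drain rate in bytes per microsecond
    return queued_bytes / bytes_per_us

# A 10 Gbps egress port with a sustained flow keeping 12 MB of buffer occupied:
print(f"{added_latency_us(12e6, 10):.0f} us added")   # ~9600 us

# The same port with only 100 KB queued:
print(f"{added_latency_us(100e3, 10):.0f} us added")  # ~80 us
```

If the flow is sustained enough to keep the queue occupied, the deeper the buffer, the larger that standing delay becomes; throughput does not improve because the drain rate is fixed by the port speed.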

Curing the Disease

The way to cure the disease in this scenario is to remove the outbound bottleneck on the ToR switch, and we do that today with Plexxi switches. With our unequal cost multipathing and the absence of a spine layer in the data center network, we avoid most of the problems discussed in the referenced study. I say 'most' because there is a problem the study should have considered across the whole system but didn't: the choke point at the host itself, which is also subject to incast.

Outbound pathing on the leaf switch and inbound/outbound pathing on a given spine switch are both points where the network can exhibit TCP incast, but so is the link that connects a given host to the network. In a leaf/spine network there are limited ways to address this last point: add more connectivity between a host in a rack and the ToR switch in the form of a LAG, or, depending on the equipment you have deployed, an MLAG between two specific leaf switches. With Plexxi's implementation of MLAG we can create an MLAG between any two switches in a Plexxi Ring and a host connected to the Plexxi network; we do not have the typical vendor-specific limitation of MLAG being configurable only between two statically defined switches.
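
For the host-to-switch leg, the mechanism underneath a LAG or MLAG is typically hashing each flow onto one member link so that different flows can take different physical paths. A minimal sketch of that idea follows; the member names and flow 5-tuples are illustrative, not Plexxi-specific configuration.

```python
# Hash-based member selection: the mechanism that lets a LAG or MLAG spread
# distinct flows across the physical links between a host and its switch(es).
# Member names and flow 5-tuples below are illustrative only.
import hashlib

def pick_member(flow_5tuple: tuple, members: list) -> str:
    """Deterministically map a flow onto one LAG member link."""
    digest = hashlib.sha256(repr(flow_5tuple).encode()).digest()
    return members[int.from_bytes(digest[:4], "big") % len(members)]

members = ["eth0->switch-a", "eth1->switch-b"]  # e.g., an MLAG toward two switches
flows = [
    ("10.0.0.5", "10.0.1.7", 6, 40112, 50010),
    ("10.0.0.5", "10.0.1.8", 6, 40113, 50010),
    ("10.0.0.5", "10.0.1.9", 6, 40114, 50010),
]
for flow in flows:
    print(flow, "->", pick_member(flow, members))
```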

By creating an MLAG across an arbitrary number of switches in a Plexxi ring and providing unequal cost multipathing within our rings, we can diversify connectivity and dynamically allocate bandwidth to alleviate congestion on the fly, removing the need for larger buffers in the network. This follows the age-old principle of pushing complexity to the edge of the network as much as possible. Our UECMP and MLAG connectivity shifts congestion to the end host rather than leaving it hidden in a blind spot at some interconnection point in a leaf/spine network.
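
To see why path diversity attacks the congestion that deep buffers only hide, here is a toy comparison. It is my own illustrative model, not Plexxi's UECMP algorithm: the same 200 Gbps of incast demand offered either to a single 40 Gbps uplink or spread across several unequal-cost paths.

```python
# Toy comparison: one oversubscribed uplink vs. demand spread over several
# unequal-cost paths. Purely illustrative; not a model of Plexxi's UECMP.

def utilization(demand_gbps: float, path_capacities_gbps: list) -> float:
    """Per-link utilization when demand is split in proportion to each path's capacity."""
    return demand_gbps / sum(path_capacities_gbps)

demand = 200.0  # 20 servers * 10 Gbps of many-to-one traffic

# A single 40 Gbps uplink runs at 5x capacity, so frames queue in buffers or drop.
print(f"single uplink:  {utilization(demand, [40.0]):.2f}x of capacity")

# The same demand spread across unequal-cost paths totaling 240 Gbps stays under
# 1x, so no standing queue builds and no deep buffer is needed to absorb it.
print(f"diverse paths:  {utilization(demand, [100.0, 80.0, 40.0, 20.0]):.2f}x of capacity")
```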

Programmability, combined with insight into how distributed applications are dynamically placed across a compute cluster, also lets us model the network's required resources on the fly. In other words, we could allocate network resources specifically for the nodes affected by a job submitted to the cluster, but that is a post for another time. My overall point is that to cure the incast problem wholly and completely, we need dynamic path diversity along with data-driven workload placement to fully optimize the distributed compute platforms we'll be dealing with in the future.


More Stories By Michael Bushong

The best marketing efforts combine deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led the product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both the service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn’t rocket science."
