Commodity Network Fabrics

What role does the concept of a “network fabric” play in the march towards commoditization of networking?  Well, let’s discuss!

The Whole Shebang

There can be no doubt that an organization’s relationship to networking is with the aggregate thing it calls “the network.”  When there are issues, non-network folks say wonderfully vague things like “The network is dropping packets!” or “I can’t log in… must be the network.”  This intuition, to think about the network as a whole rather than as a collection of systems, is right:  Collectively, the network is supposed to produce desirable aggregate behavior.

This is an important clue as to how networking will evolve in the future.  SDN is a step in this direction.  Intelligent software will undoubtedly coordinate the actions of the underlying constituent systems, on behalf of an operator or an application, to achieve some policy goals.  This software need not exist solely in the form of a network controller.  Indeed, here at Plexxi, our switches can coordinate on their own to achieve aggregate behavior.  This is why you can stand up a Plexxi network, and pass traffic, without the need for a centralized controller.

A network fabric should have the goal of managing network workloads according to a higher-level policy.  However, many fabrics do not do this.  They may have some desirable fabric features, but for edge policies, operators must still log into individual devices to achieve their goals.  This, of course, is the fundamental problem of networking that SDN hopes to solve:  Let intelligent software perform these menial tasks, and let the organization, or the operator, express network-wide policy to the software.
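To make that concrete, here is a minimal sketch in Python of what “express network-wide policy to the software” might look like.  This is not Plexxi’s actual software or API; every name below is invented for illustration.  The point is only that one policy statement fans out into per-device rules without anyone logging into each box.

```python
# Hypothetical policy compiler: one network-wide intent becomes a
# concrete rule on every switch. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Policy:
    workload: str    # e.g. "voice"
    treatment: str   # e.g. "prefer low-latency paths"

def compile_policy(policy, switches):
    """Fan a single network-wide intent out into per-switch rules."""
    return {sw: f"match {policy.workload} -> {policy.treatment}"
            for sw in switches}

rules = compile_policy(Policy("voice", "prefer low-latency paths"),
                       ["leaf1", "leaf2", "spine1", "spine2"])
for switch, rule in rules.items():
    print(f"{switch}: {rule}")
```

The operator touched one policy object; the software touched every device.  That division of labor is the whole point.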

The Value of the Network

What is the value of the network?  Fundamentally, the network has one feature that matters: paths.  The job of the network, first and foremost, is to facilitate the movement of data between its edges.  The more paths a network has, the better.  We even see this in leaf-and-spine designs.
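To see why paths are the unit of value, consider a two-tier leaf-and-spine fabric where every leaf connects to every spine.  The sketch below (pure Python, with a made-up topology) enumerates the simple paths between two leaves; every spine you add is another path for every leaf pair at once.

```python
def all_paths(graph, src, dst, seen=None):
    """Enumerate simple paths in an adjacency-dict graph via DFS."""
    seen = (seen or set()) | {src}
    if src == dst:
        return [[dst]]
    paths = []
    for nxt in graph[src]:
        if nxt not in seen:
            paths += [[src] + p for p in all_paths(graph, nxt, dst, seen)]
    return paths

# Two leaves attached to n spines: each new spine adds one more path.
for n in (2, 4, 8):
    graph = {"leaf1": [f"spine{i}" for i in range(n)],
             "leaf2": [f"spine{i}" for i in range(n)]}
    for i in range(n):
        graph[f"spine{i}"] = ["leaf1", "leaf2"]
    print(n, "spines ->", len(all_paths(graph, "leaf1", "leaf2")), "paths")
```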

Administrative, control, voice, video, bulk, and garbage are just some of the workload types requiring different treatment in the network.  When you have fewer paths in the network, it becomes increasingly difficult to manage the workload conflict that arises when multiple types of traffic converge on an egress interface.  Quality-of-Service has always represented a sort of white flag of surrender before conflict even occurs, and let’s be honest, it’s been an absolute nightmare to manage on the ground.  Aggregate flow characteristics change throughout the day (burstiness, packet size distribution, differing workload types), making static policies difficult to implement.  The best you can hope for is a policy that represents the lowest-common-denominator compromise.
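A toy model makes the lowest-common-denominator problem visible.  The numbers and class names below are invented; the sketch just shows how a static split of one egress interface that fits the morning traffic mix starts dropping traffic when the mix shifts in the afternoon, even while other classes leave capacity idle.

```python
CAPACITY = 10  # Gbps on a single egress interface
STATIC_SHARE = {"voice": 0.2, "video": 0.3, "bulk": 0.5}  # fixed QoS policy

def served(demand):
    """Each class gets at most its fixed share, no matter what sits idle."""
    return {c: min(d, STATIC_SHARE[c] * CAPACITY) for c, d in demand.items()}

morning   = {"voice": 1, "video": 2, "bulk": 5}  # fits the static policy
afternoon = {"voice": 1, "video": 6, "bulk": 3}  # mix shifted during the day

for name, demand in (("morning", morning), ("afternoon", afternoon)):
    out = served(demand)
    dropped = {c: demand[c] - out[c] for c in demand if demand[c] > out[c]}
    print(name, "-> served:", out, "| dropped:", dropped or "none")
```

In the afternoon mix, video loses 3 Gbps even though voice and bulk leave 3 Gbps of the interface idle.  A static policy cannot chase the traffic.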

Even when you have multiple paths in the network, it’s virtually impossible to manage and move differing workload types with legacy tools.  How frustrating it has been that spanning tree drastically cut the usable bandwidth in the data center.  And even if we could use all of that bandwidth, how would we move only some workloads onto particular paths?  Imagine doing this when you have multiple types of workloads just within HTTP!  Transferring files, web traffic, API calls for automation systems… all in the same encapsulation.
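The spanning-tree cost is easy to quantify with back-of-the-envelope arithmetic.  The sketch below (fabric sizes are arbitrary) counts how many links a full leaf-and-spine mesh provides versus how many a single spanning tree leaves forwarding: a tree keeps exactly nodes-minus-one links and blocks everything else.

```python
# Why spanning tree hurts: on any connected graph it keeps exactly
# (nodes - 1) links active, no matter how many parallel paths exist.
def spanning_tree_loss(num_leaves, num_spines):
    links = num_leaves * num_spines  # full leaf-and-spine mesh
    nodes = num_leaves + num_spines
    forwarding = nodes - 1           # links a single tree leaves active
    return links, forwarding, links - forwarding

for leaves, spines in ((4, 2), (8, 4), (16, 8)):
    links, fwd, blocked = spanning_tree_loss(leaves, spines)
    print(f"{leaves} leaves x {spines} spines: "
          f"{links} links, {fwd} forwarding, {blocked} blocked")
```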

QoS is obviously the product of legacy network thinking:  Fewer paths and indiscriminate workload placement, resulting from the erroneous belief that universal reachability for packets is the primary goal of the network.  Build just enough paths to be redundant, put the routes in… and hope for the best.  Are we done being amazed that we can make packets go yet?  Can’t we do better than making a sequel to “The Hangover” because we can ping?  Aren’t we tired of failing to deal with the complexity of networking as a whole?  Then let’s stop using legacy stuff to accomplish our goals.

Network Commodifabricization

The value of the network goes up as more paths are added.  However, the old way of workload placement in the network, as well as the old way of handling workload conflict, just isn’t going to be manageable by hand.  Adding value to the network should be as simple as adding paths, and adding paths should actually be simple both physically and logically.  A commodity network means lots of paths, which are the primary value of the network to begin with.  It also means intelligent software that manages the many types of workloads on the network by distributing them across those paths.  That same software will present an intuitive policy interface to humans who just want “the network” to work.
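To put a hand-wavy shape on that software (nothing here reflects any vendor’s implementation), it is conceptually a placement function: given more paths, it has more places to isolate each workload type, while the human-facing policy stays a single statement.

```python
# Illustrative round-robin placement of workload types across paths;
# the strategy and all names are made up for the sketch.
paths = ["path-A", "path-B", "path-C", "path-D"]
workloads = ["voice", "video", "bulk", "control", "garbage"]

placement = {w: paths[i % len(paths)] for i, w in enumerate(workloads)}
print(placement)

# Adding a path is adding value: the placement loop immediately has
# one more place to put work, with no change to the policy interface.
paths.append("path-E")
placement = {w: paths[i % len(paths)] for i, w in enumerate(workloads)}
print(placement)
```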

Where does that leave the current trend of some companies seeking to commoditize on legacy networking?  Well, like cloud, it would seem that many folks are banking on the idea that IT is done evolving.  Including networking!  Obviously, this is not the case.  What we are experiencing right now is the “big crunch” of IT.  If the mainframe represented some primordial IT state that exploded into the constituent pieces of the IT universe, like the big bang of tech, then the data center of the future represents the big crunch of these pieces.  Lots of intermediate layers will disappear, from the guest OS of a VM to maybe even the IP protocol!  Will Linux-based switches and routers with a subset of legacy network features really have a role here?  Perhaps in the short term, but not for long.

Intuitive network fabrics are the true start down the path of commoditization, making the real value of the network directly and easily manageable.

[Fun fact:  One time, I drove a bulldozer into a pond.  People get really mad when you do that.  Also, it makes the bulldozer inoperable.  Hmmm... if only there had been a "path" around the pond.]

More Stories By Michael Bushong

The best marketing efforts combine deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills, having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
