
The Six Requirements for Data Center Networks

By Mat Mathews

One way or another, every data center network exhibits at least six functional areas that its operators need to engineer, implement, and operate, each with its own set of needs and requirements. Likewise, most of the SDN and virtualized network solutions available today or in development aim to address issues in one or more of these areas to improve their effectiveness, cost, automation, or integration. Yet some areas receive an inordinate amount of attention, and those areas do not necessarily offer the most opportunity for improvement. Let's look at these six requirements in order of their potential to bring new levels of effectiveness to data centers.

1. Edge Switching (inter-server, or more generically, inter-endpoint)
Edge switching loosely covers the function of providing switching between endpoints, whether they are virtual servers, physical servers, storage devices, or terminating services devices (load balancers, firewalls, etc.). It is important to note that in a virtualized server environment, there are typically two layers of edge: a set of virtual switches that connect VMs together, and a set of physical switches that connect the physical hosts.
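To make the two-layer edge concrete, here is a minimal Python sketch (all names are hypothetical, not any vendor's API) modeling VMs attached to a per-host virtual switch, with each host uplinked to a physical ToR switch. Traffic between VMs on the same host never has to leave the vSwitch.

```python
# Hypothetical model of the two-layer edge in a virtualized data center:
# VMs attach to a virtual switch on each host, and each host's uplink
# attaches to a physical top-of-rack (ToR) switch.
from dataclasses import dataclass, field

@dataclass
class VirtualSwitch:
    host: str
    vms: list[str] = field(default_factory=list)

@dataclass
class TorSwitch:
    name: str
    hosts: list[VirtualSwitch] = field(default_factory=list)

def edge_hops(vm_a: str, vm_b: str, tor: TorSwitch) -> int:
    """Count edge switching elements traversed between two VMs in one rack."""
    for vswitch in tor.hosts:
        if vm_a in vswitch.vms and vm_b in vswitch.vms:
            return 1          # same host: only the vSwitch is traversed
    return 3                  # different hosts: vSwitch -> ToR -> vSwitch

tor = TorSwitch("tor-1", [
    VirtualSwitch("host-1", ["vm-a", "vm-b"]),
    VirtualSwitch("host-2", ["vm-c"]),
])
print(edge_hops("vm-a", "vm-b", tor))  # 1
print(edge_hops("vm-a", "vm-c", tor))  # 3
```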

Much of the industry's attention and focus has revolved around edge switching, possibly because this is the area most visible to customers in terms of cost. Technologies like distributed virtual switches and network overlays, and trends like white box switching and disaggregated switching software, all aim to lower either the capital or operational costs of the edge. Much of this effort is predicated on the decades of near-monopolistic control that major incumbents held over switching infrastructure. Yet it is important to note that in the age of merchant silicon (which is not locked to a single vendor and is common to nearly every edge switch on the market), open source (or semi-open-source) switching operating systems and stacks, and virtual switches, much of that control has been eroded and costs have rightly come down. It will be interesting to see if the industry starts to move its attention to other areas that could offer more potential gain in overall cost savings.

2. Edge Policy
Edge policy refers to the configuration of those edge switches that allows some type of policy to be enacted on the endpoints connecting to the network. "Policy" here can mean anything from basic port configuration up to SLA-level configuration that affects the behavior of the traffic emanating from the connected endpoints.

Edge policy has long been a troublesome area for networks (or really any system) because enforcing a policy across a disparate set of systems creates consistency challenges. The key to most policy efforts is a simplified way to express the policy and a highly efficient way to distribute it to many edges. The virtual switching layer (vSwitches) seems like a natural place to solve edge policy, given the ability to quickly iterate the software in the edge devices. The folks at VMware seem to understand this and are busily working on policy enforcement capabilities in their NSX product, tying them to a policy expression capability in OpenStack (the Congress project). The important question is whether their approach can be stretched across both physical and virtual environments for a truly seamless edge policy approach.
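As a rough illustration of those two halves, express once and distribute everywhere, here is a minimal Python sketch. The names are hypothetical; this is not the NSX or Congress API.

```python
# Hypothetical sketch of edge policy: a compact way to express a policy,
# and a fan-out step that pushes the same policy to every edge switch,
# virtual or physical, so enforcement stays consistent.
from dataclasses import dataclass

@dataclass(frozen=True)
class EdgePolicy:
    group: str          # logical endpoint group, e.g. "web-tier"
    vlan: int           # basic port configuration to apply
    max_mbps: int       # SLA-level rate limit for traffic from the group

class EdgeSwitch:
    def __init__(self, name: str):
        self.name = name
        self.policies: dict[str, EdgePolicy] = {}

    def apply(self, policy: EdgePolicy) -> None:
        # Idempotent apply keeps re-distribution cheap and consistent.
        self.policies[policy.group] = policy

def distribute(policy: EdgePolicy, edges: list[EdgeSwitch]) -> None:
    """Push one centrally expressed policy to every edge."""
    for edge in edges:
        edge.apply(policy)

edges = [EdgeSwitch("vswitch-host-1"), EdgeSwitch("tor-1")]
distribute(EdgePolicy(group="web-tier", vlan=100, max_mbps=500), edges)
```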

3. Fabric Switching (inter-[rack | row | cage | pod | data center])
Fabric switching refers to the switching of traffic between devices or functional blocks that do not connect endpoints directly. This could be the spine switches connecting multiple ToR leaves together, a core switching layer that connects multiple pods, or even a switching capability that connects multiple data centers. The basic attribute differentiating fabric switching from edge switching is that the fabric typically does not connect directly to endpoints, except for those providing transit services (like firewalls or load balancers).
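A small sketch of a hypothetical two-tier leaf-spine fabric helps make this concrete: every inter-leaf path transits exactly one spine, so the number of equal-cost fabric paths between any two leaves equals the number of spines.

```python
# Hypothetical two-tier leaf-spine fabric: leaves connect endpoints,
# spines connect leaves, and adding a spine adds one more equal-cost
# path between every pair of leaves (the scale-out dimension).
import itertools

def fabric_paths(leaves: list[str], spines: list[str]):
    """Enumerate leaf -> spine -> leaf paths for every pair of leaves."""
    paths = {}
    for a, b in itertools.combinations(leaves, 2):
        paths[(a, b)] = [(a, s, b) for s in spines]
    return paths

paths = fabric_paths(["leaf-1", "leaf-2", "leaf-3"], ["spine-1", "spine-2"])
print(len(paths[("leaf-1", "leaf-2")]))  # 2 equal-cost paths, one per spine
```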

There has been a surprising lack of attention to (and a similar lack of concern about?) fabric switching in the industry. The default path seems to be higher-density spine switches leveraging commodity silicon cost/performance curves. While "brute forcing" it seems like the path of least resistance, the history of the "scale up" approach in other IT contexts suggests that it usually works, until it breaks. I'll have a future post on this in more detail, but suffice it to say that we ought to see more attention paid to scale-out solutions that attempt to bring to the network the same capabilities that multi-core processors brought to servers and compute.

4. Fabric Policy
Similar to edge policy, fabric policy refers to the configuration of fabric devices that allows some type of policy to be enacted in the inter-* network. Since most connectivity policy (like access control and port/VLAN configuration) happens on the edge devices, much of fabric policy relates specifically to how the overall network behaves in accordance with specific business imperatives, such as service level agreements or the treatment of regulated data in transit.

Fabric policy can be done implicitly (essentially deferred) by treating all paths through the network as equal and load balancing all traffic evenly across them; Equal Cost Multipathing (ECMP) is an example of this implicit or deferred fitting. It can be done explicitly, with complex mechanisms such as call admission control (CAC) that are typically not found in most data-oriented networks. It can also be done algorithmically (see "fitting" below). As networks become "software defined," with a central controller entity, and as applications and users require more heterogeneous treatment, this function becomes both easier to accomplish and more important; in conjunction with fitting, it can be a very powerful way to drive effective utilization of network resources.
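The sketch below contrasts the implicit and explicit approaches under simplified assumptions (hypothetical path names, per-flow hashing): ECMP hashes every flow uniformly across equal-cost paths, while an explicit policy weights paths for a traffic class.

```python
# Implicit fabric policy (ECMP) vs. an explicit, weighted policy.
# All path names are hypothetical.
import hashlib

PATHS = ["spine-1", "spine-2", "spine-3", "spine-4"]

def ecmp_path(flow_id: str) -> str:
    """Implicit policy: all paths treated as equal, chosen by flow hash."""
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return PATHS[digest % len(PATHS)]

def weighted_path(flow_id: str, weights: dict[str, int]) -> str:
    """Explicit policy: a controller skews a traffic class toward chosen
    paths, e.g. to honor an SLA or keep regulated data on audited links."""
    expanded = [p for p, w in weights.items() for _ in range(w)]
    digest = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16)
    return expanded[digest % len(expanded)]

print(ecmp_path("10.0.0.1->10.0.1.9:443"))
print(weighted_path("10.0.0.1->10.0.1.9:443",
                    {"spine-1": 3, "spine-2": 1}))  # 3:1 preference
```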

Yet there has been almost no talk in the industry about fabric policy, which is surprising. After all, the bulk of the resources the network has to offer reside in the fabric, and controlling those resources via policy seems ripe for efficiency and performance improvements. Fabric policy would allow a user to express how inter-rack, inter-pod, or even inter-data center capacity is allocated, not in a pre-determined, engineered way, but in a just-in-time or even predictive way that follows the actual usage patterns of the data center network.

5. Fitting
Fitting is not a term generally familiar to folks who think about networking, and it may currently be unique to Plexxi's view of the world (although Cisco's "declarative networking" concept is similar, at least in philosophy). Yet fitting is something that is almost always done, even if only implicitly. Most networks are built today with a gross-level understanding of capacity needs, segmentation needs, and so on. The network is then engineered to provide, in aggregate, these capabilities via a set of network resources. The concept of fitting is that we explicitly define what each user of the network needs (a user could be an application, a set of applications, a site, or really any arbitrary grouping) and, based on its business-centric attributes, best fit the network resources to that user. Fitting is hard to do manually, or in traditional legacy networks where traffic is looked at on a packet-by-packet or flow-by-flow basis. But in more evolved "software-defined" networks, it becomes much easier to build a higher-level view of the users of the network and to allow the software entities (i.e., the controller) to algorithmically determine how best to dole out the resources based on the information they have about the users.
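As a rough illustration, and emphatically not Plexxi's actual algorithm, here is a minimal greedy "fit" in Python: users declare a demand and a business criticality, and a controller allocates finite fabric capacity to the most critical users first.

```python
# Hypothetical greedy fit: allocate finite fabric capacity to network
# users in order of business criticality rather than first-come-first-served.
from dataclasses import dataclass

@dataclass
class NetworkUser:
    name: str           # an application, a set of apps, a site, etc.
    demand_gbps: int    # declared capacity need
    criticality: int    # business priority; higher is more critical

def fit(users: list[NetworkUser], capacity_gbps: int) -> dict[str, int]:
    """Grant capacity to the most critical users first."""
    allocation: dict[str, int] = {}
    for user in sorted(users, key=lambda u: u.criticality, reverse=True):
        granted = min(user.demand_gbps, capacity_gbps)
        allocation[user.name] = granted
        capacity_gbps -= granted
    return allocation

users = [
    NetworkUser("trading-app", demand_gbps=40, criticality=10),
    NetworkUser("analytics", demand_gbps=60, criticality=5),
    NetworkUser("backup", demand_gbps=50, criticality=1),
]
print(fit(users, capacity_gbps=100))
# {'trading-app': 40, 'analytics': 60, 'backup': 0}
```

Even this trivial version shows the shift: the operator expresses business attributes, and the controller, not manual engineering, decides the placement.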

The concept of fitting is extremely powerful, especially in a software-defined world. As we need to leverage networks for a variety of users with a broad spectrum of business criticality, fitting allows us to explicitly put resources where they will have the most benefit rather than following the typical networking approach of "spray and pray." This presents enormous opportunities not only for cost savings but for driving business differentiation.

6. Integration and Automation
Finally, all networks need to be integrated with the rest of the world, and increasingly they are being automated by it. Integration is typically thought of as a way to drive edge policy, e.g., leveraging OpenStack Neutron or ML2 plug-ins to automate VLAN provisioning on an edge port. Done correctly, however, integration can drive not only edge policy but also fabric policy and fitting. Most companies are moving toward a data center model that is completely "lights out," and the network need not be an exception. Ultimately the data center network provides services to applications, and as long as those applications can express their needs across the edge and the fabric, the network should be able to provide those services with a minimal amount of hand-holding.

A well-integrated network ought to be able to express its capabilities across the edge and the fabric as a set of abstract "primitives" that can easily be driven from external systems. The network should also be able to effect different behaviors via policy and efficiently fit available network resources to the most critical business needs. And all of this needs to be done in ways that are easily automatable.
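A minimal sketch of what such primitives might look like, with entirely hypothetical names (this is not the Neutron ML2 driver interface):

```python
# Hypothetical abstract primitives a network could expose so that external
# systems (e.g. an orchestrator) can drive edge policy, fabric policy,
# and fitting without manual hand-holding.
class NetworkService:
    def attach_endpoint(self, port: str, vlan: int) -> None:
        # Edge policy primitive: the kind of step an OpenStack ML2 driver
        # would automate when a VM port is created.
        print(f"provision vlan {vlan} on {port}")

    def request_capacity(self, src: str, dst: str, gbps: int) -> None:
        # Fabric policy / fitting primitive: declare a need and let the
        # controller decide how to satisfy it with fabric resources.
        print(f"fit {gbps} Gb/s between {src} and {dst}")

net = NetworkService()
net.attach_endpoint(port="tor-1:eth12", vlan=100)    # driven by orchestration
net.request_capacity(src="pod-1", dst="pod-2", gbps=20)
```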

Summary
When assessing what a data center network needs, it is important to think about all six of these functional areas and the potential opportunity to drive cost savings or business differentiation in each of them. While much of the industry's attention is focused right now on reducing the cost of the first layer of switching (edge switching), the other areas offer more dramatic cost savings and differentiation opportunities for businesses looking for an IT advantage.

The post The 6 Requirements for Data Center Networks appeared first on Plexxi.


More Stories By Mat Mathews

Visionary solutions are built by visionary leaders. Plexxi co-founder and Vice President of Product Management Mat Mathews has spent 20 years in the networking industry observing, experimenting and ultimately honing his technology vision. The resulting product — a combination of traditional networking, software-defined networking and photonic switching — represents the best of Mat's career experiences. Prior to Plexxi, Mat held VP of Product Management roles at Arbor Networks and Crossbeam Systems. Mat began his career as a software engineer for Wellfleet Communications, building high-speed Frame Relay switches for the carrier market. Mat holds a Bachelor of Science in Computer Systems Engineering from the University of Massachusetts at Amherst.
