Why the Cloud Is Disrupting Everything

Cloud is accelerating disruption by changing how data centers deploy, develop & consume everything from software & hardware

Is it just me, or has there been an explosion of buzzwords lately? Don't get me wrong, the IT industry normally innovates at a crazy pace, but things seem to be evolving faster than ever, and a fundamental change in the way things are done is underway. We can attribute this change to one thing: the cloud. Cloud computing is by no means new, but in 2014 it has come into its own.

Cloud computing is accelerating disruption by changing how data centers deploy, develop and consume everything, from software and hardware to the products and services they offer their customers.

Let's take a look at a few of these hot technologies and why you'll be adopting some of them, whether you realize it now or not.

Software-Defined Networking (SDN) - What Is It Anyway?
There are many different descriptions of SDN floating around, partly because the technology is relatively new and partly because it means different things to different vendors. Until the market matures, that confusion will probably persist. The following explanation provides a good foundation for understanding SDN.

SDN decouples the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). The inventors and vendors of these systems claim that this simplifies networking.[1] Through the controller, network administrators can quickly and easily make and push out decisions on how the underlying systems (switches, routers) of the forwarding plane will handle the traffic.

SDN requires some method for the control plane to communicate with the data plane. One such mechanism, OpenFlow, is often misunderstood to be equivalent to SDN, but other mechanisms could also fit into the concept.
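
To make that controller-to-switch conversation concrete, here is a minimal sketch in Python using the open source Ryu controller framework. Ryu is not mentioned above; it is just one of several OpenFlow controllers. The sketch installs a default "table-miss" rule telling a newly connected switch to send unmatched packets up to the controller:

    # A minimal OpenFlow 1.3 controller app, sketched with the Ryu framework.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class MinimalController(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        # Fires when a switch connects and announces its capabilities
        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser
            # Table-miss entry: match everything at the lowest priority...
            match = parser.OFPMatch()
            # ...and punt unmatched packets to the controller for a decision
            actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                              ofproto.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(
                ofproto.OFPIT_APPLY_ACTIONS, actions)]
            datapath.send_msg(parser.OFPFlowMod(
                datapath=datapath, priority=0, match=match, instructions=inst))

Run it with ryu-manager against any OpenFlow 1.3 switch (Open vSwitch, for example); the point is simply that forwarding decisions now live in ordinary application code.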

By separating the control plane from the forwarding plane, data centers can reduce costs and gain agility, and who wouldn't want or need that? SDN does this by:

  1. Reducing reliance on expensive, purpose-built, ASIC-based networking hardware and the associated pay-as-you-grow models that often result in costly overprovisioning. In other words, you can unlock more value from your network.
  2. Increasing programmability, which makes networks easier to scale, design and manage (see the sketch after this list).
  3. Delivering agility and flexibility. Everybody needs them, everybody wants them, and SDN provides them: organizations can deploy new infrastructure, applications and services faster than a traditional network would allow.
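
As a sketch of that programmability (point 2 above), here is what driving a controller's northbound REST API might look like. Everything below, the controller hostname, the /flows endpoint and the JSON schema, is hypothetical; real controllers such as OpenDaylight expose similar, though not identical, interfaces:

    import requests

    # Hypothetical northbound API endpoint of an SDN controller
    CONTROLLER = "http://sdn-controller.example.com:8181"

    # Ask the controller to steer traffic between two hosts
    flow_request = {
        "src": "10.0.0.10",
        "dst": "10.0.0.20",
        "priority": 100,
        "action": "forward",
    }
    resp = requests.post(CONTROLLER + "/flows", json=flow_request, timeout=5)
    resp.raise_for_status()
    print("Flow installed:", resp.json())

The network change is just an HTTP call, which is exactly what makes it scriptable, testable and repeatable.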

OpenFlow
People often use OpenFlow and SDN interchangeably, but they are not the same. OpenFlow is only one element in the overall SDN architecture: an open standard communications protocol that enables the control plane to interact with the forwarding plane. The standard is steered by the Open Networking Foundation (ONF). OpenFlow is not the only option available or in development for SDN, either; the open source network operating system ONOS, led by the Open Networking Lab (ON.Lab), is another approach.
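
At its heart, OpenFlow models forwarding as match-action flow entries in a switch's tables. The snippet below is illustrative only, plain Python data standing in for real protocol messages rather than any actual OpenFlow API:

    # Illustrative shape of an OpenFlow-style match-action flow entry
    flow_entry = {
        "priority": 200,
        "match": {                  # fields compared against packet headers
            "in_port": 1,
            "eth_type": 0x0800,     # IPv4
            "ipv4_dst": "10.0.0.20",
        },
        "actions": [                # what to do with matching packets
            {"type": "OUTPUT", "port": 2},
        ],
        "timeouts": {"idle": 30, "hard": 300},  # seconds until expiry
    }

The controller's job is to compute entries like this; the protocol's job is to carry them down to the switch.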

Network Functions Virtualization (NFV)
This is another term that can mean different things to different people, depending on the industry. For our purposes, we'll focus on what it means to the telecom industry. To understand what has propelled the development of NFV, let's look at how the telecom industry has traditionally deployed its networks. For more than 30 years, telecoms have relied on specially built systems from vendors such as Cisco, F5 and Juniper, some featuring vendor-developed ASICs and proprietary operating systems (Cisco IOS, for example), with that technology built into base stations, routers and Ethernet switches, all optimized for their use. The proprietary nature of all of this translates into very expensive systems and slower development cycles.

Fast forward to today's NFV initiative, which is spearheaded by several of the major telecommunications service providers. The value of NFV is in creating a standards-based approach to virtualizing key telecom applications, radically changing the way telecom networks are built and managed. By doing this, NFV enables those apps to run on industry-standard servers. And that, of course, translates into big cost savings and more flexibility than was previously possible.
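
To illustrate the shift, the sketch below is a toy "virtual network function": a one-way UDP forwarder running as an ordinary Python process on a commodity server, standing in for work that once required a purpose-built appliance. The addresses and ports are made up, and a real VNF would do far more (filtering, inspection, load balancing, handling return traffic):

    # A toy "VNF": forward UDP datagrams to an upstream service.
    import socket

    LISTEN = ("0.0.0.0", 5000)      # where traffic arrives
    UPSTREAM = ("10.0.0.50", 5000)  # where the real service lives

    inbound = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    inbound.bind(LISTEN)

    while True:
        data, client = inbound.recvfrom(65535)
        # Forward each datagram upstream; a production VNF would also
        # inspect, filter or load-balance at this point.
        inbound.sendto(data, UPSTREAM)

Because it is just software, this function can be scaled, migrated or replaced like any other application, which is precisely the economics NFV is after.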

What has made NFV suitable for use with commercial off-the-shelf (COTS) equipment are advances in the underlying technology, including SDN, faster fabrics (40Gb Ethernet) and more powerful processors.

NFV can be implemented without SDN, although the two solutions can work together: NFV can support SDN by providing the infrastructure on which the SDN software runs. Both technologies also share a common objective, which is to run on lower-cost COTS servers and switches.

Source: ETSI whitepaper, "Network Functions Virtualisation: An Introduction, Benefits, Enablers, Challenges & Call for Action," October 2012 (etsi.org).

The OpenCompute Project (OCP)
The OCP is a Facebook-led initiative to build computing infrastructures that are energy efficient, easily scalable and low cost. The initiative was born out of the design and build of Facebook's massive data center in Prineville, Oregon. Following in the footsteps of open source software, OpenCompute designs are open, shared and available for all to use. The OCP spans software, servers, storage, networking, and data center designs. By utilizing OCP open hardware designs, Facebook claims its Prineville data center delivered 38 percent better efficiency and was 24 percent less expensive to build and run than comparable state-of-the-art data centers built on proprietary components. Pretty compelling stuff.

As you can see, there are recurring themes spanning all the aforementioned technologies. In case you missed them: low-cost, energy efficient, non-proprietary, open, scalable, flexible, and agile. Even if you are not looking at redesigning your data center now, you may need to in order to stay competitive.

No matter what technology you choose to deploy, one thing is for sure: the cloud is stressing I/O, and I/O bottlenecks will shift from where they are today. The further data gets from where it is processed, the more latency becomes a challenge. To plan for the barrage of new technology coming your way, look for technologies that reduce latency, such as RDMA over Converged Ethernet (RoCE). Also seek out solutions that enable flexible use of resources and don't lock you into long-term commitments such as specialized appliances, infrastructure and proprietary software, so that you are in a better position to take advantage of new innovations as they become available. Now strap yourself in and get ready for the ride.
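
Before you buckle up, one practical first step is to baseline the latency you have today, so you can tell whether a new technology actually helps. The sketch below measures round-trip time over ordinary TCP sockets against a hypothetical echo service; RoCE itself requires RDMA-capable NICs and verbs libraries, which a quick probe like this deliberately avoids:

    # Rough TCP round-trip latency probe against an echo service.
    import socket
    import time

    HOST, PORT = "10.0.0.50", 7     # hypothetical echo server
    samples = []
    for _ in range(100):
        with socket.create_connection((HOST, PORT), timeout=2) as s:
            start = time.perf_counter()
            s.sendall(b"ping")
            s.recv(4)               # wait for the 4-byte echo
            samples.append((time.perf_counter() - start) * 1_000_000)

    samples.sort()
    print("median round-trip: %.0f microseconds" % samples[len(samples) // 2])

Numbers like these, taken before and after a redesign, tell you whether your bottlenecks really moved.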

Reference:

1. Open Networking Foundation, "Software-Defined Networking: The New Norm for Networks," April 13, 2012. Retrieved August 22, 2013.

More Stories By Barbara Porter

Barbara Porter is Senior Product Marketing Manager at Emulex. She has been with Emulex since 2009, bringing more than 15 years of experience to the company. Prior to Emulex, she was a product line manager at Quantum and a software marketing manager at MSC Software. Barbara holds a Bachelor of Commerce degree from Griffith University in Australia.
