
Software Defined Networking | Part 2 By @MJannery | @CloudExpo [#SDN #Cloud]

SDN technologies are broadly split into two fundamentally different paradigms - "overlay" SDN and "underlay" SDN

In the first part of this blog series on SDN, I gave a quick background overview.  This part of the series covers overlay SDN and underlay SDN.

SDN technologies are broadly split into two fundamentally different paradigms - "overlay" SDN and "underlay" SDN.  With overlay SDN, the SDN is implemented on top of an existing physical network.  With underlay SDN, the fabric of the underlying network is reconfigured to provide the paths required for inter-endpoint SDN connectivity.

Overlay SDN solutions (e.g., VMware NSX and Contrail) use tunneling technologies such as VXLAN, STT and GRE to create endpoints within the hypervisor's virtual switches, and rely on the existing network fabric to transport the encapsulated packets to the relevant endpoints using existing routing and switching protocols.  One advantage of using encapsulation is that only the tunneling protocol end-point IP addresses (TPEP IPs) are visible in the core network - the IP addresses of the intercommunicating VMs are not exposed (of course, the downside is that without specific VXLAN awareness, traffic sniffers, flow analyzers, etc. can only report on TPEP IP-IP conversations, not inter-VM flows).  Another advantage of encapsulated overlay networks is that there is no need for tenant segregation within the core (e.g., using MPLS VPNs, 802.1q VLANs, VRFs, etc.) as segregation is implicitly enforced by the tunneling protocol and the TPEPs.
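To make the encapsulation idea concrete, here is a minimal Python sketch of VXLAN framing as defined in RFC 7348: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI) is prepended to the tenant's Ethernet frame, and only the outer (TPEP) addresses are ever seen by the core.  This is an illustrative toy, not a vSwitch implementation - the outer UDP/IP headers and the VNI values shown are assumptions for the example.

```python
import struct

VXLAN_FLAGS = 0x08  # "I" bit set: the VNI field is valid (RFC 7348)

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame.

    The outer UDP/IP headers (which carry only the TPEP addresses)
    would be added by the hypervisor vSwitch before transmission.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit value")
    # Byte layout: flags, 3 reserved bytes, 24-bit VNI, 1 reserved byte
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Recover the tenant VNI and inner frame at the far-end TPEP."""
    if packet[0] != VXLAN_FLAGS:
        raise ValueError("invalid VXLAN header")
    vni = int.from_bytes(packet[4:7], "big")
    return vni, packet[8:]

# Two tenants can share the same core: segregation comes purely from
# their distinct VNIs, with no VLANs or VRFs needed in the fabric.
pkt = vxlan_encap(5001, b"<inner ethernet frame>")
vni, frame = vxlan_decap(pkt)
```

Note that the core switches forward `pkt` based only on the outer headers; the VNI and inner frame are opaque to them, which is exactly why non-VXLAN-aware flow analyzers see only TPEP-to-TPEP conversations.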

One of the major drawbacks of overlay SDN (such as NSX) is that it has little, if any, network awareness - i.e., it cannot control, influence or see how traffic flows through the network from one TPEP to another.  This has serious implications for traffic engineering, fault isolation, load distribution, security, etc.  Proponents of overlay SDN often assert that since the datacenter network fabric is invariably highly resilient and significantly over-provisioned, this is not a significant issue.  The argument is less convincing when heading out of the datacenter into the campus and across the WAN.

Underlay SDN technologies (OpenFlow, Cisco ACI, QFabric, FabricPath, etc.) directly manipulate network component forwarding tables to create specific paths through the network - i.e., they intrinsically embed the end-to-end network paths within the network fabric.  The SDN controller is responsible for directly manipulating network element configuration to ensure that the requirements presented at the controller's northbound API are correctly orchestrated.  With intimate knowledge of network topology, configured paths through the fabric and link-level metrics (e.g., bandwidth, latency, cost), much more efficient utilization of network infrastructure can be achieved using more complex route packing algorithms - e.g., sub-optimal routing.  Another advantage of underlay SDN is that the controller dictates exactly which path through the network each traffic flow traverses, which is invaluable for troubleshooting, impact analysis and security.
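The controller's job described above - turn a topology plus link metrics into per-switch forwarding entries - can be sketched in a few lines of Python.  This is a hypothetical illustration, not any real controller's API: the switch names, link metrics and the shape of the flow-entry dicts are all assumptions, and the path computation is plain Dijkstra rather than a production route-packing algorithm.

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra over {(node, node): metric} links; returns the node list."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    dist, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for nxt, cost in graph.get(node, []):
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                heapq.heappush(heap, (d + cost, nxt, path + [nxt]))
    return None

def install_flow_entries(links, src, dst, match):
    """Embed an end-to-end path as one forwarding entry per switch."""
    path = shortest_path(links, src, dst)
    return [{"switch": hop, "match": match, "output_to": nxt}
            for hop, nxt in zip(path, path[1:])]

# The controller knows the full topology and link metrics (e.g. latency).
links = {("s1", "s2"): 10, ("s2", "s4"): 10,
         ("s1", "s3"): 1, ("s3", "s4"): 1}
entries = install_flow_entries(links, "s1", "s4",
                               {"ipv4_dst": "10.0.0.4"})
# The s1 -> s3 -> s4 path wins on total metric, and the controller knows
# exactly which switches this flow traverses - unlike an overlay TPEP.
```

Because the controller itself chose and programmed the path, answering "where does flow X go?" is a lookup in its own state rather than a packet-capture exercise, which is the troubleshooting and impact-analysis advantage noted above.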

The industry is currently split between network architects who prefer overlay networks and those who prefer underlay networks.  It is not a decision to be taken lightly, as it has far-reaching implications for complexity, troubleshooting, monitoring, SLA compliance, performance management, RCA and cost.

The next installment in this series will cover whether an all-virtual environment is ideal or whether some physical hardware is still needed.

More Stories By Michael Jannery

Michael Jannery is CEO of Entuity. He is responsible for setting the overall corporate strategy, vision, and direction for the company. He brings more than 30 years of experience to Entuity with 25 years in executive management.

Prior to Entuity, he was Vice President of Marketing for Proficiency, where he established the company as the thought, technology, and market leader in a new product lifecycle management (PLM) sub-market. Earlier, Michael held VP of Marketing positions at Gradient Technologies, where he established them as a market leader in the Internet security sector, and Cayenne Software, a leader in the software and database modeling market. He began his career in engineering.
