Overcoming Network Limitations

More is being required of network infrastructure today than ever before, due in part to the changing latency and bandwidth needs of modern applications. Wide area networks ("WANs") are feeling the pressure, especially those that combine many technologies from different service providers, span diverse geographies and are stretched to the limit by increased video and cloud application usage.

In view of these challenges, hybrid WAN architectures with advanced application-level traffic routing are of particular interest. They combine the reliability of private lines for critical business applications with the cost-effectiveness of broadband/Internet connectivity for non-critical traffic.

Architectures like this can be difficult to scale over the existing network, and most of today's network management tools can't manage them. Most still apply blocks of configuration data to network devices to enable features that, in turn, implement an overall network policy. To adjust configuration data for differences in hardware and OS/firmware levels, these scripts use "wildcards" that stand in for device-specific values. The scripts are heavily tested, carefully curated and subject to stringent change management procedures: the tiniest mistake can bring a network down, resulting in potentially disastrous business losses.
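
To make the wildcard pattern concrete, here is a minimal Python sketch of how such a template might be filled in. The template text and variable names are hypothetical, but the substitution mechanics mirror what script-based tools do.

```python
from string import Template

# A curated configuration template; $-prefixed wildcards mark the
# device-specific values (all names here are illustrative).
INTERFACE_TEMPLATE = Template("""\
interface $interface_name
 description $description
 ip address $ip_address $netmask
 no shutdown
""")

def render_config(device_vars: dict) -> str:
    # Template.substitute raises KeyError if any wildcard is left
    # unfilled - one crude safeguard against the tiny mistakes that
    # can bring a network down.
    return INTERFACE_TEMPLATE.substitute(device_vars)

print(render_config({
    "interface_name": "GigabitEthernet0/1",
    "description": "Uplink to WAN edge",
    "ip_address": "192.0.2.1",
    "netmask": "255.255.255.0",
}))
```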

As application-specific routing and hybrid WAN architectures are deployed, network operations teams are experiencing the limits of this approach. Even when the existing hardware already supports all the required functionality, existing network configurations that reflect past user requirements are rarely well understood. As each business unit asks for specific requirements to ensure that its applications run optimally, networks need to be continuously updated and optimized. Such tasks range from simple adjustments of configuration parameters to more complex changes to the underlying network architecture, such as removing or installing upgraded circuits, replacing hardware or even deploying new network architectures.

At this point, it's time to call in the big guns - senior network architects - who will need to put in significant time to assess the risk of unintended consequences for the existing network. But waiting for the next change maintenance window may no longer be an acceptable option: businesses are not concerned with the details; they want the network to simply "work."

Crafting a New Approach
Clearly, something needs to be done - but what? Standard network management tools are mature and well understood. Network architects and implementation teams are familiar with them, including all of their limitations and difficulties, and any potential change to these tools is immediately vetted against the additional learning curve required vis-à-vis the potential benefits in managing the network.

In a best-case scenario, implementation and operational concerns do not factor into the definition of network policies. The process starts with mapping the required functionality into logical models, assembling those models into one overall network policy, verifying interdependencies and resolving inconsistencies, and then deploying and maintaining the policy consistently throughout the network life cycle.

The industry is in the early stages of improving network management, but these initiatives are still maturing. For example, YANG is a data modeling language for the NETCONF network configuration protocol. OpenStack Networking (Neutron) provides an extensible framework for managing networks and IP addresses within the larger realm of cloud computing, focusing on network services such as intrusion detection systems (IDS), load balancing, firewalls and virtual private networks (VPNs) to enable multi-tenancy and massive scalability.
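
To make the NETCONF side concrete, here is a minimal sketch using the ncclient Python library, a common NETCONF client. The device address, credentials and the XML namespace are illustrative assumptions, and the sketch assumes the device supports the candidate datastore.

```python
from ncclient import manager  # pip install ncclient

# Configuration fragment shaped like a YANG-modeled payload; the
# namespace and element names are illustrative, not from a real model.
CONFIG = """
<config>
  <system xmlns="urn:example:system">
    <ntp><server>192.0.2.10</server></ntp>
  </system>
</config>
"""

# Hypothetical device address and credentials.
with manager.connect(host="198.51.100.5", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as m:
    m.edit_config(target="candidate", config=CONFIG)
    m.commit()  # apply the staged change to the running configuration
```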

However, neither approach can proactively detect interdependencies or inconsistencies, and both require network engineers to dive into programming - for example, to manage data entry and storage. As a result, some vendors offer fully integrated solutions built on appliances managed through a proprietary network management tool. This model allows businesses to deploy solutions quickly, at the cost of additional training, limited customization and new hardware purchases.

For industry transformation to occur, the focus needs to shift to assembling complete network policies from individual device-specific features, detecting inconsistencies and dependencies, and allowing deployment and ongoing network management. Simply updating wildcards in custom configuration templates and deploying them onto devices is no longer sufficient.

Managing changes at scale - to network architectures or routing protocols, for example - on live production networks will be difficult or even infeasible. This is especially true in large organizations, where any change must first be validated by, say, the security team, creating unacceptable delays for implementation.

A Comprehensive New Approach
Organizations will need to carefully consider how to develop an end-to-end approach that creates complete network policies from abstract device-based network features, including interdependency checking, deployment and secure lifecycle management. Network features and related policies can be mapped using these four constructs (a minimal data-model sketch follows the list):

  • Globals - These configuration settings apply throughout the network and are the same for every device. A good example is NTP (Network Time Protocol), where the central architecture team defines the only NTP servers permitted on the network.
  • Domains - Configuration settings applied consistently across multiple devices. A good example is a QoS configuration, which may differ by business unit; separate QoS domains would allow network engineers to assign QoS policies across all devices associated with specific business units in each region.
  • Features - Configuration settings provided for a single device at a time, enabling functionality that the device can deliver by itself. A good example is the configuration of a device-specific routing table that determines where the device should forward incoming traffic.
  • Custom - It may not be practical to model everything as a general feature or domain, especially one-off exceptions on single devices - for example, a specific set of Access Control Lists (ACLs) needed on only one device. In these cases, where no dependencies on other features exist, simply applying configuration data to the device may be acceptable.
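
As a rough illustration of how these four constructs could be represented, here is a minimal Python data model. The class and field names are hypothetical rather than drawn from any particular product.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Scope(Enum):
    GLOBAL = auto()   # identical on every device (e.g., NTP servers)
    DOMAIN = auto()   # consistent across a device group (e.g., QoS per business unit)
    FEATURE = auto()  # single-device functionality (e.g., a routing table)
    CUSTOM = auto()   # raw device-specific exception (e.g., one-off ACLs)

@dataclass
class PolicyElement:
    name: str
    scope: Scope
    settings: dict                                  # abstract settings, not vendor CLI
    applies_to: list = field(default_factory=list)  # device or domain identifiers
    depends_on: list = field(default_factory=list)  # names of prerequisite elements

# Example: a global NTP element and a per-business-unit QoS domain.
ntp = PolicyElement("ntp", Scope.GLOBAL,
                    {"servers": ["192.0.2.10", "192.0.2.11"]})
qos_finance = PolicyElement("qos-finance", Scope.DOMAIN,
                            {"dscp": "af31"},
                            applies_to=["finance-emea", "finance-apac"],
                            depends_on=["ntp"])
```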

Organizations can build any network policy by combining these constructs. Network engineers can flag inherent interdependencies early, so that a network management system can deploy the constructs in the correct sequence - applying features optimally to individual devices as well as across the network to create the target policy; one way to compute that sequence is sketched below. Abstracting network functionality into these models lets network engineers refocus on the actual network architecture rather than on the mechanics of managing configuration data.
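
One way such a system might honor flagged interdependencies is a topological sort over the policy elements, deploying prerequisites first. A minimal sketch, reusing the hypothetical PolicyElement model above:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def deployment_order(elements):
    # Map each element name to the names it depends on; static_order()
    # yields prerequisites before the elements that need them and
    # raises graphlib.CycleError on circular dependencies, surfacing
    # an inconsistency before anything touches the network.
    by_name = {e.name: e for e in elements}
    graph = {e.name: set(e.depends_on) for e in elements}
    return [by_name[n] for n in TopologicalSorter(graph).static_order()]

# With the earlier example, deployment_order([qos_finance, ntp])
# returns [ntp, qos_finance]: the global NTP element deploys first.
```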

These abstractions lead to a number of benefits:

  • Test, validate, deploy: The implementation and maintenance (DevOps) function is logically separated from network architecture and design (NetOps). For example, architects can define the features, domains and global settings needed for a given network infrastructure, assemble them into logical groups and resolve any interdependencies. They can then be tested and validated by, for example, the security team. The assembled features, domains and globals are handed over to the operational team, who will deploy them onto the network and manage them over their lifecycle.
  • Device anonymity: Device configuration now follows from the functionality the device should provide, by itself or in concert with other devices. As a result, the actual hardware, its specific OS/firmware or even its manufacturer no longer matters, as long as the device is capable of performing the desired functionality (see the rendering sketch after this list).
  • Network engineering communities: Network modeling via the logical constructs allows for a wide exchange of best-practice reference designs based on common user requirements. Different teams of architects can exchange information about the models they use for specific network functionalities without having to revert to low-level configuration settings. This opens the possibility of creating network engineering communities that exchange specific models based on their desired use cases with clearly defined interdependencies and conflict resolution against other models.
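
To illustrate device anonymity, the sketch below renders the same abstract NTP setting into two vendor syntaxes. The renderer registry is hypothetical, though the CLI lines follow common Cisco IOS and Junos conventions.

```python
def render_ios(settings: dict) -> str:
    return "\n".join(f"ntp server {s}" for s in settings["servers"])

def render_junos(settings: dict) -> str:
    return "\n".join(f"set system ntp server {s}" for s in settings["servers"])

# The abstract policy element stays the same; only the renderer chosen
# for the platform discovered at deployment time varies.
RENDERERS = {"cisco-ios": render_ios, "juniper-junos": render_junos}

ntp_settings = {"servers": ["192.0.2.10", "192.0.2.11"]}
for platform, render in RENDERERS.items():
    print(f"--- {platform} ---\n{render(ntp_settings)}")
```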

Network Control and Automation
Building a network management tool for today's demands will require a sophisticated, network-aware orchestration engine that can detect interdependencies, resolve them and deploy network policies automatically across the network.

For this to happen, consider the following three non-technical challenges:

  • Users must be confident that the logical network model will, in fact, result in the correct configuration of all devices in the network. Many network engineers are still most comfortable with command line interface (CLI) configurations created from scripts and templates.
  • The primary focus of network engineers is proper device configuration and ensuring the device performs as intended - not programming. Any next-generation tool must be designed with a network engineering focus in mind, so that engineers can use the system with a much shorter learning curve and minimal programming expertise.
  • Buy-in from NetOps and DevOps teams is vital, as they may be reluctant to entrust device configuration to a new management tool.

Multiple technical considerations regarding next-generation management tools include:

  • Support for the high degree of customization needed.
  • Zero-touch provisioning to make the onboarding of new devices into the system as fluid as possible, allowing generalist IT staff to install routers and trigger device provisioning automatically.
  • Functionality that limits or flags unauthorized manual device configuration changes with automatic remediation when needed.
  • Configuration preview, allowing dry runs of new configurations to understand all changes that would be performed, including on other network devices when needed (a minimal diff-based sketch follows this list).
  • Step-by-step verification of device provisioning actions with automatic revert on errors.
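
A configuration preview can be as simple as diffing the rendered candidate configuration against what is currently running on the device. A minimal sketch using Python's standard difflib, with device retrieval stubbed out:

```python
import difflib

def preview(running: str, candidate: str, device: str) -> str:
    # Produce a human-reviewable dry-run diff before any deployment.
    return "".join(difflib.unified_diff(
        running.splitlines(keepends=True),
        candidate.splitlines(keepends=True),
        fromfile=f"{device}/running",
        tofile=f"{device}/candidate",
    ))

running = "ntp server 192.0.2.10\n"
candidate = "ntp server 192.0.2.10\nntp server 192.0.2.11\n"
print(preview(running, candidate, "edge-router-1") or "no changes")
```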

Realizing the Dream
Enterprises can truly transform their networks with tools that provide complete abstraction of network functions along with deeply integrated model interdependency verification, deployment previews and layer-by-layer provisioning. For example, replacing an existing device with a newer model, even one from a different vendor, can be detected and the replacement automatically provisioned. Solutions that can resolve potential conflicts and interdependencies across vendors are becoming increasingly important as network devices are virtualized on common platforms and the individual strengths of vendor-specific solutions are combined into one multi-vendor solution.

A framework like this will improve workflow among architecture and implementation teams, which will bring multiple benefits: higher reliability, elimination of configuration errors, quicker implementation of business requirements, and faster identification and recovery from network outages. The dream of reliability and cost effectiveness can be realized.

More Stories By Stefan Dietrich

Dr. Stefan Dietrich brings to Glue Networks more than 20 years of experience defining innovative strategies and delivering complex technology solutions. Before joining Glue Networks, he was Managing Director of Technology Strategy at AXA Technology Services, introducing advanced new technologies to AXA globally, and held senior IT management positions at Reuters and Deutsche Bank.

Stefan received a Ph.D. in Aerospace Engineering and Computer Science from the University of Stuttgart and served as a Postdoctoral Fellow and faculty member at Cornell University.
