Types of Network Automation


In networking, workflows are awfully complicated.  There are many of them, and the exact nature of each depends on a number of variables.  What task comes next often depends on the outcome of the previous task, and completing a workflow can mean navigating a large amount of data.  Nevertheless, there is plenty of opportunity to identify and automate common tasks and segments of workflows.  Once we’ve identified these, we need to ask ourselves: how exactly should we automate them?

Encapsulation
“Encapsulation” means a vendor (possibly a third-party vendor) has written software that accomplishes the same thing the workflow does, but usually not in the same linear way a customer would do it.  Sub-components within an encapsulation have well-designed interfaces built for the purpose of accomplishing the goal.  The encapsulation would likely be written in Java or C.  In networking, encapsulated workflows are usually specific to a vendor’s product and often lack flexibility and features.  Encapsulated workflows manifest as products or product features.

Consider the following workflow:

[Figure 1: Simplified packet walkthrough for a device]

Figure 1 shows a simplified packet walkthrough for a device.  Here, in the course of evaluating what is happening to a packet passing through this device, we have discovered a filter policy applied to the ingress interface.  This policy has two terms, and each term references an access-list.  A network engineer would need to evaluate this filter policy to determine whether it is doing something to the packets of interest.  The trouble is, policy languages are highly expressive, with rich grammars, and they are also proprietary.  After the filter policy is evaluated, this workflow follows the forwarding pipeline to the egress interface.  If you are an experienced network engineer, you will know that there are other elements in the pipeline that should be checked for any given network device.  However, there is enormous variation in the structure of the pipeline from one platform to the next.  This makes it a great candidate for discrete encapsulation.* There are more effective ways of achieving the goal of a packet walkthrough than the way a network engineer must do it now (particularly for SDN products), and vendors know their platforms and policy idioms best.

*Discrete means it’s a workflow with a beginning and an end.  It can be manually invoked by a user, and runs for a finite amount of time, reaching some conclusion.
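To make the manual side of this concrete, here is a minimal Python sketch of what a network engineer evaluates by hand: a filter policy whose terms reference access-lists, applied to a packet’s source address.  The policy grammar, term names, and naive prefix matching are all invented for illustration; real policy languages are proprietary and far richer, which is exactly why vendors are better placed to encapsulate this.

```python
# Hypothetical model of an ingress filter policy: each term references an
# access-list, and the first matching term decides the packet's fate.
from dataclasses import dataclass


@dataclass
class AccessList:
    name: str
    prefixes: list  # naive string-prefix matching, for brevity only

    def matches(self, src_ip: str) -> bool:
        return any(src_ip.startswith(p) for p in self.prefixes)


@dataclass
class Term:
    name: str
    acl: AccessList
    action: str  # "accept" or "discard"


def evaluate_filter(terms, src_ip, default="accept"):
    """Return (term_name, action) for the first matching term, else the default."""
    for term in terms:
        if term.acl.matches(src_ip):
            return term.name, term.action
    return None, default


corp = AccessList("CORP-NETS", ["10.1."])
guest = AccessList("GUEST-NETS", ["192.168."])
policy = [Term("t1", corp, "accept"), Term("t2", guest, "discard")]

print(evaluate_filter(policy, "192.168.5.9"))  # ('t2', 'discard')
```

An encapsulation would hide all of this behind a single “walk this packet” interface, so the engineer never has to parse the policy grammar at all.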

Automation
A workflow automation, on the other hand, consists of sub-components that are “glued together.”  These components were not built especially for automation, and the interfaces between them were not designed for any particular workflow.  Automations can be developed by the customer, and network engineers frequently employ discrete automations.  A great example is a script to configure the login banner on some number of devices.  These automations are written in “softer” languages like Python or Perl.
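The login-banner example might be sketched as below.  This is a hedged illustration, not any vendor’s CLI: the IOS-style banner syntax, the device names, and the rollout structure are assumptions, and in practice the resulting commands would be pushed over SSH with a library such as Netmiko or Paramiko.

```python
# A discrete "glued together" automation: render the config commands needed
# to set a login banner on each device in a list. The push-to-device step is
# deliberately omitted; this just plans what would be sent.

def banner_commands(text: str) -> list:
    # Hypothetical IOS-style banner syntax with '^' as the delimiter.
    return [f"banner login ^{text}^"]


def plan_banner_rollout(devices: list, text: str) -> dict:
    """Map each device hostname to the commands the automation would push."""
    return {host: banner_commands(text) for host in devices}


plan = plan_banner_rollout(["edge1", "edge2"], "Authorized use only")
for host, cmds in plan.items():
    print(host, cmds)
```

Note that nothing here was designed for this workflow: a string formatter, a dict comprehension, and an SSH library get glued together by the customer, which is the defining trait of an automation as opposed to an encapsulation.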

There is a clear need for Continuous Automation in networking.  Plexxi’s own DSE, now an integral part of the OpenStack Congress project, attempts to address this need.  As the name implies, Continuous means it is an ongoing process.  In the case of Congress, it is a modular, event/data-driven system.  In an environment with a plethora of protocols and APIs, each with its own idiosyncrasies, this kind of automation makes sense, particularly in the context of an open-source community.
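A toy sketch of the continuous pattern follows: an ongoing, event/data-driven loop that dispatches incoming events to pluggable handlers, loosely in the spirit of a modular system like Congress.  The event names and handlers are invented; a real system would subscribe to device telemetry, protocol state, and external APIs.

```python
# Minimal event bus: handlers subscribe by event type, and each published
# event is dispatched to every registered handler, continuously.
from collections import defaultdict


class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Dispatch to every handler for this event type, in order.
        return [handler(payload) for handler in self._handlers[event_type]]


bus = EventBus()
bus.subscribe("link_down", lambda p: f"reroute around {p['port']}")
bus.subscribe("link_down", lambda p: f"alert NOC about {p['port']}")

print(bus.publish("link_down", {"port": "eth0/1"}))
```

The modularity is the point: new protocols or data sources become new subscribers, without rewriting the loop.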

Hybrid
Curiously, some workflows may best be addressed by a combination of automation types.  For instance, if a customer wanted to know what was going on in the network relative to a particular application, that workflow automation could use a vendor’s packet-walkthrough encapsulation combined with an automation tool like the DSE to harvest network metadata about application endpoints from external systems.  This could yield a network map of the application’s endpoints along with visual indicators of network issues that could be impacting the application.  In this way, the network engineer could quickly and accurately gauge the health of the network in the context of an application instead of engaging in a tedious and error-prone search “by hand.”
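Structurally, the hybrid workflow might look like the following sketch.  Both the vendor encapsulation and the external metadata source are stubbed with invented functions and data; the point is only the shape of the composition, where the automation glues an encapsulated walkthrough to harvested endpoint metadata.

```python
# Hypothetical hybrid workflow: a stubbed vendor packet-walkthrough
# encapsulation, combined with stubbed endpoint metadata from an external
# system, producing a per-endpoint view of the application's network health.

def vendor_walkthrough(endpoint_ip):
    # Stand-in for a vendor's encapsulated packet walkthrough result.
    blocked = endpoint_ip.startswith("192.168.")
    return {"ip": endpoint_ip, "blocked_by_filter": blocked}


def harvest_endpoints(app_name):
    # Stand-in for metadata harvested from external systems (CMDB,
    # orchestration platform, etc.) by a tool like the DSE.
    return {"web-app": ["10.1.0.5", "192.168.9.7"]}.get(app_name, [])


def app_network_map(app_name):
    """Run the encapsulated walkthrough against every harvested endpoint."""
    return [vendor_walkthrough(ip) for ip in harvest_endpoints(app_name)]


for entry in app_network_map("web-app"):
    print(entry)
```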

Conclusion
Customers, vendors, and open-source communities should work together to make networking better.  Identifying common workflows and determining the best way to automate them is a good first step.  This will require vendors to think differently about how they develop their products, putting their users’ needs first.  Traditionally, just getting a network feature to work and interoperate was the goal, but now we must consider how the feature fits into common workflows performed by network engineers.

[Fun fact:  Broccoli is a member of the cabbage family.  In spite of this, Broccoli tastes good.  When someone offers you cabbage, they are insulting you.]

The post Types of Network Automation appeared first on Plexxi.


More Stories By Derick Winkworth

Derick Winkworth has been a developer, network engineer, and IT architect in various verticals throughout his career. He is currently a Product Manager at Plexxi, Inc., where he focuses on workflow automation and product UX.
