Scripting Is Automation, But Automation Is Not Scripting

There are many extremely complex clustered applications that rely entirely on exchanging information through APIs

Last week Greg Ferro (@etherealmind) wrote this article about his experience with scripting as a method for network automation, with the ultimate conclusion that scripting does not scale.

Early in my career I managed a small network that grew to be an IP-over-X.25 hub of Europe for a few years, providing many countries with their first Internet connectivity. Scripts were everywhere: small ones to grab stats and create pretty graphs, others that continuously checked the status of links and sent emails when things went wrong.

While it is hard to argue with Greg’s complaints per se, I believe the key point is missing. And it has nothing to do with scripting. In a reply, Ivan’s last comment touches on the real issue.

We have been scripting our networks against CLIs forever and I will bet you most folks will consider it successful, even though it may be a pain. The article lists the pains, but not the reasons why. As a network industry, we have never ever considered the interaction with our network devices an API. Not in the true software engineering sense of an API.

There are many extremely complex clustered applications that rely entirely on exchanging information through APIs that are well documented, well versioned, well abstracted and properly promoted or deprecated. Creating and maintaining APIs is a real software engineering effort, a skill that requires true architecture, engineering and discipline. And we have not given our users anything close to it.

If we (that collective network industry) had truly considered our CLI an API, we would (and should) have been pushed aside a long time ago. The CLI is and always has been a simple interface for a human to tell a device what to do. It was not designed to be automated. It is not structured enough to be automated. Even the large vendors have multiple CLI flavors, each considered industry standard, yet all slightly different. And nowhere would you find a formal, full and complete dictionary of that CLI with all its inputs, outputs, versions and options. The closest the network industry has had to a true API is SNMP, and that is indeed a very sad statement.
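
To make that gap concrete, here is a minimal sketch, in Python, of what scripting against a CLI usually boils down to. The device, command and output format are entirely hypothetical; the point is that the "contract" is nothing more than a regular expression over screen output.

```python
import re
import subprocess

# Hypothetical device, command and output format, for illustration only.
# Run a show command over SSH and capture whatever the CLI prints.
raw = subprocess.run(
    ["ssh", "admin@switch1", "show interface ethernet1/1"],
    capture_output=True, text=True,
).stdout

# The regex encodes assumptions about wording, whitespace and field order
# that any vendor, platform or software release is free to break.
match = re.search(r"ethernet1/1 is (\w+).*?(\d+) packets input", raw, re.S)
if match:
    status, packets_in = match.group(1), int(match.group(2))
    print(f"status={status} packets_in={packets_in}")
else:
    # The classic screen-scraping failure: the command "worked", but the
    # output no longer matches what the script expects.
    raise RuntimeError("unrecognized CLI output")
```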

I think we have mentioned before that the networking industry is a bit slow to get to modern software engineering methods and practices, but the tide is changing. And whether you want to call it SDN or something else, the sheer volume and complexity of interaction with the network is pushing us to provide truly automated access to our devices and our networks.

And creating and maintaining APIs is far more than the technology used to access them. It does not matter whether it's XML, JSON, REST, NETCONF or anything else. Those are definitions of how information is carried to and from the device and network. I can build a wonderful REST API that takes a CLI command as an argument and spits back the output from that CLI command in some format. I am sure that sounds familiar to some, but this is not an API. Not in a truly meaningful way that would elevate our automation abilities.
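
Here is a minimal sketch of that anti-pattern, assuming a hypothetical device_cli() helper and using Flask purely for illustration. It is REST and JSON on the wire, but the contract underneath is still unstructured CLI text.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def device_cli(command: str) -> str:
    """Hypothetical helper that runs a command on the device's CLI and
    returns the raw screen output."""
    raise NotImplementedError  # placeholder for this sketch

# REST and JSON on the outside, unstructured CLI text on the inside.
# The transport is modern, but the contract is still "whatever the CLI prints".
@app.route("/api/v1/cli", methods=["POST"])
def run_cli():
    command = request.json["command"]
    return jsonify({"command": command, "output": device_cli(command)})
```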

Designing and implementing APIs is not trivial. Believe me, as an entirely API-driven solution, we spend a tremendous amount of time discussing our APIs and abstractions to make sure they strike the right balance between granularity, functionality, abstraction, scaling and a few other relevant qualifiers. But the key is that they are part of any feature design from day one, part of the overarching architecture, not bolted on at the end. Our APIs are not perfect, there is no such thing, but they are modeled after the workflow of you, the user, doing the work required to keep the network running and thriving.

So when you need to configure MLAG on a set of Plexxi switches, we do not have a series of API calls to bundle ports together on a switch, give them a unique ID, then tie the switches together as an MLAG pair that shares that unique ID. Oh, and create an MLAG control channel between them, and make each of the switch-local LAGs carry the same set of VLANs. Our API will simply take a list of port objects from any number of switches in a Plexxi network and turn them into an MLAG. And then you can simply take that entire entity and stick a VLAN on top, and we will make sure the participating switches get the pieces they need. That is abstraction, that is workflow encapsulation, that is what APIs are supposed to give you. That is how simple LAG is supposed to be.
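
For illustration only, and with invented names that are in no way Plexxi's actual API, a workflow-level interface of that kind might be shaped roughly like this: one call that takes port objects from any number of switches and hands back a single MLAG entity you can put a VLAN on.

```python
from dataclasses import dataclass
from typing import List, Set

# All names here are invented for this sketch; this is not Plexxi's actual API.

@dataclass(frozen=True)
class Port:
    switch: str
    number: int

class Mlag:
    """One workflow-level entity that spans multiple switches."""
    def __init__(self, ports: List[Port]):
        self.ports = ports
        self.vlans: Set[int] = set()

    def add_vlan(self, vlan_id: int) -> None:
        # The controller, not the caller, works out which switches need
        # which local LAGs, peer links and VLAN configuration.
        self.vlans.add(vlan_id)

class Fabric:
    def create_mlag(self, ports: List[Port]) -> Mlag:
        """One call: ports from any number of switches in, one MLAG out."""
        return Mlag(ports)

# Usage: no per-switch LAG IDs, no peer-link plumbing, no duplicated VLAN calls.
fabric = Fabric()
mlag = fabric.create_mlag([Port("switch1", 1), Port("switch1", 2),
                           Port("switch2", 1), Port("switch2", 2)])
mlag.add_vlan(100)
```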

We have a long way to go as an industry to get to full APIs the way real software folks think about APIs. The CLI is not it. Scripting against a CLI (or a CLI hidden behind a layer of official-sounding API terms) is a useful tool, but one that should be mostly retired to get to truly programmable networks that are controlled by real controllers (in the broadest definition of the word) using real APIs. Automation is not scripting.

[Today's fun fact: to make sure you do not think I am anti-scripting, I once wrote a large chunk of a 10,000-line Perl 4 system. It functioned very nicely for years as the RIPE database for IP address allocations back in the mid-90s. Thankfully it has since been tackled by real software engineers.]

More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
