Automation through workflow state

The benefits of automation are well understood: more agile service provisioning, faster time to insight when there are issues, and a reduction in human error as manual interaction is reduced. Much of the premise behind long-term SDN architectural advantages is steeped in the hope that SDN will help enable and ultimately promote automation. But while centralizing control has significant operational advantages, centralization by itself doesn't actually address the most important requirement for automation.

If automation is going to be more than just reducing keystrokes, there will have to be a rise of workflow state.

Referential space

Successfully managing a network is an exercise in constant iteration through network state. Whenever something needs to be done, the architect or operator examines her current frame of reference to figure out the starting point. That frame of reference usually starts with some implicit understanding of how the network is designed. From there, she takes some action. Maybe she pings an endpoint, checks the state of a BGP neighbor, or examines some interface statistics. Whatever the first step, the point is that she knows when she starts that there is work after the first step.

The information gleaned from the first step yields additional understanding. Her frame of reference changes as she now knows more than before. With her new position in referential space, she takes the next step. And the next, and the next after that. Each step yields a different piece of information, and the process of iterating through a constantly changing referential space ultimately yields some outcome or resolution.

Byproducts of iterative workflows

There are two major byproducts of this iterative approach to workflow. The first is that the starting point is rarely based on an absolute understanding of fact. Rather it is an interpretation that the individual operator or architect creates based on a number of somewhat soft conditions – knowledge, experience, intuition, whatever. This means that for each task, the workflow is somewhat unique, depending on the operator and the environment.

The impact here is important. If workflows are unique based on the operator and the conditions (i.e., the referential space or frame of reference), then the outcomes driven by those workflows are difficult to repeat. Part of why networking is so hard is that so much of it borders on arcane dark art. Science demands repeatability, but the very nature of workflow management in networking makes that challenging.

The second byproduct of networking's iterative nature is that workflows frequently depend on a set of chained tasks, each of which has a dependency on the preceding task. To make things worse, that dependency is rarely known at the start of a workflow. It's not that tasks cannot be predictably chained – first you look at the physical layer, then you move up the stack, perhaps. But each subsequent task is executed based not just on the previous task but also on the output of the previous task. This creates a complex set of if/then statements in most workflows.

Part of the challenge in automation is providing the logic to navigate the conditional nature of networking workflows.
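To make that conditional structure concrete, here is a minimal Python sketch of a chained troubleshooting workflow in which each step runs only if the previous step's output allows it. The check functions are hypothetical stubs standing in for real device queries, not any particular vendor's API.

```python
# Hypothetical stubs standing in for real device queries; an actual workflow
# would pull this state from the device over SSH, SNMP, or an API.
def interface_is_up(device, interface):
    return True  # placeholder result

def bgp_neighbor_established(device, neighbor):
    return False  # placeholder result

def troubleshoot_path(device, interface, neighbor):
    """Walk the stack bottom-up; each branch depends on the previous output."""
    if not interface_is_up(device, interface):
        return "fix physical layer: interface is down"
    # Only worth checking BGP once the physical layer is known-good.
    if not bgp_neighbor_established(device, neighbor):
        return "fix control plane: BGP neighbor not established"
    return "layers 1-3 look healthy; move on to application-level checks"

print(troubleshoot_path("edge-router-1", "ge-0/0/0", "192.0.2.1"))
```

Even this toy version shows why such workflows are hard to automate generically: the branching logic encodes knowledge that usually lives in an operator's head.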

“Network engineers need to think like programmers”

With the rise of movements like DevOps, “network engineers thinking like programmers” has become a popular phrase. This is a very important change in how we handle network architecture and operations. But there are subtleties here that get lost in the cliche.

First, when people toss the phrase around, they often mean that network engineers need to pick up a scripting language (Python, Ruby, even Perl). But thinking like a software developer has very little to do with programming languages. Languages are a way of expressing intent, and it's entirely possible to know Python and think nothing like a developer.

Second, when people refer to programming in the context of DevOps, they generally mean that network operators need to think about configuration less as a collection of commands and more like code. Once you make that shift, then you can think about things like source code management, automated testing, and rapid deployment.
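As a rough illustration of that shift, the sketch below keeps a configuration fragment as a template that can live in source control, renders it with a pure function, and validates the result with an automated test before anything is deployed. The configuration syntax and parameter names are invented for the example.

```python
# "Configuration as code": the template lives in source control, rendering is
# a pure function, and a test validates the output before it touches a device.
INTERFACE_TEMPLATE = (
    "interface {name}\n"
    " description {description}\n"
    " mtu {mtu}\n"
)

def render_interface(name, description, mtu=1500):
    if mtu < 576:
        raise ValueError("MTU below the IPv4 minimum")
    return INTERFACE_TEMPLATE.format(name=name, description=description, mtu=mtu)

def test_render_interface():
    config = render_interface("ge-0/0/0", "uplink to core", mtu=9000)
    assert config.startswith("interface ge-0/0/0")
    assert "mtu 9000" in config

if __name__ == "__main__":
    test_render_interface()
    print(render_interface("ge-0/0/0", "uplink to core"))
```

Once configuration is generated and tested this way, it can be versioned, reviewed, and rolled out like any other code artifact.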

But networking needs to do more than just treat configuration as code. DevOps is mostly about deploying and validating changes. It doesn't fundamentally change how workflows are executed, and it barely touches more operational tasks like troubleshooting network conditions.

Before anyone turns this into a religious battle over DevOps, my point is not that DevOps is bad. It's just that DevOps by itself is not sufficient, and there are things that ought to be done that are separate from DevOps.

Tiny feedback loops

So if thinking like a programmer isn’t about learning a programming language and it’s more than treating configuration as code, what is it?

Software development is really about creating something out of lots of tiny feedback loops. When you write functions, you don’t just execute some task. You generally execute that task and then return a value. The value provides some immediate feedback about the outcome. In some cases, the function returns the value of a computation; in other cases, it simply returns an indication that the function succeeded or failed.

These values are obviously then used by other functions, which allows us to string together small building blocks into complex chains. The important part? These chains can then be repeatably executed in a deterministic way.

Networking workflows shouldn't be that different. Each individual activity yields some value (sometimes a specific value, as when looking at a counter; other times a success or failure, as with a ping). The problem is that while networking commands frequently return information, it is up to the operator herself to parse that information, analyze what it means, and then take the next action.
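As an illustration, a ping check written as a tiny feedback loop returns a small structured value instead of raw text for a human to read. This is only a sketch: it assumes a Linux-style ping whose summary line looks like "rtt min/avg/max/mdev = ...", and a robust version would need more careful parsing and error handling.

```python
import re
import subprocess

def ping(host, count=3):
    """Run ping and return a structured result instead of raw text."""
    proc = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True,
    )
    result = {"host": host, "reachable": proc.returncode == 0, "avg_rtt_ms": None}
    # Pull the average RTT out of the 'min/avg/max/mdev' summary line, if present.
    match = re.search(r"= [\d.]+/([\d.]+)/", proc.stdout)
    if match:
        result["avg_rtt_ms"] = float(match.group(1))
    return result

# The returned value can feed the next step directly; no human parsing required.
if __name__ == "__main__":
    print(ping("192.0.2.1", count=1))
```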

Workflow state

What we need if we really want to make automation happen in ways that extend beyond just scripting keystrokes is a means of creating deterministic networking workflows. For this to happen, we need people who construct workflows to think more like developers. Each activity within a workflow needs to be a tiny feedback loop with explicit workflow state that is programmatically passed between workflow elements.
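One possible shape for that state is sketched below: each workflow element takes the accumulated state, records what it learned, and signals whether the chain should continue. The step implementations are hypothetical placeholders rather than real device checks.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowState:
    """Explicit state handed from one workflow element to the next."""
    target: str
    facts: dict = field(default_factory=dict)
    done: bool = False
    outcome: str = ""

def check_reachability(state):
    state.facts["reachable"] = True  # placeholder: a real step would probe the target
    if not state.facts["reachable"]:
        state.done, state.outcome = True, "target unreachable; check routing"
    return state

def check_interface_errors(state):
    state.facts["input_errors"] = 0  # placeholder value
    if state.facts["input_errors"] > 0:
        state.done, state.outcome = True, "investigate physical-layer errors"
    return state

def run_workflow(target, steps):
    state = WorkflowState(target=target)
    for step in steps:
        state = step(state)
        if state.done:
            break
    return state

print(run_workflow("edge-router-1", [check_reachability, check_interface_errors]))
```

Because every step reads and writes the same explicit state, the whole chain can be re-run, logged, and reasoned about deterministically.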

We actually do this instinctively at times. XML, NETCONF, and the like have been used to encapsulate networking inputs and outputs for a while, with the intent of making things parseable and thus more automatable.
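As a small illustration of what parseable output buys you, the sketch below turns a structured reply into values that the next workflow step can act on directly. The element names are invented for the example and do not follow any particular YANG model; in practice the reply would arrive over NETCONF or a similar interface.

```python
import xml.etree.ElementTree as ET

# Illustrative reply only: these element names are made up for the example.
REPLY = """
<interfaces>
  <interface>
    <name>ge-0/0/0</name>
    <oper-status>up</oper-status>
    <in-errors>0</in-errors>
  </interface>
  <interface>
    <name>ge-0/0/1</name>
    <oper-status>down</oper-status>
    <in-errors>42</in-errors>
  </interface>
</interfaces>
"""

def interfaces_needing_attention(xml_text):
    """Turn a parseable reply into values the next workflow step can act on."""
    root = ET.fromstring(xml_text)
    flagged = []
    for intf in root.findall("interface"):
        status = intf.findtext("oper-status")
        errors = int(intf.findtext("in-errors") or 0)
        if status != "up" or errors > 0:
            flagged.append(intf.findtext("name"))
    return flagged

print(interfaces_needing_attention(REPLY))  # ['ge-0/0/1']
```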

But we stopped short. We made the outputs more automation-friendly without ever really creating workflows. So while we can programmatically act on values, that only helps if someone has actually automated a particular workflow. As an industry, we haven't gotten around to addressing the workflow problem itself.

Maybe it’s the highly conditional nature of networking combined with the uniqueness of individual networks. Or maybe it’s that outside of a few automation savants, our industry doesn’t generally think about workflows the way a software developer would.

The bottom line

Networking workflows rely way too heavily on an iterative pass through referential space. The reason change is so scary and troubleshooting so hard is that very little in networking is actually deterministic. But if we really want to improve the overall user experience en route to making workflows both repeatable and reliable, we do need to start thinking a bit more like developers. It all starts with a more explicit understanding of the workflows we rely on, and the expression of feedback via some form of workflow state.

And for everyone betting on abstractions, just know that abstracting a poorly defined workflow yields an equally poor abstraction. We need to start elsewhere.

[Today’s fun fact: Only male fireflies can fly. Take that, females!]
