
SDN Journal: Blog Feed Post

Event-Driven Platforms Critical for Next-Gen Network

Node.js is an excellent choice for DevOps tools

I recently stumbled across an article discussing node.js and devops in the same breath. Yes, I was in heaven, why do you ask?

In any case, the article notes two primary reasons why node.js is an excellent choice for devops tools:

Node.js has a couple distinct advantages for infrastructure maintenance:

1. It has a small footprint
2. The event-based nature of Node.js helps keep servers from getting bogged down handling time consuming tasks.

-- Why So Many DevOps Tools are Written in Node.js

While I'm not discounting the value of the first point, I think the second is much more important, not just to devops but to the data center in general (and data center management in particular), especially given the trend toward centralized management planes across the entire network stack (think SDN).

The funny thing is that if you take a step back and look at DevOps and SDN, you'll see that we've had this discussion under the Infrastructure 2.0 umbrella in the past. One of the core components of the idea of Infrastructure 2.0 (which is really the precursor to SDN, if not its parent) was that some centralized controller (sound familiar?), mediator, whatever you want to call it, would be responsible for accepting and distributing events across the network infrastructure stack. It grew out of concerns around IPAM (IP Address Management) and the increasingly volatile rate of change occurring there thanks to virtualization and cloud computing.

But these controllers would necessarily be software, and thus concerns regarding scalability and performance quickly become a sticking point. It should be no surprise that one of the primary arguments against OpenFlow-enabled architectures is that the controller does not scale or ultimately perform well. Clustering is an attempt to address both issues in large-scale networks, but this is just rehashing the same "how do we scale web applications" problem that's been debated since the dot-com era began.

That's because traditional models of development were thread-based, synchronous, and ultimately blocking. Modern scalable frameworks are non-blocking and support asynchronous modes, geared toward long-lived sessions with intermittent communication between client and server. That is exactly the model needed to support a more dynamic, event-driven network architecture. Network components need to know right now when an event happens, but they can't waste cycles opening and closing connections to "check" every x seconds. That mode of operation doesn't scale, and the overhead from TCP session management on both the component and the server would be devastating.

So an asynchronous, high-capacity model is what's necessary, and node.js fulfills those requirements.

Can you write async, non-blocking servers in other languages and on other platforms? Yes. Java and Erlang and a variety of other languages have frameworks that support such a model. But they are ultimately still encumbered by the underlying platform (application server). The impact on performance and scale still exists, albeit at a much higher threshold than with traditional applications.

Node.js eliminates the platform impact, thus providing DevOps (and developers, for that matter) with a high-capacity, high-performing, highly programmable platform upon which to deploy... well, whatever they want. And because so many devops-related tasks that need to be automated are driven by events (provision, launch, shut-down, add, remove, stop, start) it starts to make a lot of sense to look at node as a platform for DevOps upon which they can build tools. (Etsy has a great blog on the use of node.js for devops tools, if you're interested)

The gravitational pull toward node.js is, likely, because it is inherently event-driven. It isn't that you can do event-driven in node.js; node.js simply is event-driven. That's it. With Java you can write event-driven daemons, yes, but it's not the primary model for Java (or its underlying platforms), and it takes more work than writing a traditional application.

The network (and by "network" I mean the whole stack from L2 to L7) is driven by events - packet in, packet out, connection made, response received, IP address assigned, IP address released - and thus a lightweight, highly scalable event-driven framework fits naturally as a platform upon which to build tools that help manage the network.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.

