Just What Does 'Operationalize' Mean Anyway?

We keep saying it, but does it mean what you think it means?

Operationalization (which is really hard to say, go ahead - try it a few times) is a concept that crosses the lines between trends and technologies. Both SDN and DevOps share the notion of "operationalization" as a means of aligning IT with business priorities, such as accelerating time to market for all-important applications.

But what does it really mean to operationalize the network, or app deployments, or really, anything?

Operationalization is a lot like DevOps in that it's more an approach to how you deploy and manage operations than some concrete, tangible thing. To operationalize is a verb: it's something you do, and it has concrete, measurable impacts on the application environment (aka the data center) and on the processes that move an application from development into the hands of its intended consumers, whether internal or external.

When we say "operationalize the network", what we mean is to apply a systematic approach to automating network tasks and orchestrating operational processes in a way that meets measurable, defined goals that align with business priorities.

Consider the business priority of delivering projects on time. You know, getting products to market before the competition (to address the business concern of revenue growth) or rolling out internal apps faster (to address the business concern of productivity improvements). The top CIO priorities are intertwined, and IT is as much in the business of applications as it is in the business of technology.

Automate all the network things

Accelerating time to market (or time to roll out, for internal applications) is an imperative that enables IT to meet several business and IT-related goals simultaneously. But to do that, IT has to operationalize all the things - including the network. Operations (whether network, security or application) has to focus on automating tasks and orchestrating processes to achieve the speed, scale, and stability necessary to roll out new or improved apps faster and, in some cases, more frequently. That means taking advantage of programmability (APIs, app templates and even the data path) to integrate and automate the provisioning, configuration and elasticity of applications and the services that deliver them.
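To make that a little more concrete, here's a minimal sketch (in Python) of what automating a single provisioning task through a controller-style REST API might look like. The endpoint, payload fields, credentials and response shape are hypothetical stand-ins for illustration, not any particular vendor's API:

```python
import requests

# Hypothetical controller endpoint and payload; field names are illustrative
# only, not a specific product's API.
CONTROLLER = "https://controller.example.com/api/v1"
AUTH = ("ops_user", "ops_password")  # placeholder credentials

def provision_app_service(app_name, members, vip, port=443):
    """Provision a load-balancing service for an application via the controller API."""
    payload = {
        "name": f"{app_name}-vip",
        "virtual_address": vip,
        "port": port,
        "pool_members": members,          # e.g. ["10.0.1.10:8443", "10.0.1.11:8443"]
        "health_monitor": "https",
        "tls_profile": "default-clientssl",
    }
    resp = requests.post(f"{CONTROLLER}/services", json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]   # assumes the controller returns a service id

if __name__ == "__main__":
    service_id = provision_app_service(
        "storefront", ["10.0.1.10:8443", "10.0.1.11:8443"], vip="203.0.113.10"
    )
    print(f"Provisioned service {service_id}")
```

The point isn't the particular call; it's that once a task like this is scriptable and repeatable, it can be stitched into a larger orchestrated process.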

Does that mean you have to become a coder? Not necessarily. Much of the automation and orchestration of the network is being made available through ecosystems (like those around VMware, Cisco, OpenDaylight and OpenStack) that enable the necessary integration through plug-ins, policies or templates rather than requiring network engineers to become developers. No doubt some organizations will choose a more hands-on approach, in which case the answer becomes yes, yes you will have to become familiar with scripting tools, languages and APIs to enable the automation and, ultimately, the orchestration required to achieve alignment with business and operational goals.

Measure all the deployment things

Automation and orchestration alone aren't enough, though, to operationalize the network. Measures must be put in place that span the entire application deployment process - measures that align with other operations groups and with the business, and that, while typically associated with DevOps, are directly relatable to the network, too (a sketch of how a few of them might be computed follows the list):

  • Deploy frequency
  • Volume of defects
  • MTTR (mean time to repair)
  • Number & frequency of outages
  • Number & frequency of performance issues
  • Time/cost per release (deployment)
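As a rough illustration, here's how a few of these measures (deploy frequency, MTTR, defects per release) might be computed from deployment and incident records. The record layout below is made up for the example; in practice the data would come from your CI/CD pipeline and incident-tracking systems:

```python
from datetime import datetime, timedelta

# Illustrative records only; real data would come from CI/CD and ticketing systems.
deployments = [
    {"app": "storefront", "at": datetime(2015, 3, 2, 10, 0), "defects": 1},
    {"app": "storefront", "at": datetime(2015, 3, 9, 14, 30), "defects": 0},
    {"app": "storefront", "at": datetime(2015, 3, 20, 9, 15), "defects": 2},
]
incidents = [
    {"opened": datetime(2015, 3, 3, 8, 0), "resolved": datetime(2015, 3, 3, 11, 30)},
    {"opened": datetime(2015, 3, 21, 22, 0), "resolved": datetime(2015, 3, 22, 1, 0)},
]

def deploy_frequency(deploys, window_days=30):
    """Average deployments per week over the trailing window."""
    cutoff = max(d["at"] for d in deploys) - timedelta(days=window_days)
    recent = [d for d in deploys if d["at"] >= cutoff]
    return len(recent) / (window_days / 7)

def mttr_hours(incidents):
    """Mean time to repair, in hours, across resolved incidents."""
    durations = [(i["resolved"] - i["opened"]).total_seconds() / 3600 for i in incidents]
    return sum(durations) / len(durations)

print(f"Deploys per week: {deploy_frequency(deployments):.1f}")
print(f"MTTR: {mttr_hours(incidents):.1f} hours")
print(f"Defects per release: {sum(d['defects'] for d in deployments) / len(deployments):.1f}")
```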

Automation certainly impacts some of these measures, but not all. Process optimization is also a critical component of DevOps and of operationalization; it impacts many of these measures but is driven by people and analysis.

Optimize all the process things

Optimization requires understanding the processes that have likely ossified over time and re-evaluating each and every step to improve not just the speed but the efficiency, too (no, they aren't the same, Virginia). Optimizing processes is about measuring and mapping them to find the bottlenecks and idle time that cause the entire app deployment train to slow to a crawl.

The reality is that orchestrating poor processes just lets you fail faster and more often. So identifying those processes (that include handoffs between silos) causing bottlenecks in the deployment process (or where errors seem to constantly be introduced) is a critical component of successfully operationalizing the network (and other operations, for that matter). Giving the app infrastructure operations group an "easy" button to deploy the appropriate network services isn't going to improve the process if that process is itself broken, after all.

The measures let you ascertain whether changes in the process are going to help or not. Modeling and math can do wonders to help determine where changes must be made to improve the overall results, but both require measurement first - and consistent measurement across groups and the deployment lifecycle.
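Here's a back-of-the-napkin sketch of what that measurement and mapping might look like: model the deployment process as a series of timed steps and handoffs, then let the numbers point at where work sits idle. The step names, teams, and durations are entirely hypothetical:

```python
# Minimal sketch of mapping a deployment process to expose bottlenecks and
# idle time. All steps, teams, and hours are hypothetical.
process = [
    # (step, team, active_hours, wait_before_start_hours)
    ("code complete -> build",     "dev",    2,  0),
    ("build -> QA handoff",        "qa",     8,  24),  # sits a day in a queue
    ("QA -> change approval",      "ops",    1,  72),  # waits for the weekly CAB
    ("network services provision", "netops", 4,  16),
    ("release to production",      "ops",    2,  8),
]

total_active = sum(active for _, _, active, _ in process)
total_wait = sum(wait for _, _, _, wait in process)

print(f"Active work: {total_active} h, idle/wait: {total_wait} h "
      f"({total_wait / (total_active + total_wait):.0%} of elapsed time is waiting)")

# The biggest single contributor to elapsed time is a good first target.
step, team, active, wait = max(process, key=lambda s: s[2] + s[3])
print(f"Largest bottleneck: '{step}' ({team}), {active + wait} h elapsed")
```

Even a toy model like this tends to show that most elapsed time is waiting between silos, not doing work, which is exactly why automating a broken process just makes it fail faster.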

Share all the app things

All of which requires collaboration. You can automate individual tasks and gain some improvements, yes, but you can't orchestrate a provisioning and configuration process related to a given application or type of application unless you first understand what that application needs. And to do that you've got to talk to the people who develop it and deploy its infrastructure. You have to understand its architecture - is it three-tier? Two-tier? Microservice? Does it present APIs and take advantage of an app proxy or are the integrations and interactions all internal? How is success for this app measured? Productivity improvement? Revenue growth? User adoption?

The answers to these questions are imperative to understanding just what network services need to be deployed, and how. It isn't enough to just give the app an IP address and put it on a VLAN. You've got to deliver value out of the network and that means providing services that will help that application meet its business goals, whatever they might be.
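One way to keep those answers actionable is to capture them in a per-application profile that drives which network services get provisioned. The sketch below is purely illustrative; the keys and service names are not any real product's policy schema:

```python
# Hypothetical application profile: the answers to the architecture and
# business-goal questions, captured so they can drive provisioning.
app_profile = {
    "name": "storefront",
    "architecture": "three-tier",        # e.g. "two-tier", "three-tier", "microservices"
    "exposes_apis": True,
    "uses_app_proxy": True,
    "success_metric": "revenue growth",  # or "productivity", "user adoption"
}

def required_network_services(profile):
    """Translate an app profile into the network services it should receive."""
    services = ["ip-addressing", "vlan-or-overlay"]      # the bare minimum
    if profile["architecture"] in ("three-tier", "microservices"):
        services.append("load-balancing")
    if profile["exposes_apis"] or profile["uses_app_proxy"]:
        services += ["web-application-firewall", "tls-termination"]
    if profile["success_metric"] == "revenue growth":
        services.append("performance-monitoring")        # tie delivery to the business goal
    return services

print(required_network_services(app_profile))
```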

Operationalize. Everything.

Whether you're approaching operationalization of the network from the perspective of implementing an SDN architecture or by applying the principles associated with DevOps, you're essentially going to have to embrace and adopt the same basic tenets: automation, sharing and common measurements that result in a cultural change across all of IT's operational groups.

To succeed in an application world you're going to have to operationalize all the things.

And that includes the network.

More in a presentation dedicated to this topic: Operationalize all the Network Things!

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
