OpEx Savings and the Ever-Present Emergence of SDN

How do those OpEx savings manifest themselves?

Software-defined networking is fundamentally about two things: the centralization of network intelligence to make smarter decisions, and the creation of a single administrative touch point (or a smaller number of them) to allow for streamlined operations and to promote workflow automation. The former can potentially lead to new capabilities that make networks better (or create new revenue streams), and the latter is about reducing the overall operating costs of managing a network.

Generating revenue makes perfect sense for the service providers who use their network primarily as a means to drive the business. But most enterprises use the network as an enabling entity, which means they are more interested in the bottom line than the top. For these network technology consumers, the notion of reducing costs can be extremely powerful.

But how do those OpEx savings manifest themselves?

OpEx you can measure

When we consider OpEx, it’s easy to point to the things that are measurable: space, power, and cooling. So as enterprise customers examine various solutions, they will look at how many devices are required, and then at how those devices consume space, power, and cooling. It is relatively straightforward to do these calculations and line up competing solutions. Essentially, you fix the number of access ports and the oversubscription ratio, and from there you can line up solutions side by side in a mostly apples-to-apples comparison.
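As a back-of-the-envelope illustration, here is a minimal sketch of that comparison in Python. Every figure in it (port counts, power draws, utility and space rates) is a hypothetical placeholder; in practice you would substitute vendor data sheet numbers and your own facility rates.

```python
import math

def annual_facility_opex(access_ports, oversub_ratio, ports_per_switch,
                         watts_per_switch, ru_per_switch,
                         cost_per_kwh=0.12, cooling_factor=0.5,
                         cost_per_ru_year=150.0):
    # Access layer: enough switches to terminate every access port.
    access_switches = math.ceil(access_ports / ports_per_switch)
    # Aggregation layer: uplink capacity = access capacity / oversubscription.
    agg_switches = math.ceil(access_ports / oversub_ratio / ports_per_switch)
    devices = access_switches + agg_switches

    power = devices * watts_per_switch / 1000 * 24 * 365 * cost_per_kwh
    cooling = power * cooling_factor           # cooling cost tracks power draw
    space = devices * ru_per_switch * cost_per_ru_year
    return devices, power + cooling + space

# Hold port count and oversubscription constant so the comparison is
# apples to apples; only the per-device characteristics differ.
for name, ports, watts in [("Solution A", 48, 350), ("Solution B", 32, 250)]:
    devices, cost = annual_facility_opex(access_ports=2000, oversub_ratio=3,
                                         ports_per_switch=ports,
                                         watts_per_switch=watts,
                                         ru_per_switch=1)
    print(f"{name}: {devices} devices, ~${cost:,.0f} per year")
```

Because both solutions are sized against the same port count and oversubscription ratio, the comparison reduces to per-device characteristics.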

But SDN by itself does very little to impact the number of devices in a reference architecture. And when everyone is using roughly the same device components, these calculations start to converge on a narrow set of values.

It is worth noting that while SDN doesn’t change architectures a ton, there are other technologies that do. So I am not suggesting that all architectures are the same but rather that SDN doesn’t play the dominant role in determining what those architectures look like.

Beyond the easily measured

Once you get beyond the things that are easy to measure, the OpEx story gets a little bit tougher. To be fair to vendors, a lot of this is because most customer environments are poorly instrumented. It is difficult to know how much spend is associated with specific tasks, or even with certain parts of the network. Again, the default behavior is to lean on the things that are most easily measured.

At the top of that list is headcount. Most companies understand how human resources are distributed along different network boundaries. So the tendency is to look at a combination of the physical characteristics and headcount to arrive at a general number for OpEx.

So where do OpEx models fall down?

Imagine for a moment that you are a CIO (or, if you are a CIO, simply reflect on reality for a moment). As SDN solutions lay claim to reducing the number of people required, consider how you are likely to respond over the first year of your newfound efficiency. If your new SDN-powered network does indeed require fewer people to manage, how likely are you to actually cut expenses and let people go?

If we are being honest, the answer is: exceedingly unlikely. Cutting headcount is one of those things that plays out well in slides and loosely constructed models, but operating a business is seldom that simple. More likely, your efficiency gains will be translated into a whole set of other things that your team can now do rather than deal with the mundane tasks of operating a network.

This is important though, because it means that the OpEx savings you based your decisions on might never really come to fruition—at least not in the way that you planned up front. And if you sold your decision based on these savings, you will find yourself in a difficult position during the next round of budget planning when you are forced to justify your existing staff in the face of promised reductions.

It takes a visionary

Just because more efficient operations didn’t lead you to cut half your staff doesn’t mean you did not achieve some benefit. The visionary who introduces a more manageable infrastructure will find that, over time, the operator-to-device ratio becomes more favorable. More simply, each operator can cover a larger number of devices, which means that you can grow your datacenter capacity faster than you staff new operators.
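To make that arithmetic concrete, here is a toy sketch; every number in it is invented purely for illustration:

```python
# Hypothetical coverage ratios: how many operators a growing fleet needs
# as each operator can cover more devices.
planned_devices = 1500            # fleet size after planned growth
devices_per_op_today = 50         # status quo coverage ratio (assumed)
devices_per_op_improved = 150     # post-automation coverage ratio (assumed)

print(planned_devices / devices_per_op_today)     # 30.0 operators needed
print(planned_devices / devices_per_op_improved)  # 10.0 operators needed
```

The savings show up not as layoffs but as growth you can absorb without new hires.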

Even if you are not planning to add a ton of new capacity, the people that you have will be more effective. For example, if less time is spent battling network mechanics, more time can be spent doing the things that you know you ought to be doing but simply lack the time to do now—like documenting your operating procedures, or expanding your test capabilities, or even conducting more training and giving your team time to explore difference-making ideas.

The impact on SDN deployments

Promising OpEx savings is always tough, especially when realizing them requires an initial capital outlay. While the arguments can be compelling in the moment, there is just never a good time to spend money to save money. When times are tight, the dollars are hard to find. When things are booming, there are other, more pressing priorities. And so these kinds of OpEx-friendly changes tend to linger until some burning platform creates a compelling reason to leap from one architecture to another.

As vendors rely on some of these loose OpEx models, it means we could be in a position where the emergence of SDN is always right on the horizon. Like a mirage, it seems to move out a few more months every time we get closer. More to the point, we simply will not see mass SDN adoption purely for the sake of shaving off a few OpEx dollars. There has to be a compelling vision to get people to make the change…or maybe just a visionary.

The bottom line

It is extremely hard to put a value on doing those strategic things your staff never gets to do. How do you create a business case around developing your teams? How do you quantify the impact of automation that reduces the number of human errors in your datacenter? How do you explain that an additional architectural review saves the business money? It’s not trivial. This is why it takes a visionary to cap investment in existing architectures and cross the chasm to embrace something new. There’s a reason that groundbreaking change is hard. The question for leaders is whether their job is simply to keep the lights on or to drive meaningful change into the business.

For the network engineers quietly fearing change or unnecessarily adopting a skeptical position, just know that the likely end game isn’t the elimination of your job. Rather, you will likely be more critical than ever and less replaceable as you spend more of your time thinking about the hard problems as opposed to keying in simple configuration changes and executing excruciating troubleshooting procedures.

And for the vendors, it likely means looking more at opportunity costs than headcount costs as you construct your OpEx models. Of course this means knowing the customers you are selling to a bit better, which is probably a good thing for everyone.

[Today’s fun fact: Dolphins sleep with one eye open. It’s probably why they make good Navy SEALs]

More Stories By Michael Bushong

The best marketing efforts combine deep technology understanding with a highly approachable means of communicating. Plexxi’s Vice President of Marketing Michael Bushong acquired these skills during 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper’s flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and at ASIC design tool companies Synopsys and Magma Design Automation. Michael’s undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn’t rocket science."
