


There was an article by Thomas Gryta in Monday’s Wall Street Journal discussing AT&T’s Domain 2.0 vendor list. The article is behind a paywall, so I won’t quote liberally from it here, but the major takeaways were these:

  • AT&T is looking to cut billions in infrastructure purchases by expanding the vendors from whom they buy equipment
  • This expansion includes white box solutions, and AT&T is opening the door to smaller companies and startups the company would not have previously considered
  • SDN and NFV are reducing the reliance on underlying hardware
  • Upgrades will not require a rip and replace
  • “What used to take 18 months should take minutes” — John Donovan, AT&T

First, we should be fairly careful before drawing a ton of conclusions about vendor revenues based on this. AT&T will still deploy the likes of Cisco and Juniper en masse. Second, the Domain 2.0 project will not change major buying patterns for some time. So while there was a downward reaction in the stock market, the actual financial impact will be unknown for a few years.

That said, the announcement is significant on a couple of levels. AT&T’s endorsement of some of the major networking technology trends likely bolsters the case for the eventual emergence of these technologies. It also sets a time horizon (5 years, per the article) over which we should start to see deployments. This likely serves as the outer bound for carrier adoption.

But more than the long-term vendor implications, what are the drivers in the industry that lead to this type of shift?

AT&T is looking to cut billions

It has been well-documented how expensive networking gear is. On the carrier side, when you exclude Huawei (because of DoD concerns), there are really only a small number of vendors in the space. With so few competitors, there is not a lot of downward pressure on price. The result is that carriers have been forced to pick and choose their equipment from a menu of high-priced options.

So long as demand for new capacity did not outpace budgets, this was a tenable (though not desirable) situation. But traffic continues to grow at a geometric rate, which means that at some point in the not-so-distant future the cost and revenue lines will cross, ultimately making the business unviable. AT&T is reacting to this now in the hopes of not only keeping those lines from crossing but also widening the gap between them (read: increasing profits).
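The lines-crossing argument is easy to make concrete with a toy model. All of the numbers below are hypothetical, chosen only to illustrate the shape of the problem: cost compounds with traffic while revenue grows roughly linearly, so the crossing point is a matter of when, not if.

```python
# Toy model (all numbers hypothetical): traffic-driven cost grows
# geometrically while revenue grows roughly linearly. Find the year
# the cost line crosses the revenue line.

def crossing_year(cost0, cost_growth, rev0, rev_add, max_years=50):
    """Return the first year cost exceeds revenue, or None if it never does."""
    cost, revenue = cost0, rev0
    for year in range(1, max_years + 1):
        cost *= 1 + cost_growth      # compounding, traffic-driven cost
        revenue += rev_add           # flat incremental revenue per year
        if cost > revenue:
            return year
    return None

# e.g. $2B in costs growing 30%/yr vs $10B in revenue adding $0.5B/yr
print(crossing_year(2.0, 0.30, 10.0, 0.5))  # → 8
```

Even with revenue starting at five times cost, the compounding line wins within a decade, which is why the reaction has to come years before the lines actually meet.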

AT&T is experiencing what a lot of infrastructure owners are experiencing: namely, the Year One problem of finding the dollars to keep up with capacity growth is increasingly difficult. When your deployments are expansive, you have to not only add the requisite new capacity but also refresh devices that are perpetually reaching the end of their useful life. The result is an annual CapEx spend that is not sustainable.

White boxes, smaller companies, and startups

Make no mistake about it: the number one downward force on price is competition. The popular school of thought here is that commodity means cheap. But while the two are correlated, commoditization does not by itself cause prices to fall. I have written before about the profit margins on bottled water. Water remains one of the most commoditized products available, and yet water companies are making margins upwards of 200%.

The real source of pricing relief is competition. And AT&T is very predictably opening up their network to a host of new combatants. Note that they are opening the door to these players, not guaranteeing that they will win. AT&T is setting up their own Thunderdome and allowing the vendors to do whatever they will to compete for their rather substantial business.

As this competition heats up, the incumbents will absolutely tout their support and services organizations. These really are the biggest differentiators once the architectural playing field is leveled. It will be interesting to see how AT&T handles this. The larger companies have larger portfolios with bloated software codebases. Put differently, they require more support. Will AT&T engage with smaller companies whose narrower product focus requires a smaller support footprint?

SDN and NFV; no more rip and replace

The article suggests that these technologies are reducing the dependency on the underlying hardware. While I understand the spirit of the comment, I actually think this is somewhat incorrect. The reality now is that the vast majority of networking features in the big incumbents are delivered in software already. In many cases (routing protocols, for example), the dependence on underlying hardware is near zero already. Juniper, for instance, was successful in extending routing protocols to new platforms largely because of the platform-independence within that part of the software.

The real issue here is that the software and the hardware are inseparable. The meaningful point is not whether the changes are made in the hardware but rather what is required to push those changes into the network. What SDN and NFV do is allow a layer of functionality to be built on top of the existing network (typically using a controller like OpenDaylight or NSX as a platform). This provides a new path for the introduction of new capabilities, and one that does not require as frequent upgrades of the underlying hardware.
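The controller pattern described above can be sketched in a few lines. This is a deliberately minimal illustration, not how OpenDaylight or NSX are actually implemented, and every name in it is hypothetical: the point is only that a new capability lives entirely in the controller layer and reaches the devices through a narrow southbound interface, so the device software itself never has to be upgraded.

```python
# A minimal sketch (hypothetical names throughout) of the controller
# pattern: features are added at the controller layer and pushed
# through a narrow southbound interface to unmodified devices.

class Device:
    """Stand-in for a switch that exposes only a generic rule table."""
    def __init__(self, name):
        self.name = name
        self.rules = []

    def install_rule(self, match, action):
        # This one method is the entire "southbound" API.
        self.rules.append((match, action))

class Controller:
    def __init__(self, devices):
        self.devices = devices

    def quarantine(self, bad_prefix):
        # A brand-new "feature" added purely at the controller layer:
        # drop traffic from a suspect prefix across the whole fleet.
        for dev in self.devices:
            dev.install_rule(match={"src": bad_prefix}, action="drop")

fleet = [Device("edge-1"), Device("edge-2")]
Controller(fleet).quarantine("203.0.113.0/24")
print(sum(len(d.rules) for d in fleet))  # → 2 (every device got the rule)
```

Notice that adding the quarantine feature touched only the controller class; the `Device` code, standing in for shipped hardware and firmware, is unchanged.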

It also changes the maintenance and failure domains in a significant way. Not only can features be added separately, but they can also be upgraded with less risk to subscribers. The ability to make these changes while staying within AT&T’s billing and customer service constraints should not be overlooked.

What used to take 18 months

This is an obvious nod to the workflow issues that plague any large network operator, particularly those with sprawling networks that require extensive OSS/BSS deployments. Managing thousands of devices through pinpoint control over static configuration is tedious at best. Trying to reconcile edge policy across multiple network domains managed by different teams all in support of any kind of seamless service delivery requires the kind of manual organizational orchestration that would make grown men cry.
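To see why this reconciliation is so painful by hand, consider a toy drift check. All of the device names and policy data below are hypothetical; the sketch just compares one desired edge policy against what each team's devices actually carry, which is the kind of audit an operator today runs manually, domain by domain.

```python
# Hedged sketch (all device data hypothetical): detect configuration
# drift between a desired edge policy and per-device static configs
# spread across teams and network domains.

desired_policy = {"acl-edge": "permit 198.51.100.0/24"}

device_configs = {
    "core-team/rtr-1":  {"acl-edge": "permit 198.51.100.0/24"},  # in sync
    "metro-team/rtr-7": {"acl-edge": "permit 198.51.0.0/16"},    # stale
    "edge-team/rtr-9":  {},                                      # missing
}

def drift(desired, configs):
    """Return devices whose config does not match the desired policy."""
    return {dev: cfg.get("acl-edge")
            for dev, cfg in configs.items()
            if cfg.get("acl-edge") != desired["acl-edge"]}

print(sorted(drift(desired_policy, device_configs)))
# two of three devices are out of policy
```

Three devices and one policy already produce two discrepancies; multiply that by thousands of devices, dozens of policies, and several teams, and the appeal of centralizing this logic in software becomes obvious.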

AT&T is embracing SDN to help clean up this mess. The unstated but significant point here is that cost cutting does not end with controlling the capital outlays. Longer term, operational cost must be addressed. What AT&T is signaling here is that they are interested in more than the Year One CapEx problem; they have their eyes simultaneously on the Year Three OpEx problem.

What next?

AT&T has very cleverly put everyone on notice. SDN represents more than a new technology; it is a new architecture. And a move to a new architecture means that the reliance on decades of esoteric, niche networking features is going away. This levels the playing field, which stimulates competition, and that will give AT&T a path to the cost cutting (both CapEx and OpEx) that they so crave.

What AT&T is less clear about is how they will get from here to there. The transition to a new architecture is a lot like getting in shape. You don’t drop 40 pounds by running on the treadmill for 22 hours. You do it by running for 45 minutes a day. Similarly, AT&T (and any other company looking to take advantage of the changes in technology) will need to commit dutifully to making the shift. This means changing how they look at gear, who they talk to, how they purchase, and how they deploy. This change will be as much organizational as it is technological.

And those vendors who understand that they are changing how AT&T thinks about business as much as how they manage a network will be in the best position to capitalize.

[Today’s fun fact: It is illegal to shave while driving a car in Massachusetts. For DevOps engineers, it is unconscionable to shave ever.]

The post appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly-approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
