Immutable Infrastructure By @LMacVittie | @DevOpsSummit [#DevOps]

Immutable infrastructure is generally defined as a stack that you build once, run one or many instances of & never change again

Can Network Infrastructure Be Immutable Infrastructure?

Immutable infrastructure, which I think is more appropriately called disposable infrastructure, has been enjoying a reinvigorated life with the success of Docker and containerization over the past year. DevOps, too, has played a role in resurrecting the notion of disposable infrastructure through its association with automation and the use of templates to automate everything from acquisition to configuration to provisioning of, well, just about everything in the application data path.

As technology trends naturally move from the nucleus of business today - application development - to the very nether regions of the application data path - the network - it makes sense to ask whether network infrastructure can ever be immutable. After all, it seems counterintuitive to apply immutability to anything in the network when trends like SDN tell us the goal is to move in exactly the opposite direction: toward fluidity and extreme dynamism.

Before we can answer that question, we have to quickly visit (or revisit, as the case may be) just what immutable infrastructure means.

I'll answer that by quoting a blog post by Julian Dunn of Chef, "Immutable Infrastructure: Practical or Not?":

Immutable infrastructure is generally defined as a stack that you build once (be it a virtual machine image, container image, or something else), run one or many instances of, and never change again. The deployment model is to terminate the instance/container and start over from step one: build a new image and throw old instances away.
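
To make that deployment model concrete, here's a minimal sketch of the rebuild-and-replace cycle, written in Python around the Docker CLI. The image and container names (myapp, v1, v2) are hypothetical and used only for illustration; the point is that a change means a new image and a new instance, never an edit to a running one.

import subprocess

def deploy(version, old_version=None):
    image = f"myapp:{version}"  # hypothetical image name and tag

    # Build a fresh image from the same definition every time.
    subprocess.run(["docker", "build", "-t", image, "."], check=True)

    # Start a brand-new instance of the new image.
    subprocess.run(
        ["docker", "run", "-d", "--name", f"myapp_{version}", image],
        check=True,
    )

    # Dispose of the old instance instead of patching it in place.
    if old_version:
        subprocess.run(["docker", "rm", "-f", f"myapp_{old_version}"], check=True)

if __name__ == "__main__":
    deploy("v2", old_version="v1")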

But why, you might be asking, would you ever do that? If you dig around, you'll find the reason is basically the disorder caused by changes over time.

Because, entropy.

The law of software entropy is described by Ivar Jacobson et al. in "Object-Oriented Software Engineering: A Use Case Driven Approach":

The second law of thermodynamics, in principle, states that a closed system's disorder cannot be reduced, it can only remain unchanged or increased. A measure of this disorder is entropy. This law also seems plausible for software systems; as a system is modified, its disorder, or entropy, always increases. This is known as software entropy.

This law also applies to systems to which firmware or system-level updates must be applied. For which hot fixes and patches are deployed. For which emergency tweaks to configuration are made that should, in a perfect world, only happen through a strictly followed change management process. The problem immutable (disposable) infrastructure is trying to solve is that the more change you introduce into a system, the more crufty and unstable it seems to grow. Disorder. Chaos. Entropy.

So enter the notion of disposable infrastructure. Based on the premise of non-change to running systems, disposable infrastructure says that if you need to make a change - to the configuration, as a patch or an upgrade - then you need to build a new image and deploy that using the same process you originally used to deploy the existing one.

And then dispose of the old one.

Whoa. Why would I do that?
The assumption is that by following a known process to create and deploy the infrastructure, you don't have to worry about whether Bob manually edited /etc/resolv.conf or added a new library out of band. If he needed that change, he made it within the context of the deployment process, and thus it is included in the known state of the infrastructure.
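
As a hedged illustration of what "within the context of the deployment process" might look like, the snippet below renders /etc/resolv.conf at image build time from settings kept in version control; the nameservers, search domain, and output path are assumptions for the example. An edit Bob makes on a running instance simply doesn't survive the next rebuild.

import os

NAMESERVERS = ["10.0.0.2", "10.0.0.3"]  # hypothetical values, kept in version control
SEARCH_DOMAINS = ["example.internal"]   # hypothetical search domain

def render_resolv_conf(path="build/etc/resolv.conf"):
    # Rendered during the image build, never edited on a running instance.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    lines = [f"nameserver {ns}" for ns in NAMESERVERS]
    lines.append("search " + " ".join(SEARCH_DOMAINS))
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    render_resolv_conf()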

This is a key concept - maintaining an externalized, known state of the infrastructure. If that sounds somewhat familiar, that's because it's very tightly tied to the concept of SDN and the decoupling of the control plane from the data plane to create a centralized command and control model from which the entire state of the network is known. Because you never change the individual nodes in the network, you don't have to worry about them getting crufty or about a route Alice added late one night to fix some weird problem. It's all there, in the controller's view of the network.

Chad Fowler explains this concept well in "Trash Your Servers and Burn Your Code: Immutable Infrastructure and Disposable Components":

If you absolutely know a system has been created via automation and never changed since the moment of creation, most of the problems I describe above disappear. Need to upgrade? No problem. Build a new, upgraded system and throw the old one away. New app revision? Same thing. Build a server (or image) with a new revision and throw away the old ones.

So let's return, then, to the question at hand: can network infrastructure be immutable (disposable) infrastructure?

Yes, it can.

Now the $64,000 question. Does such disposable network infrastructure need to be virtualized or containerized?

That's somewhat harder to answer. Yes, in the sense that such infrastructure is assumed to be a self-contained entity that can be deployed and disposed of on its own rather than as part of a larger system. You can't group together configuration files for a service on a gigantor network thing and call it disposable infrastructure, because it's part of a larger system that isn't disposable. It has to be self-contained, so a virtualized, containerized, or software-based form is the best option if you're looking for a truly disposable end-to-end application infrastructure.
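
To illustrate what "self-contained" might look like in practice, here's a sketch - my own example, not a prescription - of a per-app load balancer built as a container image, using Python to generate an nginx configuration and a Dockerfile. The server addresses, image tag, and file names are assumptions.

import pathlib
import subprocess

APP_SERVERS = ["10.0.1.10:8080", "10.0.1.11:8080"]  # hypothetical app instances

NGINX_CONF = (
    "upstream app_pool {\n"
    + "".join(f"    server {s};\n" for s in APP_SERVERS)
    + "}\n"
      "server {\n"
      "    listen 80;\n"
      "    location / { proxy_pass http://app_pool; }\n"
      "}\n"
)

DOCKERFILE = "FROM nginx:1.25\nCOPY app.conf /etc/nginx/conf.d/default.conf\n"

def build_lb_image(tag="app-lb:v1"):
    # The pool definition travels with the image; nothing is edited on a shared box.
    pathlib.Path("app.conf").write_text(NGINX_CONF)
    pathlib.Path("Dockerfile").write_text(DOCKERFILE)
    subprocess.run(["docker", "build", "-t", tag, "."], check=True)

if __name__ == "__main__":
    build_lb_image()

Because the pool definition is baked into the image, changing the pool means building a new image and replacing the instance - which is exactly the disposable model.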

Why should you care?
The reason you might care about whether network infrastructure can be immutable is that the closer an infrastructure component is to the app - i.e., the greater its app affinity - the more likely it is that changes to that component will impact the app. Load balancing, for example, can dramatically change the behavior of an application. A patch or upgrade of the load balancing service has the potential to impact the app. Similarly, upstream services are often the first to be "tweaked" out of band to address some issue - a script to detect and stop an HTTP-based attack, a TCP tweak to improve performance, and so on - and thus are likely to suffer more entropy than services located farther from the application in the data path.

Thus, the reasons you'd want your app (compute) infrastructure to be disposable apply to upstream infrastructure services, too: to contain the negative impacts of entropy on the entire application architecture stack (which includes the application services infrastructure).

This is one of the reasons you want to stop looking at load balancers as pairs of hardware bricks you insert into the network and start thinking about a more clustered, per-app, service-based approach that is virtualized or based on software. The virtualized or software instances of such application services are more disposable and fit much more easily into an externalized, automated process of provisioning and deployment that enables a disposable infrastructure approach.
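
Continuing the earlier sketch (and again assuming Docker and hypothetical names), replacing such a per-app load balancer looks like this: a pool or policy change means standing up app-lb:v2 and retiring v1, not live-editing a shared appliance. Cutting traffic over to the new instance is assumed to happen externally - via DNS, GSLB, or an SDN controller.

import subprocess

def replace_lb(new_tag, old_name, new_name, port):
    # Stand up the new load balancer instance alongside the old one.
    subprocess.run(
        ["docker", "run", "-d", "--name", new_name, "-p", f"{port}:80", new_tag],
        check=True,
    )
    # After traffic has been cut over externally, dispose of the old instance
    # entirely rather than reconfiguring it.
    subprocess.run(["docker", "rm", "-f", old_name], check=True)

if __name__ == "__main__":
    replace_lb("app-lb:v2", old_name="app-lb-v1", new_name="app-lb-v2", port=8081)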

Now you might be thinking this looks a lot like an operationalized infrastructure - something we might get with SDN. If you ignore the disposal of "v1" in favor of a completely new "v2", you basically have the same thing. The question then becomes whether the added step of disposal really keeps infrastructure entropy in check. And that's something that only you can answer based on the amount of change that actually occurs in your infrastructure. The assumption is the more change, the more entropy. Your mileage may vary.

To sum up, network infrastructure can be disposable (immutable), and the greater the application affinity of the application service, the more likely it is that such infrastructure will benefit from being disposable. Whether or not you realize it, you're probably migrating toward a disposable approach if you're adopting DevOps to operationalize your application infrastructure.

Will you ever achieve a fully disposable infrastructure? Probably not, because realistically we know that sometimes, things happen. But in terms of how we manage roll-outs and upgrades and planned changes, it's completely possible that pieces of your application infrastructure will end up disposable.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
