Immutable Infrastructure By @LMacVittie | @DevOpsSummit [#DevOps]

Immutable infrastructure is generally defined as a stack that you build once, run one or many instances of, and never change again.

Can Network Infrastructure Be Immutable Infrastructure?

Immutable infrastructure, which I think is more appropriately called disposable infrastructure, has been enjoying a reinvigorated life with the success of Docker and containerization over the past year. DevOps, too, has played a role in resurrecting the notion of disposable infrastructure, with its emphasis on automation and the use of templates to automate everything from acquisition to configuration to provisioning of, well, just about everything in the application data path.

As technology trends naturally move from the nucleus of business today - application development - to the very nether regions of the application data path - the network - it makes sense to ask whether network infrastructure can ever be immutable. After all, it seems counterintuitive to apply immutability to anything in the network when trends like SDN tell us the goal is to move in exactly the opposite direction: toward fluidity and extreme dynamism.

Before we can answer that question, we have to quickly visit (or revisit, as the case may be) just what immutable infrastructure means.

I'll answer that by quoting a blog post by Julian Dunn of Chef, "Immutable Infrastructure: Practical or Not?":

Immutable infrastructure is generally defined as a stack that you build once (be it a virtual machine image, container image, or something else), run one or many instances of, and never change again. The deployment model is to terminate the instance/container and start over from step one: build a new image and throw old instances away.

But why, you might be asking, would you ever do that? If you dig around, you'll find the reason is basically the disorder caused by changes over time.

Because, entropy.

The Law of Software Entropy is described by Ivar Jacobson et al. in "Object-Oriented Software Engineering: A Use Case Driven Approach":

The second law of thermodynamics, in principle, states that a closed system's disorder cannot be reduced, it can only remain unchanged or increased. A measure of this disorder is entropy. This law also seems plausible for software systems; as a system is modified, its disorder, or entropy, always increases. This is known as software entropy.

This law also applies to systems to which firmware or system-level updates must be applied. To which hot fixes and patches are deployed. Whose configuration gets emergency tweaks that should, in a perfect world, only be made through a strictly followed change management process. The problem immutable (disposable) infrastructure is trying to solve is that the more change you introduce into a system, the more crufty and unstable it seems to grow. Disorder. Chaos. Entropy.

So enter the notion of disposable infrastructure. Based on the premise of non-change to running systems, disposable infrastructure says that if you need to make a change - to the configuration, as a patch or an upgrade - then you need to build a new image and deploy that using the same process you originally used to deploy the existing one.

And then dispose of the old one.
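
To make that cycle concrete, here is a minimal sketch of build/deploy/dispose using the Docker CLI. It assumes Docker is installed and a Dockerfile sits in the working directory; the image and container names (myapp, v1, v2) are purely illustrative.

```python
# A minimal sketch of the build/deploy/dispose cycle using the Docker CLI.
# Assumes Docker is installed and a Dockerfile exists in the current directory;
# image and container names are hypothetical.
import subprocess
from typing import Optional

def deploy(version: str, old_version: Optional[str] = None) -> None:
    image = f"myapp:{version}"
    # 1. Build a fresh image from the same versioned definition every time.
    subprocess.run(["docker", "build", "-t", image, "."], check=True)
    # 2. Start a new instance of the new image.
    subprocess.run(["docker", "run", "-d", "--name", f"myapp-{version}", image], check=True)
    # 3. Dispose of the old instance instead of patching it in place.
    if old_version:
        subprocess.run(["docker", "rm", "-f", f"myapp-{old_version}"], check=True)

# Upgrade by building and running v2, then throwing v1 away.
deploy("v2", old_version="v1")
```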

Whoa. Why would I do that?
The assumption is that by following a known process to create and deploy the infrastructure you don't have to worry about whether Bob manually edited /etc/resolv.conf or added a new library out of band. Because if he did that, he did it within the context of the deployment process, and thus it is included in the known state of the infrastructure.

This is a key concept - maintaining an externalized, known state of the infrastructure. If that sounds somewhat familiar, that's because it's very tightly tied to the concept of SDN and the decoupling of the control plane from the data plane to create a centralized command and control model from which the entire state of the network is known. Because you never change the individual nodes in the network, you don't have to worry about them getting crufty or about a route Alice added late one night to fix some weird problem. It's all there, in the controller's view of the network.
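
As a rough illustration of what an externalized, known state can look like, here is a hedged sketch in which every setting lives in one declarative description that the build process renders into the next image. The structure and file paths are assumptions for illustration, not any particular tool's format.

```python
# Sketch: all configuration lives in one declarative description that the build
# process renders into the image, so a change like Bob's resolv.conf edit has to
# be made here (and rebuilt) rather than on a running instance. The structure
# and paths below are illustrative assumptions, not a specific tool's format.
import os

DESIRED_STATE = {
    "dns_servers": ["10.0.0.2", "10.0.0.3"],  # Bob's DNS change goes in this list
    "packages": ["nginx", "libfoo"],          # out-of-band library installs go here
}

def render_resolv_conf(state: dict) -> str:
    """Render the resolv.conf that will be baked into the next image build."""
    return "".join(f"nameserver {ip}\n" for ip in state["dns_servers"])

def write_build_context(state: dict, path: str = "build/etc/resolv.conf") -> None:
    """Write rendered config into the build context used to produce the next image."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(render_resolv_conf(state))

# Render the known state, then build a new image from build/ and dispose of the old one.
write_build_context(DESIRED_STATE)
```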

Chad Fowler explains this concept well in "Trash Your Servers and Burn Your Code: Immutable Infrastructure and Disposable Components":

If you absolutely know a system has been created via automation and never changed since the moment of creation, most of the problems I describe above disappear. Need to upgrade? No problem. Build a new, upgraded system and throw the old one away. New app revision? Same thing. Build a server (or image) with a new revision and throw away the old ones.

So let's return, then, to the question at hand: can network infrastructure be immutable (disposable) infrastructure?

Yes, it can.

Now the $64,000 question. Does such disposable network infrastructure need to be virtualized or containerized?

That's somewhat harder to answer. Yes, in the sense that such infrastructure is assumed to be a self-contained entity that can be deployed and disposed of on its own rather than as part of a larger system. You can't group together configuration files for a service on a gigantor network thing and call it disposable infrastructure, because it's part of a larger system that isn't disposable. It has to be self-contained, so a virtualized, containerized, or software form is the best option if you're looking for a truly disposable end-to-end application infrastructure.

Why should you care?
The reason you might care about whether the network infrastructure can be immutable or not is that the closer the infrastructure is to the app, i.e., the greater its app affinity, the more likely it is that changes to that infrastructure component will impact the app. Load balancing, for example, can dramatically change the behavior of an application. A patch or upgrade of the load balancing service has the potential to impact the app. Similarly, upstream services are often the first to be "tweaked" out of band to address some issue - a script to detect and stop an HTTP-based attack, a TCP tweak to improve performance, and so on - and thus are likely to suffer more entropy than services topologically located further upstream.

Thus, the reasons you'd want your app (compute) infrastructure to be disposable apply to upstream infrastructure services, too: to contain the negative impacts of entropy on the entire application architecture stack (which includes the application services infrastructure).

This is one of the reasons you want to stop looking at load balancers as pairs of hardware bricks you insert into the network and start thinking about a more clustered, per-app, service-based approach that is virtualized or based on software. The virtualized/software instances of such application services are more disposable and fit much more easily into an externalized, automated process of provisioning and deployment that enables a disposable infrastructure approach.
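
As a hedged sketch of that approach, upgrading a per-app software load balancer becomes a replace-then-dispose operation rather than an in-place patch. The provision/point/destroy helpers below are placeholders for whatever API your virtualization or cloud platform actually exposes; only the flow itself is the point.

```python
# Sketch: a per-app software load balancer treated as disposable. The helper
# functions are placeholders standing in for a platform's real API; the point
# is that upgrades replace the instance rather than patching it in place.

def provision_lb(app: str, image: str) -> str:
    """Placeholder: launch a new LB instance from a versioned image; return its id."""
    print(f"launching {image} for {app}")
    return f"{app}-lb-{image.split(':')[-1]}"

def point_vip_at(app: str, instance_id: str) -> None:
    """Placeholder: move the app's virtual IP / DNS record to the new instance."""
    print(f"routing {app} traffic to {instance_id}")

def destroy_lb(instance_id: str) -> None:
    """Placeholder: dispose of the old instance; any out-of-band tweaks die with it."""
    print(f"destroying {instance_id}")

def upgrade_lb(app: str, new_version: str, old_instance_id: str) -> str:
    new_id = provision_lb(app, image=f"lb:{new_version}")
    point_vip_at(app, new_id)
    destroy_lb(old_instance_id)
    return new_id

# Upgrade the (hypothetical) storefront app's LB from v1 to v2 by replacement.
upgrade_lb("storefront", "v2", old_instance_id="storefront-lb-v1")
```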

Now you might be thinking this looks a lot like an operationalized infrastructure, something we might get with SDN. If you ignore the disposal of "v1" in favor of a completely new "v2" you basically have the same thing. The question then becomes whether the added step of disposal really keeps infrastructure entropy in check. And that's something that only you can answer based on the amount of change that actually occurs in your infrastructure. The assumption is the more change, the more entropy. Your mileage may vary.

To sum up, network infrastructure can be disposable (immutable), and the greater the application affinity of the application service, the more likely it is that such infrastructure would benefit from being disposable. Whether or not you realize it, you're probably migrating toward a disposable approach if you're adopting DevOps to operationalize your application infrastructure.

Will you ever achieve a fully disposable infrastructure? Probably not, because realistically we know that sometimes, things happen. But in terms of how we manage roll-outs and upgrades and planned changes, it's completely possible that pieces of your application infrastructure will end up disposable.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
