DevOps: Bringing 'Life' to Application Lifecycle Management

The DevOps methodology is a straightforward and obvious initiative to cater for the changing face of application development

For most organizations, application releases are tense, high-pressure events where risk mitigation and tight deadlines are key. This is made worse by internal silos and the consequent lack of cohesion that exists not just within the microcosm of IT infrastructure teams but also among the broader departments of development, QA and operations. Now, with application and business unit stakeholders increasingly demanding that new releases be deployed quickly and successfully, the interdependence of software development and IT operations is being seen as integral to the successful delivery of IT services. Consequently, businesses are recognizing that this can't be achieved unless the traditional methodologies and silos are readdressed or changed. Cue the emergence of a new methodology that's simply called DevOps.

The advancement and agility of web and mobile applications have led many to question the validity, or even practicality, of the traditional waterfall methodology of software development. The waterfall's rigorous sequence of conception, initiation, analysis, design, construction, testing, production/implementation and maintenance can seem almost archaic in an age when the industry demands "agility." No one can dispute the waterfall methodology's relevance, certainly not companies such as Sony, which suffered the embarrassment of the rootkit bug, but with web and mobile app releases needing to be deployed rapidly and regularly, can companies really continue down such a long and continuous integration process?

Much of the problem stems from legacy IT culture rather than the methodology itself: each individual is responsible for their sole role, within their specific field, within their particular department. Consequently, within the same company the development team is often seen as the antithesis of operations, constantly driving change in order to meet user demand for frequent delivery of new features. In stark contrast, operations is focused on predictability, availability and stability, factors that are nearly always put at risk whenever development requests a "change" be introduced.

This disengagement is further exacerbated by development teams delivering code with little or no involvement from their operations teams. Additionally, to support their rapid deployment requirements, development teams will use tools that emphasize flexibility and consequently bear little or no resemblance to the rigid performance- and availability-based toolsets of operations. In fact, it would be rare to find either operations or development teams even aware of their counterparts' toolsets, let alone taking any interest in potentially sharing or integrating them.

On the other side, the operations team will do everything it can to stall any changes and new features proposed for the production environment in an attempt to mitigate unwanted risk. When development teams finally get a software release picked up by operations, it's usually after operations has gone through a laborious process of script creation and config file editing to accommodate the deployment on a production runtime environment that is significantly different from the one used by development.

Indeed, it's commonplace to see inconsistencies between the runtime environment development teams use to run their code (typically low-resource desktops) and the high-resource, server OS-based environments used by operations. With development having tested and successfully run everything on a Windows 7 desktop, it's no surprise that once operations deploys it on a Unix-based server with different Java versions, software load balancers and completely different properties files, failure and chaos ensue during a "Go Live." What follows is the internal blame game: operations points to an application that isn't secure, needs restarting and isn't easy to deploy, while development claims it worked perfectly fine on their workstations and that operations should therefore be capable of seamlessly scaling it and making it work on production server systems.
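To make that gap concrete, here is a minimal sketch (all names and paths are hypothetical, not any particular team's tooling) of a script that records an environment "fingerprint." Run on both a developer workstation and a production server, a diff of its output surfaces exactly the kind of discrepancies described above:

```python
import json
import platform
import subprocess

def java_version() -> str:
    """Return the first line of 'java -version' output, or 'not installed'."""
    try:
        # Most JVMs print version information to stderr rather than stdout.
        result = subprocess.run(["java", "-version"],
                                capture_output=True, text=True)
        return (result.stderr or result.stdout).splitlines()[0].strip()
    except (OSError, IndexError):
        return "not installed"

def fingerprint() -> dict:
    """Collect the runtime facts that most often differ between dev and prod."""
    return {
        "os": platform.platform(),            # e.g., a Windows 7 desktop vs. a Unix server
        "python": platform.python_version(),
        "java": java_version(),
    }

if __name__ == "__main__":
    # Run on the dev workstation and the prod server, then diff the two outputs.
    print(json.dumps(fingerprint(), indent=2, sort_keys=True))
```

Anything that differs between the two JSON outputs is a candidate explanation for "it worked on my machine."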

This is the disconnect that the panacea termed DevOps was established to address. From the outset, DevOps pushes for collaboration and communication among the development, operations and quality assurance teams. Based on the core concept of unifying processes into a comprehensive "development to operations" lifecycle, the aim is to inculcate an end-to-end sense of ownership and responsibility across all departments. While the QA, development and operations teams have unique methods and aims within the process, they all serve a single goal and overarching methodology. This entails giving the development team more environmental control while concurrently ensuring operations has a better understanding of the application and its infrastructure requirements. Operations even takes part in (and consequently co-owns) the development of applications, which it can in turn monitor throughout the development-to-deployment lifecycle.

The result is an elimination of the blame culture, especially in the case of application issues, as both software development and operational maintenance are co-owned processes. Instead of operations blaming development for flaky code and development blaming operations for an unstable infrastructure, the trivial and time-consuming internal finger-pointing is replaced with traceable root cause analysis conducted by all departments as a single team. Consequently, application deployment becomes more reliable, predictable and scalable to the business' demands.

Additionally, DevOps calls for a unified and automated tooling process. The evolution of web applications and Big Data means infrastructure needs to scale and grow considerably faster, so the traditional model of firefighting and reactive patching and scripting is no longer a viable option. Automation and unified tools, whether for deployment, workflows, monitoring or configuration, are a must, not just to meet time constraints but also to safeguard against configuration discrepancies and errors. Hence the growing awareness of DevOps has helped drive the emergence of open source software that deals with this very challenge, ranging from configuration management to provisioning and automation tools such as Rundeck, Vagrant, Puppet and Chef. While these tools are familiar to development teams, the aim is to make them the concern and interest of operations as well.
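At the heart of tools like Puppet and Chef is declarative, idempotent configuration: describe the desired end state once, and let the tool converge every machine toward it on each run. The sketch below illustrates that pattern in plain Python; the file paths and contents are hypothetical, and this is not any real tool's API:

```python
import os

# Desired state: each config file and the exact content it should contain.
# Paths and values are hypothetical, for illustration only.
DESIRED_FILES = {
    "/etc/myapp/app.properties": "port=8080\nenv=production\n",
    "/etc/myapp/logging.conf": "level=INFO\n",
}

def converge(desired):
    """Idempotently bring each file to its desired content.

    Running this twice changes nothing the second time; that safety is
    what lets configuration be applied on every deployment, on every host.
    """
    for path, content in desired.items():
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current == content:
            print(f"unchanged: {path}")   # already in the desired state
            continue
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w") as f:
            f.write(content)
        print(f"converged: {path}")       # created, or corrected drift

if __name__ == "__main__":
    converge(DESIRED_FILES)
```

Real configuration management tools layer dependency ordering, templating and reporting on top of this convergence loop, but the declarative, repeatable core, the opposite of one-off scripts and manual config file edits, is the same.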

The DevOps methodology is a straightforward and obvious initiative to cater for the changing face of application development and deployment. Despite this, its greatest challenge lies with people and their willingness to change. Both development and operations teams need to move from their short-term, silo-focused objectives to the broader long-term goals of the business. That necessitates a concerted and unified effort from both teams to have applications deployed in minimum time and with minimum risk. I've often worked with operations staff who have little or no idea how the applications they support relate to the products and services their companies deliver, how they generate revenue, or what value they provide to the end user. I've also worked with development teams outsourced to another country where communication was non-existent, and not just because of the language barrier. As the business' demands on IT rapidly increase and change, so too must the silo mindset. DevOps aims to initiate an inevitable change; those who resist may find that they themselves get changed. As for those who embrace it, they may just find application releases a lot less painful.

More Stories By Archie Hendryx

SAN, NAS, Backup/Recovery & Virtualisation Specialist.
