DevOps: Bringing 'Life' to Application Lifecycle Management

The DevOps methodology is a straightforward and obvious initiative to cater for the changing face of application development

For most organizations, application releases are tense, high-pressure events in which risk mitigation and tight deadlines are paramount. This is made worse by internal silos and the consequent lack of cohesion, not just within the microcosm of IT infrastructure teams but also among the broader departments of development, QA and operations. With application and business unit stakeholders now demanding that new releases be deployed quickly and successfully, the interdependence of software development and IT operations is increasingly seen as integral to the successful delivery of IT services. Businesses are recognizing that this can't be achieved unless the traditional methodologies and silos are readdressed or changed. Cue the emergence of a new methodology simply called DevOps.

The advancement and agility of web and mobile applications is one of the key factors that have led many to question the validity, or even practicality, of the traditional waterfall methodology of software development. The waterfall's rigorous sequence of conception, initiation, analysis, design, construction, testing, production/implementation and maintenance can seem almost archaic in an age when the industry demands "agility." No one disputes the value of that rigor, certainly not companies such as Sony, which suffered the embarrassment of the rootkit bug, but with web and mobile app releases needing to be deployed rapidly and regularly, can companies really continue down such a long and protracted release process?

Much of the problem stems from legacy IT culture rather than the methodology itself, a culture in which each individual is responsible solely for their own role, within their specific field, within their particular department. Consequently, within the same company, the development team is often seen as the antithesis of operations, with its constant drive for change to meet user demands for frequent delivery of new features. In stark contrast, operations is focused on predictability, availability and stability: factors that are nearly always put at risk whenever development requests a "change" to be introduced.

This disengagement is further exacerbated when development teams deliver code with little or no involvement from their operations counterparts. Additionally, to support their rapid deployment requirements, development teams will use tools that emphasize flexibility and consequently bear little or no resemblance to the rigid performance- and availability-focused toolsets of operations. In fact it would be rare to find either operations or development teams even aware of their counterparts' toolsets, let alone taking any interest in sharing or integrating them.

On the other side, the operations team will do everything it can to stall any changes and new features proposed for the production environment in an attempt to mitigate unwanted risk. When a software release is eventually picked up by operations, it's usually only after a laborious process of script creation and config file editing to accommodate deployment on a production runtime environment that is significantly different from the one used by development.

Indeed it's commonplace to see inconsistencies between the runtime environment the development teams have run their code on (typically low-resourced desktops) and the high-resource, server-OS-based environments used by operations. With development having tested and successfully run everything on a Windows 7 desktop, it's no surprise that once operations deploy it on a Unix-based server with different Java versions, software load balancers and completely different properties files, failure and chaos ensue during the "go live." What follows is the internal blame game: operations will point to an application that isn't secure, needs restarting and isn't easy to deploy, while development will claim that it worked perfectly fine on their workstations and that operations should therefore be capable of seamlessly scaling it and making it work on production server systems.
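One pragmatic first step toward closing that gap is simply to make environment differences visible before a release. The following is a minimal, purely illustrative sketch (the file path and property names are hypothetical, and a real setup would capture far more): a small Python script that records a few facts about a runtime environment so that the output from a developer workstation can be diffed against the output from the production server.

    # environment_fingerprint.py: illustrative sketch only; the properties path is hypothetical.
    # Run on both the developer workstation and the production host, then diff the JSON output.
    import json
    import platform
    import subprocess

    def java_version() -> str:
        # Most JVMs print their version banner to stderr rather than stdout.
        try:
            result = subprocess.run(["java", "-version"], capture_output=True, text=True)
        except FileNotFoundError:
            return "java not found"
        output = result.stderr or result.stdout
        return output.splitlines()[0] if output else "unknown"

    def read_properties(path: str) -> dict:
        # Minimal parser for simple key=value properties files.
        props = {}
        try:
            with open(path) as handle:
                for line in handle:
                    line = line.strip()
                    if line and not line.startswith("#") and "=" in line:
                        key, value = line.split("=", 1)
                        props[key.strip()] = value.strip()
        except FileNotFoundError:
            props["<missing file>"] = path
        return props

    if __name__ == "__main__":
        fingerprint = {
            "os": platform.platform(),
            "java": java_version(),
            "app_properties": read_properties("/opt/myapp/conf/app.properties"),  # hypothetical path
        }
        print(json.dumps(fingerprint, indent=2, sort_keys=True))

Trivial as it is, running something like this on both sides of the fence turns the "it worked on my machine" argument into a concrete, diffable list of differences that development and operations can work through together.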

This is the problem that the panacea termed DevOps was established to address. From the outset, DevOps pushes for collaboration and communication between the development, operations and quality assurance teams. Based on the core concept of unifying processes into a comprehensive "development to operations" lifecycle, the aim is to instill an end-to-end sense of ownership and responsibility across all departments. While the QA, development and operations teams have unique methods and aims within the process, they all serve a single goal and overarching methodology. This means giving the development team more control over its environments while ensuring operations has a better understanding of the application and its infrastructure requirements. It also means operations taking part in (and consequently co-owning) the development of applications, which it can then monitor throughout the development-to-deployment lifecycle.

The result is the elimination of the blame culture, especially when application issues arise, because both software development and operational maintenance are co-owned. Instead of operations blaming development for flaky code and development blaming operations for an unstable infrastructure, the trivial and time-consuming internal finger-pointing is replaced by traceable root cause analysis carried out by all departments as a single team. Consequently application deployment becomes more reliable, predictable and scalable to the business' demands.

Additionally, DevOps calls for a unified and automated tooling process. The evolution of web applications and Big Data means infrastructure needs to scale and grow considerably faster, so the traditional model of firefighting and reactive patching and scripting is no longer viable. Automation and unified tools, whether for deployment, workflows, monitoring or configuration, are a must, not just to meet time constraints but also to safeguard against configuration discrepancies and errors. Hence the growing awareness of DevOps has been accompanied by the emergence of open source software that deals with this very challenge, including configuration management, provisioning and monitoring tools such as Rundeck, Vagrant, Puppet and Chef. While these tools are familiar to development teams, the aim is to also make them the concern and interest of operations.
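What distinguishes these tools from the old patch-and-script approach is that configuration is expressed as a desired state and applied idempotently, rather than as a one-off sequence of manual steps. The sketch below is not Puppet or Chef syntax; it's a deliberately simplified, hypothetical Python illustration of that idea, in which each resource declares the state it should be in, checks whether it already is, and changes only what is out of line, so the same "manifest" can be run safely against every environment.

    # desired_state.py: a simplified, hypothetical illustration of declarative,
    # idempotent configuration management (not the actual Puppet or Chef API).
    import os

    class FileResource:
        """Ensure a file exists with exactly the given content."""
        def __init__(self, path: str, content: str):
            self.path = path
            self.content = content

        def in_desired_state(self) -> bool:
            if not os.path.exists(self.path):
                return False
            with open(self.path) as handle:
                return handle.read() == self.content

        def apply(self) -> None:
            with open(self.path, "w") as handle:
                handle.write(self.content)

    def converge(resources) -> None:
        # Idempotent run: only resources that are out of state get changed, so
        # repeated runs on dev, QA and production converge on the same configuration.
        for resource in resources:
            if resource.in_desired_state():
                print(f"unchanged: {resource.path}")
            else:
                resource.apply()
                print(f"corrected: {resource.path}")

    if __name__ == "__main__":
        manifest = [
            # Hypothetical application properties kept identical across environments.
            FileResource("/tmp/myapp-demo.properties", "db.pool.size=20\nlog.level=INFO\n"),
        ]
        converge(manifest)

Because a run that finds everything already in its desired state changes nothing, the same definition can live in version control and be applied by developers and operations alike, which is precisely the shared, unified tooling DevOps argues for.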

The DevOps methodology is a straightforward and obvious initiative to cater for the changing face of application development and deployment. Despite this, its greatest challenge lies with people and their willingness to change. Both development and operations teams need to move from their short-term, silo-focused objectives to the broader long-term goals of the business. That necessitates a concerted and unified effort from both teams to have applications deployed in minimum time with minimum risk. I've often worked with operations staff who have little or no idea how the applications they're supporting relate to the products and services their companies deliver, how those applications generate revenue, or how they provide value to the end user. I've also worked with development teams outsourced to another country where communication was non-existent, and not only because of the language barrier. As business demands on IT rapidly increase and change, so too must the silo mindset. DevOps aims to initiate an inevitable change; those who resist may find that they themselves get changed. Those who embrace it may just find application releases a lot less painful.

More Stories By Archie Hendryx

SAN, NAS, Backup/Recovery & Virtualisation Specialist.
