How to Adopt DevOps in Your Organization

It does not take much to understand the benefits of the DevOps culture, processes, and tools. However, implementing DevOps in your organization is not as straightforward and usually involves more than simply setting up tools. You have to convince team members, map old processes to new ones, and maybe even change the structure of organizational reporting and budgeting.

Unfortunately, there is no magic formula for implementing DevOps in an organization, but there are some strategies to help.

One proven strategy to adopt DevOps is to leverage log analysis

Most technology implementation problems actually end up being people problems.

When you watch a series of failed projects, you start to see a common trend that usually involves the relationship between users, implementers, and decision makers.

Users often ask for functionality and don't understand (or even care) how it is executed. Implementers are stuck holding the bag to get it all done, but are not directly tied to the benefits. Decision makers are so detached that they can only look at hard numbers like time-to-market and ROI.

The cultural side of DevOps tells us this is the wrong approach

There should not be silos between users, implementers, and decision makers. But this is a surprisingly common problem, even in high-tech companies: operations teams are implementing tools for development teams that were bought by someone else entirely.

The disconnect between these teams results in failed projects.

Fortunately, operations and dev teams share some common ground

They both want to move fast and both are focused on results.

Operations does have the added burden of maintaining uptime, which can sometimes pull their focus away from results.

Both also share a love for data.

Today it is hard to find someone who does not love digging into analytics and dashboards. The insights and visibility that modern analytics platforms provide can cast a spell on users.

Human nature compels us to learn more with less effort. This means there is a window of opportunity for organizations to implement and encourage adoption of DevOps with data.

Data in the modern software delivery pipeline can be used to know when something is wrong, when something can be improved, and what users like and don't like. You could say that the DevOps practice is data obsessed. The mechanism for getting at this information is a robust log analysis platform: logging from the systems, the applications, and even the technical support and project management platforms.
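
As a rough illustration of what "getting at this information" can look like, here is a minimal sketch in Python, assuming a hypothetical application log file (app.log) whose lines look like "<timestamp> <LEVEL> <component> <message>"; a real log analysis platform does far more, but the underlying question is the same:

    # Minimal sketch: summarize error counts per component from a plain log file.
    # The file name and line layout are assumptions for illustration only.
    from collections import Counter

    def error_counts(path):
        counts = Counter()
        with open(path) as handle:
            for line in handle:
                parts = line.split(maxsplit=3)
                if len(parts) >= 3 and parts[1] == "ERROR":
                    counts[parts[2]] += 1  # third field is the component name
        return counts

    if __name__ == "__main__":
        for component, count in error_counts("app.log").most_common():
            print(f"{component}: {count} errors")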

Taking this same analytics platform, which will become the heart of your future DevOps practice, and making it a tool right now for your data-obsessed operations and dev teams will make the move to DevOps much smoother.

Here's how you can make the move to DevOps easier:

Start early with a common pool of data.

You do not need to wait. Identify an existing pool of data that the organization desperately wants to make more sense of and read that data into a log analysis platform. You can then build a standard set of dashboards that answer common questions. As soon as you share this with the team, watch out; they will immediately double-click into the data and get addicted to seeing what else they can learn. You can do this without even implementing a continuous integration or delivery pipeline.
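
As a rough sketch of that first step, the snippet below turns a hypothetical web access log (access.log, roughly in common log format) into the kind of per-endpoint numbers a first dashboard could be built on; the file name and the simplified regex are assumptions, not a prescription:

    # Minimal sketch: aggregate an existing access log into dashboard-ready numbers.
    # Assumes hypothetical lines like: 10.0.0.1 - - [date] "GET /cart HTTP/1.1" 500 312
    import re
    from collections import defaultdict

    LINE = re.compile(r'"\w+ (?P<path>\S+) [^"]*" (?P<status>\d{3})')

    def requests_by_status(path):
        table = defaultdict(lambda: defaultdict(int))
        with open(path) as handle:
            for line in handle:
                match = LINE.search(line)
                if match:
                    table[match.group("path")][match.group("status")] += 1
        return table

    if __name__ == "__main__":
        for endpoint, statuses in requests_by_status("access.log").items():
            errors = sum(n for code, n in statuses.items() if code.startswith("5"))
            print(f"{endpoint}: {sum(statuses.values())} requests, {errors} server errors")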

Invite data to your meetings.

The new log analysis platform should become a common character in all meetings regarding development and infrastructure. The dashboards should be exported and put into presentations, with discussion around their insights. The organization will quickly realize that issues are identified faster, response time to issues is shorter, and everyone can get on the same page about what is going on without a huge amount of effort.

Share, and share often.

If you configure automatic alerts or periodic email reports of useful data for common users, you will notice a huge interest in this kind of reporting. You do not need to wait for a request unless there are security constraints; oversharing at the beginning is OK. Be careful though, as you might become so popular that the demand for new alerts and dashboards becomes its own task. Later, when processes are mature, you will want to cull the alerts to be more deliberate.
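
For illustration, here is a minimal sketch of one such automatic alert, assuming a hypothetical log file, error threshold, SMTP host, and recipient address; most log analysis platforms provide alerting out of the box, but the logic is roughly this:

    # Minimal sketch: email the team when the error count crosses a threshold.
    # The log path, threshold, addresses, and SMTP host are placeholder assumptions.
    import smtplib
    from email.message import EmailMessage

    THRESHOLD = 50  # assumed acceptable error count per reporting period

    def count_errors(path):
        with open(path) as handle:
            return sum(1 for line in handle if " ERROR " in line)

    def send_alert(errors):
        msg = EmailMessage()
        msg["Subject"] = f"[alert] {errors} errors in the last reporting period"
        msg["From"] = "alerts@example.com"
        msg["To"] = "dev-and-ops@example.com"
        msg.set_content(f"The error count ({errors}) crossed the threshold of {THRESHOLD}.")
        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)

    if __name__ == "__main__":
        errors = count_errors("app.log")
        if errors > THRESHOLD:
            send_alert(errors)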

Once everyone is addicted to the tool that provides them this valuable information, they will be hunting for ways to get more data into it in order to gain more insights. This naturally leads to a DevOps culture and eventually to a practice where teams move quickly and rely on data to tell them how they are doing. This culture also makes them less afraid to break something, knowing that they can learn and move on from any failure quickly ("Fail Fast").

Log analysis alone is not DevOps, but it is at the heart of any complete DevOps practice. Because log analysis is interesting to nearly all users in the organization, it is the best place to build interest, culture, and practice for DevOps, even before DevOps itself is implemented.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years' experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
