Logging and Continuous Delivery By @FloMotlik | @DevOpsSummit [#DevOps]

Continuous Delivery is the future of building high-quality software

Guest blog post by Florian Motlik, Cofounder & CTO of Codeship Inc.

Why Great Logging Is Key to Continuous Delivery

Over the last few years, Continuous Delivery has gained a massive following, with many development teams embracing the style. As with many other modern developer tools, companies have chosen either to build their own tooling or to embrace a hosted service like Codeship. In the end, whether you go with a hosted service or roll your own, the goal is to move faster and build a product that your customers really love. For that you need to iterate quickly, get feedback, and iterate again.

Successfully rolling out that process depends on many variables. Proper logging is one of those variables, and it can be a helpful tool for removing fear.

Thou shalt not be afraid
As we've moved into the age of cloud software development, team productivity is what matters. Getting started on infrastructure is virtually free, so every team starts on the same level playing field, and you need to constantly increase productivity to win.

By far the biggest killer of productivity is fear of moving faster because you might break things.

When your team, processes, or technology are not built with constant change in mind, you decrease the speed at which you release in order to regain control. This is a downward spiral that leads only to slower processes, less innovation, and ultimately losing your market.

Fear stops experiments and promotes stagnation.

Having a repeatable and easily automated process can drastically reduce that fear. The more often you execute that automated process, the safer you feel. Over time this effect grows stronger as potential issues are discovered and fixed.

A second very important improvement is getting deep insight into the processes and workflows happening in your application.

When you continuously deploy changes to your application, being able to trace any step your application makes becomes your main tool to debug your production system.

The insight you gain from looking through your logs will often immediately show you the problem a recent deploy introduced into your infrastructure.
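One simple way to make a bad deploy jump out of the logs is to stamp every log line with the currently deployed release. The sketch below uses Python's standard `logging` module; the `RELEASE` value is a hypothetical stand-in for whatever your deploy process exposes (a git SHA, a build number, and so on).

```python
import logging

# Hypothetical release identifier; in practice this would come from
# the deploy process (a git SHA, a build number, ...).
RELEASE = "4f2a9c1"

class ReleaseFilter(logging.Filter):
    """Stamp every log record with the currently deployed release."""
    def filter(self, record):
        record.release = RELEASE
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s release=%(release)s %(levelname)s %(message)s"))
handler.addFilter(ReleaseFilter())

log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("payment processed user_id=42")
```

With the release in every line, a search for errors that started with a particular release answers "did the last deploy break this?" in seconds.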

This is indispensable with Continuous Delivery.

While metrics are a great and very important way to gain insight into your infrastructure, they only represent the state of the system. To understand how the system reached that state, you also need to be able to trace and deeply understand everything happening in your infrastructure.

Integration with paging systems provides an additional level of safety on top of that, so you are always aware of problems as they happen.

Let there be light
We've grown accustomed to having full insight into our testing and deployment process, as we have been using Codeship to build Codeship for a very long time. We realized we needed that same insight into our application as well, to build the kind of infrastructure that supports the quality we want to deliver.

A good logging strategy and an overview of your most important workflows are necessary.

Defining a graph of all the states a workflow can have in your system makes it easy to add logging to each of those states and to the transitions between them. At that point logging is not an afterthought, but an integral part of your software development effort.
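The state-graph idea can be sketched in a few lines. The states and build workflow below are hypothetical, chosen only to illustrate the pattern: every legal transition is logged, and every illegal one is logged and rejected.

```python
import logging

log = logging.getLogger("workflow")
logging.basicConfig(level=logging.INFO)

# Hypothetical state graph for a build workflow: each state maps to
# the set of states it may legally transition to.
TRANSITIONS = {
    "queued":    {"cloning"},
    "cloning":   {"testing", "failed"},
    "testing":   {"deploying", "failed"},
    "deploying": {"succeeded", "failed"},
}

def transition(build_id, current, new):
    """Move a build to a new state, logging every attempt."""
    if new not in TRANSITIONS.get(current, set()):
        log.error("build=%s illegal transition %s -> %s",
                  build_id, current, new)
        raise ValueError(f"illegal transition {current} -> {new}")
    log.info("build=%s transition %s -> %s", build_id, current, new)
    return new

state = "queued"
state = transition(17, state, "cloning")
state = transition(17, state, "testing")
```

Because every state change flows through one function, the logs become a complete, searchable history of each workflow, and impossible transitions surface as errors instead of silent corruption.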

This needs to be clearly communicated to your team so everyone follows it thoroughly.

For example, we test and deploy code for thousands of companies with many different language and infrastructure requirements. Those companies connect GitHub or Bitbucket as their source code management system and deploy to various hosting providers like Heroku or AWS. There are many moving parts in that system, so we need to be able to detect and debug problems at any time without much effort. A well-thought-out logging strategy helps tremendously and makes it easy to fix issues when they come up. We can follow any build through all of our infrastructure and correlate issues between builds or across our infrastructure at any time.
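Following a single build across many services usually comes down to tagging every log line it produces with a shared identifier. A minimal sketch, assuming Python's standard `logging.LoggerAdapter` (the build id format here is made up):

```python
import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("build=%(build_id)s %(message)s"))
logger = logging.getLogger("builds")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def build_logger(build_id):
    """Return a logger whose every line carries this build's id."""
    return logging.LoggerAdapter(logger, {"build_id": build_id})

log = build_logger("b-1042")
log.info("cloned repository")
log.info("tests passed")
```

With the same id attached in every service that touches the build, one search in your central log store reconstructs the build's entire path through the infrastructure.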

You can read more about how we use Logentries at Codeship in an earlier post.

Grand Central Logging
Being able to follow any workflow means collecting the logs of your various services in one place. If your developers have to search through several sources to correlate potential problems, productivity drops and issues take far longer to resolve.
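One common way to get every service's logs into one place is to point them all at the same aggregator. The sketch below uses Python's standard `SysLogHandler`; the local address is a placeholder for your central log host, and a hosted service such as Logentries works the same way through its own handler or agent.

```python
import logging
import logging.handlers

def configure_central_logging(service_name, host="127.0.0.1", port=514):
    """Send this service's logs to a central syslog endpoint.

    The default host is a local placeholder; in production it would be
    the address of your log aggregator.
    """
    handler = logging.handlers.SysLogHandler(address=(host, port))
    handler.setFormatter(logging.Formatter(
        "service=" + service_name + " %(levelname)s %(message)s"))
    logger = logging.getLogger(service_name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

When every service calls the equivalent of `configure_central_logging("api")` at startup, debugging a cross-service problem becomes one search in one place instead of an ssh session per box.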

Conclusion
Continuous Delivery is the future of building high-quality software. Automated testing and deployment form the basis of the workflow, but many other tools, like centralized logging and error reporting, are important building blocks as well.

When your system feels like a black box, you will hesitate to release changes to it. Make sure you're not stuck with that black box; build a workflow that makes your team more productive and increases your product's quality.

If you want to learn more about Continuous Delivery you can also take a look at our crash course which can be found on the Codeship homepage.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
