Deep Insight and Collaboration in the Cloud: A Customer Story

The true power of a continuous, holistic APM approach in a cloud-based environment

Recently, one of our customers, let's call them PointInFact, ran into a very typical problem. After deploying a new version of its software, some user requests degraded horribly: requests that should have taken half a second took up to a minute. Interestingly, the PointInFact team runs a multi-tenant SaaS solution in the AWS Cloud and relies heavily on cloud services. This reliance makes User Experience Management and fault domain isolation very challenging.

Back Story: Application Running in the AWS Cloud
PointInFact runs a SaaS service. Internally this results in a multi-tenant setup where each customer has its own instance of the application it subscribes to. All of these applications and services are hosted in Amazon's EC2 cloud, where the team dynamically creates new application environments and offloads some functionality to AWS by using the provided services. Because customer satisfaction is paramount for a SaaS business, they monitor all applications and services centrally, from both an end-user and a server-side perspective, with Compuware APM.

Performance Degradation
After one deployment, the APM solution informed the operations team that user experience was degrading. A look at the geographical distribution showed them that this was not a localized phenomenon but a worldwide one.

Worldwide distribution of End User Experience

Notice all the red circles in the above screenshot; each red circle indicates frustrated users. One particularly interesting fact in this dashboard is that the average web page response time (upper-right corner) remains stable and well below the one-second mark. This means that the system in general is still running fine and is not in a general meltdown. However, it also shows why it is not good to rely on averages for monitoring and why server-side response times alone are not enough. Your end users are at the edge, around the world, and not sitting next to one of Amazon's data centers.
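
To see why a stable average can coexist with badly frustrated users, consider a small, purely hypothetical calculation (the numbers below are illustrative, not the customer's actual data): a handful of very slow requests barely moves the mean, while a high percentile exposes them immediately.

    import java.util.Arrays;

    // Illustrative only: a few very slow requests barely move the average,
    // but the 99th percentile makes the frustrated users visible.
    public class AverageVsPercentile {
        public static void main(String[] args) {
            double[] responseTimesMs = new double[1000];
            Arrays.fill(responseTimesMs, 400.0);      // most requests take ~0.4s
            for (int i = 0; i < 15; i++) {
                responseTimesMs[i] = 20_000.0;        // 1.5% of requests take 20s
            }
            Arrays.sort(responseTimesMs);

            double avg = Arrays.stream(responseTimesMs).average().orElse(0);
            double p99 = responseTimesMs[(int) Math.ceil(0.99 * responseTimesMs.length) - 1];

            System.out.printf("average: %.0f ms%n", avg); // ~694 ms, still "well below one second"
            System.out.printf("p99:     %.0f ms%n", p99); // 20000 ms, these are the red circles
        }
    }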

The next thing that the operations team did was look at the application flow. They were hoping for something big to show up immediately, but nothing much out of the ordinary showed up.

Complete Application Flow

This is not really surprising; they were looking at an application flow overview of about half a million transactions - the averaging effect in full force.

The interesting takeaway, however, was that although user experience suffered across the board, it could not be attributed to a general meltdown of the environment. It was time to look at specific transaction types and their baselines.

High-level Performance dashboard that shows a response time violation

In the dashboard above, the highlighted chart in the upper-right corner shows that one particular service call in the application was off the charts. The dashboard also shows that at the same time the CPU (lower-left corner) of one of their servers was exhausted. Were the two events related, even though they occurred on different hosts? A detailed look at the offending request type revealed something very interesting.

Detailed Performance Analysis of the offending request showing that most of the CPU is spent in XML and XSLT processing

The highlighted chart in the middle shows the CPU distribution of the offending service calls. CPU was spiking, and the root cause could be attributed to XML processing and subsequent XSL transformations, indicated by the yellow and blue bars that represent XML and XSL processing, respectively. This was the reason for the CPU exhaustion noticed earlier.
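
As a rough illustration of the kind of work that was eating the CPU, the snippet below sketches a per-request XML parse plus XSL transformation using the standard Java JAXP API (the class and method names are my own; the article does not show the customer's code):

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import java.io.StringReader;
    import java.io.StringWriter;

    // Hypothetical sketch: parsing the source XML and applying the stylesheet
    // are both CPU-intensive, which is why repeating them for every request
    // exhausted the server's CPU.
    public class DocumentTransformer {
        private final TransformerFactory factory = TransformerFactory.newInstance();

        public String transform(String xml, String xslt) throws Exception {
            Transformer transformer =
                    factory.newTransformer(new StreamSource(new StringReader(xslt)));
            StringWriter out = new StringWriter();
            transformer.transform(new StreamSource(new StringReader(xml)),
                                  new StreamResult(out));
            return out.toString();
        }
    }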

However, having determined the likely root cause for the slowdown, the PointInFact team took a step back and asked which users and documents were impacted.

This shows which End Users are impacted by the performance issue - focus on the last page load taking ~360s

This was very important for two reasons. First, it allowed them to be proactive with their own users who experienced slowdowns. Second, it further isolated the real problem area.

The Performance Bottleneck That Should Not Be
Now that the trigger for the slowdown was revealed, the performance team looked into the root cause. When they looked at the transaction flow for the impacted business transactions, two things stood out.

The Application Flow for the offending requests shows that most time is spent in the service in the lower-right corner

One can see that most of the response time is spent in the document request service (lower-right corner). In addition, they knew from the previous dashboard that the application tier consumed a lot of CPU in XML/XSLT processing. The conclusion for the performance team was clear: caching was not working.

To understand this, we need to know that the document requests and subsequent transformations should only happen once per document. After that, all follow-up requests should fetch the result from the cache. PointInFact is leveraging the memcached-compatible AWS ElastiCache for this purpose. What the analysis revealed was that the same document was transformed many times, hence caching was not working!
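
The intended behavior is a classic cache-aside pattern. Below is a minimal sketch of what that looks like against a memcached-compatible endpoint such as ElastiCache, using the spymemcached client; the endpoint, key scheme, and expiry are assumptions for illustration, and DocumentTransformer is the hypothetical helper from the previous sketch:

    import net.spy.memcached.MemcachedClient;
    import java.net.InetSocketAddress;

    // Cache-aside sketch (illustrative names): transform a document only on a
    // cache miss and store the result, so follow-up requests are served from
    // ElastiCache instead of repeating the expensive XML/XSLT work.
    public class CachedDocumentService {
        private final MemcachedClient cache;
        private final DocumentTransformer transformer = new DocumentTransformer();

        public CachedDocumentService(String elastiCacheEndpoint) throws Exception {
            this.cache = new MemcachedClient(new InetSocketAddress(elastiCacheEndpoint, 11211));
        }

        public String getDocument(String documentId, String xml, String xslt) throws Exception {
            String cached = (String) cache.get(documentId);      // look in the cache first
            if (cached != null) {
                return cached;                                   // hit: no transformation needed
            }
            String result = transformer.transform(xml, xslt);    // miss: do the expensive work once
            cache.set(documentId, 3600, result);                 // ...and keep it for an hour
            return result;
        }
    }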

The obvious conclusion was that there was a problem with ElastiCache. As this was a third-party component, the customer needed more information before approaching Amazon with a support request. Thanks to their APM strategy, they had sufficient insight into their usage of ElastiCache in production. This turned out to be fortunate, because opening a support ticket for ElastiCache would not only have been time-consuming, it would also have been futile, as we shall see.

Do or Do Not Cache, There Is No Try...
In an attempt to get more information about the caching problem, the customer identified the real root cause. While each of the offending document requests was doing a cache lookup upfront, none of them put the result in the cache afterwards. There was no problem with the cache; it simply was not being used.
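
In other words, the client library behaved roughly like the hypothetical sketch below (again illustrative, not the actual library code): the lookup is there, but because nothing is ever written back, every request is a miss and the transformation runs again.

    // Illustrative sketch of the defect: a get() without a matching set().
    public String getDocumentBroken(String documentId, String xml, String xslt) throws Exception {
        String cached = (String) cache.get(documentId);   // the lookup happens on every request...
        if (cached != null) {
            return cached;
        }
        // ...but the result is returned without ever calling cache.set(...),
        // so the next request for the same document misses again.
        return transformer.transform(xml, xslt);
    }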

At this point, the development team started looking into it at the code level and identified a problem with the cache client library they were using. That cache client was also a third-party component, but now they had something tangible to share with its maintainers. Long story short, the bug was fixed upstream, and one deployment later the issue was resolved to everybody's satisfaction.

Conclusion
To me, this story shows the true power of a continuous, holistic APM approach in a cloud-based environment.

  • The customer was able to identify a problem in production that had a big end-user impact, although the average transaction was still considered fast.
  • The operations team could identify exactly which users were impacted and be proactive in their customer support.
  • More importantly, the R&D team was able to identify the real root cause in one third-party component while avoiding a lengthy, and ultimately futile, back-and-forth with another third-party vendor (Amazon ElastiCache).

Finally, PointInFact was able to track down the root cause in sufficient depth to provide the responsible third party with a fix proposal, giving them a faster turnaround on a permanent solution. And all of this in a public, globally distributed, multi-tenant cloud application.

More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the Enterprise Java space. Before coming to Compuware APM dynaTrace he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the Center of Excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage Big Data solutions and how these technologies impact and change the application landscape.
