Top Seven Website Performance Indicators to Monitor

Whatever the reason for a website crashing or slowing down, it’s bad for business and for your online reputation

Poorly performing websites, like Twitter during its recent crash over Ellen's Oscars selfie, are a constant source of irritation for users. At first you think it's your computer, or maybe someone on your block is downloading the entire "Game of Thrones" series. But when nothing changes after refreshing the page once or twice, you give up, mutter under your breath, and move on.

Whatever the reason for a website crashing or slowing down, it's bad for business and for your online reputation. According to a survey conducted by Consumer Affairs, a dissatisfied customer will tell between 9 and 15 people about their experience. And if your website can't load quickly enough (within about 400 milliseconds), most of your customers will look for another site.

Understanding how your website performs under pressure is extremely important for any company. But it can be daunting to figure out which website performance indicators you should monitor.

We have compiled a list of the top seven website performance indicators we believe to be important. Make sure to track each of these to help ensure a great customer experience.

Top Seven Website Performance Indicators

1. Uptime
Monitoring the availability of your website is without a doubt the single most important part of website monitoring. Ideally, you should constantly check the uptime of your key pages from different locations around the world. Measure how many minutes your site is down over a period of two weeks or a month, and then express that as a percentage.
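Turning those downtime minutes into the familiar availability percentage is simple arithmetic. Here is a minimal Python sketch; the `check_up` probe and its URL handling are illustrative, not part of the original article:

```python
import urllib.request

def check_up(url: str, timeout: float = 5.0) -> bool:
    """Hypothetical probe: True if the page answers with a non-error status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        return False

def uptime_percentage(down_minutes: float, period_days: int) -> float:
    """Express total downtime over a period as an availability percentage."""
    total_minutes = period_days * 24 * 60
    return round(100.0 * (total_minutes - down_minutes) / total_minutes, 3)
```

For example, 43.2 minutes of downtime over a 30-day month works out to 99.9% availability, the classic "three nines." Run the probe on a schedule from several regions and accumulate the down minutes it reports.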

2. Initial Page Speed
Consumers' behavior and tolerance thresholds have changed: people now expect a website to load in the blink of an eye. If it doesn't load quickly, they will leave and turn to a competitor's site. You can check your website's speed using ping requests (measuring the network round-trip time from your location to the server) and loading-time measurements, for example, timing how long it takes to download the source code of a web page. Note that this measurement reflects the time it takes for the raw page to load, but that isn't the complete user experience. For that, you must measure...

3. Full Page Load Time including images, videos, etc.
This performance indicator is usually called End User Experience testing. It's the amount of time it takes for all the images, videos, dynamically loaded (AJAX) content, and everything else seen by the user to appear on their screen. This is different from the time it takes for the raw file to download to the device it's going to display on (as indicated above).

Both full page load time and page speed are important to measure because you can employ different strategies to optimize for both of them. Images, videos, and other static content can be cached on separate, dedicated systems or content delivery networks (CDNs), while dynamic content might need dedicated servers and fast databases. Knowing how your website behaves as it scales will help you put the right infrastructure in place.
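A real browser fetches assets in parallel and executes JavaScript, so proper End User Experience testing needs browser-driven tooling. Still, the distinction between raw-page time and full-page time can be illustrated with a stdlib-only sketch that parses the HTML for static assets and fetches them serially (class and function names are my own):

```python
import time
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class AssetCollector(HTMLParser):
    """Collect URLs of images, scripts, and stylesheets referenced by a page."""
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.assets.append(attrs["href"])

def full_page_time(url: str, timeout: float = 10.0) -> float:
    """Rough full-page load time: HTML plus all static assets, fetched serially.
    Underestimates nothing a browser skips, but ignores JS execution and caching."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    collector = AssetCollector()
    collector.feed(html)
    for asset in collector.assets:
        try:
            with urllib.request.urlopen(urljoin(url, asset), timeout=timeout) as r:
                r.read()
        except Exception:
            pass  # a missing asset still contributes to perceived slowness
    return time.perf_counter() - start
```

Comparing `full_page_time` against the raw-page number shows exactly how much of the user's wait is spent on static assets, which is the portion a CDN can absorb.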

4. Geographic Performance
If you are a globally active company or if you have consumers from different parts of the world, understanding your geographical performance - which is your website's speed and availability in different locations - is extremely important. Your ultimate goal is to make sure your website is easily accessible to all visitors regardless of their location to give them an excellent customer experience.

Many companies ignore this factor, only testing performance in familiar geographies. At a minimum, use your website analytics as a guide to put testing in place that shadows the locations from which your visitors are accessing your site.

5. Website Load Tolerance
Do you know how many visitors it takes to considerably slow down your website? It's an important indicator to understand, because if you are running aggressive marketing campaigns or get picked up by the press, your website could be flooded with visitors in a matter of minutes.

Regularly run stress tests and compare the results to your visitor numbers at peak times. Once you understand how much load your website can handle, you can adjust your infrastructure to meet the demand. Look for those "tipping points" so you won't be caught by surprise when traffic spikes.

6. Web Server CPU Load
CPU usage is a common culprit in website failures. Too much processing bogs down absolutely everything on the server without much indication as to where the problem lies. You can prevent web server failures by monitoring CPU usage regularly. If you cannot install monitoring software on your web servers due to hosting arrangements or other constraints, consider running a script that publishes available disk space and CPU load values to a simple HTML page.
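That publish-to-a-page script might look like the following sketch, run from cron every minute or so. Note that `os.getloadavg` is Unix-only, and the file path and layout are my own choices:

```python
import os
import shutil
from datetime import datetime, timezone

def write_status_page(path: str = "status.html") -> str:
    """Publish CPU load average and free disk space to a static HTML page
    that an external monitor can poll over plain HTTP."""
    load1, load5, load15 = os.getloadavg()   # 1-, 5-, 15-minute averages (Unix)
    disk = shutil.disk_usage("/")
    free_gb = disk.free / (1024 ** 3)
    html = (
        "<html><body><pre>\n"
        f"updated:   {datetime.now(timezone.utc).isoformat()}\n"
        f"load avg:  {load1:.2f} {load5:.2f} {load15:.2f}\n"
        f"disk free: {free_gb:.1f} GiB\n"
        "</pre></body></html>\n"
    )
    with open(path, "w") as f:
        f.write(html)
    return html
```

Drop the output into your web root and your uptime monitor can alert on the page's contents, with no agent installed on the server.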

7. Website Database Performance
Your database can be one of the most problematic parts of your website. A poorly optimized query, for example, can be the difference between a zippy site and an unusable one. It's important to monitor your database logs closely. Create alerts when the logs contain certain error messages or show results outside of expected norms. Use the built-in capabilities of the database to see which queries are taking the most time, and identify ways to optimize them through indices and other techniques. Most importantly, monitor the overall performance of the database to make sure it's not a bottleneck.
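A minimal log-scanning sketch for such alerts might look like this. The patterns are loosely modeled on PostgreSQL-style log lines and the 500 ms threshold is an arbitrary example; adapt both to your database and workload:

```python
import re

# Illustrative patterns: hard errors, plus statement-duration lines.
ERROR_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"deadlock", r"connection refused", r"too many connections",
              r"duration: (\d+(?:\.\d+)?) ms")
]
SLOW_QUERY_MS = 500.0  # alert threshold; tune to your expected norms

def scan_db_log(lines):
    """Return log lines that should trigger an alert: known error strings,
    or statements slower than SLOW_QUERY_MS."""
    alerts = []
    for line in lines:
        for pat in ERROR_PATTERNS:
            m = pat.search(line)
            if not m:
                continue
            if m.groups():  # duration pattern: alert only above the threshold
                if float(m.group(1)) > SLOW_QUERY_MS:
                    alerts.append(line)
            else:
                alerts.append(line)
            break
    return alerts
```

Feed the alert lines to whatever notification channel you already use; the point is to surface slow queries and errors before users do.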

No Downtime = Happy Customers
If you can monitor all seven of these metrics, you should have a good idea of how your website performs and what needs to change when it doesn't perform well. Minimizing website downtime will keep your customers happy. If you have any questions on these metrics or load testing, let me know.

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.

