Back to Basics: Least Connections Is Not Least Loaded

When load balancing, "least connections" does not mean "least loaded"

Performance matters, and that means our infrastructure must support the need for speed. Load balancing algorithms are an integral piece of the performance equation and can either improve or degrade performance.

That's why it's important to understand more about an algorithm than its general selection mechanism. Knowing that round robin is basically an iterative choice, traversing a list of servers one by one, is good - but understanding what that means for performance and capacity across different types of applications and application workloads is even better.

We last checked out "fastest response time," and today we're diving into "least connections," which, as stated above, does not mean "least loaded."

INTRA-APPLICATION WORKLOADS 

The industry-standard "least connections" load balancing algorithm uses the number of current connections to each application instance (member) to make its load balancing decision. The member with the fewest active connections is chosen. Pretty simple, right?
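
In sketch form, the selection itself is trivial. Here's a minimal Python illustration - the pool and member structures are invented for the example, not any particular load balancer's API:

    # A minimal sketch of least-connections selection over an in-memory
    # pool. The member dicts are illustrative, not a real product's API.
    def pick_member(members):
        """Return the member with the fewest active connections."""
        return min(members, key=lambda m: m["active_connections"])

    pool = [
        {"name": "app-1", "active_connections": 100},
        {"name": "app-2", "active_connections": 50},
    ]
    print(pick_member(pool)["name"])  # -> app-2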

The premise of this algorithm is the general assumption that fewer connections (and thus fewer users) mean less load and therefore better performance. That's operational axiom #2 at work: if performance decreases as load increases, it stands to reason that performance increases as load decreases.

That would be true (and in the early days of load balancing it was true) if all intra-application workloads required the same resources. Unfortunately, that's no longer the case, and the result is uneven load distribution that leads to unpredictable performance fluctuations as demand increases.

Consider a simple example: a user logging into a system requires at least one database query, if not more, to validate credentials and then update the system to record the activity. Depending on the nature of the application, other intra-application activities will require different quantities of resources. Some are RAM-heavy, others CPU-heavy, still others file- or database-heavy. Furthermore, depending on the user in question, the usage pattern will vary greatly. One hundred users can be logged into the same system (requiring at a minimum one hundred connections), but if they're all relatively idle, the system will be lightly loaded and performing well.

Conversely, another application instance may boast only fifty connections, but if all fifty users are heavily active, with database queries returning large volumes of data, the system is far more heavily loaded and performance may already be beginning to suffer.

When the next request comes in, however, the load balancer using a "least connections" algorithm will choose the latter member, increasing the burden on that member and likely further degrading performance.
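
To put numbers on that, here's a small worked sketch in Python using invented per-connection load figures that mirror the scenario above:

    # Hypothetical per-connection load figures: app-1 has 100 mostly idle
    # users, app-2 has 50 heavily active ones. The numbers are invented.
    pool = [
        {"name": "app-1", "active_connections": 100, "load_per_conn": 0.1},
        {"name": "app-2", "active_connections": 50,  "load_per_conn": 2.0},
    ]

    # Actual load is connections times the work each connection generates.
    for m in pool:
        m["actual_load"] = m["active_connections"] * m["load_per_conn"]

    chosen = min(pool, key=lambda m: m["active_connections"])
    print(chosen["name"])         # -> app-2: least connections picks it anyway
    print(chosen["actual_load"])  # -> 100.0, versus only 10.0 on app-1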

The premise of the least connections algorithm is that the application instance with the fewest number of connections is the least loaded. Except, it's not.

The only way to know which application instance is actually the least loaded is to monitor its system variables directly, gathering CPU and memory utilization and comparing them against known maximums. That generally requires SNMP, agents, or other active monitoring mechanisms, which can unduly tax the system themselves by virtue of consuming resources.
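
If such metrics were available, a "least loaded" decision might look like the following Python sketch, where the cpu_util and mem_util values are assumed to come from SNMP polling or an agent - how they are gathered is outside the sketch:

    # A sketch of "least loaded" selection from directly monitored metrics.
    # cpu_util / mem_util are fractions of known maximums (0.0-1.0) and are
    # assumed to be supplied by SNMP, an agent, or similar instrumentation.
    def load_score(member):
        # Treat the most constrained resource as the member's effective load.
        return max(member["cpu_util"], member["mem_util"])

    def pick_least_loaded(members):
        return min(members, key=load_score)

    pool = [
        {"name": "app-1", "cpu_util": 0.15, "mem_util": 0.40},  # 100 idle users
        {"name": "app-2", "cpu_util": 0.85, "mem_util": 0.60},  # 50 busy users
    ]
    print(pick_least_loaded(pool)["name"])  # -> app-1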

This is a quandary for operations, because "application workload" is simply too broad a generalization. Certainly some applications are more I/O-heavy than others, and still others are more CPU- or connection-heavy. But all applications have both a general workload profile and an intra-application workload profile. Understanding the usage patterns - the intra-application workload profile - of an application is critical to determining not only which load balancing algorithm to choose, but also which limits to specify that may provide better overall performance and use of capacity during execution.

As always, being aware of the capabilities and the limitations of a given load balancing algorithm will assist in choosing one that is best able to meet the performance and availability requirements of an application (and thus the business).

For years the world's most security-focused and distributed organizations - banks, military/defense agencies, global enterprises - have sought to adopt cloud technologies that can reduce costs, future-proof against data growth, and improve user productivity. The challenges of cloud transformation for these kinds of secure organizations have centered around data security, migration from legacy systems, and performance. In our presentation, we will discuss the notion that cloud computing, properly managed, is poised to bring about a digital transformation to enterprise IT. We will discuss the trend, the technology and the timeline for adoption.