
How Load Balancing Impacts the Cost of Cloud

Scalability requires load balancing, but it doesn't require efficient or cost-effective load balancing

It's not the first time we've heard the claim that cloud can be too expensive, and I doubt it will be the last. The latest example comes from Alexei Rodriguez, Head of Ops at Evernote, by way of Structure 2014.

It is important to note that this admission - like those in the past - has come from what we call "web monsters." Web monsters are, as the name implies, web-first (and usually web-only) organizations with millions (or billions) of users. Modern web monsters are generally responsible for only one application, a la Evernote, Netflix, Facebook, etc.

It is unlikely that most enterprises will encounter this same conundrum - that of the cloud actually costing more than a DIY approach - for short-lived projects. A marketing campaign, a seasonal promotion or offering, and similar efforts are almost certainly never going to approach the consumption levels of a Facebook or an Evernote, and thus their costs will almost certainly be lower in the cloud than in house.

That's not to say that enterprises won't run into this problem, or won't need to carefully evaluate the long-term cost of cloud for an application against their own ability to service it, especially as the Internet of Things begins to arrive and push at oftentimes already-bulging data center seams.

One of the ways in which cloud can end up costing more is based on the load balancing service you choose to use.

The Cost of Inefficient Load Balancing
Load balancing is at the heart of every cloud computing model. Without load balancing of some kind you can't scale, and scalability is one of cloud's biggest benefits, as well as a top adoption driver according to North Bridge Ventures' 2014 Future of Cloud survey.

Load balancing, of course, distributes load across multiple instances of an application to enable scale, improve performance, and maintain availability. In most cloud environments where provider-supplied load balancing services are available, these services are based on a scale-out model, meaning scalability is achieved purely by cloning new application instances when demand reaches a certain (usually customer-defined) threshold.
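
To make that scale-out model concrete, here is a minimal Python sketch of the threshold-based decision such services make. The instance names, connection counts, and threshold are hypothetical, and in a real environment the launch step would be a call to the provider's API rather than appending to a list.

```python
# Minimal sketch of threshold-based scale-out (hypothetical names and numbers).
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    active_connections: int

def needs_scale_out(instances: list[Instance], threshold: int) -> bool:
    """Return True when the average load per instance exceeds the threshold."""
    avg = sum(i.active_connections for i in instances) / len(instances)
    return avg > threshold

pool = [Instance("app-1", 480), Instance("app-2", 510)]
if needs_scale_out(pool, threshold=400):
    # Placeholder for the provider's "launch another instance" API call.
    pool.append(Instance(f"app-{len(pool) + 1}", 0))

print([i.name for i in pool])   # ['app-1', 'app-2', 'app-3']
```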

Now, that's all pretty simple stuff. All load balancing services offer scalability this way. What separates enterprise-class load balancing from providers' simplistic offerings is the ability to optimize server-side (virtual or physical) resource utilization in order to eke out the most capacity from each server, without compromising other service-level requirements such as performance.

Enterprise-class load balancing services achieve this by using a variety of TCP optimizations designed to offload protocol overhead from the server (instance). TCP multiplexing and response buffering enable enterprise-class load balancing to improve the capacity of servers (instances) by 25% or more, on average.
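
The following is a simplified Python sketch of the idea behind TCP multiplexing only (response buffering isn't modeled): many client requests are funneled over a small pool of persistent server-side connections, so the server handles far fewer TCP setups and teardowns. The connection labels are placeholders, not an implementation of any particular product.

```python
# Sketch of TCP multiplexing: reuse a small pool of persistent backend connections.
from collections import deque

class BackendConnectionPool:
    def __init__(self, size: int):
        # Connections opened once and reused; modeled here as simple labels.
        self.connections = deque(f"backend-conn-{i}" for i in range(size))

    def send(self, request: str) -> str:
        conn = self.connections[0]      # reuse an existing connection
        self.connections.rotate(-1)     # simple round robin over the pool
        return f"{request} sent over {conn} (no new TCP handshake)"

pool = BackendConnectionPool(size=2)
for r in ["GET /a", "GET /b", "GET /c", "GET /d"]:
    print(pool.send(r))
# Four client requests, but the server only ever saw two TCP connections.
```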

Obviously, if a server (instance) can serve 25% more user requests, you don't scale out as quickly. In other words, you aren't launching more instances as frequently, which means you aren't paying for more instances as often, either. Interesting, isn't it?
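
A quick back-of-the-envelope sketch, using assumed demand and per-instance capacity numbers, shows how that 25% gain translates into fewer instances running (and billed) at peak:

```python
# Hypothetical numbers: how a 25% per-instance capacity gain reduces instance count.
import math

requests_per_second = 10_000   # assumed peak demand
base_capacity = 500            # assumed requests/sec one instance can serve

instances_basic = math.ceil(requests_per_second / base_capacity)
instances_optimized = math.ceil(requests_per_second / (base_capacity * 1.25))

print(instances_basic)       # 20 instances with a simple load balancing service
print(instances_optimized)   # 16 instances when each one serves 25% more
```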

Enterprise load balancing services also offer a variety of load balancing algorithms, each with its own advantages and disadvantages. Virtually all load balancing services support the most basic algorithm, round robin, but more sophisticated algorithms are rarely implemented by providers. It is here, along with the absence of TCP optimizations, that efficient scalability breaks down. Round robin is application and server load agnostic: it doesn't care whether the selected instance has 400 connections while a second instance has only 50; it will still send the request to whichever instance is next in line. Least connections may not be the most efficient algorithm available, but it's definitely more load-aware than round robin.
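
Here is a small Python sketch contrasting the two selection strategies, using the hypothetical 400 versus 50 connection counts from above:

```python
# Round robin ignores load; least connections picks the least-busy instance.
from itertools import cycle

servers = {"app-1": 400, "app-2": 50}   # instance name -> current connection count

rr = cycle(servers)                      # round robin: next in line, load-agnostic

def round_robin() -> str:
    return next(rr)

def least_connections() -> str:
    return min(servers, key=servers.get)

print(round_robin())        # 'app-1', even though it already has 400 connections
print(least_connections())  # 'app-2', the far less loaded instance
```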

Most enterprise-grade load balancing algorithms take the load on a given application instance into consideration in some way - whether through weights or connection counts. Rather than simply distributing requests, they attempt to distribute them efficiently and evenly in order to maximize resource utilization without impacting performance or availability.
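
As an illustration, a weighted least connections decision might look like the following sketch, with made-up connection counts and weights standing in for relative instance capacity:

```python
# Weighted least connections: normalize connection counts by per-instance weight.
servers = {
    # instance name: (current connections, weight ~ relative capacity)
    "app-1": (120, 4),
    "app-2": (50, 1),
}

def weighted_least_connections() -> str:
    return min(servers, key=lambda s: servers[s][0] / servers[s][1])

print(weighted_least_connections())  # 'app-1': 120/4 = 30 beats 50/1 = 50
```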

Thus, the use of simple load balancing services with rudimentary algorithmic support and an apathetic view toward server (instance) load ends up distributing load unevenly.

What these load balancing services do ensure, however, is that more instances are launched sooner and more bandwidth is consumed, which necessarily incurs additional cost.

The load balancing service you choose does ultimately impact the overall cost of cloud. While it's not the primary cause behind observations from organizations like Evernote, it's certainly a contributor.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
