By Lori MacVittie
April 25, 2014 09:00 AM EDT
Way back in the early days of the Internet, scalability was an issue (the more things change...). One of the answers to this problem was to scale out web servers using a fairly well-proven concept called load balancing. Simply put, distribute the load across web servers to make sure everyone gets served in a timely fashion. We see this in action at stores every day when more checkout lines are added as demand increases. Well, we hope we see this in action. Too often we don't, much to our chagrin.
Anyway, the way early load balancing worked was simply to take a couple of variables (IP address and TCP port), hash them together, and stick the result in the equivalent of a queue for a web server. Because hash values tend to distribute fairly evenly, this worked well (until we ran into the mega-proxy issue, thanks to folks like CompuServe and AOL).
This is called "network load balancing" because, well, it uses network variables to distribute load. It's quite fast, actually, because it's based on variables that sit in fixed locations within a single packet: source or destination IP and TCP port. All the work is on the ingress, on the inbound side, and once the decision has been made it's a pretty simple thing to hash future packets and match them up before sending them on their way. Voila. Network load balancing.
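If you were to sketch that decision in code, it would look something like the minimal Python below. The server names and the connection tuple are made up for illustration, and real products do this in hardware or in the kernel rather than in a script - but the shape of the decision is the same.

```python
import hashlib

# A minimal sketch of hash-based ("network") load balancing: the server is
# chosen purely from fields that sit in fixed locations in the packet headers.
# Server names and the example connection tuple are hypothetical.
SERVERS = ["web1", "web2", "web3", "web4"]

def pick_server(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> str:
    # Hash the connection tuple; the same flow always maps to the same
    # server, so no per-connection state needs to be stored.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.md5(key).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

# Every packet of the same flow hashes to the same back-end server.
print(pick_server("203.0.113.7", 51514, "192.0.2.10", 80))
```

Because the choice is a pure function of the headers, every packet of a flow lands on the same server without any state being stored - which is also why a mega-proxy that funnels thousands of clients through one source IP ends up hammering a single server.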
Application load balancing, however, arose because network load balancing was all based on inbound variables. It couldn't take into consideration how loaded the chosen server was, or whether its response time was falling within acceptable business parameters, or whether it was at capacity or not. Those variables were all on the server side, and required visibility into the application, not the client.
It also couldn't account for the fact that virtual servers were popping up everywhere (multiple applications served from the same IP address and port), which forced the web server to become a load balancer itself. Which, if you think about it, was kind of crazy: if a single server couldn't scale well enough to meet demand, how is putting a single server in front of them going to help the situation?
Application load balancing (which has also been given other fancy names over the years, like content switching or routing, application switching, application or page routing, etc...) is really focused on distributing load across applications intelligently. While it can use ingress variables like IP address and port, it generally doesn't, because those don't offer insight into which server (application, web, virtual, whatever) is going to be able to respond (has capacity) in a time frame acceptable to the business (response time) for a specific application (or piece of the application, like images).
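In code, that decision starts to look more like the sketch below. The fields, the SLA threshold, and the "fastest responder wins" rule are assumptions for illustration; they stand in for whatever health, capacity, and response-time data a real load balancing service gathers from the server side.

```python
from dataclasses import dataclass

# Illustrative server-side state an application load balancer might track.
@dataclass
class Backend:
    name: str
    healthy: bool
    active_connections: int
    max_connections: int
    avg_response_ms: float

def pick_backend(backends: list[Backend], sla_ms: float = 500.0) -> Backend:
    # Keep only servers that are up, have capacity, and are responding
    # within the (hypothetical) business SLA.
    candidates = [
        b for b in backends
        if b.healthy
        and b.active_connections < b.max_connections
        and b.avg_response_ms <= sla_ms
    ]
    if not candidates:
        raise RuntimeError("no backend currently meets the SLA")
    # Prefer whichever eligible server is responding fastest right now.
    return min(candidates, key=lambda b: b.avg_response_ms)
```

None of those inputs can be read out of a packet header; they have to be measured on the server side, which is exactly the point.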
The difference between the two lies primarily in the variables used to distribute load: network load balancing relies solely on network variables, while application load balancing relies mainly on application variables.
This change in load balancing techniques opened up all sorts of new efficiencies and scalability options because it allowed architectures to specialize - route requests for images to servers focused on serving images, requests for static content to servers focused on serving static content, and so on. It also enabled persistence (sticky sessions), which greatly accelerated the ability to scale out stateful applications in a web format.
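Here's a rough sketch of what content switching plus persistence looks like. The pool names, routing rules, and cookie-based session handling are assumptions made for the example, not any particular product's behavior.

```python
# Hypothetical pools of specialized servers and a simple session store.
POOLS = {
    "images": ["img1", "img2"],
    "static": ["static1", "static2"],
    "app":    ["app1", "app2", "app3"],
}
sessions = {}  # session cookie value -> pinned app server

def route(path: str, session_id: str | None = None) -> str:
    # Content switching: send requests for images and static content to
    # the pools specialized for them.
    if path.endswith((".png", ".jpg", ".gif")):
        pool = POOLS["images"]
        return pool[hash(path) % len(pool)]
    if path.startswith("/static/"):
        pool = POOLS["static"]
        return pool[hash(path) % len(pool)]
    # Persistence (sticky sessions): keep a stateful session pinned to the
    # same application server across requests.
    if session_id in sessions:
        return sessions[session_id]
    pool = POOLS["app"]
    server = pool[len(sessions) % len(pool)]
    if session_id is not None:
        sessions[session_id] = server
    return server
```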
Why Is It Important to SDN?
The reason this is important to SDN architectures is that layer 3 switches can, in fact, support network load balancing - fairly easily, too. If you look at how Link Aggregation (trunking) is implemented in most switches, you'll see it uses network load balancing techniques to distribute load across trunked links, and that the algorithms are pretty much the same ones we used back in the day to load balance servers based on network variables. The hash is simple (and easily implemented) and doesn't require storing state, because it's always based on the same variables, easily extracted from IP and TCP headers, and doesn't really tax the system. Forwarding tables are basically sets of inbound IP addresses, TCP ports, and (switch) ports matched to outbound IP addresses, TCP ports, and (switch) ports. So you can see that network load balancing wouldn't overly tax a controller (it just has to hash the right values and insert a forwarding entry) or a switch.
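In code, the controller's share of that work is little more than a hash and a table insert. This is a hypothetical sketch - the trunk ports, flow key, and table layout are invented for illustration - but it shows why the operation is so cheap.

```python
import hashlib

TRUNK_PORTS = [1, 2, 3, 4]   # egress (switch) ports in a trunk/LAG, illustrative
forwarding_table = {}        # flow tuple -> chosen egress port

def flow_hash(src_ip, dst_ip, src_port, dst_port):
    # Hash only fields that can be read straight out of the IP/TCP headers.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big")

def install_flow(src_ip, dst_ip, src_port, dst_port):
    # The controller hashes the header fields once, picks a link, and inserts
    # one forwarding entry; the switch then forwards every matching packet
    # with no further decisions to make.
    egress = TRUNK_PORTS[flow_hash(src_ip, dst_ip, src_port, dst_port) % len(TRUNK_PORTS)]
    forwarding_table[(src_ip, dst_ip, src_port, dst_port)] = egress
    return egress
```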
But it wouldn't be application centric, or able to take into consideration the things modern load balancing services care about - like application status, connection capacity, and response times - not to mention enabling specialization of services. To be application centric, application load balancing must participate in the data path and have visibility into variables that aren't available in packets; they're in payloads and in the application server (instances) itself. Add to that the implications of being stateful versus stateless, and the burden on a centralized controller would be overwhelming.
Thus, while SDN principles are certainly applicable, the architecture used to implement SDN for lower-order network services is not going to be the same architecture used to implement SDN for higher-order application services. When evaluating SDN solutions, it's important to consider how the two SDN architectures (core and application) complement one another, integrate with one another, and collaborate to enable a complete software-defined network architecture that supports the unique needs of both layers 2-3 and 4-7.