What DevOps Can Do About Cloud's Predictable Provisioning Problem

Cloud and software-defined architectures have brought the critical nature of load balancing to the fore.

Go ahead. Name a cloud environment that doesn't include load balancing as the key enabler of elastic scalability. I've got coffee, so it's all good... take your time...

Exactly. Load balancing - whether implemented as traditional high-availability pairs or clustering - provides the means by which applications (and, in many cases, infrastructure) scale horizontally. Load balancing is at the heart of elastic scalability models, and it provides a means to ensure availability and even improve the performance of applications.

But simple load balancing alone isn't enough. Too many environments and architectures are wont to toss a simple, network-based solution at the problem and call it a day. But rudimentary load balancing techniques that rely on a single, context-free metric are doomed to fail eventually. That's because a simple number like "connection count" does not provide enough context to make an intelligent load balancing decision. One application instance may currently have only 100 connections while another has 500, but if the capacity of the former is only 200 while the capacity of the latter is 5000, a decision based on "least connections" is not the right one.
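
To make that arithmetic concrete, here is a minimal Python sketch (an illustration only, not any vendor's implementation) using the numbers above:

    # Two application instances from the example above. Names and
    # numbers are illustrative.
    instances = [
        {"name": "app-1", "connections": 100, "capacity": 200},   # 50% utilized
        {"name": "app-2", "connections": 500, "capacity": 5000},  # 10% utilized
    ]

    # Naive least-connections picks app-1 (100 < 500)...
    naive = min(instances, key=lambda i: i["connections"])

    # ...but utilization (connections / capacity) shows app-2 has far
    # more headroom.
    contextual = min(instances, key=lambda i: i["connections"] / i["capacity"])

    print(naive["name"])       # app-1 -- the nearly saturated instance
    print(contextual["name"])  # app-2 -- the better choice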

Application-aware networking tells us that load balancing decisions - even rudimentary ones - should be made based on a variety of variables such as application load, response time, and capacity. That calls for a modern load balancing service capable of not just tracking these metrics but also gathering them from the application instances under management.
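
As a hedged sketch of what such a multi-variable decision might look like (the metric names and weights below are assumptions for illustration, not any product's algorithm):

    def score(instance, w_cpu=0.4, w_latency=0.3, w_conn=0.3):
        """Lower is better. Assumes each metric is pre-normalized to 0..1."""
        utilization = instance["connections"] / instance["capacity"]
        return (w_cpu * instance["cpu_load"]
                + w_latency * instance["response_time"]
                + w_conn * utilization)

    def pick(instances):
        # Send the next request to the instance with the most overall headroom.
        return min(instances, key=score)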

(Un)Predictable Provisioning
In data centers, it is best practice to deploy application instances on similarly capable hardware. This is because doing so provides predictable capacity and performance that can be used to better scale an application and ensure compliance with service level expectations.

When moving to a cloud environment - whether public or private - this practice can be lost. In the public cloud, that's because you have no control over the underlying hardware capabilities - you can only specify the compute capabilities of an instance. In a private cloud, you have more control over this, but you may not have provisioning systems intelligent enough to provide the visibility you need to make a provisioning decision in real time.

That can lead to problems. Consider this nugget from a recent blog post:

One thing that I’ve learned is that you can end up on a variety of different hardware but they don’t always act the same. Stackdriver has been a great help with this. For example, if we’re firing up 6 web servers, Stackdriver can help us see that 5 are cruising along at 20% CPU, while one is at 50% CPU. It allows us to see and address that anomaly.

http://www.stackdriver.com/devops-focus-matt-trescot-studyblue/

Let's assume, for a moment, this is true. Because it can be. Anyone who's ever dealt with hardware servers knows it's true - hardware, though matched in terms of basic capacity, can wind up performing differently. That's due to a number of things, including the natural degradation of capacity over time due to "wear and tear" as well as the possibility of misconfiguration or the presence of some other artifact or code that may be eating up cycles.
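
A generic way to surface that kind of outlier (this is ordinary statistics, not Stackdriver's actual detection logic) might look like:

    import statistics

    # Hypothetical readings matching the scenario above: five web
    # servers at ~20% CPU and one at 50%.
    cpu = {"web-1": 20, "web-2": 21, "web-3": 19,
           "web-4": 20, "web-5": 22, "web-6": 50}

    mean = statistics.mean(cpu.values())
    stdev = statistics.stdev(cpu.values())

    # Flag anything more than 1.5 standard deviations above the mean.
    anomalies = [name for name, util in cpu.items()
                 if util > mean + 1.5 * stdev]
    print(anomalies)  # ['web-6'] -- the instance worth investigating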

In any case, the reason is not as important as the fact that this happens. It's important because we know operational axiom #2: as load increases, performance decreases. It also follows that as load increases, capacity decreases because, well, capacity and load go hand in hand.

Thus, in a cloud environment the aforementioned situation presents a problem: one of the "servers" is at a disadvantage and is not going to perform as well as the other five. Not only that, but its capacity as understood (and likely configured manually) by the load balancing service is now inaccurate. The service believes all six servers have a capacity of X connections, but the reality is that a higher CPU utilization rate can reduce that.
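
One way to model that erosion (the linear discount here is an assumption for illustration, not a vendor formula):

    def effective_capacity(configured, cpu_util):
        """Discount the manually configured connection capacity by CPU load."""
        return int(configured * (1.0 - cpu_util))

    print(effective_capacity(5000, 0.20))  # 4000 -- a healthy instance
    print(effective_capacity(5000, 0.50))  # 2500 -- the impaired "sixth server"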

A simple load balancing service is not going to adjust, because it doesn't have the visibility or intelligence to make that connection. Whether the service is configured to use a round robin algorithm (almost never a good idea) or a least connections algorithm (an acceptable choice if all other factors are predictable), service levels are going to degrade unless the service is aware enough to recognize the discordance.

Thus, we end up with a situation in which predictable performance and availability are, well, not necessarily predictable. That introduces operational risk that must, somehow, be countered.

Correcting for Unpredictable Provisioning

In enterprise-class data centers, application-aware networking services are able to factor in not just connection counts and response times, but server load and a variety of other variables that can offset the unpredictability of provisioning processes. As noted earlier, application-aware load balancing services have the visibility and programmability necessary to monitor and measure the status of application instances and servers across a variety of metrics, including CPU utilization (load).
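
In practice, that gathering step can be as simple as polling each instance for its own view of load. A minimal sketch, assuming each instance exposes a hypothetical /status endpoint returning JSON (the endpoint, fields, and addresses are invented for illustration):

    import json
    import urllib.request

    def poll_instance(host):
        # The /status endpoint and its fields are assumptions for this sketch.
        with urllib.request.urlopen(f"http://{host}/status", timeout=2) as resp:
            stats = json.load(resp)  # e.g. {"cpu": 0.5, "connections": 120}
        return {"host": host, **stats}

    pool = [poll_instance(h) for h in ("10.0.0.11", "10.0.0.12", "10.0.0.13")]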

What's perhaps even more interesting is that programmability makes the gathering and monitoring of those statistics extensible. If an application instance can present a variable you deem critical for making load balancing decisions, the programmability of the load balancing service makes it possible to incorporate that variable into its algorithm (or to create a completely new one, if that's what it takes).
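
A sketch of that extensibility idea (the registry pattern and names below are invented for illustration; real products expose this through their own data-path scripting interfaces):

    CUSTOM_METRICS = {}

    def register_metric(name, weight, getter):
        # Operators plug in any signal the application exposes.
        CUSTOM_METRICS[name] = (weight, getter)

    def programmable_score(instance):
        base = instance["connections"] / instance["capacity"]
        extra = sum(w * get(instance) for w, get in CUSTOM_METRICS.values())
        return base + extra

    # Example: an application that reports its work-queue depth.
    register_metric("queue_depth", 0.5, lambda i: i.get("queue_depth", 0) / 100)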

All these factors combine to answer the question, "Why does the network need to be dynamic?" or "Why do we need SD<insert preferred "N" or "DC" here>?"

Dynamic implies an ability to react in the face of unanticipated (unpredictable) situations. Unpredictable provisioning that can result in inconsistent capacity and performance has to be countered somewhere, and that somewhere is going to be upstream of the application instances exhibiting erratic behavior. Upstream is usually (and almost always in any of today's scalable architectures) an ADC or load balancing service.

That load balancing service must be application-aware and programmable if it's going to execute on its mission of maintaining performance and availability of applications in the face of the potentially unpredictable provisioning processes of cloud computing environments.

DevOps: More than just deployment
DevOps practitioners must become adept not only at understanding the complex relationships between performance, availability, capacity, and load, but also at turning business and operational expectations into reality by taking advantage of both application and network infrastructure capabilities.

DevOps isn't, after all, just about scripting and automation. Those are tools that enable DevOps practitioners to do something, and that something is more than just deploying apps - it's delivering them, too.

•   •   •

Excerpt from the State of APM Infographic courtesy of Germain Software, LLC.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
