Basic Cloud Computing Patterns for Application Development

Design patterns help not only in the development process but across the application development life cycle

Over the past few years, the evolution of the cloud has largely settled the question of whether the cloud is the right strategy. The key challenge that remains is leveraging cloud capabilities and features in a way that both drives innovation and solves business problems. If we compare the cloud migration strategies executed over time, we find many similarities: a focus on cloud assessment as well as careful consideration of application development approaches. Even though the business cases differ, we can still link the proposed or implemented cloud-based solutions to a set of design patterns. The most common definition of a design pattern describes it as 'a widely used concept in computer science to describe good solutions to recurring problems in an abstract form.' Any abstract solution to recurring problems in the domain of cloud computing can therefore be called a cloud computing pattern, independent of concrete providers, products and programming languages.

The following are some basic application architecture patterns. Most of these were initially referred to simply as cloud best practices. As we come across real-world implementations, we will be able to identify these patterns in them easily.

Composite Application
At a higher level, traditional application architecture struggles with challenges such as difficult integration with other applications and a lack of flexibility to support changing functionality over the application lifecycle. Since applications in a cloud environment can be scaled individually, it is usually a good option to divide application functionality into multiple components that can later be integrated to form a unified application.

Composite applications are one of the main elements in service-oriented architecture (SOA) that help in contextual collaboration. This approach makes applications extendable right from the beginning. The integration of other applications is also simplified by using the same integration techniques inside individual applications.

Example of a Composite Application for a Travel Booking Process

The key to a successful implementation of this pattern is striking the right balance in the distribution of functionality across components. With too few components, integrating new functionality and changing the application flexibly takes extra time and is more error-prone. With too many components, the communication overhead degrades application performance. The composite application pattern, used along with loose coupling (explained in the next section), helps extract the benefits of cloud features such as elasticity, pay-per-use pricing models and standardized management.
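As a rough illustration of the idea, the sketch below composes a hypothetical travel booking flow from components that each expose a narrow service interface and could be scaled or replaced independently. The names (FlightService, HotelService, BookingProcess) are illustrative, not part of any real API.

```python
# Minimal sketch of a composite application: each component hides its
# implementation behind a small interface. All names are hypothetical.

class FlightService:
    def reserve(self, traveler: str, flight_no: str) -> dict:
        # In a real system this would call a separately deployed service.
        return {"component": "flight", "traveler": traveler, "flight": flight_no}

class HotelService:
    def reserve(self, traveler: str, hotel_id: str) -> dict:
        return {"component": "hotel", "traveler": traveler, "hotel": hotel_id}

class BookingProcess:
    """Composite component that orchestrates the individual services."""

    def __init__(self, flights: FlightService, hotels: HotelService):
        self.flights = flights
        self.hotels = hotels

    def book_trip(self, traveler: str, flight_no: str, hotel_id: str) -> list:
        return [
            self.flights.reserve(traveler, flight_no),
            self.hotels.reserve(traveler, hotel_id),
        ]

if __name__ == "__main__":
    process = BookingProcess(FlightService(), HotelService())
    print(process.book_trip("alice", "AI-202", "H-42"))
```

Because each service sits behind its own interface, the flight component could later be scaled out or swapped for a different implementation without touching the booking orchestration.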

Loose Coupling
In essence, loose coupling isolates the various layers and components of your application so that each component interacts asynchronously with the others and treats them as a "black box." The key principle of this pattern is to reduce the set of assumptions made during the information exchange between components, which eventually results in better scalability.

Decoupling your components, building asynchronous systems and scaling horizontally become very important in the context of the cloud. This not only allows you to scale out by adding more instances of the same component, but also lets you design innovative hybrid models in which a few components continue to run 'on-premise' while other components take advantage of 'cloud scale,' using the cloud for additional compute power and bandwidth.

A sample illustration of decoupling components using queues and AWS-specific tactics can be found in the AWS whitepaper Architecting for the AWS Cloud: Best Practices.

AWS-specific techniques for implementing this best practice are as follows:

  1. Use Amazon SQS to isolate components
  2. Use Amazon SQS as a buffer between components (a minimal sketch follows this list)
  3. Design every component so that it exposes a service interface, is responsible for its own scalability in all appropriate dimensions, and interacts with other components asynchronously
  4. Bundle the logical construct of a component into an Amazon Machine Image so that it can be deployed more often
  5. Make your applications as stateless as possible; store session state outside the component (in Amazon SimpleDB, if appropriate)
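As a minimal sketch of points 1 and 2, the snippet below uses Amazon SQS (via boto3) as a buffer between a producer and a consumer component. The queue name and message body are made up for illustration, and it assumes AWS credentials and an existing queue; error handling and retries are omitted.

```python
import json
import boto3

# Hypothetical queue name; assumes the queue already exists.
QUEUE_NAME = "order-processing-queue"

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName=QUEUE_NAME)["QueueUrl"]

def produce(order_id: str) -> None:
    """Front-end component drops work onto the queue and returns immediately."""
    sqs.send_message(QueueUrl=queue_url,
                     MessageBody=json.dumps({"order_id": order_id}))

def consume_once() -> None:
    """Worker component pulls from the buffer at its own pace."""
    response = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
    )
    for message in response.get("Messages", []):
        order = json.loads(message["Body"])
        print("processing order", order["order_id"])
        # Delete only after successful processing so failed work is retried.
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=message["ReceiptHandle"])

if __name__ == "__main__":
    produce("A-1001")
    consume_once()
```

Because the producer never calls the worker directly, either side can be scaled, restarted or replaced without the other noticing, which is exactly the "black box" behavior the pattern asks for.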

Loose coupling normally comes with a performance cost, because asynchronous communication through messages lengthens the communication path and adds overhead. This trade-off between loose coupling and performance needs to be weighed, but it can usually be handled by scaling resources out.

Elastic Component
As an application is componentized, its components are distributed among multiple compute nodes. These nodes track system utilization using parameters such as CPU load, memory usage, or network I/O to make scaling decisions. When the utilization of the compute nodes exceeds a specified threshold, additional hosting nodes that contain the same application component are provisioned.

In cloud, elasticity can be implemented in three ways:

  1. Proactive Cyclic Scaling: periodic scaling that occurs at fixed intervals
  2. Proactive Event-Based Scaling: scaling just before an expected surge of traffic due to a scheduled business event
  3. Auto-scaling based on demand (a minimal sketch of this approach follows the list)
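To make the threshold-based behavior concrete, here is a minimal, provider-agnostic sketch of a demand-based scaling check. The thresholds are made up, and get_average_cpu, provision_component and release_component are hypothetical helpers standing in for a real monitoring API and provisioning API.

```python
import time

# Hypothetical thresholds; real values depend on the workload.
SCALE_OUT_CPU = 0.75   # add an instance above 75% average CPU
SCALE_IN_CPU = 0.25    # remove an instance below 25% average CPU
MIN_NODES, MAX_NODES = 1, 10

def autoscale_loop(get_average_cpu, provision_component, release_component, nodes=1):
    """Poll utilization and adjust the number of component instances."""
    while True:
        cpu = get_average_cpu()        # e.g., averaged over the last few minutes
        if cpu > SCALE_OUT_CPU and nodes < MAX_NODES:
            provision_component()      # start another instance of the component
            nodes += 1
        elif cpu < SCALE_IN_CPU and nodes > MIN_NODES:
            release_component()        # shut one instance down
            nodes -= 1
        time.sleep(60)                 # re-evaluate once a minute
```

In practice a managed service such as Auto Scaling would own this loop; the sketch only shows the decision logic the pattern describes.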

Other Cloud Computing Patterns
The following are some other commonly used cloud computing patterns:

Stateless Component
In regular component-based applications in the cloud, the chances of failure increase because components can be distributed across multiple nodes, and components are added or removed to address scalability needs as demand changes. 'Stateless Component' is a pattern in which components do not hold any internal state; instead, external persistent storage is used for state management.
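The sketch below shows a stateless request handler that keeps session state in an external store rather than in the component's memory. The ExternalSessionStore class is an in-memory stand-in for a real external service such as a database or cache; all names are illustrative.

```python
# Stand-in for an external persistence store (e.g., a cache or database).
# In a real deployment this would live outside the component's process.
class ExternalSessionStore:
    def __init__(self):
        self._data = {}

    def get(self, session_id: str) -> dict:
        return self._data.get(session_id, {})

    def put(self, session_id: str, state: dict) -> None:
        self._data[session_id] = state

def handle_request(store: ExternalSessionStore, session_id: str, item: str) -> dict:
    """Stateless handler: any instance can serve any request, because session
    state is loaded from and written back to the external store each time."""
    state = store.get(session_id)
    cart = state.get("cart", [])
    cart.append(item)
    store.put(session_id, {"cart": cart})
    return {"session": session_id, "cart": cart}

if __name__ == "__main__":
    store = ExternalSessionStore()
    handle_request(store, "s-1", "book")
    print(handle_request(store, "s-1", "pen"))  # {'session': 's-1', 'cart': ['book', 'pen']}
```

Because no instance holds the cart in memory, an instance can fail or be removed by a scaling decision without losing the session.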

Map-Reduce
The Map-Reduce pattern is used to meet performance requirements for complex queries on large data sets, since most conventional storage solutions do not support such queries natively. Map-Reduce is often used to query large amounts of weakly structured or unstructured data for analysis purposes; for example, analyzing web service logs to determine user access statistics, or analyzing order information to find popular products.
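As a toy, single-process illustration of the pattern (not a distributed implementation), the snippet below maps web-log lines to (URL, 1) pairs and reduces them to per-URL access counts. The log format is made up for the example.

```python
from collections import defaultdict

# Hypothetical log lines: "<timestamp> <user> <url>"
LOG_LINES = [
    "2015-07-01T10:00:01 alice /products/42",
    "2015-07-01T10:00:05 bob   /products/42",
    "2015-07-01T10:00:09 alice /cart",
]

def map_phase(line: str):
    """Emit a (key, value) pair per record: here, (url, 1)."""
    url = line.split()[-1]
    yield url, 1

def reduce_phase(pairs):
    """Aggregate all values that share a key: here, sum the counts per URL."""
    counts = defaultdict(int)
    for url, count in pairs:
        counts[url] += count
    return dict(counts)

if __name__ == "__main__":
    mapped = (pair for line in LOG_LINES for pair in map_phase(line))
    print(reduce_phase(mapped))  # {'/products/42': 2, '/cart': 1}
```

A framework such as Hadoop runs the same two phases across many nodes, partitioning the mapped pairs by key before the reduce step.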

Design patterns help not only during the development process but across the entire application life cycle. In their abstract form, patterns apply to the challenges cloud application developers face today, independent of the actual technologies and cloud services being used. Applying them to the cloud lets your application extract the maximum benefit from cloud platforms.

More Stories By Mahesh Kumar

Mahesh Kumar is currently working as a Senior Tech Lead at Harbinger. He is a member of the Technology Forum and the Proposal Engineering Group at Harbinger Systems, and an active contributor to the technology arm of Harbinger's Marketing division. He has over 7 years of experience in the design and development of enterprise applications in the BI, healthcare and eLearning domains. His core technology expertise is in Java, J2EE, Java frameworks and libraries, Android, Big Data and Cloud. He is frequently invited as a guest speaker at management colleges and universities.

Mahesh Kumar holds a Bachelor of Engineering in Information Technology from the University of Pune, India.
