By Mark Casey
June 2, 2013 03:00 PM EDT
It's clear that cloud computing has transformed the enterprise IT landscape, from the computing infrastructure layer up through enterprise software, as companies move to leverage more efficient and cost-effective service-delivery models and bring new cloud-based products and services to the market. Perhaps less known is the innovation taking place at the network level and how leading companies are transforming their Wide Area Networks (WAN) to more quickly and efficiently fuel their move to the cloud.
Moving to the cloud requires network managers and IT shops to implement scalable solutions that ensure the reliability and performance of cloud-based applications across their extended enterprise. Cloud computing drives the need for more reliability across the WAN and ever-increasing amounts of highly available, secure and reliable bandwidth across all users, locations and geographies. However, many enterprises are constrained by their existing network infrastructure, both from a cost and performance perspective. They can't cost-effectively scale their networks; and latency, jitter and packet loss impact performance and reliability in the cloud.
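The latency, jitter and packet loss mentioned above are straightforward to quantify from probe data. As a minimal sketch (the RTT samples below are invented for illustration, not real measurements), the three metrics can be derived from a list of round-trip probes where a lost packet is recorded as `None`:

```python
# Sketch: deriving the WAN health metrics named above (latency, jitter,
# packet loss) from a hypothetical list of RTT probe results.
# The sample values are assumptions for illustration only.

from statistics import mean

# Hypothetical probe results in milliseconds; None marks a lost packet.
rtt_samples = [42.1, 43.8, 41.9, None, 55.2, 42.4, None, 44.0]

received = [s for s in rtt_samples if s is not None]
packet_loss = 1 - len(received) / len(rtt_samples)
avg_latency = mean(received)
# Jitter as the mean absolute difference between consecutive RTTs
# (a simplified form of the RFC 3550 interarrival jitter estimate).
jitter = mean(abs(a - b) for a, b in zip(received, received[1:]))

print(f"loss: {packet_loss:.1%}, latency: {avg_latency:.1f} ms, jitter: {jitter:.2f} ms")
```

Tracking these per site pair is what makes "can't cost-effectively scale" concrete: the metrics degrade first on the longest and most congested paths.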
Transforming to a next-generation WAN architecture plays a critical role in enabling enterprises to more easily and cost-effectively migrate to and better support public, private and hybrid cloud environments.
According to Forrester, "enterprise use of the cloud has arrived," with nearly half of all companies in North America and Europe setting aside budget for private cloud investments in 2013. Legitimate budgeting to integrate cloud services into existing platforms and to deploy software apps to the cloud confirms that IT shops are "no longer denying it's happening in their company." Increasingly, enterprises are moving beyond their own data centers, choosing to host their applications externally or to leverage infrastructure and services from third-party providers.
Cloud infrastructure providers, such as Amazon and Rackspace, are well established in the enterprise IaaS market, delivering compute, storage and hosting services to businesses of all sizes, from SMEs to large multinational corporations. Since its launch in 2006, Amazon Web Services' (AWS) S3 offering has grown to more than two trillion stored objects, and AWS revenues have grown past $2 billion. AWS clearly dominates the cloud platform space, holding as much as 70 percent of the market, with its enterprise clients spending anywhere from $12,000 to $2.5 million per year on its infrastructure services. The push of traditional companies such as Microsoft, IBM and HP, as well as a host of other players, into this space further validates the arrival of cloud in the enterprise market.
On the software side, service providers like Salesforce.com have been offering cloud-based enterprise software for years, enabling companies to optimize their costs under a pay-per-use model while simplifying the delivery of reliable apps that scale more easily. According to the Aberdeen Group, SaaS is becoming an increasingly important deployment model for enterprise applications, with the highest adoption among CRM and ERP solutions. Nearly 80 percent of all companies currently use two or more SaaS applications, and many report decreased spending on application deployment as a result of SaaS usage.
Enterprises will continue to use a range of cloud solutions, developed internally and sourced from external providers, to more efficiently and effectively distribute mission-critical applications on a global scale. Most will need to move beyond their traditional legacy networks to ensure higher levels of performance, reliability and scalability of these applications across the WAN.
Traditional WAN Design and Optimization Approaches: Falling Short in the Cloud
Cloud is causing an explosion in enterprise bandwidth demand and making traditional WAN management obsolete, and those demands will only continue to grow. While the services themselves are the main attraction of the cloud, enterprises are finding that traditional Multiprotocol Label Switching (MPLS) networks and WAN acceleration technologies can't keep up.
Migrating to the cloud has put new pressures on WAN connectivity, from both a cost and performance perspective. Existing networks and optimization solutions cannot provide the capacity, reliability and scalability required across all users, locations and geographies. Work environments and application needs have changed, and will continue to change dramatically. In many cases, network design has become a limiting factor with reliance on traditional architectures that have not been optimized to support how applications are being hosted and accessed in public, private and hybrid cloud environments, or how and where people work.
Traditional WAN architecture is based on a hub-and-spoke model, with data distributed from headquarters to branch locations and across data centers (DC2DC) connected via public and private networks. At the branch or edge, sites are connected via low-bandwidth MPLS links, often over T1 to DS3 access links from the local telco. Larger, more bandwidth-intensive sites, such as corporate HQs and data centers, are connected via expensive MPLS WAN links configured in a higher-bandwidth core, typically in the range of 100 Mbps.
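The capacity squeeze in this hub-and-spoke design is simple arithmetic. As a rough sketch (the branch counts are hypothetical; the link rates are the standard T1 and DS3 speeds), summing branch line rates against a single 100 Mbps core shows the oversubscription:

```python
# Sketch of the hub-and-spoke capacity math described above, using
# standard link speeds (T1 = 1.544 Mbps, DS3 = 44.736 Mbps, 100 Mbps core).
# The branch mix is invented for illustration.

LINK_MBPS = {"T1": 1.544, "DS3": 44.736, "core": 100.0}

branches = {"T1": 40, "DS3": 5}  # hypothetical branch mix

# Aggregate edge demand if every branch transmits at line rate toward the hub.
edge_demand = sum(LINK_MBPS[link] * count for link, count in branches.items())
core_capacity = LINK_MBPS["core"]

print(f"edge demand: {edge_demand:.1f} Mbps vs core: {core_capacity:.0f} Mbps")
# With this mix, aggregate branch demand already oversubscribes a single
# 100 Mbps core link roughly 2.9:1 before any cloud traffic is added.
```

Any growth in cloud traffic lands on that already-oversubscribed core, which is why simply adding branches does not scale.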
The majority of enterprise WAN links are high-cost, site-to-site private MPLS lines sourced from incumbent telcos like Verizon and AT&T. As enterprise bandwidth demands increase, the high cost of MPLS-based WAN connectivity and the complexity of the underlying networks impact the enterprise's ability to cost-effectively scale its networks in step with demand.
Not only is the enterprise use of global MPLS for "backbone" traffic becoming less cost competitive as scale increases, but it is increasingly challenging to control costs associated with real-time applications, distributed cloud services and rich media. Traditional network topologies can also limit an enterprise's ability to fully leverage infrastructure and server virtualization as a means to more effectively distribute enterprise applications across all locations and users, and application performance suffers over long distance network paths. Furthermore, as enterprises seek to leverage solutions sourced from external providers, using MPLS as the connectivity method to SaaS and other public cloud locations is not agile enough and doesn't scale effectively given the high cost per bit.
MPLS is not the only factor driving the need for enterprises to rethink their networks. The public Internet is becoming an increasingly important distribution medium to reach customers and stakeholders, but managing performance is becoming critical.
While the Internet provides ease of access across a broad base of users, it often lacks the performance and reliability required to support mission-critical, cloud-based enterprise solutions. Packet loss and jitter are more common across the Internet than across MPLS; and network congestion and latency vary across locations and geographies, as no single provider can guarantee end-to-end performance. Nevertheless, accessing services via the Internet is a reality, and it is increasingly important for enterprises to architect network solutions that best optimize "public" access to cloud-based apps and services.
Another approach enterprises have used to optimize the performance of business-critical applications over the enterprise network has been through WAN optimization. Traditional WAN optimization techniques use appliances and hardware installed at corporate and remote locations to improve end-to-end application performance by increasing data-transfer efficiencies across wide-area-networks. These technologies are often application or protocol-specific and seek to optimize how individual applications work over the WAN instead of making the WAN work better for all applications.
While these appliances have helped deliver better application performance, this approach tends to be more tactical in nature, rationing a limited supply of bandwidth instead of addressing the organization's more strategic need to add more bandwidth or capacity to support ever-increasing demands. As more applications and services are deployed to the cloud, and more bandwidth-intensive applications and real-time data are delivered across the extended enterprise, the enterprise's demand for bandwidth will continue to increase.
Furthermore, while traditional WAN optimization solutions are dual-sided, with one box at a data center and another at a branch office, optimizing cloud applications can only be done single-sided, since an appliance cannot be placed in front of an application residing in the cloud. As such, traditional solutions can fall short in the cloud and are better suited to improving the performance of non-real-time applications, such as email, network backup and remote file access.
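The "rationing" idea behind these appliances is data reduction: chunks already seen on the wire are replaced by short references. A toy sketch of chunk-level deduplication (the fixed 64-byte chunk size and the all-repeats payload are artificial; real appliances use variable-size chunking and add compression) shows the mechanism:

```python
# Toy sketch of the data-reduction technique behind traditional WAN
# optimization appliances: cache chunks already sent, transmit only
# references for repeats. Chunk size and payload are illustrative.

import hashlib

CHUNK = 64  # bytes; real appliances use variable-size chunking

def bytes_on_wire(payload: bytes, cache: set) -> int:
    """Bytes actually sent if repeated chunks are replaced by 20-byte refs."""
    sent = 0
    for i in range(0, len(payload), CHUNK):
        chunk = payload[i:i + CHUNK]
        digest = hashlib.sha1(chunk).digest()
        if digest in cache:
            sent += 20          # reference only
        else:
            cache.add(digest)
            sent += len(chunk)  # first occurrence goes in full
    return sent

cache: set = set()
payload = b"A" * 64 * 100       # highly redundant traffic
wire = bytes_on_wire(payload, cache)
print(f"{len(payload)} bytes reduced to {wire} on the wire")
```

The limitation the text describes follows directly: the technique needs a cache on both ends of the link, which is exactly what a cloud-hosted application cannot provide.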
Rethinking Enterprise Networks: Next-Generation WAN Architecture
Enterprises that wish to leverage private, public or hybrid cloud solutions to distribute data and applications across a country or around the globe need to rethink their WAN architecture to achieve the required scale within existing budgets. Bandwidth economies of scale between highly connected network aggregation points offer exponential improvements in bandwidth availability at a fraction of the cost, but most enterprises are unaware of how to tap into these aggregation points or even that they exist.
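The economics claimed above can be made concrete with back-of-envelope arithmetic. The per-Mbps prices below are hypothetical placeholders, not quotes from any carrier; the point is only that a large flat-rate gap compounds as bandwidth scales:

```python
# Back-of-envelope sketch of the cost-per-Mbps argument above. All prices
# are hypothetical placeholders, not quotes from any carrier.

mpls_price_per_mbps = 100.0   # assumed $/Mbps/month for telco MPLS
transit_price_per_mbps = 2.0  # assumed $/Mbps/month for IP transit bought
                              # at a carrier-neutral aggregation point

def monthly_cost(mbps: float, price: float) -> float:
    """Monthly circuit cost at a flat per-Mbps rate."""
    return mbps * price

for mbps in (100, 1_000, 10_000):
    mpls = monthly_cost(mbps, mpls_price_per_mbps)
    transit = monthly_cost(mbps, transit_price_per_mbps)
    print(f"{mbps:>6} Mbps: MPLS ${mpls:,.0f}/mo vs aggregated transit "
          f"${transit:,.0f}/mo ({mpls / transit:.0f}x)")
```

Under these assumed rates the gap is a constant 50x, which is why the same budget buys an order of magnitude more capacity once the WAN reaches an aggregation point.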
The first step is connecting existing enterprise data centers and the WAN directly into carrier-neutral data centers that are "highly connected" and provide direct access to a wide array of high-capacity, high-bandwidth connectivity options, as well as to a growing base of cloud infrastructure and application providers.
These carrier-neutral data centers, operated by providers such as Equinix and Telx, are well known for outsourced IT services, including data center colocation, managed hosting of external-facing websites and applications, proximity to public cloud services, and as secondary sites for disaster recovery and business continuity. However, many enterprises are less familiar with these facilities as a key enabler of a high performance, next-generation WAN architecture.
Integrating these facilities as "super nodes" in the WAN provides enterprises a long-term approach to increase control over performance, reliability and scalability for the cloud while providing a means to significantly drive down bandwidth costs.
Carrier-neutral facilities are centrally located and provide enterprises broad access to competitive carrier markets with a near limitless supply of diverse, inexpensive bandwidth from Tier-1 and Tier-2 network carriers. By leveraging these facilities, enterprises are no longer constrained by the incumbent telcos and their legacy networks and have direct access to fiber and bandwidth from competitive providers at prices much lower than MPLS along with a wider array of MPLS and similar services.
Re-architecting existing networks around a next-generation WAN architecture provides a more cost-effective means of scaling the WAN with the enterprise's demands than traditional MPLS-dense, hub-and-spoke networks. Additionally, bandwidth can easily be added at lower cost, secure hosting or rack space for new hardware or software can be deployed, and latency can be improved by connecting additional proximity locations.
Beyond the cost and scalability benefits of network transformation, by building out a higher performance core network integrating super nodes and direct fiber connectivity, enterprises can substantially improve performance and reliability of virtualized, networked and cloud-based solutions, both for intranet applications as well as SaaS and cloud-based services.
Carrier-neutral data centers often serve as network access points or public peering locations, close to the core of the Internet and public cloud services. Moving closer to the Internet core enables more reliable access to third-party SaaS, IaaS and other cloud-based services, even delivering close to "on-net" reliability for cloud services located in the same colocation facility. Furthermore, these facilities are often close, in terms of latency, to a large number of users and businesses connecting to the Internet, enabling more reliable access and service delivery to a broader base of users.
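"Close in terms of latency" has a physical floor: light in fiber travels at roughly two-thirds of c, about 5 microseconds per kilometer one way, so fiber-route distance alone bounds round-trip time. A rough sketch (the route distances are illustrative, not actual fiber paths):

```python
# Rough sketch of why proximity matters: propagation delay in fiber is
# roughly 5 microseconds per km one way (light at ~2/3 c), so fiber-route
# distance puts a hard floor under round-trip time, before any queuing.
# Distances below are illustrative, not actual fiber routes.

FIBER_US_PER_KM = 5.0  # one-way propagation delay in glass, microseconds/km

def min_rtt_ms(route_km: float) -> float:
    """Lower bound on RTT from propagation delay alone (no queuing)."""
    return 2 * route_km * FIBER_US_PER_KM / 1000.0

for label, km in [("same metro", 50), ("cross-country", 4_500),
                  ("trans-Atlantic", 6_500)]:
    print(f"{label:>14}: >= {min_rtt_ms(km):.1f} ms RTT")
```

No amount of optimization recovers propagation delay, which is why placing a super node near users and cloud on-ramps beats tuning a long path.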
This architectural approach can provide better performance and help to address several of the key WAN factors affecting application performance while delivering enhanced end-to-end network performance, speed and reliability. A next-generation WAN architecture sets the foundation to enable enterprises to better leverage the power of virtualization and gain the efficiencies of the cloud to more effectively distribute enterprise applications and services. A higher performance core network connecting corporate data centers and third-party facilities with more robust WAN connectivity allows enterprises to take advantage of bandwidth costs and application performance benefits today, while providing the ability to cost-effectively scale to meet future demands.
This next-generation WAN architecture is the exact approach that today's leading companies are using to transform their global WAN architectures around highly connected aggregation points or "super nodes". Moving from legacy MPLS networks, these companies are building out their own high capacity, highly connected core backbones, and pushing MPLS to the edge. Once connected to the right network aggregation points, bandwidth costs begin to fall rapidly while bandwidth increases and access to cloud-based infrastructure and applications is streamlined and simplified.
CFN Services works with leading companies to map their legacy WAN to this new cloud world order. To learn more about CFN's network transformation solutions and how next-generation WAN architecture can improve business performance, please visit www.cfnservices.com
Gain additional insights on how leading organizations are utilizing smarter networking strategies to improve network and application performance in the Aberdeen Group's "Building a Smarter Networking Strategy for the Modern Large Enterprise" white paper.
- Cloudyn, AWS Client Research
- Aberdeen Group, "The Growing Importance of SaaS as an Application Deployment Model," March 1, 2013