By Jeramiah Dooley
May 22, 2014 07:15 AM EDT
Whether they admit it or not, the emergence of public cloud providers has dramatically altered the playing field for hardware vendors of every type. Amazon Web Services (AWS) and its competitors opened Pandora's box by introducing the world to a completely programmatic, scalable, evolving, pay-as-you-go way to procure and use network, compute and storage resources on a global scale. They have disrupted many layers of the technology industry, from the applications being written to the way companies interact with the infrastructure that supports those applications.
Nowhere is this disruption easier to see than in the virtualization ecosystem. For the better part of the last decade, hypervisor companies like VMware, Citrix, Microsoft and Red Hat worked hand-in-hand with hardware manufacturers like Cisco, NetApp, EMC, HP and Dell to define both the infrastructure foundation and the virtualized abstraction layer that sat underneath the entirety of the client/server era. These companies provided a direct link between the enterprise applications, the hypervisor and the hardware. They owned the traditional data center construct.
It's that construct, since rebranded as "private cloud," that is directly under attack by public cloud providers. I predict that this will be the battlefield for the heart and soul of enterprise IT for the next decade.
The response to the public cloud threat has varied, often reflecting each traditional company's ability to pivot and meet the challenge. Interestingly, erstwhile competitors Microsoft and VMware reacted similarly, because both were uniquely positioned to craft a software-defined answer to the problem.
For both companies, the response started with existing enterprise workloads. One of the biggest challenges of the AWS public cloud is that getting workloads, and especially data, into and out of an enterprise environment can be both technically difficult and expensive. Most workloads running on an enterprise virtualization platform today can't be easily ported into AWS, which increases the cost and risk of any migration. As companies with extensive and hard-won experience running mission-critical enterprise workloads, Microsoft and VMware came to much the same conclusion: build a public cloud on their existing platforms and let customers and developers leverage the investments they've made in their own data centers as they selectively move workloads out of them. Thus, Microsoft Azure and VMware vCHS were born. Both are clouds that customers can move workloads to without rewriting or re-architecting them, and both can be licensed under existing agreements and managed by existing staff and tools.
Unfortunately, the traditional data center infrastructure is now the weak link in this new software-defined world. In each of the public clouds referenced, the focus has been on the abstraction layer and how it interacts with the end users. What's missing is how the abstraction layer and the applications and tools that sit on top of it interact with the infrastructure directly.
There have been attempts at hardware-based offloading, especially with regard to storage. VAAI is a good example of VMware trying to create a way to let enterprise storage arrays handle the tasks they are good at without requiring the direct involvement of the hypervisor. But even there it's a rudimentary exchange at best: the hypervisor asks, "Can you do this task instead of me?" and the array responds. If the answer is yes, the hypervisor waits for the task to complete; if the answer is no, the hypervisor does the task itself. The relationship isn't dynamic, and the array remains ignorant of the reason for, and context behind, the task in the first place.
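To make that exchange concrete, here is a minimal sketch of the offload-or-fallback pattern the hypervisor follows. It is written in Python purely for illustration; the `array` and `hypervisor` objects and their methods are invented for this sketch, since real VAAI primitives (XCOPY, WRITE SAME, ATS) are negotiated at the SCSI layer rather than through any such API:

```python
# Illustrative sketch of a VAAI-style offload handshake.
# All objects and methods here are hypothetical; real VAAI
# primitives are exchanged as SCSI commands, not Python calls.

def clone_virtual_disk(hypervisor, array, src, dst):
    """Clone a virtual disk, offloading to the array when possible."""
    if array.supports("xcopy"):
        # "Can you do this task instead of me?" The array says yes,
        # so the hypervisor simply waits for the copy to complete.
        array.start_copy(src, dst).wait()
    else:
        # The array says no, so the hypervisor does the work itself,
        # pushing every block through its own data path.
        for block in hypervisor.read_blocks(src):
            hypervisor.write_block(dst, block)
    # Note what is missing in both branches: the array never learns
    # *why* the copy is happening (a clone? a migration?), so it can't
    # make context-aware decisions about placement, caching or QoS.
```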
In summary, an outside force, AWS and the public cloud, is the primary catalyst driving change into the enterprise, yet very little of that change is happening below the cloud management or hypervisor layer. Why is that? Why is it important that the infrastructure layer become more of an asset to the rest of the stack? What would that look like? Let's dig in.
The question of why is actually pretty simple: it's really, really hard to take a legacy hardware architecture and retrofit it into something agile and programmatic. In some cases, it just takes a new concept and a hardware refresh (like Cisco UCS and its take on XML-defined BIOS policies), but in many cases, especially around storage, it requires a complete reimagining of the platform. It's no coincidence that most of the innovation in this agile infrastructure space is being done by startups that have no legacy customers, technical debt or margins to deal with.
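To illustrate what "XML-defined" means in practice, consider the sketch below. It is conceptual only: the element names are invented and do not follow the actual Cisco UCS Manager schema. The point is that a server's BIOS behavior becomes a document that software can generate, version and apply, rather than settings toggled by hand at a console:

```python
import xml.etree.ElementTree as ET

def build_bios_policy(name, hyperthreading=True, turbo=True):
    """Render a BIOS policy as XML so that an API, not a person at a
    console, decides how a server is configured. The element names
    are invented for illustration, not the real UCS schema."""
    policy = ET.Element("biosPolicy", name=name)
    ET.SubElement(policy, "setting",
                  token="hyperthreading",
                  value="enabled" if hyperthreading else "disabled")
    ET.SubElement(policy, "setting",
                  token="turbo-boost",
                  value="enabled" if turbo else "disabled")
    return ET.tostring(policy, encoding="unicode")

# Generate the same policy for a hundred servers as easily as for one.
print(build_bios_policy("web-tier"))
```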
Why is it important? While the best hardware is boring hardware, it's still a critical part of providing the flexible, reliable, high-performance foundation that enterprise applications demand. There are times when the best way to handle the demands of an application, or, more important, multiple applications at once, is in hardware. This is true at the network layer, where the manipulation of packets benefits from proximity to processing resources; at the compute layer, where applications can benefit from specialized GPU resources that handle unique requirements; and most especially at the storage layer.
Storage services can have the most dramatic impact on workload performance, yet they are often implemented with no direct relationship to those workloads. Services like compression, deduplication and quality of service are usually "on or off" features when it comes to storage arrays. In the best case, a storage administrator creates a volume or LUN, chooses the features to enable, and a virtualization admin maps that volume to a data store. Perhaps the virtualization team creates manual storage profiles that define the features offered by that data store, but VM placement and migration remain manual processes, and there is no way to map application policy consistently across the hypervisor and hardware layers. (Of course, it's not impossible to create programmatic, hypervisor-aware infrastructure, but it is pretty hard.)
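As a sketch of what more granular, programmatic control might look like, the snippet below applies a per-workload storage profile at volume-creation time instead of toggling features array-wide. The REST endpoint, payload fields and profile structure are all hypothetical; no real vendor API is implied:

```python
import requests

ARRAY_API = "https://array.example.com/api/v1"  # hypothetical endpoint

# A policy defined once at the application level and applied per
# volume, rather than "on or off" features chosen by hand for a LUN.
GOLD_PROFILE = {
    "compression": True,
    "deduplication": True,
    "qos": {"min_iops": 5000, "max_iops": 20000},
}

def create_volume_for_vm(vm_name, size_gb, profile=GOLD_PROFILE):
    """Create a volume whose storage services are bound to the
    workload's policy from the moment it exists."""
    resp = requests.post(
        f"{ARRAY_API}/volumes",
        json={"name": f"{vm_name}-data", "size_gb": size_gb, **profile},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]
```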
Enterprises have come to expect some fundamental features from the public cloud space: simple architecture, linear scaling, API availability and granular application of services. These features allow an infrastructure to respond to the increased requirements of a workload natively, without the overhead of a bolt-on orchestration engine. They provide the ability for the hypervisor to be both a northbound and southbound policy enforcer. They enable the Next-Generation Data Center, one in which the hardware, the hypervisor and the application all play an integrated, coordinated role in providing the performance and availability demanded by the enterprise.
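One way to picture the hypervisor as a policy enforcer in both directions is a simple control loop: it watches what a workload actually experiences (northbound) and adjusts what the hardware delivers (southbound). A minimal sketch follows, with invented `monitor` and `array` interfaces standing in for whatever telemetry and array APIs a real stack would expose:

```python
import time

def enforce_latency_target(monitor, array, volume_id,
                           target_ms=5.0, step_iops=1000, interval_s=60):
    """Raise a volume's QoS floor whenever observed latency drifts
    above the application's target -- no bolt-on orchestration
    engine required. All interfaces here are hypothetical."""
    while True:
        observed = monitor.read_latency_ms(volume_id)  # northbound view
        if observed > target_ms:
            qos = array.get_qos(volume_id)             # southbound control
            array.set_qos(volume_id,
                          min_iops=qos["min_iops"] + step_iops)
        time.sleep(interval_s)
```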
No matter where your workloads run, the rise of public cloud has ushered in an era of computing defined by a seamless, programmatic experience. The old, monolithic infrastructure of yesterday's client/server wave is giving way to a more agile, more responsive, more service-rich and more scalable cloud-based model. The battle for the enterprise soul is beginning and, inside or outside the firewall, the clouds that can best adapt to the demands of the workloads they support will be best positioned for success.