February 5, 2013
by George Crump, Storage Switzerland
VDI (Virtual Desktop Infrastructure) implementation projects are going to be priorities for many IT Managers in 2013 and a key concern will be end-user acceptance. If the users don't embrace their virtual desktops they won't use them and the project is doomed to failure. The key to acceptance is to provide users with an environment that feels the same, performs better and is more reliable than their current stand-alone system. The storage system bears most of the responsibility in delivering that experience.
IT managers who want to capitalize on the opportunity that virtual desktops present can focus on two key capabilities when they evaluate storage system vendors. The first is being able to deliver the raw performance that the virtual desktop architecture needs; the second is doing so in the most cost-effective way possible. These two capabilities are traditionally at odds with each other and are not always well reflected in benchmark testing.
For most organizations the number-one priority for gaining user acceptance is to keep the virtual desktop experience as similar to the physical desktop as possible. Typically, this will mean using persistent desktops, a VDI implementation in which each user's desktop is a stand-alone element in the virtual environment for which they can customize settings and add their own applications just like they could on their physical desktop.
The problem with persistent desktops is that a unique image is created for each desktop or user, which can add up to thousands of images for larger VDI populations. Obviously, allocating storage for thousands of virtual desktops is a high price to pay for maintaining a positive user experience.
In an effort to reduce the amount of storage required for all of these images, virtualized environments have incorporated features such as thin provisioning and linked clones. The goal is to have the storage system deliver a VDI environment that's built from just a few thinly provisioned "golden" VDI images, which are then cloned for each user.
As users customize their clones, only the differences between the golden image and the users' VDIs need to be stored. The result is a significant reduction in the total amount of storage required, lowering its overall cost. Also, the small number of golden images allows for much of the VDI read traffic to be served from a flash-based tier or cache.
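The delta mechanism can be illustrated with a minimal copy-on-write sketch in Python (the class names and block layout here are hypothetical, not any hypervisor's actual implementation):

```python
# Minimal copy-on-write "linked clone" model (illustrative only).
class GoldenImage:
    def __init__(self, blocks):
        self.blocks = blocks      # block number -> data, shared by every clone

class LinkedClone:
    def __init__(self, golden):
        self.golden = golden
        self.delta = {}           # only the blocks this user has changed

    def read(self, n):
        # Serve from the clone's delta if modified, else from the golden image.
        return self.delta.get(n, self.golden.blocks.get(n))

    def write(self, n, data):
        self.delta[n] = data      # the golden image itself is never modified

golden = GoldenImage({0: "os", 1: "apps"})
user = LinkedClone(golden)
user.write(2, "my-settings")      # customization lands in the delta only
print(len(user.delta))            # 1 -> storage grows only by the change
print(user.read(0))               # "os", still served from the shared image
```

Because every clone's reads of unmodified blocks resolve to the same golden image, those few images are exactly what a flash tier or cache can hold.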
When a write occurs from a thinly provisioned, cloned virtual desktop, more has to happen than just the operation to write that data object. The volume needs to have additional space allocated to it (one write operation), the metadata table that tracks unique branches of the cloned volume has to be updated (another write operation) and, depending on the RAID protection in place, some sort of parity data needs to be written. Then, finally, the data object itself is written. This entire process has to happen with each data change, no matter how small.
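The sequence above implies that every small guest write fans out into several backend operations. A back-of-the-envelope model (the factor of four is an assumption derived from the four steps listed, not a measured figure):

```python
# Rough write-amplification model for a thin-provisioned, cloned volume.
# Each guest write triggers: space allocation + metadata update +
# parity write + the data write itself, per the sequence described above.
BACKEND_OPS_PER_GUEST_WRITE = 4   # assumed; real systems vary widely

def backend_write_iops(guest_write_iops):
    """Backend write operations generated by a given guest write load."""
    return guest_write_iops * BACKEND_OPS_PER_GUEST_WRITE

# 500 desktops each issuing 10 small writes per second:
print(backend_write_iops(500 * 10))   # 20000 backend write ops/s
```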
Herein lies the tradeoff in using these features. While they reduce the amount of space required for the VDI images, thin provisioning and cloning increase the demand for high write performance in the storage system. This presents a significant opportunity for storage system vendors who can address these new performance requirements.
Many storage systems that use a mix of flash memory and hard disk technology don't use the higher performing flash for writes; they use it for actively reading data. While these storage systems have storage controllers designed to handle high read loads, the increased write activity generated by thin provisioning and cloning is still going to relatively slow hard disk drives. Because this type of I/O traffic is highly random, the hard drives are constantly "thrashing about". Basically the controller sits idle while it waits for the hard disk to rotate into position to complete each write command. Even systems with an SSD tier or cache may have problems providing adequate performance because they too don't leverage the high speed flash for write traffic.
Due to the high level of thin provisioning and cloning, plus the fact that, once a desktop is created, a large part of its I/O is write traffic, many cached or tiered systems do not perform well in real-world VDI environments and can produce misleading VDI benchmark scores.
The Truth Behind VDI Benchmarks
Most VDI benchmarks focus primarily on one aspect of the VDI experience: the time it takes to boot a given number of virtual desktops. The problem with using a "boot storm" test is that this important but read-heavy event is only a part of the overall VDI storage challenge. During most of the day desktops are writing data, not reading it. In addition, routine activities such as logging out and application updates are very write-intensive. The ability of a storage system to handle these write activities is not measured by many VDI benchmarking routines.
A second problem with many VDI benchmarking claims is that for their testing configuration they do not use thinly provisioned and cloned volumes. Instead, they use thick volumes in order to show maximum VDI performance.
As discussed above, in order to keep user adoption high and costs low most VDI implementations would preferentially use persistent desktops with thin provisioning and cloning. Be wary of vendors claiming a single device can support over 1000 VDI users. These claims are usually based on the amount of storage that a typical VDI user might need as opposed to the Read/Write IOPS performance they will most likely need.
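A quick way to sanity-check a "1000 users" claim is to size the system on IOPS rather than capacity. A sketch, with an assumed per-user load and an 80% write mix (both figures are illustrative, not benchmark data):

```python
# Illustrative steady-state IOPS estimate for a VDI population.
def required_iops(users, iops_per_user=10, write_fraction=0.8):
    """Split a population's total IOPS into read and write components."""
    total = users * iops_per_user
    return {"reads": total * (1 - write_fraction),
            "writes": total * write_fraction}

load = required_iops(1000)
print(load)
# Most of the steady-state load is writes, which a capacity-based
# "users per box" claim says nothing about.
```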
Trustworthy VDI Performance
A successful VDI project is one that gains end-user acceptance while reducing desktop support costs. The cost of a storage system that can provide thin provisioning, cloning and an adequately sized flash storage area to support the virtual environment could be too high for some enterprises to afford. Additional costs can also come from the performance problems that are likely to appear after the initial desktop boot completes, because of the high level of write I/O.
The simplest solution may be to deploy a solid state appliance like Astute Networks ViSX for VDI. These devices use 100% solid state storage, providing high performance on both reads and writes. This means that boot performance is excellent and performance is maintained throughout the day as well.
With a solid state based solution to the above problems, performance will not be an issue, but cost may still be. Even though it can provide consistent read/write performance throughout the day for a given number of virtual desktops, the cost per desktop of a flash based solution can be significantly higher than a hard drive based system.
However, it's likely in larger VDI environments (400+ users) that flash-based systems are really the only viable alternative to meet the performance requirements which can easily exceed 100 IOPS per user. Fortunately, flash-based systems can also produce efficiencies that bring down that cost in addition to the well-known benefits of using 1/10th the floor space, power and cooling compared to traditional storage systems.
First, the density of virtual desktops per host can be significantly higher with a flash appliance. And, the system is unaffected by the increase in random I/O as the density of virtual machines increases.
Second, the speed of the storage device compensates for the increased demands of thin provisioning and cloning operations run on the hypervisor. These data reduction services can now be used without a performance penalty. This means that the cost of a storage system with a more powerful storage controller and expensive data services like thin provisioning and cloning can be avoided.
Finally, the flash appliance is designed to tap into more of the full potential of solid state-based storage. For example, Astute uses a unique DataPump Engine protocol processor that's designed to specifically accelerate data onto and off of the network and through the appliance to the fast flash storage. This lowers the cost per IOPS compared to other flash-based storage systems.
Most legacy storage systems use traditional networking components and get nowhere near the full potential of flash. In short, the appliance can deliver better performance with the same amount of flash memory. This leads to further increases in virtual machine density and space efficiency because more clones can be made, resulting in a very low cost per VDI user.
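The 100-IOPS-per-user figure cited above makes the spindle arithmetic stark. A sketch, assuming roughly 150 random IOPS per 15K-rpm drive (a common rule of thumb, not a measured value):

```python
# Back-of-the-envelope: why large VDI deployments outrun HDD arrays.
USERS = 400
IOPS_PER_USER = 100        # per the article's estimate
HDD_RANDOM_IOPS = 150      # assumed rule of thumb for a 15K-rpm drive

required = USERS * IOPS_PER_USER
spindles_needed = -(-required // HDD_RANDOM_IOPS)   # ceiling division
print(required, spindles_needed)   # 40000 IOPS -> ~267 spindles
```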
VDI benchmark data can be useful but the test itself must be analyzed. Users should look for tests that not only focus on boot performance but also performance throughout the day, and at the end of the day. If systems with a mix of flash and HDD are used then enough flash must be purchased to avoid a cache miss, since these systems rarely have enough disk spindles to provide adequate secondary performance.
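Why the cache-miss warning matters can be shown with a simple expected-latency calculation (the latency figures are assumptions for illustration):

```python
# Effective read latency of a hybrid (flash cache + HDD) system.
FLASH_LATENCY_MS = 0.2    # assumed flash read latency
HDD_LATENCY_MS = 8.0      # assumed random-read HDD latency

def effective_latency_ms(hit_ratio):
    """Weighted average read latency for a given flash cache hit ratio."""
    return hit_ratio * FLASH_LATENCY_MS + (1 - hit_ratio) * HDD_LATENCY_MS

for hr in (0.99, 0.95, 0.90):
    print(hr, round(effective_latency_ms(hr), 2))
# Even a small miss rate drags the average toward HDD speeds, which is
# why an undersized flash tier disappoints once the boot storm is over.
```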
A simpler and better performing solution may be to use a solid state appliance like those available from Astute Networks. These allow for consistent, high performance throughout the day at a cost per IOPS that hybrid and traditional storage vendors can't match. Their enablement of the built-in hypervisor capabilities, like thin provisioning, cloning and snapshots, also means that they can be deployed very cost effectively.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.