Can You Trust VDI Storage Benchmarks?

The truth behind VDI benchmarks

by George Crump, Storage Switzerland

VDI (Virtual Desktop Infrastructure) implementation projects will be priorities for many IT managers in 2013, and a key concern will be end-user acceptance. If users don't embrace their virtual desktops, they won't use them and the project is doomed to failure. The key to acceptance is providing users with an environment that feels the same as, performs better than, and is more reliable than their current stand-alone system. The storage system bears most of the responsibility for delivering that experience.

IT managers who want to capitalize on the opportunity that virtual desktop environments present should focus on two key capabilities when they evaluate storage system vendors: first, delivering the raw performance that the virtual desktop architecture needs, and second, doing so in the most cost-effective way possible. These two capabilities are traditionally at odds with each other and are not always well reflected in benchmark testing.

For most organizations, the number-one priority for gaining user acceptance is keeping the virtual desktop experience as similar to the physical desktop as possible. Typically, this means using persistent desktops: a VDI implementation in which each user's desktop is a stand-alone element in the virtual environment that they can customize with their own settings and applications, just as they could on a physical desktop.

The problem with persistent desktops is that a unique image is created for each desktop or user, which can add up to thousands of images for larger VDI populations. Obviously, allocating storage for thousands of virtual desktops is a high price to pay for maintaining a positive user experience.
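A quick back-of-the-envelope calculation shows the scale of the problem. The 40 GB image size below is an assumption purely for illustration:

```python
# Hypothetical capacity math for fully allocated persistent desktops.
desktops = 1000
image_size_gb = 40                        # assumed per-desktop image size

total_tb = desktops * image_size_gb / 1000
print(f"{desktops} full images at {image_size_gb} GB each: ~{total_tb:.0f} TB")
# ~40 TB of capacity just to hold the images, before any user data growth.
```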

In an effort to reduce the amount of storage required for all of these images, virtualized environments have incorporated features such as thin provisioning and linked clones. The goal is to have the storage system deliver a VDI environment that's built from just a few thinly provisioned "golden" VDI images, which are then cloned for each user.

As users customize their clones, only the differences between the golden image and the users' VDIs need to be stored. The result is a significant reduction in the total amount of storage required, lowering its overall cost. Also, the small number of golden images allows for much of the VDI read traffic to be served from a flash-based tier or cache.
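To make the mechanics concrete, here is a minimal sketch of how a linked clone behaves: reads fall through to the shared golden image unless the user has overwritten a block, and only overwritten blocks consume new space. The class and block-level granularity are simplifying assumptions, not any vendor's actual implementation.

```python
class LinkedClone:
    """Toy model of a linked clone: a shared, read-only golden image
    plus a per-user delta map holding only the blocks that user changed."""

    def __init__(self, golden_image):
        self.golden = golden_image   # shared dict: block number -> data
        self.delta = {}              # private dict: only modified blocks

    def read(self, block):
        # Reads check the user's delta first, then fall back to the golden
        # image, which is why golden-image reads cache so well in flash.
        return self.delta.get(block, self.golden.get(block))

    def write(self, block, data):
        # Writes never touch the golden image; they land in the delta,
        # so storage grows only with each user's unique changes.
        self.delta[block] = data


golden = {0: "os", 1: "apps"}              # one image shared by every user
users = [LinkedClone(golden) for _ in range(1000)]
users[0].write(2, "user settings")         # only user 0 pays for block 2
print(users[0].read(0), users[0].read(2))  # os user settings
print(len(users[1].delta))                 # 0: other clones stay empty
```

In this model, a thousand desktops cost little more than one golden image plus whatever each user actually changes, which is exactly the saving described above.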

When a write occurs on a thinly provisioned, cloned virtual desktop, more has to happen than just the operation that writes the data object. The volume needs additional space allocated to it (one write operation), the metadata table that tracks unique branches of the cloned volume has to be updated (another write operation), and some form of parity data needs to be written, depending on the RAID protection in place. Only then is the data object itself written. This entire process happens with every data change, no matter how small.
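The arithmetic below illustrates that amplification. The four-operation breakdown follows the paragraph above, but the resulting 4x factor and the assumed load are illustrative; the real ratio depends on the array and the RAID scheme in use.

```python
# Illustrative write amplification on a thin-provisioned, cloned volume.
LOGICAL_WRITES_PER_SEC = 10_000          # assumed desktop write load

ops_per_logical_write = {
    "allocate thin-provisioned space": 1,
    "update clone metadata branch":    1,
    "update RAID parity":              1,
    "write the data object itself":    1,
}

amplification = sum(ops_per_logical_write.values())
physical = LOGICAL_WRITES_PER_SEC * amplification
print(f"amplification factor: {amplification}x")
print(f"{LOGICAL_WRITES_PER_SEC:,} logical writes/s -> {physical:,} physical writes/s")
```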

Herein lies the tradeoff in using these features. While they reduce the amount of space required for the VDI images, thin provisioning and cloning increase the demand for high write performance in the storage system. This presents a significant opportunity for storage system vendors who can address these new performance requirements.

Many storage systems that use a mix of flash memory and hard disk technology don't use the higher-performing flash for writes; they use it only for actively read data. While these storage systems have controllers designed to handle high read loads, the increased write activity generated by thin provisioning and cloning still lands on relatively slow hard disk drives. Because this type of I/O traffic is highly random, the hard drives are constantly thrashing: the controller sits idle while it waits for the disk to rotate into position to complete each write command. Even systems with an SSD tier or cache may have trouble providing adequate performance, because they, too, don't leverage the high-speed flash for write traffic.
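A rough service-time estimate shows why random writes starve on spinning disks. The 7,200 RPM speed and 8 ms average seek below are typical published figures, assumed here for illustration:

```python
# Rough random-IOPS estimate for a single hard disk (assumed figures).
rpm = 7200
avg_seek_ms = 8.0                        # assumed typical average seek time
half_rotation_ms = (60_000 / rpm) / 2    # average rotational latency: ~4.2 ms

service_time_ms = avg_seek_ms + half_rotation_ms
iops_per_spindle = 1000 / service_time_ms
print(f"~{iops_per_spindle:.0f} random IOPS per spindle")  # roughly 80
```

At roughly 80 random IOPS per drive, every millisecond the platter spends seeking is a millisecond the controller spends waiting.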

Because thin provisioning and cloning are used so heavily, and because a large part of a desktop's I/O after creation is write traffic, many cached or tiered systems do not perform well in real-world VDI environments and can produce misleading VDI benchmark scores.

The Truth Behind VDI Benchmarks
Most VDI benchmarks focus primarily on one aspect of the VDI experience: the time it takes to boot a given number of virtual desktops. The problem with relying on a "boot storm" test is that this important but read-heavy event is only one part of the overall VDI storage challenge. During most of the day, desktops are writing data, not reading it. In addition, routine activities such as logging out and applying application updates are very write-intensive. Many VDI benchmarking routines simply do not measure a storage system's ability to handle these write activities.
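The sketch below shows why a boot-storm score alone can mislead. The read/write mixes are commonly cited rules of thumb, not measurements from any specific benchmark, but they capture the shift the paragraph above describes:

```python
# Illustrative read/write mixes per phase (assumed rule-of-thumb values).
phases = {
    "boot storm":    {"read": 0.8, "write": 0.2},
    "steady state":  {"read": 0.2, "write": 0.8},
    "logoff/update": {"read": 0.1, "write": 0.9},
}

for phase, mix in phases.items():
    print(f"{phase:13s} reads {mix['read']:.0%} / writes {mix['write']:.0%}")
# A benchmark that measures only the first row says little about the other two.
```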

A second problem with many VDI benchmarking claims is that the test configurations do not use thinly provisioned, cloned volumes. Instead, they use thick volumes in order to show maximum VDI performance.

As discussed above, to keep user adoption high and costs low, most VDI implementations will prefer persistent desktops with thin provisioning and cloning. Be wary of vendors claiming that a single device can support over 1,000 VDI users: these claims are usually based on the amount of storage a typical VDI user might need rather than the read/write IOPS performance those users will actually generate.
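Combining the figures in this article, the 1,000-user claim above, the 100 IOPS per user cited later, and the ~80 random IOPS per spindle estimated earlier, gives a sense of how far a capacity-based claim can drift from the IOPS reality. All three numbers are approximations:

```python
import math

# Sizing sketch: the article's 1,000-user claim and 100 IOPS/user figure,
# against the ~80 random IOPS/spindle estimated earlier (all approximate).
users = 1000
iops_per_user = 100
iops_per_hdd = 80

required_iops = users * iops_per_user                 # 100,000 IOPS
spindles = math.ceil(required_iops / iops_per_hdd)    # 1,250 drives
print(f"{required_iops:,} IOPS -> ~{spindles:,} HDD spindles required")
```

A device sized only by capacity could hold those 1,000 desktops while delivering a small fraction of the IOPS they generate.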

Trustworthy VDI Performance
A successful VDI project is one that gains end-user acceptance while reducing desktop support costs. The cost of a storage system that can provide thin provisioning, cloning, and an adequately sized flash tier to support the virtual environment may be too high for some enterprises. And further costs can come from the performance problems that are likely to appear once the initial desktop boot completes, driven by the high level of write I/O.

The simplest solution may be to deploy a solid-state appliance like Astute Networks' ViSX for VDI. These devices use 100% solid-state storage to provide high performance on both reads and writes, which means boot performance is excellent and performance is maintained throughout the day as well.

With a solid-state solution to the problems above, performance will not be an issue, but cost may still be. Even though it can provide consistent read/write performance throughout the day for a given number of virtual desktops, the cost per desktop of a flash-based solution can be significantly higher than that of a hard drive-based system.

However, in larger VDI environments (400+ users), flash-based systems are likely the only viable alternative for meeting performance requirements, which can easily exceed 100 IOPS per user. Fortunately, flash-based systems can also produce efficiencies that bring down that cost, in addition to the well-known benefits of using 1/10th the floor space, power and cooling of traditional storage systems.

First, the density of virtual desktops per host can be significantly higher with a flash appliance, and the system is unaffected by the increase in random I/O as virtual machine density grows.

Second, the speed of the storage device compensates for the extra demands that thin provisioning and cloning place on the hypervisor. These space-saving services can now be used without a performance penalty, which means the cost of a storage system with a more powerful controller and expensive built-in data services such as thin provisioning and cloning can be avoided.

Finally, the flash appliance is designed to tap more of the full potential of solid state-based storage. For example, Astute uses a DataPump Engine, a protocol processor designed specifically to accelerate data onto and off of the network and through the appliance to the fast flash storage. This lowers the cost per IOPS compared to other flash-based storage systems.

Most legacy storage systems use traditional networking components and get nowhere near the full potential of flash. In short, the appliance can deliver better performance with the same amount of flash memory, which leads to further increases in virtual machine density and space efficiency because more clones can be made, resulting in a very low cost per VDI user.

Conclusion

VDI benchmark data can be useful, but the test itself must be analyzed. Users should look for tests that focus not only on boot performance but also on performance throughout the day and at the end of the day. If systems with a mix of flash and HDD are used, then enough flash must be purchased to avoid cache misses, since these systems rarely have enough disk spindles to provide adequate secondary performance.

A simpler and better-performing solution may be to use a solid-state appliance like those available from Astute Networks. These allow for consistent, high performance throughout the day at a cost per IOPS that hybrid and traditional storage vendors can't match. Because they enable the hypervisor's built-in capabilities, like thin provisioning, cloning and snapshots, they can also be deployed very cost-effectively.


George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.

