Performance Tuning Windows Server 2008 R2 Hyper-V: Hardware Selection

The hardware considerations for your Hyper-V servers are not all that different from non-virtualized servers

After our previous article “XenDesktop on Microsoft Server 2008 R2 Hyper-V: Best Practices”, we decided to expand on this series and post some additional articles that are more specific to Hyper-V itself, in particular tuning the performance of Windows Server 2008 R2 Hyper-V. We’ll discuss processor, memory, disk I/O, and network I/O tuning tips, but first let’s look at the type of hardware you need to ensure Hyper-V performs well when running multiple virtual machines. After all, every virtual machine that runs on a Hyper-V system shares the same hardware.

The hardware considerations for your Hyper-V servers are not all that different from those for non-virtualized servers, but a Hyper-V server obviously has higher CPU usage, requires more memory, and needs more I/O bandwidth, because the same hardware services multiple virtual systems.

Processor Selection

The first consideration is that Windows Server 2008 R2 requires 64-bit processors. In a non-virtual setup, performance benefits more from a processor with a higher frequency than from adding cores. For your Hyper-V environment, however, we recommend selecting multiple processors with multiple cores at the highest frequency available.

To gain additional efficiency, use processors that support Second Level Address Translation (SLAT) technologies (for example, Intel EPT or AMD NPT). Other features to look for when selecting a processor are support for deep idle states and core parking.

Hyper-V benefits from processors with large caches, especially when the ratio of virtual CPUs to logical CPUs is high in your VM configuration. If you have to choose between a larger cache and a higher CPU frequency, go with the larger cache.
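As a quick sanity check on a candidate or existing host, most of these processor details can be read through WMI. Below is a minimal sketch using Python with the third-party wmi package (our choice of tooling here is an assumption; Sysinternals Coreinfo or PowerShell work just as well). Note that the SLAT-related property is not exposed on every Windows version, so treat its absence as “unknown” rather than “unsupported”.

```python
# Minimal sketch: inspect the host's processors via WMI.
# Assumes the third-party "wmi" package (pip install wmi), run on the host itself.
import wmi

c = wmi.WMI()

for cpu in c.Win32_Processor():
    print(cpu.Name)
    print("  Cores:", cpu.NumberOfCores,
          "| Logical processors:", cpu.NumberOfLogicalProcessors)
    print("  Max clock speed (MHz):", cpu.MaxClockSpeed)
    print("  L3 cache (KB):", cpu.L3CacheSize)

    # SLAT support is only surfaced as a Win32_Processor property on newer
    # Windows versions; on Windows Server 2008 R2 it may be missing, in which
    # case confirm EPT/NPT support with a tool such as Coreinfo instead.
    slat = getattr(cpu, "SecondLevelAddressTranslationExtensions", None)
    print("  SLAT (if reported):", slat)
```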

Memory Selection

The Hyper-V server itself needs enough memory for the root and child partitions. Hyper-V allocates memory for the child partitions first. You should provide each child partition with enough RAM to handle the load of that specific VM. The root partition needs enough additional memory so it can efficiently handle I/O for the virtual machines and other tasks such as VM snapshots. We’ll discuss memory sizing in more detail in our follow-up article “Performance Tuning Microsoft Server 2008 R2 Hyper-V: Memory”.
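For initial hardware sizing, a back-of-the-envelope calculation is often enough. The sketch below assumes a commonly cited rule of thumb for 2008 R2 era Hyper-V (roughly 32 MB of overhead for a VM’s first GB of RAM plus 8 MB per additional GB, and about 512 MB reserved for the root partition); treat these numbers as assumptions for illustration, not a specification, and see the memory article for the details.

```python
# Back-of-the-envelope host memory estimate for a set of VMs.
# The overhead figures below are rule-of-thumb assumptions, not a spec.
def host_memory_estimate_mb(vm_ram_gb_list, root_reserve_mb=512):
    total_mb = root_reserve_mb
    for vm_gb in vm_ram_gb_list:
        overhead_mb = 32 + 8 * max(vm_gb - 1, 0)   # per-VM virtualization overhead
        total_mb += vm_gb * 1024 + overhead_mb     # the VM's RAM plus its overhead
    return total_mb

# Example: eight VMs with 4 GB of RAM each.
print(host_memory_estimate_mb([4] * 8), "MB")      # -> 33728 MB (roughly 33 GB)
```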

Network Adapter

If you expect the virtual machines in your environment to generate heavy network traffic, install multiple network adapters (or multiport adapters) in your Hyper-V system(s). This way, network traffic is distributed across the adapters, resulting in better overall performance.

Hyper-V supports various hardware offloads such as Large Send Offload (LSOv1), TCPv4 checksum offload, TCP Chimney, and Virtual Machine Queue (VMQ). Selecting network adapters that support these hardware offloads reduces the CPU usage of network I/O.
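Before deciding whether additional or offload-capable adapters are needed, it helps to inventory what the host already has. A minimal sketch, again assuming the Python wmi package; the offload settings themselves (LSO, checksum offload, Chimney, VMQ) live in each adapter driver’s advanced properties and are not exposed by this generic inventory class.

```python
# Minimal sketch: list the physical network adapters in the host via WMI.
# Assumes the third-party "wmi" package; offload capabilities are configured
# in the NIC driver's advanced properties, not in this generic class.
import wmi

c = wmi.WMI()
for nic in c.Win32_NetworkAdapter(PhysicalAdapter=True):
    speed_mbit = int(nic.Speed) // 1_000_000 if nic.Speed else None  # bits/s -> Mbit/s
    state = "connected" if nic.NetEnabled else "disconnected"
    print(nic.Name, "-", state, "-", speed_mbit, "Mbit/s")
```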

Disk Storage Selection

Disk selection is similar to choosing hard drives for a non-virtual system. Higher rotational speeds are better and a larger set of smaller capacity drives performs better than a smaller number of high capacity drives. The hardware should have sufficient I/O bandwidth and capacity to meet current and future needs of the VMs that the physical server hosts. Consider these requirements when you select storage controllers and disks and choose the RAID configuration. Placing VMs with highly disk-intensive workloads on different physical disks will likely improve overall performance. For example, if four VMs share a single disk and actively use it, each VM can yield only 25 percent of the bandwidth of that disk. We’ll discuss more details in our upcoming article “Performance Tuning Microsoft Server 2008 R2 Hyper-V: Storage”.
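The “four VMs on one disk” example generalizes into a simple capacity-planning step: add up the expected I/O of the VMs you plan to co-locate and compare it with what the spindles can deliver. A rough sketch follows; the per-disk IOPS figures are ballpark assumptions for illustration only, so measure your own drives and account for RAID write penalties.

```python
# Rough spindle-count estimate for a group of VMs sharing an array.
# Per-disk IOPS values are illustrative assumptions, not measured numbers.
import math

TYPICAL_IOPS = {"7200rpm": 80, "10krpm": 130, "15krpm": 180}

def disks_needed(vm_iops, disk_type="15krpm", headroom=0.7):
    total = sum(vm_iops)
    usable_per_disk = TYPICAL_IOPS[disk_type] * headroom  # keep ~30% headroom
    return math.ceil(total / usable_per_disk)

# Example: four disk-intensive VMs at 150 IOPS each on 15K RPM drives.
print(disks_needed([150, 150, 150, 150]))  # -> 5 disks
```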

Installation Options

Microsoft recommends that you use the Server Core installation option in the root partition. This leaves additional memory available for the virtual machines, and even though the Core option only offers a command prompt, you can use WMI to manage the server remotely.
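As an illustration of that remote WMI management, the sketch below connects to a Core installation’s Hyper-V WMI provider (the root\virtualization namespace on 2008 R2) and lists the systems it knows about. The host name and credentials are placeholders, and the Python wmi package is an assumption on our part; any WMI client, including PowerShell, can issue the same query.

```python
# Minimal sketch: manage a Server Core Hyper-V host remotely over WMI.
# Assumes the third-party "wmi" package; "hyperv01" and the credentials
# below are placeholders. On Windows Server 2008 R2 the Hyper-V provider
# lives in the root\virtualization namespace.
import wmi

conn = wmi.WMI(computer="hyperv01",
               namespace=r"root\virtualization",
               user=r"CONTOSO\admin",
               password="...")

for system in conn.Msvm_ComputerSystem():
    # The host itself appears as one Msvm_ComputerSystem instance alongside
    # the virtual machines; Caption distinguishes the two.
    print(system.Caption, "-", system.ElementName)
```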

It is also recommended that the root partition be dedicated to the virtualization server role. Adding other roles to the root partition can affect the performance of the virtualization server.

In our next article we’ll go into processor performance tuning of Windows Server 2008 R2 Hyper-V.
