Handling Incident Management in a Virtualized Environment

Five steps lead to tight integration of VM and existing incident response processes

Incident management (IM) is a necessary part of a security program. When effective, it mitigates business impact, identifies weaknesses in controls, and helps fine-tune response processes. Traditional IM approaches, however, are not always effective in a partially or completely virtualized data center. Consequently, some aspects of incident management and response processes require review and adjustment as an increasing number of critical systems move to virtual servers.

For our discussion of IM, virtualization is defined as the abstraction of logical servers from underlying hardware resources. This definition does not cover every form of virtualization, but it is a good starting point.

Why an IM Review is Important
Some organizations are eager to implement virtualization to quickly gain associated cost and flexibility advantages. In my experience, this rush to a virtualized data center assumes that either existing controls are enough or that - for some unexplainable reason - virtualized servers are isolated from common attack vectors and therefore more secure. Neither assumption is true.

Inherent IM Challenges
Because of VM abstraction, servers, their configurations, and their data can be moved from one hardware platform to another. Further, data can travel between virtual machines (VMs) on the same platform without passing through traditional network devices. Although these characteristics provide many of the benefits of virtualization, they also create challenges for security professionals, including packets that bypass IPS or log management solutions and the lack of consistent MAC address references.

In addition to monitoring issues, virtualized data centers provide a fertile environment for attack. For example, compromised servers in a traditional data center provide an attacker with a single corresponding production server for data extraction or for launching further attacks. However, compromise of a hypervisor hands an attacker access to the several servers it manages. Even with strong, traditional IM processes in place, this can result in multiple breaches before detection.

Probability of a VM Attack
Yes, virtualization is a relatively new technology. As such, it has not been a prime target for cybercriminals. However, that is changing. According to IBM X-Force (2010),

... 18.2 percent of all new servers shipped in the fourth quarter of 2009 were virtualized, representing a 20 percent increase over the 15.2 percent shipped in the fourth quarter of 2008 (p. 49).

Although this increase does not directly correlate with the increase in disclosed virtualization vulnerabilities shown in Figure 1, the overall growth in vulnerabilities does track the growth of virtualization as a strategic technology. It also indicates that the growing number of virtualized servers expands the attack surface for attackers who focus on the hypervisor as a high-value breach target.

Figure 1 (IBM X-Force, 2010, p. 50)

The number of virtualization solution vulnerabilities is small compared to the number of vulnerabilities across all applications and operating systems - about one percent of the total. As Figure 2 shows, however, this is still reason for concern: a large majority of reported vulnerabilities allow an attacker to gain full control of a single hardware platform's multi-server environment.

Figure 2: (IBM X-Force, 2010, p. 53)

It is important not to view the 2009 drop in reported vulnerabilities as a trend. One explanation for the drop is the richer target environment in traditional data centers and on desktops, which tends to focus security researchers' attention on those environments. However, as virtualization growth continues and traditional targets harden, virtualization products will garner additional focus from both security experts and criminals.

Finally, the distribution of reported vulnerabilities seems to track closely with market share, as shown in Figure 3. As Microsoft's share of reported desktop vulnerabilities illustrates, vulnerability research tends to focus on market leaders. In the virtualization market, the leader is VMware. Extending this comparison, other vendors' solutions may harbor just as many vulnerabilities that simply remain undiscovered: researchers and criminals see greater ROI when they focus on the largest pool of possible targets.

This brief look at the vulnerabilities inherent in virtualization solutions demonstrates the potential for a high-risk attack: a single breach can result in access to multiple servers and the data they process or store. Consequently, a close look at detection, containment, and response capabilities for the unique needs of VMs is an important step in integrating virtualization into the organization's security program.

Figure 3 (IBM X-Force, 2010, p. 56)

Incident Management Basics
IM in a virtualized data center consists of the same steps used in traditional environments, as shown in Figure 4. Note the cyclical nature of this process. After each attack/incident, or each training event, a root cause analysis and after action review helps identify weaknesses in the organization's response. Remediation tasks placed in an action plan are executed to strengthen the organization's ability to mitigate business impact. For more information on this process, see Incident Management: Managing the Inevitable.

Figure 4: Incident Management Cycle

Most of the steps in this process are planned, designed, implemented, and documented during the prepare step. It is in this phase of incident management that security and infrastructure design teams address the unique challenges associated with virtualization. These challenges go beyond simple documentation changes. In most cases, infrastructure design changes - changes intended to enable quick detection and response - are required.

In the following sections, we examine areas for review in the preparation process. Because recovery is directly affected by virtualization, we also look at additional steps necessary to enable quick and safe recovery.

Unique Response Challenges
The flexibility and productivity gains virtualization brings to an organization can also weaken its ability to respond to attacks or other unwanted device or user behavior. Challenges include:

  • VMs managed by the same hypervisor instance might share information without having to send it out onto the physical network;
  • strict separation between partitions is not implemented by default, requiring design and build documentation changes;
  • VMs are either manually or automatically moved to react to changes in resources or workload;
  • there is limited physical access to the intra-partition pathways from outside the host;
  • direct host memory access capabilities can prevent quickly moving a complete partition from a compromised platform to a recovery host; and
  • it is easy to mix servers with different trust levels on the same host.

Five Steps to Ensuring Effective Response
A few simple steps - not always so simple to implement - will ensure an organization's ability to detect unwanted behavior and effectively respond as virtual servers spread across the enterprise, including:

Step 1: Group VMs according to data classification

Step 2: Ensure monitoring tools see packets internal to VMs managed by the same hypervisor

Step 3: Segment virtual networks

Step 4: Remedy forensics issues

Step 5: Mitigate business impact

Step 1: Group VMs according to data classification.
Virtualization allows IT staff to place servers on any available host. This helps maximize available resources, but it can unnecessarily increase security complexity and costs. For example, VMs processing sensitive data require all controls defined as reasonable and appropriate for confidential data. Less sensitive VMs do not, but unnecessarily spreading a small number of sensitive VMs across multiple hosts, instead of aggregating them on restricted hosts, requires applying all controls to all hosts.

Do not mix VMs processing sensitive data with those that do not. This allows maximum protection while minimizing costs and complexity. It also helps enable adjustments to incident management processes without asking for large budget increases.
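The grouping rule above can be sketched in a few lines. This is a minimal illustration, not a production placement engine; the VM names and classification labels are hypothetical inventory data:

```python
from collections import defaultdict

def group_vms_by_classification(vms):
    """Group VMs by data classification so each group can be pinned
    to its own restricted pool of hosts, keeping sensitive and
    non-sensitive workloads apart.

    `vms` maps VM name -> classification label.
    Returns classification -> sorted list of VM names.
    """
    groups = defaultdict(list)
    for name, classification in vms.items():
        groups[classification].append(name)
    return {level: sorted(names) for level, names in groups.items()}

# Hypothetical inventory export
inventory = {
    "hr-db": "confidential",
    "web-frontend": "public",
    "payroll": "confidential",
    "wiki": "internal",
}
print(group_vms_by_classification(inventory))
```

Each resulting group maps to a host pool carrying only the controls that classification requires.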

Step 2: Ensure monitoring tools see packets internal to VMs managed by the same hypervisor.
Monitoring for anomalous packets passing in and around VMs on the same host is just as important as seeing them pass between physical servers. The problem is the inability of traditional security appliances (e.g., IPS, IDS, and firewalls) to see inside the virtual space. Looking into the pathways between VMs, between the VMs and the hypervisor, and to and from the host operating system is called introspection.

Introspection is possible using some of the tools that come with a hypervisor solution. However, they are not always full implementations. Further, if these tools are not integrated with current monitoring controls, the administrator must manage multiple sets of rules.

Introspection and Detection
First, ask the right questions. Does the solution you currently use, or are about to purchase, allow introspection of all activity within the virtual space? In addition to packets, does it monitor memory and processes to ensure VM and hypervisor integrity? In other words, the virtual space cannot be an introspection-resistant black box.

If complete introspection is not part of the product's capabilities, there are workarounds. For example, critical or high-value breach targets might reside on a host with more than the recommended two NICs (one for partition management and one for VMs to access the physical network). The extra NICs can route data from one VM, out to a monitoring device, and back to the internal target VM, as depicted in Figure 5. (The management NIC is not shown.)

All intra-VM traffic in this Microsoft virtual server example is routed to VLAN 4. Attached to VLAN 4 is a physical IPS used to monitor physical network traffic. Once inspected and approved, packets are returned through VLAN 4 to the virtual switch and routed to the target partition. Although a possible solution, it has a disadvantage.

Figure 5: External Packet Inspection

Packets routed to an external monitoring device add propagation delay to response time. Consequently, be sure this is absolutely necessary before adding it to your design toolkit. In the interest of balance between business productivity and security, you might consider this only for your most sensitive data exchanges. Another alternative is making the case for additional funding so you can purchase one of the growing number of third-party products that add introspection capabilities. Either way, do not ignore internal traffic.

In addition to monitoring, integrate virtual environments into your log management processes. Does your virtualization solution support integration with your security information and event management (SIEM) controls? If not, does it support syslog integration? Whatever it takes, ensure VM, hypervisor, third-party application, and host system logs make it to your aggregation point and your correlation engine.
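If your platform supports plain syslog, forwarding can be as simple as attaching a syslog handler to your collection scripts. A minimal sketch using Python's standard library; the collector hostname `siem.example.org` is a placeholder for your own aggregation point:

```python
import logging
import logging.handlers

def build_vm_logger(name, syslog_host="siem.example.org", port=514):
    """Return a logger that forwards hypervisor/VM events to a central
    syslog collector over UDP. Most SIEMs accept RFC 3164/5424 syslog
    on UDP 514 (or TLS on 6514 for sensitive environments).
    """
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(syslog_host, port))
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

Events emitted through such a logger land at the same aggregation point as your physical-network logs, so the correlation engine sees both.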

For more information about log management, see Guide to Computer Security Log Management, NIST SP 800-92.

Step 3: Segment virtual networks
Also shown in Figure 5 is a segmentation scheme to limit traffic between partitions. Implemented using VLANs configured in a virtual switch and VLAN access control lists (VACLs), this example is one way to help ensure unwanted traffic does not pass from a compromised VM to other VMs on the same host. Further, it allows response teams to quickly isolate one or more compromised systems, preventing enterprise-wide effects.

The reasons for segmentation are no different from those in the physical world. However, virtual server segmentation is often forgotten, even though the physical hosts might be placed in secure network segments. Remember that many controls you implement on the physical network must also be configured in virtual environments, because VMs are by design isolated from controls on your physical network.
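A periodic audit can catch segmentation drift before an incident does. The sketch below flags any VLAN carrying VMs of more than one trust level; both input structures are hypothetical inventory exports:

```python
def find_mixed_vlans(vlan_assignments, trust_levels):
    """Return the sorted list of VLAN ids that carry VMs of more than
    one trust level, violating the segmentation scheme.

    `vlan_assignments` maps VM name -> VLAN id.
    `trust_levels` maps VM name -> trust/classification label.
    """
    seen = {}     # vlan -> first trust level observed on it
    mixed = set()
    for vm, vlan in vlan_assignments.items():
        level = trust_levels[vm]
        if vlan in seen and seen[vlan] != level:
            mixed.add(vlan)
        seen.setdefault(vlan, level)
    return sorted(mixed)
```

Running such a check on every configuration change keeps the virtual switch layout aligned with the documented design.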

Step 4: Remedy forensics issues
Forensics is directly affected by virtualization in at least three areas: time synchronization, hardware addressing, and server seizure.

Time Synchronization
In addition to log management solutions, forensics solutions require time synchronization across the enterprise. Without it, correlation engines miss relationships between events and investigators struggle to reconstruct incidents. Most organizations use a time service - internal, external, or both - to ensure clocks are synchronized across all physical devices.

Virtual servers must synchronize with the same service. However, this is not always automatically configured. Ensure each VM directly synchronizes with the time service or with the physical host. Using the physical host assumes it synchronizes with the time service.
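A simple drift report makes this verifiable. The sketch below flags VMs whose reported clocks drift beyond a tolerance from a reference timestamp; the sample readings are hypothetical data from an inventory sweep:

```python
from datetime import datetime, timedelta

def clock_drift_report(reference, samples, tolerance_s=5):
    """Return the sorted names of VMs whose clocks drift more than
    `tolerance_s` seconds from `reference`.

    `samples` maps VM name -> the datetime that VM reported at the
    moment `reference` was captured.
    """
    limit = timedelta(seconds=tolerance_s)
    return sorted(vm for vm, ts in samples.items()
                  if abs(ts - reference) > limit)
```

VMs appearing in the report either lost their time-service configuration or inherited a bad clock from their host.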

Hardware Addresses
VMs use virtual hardware (MAC) addresses. When a VM moves, its MAC address changes. If these address changes are not tracked and logged, reconstructing a security incident within virtual environments is difficult. See Figure 6.

Figure 6: Logging Challenges (Brandon Gillespie, 2009, Slide 8)

Consequently, logging is not enough; knowing where a VM was located at any point in the past is also necessary. According to Brandon Gillespie (2009), security teams must ensure virtual MAC addresses are tracked, logged, and available for analysis. In addition, log management processes must consider the possibility that a moving VM has left behind logs on multiple hosts.

For an example of how scheduled tracking might be accomplished in an environment without an automated tracking solution, see Tracking a VM in a Nexus 1000v Virtual Network.
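In the absence of an automated solution, even a minimal append-only location log answers the key forensics question: where was this VM, with which MAC, at a given time? A sketch with hypothetical host names and MAC addresses; a real deployment would persist these records to the SIEM:

```python
class VmLocationLog:
    """Append-only log of (timestamp, host, mac) entries per VM for
    incident reconstruction. Timestamps are epoch seconds and entries
    must be recorded in chronological order."""

    def __init__(self):
        self._log = {}  # vm name -> list of (ts, host, mac)

    def record(self, vm, ts, host, mac):
        self._log.setdefault(vm, []).append((ts, host, mac))

    def locate(self, vm, ts):
        """Return the (host, mac) in effect for `vm` at time `ts`,
        or None if the VM had not yet been recorded."""
        best = None
        for entry_ts, host, mac in self._log.get(vm, []):
            if entry_ts <= ts:
                best = (host, mac)
        return best
```

Because a moved VM leaves logs on every host it visited, `locate` also tells responders which hosts' logs to gather for a given time window.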

Seizing a Virtual Server
When a server is compromised or used to commit a crime, it is often necessary to seize it for forensics analysis. Security teams face two challenges when trying to remove a physical server from service: retaining potential evidence in volatile storage and removing a device from a critical business process. Proper planning mitigates the effects of both when seizing a VM.

Evidence retention
Evidence retention is a problem when the investigator wants to retain RAM content. For example, removing power from a server starts the process of mitigating business impact, but it also denies forensic analysis of data, processes, keys, and possible footprints left by an attacker. This is one advantage VMs have over physical servers.

Most virtualization solutions, like VMware's ESXi and Microsoft's Hyper-V, provide snapshot capability. A VMware snapshot is a point-in-time image of a VM, including RAM and the virtual machine disk file (Siebert, 2011). The resulting file can provide an investigator with an encapsulated copy of the server at the time the breach or criminal activity occurred. When placed on a quarantined replica of the original hardware, the recovered VM presents a rich forensics environment.

Another method of both disabling a compromised server and retaining critical forensics data is VM suspension. In VMware, for example, suspending a VM creates a suspended state file (.vmss) representing "...the state of the machine at the time it was suspended, or paused..." (Durick, 2011, Suspend, para. 1). The .vmss file is similar to the hibernation file used on Windows systems. For more information on this topic, see the Durick reference above.

A snapshot or suspension file might not be enough, however. When planning snapshots or other processes for evidence preparation, be sure to collect all files your VM uses while running. In a VMware ESXi 4 update 1 environment, files to seize include those with the following extensions (Durick, 2011):

  • .vmxf
  • .vmx
  • .vmsd
  • .vmdk (snapshot file)
  • vmname-flat.vmdk
  • .log
  • .nvram
  • .vswp

When using Microsoft Server 2008 R2 virtualization, look for the following file extensions (Microsoft TechNet, 2011):

  • .xml
  • .vsv
  • .vhd
  • .avhd (snapshot file)
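The VMware and Hyper-V extension lists above lend themselves to a scripted collection step. A minimal sketch that copies matching files to an evidence directory and records SHA-256 hashes so integrity can be verified later; the paths are examples only:

```python
import hashlib
import shutil
from pathlib import Path

# Extensions to seize, combining the ESXi and Hyper-V lists.
SEIZE_SUFFIXES = {".vmxf", ".vmx", ".vmsd", ".vmdk", ".log", ".nvram",
                  ".vswp", ".xml", ".vsv", ".vhd", ".avhd"}

def seize_vm_files(vm_dir, evidence_dir):
    """Copy a VM's files to an evidence directory and return a
    manifest mapping file name -> SHA-256 digest."""
    vm_dir, evidence_dir = Path(vm_dir), Path(evidence_dir)
    evidence_dir.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for f in sorted(vm_dir.iterdir()):
        if f.is_file() and f.suffix.lower() in SEIZE_SUFFIXES:
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            shutil.copy2(f, evidence_dir / f.name)  # copy2 keeps timestamps
            manifest[f.name] = digest
    return manifest
```

The manifest belongs in the chain-of-custody record; re-hashing the copies in the lab proves nothing changed in transit.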

Server removal
Administrators typically strive to meet four goals when a virtual server is removed from service: 1) contain a breach or malware infestation by removing the affected server from the network; 2) prevent any further damage to, or loss of, information residing on local storage; 3) remove the server to a secure location for forensics analysis; and 4) restore services provided by the VM. Meeting these goals requires planning, testing, and documenting processes.

Removal of a server usually starts with isolating it from the network. As I wrote in Step 3: Segment virtual networks, this is easily accomplished using documented steps to isolate one or more virtual network segments. In many cases, isolating the entire physical host is necessary, and proper physical network design enables this. Isolation protects the rest of the network and shuts down external attack sessions. Another option is to suspend the affected VM.

If quick isolation is not necessary or practical, suspend the VM rather than shutting it down; a shutdown will not preserve volatile storage such as RAM. Whether suspension preserves everything a forensics investigation needs depends on how your virtualization vendor implements this capability and how you configure it.

Both snapshots and suspensions allow preservation of evidence. Seizing the related files and taking them to a secure forensics lab is easily accomplished.

Restoring service, the last step in server seizure, is also an important business continuity planning step for any service-interruption event.

Step 5: Mitigate business impact
The bottom line is that IM is all about mitigating business impact. Modifying detection, containment, and evidence retention processes does not ensure continued operation of processes affected by a compromised server. Rather, this requires quick recovery of the server and its data to a point in time just before the compromise. In addition to traditional backups, a virtualized data center has unique tools to accomplish this.

Immediately after suspending a VM, virtualization technology allows another VM to take its place. Two tools make this possible: images and snapshots. VM images, created by the virtual server creation process, usually exist for every VM in the data center. If regularly patched, they allow quick recovery of a server when recovery of dynamic configurations or data is not necessary. (See Patch archived VMs...) Images, however, do not restore a VM's operational state at a specific point in time.

A better alternative for recovering a VM is a snapshot. Snapshots save enough information to restore a server to a specific point in time. However, this requires regularly taking snapshots rather than waiting for an incident. It also means implementing snapshot management processes.
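One piece of such a management process is a retention policy. A simple sketch that, given snapshot records, identifies which to prune; real tooling would also cap total delta-file size, which this illustration ignores:

```python
def snapshots_to_prune(snapshots, keep=5):
    """Given (name, epoch_ts) snapshot records, return the names of
    all but the `keep` most recent, oldest first."""
    ordered = sorted(snapshots, key=lambda s: s[1])
    if len(ordered) <= keep:
        return []
    return [name for name, _ in ordered[:-keep]]
```

Pruning on a schedule keeps the snapshot chain short, which limits both storage growth and the performance penalty discussed below.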

Snapshots introduce a new set of management challenges; the biggest is the possible performance hit. For example, in VMware environments, post-snapshot reading and writing to disk happens as shown in Figure 7. The VM retrieves data created before the snapshot was taken from the pre-snapshot virtual disk file. Reads and writes for all other data are sent to a delta file created at the time of the snapshot.

Figure 7: VM Snapshot Reads and Writes (Siebert, 2011a)

Microsoft snapshots work similarly. Figure 8 provides a step-by-step look at snapshot creation in a Microsoft Hyper-V environment.

Figure 8: Hyper-V Manager Snapshot Creation (Microsoft TechNet, 2011a)

Performance issues and the need for additional storage are necessary planning topics when considering snapshots. If snapshots are taken regularly, also plan to seize all delta and other snapshot-related files, including:

  • .vmsn (VMware snapshot state file)
  • delta.vmdk (VMware differential disk file)
  • .vmsd (VMware snapshot metadata file)
  • .avhd (Microsoft snapshot file)

Finally, neither suspending a VM nor taking a snapshot is possible if you cannot access the VM's hypervisor. Be sure your management path to the parent partition, via the administration NIC, is truly isolated from the physical network. If you believe the hypervisor itself is compromised, simply isolate the physical platform and all servers it contains, and assume all VMs are compromised, too.

Introduction of virtualized servers requires rethinking incident management processes. Revisiting the prepare step associated with incident response - including asking your vendor the right questions - is the best place to start.

Five steps lead to tight integration of VM and existing incident response processes. They examine and help remedy system, network, and process design challenges associated with VM placement, incident detection and containment, and business process recovery unique to virtualization. Without them, existing response documentation is only effective for the physical environment.


More Stories By Tom Olzak

Tom Olzak is a security researcher for the InfoSec Institute and an IT professional with over 27 years of experience in programming, network engineering, and security. He has an MBA as well as CISSP and MCSE certifications and has written two books, "Just Enough Security" and "Microsoft Virtualization."
