
Handling Incident Management in a Virtualized Environment

Five steps lead to tight integration of VMs into existing incident response processes

Incident management (IM) is a necessary part of a security program. When effective, it mitigates business impact, identifies weaknesses in controls, and helps fine-tune response processes. Traditional IM approaches, however, are not always effective in a partially or completely virtualized data center. Consequently, some aspects of incident management and response processes require review and adjustment as an increasing number of critical systems move to virtual servers.

For our discussion of IM, virtualization is defined as the abstraction of logical servers from underlying hardware resources. This definition does not cover every virtualization scenario, but it is a good starting point.

Why an IM Review is Important
Some organizations are eager to implement virtualization to quickly gain associated cost and flexibility advantages. In my experience, this rush to a virtualized data center assumes that either existing controls are enough or that - for some inexplicable reason - virtualized servers are isolated from common attack vectors and therefore more secure. Neither assumption is true.

Inherent IM Challenges
Because of VM abstraction, servers, their configurations, and their data are subject to being moved from one hardware platform to another. Further, data can travel between virtual machines (VMs) on the same platform without passing through traditional network devices. Although these characteristics provide many of the benefits of virtualization, they also create challenges for security professionals, including packets that bypass IPS or log management solutions and the lack of consistent MAC address references.

In addition to monitoring issues, virtualized data centers provide a fertile environment for attack. For example, a compromised server in a traditional data center provides an attacker with a single corresponding production server for data extraction or for launching further attacks. However, compromise of a hypervisor hands an attacker access to every server it manages. Even with strong, traditional IM processes in place, this can result in multiple breaches before detection.

Probability of a VM Attack
Yes, virtualization is a relatively new technology. As such, it has not been a prime target for cybercriminals. However, that is changing. According to IBM X-Force (2010),

... 18.2 percent of all new servers shipped in the fourth quarter of 2009 were virtualized, representing a 20 percent increase over the 15.2 percent shipped in the fourth quarter of 2008 (p. 49).

Although this increase does not correlate to an increase in disclosed virtualization vulnerabilities, as shown in Figure 1, the overall increase in vulnerabilities does track with the growth of virtualization as a strategic technology. It also indicates that the growing number of virtualized servers expands the attack surface for attackers focusing on the hypervisor as a high-value breach target.

Figure 1 (IBM X-Force, 2010, p. 50)

The number of virtualization solution vulnerabilities is small compared to the number of vulnerabilities across all applications and operating systems - about one percent of the total. As Figure 2 shows, however, this is still reason for concern: a large majority of reported vulnerabilities allow an attacker to gain full control of a single hardware platform's multi-server environment.

Figure 2 (IBM X-Force, 2010, p. 53)

It is important not to view the 2009 drop in reported vulnerabilities as a trend. One explanation for the drop is the richer target environment in traditional data centers and on desktops, which tends to focus security researchers' attention there. However, as virtualization growth continues and traditional targets harden, virtualization products will garner additional focus from both security experts and criminals.

Finally, the distribution of reported vulnerabilities seems to track closely with market share, as shown in Figure 3. As Microsoft's share of reported vulnerabilities on the desktop demonstrates, vulnerability research tends to focus on market leaders. In the virtualization market, the leader is VMware. Extending this comparison, other vendors' solutions may harbor just as many vulnerabilities that simply remain undiscovered; efforts by researchers and criminals show greater ROI when focused on the larger number of possible targets.

This brief look at the vulnerabilities inherent in virtualization solutions demonstrates the potential for a high-risk attack: a single breach can result in access to multiple servers and the data they process or store. Consequently, a close look at detection, containment, and response capabilities for the unique needs of VMs is an important step in integrating virtualization into the organization's security program.

Figure 3 (IBM X-Force, 2010, p. 56)

Incident Management Basics
IM in a virtualized data center consists of the same steps used in traditional environments, as shown in Figure 4. Note the cyclical nature of this process. After each attack/incident, or each training event, a root cause analysis and after action review helps identify weaknesses in the organization's response. Remediation tasks placed in an action plan are executed to strengthen the organization's ability to mitigate business impact. For more information on this process, see Incident Management: Managing the Inevitable.

Figure 4: Incident Management Cycle

Most of the steps in this process are planned, designed, implemented, and documented during the prepare step. It is in this phase of incident management that security and infrastructure design teams address the unique challenges associated with virtualization. These challenges go beyond simple documentation changes. In most cases, infrastructure design changes - changes intended to enable quick detection and response - are required.

In the following sections, we examine areas for review in the preparation process. Because recovery is directly affected by virtualization, we also look at additional steps necessary to enable quick and safe recovery.

Unique Response Challenges
The flexibility and productivity gains virtualization brings to an organization can also weaken its ability to respond to attacks or other unwanted device or user behavior, including:

  • VMs managed by the same hypervisor instance might share information without having to send it out onto the physical network;
  • strict separation between partitions is not implemented by default, requiring design and build documentation changes;
  • VMs are either manually or automatically moved to react to changes in resources or workload;
  • there is limited physical access to the intra-partition pathways from outside the host;
  • direct host memory access capabilities can prevent quickly moving a complete partition from a compromised platform to a recovery host; and
  • it is easy to mix servers with different trust levels on the same host.

Five Steps to Ensuring Effective Response
A few simple steps - not always so simple to implement - will ensure an organization's ability to detect unwanted behavior and effectively respond as virtual servers spread across the enterprise, including:

Step 1: Group VMs according to data classification

Step 2: Ensure monitoring tools see packets internal to VMs managed by the same hypervisor

Step 3: Segment virtual networks

Step 4: Remedy forensics issues

Step 5: Mitigate business impact

Step 1: Group VMs according to data classification.
Virtualization allows IT staff to place servers on any available host. This helps maximize available resources, but it can unnecessarily increase security complexity and costs. For example, VMs processing sensitive data require all controls defined as reasonable and appropriate for confidential data. Less sensitive VMs do not, but unnecessarily spreading a small number of sensitive VMs across multiple hosts, instead of aggregating them on restricted hosts, requires applying all controls to all hosts.

Do not mix VMs processing sensitive data with those that do not. This allows maximum protection while minimizing costs and complexity. It also helps enable adjustments to incident management processes without asking for large budget increases.
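
As an illustration, this placement rule can be enforced with a simple audit script. The following minimal Python sketch flags hosts that mix data classifications; the inventory data and names are hypothetical, and a real version would pull them from your CMDB or hypervisor API:

    from collections import defaultdict

    # Hypothetical inventory: VM -> (host, data classification). A real
    # version would pull this from your CMDB or hypervisor API.
    inventory = {
        "hr-db01":   ("esx-host-1", "confidential"),
        "web-front": ("esx-host-1", "public"),
        "pay-app":   ("esx-host-2", "confidential"),
    }

    def find_mixed_hosts(inventory):
        """Return hosts running VMs of more than one data classification."""
        by_host = defaultdict(set)
        for host, classification in inventory.values():
            by_host[host].add(classification)
        return {h: sorted(c) for h, c in by_host.items() if len(c) > 1}

    for host, levels in find_mixed_hosts(inventory).items():
        print(f"WARNING: {host} mixes classifications: {', '.join(levels)}")

Run on a schedule, a check like this catches classification drift as VMs migrate between hosts.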

Step 2: Ensure monitoring tools see packets internal to VMs managed by the same hypervisor.
Monitoring for anomalous packets passing in and around VMs on the same host is just as important as seeing them passing between physical servers. The problem is the inability of traditional security appliances (e.g., IPS, IDS, firewalls) to see inside the virtual space. This ability to look into the pathways between VMs, between the VMs and the hypervisor, and to and from the host operating system is called introspection.

Introspection is possible using some of the tools that come with a hypervisor solution. However, they are not always full implementations. Further, if not integrated with current monitoring controls, the administrator now has multiple sets of rules to manage.

Introspection and Detection
First, ask the right questions. Does the solution you currently use, or are about to purchase, allow introspection of all activity within the virtual space? In addition to packets, does it monitor memory and processes to ensure VM and hypervisor integrity? In other words, the virtual space cannot be an introspection-resistant black box.

If complete introspection is not part of the product's capabilities, there are workarounds. For example, critical or high-value breach targets might reside on a host with more than the recommended two NICs (one for partition management and one for VMs to access the physical network). The extra NICs can route data from one VM, out to a monitoring device, and back to the internal target VM, as depicted in Figure 5. (The management NIC is not shown.)

All inter-VM traffic in this Microsoft virtual server example is routed to VLAN 4. Attached to VLAN 4 is a physical IPS used to monitor physical network traffic. Once inspected and approved, packets are then returned through VLAN 4 to the virtual switch and routed to the target partition. Although a possible solution, it has a disadvantage.

Figure 5: External Packet Inspection

Packets routed to an external monitoring device add propagation delay to response time. Consequently, be sure this is absolutely necessary before adding it to your design toolkit. In the interest of balance between business productivity and security, you might consider this only for your most sensitive data exchanges. Another alternative is making the case for additional funding so you can purchase one of the growing number of third-party products that add introspection capabilities. Either way, do not ignore internal traffic.

In addition to monitoring, integrate virtual environments into your log management processes. Does your virtualization solution support integration with your security information and event management (SIEM) controls? If not, does it support syslog integration? Whatever it takes, ensure VM, hypervisor, third-party application, and host system logs make it to your aggregation point and your correlation engine.
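
As a minimal illustration of the syslog path, the Python sketch below forwards a VM event to a central collector using only the standard library. The collector hostname and port are placeholders; in production you would point your hypervisor's native syslog support at the same aggregation point rather than scripting it:

    import logging
    import logging.handlers

    # Placeholder collector address; substitute your SIEM/syslog host.
    handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
    handler.setFormatter(logging.Formatter("hypervisor-audit: %(message)s"))

    log = logging.getLogger("vm-events")
    log.setLevel(logging.INFO)
    log.addHandler(handler)

    # Example event your SIEM correlation rules should see.
    log.info("VM pay-app moved from esx-host-2 to esx-host-4")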

For more information about log management, see Guide to Computer Security Log Management, NIST SP 800-92.

Step 3: Segment virtual networks
Also shown in Figure 5 is a segmentation scheme to limit traffic between partitions. Implemented using VLANs configured in a virtual switch and VLAN access control lists (VACLs), this example is one way to help ensure unwanted traffic does not pass from a compromised VM to other VMs on the same host. Further, it allows response teams to quickly isolate one or more compromised systems, preventing enterprise-wide effects.

The reasons for segmentation are no different from those in the physical world. However, virtual server segmentation is often forgotten, even though the physical hosts might be placed in secure network segments. Remember that VMs are by design isolated from controls on your physical network; many controls you implement on the physical network must therefore be configured again in the virtual environment.
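
To make the response-team benefit concrete, here is a small hypothetical Python sketch that, given a compromised VM, reports which VLAN to isolate and which peer VMs share it - the likely blast radius. The data structures are placeholders for whatever documents your virtual switch configuration:

    # Hypothetical virtual-switch documentation: VLAN -> member VMs.
    vlan_members = {
        "VLAN2": ["hr-db01", "pay-app"],
        "VLAN3": ["web-front", "dev-test"],
    }

    def isolation_plan(compromised_vm):
        """Return the VLAN to isolate and the peers sharing it with the VM."""
        for vlan, members in vlan_members.items():
            if compromised_vm in members:
                peers = [vm for vm in members if vm != compromised_vm]
                return vlan, peers
        raise LookupError(f"{compromised_vm} not found in any documented VLAN")

    vlan, peers = isolation_plan("pay-app")
    print(f"Isolate {vlan}; also inspect peers: {peers or 'none'}")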

Step 4: Remedy forensics issues
Forensics is directly affected by virtualization in at least three areas: time synchronization, hardware addressing, and server seizure.

Time Synchronization
In addition to log management solutions, forensics solutions require time synchronization across the enterprise. Without it, correlation engines miss relationships between events and investigators struggle to reconstruct incidents. Most organizations use a time service - internal, external, or both - to ensure consistent time across all physical devices.

Virtual servers must synchronize with the same service. However, this is not always automatically configured. Ensure each VM directly synchronizes with the time service or with the physical host. Using the physical host assumes it synchronizes with the time service.
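
A scheduled drift check is one way to verify this. The sketch below uses the third-party ntplib package (an assumption on my part - any NTP client library would do); the server name and tolerance are placeholders for your own time service and policy:

    import ntplib  # third-party: pip install ntplib

    MAX_DRIFT_SECONDS = 1.0  # tolerance for event correlation; tune to policy

    client = ntplib.NTPClient()
    # Placeholder server: use the same time service as your physical hosts.
    response = client.request("ntp.example.com", version=3)

    if abs(response.offset) > MAX_DRIFT_SECONDS:
        print(f"Clock drift {response.offset:.3f}s exceeds tolerance - investigate")
    else:
        print(f"Clock within tolerance (offset {response.offset:.3f}s)")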

Hardware Addresses
VMs use virtual hardware (MAC) addresses. When a VM moves, its MAC address changes. If these address changes are not tracked and logged, reconstructing a security incident within virtual environments is difficult. See Figure 6.

Figure 6: Logging Challenges (Brandon Gillespie, 2009, Slide 8)

Consequently, logging is not enough; knowing where a VM was located at any point in the past is also necessary. According to Brandon Gillespie (2009), security teams must ensure virtual MAC addresses are tracked, logged, and available for analysis. In addition, log management processes must consider the possibility that a moving VM has left behind logs on multiple hosts.

For an example of how scheduled tracking might be accomplished in an environment without an automated tracking solution, see Tracking a VM in a Nexus 1000v Virtual Network.
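
In the absence of an automated solution, even a cron-driven script that appends timestamped VM-to-host-to-MAC records gives investigators a location history to query. A minimal sketch, with a hypothetical get_vm_inventory() standing in for your hypervisor API or CMDB query:

    import csv
    import datetime

    def get_vm_inventory():
        """Hypothetical stand-in for a hypervisor API or CMDB query."""
        return [
            {"vm": "pay-app", "host": "esx-host-2", "mac": "00:50:56:9a:12:34"},
        ]

    def record_locations(path="vm_location_history.csv"):
        """Append one timestamped VM -> host -> MAC row per VM, each run."""
        now = datetime.datetime.now(datetime.timezone.utc).isoformat()
        with open(path, "a", newline="") as fh:
            writer = csv.writer(fh)
            for entry in get_vm_inventory():
                writer.writerow([now, entry["vm"], entry["host"], entry["mac"]])

    if __name__ == "__main__":
        record_locations()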

Seizing a Virtual Server
When a server is compromised or used to commit a crime, it is often necessary to seize it for forensics analysis. Security teams often face two challenges when trying to remove a physical server from service: retaining potential evidence in volatile storage and removing a device from a critical business process. Proper planning mitigates the effects of both when seizing a VM.

Evidence retention
Evidence retention is a problem when the investigator wants to retain RAM content. For example, removing power from a server starts the process of mitigating business impact, but it also denies forensic analysis of data, processes, keys, and possible footprints left by an attacker. Preserving that volatile content is one area where VMs have an advantage over physical servers.

Most virtualization solutions, like VMware's ESXi and Microsoft's Hyper-V, provide snapshot capability. A VMware snapshot is a point-in-time image of a VM, including RAM and the virtual machine disk file (Siebert, 2011). The resulting file can provide an investigator with an encapsulated copy of the server at the time the breach or criminal activity occurred. When placed on a quarantined replica of the original hardware, the recovered VM presents a rich forensics environment.

Another method of both disabling a compromised server and retaining critical forensics data is VM suspension. In VMware, for example, suspending a VM creates a suspended state file (.vmss) representing "...the state of the machine at the time it was suspended, or paused..." (Durick, 2011, Suspend, para. 1). The .vmss file is similar to the hibernation file used on Windows systems. For more information on this topic, see the Durick reference above.
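
To show what these two actions look like in practice, here is a minimal sketch using VMware's pyVmomi Python bindings - an assumption, since your environment may expose the same operations through PowerCLI or the vSphere client instead. Host, credentials, and VM name are placeholders, and error handling is omitted:

    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask

    # Lab-only: skip certificate validation; validate properly in production.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="ir-team",
                      pwd="********", sslContext=ctx)
    try:
        # Locate the compromised VM by DNS name (placeholder).
        vm = si.content.searchIndex.FindByDnsName(None, "pay-app.example.com", True)

        # 1. Point-in-time image including RAM (memory=True) for later analysis.
        WaitForTask(vm.CreateSnapshot_Task(
            name="ir-incident-evidence",
            description="Pre-seizure evidence snapshot",
            memory=True, quiesce=False))

        # 2. Suspend the VM, producing the .vmss state file described above.
        WaitForTask(vm.SuspendVM_Task())
    finally:
        Disconnect(si)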

A snapshot or suspension file might not be enough, however. When planning snapshots or other processes for evidence preparation, be sure to collect all files your VM uses while running. In a VMware ESXi 4 update 1 environment, files to seize include those with the following extensions (Durick, 2011):

  • .vmxf
  • .vmx
  • .vmsd
  • .vmdk (snapshot file)
  • vmname-flat.vmdk
  • .log
  • .nvram
  • .vswp

When using Microsoft Server 2008 R2 virtualization, look for the following file extensions (Microsoft TechNet, 2011); a collection sketch follows the list:

  • .xml
  • .vsv
  • .vhd
  • .avhd (snapshot file)
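
Whichever platform you run, collection should be scripted and repeatable, and each file should be hashed for chain-of-custody records. A minimal Python sketch follows; the paths are placeholders, and the extension set shown is the VMware list above - substitute the Hyper-V extensions for Microsoft hosts:

    import hashlib
    import shutil
    from pathlib import Path

    # Extensions to seize - the VMware ESXi list above; swap in the Hyper-V
    # extensions (.xml, .vsv, .vhd, .avhd) for Microsoft hosts.
    EXTENSIONS = {".vmxf", ".vmx", ".vmsd", ".vmdk", ".log", ".nvram", ".vswp"}

    def sha256_of(path, chunk_size=1 << 20):
        """Hash large files in chunks to avoid loading multi-GB disks into RAM."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            while block := fh.read(chunk_size):
                digest.update(block)
        return digest.hexdigest()

    def seize_vm_files(vm_dir, evidence_dir):
        """Copy matching files to evidence storage, printing each file's hash."""
        vm_dir, evidence_dir = Path(vm_dir), Path(evidence_dir)
        evidence_dir.mkdir(parents=True, exist_ok=True)
        for f in sorted(vm_dir.iterdir()):
            if f.suffix.lower() in EXTENSIONS:
                shutil.copy2(f, evidence_dir / f.name)
                print(f"{f.name}  sha256={sha256_of(f)}")

    # Placeholder paths: the seized VM's datastore folder and evidence store.
    seize_vm_files("/vmfs/volumes/datastore1/pay-app", "/mnt/evidence/incident-042")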

Server removal
Administrators typically strive to meet four goals when a virtual server is removed from service: 1) contain a breach or malware infestation by removing the affected server from the network; 2) prevent any further damage to, or loss of, information residing on local storage; 3) remove the server to a secure location for forensics analysis; and 4) restore services provided by the VM. Meeting these goals requires planning, testing, and documenting processes.

Removal of a server usually starts with isolating it from the network. As I wrote in Step 3: Segment virtual networks, this is easily accomplished using documented steps to isolate one or more virtual network segments. In many cases, isolating the entire physical host is necessary, and proper physical network design enables this. Isolation protects the rest of the network and shuts down external attack sessions. Another option is to suspend the affected VM.

If quick isolation is not necessary or practical, suspend the VM; unlike shutting it down, suspension can preserve volatile storage, such as RAM. Whether suspension fully supports forensics investigations depends on how your virtualization vendor implements this capability and how you configure it.

Both snapshots and suspensions allow preservation of evidence. Seizing the related files and taking them to a secure forensics lab is easily accomplished.

Restoring service, the last step in server seizure, is also an important business continuity planning step for any service-interruption event.

Step 5: Mitigate business impact
The bottom line is that IM is all about mitigating business impact. Modifying detection, containment, and evidence retention processes does not ensure continued operation of processes affected by a compromised server. Rather, this requires quick recovery of the server and its data to a point in time just before the compromise. In addition to traditional backups, a virtualized data center has unique tools to accomplish this.

Immediately after suspending a VM, virtualization technology allows another VM to take its place. Two tools make this possible: images and snapshots. VM images, created by the virtual server creation process, usually exist for every VM in the data center. If regularly patched, they allow quick recovery of a server when recovery of dynamic configurations or data is not necessary. (See Patch archived VMs...) Images, however, do not restore a VM's operational state at a specific point in time.

A better alternative is recovering a VM using a snapshot. Snapshots save enough information to restore a server to a specific point in time. However, this requires regularly taking snapshots rather than waiting for an incident. It also means implementing snapshot management processes.
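
As a sketch of this recovery step (pyVmomi again, with vm a VirtualMachine object located as in the earlier sketch; method names are the vSphere API's, not this article's), reverting a replacement copy to its most recent snapshot looks like this:

    from pyVim.task import WaitForTask

    def restore_from_snapshot(vm):
        """Revert a (replacement) VM to its newest snapshot and resume service."""
        if vm.snapshot is None:
            raise RuntimeError(f"{vm.name}: no snapshots - fall back to image or backup")
        WaitForTask(vm.RevertToCurrentSnapshot_Task())
        # A memory-inclusive snapshot resumes running; otherwise power on.
        if vm.runtime.powerState != "poweredOn":
            WaitForTask(vm.PowerOnVM_Task())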

Snapshots introduce a new set of management challenges; the biggest is the potential performance hit. For example, in VMware environments, post-snapshot reading and writing to disk happens as shown in Figure 7. The VM retrieves data created before the snapshot was taken from the pre-snapshot virtual disk file. Reads and writes for all other data are sent to a delta file created at the time of the snapshot.

Figure 7: VM Snapshot Reads and Writes (Siebert, 2011a)

Microsoft snapshots work similarly. Figure 8 provides a step-by-step look at snapshot creation in a Microsoft Hyper-V environment.

Figure 8: Hyper-V Manager Snapshot Creation (Microsoft TechNet, 2011a)

Performance issues, and the need for additional storage, are necessary planning topics when considering snapshots. Also plan to seize all delta- and other snapshot-related files if snapshots are taken regularly, including:

  • .vmsn (VMware snapshot state file)
  • delta.vmdk (VMware differential disk file)
  • .vmsd (VMware snapshot metadata file)
  • .avhd (Microsoft snapshot file)

Finally, neither suspending a VM nor taking a snapshot is possible if you cannot access the VM's hypervisor. Be sure your management path to the parent partition, via the administration NIC, is truly isolated from the physical network. However, if you believe the hypervisor is compromised, simply isolate the physical platform and all servers it contains; assume all VMs are compromised, too.

Introduction of virtualized servers requires rethinking incident management processes. Revisiting the prepare step associated with incident response - including asking your vendor the right questions - is the best place to start.

Five steps lead to tight integration of VMs into existing incident response processes. They examine and help remedy system, network, and process design challenges associated with VM placement, incident detection and containment, and business process recovery unique to virtualization. Without them, existing response documentation is only effective for the physical environment.


More Stories By Tom Olzak

Tom Olzak is a security researcher for the InfoSec Institute and an IT professional with over 27 years of experience in programming, network engineering, and security. He has an MBA as well as CISSP and MCSE certifications and has written two books, "Just Enough Security" and "Microsoft Virtualization."
