Handling Incident Management in a Virtualized Environment

Five steps lead to tight integration of VM and existing incident response processes

Incident management (IM) is a necessary part of a security program. When effective, it mitigates business impact, identifies weaknesses in controls, and helps fine-tune response processes. Traditional IM approaches, however, are not always effective in a partially or completely virtualized data center. Consequently, some aspects of incident management and response processes require review and adjustment as an increasing number of critical systems move to virtual servers.

For our discussion of IM, virtualization is defined as the abstraction of logical servers from underlying hardware resources. This definition does not capture every form of virtualization, but it is a good starting point.

Why an IM Review is Important
Some organizations are eager to implement virtualization to quickly gain the associated cost and flexibility advantages. In my experience, this rush to a virtualized data center assumes either that existing controls are enough or that - for some inexplicable reason - virtualized servers are isolated from common attack vectors and are therefore more secure. Neither assumption is true.

Inherent IM Challenges
Because of VM abstraction, servers, their configurations, and their data can be moved from one hardware platform to another. Further, data can travel between virtual machines (VMs) on the same platform without passing through traditional network devices. Although these characteristics provide many of the benefits of virtualization, they also create challenges for security professionals, including packets that bypass IPS or log management solutions and the lack of consistent MAC address references.

In addition to monitoring issues, virtualized data centers provide a fertile environment for attack. For example, compromising a server in a traditional data center gives an attacker a single production server from which to extract data or launch further attacks. Compromising a hypervisor, however, hands an attacker access to every server it manages. Even with strong, traditional IM processes in place, this can result in multiple breaches before detection.

Probability of a VM Attack
Yes, virtualization is a relatively new technology. As such, it has not been a prime target for cybercriminals. However, that is changing. According to IBM X-Force (2010),

... 18.2 percent of all new servers shipped in the fourth quarter of 2009 were virtualized, representing a 20 percent increase over the 15.2 percent shipped in the fourth quarter of 2008 (p. 49).

Although this increase does not correlate directly with an increase in disclosed virtualization vulnerabilities, as shown in Figure 1, the overall rise in vulnerabilities does track the growth of virtualization as a strategic technology. It also indicates that the growing number of virtualized servers increases the attack surface for attackers focusing on the hypervisor as a high-value breach target.

Figure 1 (IBM X-Force, 2010, p. 50)

The number of virtualization solution vulnerabilities is small compared to the number of vulnerabilities across all applications and operating systems - about one percent of the total. As Figure 2 shows, however, this is still reason for concern: a large majority of reported vulnerabilities allow an attacker to gain full control of a single hardware platform's multi-server environment.

Figure 2 (IBM X-Force, 2010, p. 53)

It is important not to view the 2009 drop in reported vulnerabilities as a trend. One explanation for the drop is the richer target environment in traditional data centers and on desktops, which tends to focus security researchers' attention on those environments. However, as virtualization growth continues and traditional targets harden, virtualization products will garner additional attention from both security experts and criminals.

Finally, the distribution of reported vulnerabilities seems to track closely with market share, as shown in Figure 3. As Microsoft's share of reported vulnerabilities on the desktop illustrates, vulnerability research tends to focus on market leaders. In the virtualization market, the leader is VMware. Extending this comparison, other vendors' solutions may contain just as many vulnerabilities; they simply have not yet been discovered. Efforts by researchers and criminals show greater ROI when focused on the larger number of possible targets.

This brief look at the vulnerabilities inherent in virtualization solutions demonstrates the potential for a high-risk attack: a single breach can result in access to multiple servers and the data they process or store. Consequently, a close look at detection, containment, and response capabilities for the unique needs of VMs is an important step in integrating virtualization into the organization's security program.

Figure 3 (IBM X-Force, 2010, p. 56)

Incident Management Basics
IM in a virtualized data center consists of the same steps used in traditional environments, as shown in Figure 4. Note the cyclical nature of this process. After each attack/incident, or each training event, a root cause analysis and after-action review help identify weaknesses in the organization's response. Remediation tasks placed in an action plan are executed to strengthen the organization's ability to mitigate business impact. For more information on this process, see Incident Management: Managing the Inevitable.

Figure 4: Incident Management Cycle

Most of the steps in this process are planned, designed, implemented, and documented during the prepare step. It is in this phase of incident management that security and infrastructure design teams address the unique challenges associated with virtualization. These challenges go beyond simple documentation changes. In most cases, infrastructure design changes - changes intended to enable quick detection and response - are required.

In the following sections, we examine areas for review in the preparation process. Because recovery is directly affected by virtualization, we also look at additional steps necessary to enable quick and safe recovery.

Unique Response Challenges
The flexibility and productivity gains virtualization brings to an organization can also weaken its ability to respond to attacks or other unwanted device or user behavior. Challenges include:

  • VMs managed by the same hypervisor instance might share information without having to send it out onto the physical network;
  • strict separation between partitions is not implemented by default, requiring design and build documentation changes;
  • VMs are either manually or automatically moved to react to changes in resources or workload;
  • there is limited physical access to the intra-partition pathways from outside the host;
  • direct host memory access capabilities can prevent quickly moving a complete partition from a compromised platform to a recovery host; and
  • it is easy to mix servers with different trust levels on the same host.

Five Steps to Ensuring Effective Response
A few simple steps - not always so simple to implement - will ensure an organization's ability to detect unwanted behavior and respond effectively as virtual servers spread across the enterprise:

Step 1: Group VMs according to data classification

Step 2: Ensure monitoring tools see packets internal to VMs managed by the same hypervisor

Step 3: Segment virtual networks

Step 4: Remedy forensics issues

Step 5: Mitigate business impact

Step 1: Group VMs according to data classification.
Virtualization allows IT staff to place servers on any available host. This helps maximize available resources, but it can unnecessarily increase security complexity and costs. For example, VMs processing sensitive data require all controls defined as reasonable and appropriate for confidential data. Less sensitive VMs do not, but unnecessarily spreading a small number of sensitive VMs across multiple hosts, instead of aggregating them on restricted hosts, requires applying all controls to all hosts.

Do not mix VMs processing sensitive data with those that do not. This allows maximum protection while minimizing costs and complexity. It also helps enable adjustments to incident management processes without asking for large budget increases.
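
To make this check repeatable, a short audit script can flag hosts that mix classifications. The following is a minimal sketch in Python; the VM names, host names, and classification labels are illustrative assumptions, and in practice the placement data would come from your hypervisor inventory or CMDB.

    from collections import defaultdict

    # Hypothetical placement data: VM name -> (host, data classification).
    # In production this would come from the hypervisor inventory or CMDB.
    placements = {
        "hr-db01":   ("esx-host-03", "confidential"),
        "web-fe01":  ("esx-host-03", "public"),
        "erp-app01": ("esx-host-07", "confidential"),
        "test-vm01": ("esx-host-07", "internal"),
    }

    by_host = defaultdict(set)
    for vm, (host, classification) in placements.items():
        by_host[host].add(classification)

    for host, classes in sorted(by_host.items()):
        if len(classes) > 1:
            print(f"REVIEW {host}: mixed classifications {sorted(classes)}")
        else:
            print(f"OK     {host}: {classes.pop()}")

Hosts flagged for review are candidates for VM moves, so that sensitive workloads end up consolidated on a restricted set of hosts.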

Step 2: Ensure monitoring tools see packets internal to VMs managed by the same hypervisor.
Monitoring for anomalous packets passing in and around VMs on the same host is just as important as seeing them passing between physical servers. The problem is the inability of traditional security appliances (e.g., IPS, IDS, firewalls) to see inside the virtual space. This ability to look into the pathways between VMs, between the VMs and the hypervisor, and to and from the host operating system is called introspection.

Introspection is possible using some of the tools that come with a hypervisor solution. However, they are not always full implementations. Further, if not integrated with current monitoring controls, the administrator now has multiple sets of rules to manage.

Introspection and Detection
First, ask the right questions. Does the solution you currently use, or are about to purchase, allow introspection of all activity within the virtual space? In addition to packets, does it monitor memory and processes to ensure VM and hypervisor integrity? In other words, the virtual space cannot be an introspection-resistant black box.

If complete introspection is not part of the product's capabilities, there are workarounds. For example, critical or high-value breach targets might reside on a host with more than the recommended two NICs (one for partition management and one for VMs to access the physical network). The extra NICs can route data from one VM, out to a monitoring device, and back to the internal target VM, as depicted in Figure 5. (The management NIC is not shown.)

All intra-VM traffic in this Microsoft Virtual Server example is routed to VLAN 4. Attached to VLAN 4 is a physical IPS used to monitor physical network traffic. Once inspected and approved, packets are returned through VLAN 4 to the virtual switch and routed to the target partition. Although a possible solution, it has a disadvantage.

Figure 5: External Packet Inspection

Packets routed to an external monitoring device add propagation delay to response time. Consequently, be sure this is absolutely necessary before adding it to your design toolkit. To balance business productivity against security, you might reserve this approach for your most sensitive data exchanges. Another alternative is making the case for additional funding to purchase one of the growing number of third-party products that add introspection capabilities. Either way, do not ignore internal traffic.

Logging
In addition to monitoring, integrate virtual environments into your log management processes. Does your virtualization solution support integration with your security information and event management (SIEM) controls? If not, does it support syslog integration? Whatever it takes, ensure VM, hypervisor, third-party application, and host system logs make it to your aggregation point and your correlation engine.
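
As one illustration, most SIEM and log aggregation products accept syslog, and a small forwarding agent can bridge sources that lack native integration. The sketch below uses Python's standard syslog handler; the collector address, facility, and event text are assumptions for illustration only.

    import logging
    import logging.handlers

    # Forward events to the central syslog collector feeding the SIEM.
    # The collector address, facility, and messages are illustrative.
    syslog = logging.handlers.SysLogHandler(
        address=("siem-collector.example.local", 514),
        facility=logging.handlers.SysLogHandler.LOG_LOCAL3,
    )
    syslog.setFormatter(logging.Formatter("hyperv-host01 vmsec: %(levelname)s %(message)s"))

    log = logging.getLogger("vm-incident")
    log.setLevel(logging.INFO)
    log.addHandler(syslog)

    # Example events a collection agent might emit
    log.info("VM 'hr-db01' migrated from host esx-host-03 to esx-host-07")
    log.warning("Unexpected virtual switch reconfiguration on esx-host-07")

Whether you use a bridge like this or native SIEM connectors, the goal is the same: virtual-environment events must land in the same correlation engine as everything else.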

For more information about log management, see Guide to Computer Security Log Management, NIST SP 800-92.

Step 3: Segment virtual networks
Also shown in Figure 5 is a segmentation scheme to limit traffic between partitions. Implemented using VLANs configured in a virtual switch and VLAN access control lists (VACLs), this example is one way to help ensure unwanted traffic does not pass from a compromised VM to other VMs on the same host. Further, it allows response teams to quickly isolate one or more compromised systems, preventing enterprise-wide effects.

The reasons for segmentation are no different from those in the physical world. However, virtual server segmentation is often forgotten, even though the physical hosts might be placed in secure network segments. Remember that many controls you implement on the physical network must also be configured in virtual environments, because VMs are by design isolated from the controls on your physical network.
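
Where a VMware standard vSwitch is in use, a restricted port group with its own VLAN tag is one way to implement this segmentation. The following hedged sketch uses pyVmomi, VMware's Python SDK; the vCenter address, credentials, host, vSwitch name, and VLAN ID are all illustrative assumptions, and the same change can be made through the management console.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only; validate certificates in production
    si = SmartConnect(host="vcenter.example.local", user="netadmin",
                      pwd="***", sslContext=ctx)
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    host = next(h for h in hosts if h.name == "esx-host-03")   # assumed host

    # Port group reserved for VMs that process confidential data
    spec = vim.host.PortGroup.Specification(
        name="pg-confidential-vlan40",
        vlanId=40,                         # assumed VLAN reserved for sensitive traffic
        vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy(),
    )
    host.configManager.networkSystem.AddPortGroup(portgrp=spec)

    Disconnect(si)

Pair the virtual-side segmentation with VACLs or firewall rules on the physical switches so the isolation holds end to end.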

Step 4: Remedy forensics issues
Forensics is directly affected by virtualization in at least three areas: time synchronization, hardware addressing, and server seizure.

Time Synchronization
In addition to log management solutions, forensics solutions require time synchronization across the enterprise. Without it, correlation engines miss relationships between events and investigators struggle to reconstruct incidents. Most organizations use a time service - internal, external, or both - to ensure consistent time across all physical devices.

Virtual servers must synchronize with the same service. However, this is not always automatically configured. Ensure each VM directly synchronizes with the time service or with the physical host. Using the physical host assumes it synchronizes with the time service.
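
A simple scheduled check can confirm that clocks have not drifted beyond what your correlation engine tolerates. The sketch below assumes the third-party ntplib package and an internal NTP server name; both are illustrative.

    import ntplib   # third-party package: pip install ntplib

    TOLERANCE_SECONDS = 1.0
    client = ntplib.NTPClient()
    response = client.request("ntp.example.local", version=3)   # assumed internal time service

    drift = response.offset   # local clock offset from the reference, in seconds
    if abs(drift) > TOLERANCE_SECONDS:
        print(f"WARNING: clock drift {drift:+.3f}s exceeds {TOLERANCE_SECONDS}s tolerance")
    else:
        print(f"OK: clock drift {drift:+.3f}s")

Running a check like this inside guests as well as on hosts quickly exposes VMs that were never pointed at the time service.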

Hardware Addresses
VMs use virtual hardware (MAC) addresses. When a VM moves, its MAC address changes. If these address changes are not tracked and logged, reconstructing a security incident within virtual environments is difficult. See Figure 6.

Figure 6: Logging Challenges (Brandon Gillespie, 2009, Slide 8)

Consequently, logging is not enough; knowing where a VM was located at any point in the past is also necessary. According to Brandon Gillespie (2009), security teams must ensure virtual MAC addresses are tracked, logged, and available for analysis. In addition, log management processes must consider the possibility that a moving VM has left behind logs on multiple hosts.

For an example of how scheduled tracking might be accomplished in an environment without an automated tracking solution, see Tracking a VM in a Nexus 1000v Virtual Network.
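
If you have no tracking product in place, a scheduled inventory script is one stopgap. The hedged pyVmomi sketch below records a point-in-time mapping of VM to host to virtual MAC addresses; the vCenter connection details are assumptions, and the output should be shipped to your log aggregation point like any other security log.

    import ssl
    from datetime import datetime, timezone
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="audit",
                      pwd="***", sslContext=ctx)
    content = si.RetrieveContent()

    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view

    stamp = datetime.now(timezone.utc).isoformat()
    for vm in vms:
        host = vm.runtime.host.name if vm.runtime.host else "unplaced"
        devices = vm.config.hardware.device if vm.config else []
        macs = [d.macAddress for d in devices
                if isinstance(d, vim.vm.device.VirtualEthernetCard)]
        # One line per VM; ship this output to the log aggregation point
        print(f"{stamp} vm={vm.name} host={host} macs={','.join(macs)}")

    Disconnect(si)

Archived over time, these records let an investigator answer "where was this VM, and what MAC did it use, on this date?"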

Seizing a Virtual Server
When a server is compromised or used to commit a crime, it is often necessary to seize it for forensics analysis. Security teams typically face two challenges when removing a physical server from service: retaining potential evidence in volatile storage and removing a device that supports a critical business process. Proper planning mitigates the effects of both when seizing a VM.

Evidence retention
Evidence retention is a problem when the investigator wants to retain RAM content. For example, removing power from a physical server starts the process of mitigating business impact, but it also denies forensic analysis of data, processes, keys, and possible footprints left by an attacker. Here, VMs have an advantage over physical servers.

Most virtualization solutions, like VMware's ESXi and Microsoft's Hyper-V, provide snapshot capability. A VMware snapshot is a point-in-time image of a VM, including RAM and the virtual machine disk file (Siebert, 2011). The resulting file can provide an investigator with an encapsulated copy of the server at the time the breach or criminal activity occurs. When placed on a quarantined replica of the original hardware, the recovered VM presents a rich forensics environment.

Another method of both disabling a compromised server and retaining critical forensics data is VM suspension. In VMware, for example, suspending a VM creates a suspended state file (.vmss) representing "...the state of the machine at the time it was suspended, or paused..." (Durick, 2011, Suspend, para. 1). The .vmss file is similar to the hibernation file used on Windows systems. For more information on this topic, see the Durick reference above.
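
Where scripted evidence capture is useful, the snapshot itself can be automated. The following hedged pyVmomi sketch takes a memory-inclusive snapshot of a suspect VM; the vCenter address, credentials, and VM name are illustrative assumptions, and your own runbook should dictate whether a snapshot or a suspension better fits the situation.

    import ssl
    from datetime import datetime, timezone
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="ir-team",
                      pwd="***", sslContext=ctx)
    content = si.RetrieveContent()

    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
    suspect = next(v for v in vms if v.name == "hr-db01")   # assumed compromised VM

    label = "IR-" + datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    task = suspect.CreateSnapshot_Task(
        name=label,
        description="Incident response evidence snapshot",
        memory=True,      # capture RAM contents along with disk state
        quiesce=False,    # avoid altering guest state via VMware Tools quiescing
    )
    WaitForTask(task)
    print(f"Snapshot {label} created for {suspect.name}")

    Disconnect(si)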

A snapshot or suspension file might not be enough, however. When planning snapshots or other processes for evidence preparation, be sure to collect all files your VM uses while running. In a VMware ESXi 4 update 1 environment, files to seize include those with the following extensions (Durick, 2011):

  • .vmxf
  • .vmx
  • .vmsd
  • .vmdk (snapshot file)
  • vmname-flat.vmdk
  • .log
  • .nvram
  • .vswp

When using Microsoft Server 2008 R2 virtualization, look for the following file extensions (Microsoft TechNet, 2011):

  • .xml
  • .vsv
  • .vhd
  • .avhd (snapshot file)
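
Once the VM is exported or its folder is accessible, collecting and hashing these files can be scripted so the evidence set is consistent from case to case. The Python sketch below gathers files matching the extensions listed above and writes a SHA-256 manifest; the source and destination paths, and the case identifier, are assumptions.

    import hashlib
    import shutil
    from pathlib import Path

    # Extensions from the VMware and Hyper-V lists above
    EXTENSIONS = {".vmxf", ".vmx", ".vmsd", ".vmdk", ".log", ".nvram", ".vswp",
                  ".xml", ".vsv", ".vhd", ".avhd"}

    source = Path("/evidence/exports/hr-db01")        # assumed location of exported VM files
    dest = Path("/evidence/cases/2011-042/hr-db01")   # assumed evidence store
    dest.mkdir(parents=True, exist_ok=True)

    with open(dest / "manifest.sha256", "w") as manifest:
        for f in sorted(source.rglob("*")):
            if f.is_file() and f.suffix.lower() in EXTENSIONS:
                digest = hashlib.sha256(f.read_bytes()).hexdigest()
                shutil.copy2(f, dest / f.name)
                manifest.write(f"{digest}  {f.name}\n")
                print(f"collected {f.name} ({digest[:12]}...)")

The manifest of hashes supports chain-of-custody documentation when the files are later analyzed in the lab.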

Server removal
Administrators typically strive to meet four goals when a virtual server is removed from service: 1) contain a breach or malware infestation by removing the affected server from the network; 2) prevent any further damage to, or loss of, information residing on local storage; 3) remove the server to a secure location for forensics analysis; and 4) restore services provided by the VM. Meeting these goals requires planning, testing, and documenting processes.

Removal of a server usually starts with isolating it from the network. As I wrote in Step 3: Segment virtual networks, this is easily accomplished using documented steps to isolate one or more virtual network segments. In many cases, isolating the entire physical host is necessary, and proper physical network design enables this. Isolation protects the rest of the network and shuts down external attack sessions. Another option is to suspend the affected VM.

If quick isolation is not necessary or practical, suspend the VM rather than shutting it down; shutting it down might or might not preserve volatile storage, such as RAM. Whether suspension affects forensics investigations depends on how your virtualization vendor implements this capability and how you configure it.
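
One way to script that sequence is to disconnect the VM's virtual NICs and then suspend it. The hedged pyVmomi sketch below shows the idea; the VM name and connection details are illustrative assumptions, and the same actions can be taken manually through the management console.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="ir-team",
                      pwd="***", sslContext=ctx)
    content = si.RetrieveContent()
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
    suspect = next(v for v in vms if v.name == "hr-db01")   # assumed compromised VM

    # 1. Disconnect every virtual NIC (network containment)
    changes = []
    for dev in suspect.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualEthernetCard):
            dev.connectable.connected = False
            dev.connectable.startConnected = False
            changes.append(vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=dev))
    WaitForTask(suspect.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes)))

    # 2. Suspend the VM to preserve volatile state for forensics
    if suspect.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        WaitForTask(suspect.SuspendVM_Task())
    print(f"{suspect.name} isolated and suspended")

    Disconnect(si)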

Both snapshots and suspensions allow preservation of evidence. Seizing the related files and taking them to a secure forensics lab is easily accomplished.

Restoring service, the last step in server seizure, is also an important business continuity planning step for any service-interruption event.

Step 5: Mitigate business impact
The bottom line is that IM is all about mitigating business impact. Modifying detection, containment, and evidence retention processes does not ensure continued operation of the business processes affected by a compromised server. Rather, this requires quick recovery of the server and its data to a point in time just before the compromise. In addition to traditional backups, a virtualized data center has unique tools to accomplish this.

Immediately after suspending a VM, virtualization technology allows another VM to take its place. Two tools make this possible: images and snapshots. VM images, created by the virtual server creation process, usually exist for every VM in the data center. If regularly patched, they allow quick recovery of a server when recovery of dynamic configurations or data is not necessary. (See Patch archived VMs...) Images, however, do not restore a VM's operational state at a specific point in time.

A better alternative is recovering the VM from a snapshot. Snapshots save enough information to restore a server to a specific point in time. However, this requires taking snapshots regularly rather than waiting for an incident. It also means implementing snapshot management processes.
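
As an illustration, reverting to a named pre-incident snapshot can be scripted as part of the recovery runbook. The hedged pyVmomi sketch below assumes a clean standby copy of the server and a snapshot named pre-incident-baseline; both names are illustrative, and the revert should never be run against the seized evidence copy.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def find_snapshot(tree, wanted):
        """Depth-first search of a VM's snapshot tree for a snapshot by name."""
        for node in tree:
            if node.name == wanted:
                return node.snapshot
            found = find_snapshot(node.childSnapshotList, wanted)
            if found:
                return found
        return None

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="ir-team",
                      pwd="***", sslContext=ctx)
    content = si.RetrieveContent()
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
    standby = next(v for v in vms if v.name == "hr-db01-standby")   # assumed clean copy

    if standby.snapshot is None:
        raise SystemExit("No snapshots exist for this VM; recover from an image or backup")
    snap = find_snapshot(standby.snapshot.rootSnapshotList, "pre-incident-baseline")
    if snap is None:
        raise SystemExit("No pre-incident snapshot found; recover from an image or backup")

    WaitForTask(snap.RevertToSnapshot_Task())
    WaitForTask(standby.PowerOnVM_Task())
    print(f"{standby.name} restored to its pre-incident baseline")

    Disconnect(si)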

Snapshots introduce a new set of management challenges; the biggest is the potential performance hit. For example, in VMware environments, post-snapshot reads and writes to disk happen as shown in Figure 7. The VM retrieves data created before the snapshot was taken from the pre-snapshot virtual disk file. Reads and writes for all other data are sent to a delta file created at the time of the snapshot.

Figure 7: VM Snapshot Reads and Writes (Siebert, 2011a)

Microsoft snapshots work similarly. Figure 8 provides a step-by-step look at snapshot creation in a Microsoft Hyper-V environment.

Figure 8: Hyper-V Manager Snapshot Creation (Microsoft TechNet, 2011a)

Performance issues, and the need for additional storage, are necessary planning topics when considering snapshots. Also plan to seize all delta- and other snapshot-related files if snapshots are taken regularly, including:

  • .vmsn (VMware snapshot state file)
  • delta.vmdk (VMware differential disk file)
  • .vmsd (VMware snapshot metadata file)
  • .avhd (Microsoft snapshot file)

Finally, neither suspending a VM nor taking a snapshot is possible if you cannot access the VM's hypervisor. Be sure your management path to the parent partition, via the administration NIC, is truly isolated from the physical network. However, if you believe the hypervisor itself is compromised, simply isolate the physical platform and all the servers it contains; assume all of its VMs are compromised, too.

Conclusion
Introduction of virtualized servers requires rethinking incident management processes. Revisiting the prepare step associated with incident response - including asking your vendor the right questions - is the best place to start.

Five steps lead to tight integration of VM and existing incident response processes. They examine and help remedy system, network, and process design challenges associated with VM placement, incident detection and containment, and business process recovery unique to virtualization. Without them, existing response documentation is only effective for the physical environment.

More Stories By Tom Olzak

Tom Olzak is a security researcher for the InfoSec Institute and an IT professional with over 27 years of experience in programming, network engineering and security. He has an MBA as well as CISSP and MCSE certifications and has written two books, "Just Enough Security" and "Microsoft Virtualization."
