How Service Virtualization Helped Comcast Release Tested Software Faster

Service virtualization has allowed us to get great utilization from our testing staff and complete more projects on time

"Service virtualization has allowed us to get great utilization from our testing staff, complete more projects on time, and also save money by lowering the overall total cost of performing those tests for a given release."

Frank Jennings, TQM Performance Director at Comcast, shares his service virtualization experiences with Parasoft in this Q&A.

Why did Comcast explore service virtualization?

There were two primary issues that led Comcast to explore service virtualization. First, we wanted to increase the accuracy and consistency of performance test results. Second, we were constantly working around frequent and lengthy downtimes in the staged test environments.

My team executes performance testing across a number of verticals in the company, from business services to our enterprise services platform, to customer-facing UIs, to the backend systems that perform the provisioning and activation of devices for subscribers on the Comcast network. While our testing targets (AUTs) typically have staged environments that accurately represent the performance of the production systems, the staging systems for the AUTs' dependencies do not.

Complicating the matter further was the fact that these environments were difficult to access. When we did gain access, we would sometimes impact the lower environments (the QA or integration test environments) because they weren't adequately scaled and could not handle the load. Even when the systems could withstand the load, we received very poor response times from these systems. This meant that our performance test results were not truly predictive of real-world performance.
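To make the idea concrete: a virtual asset is essentially a stand-in service that answers like the real dependency, with response times tuned to match production. Below is a minimal sketch in Python (not Parasoft's implementation; the endpoint, payload, and latency figure are all hypothetical) of an HTTP stub that emulates a provisioning backend with production-like latency, so load tests against the AUT return predictive numbers even when the real staging system is down or underscaled.

# A minimal sketch of a "virtual asset": an HTTP stub standing in for a
# provisioning backend. All names and timings here are illustrative.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

PROD_LIKE_LATENCY_SECONDS = 0.120  # assumed p50 latency of the real dependency

class VirtualAssetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Emulate the real system's response time instead of a staging box
        # that is either down or orders of magnitude slower than production.
        time.sleep(PROD_LIKE_LATENCY_SECONDS)
        body = json.dumps({"status": "ACTIVE", "deviceId": "demo-001"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep load-test console output quiet

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VirtualAssetHandler).serve_forever()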

Another issue was that we had to work around frequent and lengthy downtimes in the staging environments. The staging environment was not available during the frequent upgrades or software updates. As a result, we couldn't run our full performance tests. Performance testing teams had to switch off of key projects at critical periods just to keep busy; they knew they wouldn't be able to work on their primary responsibility because the systems they needed to access simply weren't available.

How did this impact the business?

These challenges were driving up costs, reducing the team's efficiency, and impacting the reliability and predictability of our performance testing. Ultimately, we found that the time and cost of implementing service virtualization was far less than the time and cost associated with implementing all the various systems across all those staging environments, or with building up the connectivity between the different staging environments.

Did you consider expanding service virtualization beyond performance testing?

Yes, the functional testing teams sometimes experience the same issues with dependent systems being unavailable and impeding their test efforts. They're starting to use service virtualization so that they can continue testing rather than get stuck waiting for systems to come back up.

We're currently in the process of expanding service virtualization to the functional testing of our most business-critical applications. We're deploying service virtualization not only to capture live traffic for those applications, but also to enable functional testers to quickly select and provision test environments. In addition to providing the team the appropriate technologies and training, we're taking time to reassure them that their test results won't be impacted by using virtual assets rather than live services.
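One common way to build such assets is record and replay: put a pass-through proxy in front of the real service, let live traffic flow through it, and save each request/response pair for later playback. The sketch below illustrates the capture step under assumed names (the upstream URL and capture file are placeholders, and only happy-path GETs are handled); a real service virtualization tool would also cover methods, headers, errors, and matching rules.

# A minimal sketch of capturing live traffic: a pass-through proxy that
# forwards requests to the real dependency and records each exchange to a
# JSON file for later replay. Target URL and file path are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

REAL_DEPENDENCY = "http://staging-backend.example.com"  # assumed upstream
CAPTURE_FILE = "captured_traffic.json"
recordings = []

class RecordingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the real service and keep its answer.
        with urlopen(REAL_DEPENDENCY + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        recordings.append({"path": self.path, "status": status,
                           "body": body.decode("utf-8", "replace")})
        # Rewrite the whole file on each request; fine for a sketch.
        with open(CAPTURE_FILE, "w") as f:
            json.dump(recordings, f, indent=2)
        # Return the real response to the caller unchanged.
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 9090), RecordingProxy).serve_forever()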

In your opinion, what is the key benefit of service virtualization?

The key benefit of service virtualization is the increased uptime and availability of test environments. Service virtualization has allowed us to get great utilization from our testing staff, complete more projects on time, and also save money by lowering the overall cost of performing those tests for a given release.

If you could start all over again with service virtualization, what would you do differently?

I think things would have run more smoothly if we had a champion in place across all teams at the beginning to marshal the appropriate resources. The ideal rollout would involve centralizing the management and implementation of the virtual assets, implementing standards right off the bat, and using the lessons learned in each group to make improvements across all teams.

Any other tips for organizations just starting off with service virtualization?

Make sure that your virtual assets can be easily reused across different environments (development, performance, system integration test, etc.). It's really helpful to be able to capture data in one environment and then use it across your other environments. Obtaining data for realistic responses can be challenging, so you don't want to constantly reinvent the wheel.
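For instance, if the capture step writes its recordings to a portable file like the one in the earlier sketch, the same data can drive a replay stub in any environment, with only the deployment differing. A minimal sketch, assuming the captured_traffic.json format from above and a hypothetical TEST_ENV variable:

# A minimal sketch of reuse across environments: one set of recorded
# request/response pairs serves the dev, performance, and system-integration
# environments alike. The environment variable and file layout are assumed.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

ENV = os.environ.get("TEST_ENV", "dev")  # dev | perf | sit
with open("captured_traffic.json") as f:
    RESPONSES = {r["path"]: r for r in json.load(f)}

class ReplayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        rec = RESPONSES.get(self.path)
        if rec is None:
            self.send_response(404)  # no recording for this request
            self.end_headers()
            return
        body = rec["body"].encode()
        self.send_response(rec["status"])
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    print(f"Replaying captured traffic for the {ENV} environment")
    HTTPServer(("localhost", 8080), ReplayHandler).serve_forever()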

Also, don't underestimate the amount of education that's needed to get the necessary level of buy-in. For each team or project where we introduced service virtualization, we needed to spend a fair amount of time educating the project teams and business owners about what service virtualization is, what business risks are associated with using it for testing, and how the system proactively mitigates those risks. People are understandably nervous when they hear that you're removing live elements from the testing environment, so some education is needed to put everyone at ease.

Service Virtualization: Real Results Webinar
Want to learn more about service virtualization at Comcast, including how it saved them about $500,000 and helped them reduce downtime by 60%?

Watch the on-demand webinar Service Virtualization: Accelerating the SDLC with Simulated Test Environments.

New Research Package from Gartner and Parasoft: Accelerating the SDLC with Service Virtualization

The new Service Virtualization research package from Gartner and Parasoft provides more details about how service virtualization helps organizations accelerate the SDLC. Download it to learn:

  • Why service virtualization is a "must-have" for accelerating the SDLC.

  • How service virtualization helps organizations release thoroughly tested software faster, and at a lower overall cost.

  • Recommendations for organizations getting started with service virtualization.

  • Strategies for streamlining the release management process beyond service virtualization.

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC, specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, Stickyminds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
