Cloud Shifts the Burden of Security to Development

The application remains your last line of defense

The following is an excerpt from an article that Parasoft recently authored for CrossTalk, an approved Department of Defense journal...

Abstract
The move to the cloud brings a number of new security challenges, but the application remains your last line of defense. Engineers are extremely well-poised to perform the tasks critical for securing the application, provided that certain key obstacles are overcome.

Introduction
This paper explores three ways to help development teams bear the burden of security that the cloud places on them:

  • Use penetration testing results to help engineers determine how to effectively "harden" the most vulnerable parts of the application.

  • Apply the emerging practice of "service virtualization" to provide engineers the test environment access needed to exercise realistic security scenarios from the development environment.

  • Implement policy-driven development to help engineers understand and satisfy management's security expectations (a minimal sketch of what this can look like follows this list).
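
To make the third point concrete before diving in: one way to communicate management's security expectations is to express each policy as an automated check that runs in the build. The following is a minimal sketch, not any particular vendor's implementation; the policy identifiers and banned patterns are hypothetical examples of rules an organization might adopt.

# A toy illustration of policy-driven development (the policy IDs and rules
# below are hypothetical). The policy is written down once, in executable
# form, so engineers learn management's expectations from a failing build
# rather than from a post-release audit.
import pathlib
import re
import sys

BANNED = {
    r"hashlib\.md5": "POLICY-CRYPTO-01: MD5 is not an approved hash for credentials",
    r"verify\s*=\s*False": "POLICY-TLS-02: TLS certificate verification must not be disabled",
}

def scan(root="."):
    violations = []
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for pattern, rule in BANNED.items():
            if re.search(pattern, text):
                violations.append(f"{path}: {rule}")
    return violations

if __name__ == "__main__":
    found = scan()
    print("\n".join(found) or "All security policies satisfied.")
    sys.exit(1 if found else 0)  # a policy violation fails the build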

New Risks, Same Vulnerability
Before the move to the cloud, few organizations lost sleep over application security because they assumed their internally controlled security infrastructure provided ample protection. With the move to the cloud, security concerns are thrust to the forefront as organizations consider how much security control they are willing to relinquish to cloud service providers and what level of exposure they are willing to allow.

The fact of the matter is that, with or without the cloud, failure to secure the application is, and always has been, a dangerous proposition. Even when the bulk of network security rested under the organization's direct control, attackers still managed to launch successful attacks via the application layer. From the 2002 breach at the Australian Taxation Office, where a hacker accessed tax details on 17,000 businesses, to the 2006 incident in which Russian hackers stole credit card information from Rhode Island government systems, to the recent attack that brought down the National Institute of Standards and Technology (NIST) vulnerability database, it is clear that a deficiency in the application layer can be the one and only entry point an attacker needs.

Public cloud, private cloud, or no cloud at all, the application is your last line of defense, and if you don't properly secure it, you're putting the organization at risk. Nevertheless, the move to the cloud does bring some significant changes to the application security front:

  • Applications developed under the assumption of a bulletproof security infrastructure might need to have their strategies for authorization, encryption, message exchange, and data storage re-envisioned for cloud-based deployment (see the sketch after this list).

  • The move to cloud architectures increases the attack surface, potentially exposing more entry points for hackers. This attack surface is compounded by distributed computing technologies such as mobile, web, and APIs.

  • As applications shift from monolithic architectures to composite ones, a high degree of interconnectedness with third-party services emerges, and a poorly engineered or malfunctioning dependency can raise the security risk of all connected components. For example, a recent attack on Yahoo exploited a vulnerability in a third-party application. The composite application is only as secure as its weakest link.

  • As organizations push more (and more critical) functionality to the cloud, the potential impact of an attack or breach escalates from embarrassing to potentially devastating in terms of safety, reputation, and liability.
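
On the first point above, here is a minimal sketch of what "re-envisioning" authorization can mean: instead of trusting any caller that reaches the application from inside the network perimeter, the application verifies a signed, expiring token on every request. The token format, key handling, and names below are simplified assumptions for illustration, not a production design.

# Per-request token verification instead of perimeter trust (illustrative
# only; in practice the signing key would come from a secrets manager and
# the token format would likely be a standard such as JWT).
import hashlib
import hmac
import time

SECRET = b"shared-signing-key"  # assumption: provisioned out of band

def issue_token(user, ttl=300):
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{user}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{user}:{expires}:{sig}"

def verify_token(token):
    try:
        user, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return None  # malformed token
    expected = hmac.new(SECRET, f"{user}:{expires}".encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison, and reject expired tokens.
    if hmac.compare_digest(sig, expected) and time.time() < int(expires):
        return user
    return None

if __name__ == "__main__":
    token = issue_token("alice")
    print(verify_token(token))        # "alice" -> request authorized
    print(verify_token(token + "x"))  # None -> tampered token rejected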

With the move to the cloud placing more at stake, it's now more critical than ever to make application security a primary concern. The industry has long recognized that development can and should play a significant role in securing the application. This is underscored by the DoD's directive for certifications in the area of software development security (e.g., via CISSP). Select organizations that have successfully adopted a secure application development initiative have achieved promising results. However, such success stories remain the exception rather than the rule.

Should Development Be Responsible for Application Security?
Due to software engineers' intimate familiarity with the application's architecture and functionality, they are extremely well-poised to accomplish the various tasks required to safeguard application security. Yet, a number of factors impede engineers' ability to shoulder the burden of security:

  • The organization's security objectives are not effectively communicated to the development level.

  • For engineers to determine whether a particular module they developed is secure, they need to access and configure dependent resources (e.g., partner services, mainframes, databases) for realistic security scenarios, and such access and configurability are not commonly available within the development environment (see the sketch after this list).

  • Management often overlooks security when defining non-functional requirements for engineers and planning development schedules; this oversight, paired with the myopic nature of coding new functionality, commonly reduces security concerns to an afterthought.

  • Security testing frequently does not begin until the testing phase, when it is typically too late to make the necessary critical architectural changes.
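
To illustrate the second bullet above: service virtualization gives engineers a stand-in for a dependent system that can be scripted into security-relevant states on demand. The sketch below is deliberately minimal and assumes the module under test calls a partner service over HTTP; the endpoint, port, and token values are hypothetical.

# A minimal "virtual service" standing in for an inaccessible partner system
# (all names hypothetical). Engineers point the module under test at
# localhost:8080 and exercise the expired-credentials path on demand,
# without needing access to the real dependency.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class VirtualPartnerService(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("Authorization", "")
        if token != "Bearer valid-test-token":
            # Scripted security scenario: reject missing or stale credentials.
            self.respond(401, {"error": "token expired"})
        else:
            self.respond(200, {"account": "12345", "status": "ok"})

    def respond(self, status, body):
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), VirtualPartnerService).serve_forever()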

In the following sections, we explore how strategies related to penetration testing, service virtualization, and policy-driven development can better prepare engineers to bear the heavy burden of security that accompanies the shift to the cloud.

Moving Beyond Penetration Testing: Divide and Conquer
Penetration testing is routinely used to barrage the application with attack scenarios and determine whether the application can fend them off. When a simulated attack succeeds, you know for a fact that the application has a vulnerability that makes you susceptible to a particular breed of attack. It alerts you to real vulnerabilities that can be exploited by known attack patterns: essentially sitting ducks in your application. When a penetration attack succeeds, there is little need to discuss whether the vulnerability needs to be repaired. It's not a matter of "if," but rather of "how" and "when."
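
As a deliberately simplified illustration of the idea, a penetration-style probe can be as small as replaying known attack payloads against an endpoint and flagging any that appear to be accepted. The target URL, parameter name, and payloads below are hypothetical, and real penetration testing tools go far deeper.

# Replaying known attack patterns against an endpoint (illustrative only;
# requires the third-party "requests" package: pip install requests).
import requests

ATTACK_PAYLOADS = [
    "' OR '1'='1' --",            # classic SQL injection probe
    "<script>alert(1)</script>",  # reflected XSS probe
    "../../../../etc/passwd",     # path traversal probe
]

def probe(url, param="q"):
    findings = []
    for payload in ATTACK_PAYLOADS:
        resp = requests.get(url, params={param: payload}, timeout=5)
        # A hardened application should reject or sanitize these inputs;
        # echoing a payload back verbatim suggests a vulnerability.
        if resp.status_code == 200 and payload in resp.text:
            findings.append(payload)
    return findings

if __name__ == "__main__":
    for payload in probe("http://localhost:8080/search"):
        print(f"Possible vulnerability: payload {payload!r} was accepted")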

The common reaction to a reported penetration failure is to have engineers patch the vulnerability as soon as possible, then move on. In some situations, taking the path of least resistance to eliminate a particular known vulnerability is a necessary evil. However, relying solely on a "whack-a-mole" strategy for application security leaves a considerable amount of valuable information on the table: information that could be critical for averting the next security crisis.

Switching to a non-software example for a moment, consider what happened when the US Army realized how susceptible Humvees were to roadside bombs in the early 2000s. After initial ad hoc attempts to improve security with one-off fixes (such as adding sandbags to floorboards and bolting miscellaneous metal to the sides of the vehicles), the Army devised add-on armor kits to address structural vulnerabilities and deployed them across the existing fleet. In parallel with this effort, it also took steps to ensure that additional protection was built into new vehicles requisitioned from that point forward.

How does such a strategy play out in software? The first step is recognizing that successful attacks, actual or simulated, are a valuable weapon in determining which parts of your application are the most susceptible to attack. For example, if the penetration tests run this week succeed in an area of the application where penetration tests have failed before, and this is also an area you've already had to patch twice in response to actual attacks, then this module is clearly suffering from some underlying security issues that probably won't be solved by yet another patch...
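
One lightweight way to act on that insight is to log every successful attack, simulated or real, against the module it penetrated, and then rank modules by recurrence. The data, module names, and weights below are invented purely for illustration.

# A back-of-the-envelope "hotspot" ranking (all data invented). Modules with
# recurring findings are candidates for redesign, not just another patch.
from collections import Counter

# (module, source) pairs: "pentest" = simulated attack, "incident" = real attack
findings = [
    ("auth", "pentest"), ("auth", "incident"), ("auth", "pentest"),
    ("search", "pentest"),
    ("payments", "incident"), ("payments", "pentest"),
]

WEIGHTS = {"pentest": 1, "incident": 3}  # real breaches weigh more heavily

scores = Counter()
for module, source in findings:
    scores[module] += WEIGHTS[source]

for module, score in scores.most_common():
    print(f"{module}: hotspot score {score}")
# Output: auth 5, payments 4, search 1 -> harden "auth" first.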

Want to read more? You can access the complete article here.

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC, specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, StickyMinds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
