Replication & Erasure Coding Is the Future for Cloud Storage & Big Data

Organizations are increasingly turning to cloud storage infrastructures to manage their data

In the course of IT history, many schemes have been devised and deployed to protect data against storage system failure, especially disk drive hardware. These protection mechanisms have nearly always been variants on two themes: duplication of files or objects (backup, archiving, synchronization and remote replication come to mind); or parity-based schemes at disk level (RAID) or at object level (erasure coding, often also referred to as Reed-Solomon coding). Regardless of implementation details, the latter always consist of the computation and storage of "parity" information over a number of data entities (whether disks, blocks or objects). Many different parity schemes exist, offering a wide range of trade-offs between capacity overhead and protection level - hence their appeal.

Erasure Coding
Of late, erasure coding has received a lot of attention in the object storage field as a "one-size-fits-all" approach to content protection. This is a stretch. Erasure coding is a solid approach to reducing storage footprint for an interesting but bounded set of use cases involving both large streams and large clusters, but at the cost of sacrificing the numerous use cases that involve small streams, small clusters, or a combination of the two.

Most readers will be familiar with the concept of RAID content protection on hard disk drives. For example, the contents of a set of five drives are used to compute the contents of a sixth drive, called the parity drive, bringing the RAID set to a total of six drives. If any single drive of those six fails, the lost content can be rebuilt from the five remaining drives. Besides such a 5+1 scheme, many others are possible, in which even multiple drives can fail simultaneously and the full content can still be rebuilt: there is a continuum in the trade-off between footprint and robustness.
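
To make the parity idea concrete, here is a minimal, hypothetical Python sketch of the single-parity construction described above (the XOR parity used by RAID 4/5); the block sizes and values are illustrative only.

```python
# Hypothetical sketch: single-parity protection over five data blocks,
# the same XOR construction RAID 4/5 uses. Names and sizes are illustrative.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [bytes([i] * 8) for i in range(1, 6)]   # five 8-byte data blocks
parity = xor_blocks(data)                      # the sixth, "parity" block

# Simulate losing one drive and rebuilding it from the survivors plus parity.
lost = 2
survivors = [blk for i, blk in enumerate(data) if i != lost] + [parity]
assert xor_blocks(survivors) == data[lost]
```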

More recently, the same class of algorithms used for RAID has been applied to the world of object storage, where they are commonly called erasure codes. The concept is similar: imagine an object to be stored in a cluster. Rather than storing it whole and replicating it, we cut the incoming stream into five data segments and compute one parity segment, for six segments in total - a 5+1 scheme. As with the RAID mechanism above, any missing segment can be rebuilt from the five remaining ones. This provides a mechanism to survive a failed disk drive without making a full replica: the footprint overhead is just 20% here rather than 100%, with comparable data durability.

Beyond this "5+1" scheme, many more Erasure Coding (EC) schemes are possible. They can survive as many disk failures as their number of parity segments: a 10+6 scheme can survive six simultaneous segment failures without data loss, for instance. Here the overhead will be 60% ((10+6)/10).

Erasure Coding Comes with Trade-offs
The underlying objective is clear: provide protection against failure at a lower footprint cost. However, as usual, there is no such thing as a "free lunch." There are trade-offs to be considered when compared to replication. The key is to have the freedom to choose the best protection for each particular use case.

When chopping up objects and storing the resulting segments across a number of nodes, the "physical" object count of the underlying storage system is multiplied (e.g., for a 10+6 scheme, it is multiplied by 16). Not all competing object storage systems handle high object counts well. It is also clear that the granularity (i.e., minimum allocation size) of the underlying file system or object storage system determines how small an object can be and still be stored economically using erasure coding. It makes little sense from an efficiency perspective to store, say, a 50K object using a 10+6 erasure coding scheme if there is a file system at the core of the storage system, because file systems still segment files into blocks with a minimum block size. A common threshold for this block size on a Linux file system is 32K, so the storage needed for a 50K file under a 10+6 erasure coding scheme would be 512K (32K * 16 segments), a 10x increase in footprint. As we will see, replication is a much better approach for small files.
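
The small-object arithmetic above can be checked with a hypothetical back-of-the-envelope calculation; the 32K allocation unit, 10+6 scheme and two-replica comparison are the assumptions from this example, not universal constants.

```python
# Sketch of the small-object footprint arithmetic, with assumed parameters.

BLOCK = 32 * 1024          # assumed minimum allocation unit of the file system

def ec_footprint(obj_size, k, m, block=BLOCK):
    """Raw bytes consumed when an object is split into k data + m parity segments."""
    segment = -(-obj_size // k)                  # logical bytes per data segment (ceiling)
    on_disk = -(-segment // block) * block       # rounded up to the allocation unit
    return on_disk * (k + m)

def replica_footprint(obj_size, copies, block=BLOCK):
    """Raw bytes consumed by whole-object replicas."""
    return -(-obj_size // block) * block * copies

small = 50 * 1024
print(ec_footprint(small, 10, 6))     # 524288 bytes (512K): 16 segments of one 32K block each
print(replica_footprint(small, 2))    # 131072 bytes (128K): two 64K copies
```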

Replication
The simplest form of protective data redundancy is replication: one or more additional copies of an "original" object are created and maintained so that a copy remains available if that original is somehow damaged or lost. In spite of the recent hype around erasure coding, there remain substantial use case areas where replication is clearly the superior option. For the sake of example, imagine a cluster of 100 CPUs with one disk drive each, holding 50 million objects with two replicas each - 100 million objects grand total. When we speak of replicas in this context, we mean an instance - any instance - of an object; there is no notion of "original" or "copy." Two replicas equal a grand total of two instances of a given object, somewhere in the cluster, on two different, randomly chosen nodes. When the loss of an object instance is detected, a recovery cycle begins. Data loss occurs only if both replicas are lost, which is why it is important to store replicas on different nodes and, if possible, in different locations. It is also important to have efficient and rapid recovery cycles: you want objects to be re-replicated quickly, because a second failure that overlaps an unfinished recovery cycle can lead to data loss. With three replicas per object, three overlapping recovery cycles (a very low probability event) would be required to cause any data loss.
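
A minimal, hypothetical placement sketch (the node names and function below are illustrative, not any product's API) captures the rule that replicas must land on distinct nodes, and why data loss requires losing every replica before recovery completes.

```python
# Illustrative sketch (all names assumed): placing replicas of an object on
# distinct, randomly chosen nodes of a cluster.
import random

def place_replicas(object_id, nodes, copies=2):
    """Pick `copies` distinct nodes at random to hold an object's replicas."""
    if copies > len(nodes):
        raise ValueError("not enough nodes for the requested replica count")
    return random.sample(nodes, copies)

nodes = [f"node-{i:03d}" for i in range(100)]    # the 100-node cluster from the example
placement = place_replicas("obj-42", nodes, copies=2)
print(placement)                                 # e.g. ['node-017', 'node-088']

# Data is lost only if every node holding a replica fails before a recovery
# cycle has re-created the missing copy elsewhere.
```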

Replication and Erasure Combined Is the Answer
As so often in IT, there is no single perfect solution to a wide array of use cases. In object storage applications, cluster sizes run the gamut from just a few nodes built into a medical imaging modality to thousands of nodes spanning multiple data centers, with object sizes ranging from a few kilobytes for an email message to hundreds of gigabytes for seismic activity data sets. If we want to fulfill the economic and manageability promises of a single unified storage platform, we need technology that is fully capable of seamlessly adapting across those use cases.
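
One way such adaptation could look in practice is a per-object protection policy. The sketch below is purely illustrative - the size threshold, node count and scheme choices are assumptions made for this example, not recommendations from the article or any product's actual behavior.

```python
# Hedged sketch of a protection-policy chooser; all thresholds are assumed.

def choose_protection(object_size, cluster_nodes,
                      small_object_limit=1024 * 1024,  # assumed 1 MB cutoff
                      min_ec_nodes=16):                # enough nodes to spread 10+6 segments
    """Use replication for small objects or small clusters,
    erasure coding for large objects on large clusters."""
    if object_size < small_object_limit or cluster_nodes < min_ec_nodes:
        return {"scheme": "replication", "copies": 2}
    return {"scheme": "erasure_coding", "data": 10, "parity": 6}

print(choose_protection(50 * 1024, cluster_nodes=4))        # small object -> replication
print(choose_protection(5 * 1024**3, cluster_nodes=200))    # large object, big cluster -> 10+6 EC
```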

To deal with the velocity and variability of unstructured information, organizations are increasingly turning to cloud storage infrastructures to manage their data in a cost-effective, just-in-time manner, while others need the robustness of Big Data repositories to handle the sheer volume of today's boundless data growth. A combination of replication and erasure coding, unified in a single object storage solution, provides the best option to access and analyze data regardless of object size, object count or storage amount, while ensuring data integrity aligned with business value. Traditional file systems simply cannot provide the ease of management and accessibility required for cloud storage, nor can they provide the massive scalability and footprint efficiency required for Big Data repositories. The future of both cloud storage and Big Data remains firmly entrenched in an object storage solution that incorporates both replication and erasure coding into its architecture to overcome the limitations of either technology alone.

For an in-depth paper on "Replication and Erasure Coding Explained," please visit http://www.caringo.com/

More Stories By Paul Carpentier

Paul Carpentier is CTO and Founder of Caringo. Known as the father of the Content Addressing concept, he invented the patent-pending, scalable and upgradeable security that is at the heart of Caringo. He was the architect of SequeLink — the first client/server middleware product to connect heterogeneous front ends running over multiple networks to multiple databases on the server side.

Paul founded Wave Research and conceived FileWave, the first fully automated, model-driven software distribution and management system. At FilePool, he invented the technology that created the Content Addressed Storage (CAS) industry. FilePool was sold to EMC, which turned CAS into a multi-billion dollar marketplace. Caringo CAStor, based on two of Mr. Carpentier's six patents, promises to revolutionize the data storage business in much the same manner that CAS created a whole new marketplace.
