
@CloudExpo: Blog Feed Post

Can the Cloud Do ‘In Perpetuity’?

One thing, of course, that most public cloud providers are good at is offering a platform upon which others can build

Cloud computing is great, right? As a way to get something up and running quickly, affordably, and with a minimum of fuss, it can rarely be beaten.

But some of the most compelling attributes of the public cloud are best suited to ephemeral or (relatively!) short-term use cases. You can spin up a cloud server in minutes. You can scale a cloud-based application to cope with the peaks and troughs of demand. You can control all of this through a web console, with no more than a credit card and a laptop. Silicon Valley, SoMa, Silicon Alley, Silicon Roundabout, Silicon Allee, Silicon Wadi, Silicon Forest, Silicon Welly, and the Silicon Bog (only one of those was made up, I think) are full to bursting with bright young things building exciting new products (and silly photo sharing sites) powered only by the cloud and expensive coffee.

And then you have government, private, and commercial Archives, with an overriding imperative to keep stuff for a very, very long time. These Archives clearly can (and do) use cloud computing in the same ways as everyone else. They use clouds to cost-effectively transform data from one format to another, they use clouds to stream large and popular media files to the public, and they use clouds in all sorts of other ways to make innumerable workflows and processes easier, cheaper, or more robust. For those use cases, even the biggest, grandest, and most important of archives is actually pretty much like any other user. Cloud’s as useful to them as it is to the rest of us, and that’s great.

Does it make sense, though, for Archives to entrust any of their long-term preservation role to the cloud? I’m not sure (yet), but The National Archives (TNA) here in the UK wants to find out. They’ve commissioned a study from a small consultancy, Charles Beagrie, and I’m subcontracted to provide a bit of cloud knowledge to the team.

Out of the box, you’d have to question the sense of an archive entrusting anything to the public cloud for purposes of long-term preservation. That’s not really what Amazon’s Simple Storage Service or Rackspace’s Cloud Files or any of the other cloud-based filestores are for. Their Service Level Agreements and their technical underpinnings are all about cost-effectively storing lots of stuff and losing as little as possible. If a file is lost or damaged, the service provider might pay out a few service credits, and/or the customer might restore from a backup, and everyone continues on their way. Archivists, we were reminded at one of the project’s focus groups, have this peculiar expectation that the systems they use to preserve their primary materials won’t lose anything at all. A couple of service credits don’t really help when you just lost, truncated, or changed a few words in the digital equivalent of the Magna Carta or the Domesday Book or the Book of Kells or the Declaration of Arbroath. And, just to be totally clear, losing a digital copy of the Declaration of Arbroath would be ok. The National Archives of Scotland still has the vellum (I presume their copy was written on vellum?) in a climate-controlled vault. They probably also have a CD or two of backups for the digital images. Things become a bit more serious when the content is ‘born digital,’ and the file you’re preserving is the thing itself and not just an image of some physical artefact.

Even with archival-ish services like Glacier, which Amazon says

is designed to provide average annual durability of 99.999999999% for an archive. The service redundantly stores data in multiple facilities and on multiple devices within each facility. To increase durability, Amazon Glacier synchronously stores your data across multiple facilities before returning SUCCESS on uploading archives. Unlike traditional systems that can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks and is built to be automatically self-healing,

(my emphasis)

the big public cloud providers aren’t really in the business of supporting the extreme needs of an Archive. Archives demand a whole extra level of error checking, resilience, redundancy and integrity, and it would be cost-prohibitive for AWS and their competitors to do all that across their sprawling data centres when most customers are actually perfectly happy with “redundantly stores data in multiple facilities” and “automatically self-healing.”
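To put that eleven-nines figure in perspective, a quick back-of-the-envelope calculation (my own illustrative arithmetic, not Amazon's, and assuming independent losses) shows what "99.999999999% durability" means for an archive at scale:

```python
# What does 99.999999999% annual durability imply for a large archive?
# Illustrative figures only, assuming each object is lost independently.

def expected_annual_losses(num_objects: int, durability: float) -> float:
    """Expected number of objects lost per year."""
    return num_objects * (1.0 - durability)

ELEVEN_NINES = 0.99999999999  # Glacier's stated design target

for n in (1_000_000, 1_000_000_000, 100_000_000_000):
    losses = expected_annual_losses(n, ELEVEN_NINES)
    print(f"{n:>16,} objects -> {losses:.6f} expected losses per year")
```

Even at a hundred billion objects the expected loss is around one object a year, which is remarkable engineering; but "around one a year" is still a probabilistic promise, not the archivist's "lose nothing, ever."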

Interestingly, Seagate sees value in offering a Glacier competitor capable of storing data “intact for decades” and offering access instantly rather than in a matter of hours as Glacier does. As it’s based in Utah I doubt that European government archives would touch it, but it will be interesting to see whether their North American cousins show any interest…

One thing, of course, that most public cloud providers are good at is offering a platform upon which others can build. Archivists, like others, have begun to layer rules, policies, procedures and processes on top of the bare-bones cloud infrastructure offerings, to build something a little more robust and dependable. Services like DuraCloud take AWS and Rackspace (currently only in their US data centres, but that could change), and add things like proactive error checking and even more backups to deliver something that an archivist might be prepared to trust.
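The "proactive error checking" these layered services add is essentially fixity checking: record a cryptographic checksum for each object at ingest, then periodically re-read and re-hash everything, flagging any object whose checksum no longer matches so it can be repaired from a replica. A minimal sketch of the idea (my own illustration; function names are hypothetical, not DuraCloud's API):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large archival objects
    never need to fit in memory all at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_fixity(manifest: dict[str, str], root: Path) -> list[str]:
    """Return names of objects whose current checksum no longer matches
    the one recorded at ingest -- candidates for repair from a replica."""
    return [
        name for name, recorded in manifest.items()
        if sha256_of(root / name) != recorded
    ]
```

Run on a schedule against every copy, with automatic repair from a good replica when a mismatch appears, this is the extra layer of assurance that turns a bare-bones object store into something an archivist might be prepared to trust.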

There’s a use case here, and there are plenty of (mostly university) archives in the States putting DuraCloud and similar cloud-powered tools to work as part of their preservation strategy.

But I can’t help wondering if some great big enterprise data management solution, with multiply redundant disks, multiply redundant backups and a whole heap of watertight, ironclad, fault tolerant, and ridiculously over-specified policies might be a better (albeit eye-wateringly expensive) way to preserve the truly irreplaceable? Either that, or archives and archivists need to explicitly embrace a more pragmatic approach to what they’re attempting with these systems.

‘Design for failure’ is a core tenet of cloud-powered systems. What’s the archival equivalent? ‘Lose nothing, ever’ just won’t cut it.

Disclaimer: Charles Beagrie is a client. TNA is a client of theirs. This post is not part of the project. Any opinions expressed here are my own, a work in progress… and subject to change!

Image of The National Archives by Flickr user ‘electropod’


More Stories By Paul Miller

Paul Miller works at the interface between the worlds of Cloud Computing and the Semantic Web, providing the insights that enable you to exploit the next wave as we approach the World Wide Database.

He blogs at www.cloudofdata.com.
