Here’s What NOT Using Cloud Storage Is Costing You

Cloud storage presents a very interesting analogy

It’s no secret that doing nothing is often considered a safe bet. The psychology behind inaction is well understood, particularly in IT — the path of least disruption is usually maintaining the status quo rather than trying something new.

But once in a while, a decision of inaction can prove very costly. For instance, would you ignore leaky plumbing in your home? Barring any flooding or damage, there may not be much urgency to act — perhaps until the water bill arrives, at which point you experience a change of heart. But what if the leak existed before you moved into the home, and you never realized you were overpaying for water to begin with?

Cloud storage presents a very interesting analogy to the above situation. You may never realize how much unnecessary spending is a part of maintaining traditional storage until you examine some of the cloud-based alternatives.

Take for instance a hypothetical organization using 50TB of storage capacity today. Let’s examine the cost of traditional storage versus cloud storage using a few reasonable assumptions:

  • Cost of traditional storage: $1,500 per TB for traditional on-prem storage, with 25% in annual maintenance. Assume replacement every 3 years.
  • Cost of cloud storage: $0.026 per GB per month (using Google Cloud Storage pricing). Assume another 50% for bandwidth (downloads) and puts/gets, for a total of $0.039 per GB per month.
  • Starting capacity: 50TB
  • Capacity growth: 30% annually
  • Storage price reduction: 20% annually
  • Administration and physical costs are ignored for now

Below is a chart that illustrates the differences in total cost of ownership (TCO) between cloud and traditional storage over the next 9 years:

[Chart: 9-year total cost of ownership, cloud vs. traditional storage]
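The assumptions above can be turned into a quick back-of-the-envelope model. The sketch below is an illustration of that arithmetic, not the author's exact spreadsheet — in particular, the refresh mechanics (a full hardware replacement every third year, with maintenance charged on the cumulative purchase price) are assumptions:

```python
# Back-of-the-envelope TCO model using the article's stated assumptions.
YEARS = 9
START_TB = 50
GROWTH = 0.30            # 30% annual capacity growth
PRICE_DROP = 0.20        # 20% annual storage price decline (both options)

TRAD_PER_TB = 1500.0     # $ per usable TB, on-prem
MAINT = 0.25             # annual maintenance as a fraction of purchase price
REPLACE_EVERY = 3        # full hardware refresh cycle, in years

CLOUD_PER_GB_MO = 0.039  # $/GB/month incl. bandwidth and puts/gets

def tco(years=YEARS):
    trad = cloud = 0.0
    cap_tb = START_TB
    trad_price = TRAD_PER_TB
    cloud_price = CLOUD_PER_GB_MO
    owned_tb = 0.0        # capacity already purchased
    purchase_basis = 0.0  # cumulative purchase price, for maintenance
    for year in range(years):
        if year % REPLACE_EVERY == 0:
            # Refresh year (including year 0): buy the full current capacity.
            purchase = cap_tb * trad_price
            purchase_basis = purchase
        else:
            # Otherwise buy only the capacity added by growth.
            purchase = (cap_tb - owned_tb) * trad_price
            purchase_basis += purchase
        owned_tb = cap_tb
        trad += purchase + purchase_basis * MAINT

        # Cloud: pay-as-you-go on current capacity, billed monthly per GB.
        cloud += cap_tb * 1024 * cloud_price * 12

        # Year-over-year: capacity grows, unit prices decline.
        cap_tb *= 1 + GROWTH
        trad_price *= 1 - PRICE_DROP
        cloud_price *= 1 - PRICE_DROP
    return trad, cloud
```

Under these assumptions the traditional total comes out several times the cloud total over nine years, with visible cost spikes at the year-3 and year-6 refreshes — which is the shape of the gap the chart above illustrates.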

So how can the gap between cloud and traditional storage be so substantial? Some will argue $1,500 per TB is expensive for a storage system, as raw disk can be purchased for $100 per TB from an e-tailer. But raw disk capacity does not make a high-durability, always-on storage system. Most enterprise storage uses RAID protection, which raises costs and reduces usable capacity. Furthermore, enterprise storage typically requires multi-site redundancy for disaster recovery. In that light, $1,500 per usable TB is a great, if not implausibly good, deal.

Contrast that with top-tier cloud storage, which comes standard with triple data center redundancy and intra-site redundancy. Cloud storage requires virtually no maintenance or replacement, avoiding the two replacement-cycle “spikes” of traditional storage. What’s more eye-popping is that this comparison does not take into account the administrative cost savings of cloud storage — doing away with day-to-day tasks such as failure management, maintenance, and upgrades — nor does it take into account the environmental costs: power, cooling, and floor space.

What’s missing from the comparison? A way to deliver cloud storage as a replacement for traditional storage. Cloud-integrated storage provides that route, offering the familiar interfaces and performance of local storage while enabling the cost savings of cloud.

Next time you are budgeting for data storage, consider the cost of maintaining the status quo.

The post Here’s what NOT using cloud storage is costing you appeared first on TwinStrata.

More Stories By Nicos Vekiarides

Nicos Vekiarides is the Chief Executive Officer & Co-Founder of TwinStrata. He has spent over 20 years in enterprise data storage, both as a business manager and as an entrepreneur and founder in startup companies.

Prior to TwinStrata, he served as VP of Product Strategy and Technology at Incipient, Inc., where he helped deliver the industry's first storage virtualization solution embedded in a switch. Prior to Incipient, he was General Manager of the storage virtualization business at Hewlett-Packard. Vekiarides came to HP with the acquisition of StorageApps where he was the founding VP of Engineering. At StorageApps, he built a team that brought to market the industry's first storage virtualization appliance. Prior to StorageApps, he spent a number of years in the data storage industry working at Sun Microsystems and Encore Computer. At Encore, he architected and delivered Encore Computer's SP data replication products that were a key factor in the acquisition of Encore's storage division by Sun Microsystems.
