Evan's predictions for 2013

Now that 2012 is over, I guess it’s time to start looking at what’s coming down the track in 2013. Here are my top five predictions for the year ahead:

ZFS will be recognized as the most broadly deployed storage file system in the world.

Okay, so I cheated on that one. It already is. We figure that we alone have half as much storage under management as NetApp claims. Add Oracle and ZFS is already bigger than any single storage file system. Add all the Solaris and illumos deployments on top of that and ZFS is 3-5x larger than NetApp’s ONTAP. In fact, the number of ZFS users is larger than the users of NetApp’s ONTAP and EMC’s Isilon file systems combined.

“Other” will again be the only storage vendor growing product sales year on year

Take a look at EMC’s recent earnings results. They show that while EMC is gaining market share, its product sales are declining year on year. NetApp’s results are similar: it too is likely gaining share against the much, much larger systems vendors while its sales drop quarter on quarter.

Given that storage spend is actually increasing, the only explanation that makes sense is that “Other” is taking more and more share within storage and is taking ALL of the revenue growth in the space.

What this means is that you are out of touch if you are not at least evaluating “Other”. Companies like Nexenta are pioneering software defined storage that offers superior enterprise-class performance and data protection without the vendor lock-in and ridiculous pricing of the legacy storage vendors.

Software defined storage will be more disruptive and more difficult than the rest of the software defined data centre

There are about 1.2 billion reasons software defined networking was hot in 2012, starting with VMware acquiring Nicira for $1.2 billion. And with good reason. Fixing networking, and making it more flexible, is an important part of fixing the data centre.

But storage is the real bottleneck.  At current rates of growth, storage is on pace to consume more dollars than networking, security, and compute put together by 2014.  That’s simply not sustainable.

Perhaps even more importantly, storage is hard and data is heavy. To achieve software defined networking, you can move network port definitions around with a VM, as long as a software infrastructure is in place plus the hardware to forward those packets accordingly. You cannot move the data around like that.

Repeat after me: you cannot move the data around. You cannot move zettabytes of data around, because the speed of light has not changed and it takes time to get that data over the network. So it is increasingly important to work out what SLAs from compute and networking are acceptable for delivering per-application performance on the storage. Perhaps this will increasingly be done by performing compute ON the storage, as in our VSA for View product.
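To put a rough number on the data-gravity problem, here is a back-of-envelope sketch. The figures are purely illustrative assumptions, not measurements, and they assume a perfect, uncontended link.

```python
# Back-of-envelope only: how long to move a single petabyte over a dedicated
# 10 Gb/s link at perfect line rate? (Illustrative assumption, not a benchmark.)
petabyte_bits = 1e15 * 8          # 1 PB expressed in bits
link_bps = 10e9                   # 10 Gb/s link
seconds = petabyte_bits / link_bps
print(f"{seconds / 86400:.1f} days")  # roughly 9.3 days for one petabyte
```

And that is one petabyte; zettabyte-scale data is a million times that, which is why the data tends to stay put and the compute comes to it.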

SaaS and web companies will continue to vote against IaaS offerings from major vendors

Take a poll of the CEOs of the top SaaS companies and they’ll all tell you, “No legacy IaaS company has a clue how to run infrastructure for the enterprise”. The legacy players cannot match the price point of offerings built on commodity hardware, and relatively few data centre providers pass muster.

NVMe and anti-competitive behavior by flash factories will shift the flash storage world towards openness 

The four or five companies that make pretty much all the world’s NAND for SSDs and consumer devices have recently moved to limit global supply in the hope of restraining price drops. That has vendors and users who depend on flash worried about locking themselves into a single supplier.

NVMe offers some hope. Unlike Fusion-io, which asks users to adopt a proprietary set of APIs to get to their data, NVMe is a standard approach to accessing data on flash-enabled systems. Nexenta and most other storage vendors will support NVMe, which should level the playing field somewhat.
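As a small illustration of what a standard interface buys you, here is a minimal sketch, assuming a Linux host with the stock in-kernel NVMe driver and the usual sysfs layout (nothing here is specific to Nexenta or any drive vendor): every NVMe controller and its namespaces show up through the same paths, regardless of who made the flash.

```python
# Minimal sketch: enumerate NVMe controllers and their block devices on Linux.
# Assumes the standard in-kernel NVMe driver and sysfs layout; paths may vary.
import glob
import pathlib

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    name = pathlib.Path(ctrl).name                       # e.g. "nvme0"
    model = (pathlib.Path(ctrl) / "model").read_text().strip()
    namespaces = sorted(glob.glob(f"/dev/{name}n*"))      # e.g. /dev/nvme0n1
    print(f"{name}: {model} -> {namespaces}")
```

Contrast that with a proprietary API, where the same inventory requires vendor-specific tooling.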

Either way, openness is important. And software defined storage that abstracts away the underlying hardware dependencies is important if storage and compute buyers want to avoid more vendor lock-in as the world shifts towards flash.

That’s my top five for 2013, but I also have a bonus prediction for you: All flash isn’t a company, it’s a feature

Every storage system vendor will have all flash capabilities in its product offering in 2013. We launched ours in 2012, with partners announcing systems based on NexentaStor that deliver over 1 million IOPS, more than 3x faster than the proprietary all flash systems on the market.

Our users don’t want to sacrifice enterprise-class requirements like data protection, NAS access and 24x7 support in order to have all flash appliances. So they won’t. They’ll buy all flash from legacy vendors or from other suppliers like Nexenta and our partners, including Dell, SGI, Wipro, RackTop, Cisco and others that have a track record of making many thousands of customers successful.

To paraphrase John Chambers of Cisco and many other leaders of the IT industry, when industries shift, they shift.  All we can do as companies is try to anticipate and then keep up with the shift.  

With increasing coverage in mainstream IT and in analyst reports – and mounting interest on the part of Wall Street, including countless public investors and bankers with whom I’ve been spending time – the storage industry is shifting right before our eyes. In 2012 all the major vendors saw declining core product sales despite a fast-growing overall storage sector. We also saw confirmation that an originally general-purpose file system, ZFS, has passed the legacy storage vendors in terms of capacity under management. And with software defined storage gaining visibility, I’m confident that by the end of 2013 we will look back on the storage industry of the early 2000s and wonder, “What were we thinking?”

The world has changed. And openness and flexibility have come to storage. The result will be a better IT industry and a smarter world. But that’s a subject for another blog.

What do you think about my projections?  What did I miss?  What is the most likely to occur?  What is least likely?


