Big Compute Gives Life to New Data

Deriving value through computation

What's Big Data without Big Compute? Basically just a large collection of unstructured information with little purpose and value.  It's not enough for data just to exist; we must derive value from it through computation - something commonly referred to as analytics.

The Quantum Nature of Big Data
With traditional data, we simply query it to derive results; everything we need is already stored within the data set itself.  For instance, given a customer database with dates of birth, we might simply fetch the list of customers born after a certain date.  This is a simple query, not a computation, and therefore cannot be considered analytics.
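To make the contrast concrete, here is a minimal Python sketch of that kind of query, using a hypothetical in-memory customer list (the records and field names are invented for illustration).  The answer is already sitting in the data set; nothing new has to be computed.

```python
from datetime import date

# Hypothetical customer records; names and fields are illustrative only.
customers = [
    {"name": "Ada",   "date_of_birth": date(1988, 3, 14)},
    {"name": "Grace", "date_of_birth": date(1972, 11, 2)},
    {"name": "Alan",  "date_of_birth": date(1995, 6, 23)},
]

# A simple query: just fetch what is already stored.
cutoff = date(1990, 1, 1)
born_after_cutoff = [c["name"] for c in customers if c["date_of_birth"] > cutoff]
print(born_after_cutoff)  # ['Alan']
```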

However, with Big Data, we can distribute the information so that we can run complex analytics on it at scale.  Unlike traditional data, the information itself has little meaning until we process it.  The reason we distribute the data sets is not because they are large, but because we want to leverage clusters of computers to run more than just simple queries.  That is why in the Big Data model, the data itself doesn't hold the answer - to unleash it we must compute it.

Think of this as a virtual "Schrödinger's Cat"... it can mean anything until we actually look "in the box."  The difference is that we're not asking a simple question such as, "is it dead or alive," but rather more complex inquiries such as, "assuming it's alive, what might its future behavior be?"  Analytics, especially predictive analytics, rely on patterns and their associations.  Because the data sets tend to change (or grow) over time, we have to understand that the results of these complex computations will most definitely vary as well.
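As a rough illustration of "the data doesn't hold the answer until we compute it," here is a minimal scatter-and-reduce sketch in Python, using multiprocessing as a stand-in for a real cluster (the event data and function names are assumptions for illustration).  Each worker produces a partial result from its own partition, and only the reduce step produces the answer - which is stored nowhere in the raw data.

```python
from collections import Counter
from multiprocessing import Pool

# Hypothetical event stream, partitioned as a stand-in for a distributed data set.
partitions = [
    ["search", "view", "buy", "view"],
    ["view", "view", "buy"],
    ["search", "search", "view"],
]

def partial_counts(events):
    """Map step: each worker computes a partial result from its own partition."""
    return Counter(events)

if __name__ == "__main__":
    with Pool(processes=len(partitions)) as pool:
        partials = pool.map(partial_counts, partitions)
    # Reduce step: combine the partial results into an answer that does not
    # exist anywhere in the data until we compute it.
    total = sum(partials, Counter())
    print(total)  # e.g. Counter({'view': 5, 'search': 3, 'buy': 2})
```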

Knowing this, it's perhaps misleading to associate the term "Big Data" with the data alone, since Big Data cannot really exist without Big Compute to make sense of all the information.

What's So Special About Big Compute?
Big Compute implies one of two things:

  1. Ordinary computing scaled across a massive parallel cluster
  2. High-Performance Computing (HPC)

The problem with the former is that it can only scale so far before performance drops off.  Furthermore, for it to really succeed, the data itself must be spread just as widely.  This also brings practical challenges of systems management, complexity, and infrastructure constraints such as networking and power.

High-Performance Computing is a more natural form of "Big Compute" because it scales and packs a powerful per-unit punch.  What does this mean? Simply that we can realize higher computation density with far fewer "moving parts."  A good example is using Graphics Processing Units (GPUs) for vector calculations.  Sure, you can do this with Central Processing Unit (CPU) cores alone, but you've only got around 8-16 in each typical server node.  Each GPU can have hundreds or even thousands of cores.  If you vectorize your calculations to take advantage of this, you can do far more work with far less power and management complexity (at scale) than if you had to spread it across dozens or even hundreds of CPUs (and the servers they live in).
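As a small illustration of the vectorization idea, the sketch below compares an element-by-element loop with a vectorized NumPy expression (the array size and the toy kernel are arbitrary choices).  The same vectorized expression can typically be moved to GPU cores with a drop-in array library such as CuPy, though that assumes a suitable GPU environment.

```python
import time
import numpy as np

# A toy analytics kernel: scale-and-sum over a large vector.
n = 2_000_000
x = np.random.rand(n)
y = np.random.rand(n)

# Scalar loop: one element at a time, limited by a handful of CPU cores at best.
start = time.perf_counter()
total_loop = 0.0
for i in range(n):
    total_loop += 3.0 * x[i] + y[i]
loop_seconds = time.perf_counter() - start

# Vectorized: the whole array is handed to optimized, data-parallel kernels.
start = time.perf_counter()
total_vec = float(np.sum(3.0 * x + y))
vec_seconds = time.perf_counter() - start

print(f"loop: {loop_seconds:.3f}s  vectorized: {vec_seconds:.3f}s")
# With a GPU array library such as CuPy (an assumption about your environment),
# the same vectorized expression can be spread across thousands of GPU cores.
```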

So this raises the question: Why does Big Compute really matter?  Can't simple algorithms on commodity compute already do predictive analytics?

The answer is of course yes, but there are two problems - one immediate and one future.

The immediate problem is that in many cases, the speed at which you get results matters just as much as the results themselves.  For example, if you plan to use analytics to improve e-commerce, the best time to do so is while the customer is engaged in a transaction. Sure, there's still value in following up with the customer later, after you've crunched the data, but why not take advantage of the moment, while he or she has credit card in hand, to present offers that may increase spend?

When you combine this with the fact that there may be thousands of concurrent transactions at any given time, over-subscribing commodity compute to perform complex analytics won't get you the results you need in time to maximize the value of Big Data.

This is where Big Compute can perform the same operations thousands of times faster.  In many cases, the value of the data is sensitive to the amount of time needed to compute it.  There are many examples of this - e-commerce is just a popular one.  In other cases, the data set itself is changing (generally growing) rapidly.  If analytics take too long, the results may already be obsolete or irrelevant once delivered.
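One way to picture this time sensitivity is a per-transaction latency budget: if the analytics can't answer inside the window of the transaction, the result is discarded and most of its value is lost.  The Python sketch below shows that pattern; score_offer, the cart contents, and the budget value are hypothetical placeholders, and a real system would hand the scoring off to a Big Compute backend.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

BUDGET_SECONDS = 0.05  # hypothetical window while the customer is checking out

def score_offer(cart, model_seconds):
    """Hypothetical analytics step; a real system would call a Big Compute backend."""
    time.sleep(model_seconds)  # pretend this is a heavy model evaluation
    return {"offer": "accessory-bundle", "cart_size": len(cart)}

def recommend(cart, model_seconds):
    """Return a recommendation only if it arrives inside the transaction window."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(score_offer, cart, model_seconds)
        try:
            return future.result(timeout=BUDGET_SECONDS)
        except TimeoutError:
            # Too slow: the transaction has moved on and the result has lost its value.
            return None

cart = ["laptop", "mouse"]
print(recommend(cart, model_seconds=0.01))  # fast enough -> offer shown in the moment
print(recommend(cart, model_seconds=0.20))  # too slow    -> None, the moment is gone
```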

Simply put, Big Compute powered by HPC is the fastest, most efficient way to derive value from data at scale - at the exact moment needed.

This brings us to the future problem with commodity compute.

Innovation in Algorithms
How do you derive future value from the same data you have (or are collecting) today?  If we look at Big Data as a two-part problem - storing the data and analyzing the data - we quickly realize where the greatest potential for innovation is.  It's not in storage because, although challenging, we've seen densities increase dramatically since the dawn of computing.  As a (crude) point of reference, a consumer could buy a three-terabyte hard disk in 2014 for less than the cost of a 200-gigabyte one just 10 years prior.  Higher storage densities mean less infrastructure to manage, and thus make storing large data sets more practical over time as well (not just cheaper).  So we can rest easy knowing that, all things being equal, as the data sets grow, so will the storage to hold them in a relatively cost-effective way.

Obviously the most room for innovation is in the analytics algorithms themselves.  We will see both the speed and the quality of the computations increase dramatically over time.  Commodity compute is a non-starter for algorithms that are too complex to run quickly against large data sets; thanks to Big Compute, there's no need to compromise.

Just imagine the opportunities we'd miss if we avoided problems that are seemingly too hard to solve.  Big Compute makes it possible to run the most complex algorithms quickly, and the sky's the limit when it comes to the types of analytics we'll see as a result.

Big Compute will help Big Data evolve to not just be "bigger," but to be far more meaningful than we can ever imagine.

More Stories By Leo Reiter

Leo Reiter is CTO of Nimbix, a provider of cloud-based High Performance Computing and Big Data platforms and applications that help organizations solve their most complex problems faster and more easily.
