Can We Finally Find the Database Holy Grail?

What are the alternative patterns for designing a truly distributed transactional database system?

The world runs on transactional database systems. Every business depends on them, and we each interact with them many times a day. Furthermore, the world needs to build thousands more applications on top of transactional database systems to support the next-generation web. Nothing controversial there, but there is a problem: transactional database systems have stubbornly refused to join the 21st century.

The rest of the world is moving toward data center architectures predicated on thousands of commodity machines, commodity networks, and "scale-out" designs. These data centers will offer on-demand computing services that can instantly increase or decrease the capacity available to applications as needed. It's easy to just add more web servers, application servers, or storage servers as required. Unfortunately, while that works at every other level of the computing stack, it does not work at the database layer. Database systems are scale-up systems, not scale-out systems.

Traditionally, if you want a transactional database to go faster, you need a bigger machine. Period. Solving that problem is the Holy Grail of the database world. And as with the other Holy Grail, many people have given up the quest. But there is a new set of ideas that may just change the game. There are three common design patterns for building distributed transactional databases ("Shared Disk," "Shared Nothing," and "Synchronous Commit"), and there is now a new idea called Durable Distributed Cache (DDC). It is this fourth approach that we at NuoDB are really excited about. But it makes sense to provide some background before talking about it.

Database transactions are pretty powerful things. People have often asked me to name applications that really need transactions, and my response is usually to ask them to name an application that really needs a database. It's the wrong question in both cases, as you plainly don't really need transactions, databases, compilers, operating systems, etc., any more than you really need an umbrella. The absence of some of these things can be worked around, and other tools and subsystems can be substituted with rudimentary home-baked alternatives. The reason for using database transactions is not that you can't find some kludgy way of working around them in your application, but that they vastly simplify the semantics of your system, especially as it relates to exceptional conditions. That simplification serves to reduce cost, reduce implementation time, and increase the quality and maintainability of the end product. Transactional guarantees are highly desirable in almost any application, but we sometimes consider the computational (and dollar) costs too high. The best answer lies not in using transactional systems more sparingly, but in a technological innovation that allows us to use them more pervasively.

To illustrate the situation, let's take an example in which you and I both want to take out the last $1,000 from the same account at the same time. The account balance is stored in an ACID-compliant transactional database, and therefore the database system must ensure that at most one of us succeeds. There are multiple ways to do this if both transactions have to run on the same machine. Whatever algorithm is used, it effectively orders the transactions so that whichever of us is judged to have arrived first gets the $1,000. That works. And one cool thing is that it will work even faster if I get a bigger machine.
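
Below is a minimal sketch of that single-machine case. It is not NuoDB code; it uses SQLite (bundled with Python) simply because it is a self-contained ACID engine, and the table name, account id, and withdraw helper are illustrative assumptions. The point is that the database orders the two withdrawals so that exactly one of them sees the $1,000 and commits.

```python
import sqlite3

DB = "bank.db"

def setup():
    # isolation_level=None means each statement autocommits here.
    con = sqlite3.connect(DB, isolation_level=None)
    con.execute("CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    con.execute("DELETE FROM accounts")
    con.execute("INSERT INTO accounts (id, balance) VALUES (1, 1000)")
    con.close()

def withdraw(amount):
    # Manual transaction control so the BEGIN/COMMIT boundaries are explicit.
    con = sqlite3.connect(DB, timeout=5.0, isolation_level=None)
    try:
        # Take the write lock up front so concurrent withdrawals are ordered,
        # not interleaved; the second transaction has to wait its turn.
        con.execute("BEGIN IMMEDIATE")
        (balance,) = con.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
        if balance < amount:
            con.execute("ROLLBACK")
            return False  # the other transaction already took the money
        con.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
        con.execute("COMMIT")
        return True
    finally:
        con.close()

if __name__ == "__main__":
    setup()
    # Run these two calls from separate threads or processes and the outcome
    # is the same: exactly one returns True, the other sees a zero balance.
    print(withdraw(1000))  # True
    print(withdraw(1000))  # False
```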

But what if I don't want to buy a bigger machine? What if I want to speed things up by using a second small machine? Even in our very simple example you can quickly see the problem. Each of us has a transaction running on our behalf that is trying to decrement the $1,000 account balance by $1,000. If our transactions run on different machines, then we each have more machine resources allocated to us and might expect to go faster, but our machines have to coordinate. There is no choice, because it is not allowable for both transactions to succeed. In principle, each machine needs to obtain permission to update the value of the account balance. This involves network communication and is orders of magnitude more expensive than coordinating the two transactions on a single machine. In many cases a second machine can actually slow down your application, hence the traditional answer of just getting a bigger machine.
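
To get a feel for the cost of that coordination, here is a rough back-of-the-envelope sketch. The latency figures are assumptions reflecting commonly cited orders of magnitude for an uncontended in-process lock versus an intra-datacenter network round trip, not measurements of any particular system.

```python
# Rough orders of magnitude (assumed, not measured):
LOCAL_LOCK_NS = 100          # uncontended in-process lock: ~100 ns
LAN_ROUND_TRIP_NS = 500_000  # network round trip within a data center: ~0.5 ms

# Maximum rate at which conflicting updates to one hot account can be ordered.
local_updates_per_sec = 1_000_000_000 / LOCAL_LOCK_NS
distributed_updates_per_sec = 1_000_000_000 / LAN_ROUND_TRIP_NS

print(f"one machine:  ~{local_updates_per_sec:,.0f} coordinated updates/sec")
print(f"two machines: ~{distributed_updates_per_sec:,.0f} coordinated updates/sec")
# The second machine doubles the raw compute, but every conflicting update now
# pays a network round trip, so a contended workload can get slower, not faster.
```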

The Ever Bigger Machine strategy is far from ideal. Obviously there are limits to how big your big machine can be. You have to hope that your new web site does not attract more active users than your maxed-out database server can handle. In addition to on-demand capacity, scale-out systems make other very tempting promises. Aside from the much more attractive base economics of commodity hardware and/or cloud deployment, they also allow you to provision capacity to match your load, and they potentially offer much better redundancy and availability models. Which brings us to the topic of this blog series: what are the alternative patterns for designing a truly distributed transactional database system?

In the rest of this series I'll lay out my views of the four alternative models: Shared Disk, Shared Nothing, Synchronous Commit, and Durable Distributed Cache (DDC).

More Stories By Barry Morris

Barry Morris is CEO & Co-Founder of NuoDB, Inc. An accomplished software CEO with over 25 years of industry experience in the USA and Europe, running private and public companies ranging in scale from early startup phase to 1,000+ employees, he loves to build companies around industry-changing paradigm-shifts in technology. Morris was previously CEO of StreamBase and Iona Technologies.
