By Daniel Thompson
March 14, 2013 12:00 PM EDT
I came across an article discussing NoSQL and partition tolerance.
The NoSQL Partition Tolerance Myth (link)
I may not entirely agree with the author.
But what most NoSQL systems offer is a peculiar behavior that is not partition tolerant, but partition oblivious instead.
No argument here. While NoSQL implementations are aware that nodes have left, they are not aware that said nodes have formed a separate partition.
In this case, we would want failure detection and carry out those transfers where the accounts are both on the same side of the partition, while denying or deferring transfers that cross the chasm.
The author is assuming that the account is only on one side of the partition. If that is the case, it doesn't matter whether the NoSQL implementation is eventually consistent or not. If the account is on both sides of the partition, the solution the author provides still results in an inconsistent state.
In such cases, it is almost always better to build services that degrade gracefully under partitions.
Bingo! The author implies that all NoSQL implementations sacrifice consistency in the event of a partition. That is not true. There are AP implementations (available & partition tolerant), and there are CP implementations (consistent & partition tolerant). However, an AP implementation can function as a CP implementation depending on the configuration and the application. For example, if accounts do not have multiple owners and the application refuses to withdraw funds from or deposit funds to an account it cannot reach, the system stays consistent even though the store itself is nominally AP.
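Here is a minimal sketch of that application-level guard, assuming single-owner accounts; the names (AccountStore, PartitionError) are mine for illustration, not any real store's API.

```python
# Hypothetical sketch: the application refuses to touch an account it cannot
# currently reach, trading availability for consistency during a partition.

class PartitionError(Exception):
    """Raised when the replicas holding an account are unreachable."""

class AccountStore:
    def __init__(self):
        self.balances = {"alice": 100}
        self.reachable = {"alice"}      # accounts on our side of the partition

    def read(self, account):
        if account not in self.reachable:
            raise PartitionError(account)
        return self.balances[account]

    def write(self, account, new_balance):
        if account not in self.reachable:
            raise PartitionError(account)
        self.balances[account] = new_balance

def withdraw(store, account, amount):
    """Deny the withdrawal outright if the account is unreachable."""
    try:
        balance = store.read(account)
    except PartitionError:
        return "denied: account unreachable during partition"
    store.write(account, balance - amount)
    return "ok"

store = AccountStore()
print(withdraw(store, "alice", 50))     # "ok"
store.reachable.clear()                 # simulate the account landing on the far side
print(withdraw(store, "alice", 25))     # denied during the partition
```

Because each account lives on exactly one side of the partition, every account the application can reach stays consistent; the ones it cannot reach are simply unavailable.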
This post is not perfect and it is a bit outdated, but I think it's helpful nonetheless.
Visual Guide to NoSQL Systems (link)
If I were to build a bank based on Dynamo, the granddaddy of all first-generation NoSQL data stores, it would silently split into two halves, like a lobotomized patient.
I would not say that Amazon Dynamo is the grandparent of all NoSQL implementations. I would say that there are two parents: Amazon Dynamo and Google BigTable. Then there are the grandparents…
A Brief History of NoSQL (link)
In this scenario, the hypothetical backend for Banko Dynamo would not only not provide any indication of failure, but allow a customer to create as many new accounts as there are partitions, one in each.
Why is the author now using account creation instead of withdrawals and deposits, and what is the relevance of creating multiple accounts? If my debit card does not work, I do not create a new account. That, and I maintain two checking accounts and one savings account with the same bank.
Let’s go back to withdrawals and deposits. If the accounts do not have multiple owners, it does not matter whether the NoSQL implementation is eventually consistent or not. If the accounts do have multiple owners, it depends on the NoSQL implementation. If it is inspired by Google BigTable (e.g. Apache HBase) or by both Google BigTable and Amazon Dynamo (e.g. Apache Cassandra), it does not matter. These NoSQL implementations are CP, or can be configured to be CP. If it is inspired only by Amazon Dynamo and it is eventually consistent, it may or may not matter…
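A minimal sketch of the quorum arithmetic behind "can be configured to be CP": Dynamo-style stores expose replica counts and read/write quorums (in Cassandra, via consistency levels), and whenever R + W > N every read quorum overlaps the latest write quorum. The function name below is mine, for illustration only.

```python
# If read_quorum + write_quorum > n_replicas, a read always touches at least
# one replica that acknowledged the most recent write, so reads see the newest
# acknowledged value -- effectively CP behaviour from an AP-capable store.

def is_strongly_consistent(n_replicas, write_quorum, read_quorum):
    return read_quorum + write_quorum > n_replicas

# N=3 with QUORUM reads and writes (2 each): 2 + 2 > 3 -> consistent reads.
print(is_strongly_consistent(3, 2, 2))   # True
# N=3 with ONE reads and ONE writes: 1 + 1 <= 3 -> stale reads possible.
print(is_strongly_consistent(3, 1, 1))   # False
```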
Let’s assume that account withdrawals / deposits are separate from the accounts themselves and that the account is both consistent and available during a partition. The account has multiple owners but it is more or less read only.
My account has a balance of $100 (calculated from the withdrawals and deposits). Now, there are two partitions: A and B. I purchase $50 of St. Bernardus Abt 12 at Binny’s via partition A. Partition A now has withdrawal #1. I have dinner at Baume & Brix for $75 via partition B. Partition B now has withdrawal #2. My account has a balance of $50 in partition A. It has a balance of $25 in partition B. My account should have a balance of minus $25.
Does it matter? My account may not show a balance of minus $25 right now, but it will. When the partition is repaired, the application will be able to access all of the withdrawals and deposits on my account. I may be charged an overlimit fee.
What if the NoSQL implementation sacrificed availability? My payment at Binny’s did not go through. That’s not a problem. No St. Bernardus Abt 12 for me. My payment at Baume & Brix did not go through. That’s a problem. I can’t pay for dinner. Baume & Brix can’t accept my payment nor that of any other customer paying with a debit card from the same bank as me via partition B.
What if I made a deposit of $25 at an ATM via partition A? My account will have a balance of $0 after the partition is repaired. I will not be charged an overlimit fee.
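A small sketch of the scenario above, with each partition keeping its own log of withdrawals and deposits that only gets merged once the partition heals; the amounts match the example in the text.

```python
opening_balance = 100

partition_a = [-50]          # Binny's: St. Bernardus Abt 12
partition_b = [-75]          # dinner at Baume & Brix

balance_a = opening_balance + sum(partition_a)   # 50, as seen in partition A
balance_b = opening_balance + sum(partition_b)   # 25, as seen in partition B

# After the partition is repaired, the application sees every transaction.
merged = opening_balance + sum(partition_a) + sum(partition_b)
print(merged)                # -25 -> an overlimit fee may apply

# Variant: a $25 ATM deposit made via partition A during the partition.
partition_a.append(25)
merged = opening_balance + sum(partition_a) + sum(partition_b)
print(merged)                # 0 -> no overlimit fee
```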
There are other scenarios. Perhaps I’m charged an insufficient funds fee and Baume & Brix does not receive payment. Perhaps Baume & Brix later resubmits the payment and receives payment.
Do you really want to sell tickets from both halves of your system? By definition, there is no way you can guarantee uniqueness of those tickets. There will be customers holding identical tickets with identical seat numbers.
Maybe, maybe not. If there is only a single owner per ticket, then yes. However, there may be availability issues. For example, partition A has tickets 1-150 and partition B has tickets 151-200. If all the tickets in partition B have been purchased, visitors may be unable to purchase tickets despite the fact that there may be tickets available in partition A. If there are multiple owners per ticket, I would prefer a NoSQL implementation that is CP. In this case, I would prefer to sacrifice availability rather than consistency.
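A sketch of that ticket example, assuming the inventory is pre-split into fixed ranges per side of the partition (the ranges follow the text): tickets stay unique, so there are no duplicate seats, but one side can refuse sales while the other still has inventory.

```python
partition_a = set(range(1, 151))    # tickets 1-150
partition_b = set(range(151, 201))  # tickets 151-200

def sell(pool):
    if not pool:
        return None                 # "sold out" on this side of the partition
    return pool.pop()

# Sell everything reachable from partition B.
while sell(partition_b) is not None:
    pass

print(sell(partition_b))            # None: B refuses further sales...
print(len(partition_a))             # ...even though A still holds 150 tickets
```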
Here is a better example. What if I report my debit card stolen? Sacrificing availability is not appropriate. What if customer service is accessing my account via the partition with no availability? My debit card must be reported stolen or the thief can continue to make purchases with it. Sacrificing consistency is not appropriate. The thief can continue to make purchases with my debit card via the partition where my account has not been reported stolen. Perhaps account information should not be stored in a distributed system.
And if they did, the first-generation NoSQL stores usually take the ultimate punt by presenting all versions of the divergent objects to the application, and let the application resolve the mess.
No argument here.
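For what "let the application resolve the mess" looks like in practice, here is a minimal sketch of application-level sibling resolution, assuming the divergent versions are sets of transaction ids and the merge is a union (the classic shopping-cart style merge); this is an illustration, not any particular store's API.

```python
def resolve(siblings):
    """Merge divergent versions of an account's transaction log."""
    merged = set()
    for version in siblings:
        merged |= version           # keep every transaction seen anywhere
    return merged

# Two divergent versions produced on opposite sides of a partition.
version_a = {"txn-1", "txn-2"}      # seen only by partition A
version_b = {"txn-1", "txn-3"}      # seen only by partition B
print(resolve([version_a, version_b]))   # {'txn-1', 'txn-2', 'txn-3'}
```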
But if your data is that soft and inconsequential, why not just use memcached? It’s wicked fast, far faster than Mongo.
Perhaps because MongoDB is a document store and as such provides features that are not provided by key / value stores.
A lot of NoSQL developers pretend that being partition oblivious is a difficult thing to implement. This is false. It’s easy to make a program oblivious to a particular event; namely, you write no code to handle that event.
No argument here.
The thing that greatly helps first generation NoSQL data stores, the thing that enables them to package partition obliviousness as if it were equivalent to partition tolerance, is that they provide a very weak service guarantee in the first place. These systems cannot guarantee that, on a good day, your GET will return the latest PUT.
Sure they can. Configure quorum reads and writes (R + W > N) and every read quorum overlaps the most recent write quorum, so on a good day a GET does return the latest PUT.
In fact, eventual consistency means that a GET can return any previous value, including the Does Not Exist response from the very initial state of the system.
No argument here. Of course, not all NoSQL implementations are eventually consistent.
With all this being said, a NoSQL implementation may or may not be appropriate. To be more specific, a NoSQL implementation that is eventually consistent and sacrifices consistency in the event of a partition may or may not be appropriate. The behaviour is determined by the NoSQL implementation, its configuration, and the application that reads and writes to it. Whether that behaviour is appropriate or not depends on the business requirements.