Big Data Governance for Good or Evil

Lessons of the NSA PRISM Initiative

In the days since the news of the NSA’s secret PRISM spying – oops, surveillance – initiative broke, there has been no end of consternation among the media and the Twitterverse. And regardless of where you fall on the political spectrum or what you think of the morality of the NSA’s efforts to collect information about our phone calls or social media interactions, one clear fact shines through: Big Data are real. They are here to stay. But they are also increasingly dangerous. As I explain in my book The Agile Architecture Revolution, the more powerful the technology, the more important governance becomes. So too with Big Data.

PRISM’s Big Data Governance Lessons
Most people would agree that finding terrorists and stopping them before they can wreak havoc is a good thing. It is also safe to assume that most people would allow that the US Government should be in the intelligence-gathering business, if only to stop the aforesaid terrorists. Countries have been gathering intelligence for millennia, after all, and victories frequently go to the adversary with the better intelligence. Why, then, are people livid about the NSA this time around?

The answer, of course, is that we’re not angry that the NSA is gathering intelligence on terrorists. We’re upset that the NSA is gathering intelligence on everybody else, including ourselves. We’re not talking about some James Bond-style spy mission here. We’re talking about Big Data.

Here, then, is PRISM Big Data lesson number one: it’s not just the data you want that are important; you also have to worry about the data you don’t want. Traditional data governance generally focuses on the data you want: let’s make sure our data are clean, correct, and properly secured. When we have a limited quantity of data and they all have value, then issues like data quality are relatively straightforward (although achieving data quality in practice may still be a major headache).

In the Big Data scenario, however, we’re miners looking for that nugget of gold hidden in vast quantities of dross. Yes, we must govern that nugget of value, but that’s the easy task, relatively speaking. The lesson from PRISM is that we must also govern the dross: the data we don’t want, because they open up a range of governance challenges like the privacy issues at the core of the PRISM scandal.

Your Big Data governance challenge may not be privacy related, but the fact remains that the more leftover data you have, the harder it is to govern them. After all, just because you don’t find value in Big Data doesn’t mean your competition or a hacker won’t.
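To make lesson one concrete, here is a minimal sketch (in Python) of what governing the leftover data might look like: every record gets classified as valuable or leftover, and the leftovers get an explicit retention and access policy instead of simply piling up. The record fields, the is_valuable() rule, and the 30-day retention window are all hypothetical assumptions for illustration, not a prescription.

from datetime import datetime, timedelta, timezone

RETENTION_FOR_LEFTOVERS = timedelta(days=30)   # assumed policy, not a standard

def is_valuable(record):
    # Stand-in for whatever analytics decides a record is a "nugget" of value.
    return record.get("relevance_score", 0.0) >= 0.8

def govern(records, now):
    keep, purge, restrict = [], [], []
    for rec in records:
        if is_valuable(rec):
            keep.append(rec)        # full governance: quality, security, lineage
        elif now - rec["collected_at"] > RETENTION_FOR_LEFTOVERS:
            purge.append(rec)       # leftover data past its retention window
        else:
            restrict.append(rec)    # leftover data: restrict access until purged
    return {"keep": keep, "purge": purge, "restrict": restrict}

now = datetime.now(timezone.utc)
sample = [
    {"relevance_score": 0.95, "collected_at": now - timedelta(days=2)},
    {"relevance_score": 0.10, "collected_at": now - timedelta(days=45)},
    {"relevance_score": 0.10, "collected_at": now - timedelta(days=3)},
]
print({name: len(bucket) for name, bucket in govern(sample, now).items()})
# -> {'keep': 1, 'purge': 1, 'restrict': 1}

The point of the sketch is simply that the dross needs its own explicit policy; the specific rules will depend entirely on your data and your risks.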

The second lesson from PRISM: metadata may be Big Data as well. Data professionals are used to thinking of metadata as having technical value but little worth outside the bowels of the IT organization. In the case of PRISM, however, the NSA went after call detail records (CDRs), not the calls themselves. True, I felt a strangely geeky thrill when President Obama used the word metadata – and used it correctly, by the way – but the recent focus on call metadata only serves to highlight the fact that the metadata themselves may be the most valuable Big Data you own. Ask yourself: how robust is your metadata governance? If it’s not every bit as rock solid as your everyday data governance, then perhaps you’re not ready for Big Data after all.
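To see why metadata deserve the same governance rigor as the data themselves, consider this toy sketch of call detail records: even with no call audio at all, the metadata reveal who talks to whom, how often, and when. The CDR fields and phone numbers here are simplified, invented examples, not the NSA’s or any carrier’s actual schema.

from collections import Counter
from dataclasses import dataclass

@dataclass
class CDR:
    caller: str       # originating number
    callee: str       # destination number
    start: str        # ISO timestamp of the call
    duration_s: int   # call length in seconds

def contact_graph(cdrs):
    """Count how often each pair of numbers is in contact -- pure metadata."""
    return Counter(frozenset((c.caller, c.callee)) for c in cdrs)

cdrs = [
    CDR("555-0100", "555-0199", "2013-06-01T09:00:00", 120),
    CDR("555-0100", "555-0199", "2013-06-02T09:05:00", 95),
    CDR("555-0100", "555-0142", "2013-06-02T22:30:00", 600),
]

for pair, calls in contact_graph(cdrs).most_common():
    print(sorted(pair), "->", calls, "call(s)")

A few lines of counting are enough to surface relationships and patterns of life; that is exactly why the metadata themselves demand rock-solid governance.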

PRISM lesson number three: Big Data analytics apps can be data governance tools themselves, particularly when the central challenge is data quality. Terrorists, after all, aren’t quite stupid enough to send tweets like “buying #plasticexplosives now, meet me at the #Boston #Marathon.” They may be fanatics, but let’s posit that we’ve already taken out the real numbskulls, OK? We can safely assume terrorists are actively seeking to obscure their communications, which, from the enterprise perspective, is an example of (in this case intentionally) poor data quality.

The NSA naturally has sophisticated algorithms for cutting through such obfuscation. As your Big Data sets grow, you’ll need similarly sophisticated tools for cleaning up run-of-the-mill data quality issues. Remember, the bigger the data sets, the more diverse and messy your data quality challenges will become. After all, fixing mailing address formats in your ERP system is dramatically simpler than bringing a vast hodgepodge of structured, semi-structured, and unstructured information into some kind of order.
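As a trivially small example of that kind of cleanup, here is a sketch of normalizing inconsistent mailing address strings into one canonical form. The abbreviation table and rules are illustrative assumptions only; real address standardization is far more involved.

ABBREVIATIONS = {"street": "St", "str": "St", "avenue": "Ave", "av": "Ave",
                 "road": "Rd", "boulevard": "Blvd"}

def normalize_address(raw):
    words = []
    for word in raw.replace(",", " ").split():   # split() also collapses stray whitespace
        key = word.lower().rstrip(".")
        words.append(ABBREVIATIONS.get(key, word.title()))
    return " ".join(words)

print(normalize_address("  742   evergreen STREET,  springfield "))
# -> "742 Evergreen St Springfield"

Now imagine the equivalent logic for free-text documents, sensor feeds, and social media streams, and the scale of the Big Data quality problem becomes clear.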

On to PRISM lesson number four: Your Big Data analytics results may not only be valuable, they may also be dangerous. While it’s common to liken Big Data analytics to mining for gold, in reality it may be more like mining for uranium. True, uranium has monetary value, but put too much pure uranium in the same place and you’re asking for Big Trouble – Trouble with a capital T.

For example, US Census data are publicly available, but the Census Bureau is not allowed to release any personally identifiable information. However, if it turns out that there is, say, only one Native American family with two children in a given zip code, then it may be possible to uniquely identify them by crunching the data. As a result, the Census Bureau must be very careful not to publish any data that may lead to such results.
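Here is a minimal sketch of the disclosure-avoidance idea the Census example implies: before publishing aggregate counts, suppress any cell whose count falls below a threshold, since tiny cells can re-identify individual households. The threshold, field names, and rows below are invented for illustration, not the Bureau’s actual methodology.

from collections import Counter

MIN_CELL_SIZE = 5   # assumed disclosure-avoidance threshold, not an official rule

def publishable_counts(households, keys):
    cells = Counter(tuple(h[k] for k in keys) for h in households)
    return {cell: (count if count >= MIN_CELL_SIZE else "suppressed")
            for cell, count in cells.items()}

households = [
    {"zip": "12345", "ethnicity": "Native American", "children": 2},
    {"zip": "12345", "ethnicity": "White", "children": 2},
    # ... imagine many more rows here ...
]
print(publishable_counts(households, ("zip", "ethnicity", "children")))
# Cells with fewer than MIN_CELL_SIZE households are reported as "suppressed".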

Similarly, a significant danger in the NSA analysis is the risk of false positives. Mistakenly identifying an innocent citizen as a terrorist is an appalling risk that outweighs ordinary privacy concerns – at least in the opinion of that innocent citizen. And while the more data the NSA crunches, the less likely a false positive may be, it also follows that such false positives are all the more dangerous for their rarity.
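To see how sharp the false-positive problem is, here is a back-of-the-envelope Bayes’ theorem calculation. The prevalence and accuracy figures are made up for illustration; the point is only that when the thing you’re looking for is extremely rare, even a highly accurate classifier flags mostly innocent people.

prevalence = 1e-6            # assumed: 1 in a million people is a real threat
true_positive_rate = 0.99    # assumed detection rate
false_positive_rate = 0.001  # assumed: flags 0.1% of innocent people

p_flagged = (true_positive_rate * prevalence
             + false_positive_rate * (1 - prevalence))
p_threat_given_flag = true_positive_rate * prevalence / p_flagged

print(f"P(actual threat | flagged) = {p_threat_given_flag:.4%}")
# Roughly 0.1% -- under these assumptions, the vast majority of flagged people are false positives.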

On to the fifth lesson, what ZapThink likes to call the Big Data corollary to Parkinson’s Law. You may recall that Parkinson’s Law states that the amount of work you have will expand to fill the available time. The Big Data corollary states that the amount of data you collect will expand to consume your ability to store and process it. In other words, if it’s possible to collect Big Data, then somebody will. It’s a question of what to do with the data, not whether to collect them in the first place. So let’s not worry about whether the NSA should collect the data it does. If they don’t, then someone else will – or already has. Any Big Data governance effort faces the same challenge: what to do with your data, not whether to collect them in the first place.

Finally, the sixth lesson, which is actually a lesson from something the NSA isn’t doing. Note that in the case of the NSA, current data are more valuable than historical data, even historical data that are one day old. Their paramount concern is to mine current intelligence: what terrorists are doing right now. But your problem area might find value in historical data as well as current data. If your problem deals with historical trends, then your data sets have just ballooned again, as have your data governance challenges.

The ZapThink Take
The NSA was only collecting phone call metadata because those metadata met their needs. But what about the data themselves—the call audio? Perhaps they are unable to collect such vast quantities of data today. If so, it’s only a matter of time before they can. The question is, once they’re able to collect all call audio, will they? Yes, of course they will. The Big Data corollary to Parkinson’s Law in action, after all.

In fact, we might as well just go ahead and assume that somewhere in the Federal Government, they’re collecting all the data – all the phone calls, all the emails, all the tweets, blog posts, forum comments, log files, everything. Because even if they aren’t quite able to amass the whole shebang yet, it’s just a matter of time till they can. And while this scenario seems like a page out of Orwell’s 1984, the most important lesson here is that data governance is now of central importance. It’s no longer a question of whether we can collect Big Data. The entire question is what we should do with Big Data once we have them.

More Stories By Jason Bloomberg

Jason Bloomberg is a leading IT industry analyst, Forbes contributor, keynote speaker, and globally recognized expert on multiple disruptive trends in enterprise technology and digital transformation. He is ranked #5 on Onalytica’s list of top Digital Transformation influencers for 2018 and #15 on Jax’s list of top DevOps influencers for 2017, the only person to appear on both lists.

As founder and president of Agile Digital Transformation analyst firm Intellyx, he advises, writes, and speaks on a diverse set of topics, including digital transformation, artificial intelligence, cloud computing, devops, big data/analytics, cybersecurity, blockchain/bitcoin/cryptocurrency, no-code/low-code platforms and tools, organizational transformation, internet of things, enterprise architecture, SD-WAN/SDX, mainframes, hybrid IT, and legacy transformation, among other topics.

Mr. Bloomberg’s articles in Forbes are often viewed by more than 100,000 readers. During his career, he has published over 1,200 articles (over 200 for Forbes alone), spoken at over 400 conferences and webinars, and he has been quoted in the press and blogosphere over 2,000 times.

Mr. Bloomberg is the author or coauthor of four books: The Agile Architecture Revolution (Wiley, 2013), Service Orient or Be Doomed! How Service Orientation Will Change Your Business (Wiley, 2006), XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996). His next book, Agile Digital Transformation, is due within the next year.

At SOA-focused industry analyst firm ZapThink from 2001 to 2013, Mr. Bloomberg created and delivered the Licensed ZapThink Architect (LZA) Service-Oriented Architecture (SOA) course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, which was acquired by Dovel Technologies in 2011.

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting), and several software and web development positions.
