Scaling Big Data Fabrics

The size of the network might be the least interesting aspect of scaling Big Data fabrics

When people talk about Big Data, the emphasis is usually on the Big. Certainly, Big Data applications are distributed largely because the data on which computations are executed exceeds what a typical application can handle. But scaling the network that provides connectivity between Big Data nodes is not just about creating massive interconnects.

In fact, the size of the network might be the least interesting aspect of scaling Big Data fabrics.

Just how big is Big Data?

Not that long ago, I asked the question: how large is a typical Big Data deployment? I was expecting, as I suspect many people do, that the Big in the title meant the deployments would be, in a word, big. But the average Big Data deployment is actually far smaller than most people realize. I grabbed a list from a HadoopWizard article dating back to last year.

What is remarkable about this list is just how unremarkable the sizes of the deployments are. Sure, the list is dated, and deployments have certainly gotten larger. And yes, companies like Yahoo! are pushing scaling limits. But the average deployment, if you take Yahoo! out, is a mere 113 nodes. Even if every node is multi-homed to two switches, that means the average deployment could be handled by 4 access switches.

Even if every deployment quadrupled, you would still only be talking about 16-access-switch deployments. When our industry talks about scaling, we usually think well beyond 16 switches.
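For a rough sense of the port math behind those switch counts, here is a minimal sketch in Python, assuming 64-port access switches (my assumption; the article does not specify a port count):

# Rough port arithmetic behind the access-switch counts above.
# Assumption (mine, not the article's): 64 ports per access switch.
import math

ports_per_switch = 64
avg_nodes = 113        # average deployment size once Yahoo! is excluded
links_per_node = 2     # each node multi-homed to two switches

ports_needed = avg_nodes * links_per_node             # 226 access ports
print(math.ceil(ports_needed / ports_per_switch))     # 4 access switches

# Quadruple the deployment and the fabric is still small.
big_ports = (avg_nodes * 4) * links_per_node           # 904 access ports
print(math.ceil(big_ports / ports_per_switch))         # ~15, in line with the 16 above

Even with generous headroom for uplinks and growth, that is a fabric well within the reach of a modest leaf-spine design.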

Is scaling an issue?

So if deployments are small, does that mean scaling is a solved issue? The answer is both yes and no. If the end game is building individual networks for each Big Data application, then yes. While the web-scale companies will always need more, the vast majority of customers will be well served by the scaling limits that exist today.

But the issue with Big Data is that it isn’t really just Big Data. When we talk about Big Data, we usually ought to be using a different moniker. For most people, Big Data is less about Hadoop and more about clustered applications (at least so far as the network is concerned). By expanding the definition to clustered applications, you move past Hadoop and into clustered compute and even clustered storage environments. Anything clustered has a dependency on some kind of interconnect.

The challenge in clustered environments

The challenge of all these types of clustered environments is that their requirements vary. For Hadoop, job completion times are dominated by the compute side of things, so the network is really about providing a congestion-free interconnect that is always available. For clustered compute, latency might be more important. And for multi-tenant environments, it might be most important to isolate traffic. Whatever the application, the point is that the requirements are highly contextual.

Which brings us back to scaling.

The real issue in scaling Big Data fabrics is less about making a small interconnect larger. Networks are not going to scale along the lines of single applications (or at least they shouldn’t). The actual scaling challenge is plotting a course from a single Big Data application to an environment that hosts multiple clustered applications, each with different requirements.

This might seem dead simple, but it isn't. When people deploy Big Data applications today, the Big part leads them to purpose-build architectures with massive data workloads in mind. In many cases, this includes building out separate networks aimed at specific workloads.

But even in the best cases, Hadoop makes use of things like rack awareness, which helps provide application resilience while minimizing traffic across the network. Regardless of whether you view this as a benefit for the application or for the network, the result is that proximity and locality are built into the infrastructure. This creates interesting considerations (and potential limitations) when expanding. If you want to grow a cluster, you can't just use any available server in the datacenter; some servers are preferable to others based solely on their physical location.
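To make that locality concrete, here is a minimal sketch of a Hadoop topology script, the hook behind rack awareness. Hadoop calls the script named by the net.topology.script.file.name property with one or more node addresses and expects one rack path per address on stdout; the addresses and rack names below are hypothetical, purely for illustration.

#!/usr/bin/env python3
# Minimal Hadoop topology script sketch: maps node addresses to rack paths.
# Hadoop invokes it with one or more IPs/hostnames as arguments and reads
# one rack path per argument from stdout.
import sys

RACK_MAP = {                      # hypothetical inventory
    "10.0.1.11": "/dc1/rack1",
    "10.0.1.12": "/dc1/rack1",
    "10.0.2.21": "/dc1/rack2",
}
DEFAULT_RACK = "/dc1/default-rack"

for node in sys.argv[1:]:
    print(RACK_MAP.get(node, DEFAULT_RACK))

Because HDFS uses these rack paths to decide where replicas live and how to keep traffic local, the physical layout ends up baked into data placement, which is exactly why "any available server" is rarely the right server.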

Scalability is more than scaling

Making a scalable interconnect for these types of clustered applications is about more than supporting a large (or, as I mentioned previously, not so large) number of nodes. The objective for scalability is to provide a graceful path from start to finish. This means architectures need to consider not just what the end state is but also how to get from here to there.

With Hadoop, this means that things like locality have to be an explicit consideration in architecting the interconnect. Is the right answer a bunch of cross-connects zigzagging across the datacenter? Maybe. Or it might be a different architectural approach to providing interconnect between clustered servers.

Additionally, it isn’t just about one application. Architecting for bandwidth because you have a Hadoop-y application is great, but what if the next clustered application is latency-sensitive? Or if it brings with it a set of auditing and compliance requirements more typical of HIPAA-style applications?

If the architecture doesn’t explicitly consider how to expand beyond a single application, even if it can grow to thousands of switches, it won’t really matter.

The bottom line

The punch line here is that scaling is not only about growing larger. It also means potentially growing more diverse. And if there is one thing that the Hadoop deployment numbers tell me, it’s that people are still experimenting. If you are still experimenting, how can you predict with certainty what the next 5 or 10 years will mean in terms of applications for your business? You can’t. Which means that the most important architectural objective might go well beyond the number of switches in a deployment. Scalability could be about building flexibility into your datacenter. How do you get a bunch of different purpose-built capabilities into a single, general-purpose network? Answering that might be the real key to determining how to scale Big Data fabrics.

[Today’s fun fact: It is against the law to use the Star Spangled Banner as dance music in Massachusetts. There go my party plans!]

The post Scaling Big Data Fabrics appeared first on Plexxi.

More Stories By Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
