Data Center Architecture: Together and Apart

The datacenter represents a diverse set of orchestrated resources bound together by the applications they serve

The challenge in architecting, building, and managing data centers is one of balance. There are competing forces working to push datacenter resources together and to pull them apart. Finding an equilibrium point that is technologically sustainable, operationally viable, and business friendly is challenging. The result is frequently a set of compromises whose costs outweigh the advantages.

Logically together

The datacenter represents a diverse set of orchestrated resources bound together by the applications they serve. At its simplest, these resources are physically co-located. At the extreme, they are geographically distributed across many sites. Whatever the physical layout, these resources are under pressure to be treated as a single logical group.

Resource collaboration - The datacenter is a collection of compute and storage resources that must work in concert in support of application workloads. The simple requirement of coordination creates an inward force pulling resources closer together, even if only logically. How can multiple elements work together towards a common goal if they are completely separate?

The answer is that they cannot. And as IT moves increasingly towards distributed applications, the interdependence between resources only grows.

Interestingly, the performance advantages of distributed architectures are only meaningful when communication between servers is uninhibited. If the network that makes communication possible slows down, the efficacy of the distributed architecture decreases. This means that datacenter architects must solve simultaneously for compute and storage demand, and the interconnect capacity required between them.
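
To make that interplay concrete, here is a minimal sketch in Python. The numbers and the fixed-overhead model are purely hypothetical, not measured from any real fabric; the point is only directional.

# Toy model: a job is split across N servers. Compute time shrinks with N,
# but each server must move a fixed volume of data across the fabric, so a
# slower interconnect claws back the gains of distributing the work.

def job_hours(compute_hours, servers, data_gb_per_server, fabric_gbps):
    compute = compute_hours / servers                         # parallel compute time
    transfer = (data_gb_per_server * 8) / fabric_gbps / 3600  # time on the wire, in hours
    return compute + transfer

print(job_hours(10, 1, 0, 100))      # 10.00 h on a single server
print(job_hours(10, 20, 200, 100))   # ~0.50 h: 20 servers, fast fabric, near-linear speedup
print(job_hours(10, 20, 200, 1))     # ~0.94 h: same servers, congested fabric erodes the win

However crude, the model captures the claim above: the speedup from distributing work is bounded by the slowest interconnect hop between the resources involved.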

Resource availability - Building out a datacenter is an exercise in matching resource capacity to demand. But not just in aggregate.

Individual applications, tenants, and geographies all place localized demands on datacenter resources. If the aggregate demand is sufficient but the resources exist in separate resource pools, you end up in a perpetual state of mismatch. There is always too much or too little workload capacity. The former means you have overbuilt. The latter leaves you wanting for more, which oddly enough means you end up having to overbuild.
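
As a rough illustration, the sketch below uses hypothetical capacities and demands (not drawn from any real deployment) to show how aggregate headroom can coexist with a local shortfall when pools cannot share.

# Three isolated resource pools. In aggregate there is spare capacity, but
# because the pools cannot share, one site sits overbuilt while another
# cannot place its workloads at all.

pools  = {"site-a": 100, "site-b": 100, "site-c": 100}   # capacity units per pool
demand = {"site-a": 40,  "site-b": 70,  "site-c": 130}   # localized demand

for site, capacity in pools.items():
    surplus = capacity - demand[site]
    print(f"{site}: {'overbuilt' if surplus >= 0 else 'short'} by {abs(surplus)} units")

print(f"aggregate surplus: {sum(pools.values()) - sum(demand.values())} units")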

Combating these resource islands requires pulling resources closer together. In the simplest case, this is a physical act. But even if resources cannot be physically co-located, there are entire classes of technologies whose primary function is to allow physically separate resources to behave as if they are in close proximity.

Of course this does not come without a cost. The complexity of managing the disparate technologies required to logically pool physically separate resources can be prohibitively difficult. Even the most skilled specialists have to invest time in creating a properly engineered fabric between sites that accounts for queuing, prioritization, load balancing, and so on. The number of protocols and technologies required is high, and the volume of devices over which they must be applied can be huge. The result is a level of complexity that makes the network more expensive to manage and more difficult to change.

Organizational process - Friction is greatest at boundaries. Whenever a task requires involvement across different organizations or teams, the act of human coordination imposes a tax on both effort and time. In larger organizations, the handoff between teams might be automated to reduce communication mistakes (as with a ticketing system), but the shift in context is still expensive.

This creates organizational pressure to pull together things that might otherwise be separate. If distributed resources can be logically centralized and managed within a common organization, the dependence on outside teams is reduced. Removing boundaries from common workflows lowers organizational friction and makes the overall task of managing the infrastructure easier.

Physically separate

At the same time that forces are pulling things together, there are equally strong oppositional forces exerting outward pressure on datacenter resources.

Business continuity - For many companies, the datacenter represents a mission critical element of their infrastructure. For companies whose existence depends on the presence of the resources within the datacenter (be they data, servers, or applications), it is untenably risky to rely on a single physical site. This exerts an outward force on resources as companies must create multiple physical sites, typically separated by enough distance that a disaster would not meaningfully impact all sites.

Despite the operational desire to keep things together, the risk to the business dictates that resources be physically separate.

Natural expansion - As resources are added to a datacenter, they are typically installed in racks in relatively close proximity to each other. When racks are empty, there is no reason to create unnecessary physical separation between resources working in concert. Over time, adjacent rack space is filled through the natural expansion of compute, storage, and networking capacity.

As equipment expands, available rack space is depleted, and new racks and rows are populated. Eventually, the device sprawl can occupy entire data centers.

Imagine now that a cluster of servers occupies a rack in one corner of the datacenter. If that cluster is to be expanded, where does the next server go? If the nearby racks are already built out, that resource must be installed some physical distance away from the resources with which it must coordinate.

It is nearly impossible to plan for all future growth at the time of datacenter inception. Leaving enough space in adjacent racks to account for a decade of growth is impractically expensive. A sparsely populated datacenter suffers from poor space utilization, challenging power distribution, and difficult cabling. Thus, the mere act of expansion exerts an outward force leading to physically distributed resources.

Real estate - Sometimes, even when architects want to keep resources together, physical limitations create problems. There is no more immovable object than real estate (which serves as a proxy for all of space, power, and HVAC). In some cases, it is impossible to build out either laterally or even up. In other cases, there is no additional power to be had from the grid. Either of these scenarios forces an expansion to another site, which requires the physical separation of resources that might be expected to function in concert.

Additionally, as land rates change and technologies evolve, the best locations for data centers shift over time. It is difficult at best to predict with enough certainty how a physical site will evolve over an arbitrarily long time horizon. For example, not long ago, the thought of building cooling-hungry data centers in the hot desert was foreign. Today, Las Vegas is home to some of the most cutting-edge facilities in the world. This means that geographical dispersion is all but certain for large companies. The forces pulling resources physically apart are unlikely to be neutralized.

Finding a balance

Given the strong forces working to keep resources logically together and the equally strong forces keeping them physically separate, how does anyone find a balance?

The price of balance is cost and complexity. Extending reach between sites carries a direct cost, and maintaining control over distributed resources adds complexity. Both translate into higher carrying costs for the infrastructure. The push-pull dynamic in datacenters is not going away anytime soon. In fact, a move towards more distributed applications will only make the existing balancing act harder.

Newer technology offerings like SDN and datacenter fabrics offer some hope, but only insofar as they offer alternatives to the existing problems. Whatever the solution, architects will need to evaluate approaches based not just on the features but on the long-term costs of those features.

[Today’s fun fact: “Way” is the most frequently used noun in the English language. No way!]


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
