


SDN Journal: Blog Feed Post

The Event-Driven Data Center

Like planets aligning, dev & the network sync up on architectural foundations so infrequently that it should be a major event

One of the primary reasons node.js is currently ascending in the data center is its core model: event-driven, non-blocking processing.

Historically, developers have written applications around connections and requests. The model is blocking: it's not asynchronous, and it's not fire-and-forget until some other event signals that something needs to be done.
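To make the contrast concrete, here's a minimal sketch of the two models in node.js itself. Everything here is illustrative: the lookup table stands in for a database, and the handler names are invented for the example.

```javascript
// Hypothetical lookup table standing in for a database.
const db = { "/users/42": { name: "Ada" } };

// Blocking style: the caller waits on the lookup before doing anything else.
function handleBlocking(path) {
  const record = db[path];        // imagine this call stalls for 100 ms
  return `served ${record.name}`; // nothing else runs in the meantime
}

// Event-driven style: hand off the work, register a callback, move on.
function handleNonBlocking(path, callback) {
  setImmediate(() => callback(`served ${db[path].name}`)); // fire and forget
}

handleNonBlocking("/users/42", (result) => console.log(result));
console.log("already on to the next request"); // prints before the callback runs
```

The second handler returns immediately; the callback fires later, when the event loop gets to it, which is exactly the "reminded by some other event" behavior described above.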

If the underlying network fabric worked the way today's applications do, we'd be in a heap of trouble. A switch would grab an incoming packet, forward it, and then... wait for it to return. You can imagine what that would do to traffic flow, and just how much bigger and beefier switches would have to be to support the kind of traffic experienced today by enterprises and web monsters alike.

Luckily, the network isn't like that. It doesn't block waiting for a response. It grabs an ingress packet, determines where it should go next, forwards it and then moves on to the next packet in line. It does not hang out, mooning over and writing bad love poetry about the packet it just forwarded, wondering if it will ever come back.

That, in part*, is why networks scale so well, why they are so fast and able to sustain orders of magnitude more concurrent connections than a web or application server.

So imagine what happens when a web or application server adopts a more laissez-faire attitude toward processing requests; when it fires and forgets until it is reminded by the return of a response.

Exactly. It gains phenomenal network-like speed and much better scalability.

That's what node.js is bringing to the data center table - an event-driven, non-blocking application infrastructure that aligns with the event-driven, non-blocking nature of the network fabric. Louis Simoneau sums it up well:

Node.js is the New Black

Here’s where some of that jargon from before comes into play: specifically non-blocking and event-driven. What those terms mean in this context is less complicated than you might fear. Think of a non-blocking server as a loop: it just keeps going round and round. A request comes in, the loop grabs it, passes it along to some other process (like a database query), sets up a callback, and keeps going round, ready for the next request. It doesn’t just sit there, waiting for the database to come back with the requested info.

And it's quite a capable platform, judging by the numerous benchmarks and tests performed by developers and devops interested in understanding the differences between it and the old guard (Apache, PHP, etc.).

Developers deploying on off-the-shelf operating systems have scaled node.js to 250,000 connections. (See chart.)

On a purpose-built operating system, node.js has been clocked at over 4 million simultaneous connections. It scales, and it scales well.

Suffice it to say that the application and network infrastructure are starting to align in terms of performance capabilities and, interestingly enough, programmability. What the network is taking from development is programmability, and what development is taking from the network is speed and capacity.

They're aligning in so many ways that it's almost mind-boggling to consider the potential.

It's not just like Christmas - it's like Christmas when you're five years old. Yeah, it's that awesomesauce. I haven't been this excited about a technology since application switching broke onto the scene, well, quite some years ago now.

Counting Down

This is not to say that the entire network fabric is truly event-driven. It's not quite Christmas yet, but it is close enough to taste ...

Network components, individually, are event-driven, but the overall data center network is not yet. But we're getting closer. You may recall that when we first started talking about Infrastructure 2.0, when cloud was in its infancy (almost pre-infancy, actually), we talked about event-driven configuration and policy deployment:

Infrastructure 2.0: As a matter of fact that isn't what it means

The configuration and policies applied by dynamic infrastructure are not static; they are able to change based on predefined criteria or events that occur in the environment such that the security, scalability, or performance of an application and its environs are preserved.

Some solutions implement this capability through event-driven architectures, such as "IP_ADDRESS_ASSIGNED" or "HTTP_REQUEST_MADE".

Today we'd call that SDN (software-defined networking) or the SDDC (software-defined data center). Regardless of the name, the core principle remains: events trigger the configuration, deployment, and enforcement of infrastructure policies across the data center.

Now couple that with an inherently event-driven application infrastructure that complements the event-driven network infrastructure. And consider how one might use a platform that is event-driven and isn't going to bottleneck the way more traditional development stacks do. Exactly. All the capacity and performance concerns we had around trying to architect an event-driven data center with Infrastructure 2.0 just evaporate, for the most part.

The data center planets are aligning and what's yet to come will hopefully be a leap forward towards a dynamic, adaptable data center fabric that's capable of acting and reacting to common events across the entire network and application infrastructure.

* Yes, there's hardware and firmware and operating system design that also contributes to the speed and capacity of the network fabric, but that would all be undone were the network to sit around like a lovesick Juliet waiting for her Romeo-packet to return.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
