
SDN Journal: Blog Post

The Power of Correlated Visualization by @MartenT1999


I am sure our work environment is not all that different from many others. There are large whiteboards everywhere, and you cannot find a meeting room without circles, lines and squares drawn on its board. Some of our favorite bloggers have written about network drawing tools and aids. This is probably not restricted to just networking folks, but we certainly love to visualize the things we do. Out of all the customers I have visited, the number of visits where one of us did not end up at a whiteboard can probably be counted on one hand.

It is not surprising that we are drawn to diagrams of the networks we have created. We build our network one device at a time, then use network links to connect the next, and on we go until our network is complete. Which of course it never is. To track how we have connected all our devices we need diagrams. They tell us what devices we have, how they are attached to each other, how they are addressed and what protocols we have used to govern their connectivity. They are multi-layered, and the layers are semi-independent.

I have previously said that this may be one of our most important network tools and the one that is the least accurate. The human brain’s most dominant sense is vision. Almost half of our brain’s resources are used to support vision. It trumps all other senses. It is rare that networks have unusual smells or sounds (stick around long enough and you will have a device actually burn out on you to provide you with a real network meltdown), but this visual dominance is part of the reason we draw so much to understand what we have created or what we are trying to manage and control.

We will remember information with visual cues much better than information without it. We have all sat through endless PowerPoint presentations with streams of words and could not remember a word of it a half hour later. Sit through that same presentation but now aided with some very straightforward visual cues related to the information, and you will remember it much longer.

When building and running networks, visual cues help us understand what we do in many ways. Documentation from your vendor and the documentation you create yourself should be littered with pictures and diagrams, not just words. Heck, we even established that no one RTFMs, but I bet many of us will look for pictures and corresponding config examples.

As vendors and consumers we can do so much more to visualize the state of our networks. And not just how they are connected and what devices exist where. We need to visualize the health of your network, beyond some basic green and red icons and markers. We need to give you easy-to-understand visual cues that provide a quick view of how your network is doing, what your network is doing and whether all is well.

We have had several decades of evolution of network management solutions (you know, that thing you really need but don’t want to spend a lot of money on) and they typically do a pretty decent job with the basics of device status, link status, capacity indications, utilization etc. They give us the fundamental view of health of the network, or the components in the network.

What they do not give you is a sense of how the network works. How is it forwarding traffic? What traffic is going where? Do I have hot spots? What policies are applied where, and what is their impact? What is attached to my network, and how are those things related? Are there special user groups on my network, and are they getting what they need? All of these things can be found: some of it in configuration, some in statistics, and, more importantly, in third-party data that can be applied to truly visualize the exact state and behavior of the network.

Today’s networks are very much standalone items. The data you get to understand them is usually provided by the network itself. There is little correlation with data that can be retrieved from systems that surround the network, or even use the network. Servers, VMs, applications, storage: they can all tell us what they are sending and who they are sending it to. There is cluster and user group information available in the management systems of these applications. This type of information is critical to enabling the network to provide the best possible service to these applications.
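As a toy illustration of this kind of correlation (all names, records and data structures here are hypothetical, not any real vendor API), a few lines of Python can join what a compute manager knows about its workloads with what the network knows about where MAC addresses are attached:

```python
# Hypothetical sketch: enrich a VM inventory (from a compute manager)
# with switch-port attachment data (from the network), keyed on MAC address.

# What the compute side knows: VM name, application, MAC address.
vm_inventory = [
    {"vm": "web-01", "app": "storefront", "mac": "aa:bb:cc:00:00:01"},
    {"vm": "db-01",  "app": "storefront", "mac": "aa:bb:cc:00:00:02"},
]

# What the network side knows: which switch and port learned each MAC.
mac_table = {
    "aa:bb:cc:00:00:01": ("switch-1", "eth1"),
    "aa:bb:cc:00:00:02": ("switch-2", "eth7"),
}

def correlate(vms, macs):
    """Attach network location (switch, port) to each VM record."""
    enriched = []
    for vm in vms:
        switch, port = macs.get(vm["mac"], (None, None))
        enriched.append({**vm, "switch": switch, "port": port})
    return enriched

for rec in correlate(vm_inventory, mac_table):
    print(rec["vm"], rec["app"], "->", rec["switch"], rec["port"])
```

Once the two views are joined like this, a visualization can place application context directly onto the network diagram instead of showing anonymous ports.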

Last week Plexxi’s Ed Henry showed that when you start combining some of these information sources, your network tools in general, and your visualization tools specifically, become so much more powerful. He showed that by sharing data from Cloudera about Hadoop cluster members with Plexxi’s view of the network and where those members are attached, we can create extremely powerful policy and actually visualize this policy. We can track traffic between cluster members within the overall network context. We can view the paths they use to communicate. We can indicate whether there are problems with that specific application cluster’s network performance. We can isolate some or all portions of the cluster in the network. We have context. Without context we only have data to stare at and cannot possibly determine whether what we are looking at is good or not.
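A sketch of what that context buys you, again with entirely made-up data rather than any real Cloudera or Plexxi interface: given a list of cluster members (as a cluster manager could export) and per-flow byte counters from the network, you can separate intra-cluster traffic from everything else and judge that one application on its own terms:

```python
# Hypothetical sketch: isolate traffic between Hadoop cluster members
# from the rest of the network's traffic.

# Cluster membership, e.g. as exported by the cluster manager.
cluster_members = {"10.0.0.11", "10.0.0.12", "10.0.0.13"}

# (src_ip, dst_ip, bytes) flow records collected from the network.
flows = [
    ("10.0.0.11", "10.0.0.12", 9_000_000),
    ("10.0.0.12", "10.0.0.13", 4_000_000),
    ("10.0.0.11", "192.168.1.5", 1_500_000),  # traffic leaving the cluster
]

def cluster_traffic(flows, members):
    """Total bytes for intra-cluster flows vs. all other flows."""
    intra = sum(b for s, d, b in flows if s in members and d in members)
    other = sum(b for s, d, b in flows if not (s in members and d in members))
    return intra, other

intra, other = cluster_traffic(flows, cluster_members)
print(f"intra-cluster: {intra} bytes, other: {other} bytes")
```

With that split in hand, a visualization can color the paths the cluster actually uses and flag hot spots for that specific application instead of for the network as an undifferentiated whole.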

In DevOps discussions and debates about the evolution of the network engineer, we often focus on configuration and provisioning. Two weeks ago Matt Oswalt (@Mierdin) wrote an article entitled “Why Network Automation Won’t Kill Your Job,” in which he states:

“The truth is that network automation, like any other form, is about visibility, and trust. It’s about network engineers stepping up and providing a way for other disciplines to consume networking more effectively.”

Using all the data sources at your disposal to increase the visibility of the state, health and contextual performance of your network is key to just that.


[Today's fun fact: Samhainophobia is the fear of Halloween. Yes, we have words for *everything*.]

The post The Power of Correlated Visualization appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
