Network in the Cloud is No Free Lunch

If you have your applications running on AWS or a similar cloud-based platform, you’ve effectively “outsourced” your networking to the cloud as well. This can be of great value, most significantly because it frees you from maintaining physical network infrastructure. Not having physical access to your network doesn’t, however, mean that you’re free from taking care of it.

A bit of history

In traditional application architectures, network infrastructure was kept under the strict control of network teams. These teams were responsible for upgrading overloaded equipment before problems arose, identifying and replacing weak network links, resolving bottlenecks, watching for latency spikes and delayed data delivery, and even detecting security threats. In other words, traditional network teams looked after all seven OSI layers.

Modern architectures need more networking than ever

In cloud-based architectures the situation is different, and the network has become even more important. Let’s imagine a typical cloud-based architecture. You run a datacenter with a flexible number of allocated computing instances (flexible, for example, because of the pricing model and volatile demand for CPU). Your datacenter serves distributed applications that are backed by, for example, microservices. Additionally, let’s say that your applications are distributed via Docker containers to give your DevOps teams some flexibility. In situations like this you need more networking than ever: your network must shoulder all the communication required between the microservices. It serves as a virtual nervous system for your applications.

Even though the network is not physically accessible to system administrators, it still exists and requires attention. It’s often difficult to know where exactly your machines are physically hosted and how they’re connected to the other hosts in your network. Related virtual machines and services may even run on the same virtualization host, in which case the network between them exists only as memory read operations. In other words, the physical network very often coexists with multiple virtual networks.

Challenges in cloud-based networks

The inability to physically access the network (OSI layers 1-2) makes it hard for DevOps teams to keep an eye on it. They can use the monitoring tools offered by their cloud provider, for example Amazon CloudWatch, to fetch network metrics like NetworkIn and NetworkOut, but these metrics are often insufficient for detecting network problems.
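For perspective, here is what pulling those metrics yourself looks like. This is a minimal sketch in Python using the boto3 library; the region and instance ID are placeholder assumptions, and AWS credentials must already be configured.

    from datetime import datetime, timedelta

    import boto3

    # Minimal sketch: fetch NetworkIn averages for one EC2 instance from
    # CloudWatch. The region and instance ID below are placeholders.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    def network_metric(instance_id, metric="NetworkIn", minutes=60):
        """Return 5-minute averages (in bytes) over the last `minutes` minutes."""
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName=metric,
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=datetime.utcnow() - timedelta(minutes=minutes),
            EndTime=datetime.utcnow(),
            Period=300,
            Statistics=["Average"],
        )
        return sorted(stats["Datapoints"], key=lambda d: d["Timestamp"])

    for point in network_metric("i-0123456789abcdef0"):
        print(point["Timestamp"], point["Average"])

Notice what these numbers can’t tell you: which process generated the traffic, whether connections were refused, or where packets were lost. That gap is exactly what the rest of this post is about.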

Here are some of the key challenges that DevOps faces in maintaining virtual network performance:

  • Processes competing for network resources (for example, the TCP incast problem)
  • Infrastructure that changes as instances are started and stopped
  • Scaling network capacity via elastic network interfaces
  • The quality of connections inside your data center
  • The quality of connections to private networks outside your data center (a minimal connection-quality probe is sketched after this list)
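To illustrate the last two points: a crude but useful way to probe connection quality from inside an instance is to time TCP handshakes to the services you depend on. A minimal sketch in Python follows; the endpoint is a hypothetical placeholder.

    import socket
    import time

    # Minimal sketch: use TCP handshake time as a rough connection-quality
    # probe. The endpoint below is a hypothetical placeholder.
    def connect_time_ms(host, port, timeout=3.0):
        """Return the TCP connect time in milliseconds, or None on failure."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return None

    latency = connect_time_ms("db.internal.example", 5432)
    print(f"connect took {latency:.1f} ms" if latency is not None else "connect failed")

Run periodically from each instance, even a probe this simple can reveal degrading links long before host-level traffic counters do.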

Monitoring network usage

Your network monitoring must be able to react to infrastructure changes like those mentioned above. In particular, it needs to handle virtual network interfaces. Your monitoring therefore needs to run on your hosts and continuously watch for changes to your virtual infrastructure. From this position it can observe the network connections that each process establishes to other processes and services, thereby monitoring actual network usage instead of just network devices.
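To make this concrete, here is a minimal sketch of what observing connections per process can look like on a single host. It uses Python’s psutil library purely as an assumed stand-in; the post doesn’t prescribe a tool.

    from collections import defaultdict

    import psutil

    # Minimal sketch: group established TCP connections by the process that
    # owns them (psutil assumed; on Linux, seeing other users' processes may
    # require root).
    def connections_by_process():
        grouped = defaultdict(list)
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status != psutil.CONN_ESTABLISHED or conn.pid is None:
                continue
            try:
                name = psutil.Process(conn.pid).name()
            except psutil.NoSuchProcess:
                continue  # the process exited between the two calls
            grouped[(conn.pid, name)].append((conn.laddr, conn.raddr))
        return grouped

    for (pid, name), conns in connections_by_process().items():
        print(f"{name} (pid {pid}): {len(conns)} established connection(s)")
        for laddr, raddr in conns:
            print(f"  {laddr.ip}:{laddr.port} -> {raddr.ip}:{raddr.port}")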

(Figure: infrastructure with all levels)

Resource monitoring is key… and simple!

With this monitoring approach, your network won’t be seen simply as a collection of network interfaces, routing tables, and security groups. Rather, your network will be viewed as a limited resource used by processes and applications. This resource can be monitored along with CPU, memory, and storage, and even measured at the process level. This enables full-stack application performance monitoring and the ability to trace network problems all the way up to the application level.
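As an illustration of treating the network as just another resource, the sketch below samples CPU, memory, and network throughput side by side on one host (again assuming psutil; true per-process network attribution takes deeper instrumentation than this).

    import psutil

    # Minimal sketch: sample CPU, memory, and network throughput together,
    # treating the network as one more consumable host resource.
    def sample(interval=5.0):
        before = psutil.net_io_counters()
        cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
        after = psutil.net_io_counters()
        mem = psutil.virtual_memory().percent
        rx = (after.bytes_recv - before.bytes_recv) / interval
        tx = (after.bytes_sent - before.bytes_sent) / interval
        return cpu, mem, rx, tx

    cpu, mem, rx, tx = sample()
    print(f"cpu {cpu:.0f}% | mem {mem:.0f}% | rx {rx:,.0f} B/s | tx {tx:,.0f} B/s")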

Network connection quality

There are a few basic network performance metrics you should keep in mind:

  • Throughput, the volume of data moving across the network, is the basic indicator of network performance.
  • Connectivity, the percentage of successfully established TCP (Transmission Control Protocol) connections, indicates the accessibility of services. TCP connections may be refused or time out, so connectivity is a good indicator of network problems between sender and receiver.
  • Regarding the quality of established TCP connections, the retransmission rate is also worth monitoring. TCP is a reliable, error-checked protocol: the receiver must acknowledge the packets sent over a network link; otherwise they are considered lost and are retransmitted by the sender. The retransmission rate is therefore a good indicator of poor network links and overloaded network infrastructure (a minimal Linux sketch for computing it follows this list).
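On Linux, for example, a system-wide retransmission rate can be derived from the kernel’s cumulative TCP counters. A minimal sketch follows; since the counters accumulate from boot, sample twice and diff to get a rate over an interval.

    # Minimal sketch (Linux-only): derive a system-wide TCP retransmission
    # rate from the kernel's cumulative counters in /proc/net/snmp.
    def tcp_counters():
        with open("/proc/net/snmp") as f:
            tcp_lines = [line.split()[1:] for line in f if line.startswith("Tcp:")]
        header, values = tcp_lines  # first line holds names, second holds values
        return dict(zip(header, map(int, values)))

    counters = tcp_counters()
    rate = counters["RetransSegs"] / max(counters["OutSegs"], 1)
    print(f"{counters['RetransSegs']} of {counters['OutSegs']} segments "
          f"retransmitted ({rate:.2%} since boot)")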

The bottom line is that you shouldn’t blindly trust cloud providers with the health of your “outsourced” virtual network infrastructure. Virtualized networks can’t be monitored in the traditional, device-centric manner. They should at least be monitored from the point of view of your hosts and processes so that you have meaningful network performance indicators.


