What a Network Engineer Does

Network Engineering workflow can be characterized by overlapping cycles of Activity and Modeling

In a previous article, we talked about “Short T’s” and how, in network engineering, the “T” is very long:  Configuring a network to achieve business goals requires considerable skill and knowledge.  While we set up a conceptual model in that post to talk about what “T” means in general terms, we did not discuss how to articulate “T” more specifically for network engineering.  In this post, we’ll explore that in a little more detail.

The NetEng Cycle

Figure 1: The Network Engineering Cycle

Network Engineering workflow can be characterized by overlapping cycles of Activity and Modeling.  In Figure 1, I have depicted four cycles.  From smallest timescale to largest, these are:  1. Referential Traversal, 2. Interactive, 3. Design, and 4. Architecture.  The crest of each of these cycles is “Activity” and the trough is “Modeling.”  Modeling on the smaller cycles is simple and correlative, while on the larger cycles it is more abstract and analytical.  Activity on the smaller cycles is characterized by direct interaction with the network, while on the larger cycles it is indirect and more design-oriented.

As the diagram implies, a network engineer will oscillate between activity and modeling.  For instance, in the Interactive cycle, they may configure a QoS classification policy, then immediately issue show commands to see whether traffic is being classified appropriately.  Configuring a policy and issuing show commands are both activities, but the show commands start to transition into modeling:  The engineer is attempting to model the immediate effect of the changes they have made.  Based on this model of “how things are,” the engineer might start thinking about modifications to the classification policy to bring the operation of the network closer to an expected model of “how things should be.”  As far as it is possible to do so, an attempt might be made to model “how things will be” to check for possible side effects.  The cycle then repeats.
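To make the Interactive cycle concrete, here is a minimal Python sketch of that configure-then-observe loop.  It assumes a hypothetical device API: apply_config and get_class_map_counters are stand-ins for whatever CLI, NETCONF, or controller access a real environment provides, and the class-map and policy-map names are purely illustrative.

import time

def apply_config(device: str, lines: list[str]) -> None:
    """Hypothetical stand-in for pushing configuration lines to a device."""
    print(f"[{device}] applying: {lines}")

def get_class_map_counters(device: str, policy: str) -> dict[str, int]:
    """Hypothetical stand-in for parsing 'show policy-map interface' output."""
    return {"VOICE": 1280, "BULK": 0}  # simulated match counters

def interactive_cycle(device: str) -> None:
    # Activity: configure a QoS classification policy.
    apply_config(device, [
        "class-map match-any VOICE",
        " match dscp ef",
        "policy-map EDGE-IN",
        " class VOICE",
        "  set qos-group 1",
    ])
    # Modeling: observe "how things are" and compare to "how things should be."
    for _ in range(3):
        counters = get_class_map_counters(device, "EDGE-IN")
        if counters.get("VOICE", 0) > 0:
            print("traffic is being classified as expected")
            return
        time.sleep(5)  # wait, then re-model before adjusting the policy
    print("no VOICE matches seen -- revisit the classification criteria")

if __name__ == "__main__":
    interactive_cycle("edge-router-1")

The point is not the specific commands but the shape of the loop: act, observe, compare against the expected model, and repeat.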

Referential Space
However, which show commands should they use to accurately model how the configuration is actually working?  If you were to write down the exact sequence of commands, you might find that the engineer is taking data from the output of the first command and using that as either input into the second command, or as a point of reference while examining output from the second command.  The output from the second command might be, in turn, used similarly when executing a third show command.  This is what is called Referential Traversal.  Referential Traversal is when a network engineer engages in iterative data correlation in support of a workflow.  In the context of a workflow, this data represents that workflow’s state.

Another well-known referential traversal is doing a manual packet-walk of the network:  Examining nodes along the way to determine if there is a potential issue along the path between two endpoints on the edge of the network.  Here, the engineer will examine lookup tables, ARP entries, and LLDP neighbor information, jumping from one node to the next.  This particular workflow can veer off in tricky ways, such as examining when and what configuration changes were made to see if they could impact traffic between those two endpoints.  When the traversal veers into examining a device configuration, you enter a different set of correlated data:  A route-map applied to an interface can, in turn, reference access-lists or prefix-lists.  The rules for evaluating packet flow through a policy follow different logic than the general rules for packet flow across a series of devices.
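As a rough illustration of that packet-walk, here is a short Python sketch.  It assumes an in-memory model of each node’s route, ARP, and LLDP tables in place of real show-command output; the node names and table contents below are made up.

NODES = {
    "leaf1": {
        "routes": {"10.2.0.0/16": {"next_hop": "192.168.0.2", "iface": "Eth1"}},
        "arp": {"192.168.0.2": "00:11:22:33:44:02"},
        "lldp": {"Eth1": "spine1"},
    },
    "spine1": {
        "routes": {"10.2.0.0/16": {"next_hop": "192.168.1.2", "iface": "Eth4"}},
        "arp": {"192.168.1.2": "00:11:22:33:44:07"},
        "lldp": {"Eth4": "leaf2"},
    },
    "leaf2": {
        "routes": {"10.2.0.0/16": {"next_hop": "attached", "iface": "Eth9"}},
        "arp": {},
        "lldp": {},
    },
}

def packet_walk(start: str, prefix: str) -> list[str]:
    """Follow the referential chain route -> ARP -> LLDP neighbor, node by node."""
    path, node = [start], start
    while True:
        tables = NODES[node]
        route = tables["routes"].get(prefix)
        if route is None or route["next_hop"] == "attached":
            return path  # destination reached, or no route: stop and investigate
        # Output of one lookup becomes the key for the next lookup:
        mac = tables["arp"].get(route["next_hop"])     # route entry -> ARP entry
        neighbor = tables["lldp"].get(route["iface"])  # egress interface -> LLDP neighbor
        if neighbor is None:
            return path  # the referential chain breaks here
        print(f"{node}: {prefix} via {route['next_hop']} ({mac}) out {route['iface']} -> {neighbor}")
        path.append(neighbor)
        node = neighbor

if __name__ == "__main__":
    print(packet_walk("leaf1", "10.2.0.0/16"))

Each step consumes the previous step’s output, which is exactly what makes the traversal hard to pre-script: on a real network, the next lookup depends on what the last one returned.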

Figure 2: Referential Space

If you take the set of rules, relationships, and data points from the “configuration space” and the rules, relationships, and data points from the “forwarding space,” and you combine them with all other such spaces that a network engineer must deal with in the course of their activities, the sum of these is called “referential space” (see Figure 2).  A network engineering workflow will follow some referential path through this space, examining data and following its relationships to yet other data.  There are numerous interconnected spaces in the management, control, forwarding, and device planes of a network, each with its own logic and types of data.  There are more abstract spaces as well, such as a “design” space that contains the rules and relationships that govern network design.  A network engineer’s expertise is measured by how well they can navigate referential space in support of the longer time-scale cycles.
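One way to picture referential space is as a graph whose nodes are kinds of data and whose edges are the relationships that let one piece of data lead to the next.  The Python sketch below is only a caricature, with illustrative node and edge names spanning the configuration and forwarding spaces, but it shows why a workflow amounts to finding a path through that graph.

from collections import deque

REFERENTIAL_SPACE = {
    # configuration space
    "interface-config": ["route-map", "qos-policy"],
    "route-map": ["prefix-list", "access-list"],
    # forwarding space
    "route-entry": ["arp-entry", "egress-interface"],
    "arp-entry": ["mac-table-entry"],
    "egress-interface": ["lldp-neighbor", "interface-config"],
    # leaves
    "prefix-list": [], "access-list": [], "qos-policy": [],
    "mac-table-entry": [], "lldp-neighbor": [],
}

def referential_path(start: str, goal: str) -> list[str] | None:
    """Breadth-first search for a chain of references from one data type to another."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in REFERENTIAL_SPACE.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

if __name__ == "__main__":
    # e.g., how do I get from a route entry to the access-lists that might shape it?
    print(referential_path("route-entry", "access-list"))

A real referential space is far larger, and its edges carry their own evaluation logic, which is why navigating it well is a measure of expertise rather than something a fixed script can capture.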

Enablement versus Obviation
The challenge of networking, and the reason that automation (and UX/UI for that matter) has not evolved terribly well, is that these referential paths vary greatly based on what the network engineer is trying to do and how a particular network is built.  There is a vast set of rules governing the many relationships among a seemingly infinite array of data types.  The dynamic nature of referential traversal, and the intimidating size of referential space, should justify a healthy skepticism of vendors claiming to encapsulate network complexity or automate network workflows.  More often than not, they are simply moving the complexity around, while making it more difficult to navigate in the process.

It’s long overdue to move innovation in networking towards enabling network engineers to be more effective instead of trying to obviate them.  Unlike the past, this should happen with a keen understanding of what network engineers actually do and how they think through their activities.  We can augment these activities to reduce time-to-completion and time-to-insight while at the same time reducing risk and increasing accountability.  There are many networking workflows which, after 20 years, are still notoriously difficult and risky to model and complete.  Let’s solve these problems first.

Make Things Better
As a network engineer, how many times have you heard about the glorious wonders of a product that automates networking or encapsulates network complexity in some way?  After 20 years, we have been trained to identify this language as snake oil or, to put it a little more nicely, “marketing speak.”  When we buy into these products or features, it’s always just a matter of time before they go unused or the ugly realities of their operation surface.

Encapsulating network complexity, or automating network workflows, can’t just be about “faster.”  That’s only part of the problem.  It has to make things “better.”  This can only happen with a deeper understanding of referential space.

The post What a Network Engineer Does appeared first on Plexxi.


More Stories By Derick Winkworth

Derick Winkworth has been a developer, network engineer, and IT architect in various verticals throughout his career.  He is currently a Product Manager at Plexxi, Inc., where he focuses on workflow automation and product UX.
