|By Derick Winkworth||
|August 5, 2014 02:00 PM EDT||
In a previous article, we talked about “Short T’s.” We talked about how, in network engineering, the “T” is very long: configuring a network to achieve business goals requires considerable skill and knowledge. While that post set up a conceptual model for talking about “T” in general terms, it did not discuss how to articulate “T” specifically for network engineering. In this post, we’ll explore that in more detail.
The NetEng Cycle
The network engineering workflow can be characterized by overlapping cycles of Activity and Modeling. In Figure 1, I have depicted four cycles. From smallest timescale to largest, these are: 1. Referential Traversal, 2. Interactive, 3. Design, and 4. Architecture. The crest of each cycle is “Activity” and the trough is “Modeling.” Modeling on the smaller cycles is simple and correlative, while on the larger cycles it is more abstract and analytical. Activity on the smaller cycles is characterized by direct interaction with the network, while on the larger cycles it is indirect and more design-oriented.
As the diagram implies, a network engineer oscillates between activities and modeling. For instance, in the interactive cycle, they may configure a QoS classification policy, then immediately issue show commands to see if traffic is being classified appropriately. Configuring the policy and issuing show commands are both activities, but the show commands begin the transition into modeling: the engineer is attempting to model the immediate effect of the changes they have made. Based on this model of “how things are,” the engineer might consider modifications to the classification policy that would bring the operation of the network closer to an expected model of “how things should be.” As far as possible, they might also attempt to model “how things will be,” to check for side effects. The cycle then repeats.
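To make the shape of this loop concrete, here is a rough sketch in Python. This is not any vendor’s API; `apply_change`, `observe`, and the stand-in functions in the usage example are all hypothetical:

```python
def interactive_cycle(device, apply_change, observe, expected, max_rounds=3):
    """Oscillate between Activity (configure, show) and Modeling (compare)."""
    for _ in range(max_rounds):
        apply_change(device)           # Activity: push the policy change
        observed = observe(device)     # show commands: model "how things are"
        if observed == expected:       # compare against "how things should be"
            return True                # the network matches the expected model
        # otherwise, refine the change on the next pass through the cycle
    return False

# Invented stand-ins for a QoS classification check:
state = {"classified": False}
fix = lambda dev: dev.update(classified=True)
peek = lambda dev: {"voice": "classified" if dev["classified"] else "unclassified"}
result = interactive_cycle(state, fix, peek, expected={"voice": "classified"})
```

The point of the sketch is the control flow, not the stubs: each pass pairs an activity with a modeling step, and the loop terminates only when the observed model matches the expected one.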
However, which show commands should they use to accurately model how the configuration is actually working? If you were to write down the exact sequence of commands, you might find that the engineer is taking data from the output of the first command and using that as either input into the second command, or as a point of reference while examining output from the second command. The output from the second command might be, in turn, used similarly when executing a third show command. This is what is called Referential Traversal. Referential Traversal is when a network engineer engages in iterative data correlation in support of a workflow. In the context of a workflow, this data represents that workflow’s state.
Another well-known referential traversal is a manual packet-walk of the network: examining nodes along the way to determine whether there is a potential issue on the path between two endpoints at the edge of the network. Here, the engineer will examine lookup tables, ARP entries, and LLDP neighbor information, jumping from one node to the next. This workflow can veer off in tricky ways, such as examining when and what configuration changes were made to see if they could impact traffic between those two endpoints. When the traversal turns to a device configuration, you enter a different set of correlated data: a route-map applied to an interface can, in turn, reference access-lists or prefix-lists. The rules for evaluating packet flow through a policy follow different logic than the general rules for packet flow across a series of devices.
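A minimal sketch of that packet-walk, where the output of each lookup supplies the input to the next, might look like this. The forwarding tables are invented data, not real device output:

```python
# node -> {prefix: next-hop node}; None means directly connected
FIB = {
    "edge1": {"10.2.0.0/16": "core1"},
    "core1": {"10.2.0.0/16": "core2"},
    "core2": {"10.2.0.0/16": "edge2"},
    "edge2": {"10.2.0.0/16": None},
}

def packet_walk(start, prefix):
    """Follow the chain of lookups hop by hop, as an engineer would."""
    path, node = [start], start
    while node is not None:
        next_hop = FIB[node].get(prefix)  # one node's "show" output...
        if next_hop is None:
            break
        path.append(next_hop)             # ...feeds the next node's lookup
        node = next_hop
    return path

# packet_walk("edge1", "10.2.0.0/16") -> ["edge1", "core1", "core2", "edge2"]
```

Each iteration is one step of referential traversal: data from the current node is used as the point of reference for examining the next.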
If you take the set of rules, relationships, and data points from “configuration space” and those from “forwarding space,” and combine them with all the other such spaces a network engineer must deal with in the course of their activities, the sum is “referential space” (see Figure 2). A network engineering workflow follows some referential path through this space, examining data and following its relationships to yet other data. There are numerous interconnected spaces in the management, control, forwarding, and device planes of a network, each with its own logic and types of data. There are more abstract spaces as well, such as a “design” space containing the rules and relationships that govern network design. A network engineer’s expertise is measured by how well they can navigate referential space in support of the longer time-scale cycles.
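One way to picture referential space is as a directed graph of typed data points connected by reference edges, with a workflow tracing a path through it. The node names below (an interface referencing a route-map, which references an access-list and a prefix-list) are illustrative, not taken from any particular device:

```python
# (type, name) nodes and the references between them -- invented example data
REFS = {
    ("interface", "Gi0/1"):   [("route-map", "PBR-MAP")],
    ("route-map", "PBR-MAP"): [("access-list", "101"), ("prefix-list", "LANS")],
    ("access-list", "101"):   [],
    ("prefix-list", "LANS"):  [],
}

def referential_path(start):
    """Depth-first walk following references from one data point to others."""
    seen, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)                 # visit this data point...
        stack.extend(REFS.get(node, []))   # ...then everything it references
    return order
```

A real referential space would span many such graphs with different traversal logic in each, which is precisely what makes navigating it a skill.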
Enablement versus Obviation
The challenge of networking, and the reason that automation (and UX/UI for that matter) has not evolved terribly well, is that these referential paths vary greatly based on what the network engineer is trying to do and how a particular network is built. There is a vast set of rules governing the many relationships that exist between the seemingly infinite array of data types. The dynamic nature of referential traversal, and the intimidating size of referential space, should justify a healthy skepticism of vendors claiming to encapsulate network complexity or automate network workflows. More often than not, they are simply moving the complexity around, while making it more difficult to navigate in the process.
It is long overdue to move innovation in networking toward enabling network engineers to be more effective instead of trying to obviate them. Unlike in the past, this should happen with a keen understanding of what network engineers actually do and how they think through their activities. We can augment these activities to reduce time-to-completion and time-to-insight while at the same time reducing risk and increasing accountability. There are many networking workflows that, after 20 years, are still notoriously difficult and risky to model and complete. Let’s solve these problems first.
Make Things Better
As a network engineer, how many times have you heard about the glorious wonders of a product that automates networking or encapsulates network complexity in some way? After 20 years, we have been trained to identify this language as snake oil or, a little more charitably, “marketing speak.” When we buy into these products or features, it’s only a matter of time before they go unused or the ugly realities of their operation surface.
Encapsulating network complexity, or automating network workflows, can’t just be about “faster.” That’s only part of the problem. It has to make things “better.” This can only happen with a deeper understanding of referential space.