Network abstractions need equivalent of packet walkthrough

Whenever a new networking platform is evaluated, one of the early sales calls includes a packet walkthrough. In excruciating detail, someone walks the customer through the path a packet takes from ingress port, through the device, across the switching or routing ASIC, and back down to the egress port. The technical deep dive frequently includes internals that not even all of the vendor’s own engineers are familiar with.

But why?

Some people will justify the depth by talking about troubleshooting complex systems. Others will seize on assorted technical details that suggest one platform is better than another in some regard or under some set of circumstances. Still others will simply parrot the vendor’s own marketing with claims of flexibility, scalability, or extensibility.

While all of these are absolutely valid, they actually miss the biggest reason the packet walkthrough is a ubiquitous part of every selling motion.

Get comfortable

We networking gearheads are a skeptical lot. We learned long ago that listening to someone and taking their word at face value was a short path to operational hell. Their words might have sounded true, but their promises rang hollow. The platform, or even the architecture, did not perform as advertised. And because the fallout from a network failure is catastrophically worse than that of any other infrastructure failure, we have collectively vowed to look at every opportunity with a sideways glance and a healthy dose of disbelief.

Trust but verify

The real reason that we evaluate new platforms and solutions in such detail is not the inherent troubleshooting value of examining the architecture. Nor is it because we can determine with any certainty what the scaling limits are from a cursory glance at the internals of a system. We examine architectures in detail because it allows us to put the vendor under a bit of scrutiny. If they stand up to a few somewhat randomly placed questions (less random if you have had particularly painful issues in the past), then we give the other claims they make a bit more credence.

I don’t mention this because I think it is a bad way to do things, mind you. Rather, I bring it up because the collective psyche of the networking buyer needs to be understood if architectural advances like SDN and abstractions are to deliver any real value.

Control freaks and abstraction

Networking has generally operated through meticulous control for decades. Network management via configuration knob puts a ton of power in the hands of the network architect. Behavior can be precisely specified. And when something goes wrong, the device can be queried to surmise the cause.

A shift to abstractions might make things easier in terms of actual physical workload (how much typing there is), but it comes with a gigantic leap of faith. Control freaks might complain about how much effort things take, but they absolutely cringe at the thought of giving any of that work up lest something go wrong.

When behavior is specified by an abstraction (as with an edge policy abstraction), not only must the syntax be correct but so must the translation of that abstraction into underlying behavior. The former is easy to verify, but the latter requires a bit of faith on the part of the user that the vendor has done the right thing under the hood.
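
To make that distinction concrete, here is a minimal, purely hypothetical sketch; the policy format, function names, and the CLI lines it emits are illustrative and not any vendor’s API. Checking that an edge policy is well-formed is mechanical, while the translation into device configuration is the part the operator has to take on faith.

```python
# Hypothetical sketch: an edge-policy abstraction and the two things an
# operator might want to verify about it. None of this is a real vendor API.

EDGE_POLICY = {
    "name": "web-to-app",
    "src_group": "web-tier",
    "dst_group": "app-tier",
    "allow_ports": [443],
}

REQUIRED_KEYS = {"name", "src_group", "dst_group", "allow_ports"}


def validate_syntax(policy: dict) -> bool:
    """The easy check: is the abstraction well-formed?"""
    return REQUIRED_KEYS.issubset(policy) and all(
        isinstance(p, int) and 0 < p < 65536 for p in policy["allow_ports"]
    )


def render_to_device_config(policy: dict) -> list:
    """The part that requires faith: how intent becomes per-device
    configuration. In a real product this lives under the hood; this stub
    only shows the shape of the question an operator would ask."""
    return [
        f"access-list {policy['name']} permit tcp "
        f"group {policy['src_group']} group {policy['dst_group']} eq {port}"
        for port in policy["allow_ports"]
    ]


if __name__ == "__main__":
    assert validate_syntax(EDGE_POLICY)   # syntax: easy to verify
    for line in render_to_device_config(EDGE_POLICY):
        print(line)                       # translation: what the buyer really wants to see
```

The packet-walkthrough instinct applies to the second function: buyers will want to see what comes out of it, not just that the input parsed.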

A peek under the hood

There are already a bunch of industry efforts around SDN and abstractions. Whether it’s vendor-specific (as with Cisco’s ACI) or a part of open source (OpenDaylight, for example), there are a number of movements that either focus on or include some abstraction as part of the solution. But if our past teaches us anything, it is that network architects are not happy with a basic understanding of what the abstractions do. They require additional information so they have at least some concept of how they do it.

It would seem that people peddling abstractions will ultimately need to provide the equivalent of a packet walkthrough. With platforms, this is easy. Where does the packet physically enter the device, and where does it leave? But with abstractions, the equivalent is a bit harder.

Abstraction walkthroughs

Initially, this dynamic favors abstractions that merely replace well-understood configuration with something terser. The abstraction walkthrough for such a replacement is essentially an expansion of the abstraction into the underlying configuration knobs. Think of this as indirection more than abstraction, closer to header files than to anything else.
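
In practice, that kind of walkthrough could look like a dry-run expansion: show the operator exactly which knobs the abstraction turns, and let them diff the result against the configuration they would have typed by hand. The sketch below illustrates the idea and is not any shipping tool; the function and the config lines it emits are made up.

```python
# Hypothetical "abstraction walkthrough" as a dry-run expansion: render the
# knobs an abstraction would set and diff them against a hand-written
# reference, much like expanding a header file to see what it pulls in.
import difflib


def expand_vlan_abstraction(name: str, vlan_id: int, ports: list) -> list:
    """Pure indirection: one abstraction in, N familiar config knobs out."""
    lines = [f"vlan {vlan_id}", f" name {name}"]
    for port in ports:
        lines += [f"interface {port}", f" switchport access vlan {vlan_id}"]
    return lines


# The configuration the operator would have typed by hand (the trusted reference).
hand_written = [
    "vlan 200",
    " name finance",
    "interface Ethernet1",
    " switchport access vlan 200",
]

expanded = expand_vlan_abstraction("finance", 200, ["Ethernet1"])

# The walkthrough: an empty diff means the abstraction is "just typing saved."
diff = list(difflib.unified_diff(hand_written, expanded, lineterm=""))
print("\n".join(diff) if diff else "expansion matches the hand-written config")
```

That works precisely because there is a hand-written reference to diff against; the moment the abstraction starts making decisions of its own, there isn’t.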

But if abstractions are about more than saving keystrokes, this type of walkthrough will not hold up for even slightly more complex scenarios. That leaves the abstraction salesperson in a tough spot: how do you demonstrate that something works if you cannot provide a meaningful look at the internals?

Behavior determines success

The long-term answer here is necessarily going to come down to actual behavior. The creators of abstractions will need to show affirmatively that the network (or the applications) behave appropriately when an abstraction is used. This might seem obvious, but the implications are actually quite profound.

For networks today, there are lots of ways to verify specific state in the network (BGP neighbors, interface stats, and so on). And when there is no network state, the configuration itself serves as the check. But what if that configuration is not there?

In the long term, the infrastructure broadly (including but not limited to the network) will need to be instrumented with meaningful abstractions in mind. If abstractions become common around managing edge policy, there will need to be additional ways to instrument specific applications, tenants, and flows. For example, if abstractions allow network engineers to mark a particular application as PCI compliant, then there might need to be a way to verify that compliance with a command.
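
What that command might look like is anyone’s guess today. As a purely illustrative sketch (the intent model, the observed-flow feed, and the check itself are all assumptions rather than an existing tool), the verification compares declared intent against observed behavior, not against configuration text.

```python
# Hypothetical sketch of "verify the abstraction by its behavior":
# compare declared intent (the abstraction) against observed state
# (flows actually seen), rather than against configuration text.

DECLARED_INTENT = {
    # Abstraction: the billing app is tagged PCI and may only be reached on 443.
    "app": "billing",
    "tag": "pci",
    "allowed_ports": {443},
}

# Observed state, as a telemetry/instrumentation layer might report it.
OBSERVED_FLOWS = [
    {"app": "billing", "dst_port": 443},
    {"app": "billing", "dst_port": 8080},  # violation: not in the declared intent
]


def check_compliance(intent: dict, flows: list) -> list:
    """Return every observed flow that the declared abstraction does not allow."""
    return [
        f for f in flows
        if f["app"] == intent["app"] and f["dst_port"] not in intent["allowed_ports"]
    ]


if __name__ == "__main__":
    violations = check_compliance(DECLARED_INTENT, OBSERVED_FLOWS)
    status = "COMPLIANT" if not violations else f"NON-COMPLIANT: {violations}"
    print(f"{DECLARED_INTENT['app']} ({DECLARED_INTENT['tag']}): {status}")
```

The point is not the specific check but where the evidence comes from: observed behavior stands in for the configuration that is no longer there.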

The bottom line

The abstraction market will initially be focused on reducing keyboard time. That is a fine place to start, and it is easy to verify. But if the real value of abstractions lies in removing complexity (not just masking it) and in tighter collaboration across the infrastructure, then abstraction salespeople are going to need to think through the post-sales elements of their products. Those that do this early will find that having an abstraction walkthrough shortens the evaluation time for new solutions. And if no one else has done it, the existence of such a walkthrough could prove a killer element of the product sales cycle.

[Today’s fun fact: Right-handed people tend to chew food on the right side of their mouths, and lefties on the left side.]

More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
