Bare Metal Blog: Quality Is Systemic, or It Is Not

In all critical systems the failure of even one piece can have catastrophic results for the user

February 5, 2013

The Bare Metal Blog talks about quality testing of hardware, in all its forms. F5 does a great job in this space.

For those of you new to the Bare Metal Blog series, find them all right here.

In all critical systems – from home heating units to military firearms – the failure of even one piece can have catastrophic results for the user. While it is unlikely that the failure of an ADC is going to be quite so catastrophic, it can certainly make IT staff's day(s) terrible and cost the organization a fortune in lost revenue. That's not to mention the longer-term damage that downtime can do to an organization's brand. It is actually pretty scary to ponder the loss of any core system, but one that acts as a gateway and scaling factor for remote employee workloads and/or customer access is even higher on the list of Things To Be Avoided™.

In general, if you think about it, the number of hardware failures out there is relatively small. There are a ton of pieces of network gear doing their thing every day, and yes, there is the occasional outage, but if you consider the number of devices NOT going down on a given day, the failure rate is tiny.

Still, no one wants to be in that tiny percentage any more than they absolutely must. Hardware breaks, and will always do so; it is the nature of electronic and mechanical things. But we should ask more questions of our vendors to make certain they're doing all that they can to keep the chances of their device breaking during its otherwise useful lifetime to a minimum.

For an example of doing it right, we'll talk a bit about the lengths that F5 goes to in an attempt to make devices as reliable as possible from an electro-mechanical perspective. While I am an F5 employee, I will note that there is no doubt that F5 gear is highly reliable. It was known for quality before I came to F5, and I have not heard anything since joining that would change that impression. So I use F5 because (a) I am aware of the steps we take as an organization and (b) our hardware testing is an example of doing it right.

And of course, there are things I can’t tell you, and things that we just will not have room to delve into very deeply in this overview blog. I am considering extending the Bare Metal Blog series to include (among other things) more detail about those parts that I would want to know more about if I were a reader, but for this blog, we’re going to skim so there is space to cover everything without making the blog so long you don’t read to the end.

I admit it, I’ve talked to a lot of companies about testing over the years, and can’t recall a vendor that did a more thorough job – though I can think of a few whose record in the field says they probably have a similar program. So let’s look at some of the quality testing done on hardware.

Parts are not just parts.
An ADC, like any computerized system, is a complex beast. There is a lot going on, and the weakest link sets the life expectancy and out-of-the-box quality standards for the overall product. As such, there are some detailed parts and subassembly tests that gear must go through.

For F5, these tests include:

  • Signal Integrity Tests to test for signal degradation between parts/subsystems.
  • BIOS Test Suites to validate that BIOS performs as expected and handles exception cases reliably.
  • Software Design Verification Testing to detect and eliminate software quality issues early in the development process.
  • Sub-Assembly Tests to verify correct subsystem performance and quality.
  • FPGA System Validation Tests to determine that the FPGA design and hardware perform as expected.
  • Automated Optical Inspection used on the PCB production line to prevent and detect defects.
  • Automated X-Ray Inspection takes 3D slices of an assembled circuit board to prevent and detect defects.
  • In-Circuit Test using a series of probes to test the populated circuit board with power applied to detect defects.
  • Flying Probe uses a “golden board” (perfect sample) to compare against a newly produced board to verify there are no defects (a simplified sketch of that comparison follows this list).
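
The flying probe and in-circuit tests boil down to the same basic idea: measure a newly built board and compare it against a known-good reference within some tolerance. Here is a minimal, hypothetical Python sketch of that golden-board comparison; the net names, component values, and 5% tolerance are illustrative assumptions, not F5's actual test data or tooling.

```python
# Hypothetical sketch of a golden-board comparison.
# Net names, values, and tolerance are illustrative assumptions.

GOLDEN_BOARD = {       # reference measurements from a known-good board (ohms / farads)
    "R101": 10_000.0,
    "R102": 4_700.0,
    "C201": 22e-6,
}

TOLERANCE = 0.05       # accept readings within +/-5% of the golden value


def inspect_board(measurements: dict[str, float]) -> list[str]:
    """Return defect descriptions for nets that are missing or drift outside tolerance."""
    defects = []
    for net, golden_value in GOLDEN_BOARD.items():
        measured = measurements.get(net)
        if measured is None:
            defects.append(f"{net}: no probe reading (open circuit or missing part?)")
            continue
        drift = abs(measured - golden_value) / golden_value
        if drift > TOLERANCE:
            defects.append(f"{net}: measured {measured:g}, expected {golden_value:g} "
                           f"({drift:.1%} off)")
    return defects


if __name__ == "__main__":
    # A board with one out-of-spec resistor and one missing capacitor reading.
    print(inspect_board({"R101": 10_050.0, "R102": 5_600.0}))
```

Real inspection systems obviously measure far more than three nets and account for probe placement, but the pass/fail logic is the same: deviation from a known-good reference flags a defect before the board goes any further.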

Now that's a lot of testing, though I have to admit I'm still learning about the testing process, so there may well be more. But you'll note that some things aren't immediately called out here – like items picked from suppliers, which could be caught in some of these tests but might not be. That is because supplier quality standards are separate from actual testing, and require that suppliers whose parts make it into F5 gear are up to standard.

Supply demands
So what do we, as an organization, require from a quality perspective of those who wish to be our suppliers? Here's a list. I KNOW it isn't complete, because I pared it down for the purposes of this blog, but I think you'll get the idea from what's here.

  • All assembly suppliers are ISO 9000 and ISO 14001 certified.
  • Suppliers assemble and test their products to F5 specifications.
  • Suppliers are monitored with closed-loop performance metrics, including delivery and quality.
  • Formal Supplier Corrective Action Response program – a formal system to quickly address the issue when a fault in supplier quality is found.
  • Quarterly reviews with senior management utilizing a formal supplier scorecard to evaluate supplier quality, stability, and more (a simplified scorecard sketch follows this list).
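
To make the closed-loop metrics idea concrete, here is a small, hypothetical sketch of how delivery and quality numbers might roll up into a quarterly supplier score. The metric names, weights, and thresholds are my own illustrative assumptions, not F5's actual scorecard.

```python
# Hypothetical supplier scorecard sketch; weights and thresholds are assumptions.

from dataclasses import dataclass


@dataclass
class SupplierMetrics:
    on_time_delivery: float        # fraction of shipments delivered on time (0.0 - 1.0)
    defect_rate_ppm: float         # defective parts per million shipped
    corrective_actions_open: int   # unresolved corrective-action requests


def score_supplier(m: SupplierMetrics) -> float:
    """Combine delivery and quality into a single 0-100 quarterly score."""
    delivery_score = m.on_time_delivery * 100
    quality_score = max(0.0, 100 - m.defect_rate_ppm / 10)   # 1,000 ppm -> 0 points
    scar_penalty = 5 * m.corrective_actions_open             # open issues drag the score down
    return max(0.0, 0.6 * quality_score + 0.4 * delivery_score - scar_penalty)


if __name__ == "__main__":
    print(score_supplier(SupplierMetrics(0.97, 120.0, 1)))   # ~87 on this made-up scale
```

The point of a scorecard like this isn't the exact arithmetic; it's that the loop is closed – the numbers feed back into quarterly reviews, so a supplier that slips shows up on a trend line long before their parts show up in your failure statistics.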

The biggest one in the list, IMO, is that suppliers assemble and test product to F5 specifications. Their part is going in our box, but our name is going on it. F5 has a vested interest in protecting that name, so setting the standards by which the suppliers put together and test the product they are supplying is huge. After all, many suppliers are building tiny little subsystems for inside an F5 device, so holding them to F5 standards makes the whole stronger.

By way of example, we require the more reliable but more expensive version of capacitors from our suppliers. There are good articles and overviews of capacitor quality issues available online if you want background on the problem. By demanding that our suppliers use better quality components, the overall life expectancy of our hardware is higher, meaning you get fewer calls in the middle of the night.

The whole is different than the sum of the parts
While an organization can test parts until the sun rises in the west, that will not guarantee the quality of the overall product. And in the end, it is the overall product that a vendor sells. As such, manufacturers generally (and F5 specifically) keep an entire suite of whole-product tests on hand for product quality assessment. Here are some of those used at F5.

  • Mechanical Testing – Test the construction of the system by applying shock, drop, vibration, repetitive insertions/extractions, and more.
  • Highly Accelerated Life Testing (HALT) – Heat and vibration are used to determine the quality and operational limits of the device. The goal is to simulate years of use in a manageable timeframe (see the sketch after this list for how that acceleration is commonly estimated).
  • Environmental Stress Screening – Expose the device to extremes of environment, from temperature to voltage.
  • MFG Test Suite System Stress Testing – Turn everything on, reboot, power cycle, et cetera. By way of example, we cycle power up to 10,000 times during this testing.
  • On-Going Reliability Testing – Products currently on the manufacturing line are randomly selected and placed in a burn-in chamber, which tests the devices at elevated temperature.
  • Post Pack-Out Audit – Pull random samples from our finished goods inventory to verify quality.
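
Accelerated life and burn-in testing both trade elevated stress for compressed time. A common way to reason about that trade-off is the Arrhenius acceleration model; the sketch below shows the basic math with generic, assumed values for activation energy and temperatures, not F5's actual test profile.

```python
# Hedged sketch: estimating how many field-hours one hour of elevated-temperature
# testing represents, using the standard Arrhenius acceleration model.
# Activation energy and temperatures are generic illustrative values.

import math

BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K


def arrhenius_acceleration(t_use_c: float, t_stress_c: float, ea_ev: float = 0.7) -> float:
    """Acceleration factor of stress temperature vs. normal use temperature (deg C)."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))


if __name__ == "__main__":
    af = arrhenius_acceleration(t_use_c=40.0, t_stress_c=85.0)
    print(f"Acceleration factor: {af:.1f}x")
    print(f"1,000 hours at 85 C ~ {1000 * af / 8760:.1f} years of 40 C field use")
```

HALT goes further than this simple model – it deliberately pushes past operational limits to find design margins – but the underlying idea is the same: stress compresses years of aging into a test window the team can actually schedule.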

That’s a lot of testing, and it is not anywhere near all that F5 does to validate a box. For example, while software testing got a hat-tip at the component level, our Traffic Management Operating System (TMOS) has a completely separate set of testing, validation, and QA processes that are not listed here because this is the Bare Metal Blog. Maybe at some point in the future I’ll do a series like Bare Metal Blog on our software. That would be interesting for me, hopefully for you also.

It’s not over when it’s over
The entire time that Lori and I were application developers, there was a party to celebrate every time we finished a major piece of software. From an evening out with the team when our tax prep software shipped to a bottle of champagne on the roof of an AutoDesk office building when AutoCAD Map shipped, we always got to relax and enjoy it a bit.

While our hardware dev teams get something similar, our hardware test teams don't pack up the gear and call it a product. For the entire lifecycle of an F5 box – from first prototype to End of Life – our test team does continuous testing to monitor and improve the quality of the product. Unlike most of what you will find in this blog, that practice is relatively rare. Other companies do it, but unlike ISO certification or HALT testing, continuous testing is not accepted as a mandatory part of product engineering in the computing space. F5 does this because it makes the most sense. From variations in chip quality to suppliers changing their suppliers, things change over the production run of a product, and F5 feels it is important to overall quality to stay on top of that fact. This system also allows for continuous improvement of the product over its lifecycle.

That is one of the many reasons I think F5 is a great company. I have twice run into scenarios involving a vendor who did not do this type of testing, and it cost me. Once was as a reviewer, which means it was worse for the vendor than for me, and once as an IT manager, which means it was worse for me than for the vendor. I would suggest you start asking your vendors about lifetime testing, because a manufacturing or supplier change can impact the reliability of the gear. And if it does, either they catch it, or you could be walking into a nightmare. The perfect example (because so many of us had to deal with it) was a huge multinational selling systems with “DeskStar” disks that we all now lovingly call “Death Star” disks.

You can rely on it
This process is a proactive investment by F5 in your satisfaction. You might think, “doesn't all that testing – particularly when continuous testing occurs over the breadth of devices you sell – cost a lot of money?” The answer is “nowhere near as much as having to visit every device of model X and repair it, and nowhere near as much as the loss of business that persistent quality issues generate.” And it is true. We truly care about your satisfaction and the reliability of your network, but when it comes down to it, that caring is based upon enlightened self-interest. The net result, though, is devices you can trust to just keep going.

I know, because we have one in our basement from before we came to F5. It's old and looks funny next to our shiny newer one, but it still works. It's EOL'd, so it isn't getting any better, and when it breaks it's done, but the device is nearly a decade old and still operates as originally advertised.

If only our laptops could do that.

More Stories By Don MacVittie

Don MacVittie is currently a Senior Solutions Architect at StackIQ, Inc. He is also working with Mesamundi on D20PRO, and is a member of the Stacki Open Source project. He has experience in application development, architecture, infrastructure, technical writing, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
