Bare Metal Blog: Quality Is Systemic, or It Is Not

In all critical systems the failure of even one piece can have catastrophic results for the user

February 5, 2013

Bare Metal Blog, talking about quality testing of hardware in all its forms. F5 does a great job in this space.

For those of you new to the Bare Metal Blog series, find them all right here.

In all critical systems – from home heating units to military firearms – the failure of even one piece can have catastrophic results for the user. While it is unlikely that the failure of an ADC will be quite so catastrophic, it can certainly make IT staff’s day(s) terrible and cost the organization a fortune in lost revenue. That’s not to mention the longer-term damage that downtime can do to an organization’s brand. It is actually pretty scary to ponder the loss of any core system, but one that acts as a gateway and scaling factor for remote employee workloads and/or customer access is even higher on the list of Things To Be Avoided™.

In general, if you think about it, the number of hardware failures out there is relatively small. There is a ton of network gear doing its thing every day, and yes, there is the occasional outage, but if you consider the number of devices NOT going down on a given day, the failure rate is tiny.
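To put rough numbers on that intuition, here is a back-of-the-envelope calculation; the failure rate and fleet size are made-up illustrative figures, not F5 field data:

```python
# Back-of-the-envelope fleet math. The annualized failure rate (AFR) and
# fleet size below are hypothetical illustrative numbers, not F5 field data.

annualized_failure_rate = 0.005   # assume 0.5% of units fail in a given year
fleet_size = 10_000               # assume 10,000 devices deployed

# Rough probability that any single device fails on a given day.
daily_failure_probability = annualized_failure_rate / 365

# Expected failures per day across the whole fleet.
expected_failures_per_day = daily_failure_probability * fleet_size

print(f"Chance a given box fails today: {daily_failure_probability:.4%}")
print(f"Expected failures per day across the fleet: {expected_failures_per_day:.2f}")
# -> roughly 0.0014% per box, and about 0.14 failures per day fleet-wide
```

Even with ten thousand boxes in the field, a day without a single hardware failure is the norm under those assumptions.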

Still, no one wants to be in that tiny percentage any more than they absolutely must. Hardware breaks, and always will; it is the nature of electronic and mechanical things. But we should ask more questions of our vendors to make certain they are doing all they can to keep the chances of a device breaking during its useful lifetime to a minimum.

For an example of doing it right, we’ll talk a bit about the lengths F5 goes to in an attempt to make devices as reliable as possible from an electro-mechanical perspective. While I am an F5 employee, I will note that there is no doubt F5 gear is highly reliable: it was known for quality before I came to F5, and I have heard nothing since joining that would change that impression. So I use F5 because (a) I am aware of the steps we take as an organization and (b) our hardware testing is an example of doing it right.

And of course, there are things I can’t tell you, and things that we just will not have room to delve into very deeply in this overview blog. I am considering extending the Bare Metal Blog series to include (among other things) more detail about those parts that I would want to know more about if I were a reader, but for this blog, we’re going to skim so there is space to cover everything without making the blog so long you don’t read to the end.

I admit it: I’ve talked to a lot of companies about testing over the years, and I can’t recall a vendor that did a more thorough job – though I can think of a few whose records in the field say they probably have similar programs. So let’s look at some of the quality testing done on hardware.

Parts are not just parts.
An ADC, like any computerized system, is a complex beast. There is a lot going on, and the weakest link sets the life expectancy and out-of-the-box quality of the overall product. As such, there are detailed parts and subassembly tests that the gear must go through.
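One way to see why the weakest link dominates: in a system where every part has to work, reliabilities multiply, so one mediocre component caps the whole. A minimal sketch with made-up reliability numbers, purely for illustration:

```python
# Series-system reliability: the whole works only if every part works,
# so the part reliabilities multiply. All figures here are hypothetical.
from math import prod

good_parts = [0.999] * 20   # twenty parts, each 99.9% reliable over some period
weak_part = 0.95            # one mediocre part at 95%

system_all_good = prod(good_parts)                    # ~0.980
system_with_weak_link = system_all_good * weak_part   # ~0.931

print(f"All solid parts:     {system_all_good:.3f}")
print(f"One weak link added: {system_with_weak_link:.3f}")
# The single 95% part costs more reliability than the other twenty parts
# combined, which is why component and subassembly testing matters so much.
```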

For F5, these tests include:

  • Signal Integrity Tests – test for signal degradation between parts/subsystems.
  • BIOS Test Suites – validate that the BIOS performs as expected and handles exception cases reliably.
  • Software Design Verification Testing – detect and eliminate software quality issues early in the development process.
  • Sub-Assembly Tests – verify correct subsystem performance and quality.
  • FPGA System Validation Tests – determine that the FPGA design and hardware perform as expected.
  • Automated Optical Inspection – used on the PCB production line to prevent and detect defects.
  • Automated X-Ray Inspection – takes 3D slices of an assembled circuit board to prevent and detect defects.
  • In-Circuit Test – uses a series of probes to test the populated circuit board with power applied to detect defects.
  • Flying Probe – uses a “golden board” (perfect sample) to compare against a newly produced board to verify there are no defects.

Now that’s a lot of testing, though I have to admit I’m still learning about the testing process, so there may well be more. But you’ll note that some things aren’t immediately called out here – like parts sourced from suppliers, which could be caught by some of these tests but might not be. That is because supplier quality standards are separate from actual testing and require that suppliers whose parts make it into F5 gear are up to standard.

Supply demands
So what do we, as an organization, require from a quality perspective of those who wish to be our suppliers? Here’s a list. I KNOW this list isn’t complete, because I pared it down for the purposes of this blog, but I think you’ll get the idea from what’s here.

  • All assembly suppliers are ISO 9000 and ISO 14001 certified.
  • Suppliers assemble and test their products to F5 specifications.
  • Suppliers are monitored with closed-loop performance metrics, including delivery and quality.
  • Formal Supplier Corrective Action Response program – a formal system to quickly address the issue when a fault is found in supplier quality.
  • Quarterly reviews with senior management utilizing a formal supplier scorecard to evaluate supplier quality, stability, and more (a minimal scorecard sketch follows this list).
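To make the scorecard item concrete, a supplier scorecard is essentially a weighted roll-up of delivery and quality metrics. The sketch below is a generic illustration with invented metric names, weights, and example values, not F5’s actual scoring model:

```python
# A generic, hypothetical supplier scorecard roll-up. Metric names, weights,
# and example values are invented for illustration; this is not F5's model.

def supplier_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized metrics, each expressed as 0.0-1.0."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * weight for name, weight in weights.items()) / total_weight

weights = {
    "on_time_delivery": 0.4,
    "defect_free_rate": 0.4,
    "corrective_action_closure": 0.2,
}

example_supplier = {
    "on_time_delivery": 0.97,           # 97% of lots delivered on time
    "defect_free_rate": 0.995,          # 99.5% of units pass incoming inspection
    "corrective_action_closure": 0.90,  # 90% of corrective actions closed on schedule
}

print(f"Quarterly score: {supplier_score(example_supplier, weights):.3f}")
# A score below an agreed threshold (say 0.95) would trigger the formal
# Supplier Corrective Action Response process described above.
```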

The biggest one in the list, IMO, is that suppliers assemble and test product to F5 specifications. Their part is going in our box, but our name is going on it. F5 has a vested interest in protecting that name, so setting the standards by which the suppliers put together and test the product they are supplying is huge. After all, many suppliers are building tiny little subsystems for inside an F5 device, so holding them to F5 standards makes the whole stronger.

By way of example, we require the more reliable but more expensive versions of capacitors from our suppliers. For background on the problem, there is an excellent article on hardwaresecrets.com (and a pretty good overview on wikipedia.com) about capacitors. By demanding that our suppliers use better-quality components, the overall life expectancy of our hardware is higher, meaning you get fewer calls in the middle of the night.
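The life-expectancy claim is easy to quantify for electrolytic capacitors, which follow a widely used rule of thumb: rated life roughly doubles for every 10 °C the operating temperature sits below the rated temperature. The rated-life and temperature figures below are generic datasheet-style examples, not values for any specific F5 component:

```python
# Rule-of-thumb electrolytic capacitor life estimate:
#   life = rated_life * 2 ** ((rated_temp - operating_temp) / 10)
# The rated-life/temperature values are generic examples, not from an F5 BOM.

def estimated_life_hours(rated_life_h: float, rated_temp_c: float, operating_temp_c: float) -> float:
    return rated_life_h * 2 ** ((rated_temp_c - operating_temp_c) / 10)

commodity_cap = estimated_life_hours(rated_life_h=2_000, rated_temp_c=85, operating_temp_c=55)
better_cap = estimated_life_hours(rated_life_h=10_000, rated_temp_c=105, operating_temp_c=55)

print(f"2,000 h / 85 °C part at 55 °C:   ~{commodity_cap:,.0f} hours (~{commodity_cap / 8760:.1f} years)")
print(f"10,000 h / 105 °C part at 55 °C: ~{better_cap:,.0f} hours (~{better_cap / 8760:.1f} years)")
# The better-grade part buys well over an order of magnitude more service life
# at the same operating temperature.
```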

The whole is different than the sum of the parts
While an organization can test parts until the sun rises in the west, that will not guarantee the quality of the overall product. And in the end, it is the overall product that a vendor sells. As such, manufacturers generally (and F5 specifically) keep an entire suite of whole-product tests on hand for product quality assessment. Here are some of the tests used at F5.

  • Mechanical Testing – test the construction of the system by applying shock, drop, vibration, repetitive insertion/extraction, and more.
  • Highly Accelerated Life Testing (HALT) – heat and vibration are used to determine the quality and operational limits of the device. The goal is to simulate years of use in a manageable timeframe (a rough acceleration-factor sketch follows this list).
  • Environmental Stress Screening – expose the device to environmental extremes, from temperature to voltage.
  • MFG Test Suite system stress testing – turn everything on, reboot, power cycle, et cetera. By way of example, we cycle power up to 10,000 times during this testing.
  • On-Going Reliability Testing – products currently on the manufacturing line are randomly picked and put in a burn-in chamber, which tests the devices at elevated temperature.
  • Post Pack-Out Audit – pull random samples from our finished-goods inventory to verify quality.
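On the “simulate years of use” point from the HALT item above, thermally accelerated testing is commonly characterized with the Arrhenius acceleration model. The activation energy and temperatures below are typical textbook-style values, not F5’s actual test parameters:

```python
# Arrhenius acceleration factor for thermally accelerated testing:
#   AF = exp((Ea / k) * (1/T_use - 1/T_stress)), temperatures in kelvin.
# The activation energy and temperatures are illustrative, not F5 test settings.
from math import exp

BOLTZMANN_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(ea_ev: float, use_temp_c: float, stress_temp_c: float) -> float:
    t_use = use_temp_c + 273.15
    t_stress = stress_temp_c + 273.15
    return exp((ea_ev / BOLTZMANN_EV_PER_K) * (1 / t_use - 1 / t_stress))

af = acceleration_factor(ea_ev=0.7, use_temp_c=40, stress_temp_c=85)
weeks_of_stress = 4
simulated_years = weeks_of_stress * 7 * af / 365

print(f"Acceleration factor: {af:.0f}x")
print(f"{weeks_of_stress} weeks at 85 °C behaves like ~{simulated_years:.1f} years at 40 °C")
# With these illustrative numbers, about a month in the chamber stands in
# for roughly two years of field use.
```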

That’s a lot of testing, and it is not anywhere near all that F5 does to validate a box. For example, while software testing got a hat-tip at the component level, our Traffic Management Operating System (TMOS) has a completely separate set of testing, validation, and QA processes that are not listed here because this is the Bare Metal Blog. Maybe at some point in the future I’ll do a series like Bare Metal Blog on our software. That would be interesting for me, hopefully for you also.

It’s not over when it’s over
The entire time that Lori and I were application developers, there was a party to celebrate every time we finished a major piece of software. From an evening out with the team when our tax prep software shipped to a bottle of champagne on the roof of an Autodesk office building when AutoCAD Map shipped, we always got to relax and enjoy it a bit.

While our hardware dev teams get something similar, our hardware test teams don’t pack up the gear and call it a product. For the entire lifecycle of an F5 box – from first prototype to End of Life – our test team does continuous testing to monitor and improve the quality of the product. Unlike most of what you will find in this blog, that is pretty unique to F5. Other companies do it, but unlike ISO certification or HALT testing, continuous testing is not accepted as a mandatory part of product engineering in the computing space. F5 does this because it makes the most sense. From variations in quality of chips to suppliers changing their suppliers, things change over the production of a product, and F5 feels it is important to overall quality to stay on top of that fact. This system also allows for continuous improvement of the product over its lifecycle.

This is one of the many reasons I think F5 is a great company. I have twice run into scenarios involving a vendor who did not do this type of testing, and it cost me. Once as a reviewer, which meant it was worse for the vendor than for me, and once as an IT manager, which meant it was worse for me than for the vendor. I would suggest you start asking your vendors about lifetime testing, because a manufacturing or supplier change can impact the reliability of the gear. And if it does, either they catch it, or you could be walking into a nightmare. The perfect example (because so many of us had to deal with it) was a huge multinational selling systems with “DeskStar” disks that we all now lovingly call “Death Star” disks.

You can rely on it
This process is a proactive investment by F5 in your satisfaction. You might think, “Doesn’t all that testing – particularly when continuous testing occurs over the breadth of devices you sell – cost a lot of money?” The answer is: nowhere near as much as having to visit every device of model X and repair it, and nowhere near as much as the loss of business that persistent quality issues generate. And it is true. We truly care about your satisfaction and the reliability of your network, but when it comes down to it, that caring is based upon enlightened self-interest. The net result, though, is devices you can trust to just keep going.

I know, because we have one in our basement from before we came to F5. It’s old and looks funny next to our shiny newer one, but it still works. It’s EOL’d, so it isn’t getting any better, and when it breaks it’s done, but the device is nearly a decade old and still operates as originally advertised.

If only our laptops could do that.

More Stories By Don MacVittie

Don MacVittie is currently a Senior Solutions Architect at StackIQ, Inc. He is also working with Mesamundi on D20PRO, and is a member of the Stacki Open Source project. He has experience in application development, architecture, infrastructure, technical writing, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
