By Michael Bushong
April 24, 2014 11:30 AM EDT
From a cost perspective, the networking dialogue is dominated by CapEx. Acquisition costs for new networking gear have historically been tied to hardware, and despite the relatively recent rise of bare metal switching, networking hardware remains a high-stakes business. But SDN is changing this dynamic in potentially significant ways.
The first point to clarify when talking about CapEx is that CapEx does not necessarily mean hardware (at least not in the way most people mean it). While there is a strict financial definition for CapEx, in the networking industry it has become shorthand for procurement costs. Because networking solutions have been predominantly monetized through hardware, we associate procurement costs with hardware, but this is changing.
The fact that the ’S’ in SDN stands for software is reason enough for people to look beyond the chassis. But the reality is that while vendors have monetized the hardware, the value has been increasingly moving to the software side for more than a decade. So long as everyone was selling hardware, it didn’t really matter that much whether the cost was tied to the hardware or the software, so we have been a little bit lazy collectively in determining a deliberate pricing mix.
More recently, however, there have been additional solutions that are offered entirely through software. With virtual networking devices, for example, there is no physical hardware (unless you count the servers and the network that connects the servers). A common sales tactic for these types of solutions is to point out how expensive physical solutions are. Why pay for all that sheet metal when you can get the same functionality in a virtual form factor? Of course, you are not really paying for the sheet metal; your check also pays for the software and all the features that go into that sheet metal. But the argument is pretty compelling.
The point here is that the only thing that really matters is how much you pay for the whole solution. Whether the price is affixed to hardware or software is an accounting detail – important for some people, but not really the most important thing for the majority of buyers. Rather than calling it CapEx, we ought to be referring more broadly to procurement or acquisition costs. All in, Solution A costs X dollars to bring in house, and Solution B costs Y dollars.
This would certainly simplify the conversation some. But even then, it isn’t all about procurement costs anymore either.
Depending on the solution, the procurement costs account for roughly one-third of the total cost of ownership. The remaining two-thirds of the cost is ongoing operating expense (power, cooling, space, management, support, and so on). The models here for most solutions start to get pretty squishy. While we can fairly formulaically determine things like power, space, and support, when it comes to estimating the cost of managing a device, the models are so dependent on uncontrollable things that they border on useless. And even when the models are sound, most companies have not sufficiently instrumented their network operations to really know what they are spending.
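The one-third/two-thirds split can be made concrete with a back-of-the-envelope model. The dollar figures below are hypothetical, purely for illustration:

```python
# Back-of-the-envelope TCO model. All figures are hypothetical.

def total_cost_of_ownership(procurement, annual_opex, years):
    """Procurement is paid once; OpEx recurs every year of the deployment."""
    return procurement + annual_opex * years

procurement = 100_000   # up-front acquisition cost (hardware + software)
annual_opex = 40_000    # power, cooling, space, management, support
years = 5

tco = total_cost_of_ownership(procurement, annual_opex, years)
print(f"TCO over {years} years: ${tco:,}")
print(f"Procurement share: {procurement / tco:.0%}")  # roughly one-third
```

Even this toy model makes the point: get the OpEx estimate wrong by a modest percentage and it swamps any discount negotiated on the procurement side.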
But just because it is difficult to model OpEx does not mean that network teams should ignore it.
If there is one thing that the gaming industry has taught us, it is that there are all kinds of creative ways to separate someone from their money. In the early days of video games, 100% of the cost was procurement cost. After you bought the install media, you had paid everything you were ever going to pay. Before long, some of the more popular games figured out that they could lower initial costs (make the barrier to entry lower) and then charge for ongoing use through subscriptions.
As the networking world adjusts the pricing mix – associating more of the cost with the software – we should expect that charge models will mirror what we have seen on the consumer side. It is not a big stretch (and in fact already happening) to see massive up-front hardware costs replaced with more palatable hardware pricing combined with either higher software or potentially support costs. This has the dual benefit of making it easier for customers to select a vendor, and creating annuities for said vendor.
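A quick comparison of the two charge models (again with made-up numbers) shows how a lower entry price can still yield more vendor revenue over the life of the deployment:

```python
# Hypothetical comparison: large up-front hardware price vs. lower
# hardware price plus an annual software/support subscription.

def lifetime_cost(upfront, annual_fee, years):
    return upfront + annual_fee * years

years = 5
model_a = lifetime_cost(upfront=150_000, annual_fee=0, years=years)       # classic hardware pricing
model_b = lifetime_cost(upfront=60_000, annual_fee=25_000, years=years)  # lower barrier, plus annuity

print(f"Model A (all up front):  ${model_a:,}")  # $150,000
print(f"Model B (subscription):  ${model_b:,}")  # $185,000 -- easier to buy, more over time
```

The subscription model is easier to get through a purchasing process, and it quietly shifts a third or more of the total from CapEx into ongoing costs.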
But the evolution of game pricing models did not end with subscriptions.
Anyone who has gotten sucked into the hell that is Candy Crush is already well aware of in-app purchases. The initial game is free, but if you want to get a special advantage or unlock a level, you can make an in-app purchase. They have cleverly priced the in-app purchases to feel like you are hardly spending anything. It's less than a dollar – I should just go ahead and get that spotted donut thingy! Of course, by the time you add up all those "just a dollar" moments, you end up paying far more than you ever would have up front.
The magic of this type of pricing is that most of this is not really known up front. When you first get Candy Crush, you don’t really think you are going to buy the special extras. And Candy Crush doesn’t tell you that the levels get progressively harder to the point that they are nigh impossible without a little extra help.
Before you write this off as not applicable to networking, consider a few points.
First, despite the huge open source push, there are still a lot of companies pursuing commercial-grade versions of the otherwise free software. Sure, you might buy into the open source controller, but if you need the networking version of the spotted donut thingy, what do you do? This is essentially the networking equivalent of the in-app purchase. Call it the in-arch purchase. Once you buy into a particular architecture, the switching costs are prohibitively high. If you have to pay more for the commercial software, can you really say no?
Second, some of the tiered pricing models that are taking root make it more difficult to accurately model ongoing license costs. If you are not thinking about how the costs will scale with the number of ports, users, VMs, or whatever, you might find out down the road that your solution is contributing more ongoing costs than anticipated. For example, buying one VM from Amazon might seem easy enough, but what if you need thousands? It doesn’t stay cheap forever.
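A sketch of why tiered per-unit pricing is hard to eyeball from a quote (the tier breakpoints and rates below are made up):

```python
# Hypothetical tiered per-VM license pricing: cheap at small scale,
# but the total grows much faster than the entry-level quote suggests.

TIERS = [
    (100,  50),   # first 100 VMs: $50/VM/year
    (900,  40),   # next 900 VMs: $40/VM/year
    (None, 35),   # everything beyond 1,000 VMs: $35/VM/year
]

def annual_license_cost(vm_count):
    total, remaining = 0, vm_count
    for size, rate in TIERS:
        units = remaining if size is None else min(remaining, size)
        total += units * rate
        remaining -= units
        if remaining == 0:
            break
    return total

for n in (1, 100, 1_000, 10_000):
    print(f"{n:>6} VMs: ${annual_license_cost(n):,}/year")
```

The first unit looks trivially cheap; at scale, the recurring license bill can rival the original procurement cost every single year.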
Maybe the in-arch costs are just extra features or capabilities. Or ongoing support and services. Whatever the source, these types of costs contribute to the ongoing operating expenses. And because the primary purchasing criterion is CapEx (procurement costs), burying some of these costs a little later in the product lifecycle and making them a bit smaller in magnitude (but larger in volume) will be attractive.
The punch line here is that we are on the cusp of a change in monetization strategies. You might think that pricing and costs will be transparent, but has the networking community given us a real reason to believe that to date? If you think so, consider this: why do buyers celebrate 50% discounts? It’s because pricing is ridiculously obfuscated in this industry. Until we all start expecting more, I just don’t know why this would change.
Along those lines, my colleague Bill Koss posted some facts about Plexxi costs. In the interest of transparency, it’s worth taking a look here.
[Today’s fun fact: The wettest spot in the world is located on the island of Kauai. Mt. Waialeale consistently records rainfall at the rate of nearly 500 inches per year. That’s enough to drown 7 six-foot-tall men standing on each other’s heads.]