|By Mat Mathews||
|September 3, 2014 06:00 AM EDT||
A good friend and business colleague once regaled me with his definition of a good corporate lawyer: “A good lawyer never says ‘no’; she says ‘here’s how’.” I thought this was an interesting and telling description – not because it conjured up creative interpretations of the law and loophole-sleuthing corporate counsels – but because it imagined a seasoned practitioner who understood the plasticity of her infrastructure (in this case, the law) and the end goals of her client, and therefore would often find innovative solutions that yielded business advantage. Plasticity in this context means that a seemingly rigid structure, like the law, can be deformed to meet a new need. Examples range from the mundane structuring of contracts to limit the downside of risky deals, to the industry-redefining methods of companies like Uber that challenge conventional practices and laws.
The law and the network – both meant to be broken?
A similar notion can be applied to networking infrastructure. It is often repeated that networking infrastructure is ‘rigid’ and ‘complex’. Beyond being evocative marketing terms, these words signify a level of resistance to adaptation. Marketeering aside, business leaders are in fact expressing that what they want or need to do cannot be done – either feasibly, in a timely manner, or with an appropriate risk profile – due to infrastructure obstacles. Every time connectivity needs change (think mainframe networks to multi-protocol client/server networks to IP routers/switches to remote access VPNs to high-density data center switches, etc.), a new set of technologies, platforms, protocols, and ultimately infrastructure is put in place. For many years this may have been OK, and possibly even expected. Yet for probably the past decade, the increasing pace of change of business needs and the continuous uncertainty of competitive environments have forced businesses to push harder on the aspects of their organization that prevent rapid change – that don’t exhibit plasticity.
SDN, the movement
Enter networking infrastructure, and more specifically SDN. While the canonical definition of SDN is accepted to be something about a decoupled control plane, there is also the notion of SDN the movement. This notion of SDN bears not an architectural definition, but rather embodies a user-led reaction to this lack of plasticity in their infrastructure. How are network engineers expected to say “here’s how” when their infrastructure requires generational shifts or years of standardization to catch up to yesterday’s demands? In many ways, SDN is nothing more than users’ desire to bring to their infrastructure the same adaptability demanded by the uncertainty and change they experience in their business.
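The decoupled control plane mentioned above can be illustrated with a minimal sketch: a central controller holds the global view and computes forwarding rules, while switches are reduced to bare match/action tables. All class and method names here are invented for illustration – this is not any real controller’s API.

```python
class Switch:
    """Data plane: a bare match/action table, no local routing logic."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # destination -> output port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Unknown destination -> None, i.e. punt to the controller.
        return self.flow_table.get(dst)


class Controller:
    """Control plane: the global view and the policy live here, not in the boxes."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def apply_policy(self, switch_name, routes):
        # Changing network behavior is a software update to the rule set,
        # not a new generation of hardware.
        for dst, port in routes.items():
            self.switches[switch_name].install_rule(dst, port)


ctrl = Controller()
s1 = Switch("s1")
ctrl.register(s1)
ctrl.apply_policy("s1", {"10.0.0.2": 3, "10.0.0.3": 4})
print(s1.forward("10.0.0.2"))   # -> 3
```

The point of the sketch is the division of labor: the switch knows nothing the controller has not told it, so adapting to a new connectivity need is a matter of rewriting rules in software rather than waiting out a hardware refresh cycle.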
Haven’t we heard this before?
Network plasticity is most likely not a new idea (either that, or it’s naïve and unachievable). Many a marketer has talked about the coming age of infrastructure that is fluid, dynamic, software-defined, change-ready, yada yada yada. Yet most of what is described by this fluid, dynamic, software-defined infrastructure generally relates to the shrinking, scaling, or movement of physical resources to match a desired processing need, ultimately to meet a utility cost objective. Network plasticity, however, is a more fundamental notion: connectivity needs will change ahead of generational or architectural product lifetimes, and the answer cannot be to put the business needs on hold until the products catch up. What plasticity affords is a fundamental deformation of the primary design use-case into one that was a priori unforeseen – a set of carefully planned escape valves that prevent operators from having to say, “no, we cannot.”
Will current networks bend and snap?
Many networks conceived for the world of client-server computing are being tested and stretched for the needs of highly distributed, edge-processing, no-central-data-store, scale-out applications. While the industry attempts to move the architectural needle forward with new encapsulations to remove the restrictions of L3 boundaries, bigger buffers to accommodate the predominance of server-to-server flows, new chipsets, new interface technology, better Ethernet storage traffic handling, etc., it is still not addressing the fundamental desire to prevent the need for this catch-up game in the first place. At some point, applications will decide for themselves how they would like their various components to be connected and will be able to express policy, SLA, risk profiles, and other constraints and objectives that ultimately translate into a set of network behaviors and topologies. Will the underlying infrastructure be capable of handling the resulting permutations of requirements without deriving an exhaustive and limited set of supported behaviors? Will it be able to grow with the increasingly sophisticated demands of these applications to achieve what may previously have been thought to be unfeasible? This largely depends on how we as an industry approach plasticity as an inherent infrastructure trait – perhaps the only one that really matters anymore.
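What “applications expressing constraints that translate into network behaviors” might look like can be sketched as a declarative intent fed through a translator. The field names, thresholds, and the mapping below are entirely hypothetical – no real intent API or product is implied.

```python
# An application declares what it needs, not how to wire it.
INTENT = {
    "app": "analytics",
    "endpoints": ["web-tier", "spark-cluster"],
    "latency_ms": 5,          # SLA ceiling
    "isolation": "strict",    # risk profile
    "bandwidth_gbps": 10,
}


def translate(intent):
    """Map declarative intent to a set of concrete network behaviors."""
    behaviors = []
    if intent.get("isolation") == "strict":
        behaviors.append("dedicated-vlan")      # segment the tenant
    if intent.get("latency_ms", 100) <= 10:
        behaviors.append("priority-queue")      # expedite its traffic class
    if intent.get("bandwidth_gbps", 0) >= 10:
        behaviors.append("ecmp-fabric-path")    # spread load across the fabric
    return {"endpoints": intent["endpoints"], "behaviors": behaviors}


config = translate(INTENT)
print(config["behaviors"])  # -> ['dedicated-vlan', 'priority-queue', 'ecmp-fabric-path']
```

A plastic infrastructure is one where the translator’s output space is not limited to an exhaustive, pre-enumerated catalog of supported behaviors – the combinatorics of intent should not require a new product generation to satisfy.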
[Today’s fun fact: GE's Living Environment Concept House, aka the "Plastic House" in Pittsfield, MA, was built using 45,000 lbs of various plastics throughout much of the construction, including the roof, windows, siding, plumbing, foundation, electrical, and mechanical systems. I'm guessing it's not BPA-free.]