
Changing the Way We Configure and Provision Our Networks

In enterprises we have never really made a big distinction between configuration and provisioning

Some people believe good or bad things always happen in threes. I believe you will always be able to find three (and probably more) things that are good or bad and somewhat related, but sometimes I am surprised by the apparently coincidental appearance of several closely related “things”. Last week the folks at networkheresy.com posted the second installment of their “policy in the datacenter” discussion, Cisco announced the acquisition of Tail-f, and internally at Plexxi we had several intense architectural discussions around configuration, provisioning and policy management. Maybe we can declare June CP&P month for networking.

It is generally accepted that configuration deals with the deployment of devices and applications within an infrastructure. For network devices, it covers creating a fabric, the protocols that maintain that fabric, access to and control of the device itself, management connectivity, etc. Once a network device is configured, it is a functioning element in a network.

Provisioning is more of a telco term, focused on creating the customer-facing end of a device or application. For network devices, this would cover the ports facing customers or end devices, the edge protocols required, VLANs, IP subnets, etc.

Lastly, policy defines the level of communication service that is created across the configured infrastructure through the attached provisioned interfaces. It defines what communication is and is not allowed, and with what specific services and service levels.
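To make the distinction concrete, here is a minimal sketch in Python of how the three concerns could be modeled separately. The class and field names are illustrative only and do not reflect any particular vendor's schema:

```python
# Illustrative only: a minimal data model that keeps the three concerns apart.
# All class and field names are hypothetical, not any vendor's actual schema.
from dataclasses import dataclass
from typing import List

@dataclass
class Configuration:              # device- and fabric-facing; changes rarely
    hostname: str
    fabric_protocols: List[str]   # protocols that build and maintain the fabric
    mgmt_address: str

@dataclass
class Provisioning:               # customer- and edge-facing; changes often
    port: str
    vlan: int
    ip_subnet: str

@dataclass
class Policy:                     # network-wide service intent
    name: str
    allowed_between: List[str]    # which endpoint groups may communicate
    service_level: str            # e.g. "low-latency" or "best-effort"

# A single switch ultimately receives instructions derived from all three,
# but they are authored by different teams and change at different rates.
```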

In enterprises we have never really made a big distinction between configuration and provisioning. But with the evolution to a virtualized infrastructure and more rapid client-facing changes as a result of VM creation and movement, I believe the two have enough differences that it makes sense to adopt this separation more broadly.

I catch myself using them interchangeably at times, but there are very distinct differences between them, even if all of them end up as a set of instructions to a switch or set of switches, physical or virtual. The types of instructions are different, the people responsible for them are different, and the rate of change is different.

More importantly though, the mechanism by which we instruct our network components is rapidly changing. The complexity of the configuration and policy components, and the sheer volume and rate of change of the provisioning component, are driving more automated methods of instructing our network elements what to do. We long ago adopted centralized, database-driven CP&P systems in most of our world.

Except for networking. The vast majority of instructions for network devices are still hand-crafted, or script-assisted hand-crafted. And the entirety of the instruction set for these devices lives on the switches themselves. Around that we have created systems that capture and archive those instructions for backup purposes and forensics. When something goes wrong with a device, we go find the latest backup we have and attempt to restore the service.

But this is all changing. Finally. Newer network solutions use centralized systems with real databases behind them to hold the information that needs to be stored, shared, backed up, checkpointed, logged, replicated and all the other good stuff real databases solved many years ago.
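As a rough sketch of what a real database behind the network can look like, the following Python snippet keeps versioned CP&P intent in a small SQLite store. The schema and function names are invented for illustration and are not any product's API:

```python
# A minimal sketch of a centralized, versioned CP&P store, using SQLite only
# to keep the example self-contained; schema and functions are hypothetical.
import datetime
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE intent (
    device TEXT, kind TEXT, body TEXT, version INTEGER, ts TEXT)""")

def record_intent(device: str, kind: str, body: dict) -> None:
    """Store a new version of config/provisioning/policy intent for a device."""
    cur = db.execute(
        "SELECT COALESCE(MAX(version), 0) FROM intent WHERE device=? AND kind=?",
        (device, kind))
    version = cur.fetchone()[0] + 1
    db.execute("INSERT INTO intent VALUES (?, ?, ?, ?, ?)",
               (device, kind, json.dumps(body), version,
                datetime.datetime.utcnow().isoformat()))

def latest_intent(device: str, kind: str) -> dict:
    """The store, not the switch, is the master copy we render device state from."""
    row = db.execute(
        """SELECT body FROM intent WHERE device=? AND kind=?
           ORDER BY version DESC LIMIT 1""", (device, kind)).fetchone()
    return json.loads(row[0]) if row else {}

record_intent("leaf-1", "provisioning",
              {"port": "eth1", "vlan": 100, "ip_subnet": "10.0.1.0/24"})
print(latest_intent("leaf-1", "provisioning"))
```

Because every version is retained in the store, rollback, audit and forensics fall out of the data model itself rather than depending on per-device backups.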

The biggest hump we need to get past is one of control. Regardless of where or how CP&P data is stored, the more important question is who controls the data. Even if this data is stored on a virtual or physical switch, that switch should no longer be the master of that information. There are portions of CP&P information that have network-wide meaning and should be specified in a network-wide manner. Today we manually construct network-wide provisioning or policy semantics. We have to get to a point where we define these in network-wide terms and let our tools worry about what that means for the individual network elements that create the service.

We started down this path a few years ago with abstracted policies we call Affinities: policies that are defined network wide, without any specificity about location or which elements they apply to. In Plexxi Control, similar concepts exist for some of the provisioning: certain types of data are considered global and are used and applied network wide, without you worrying about which elements they apply to.
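To illustrate the "define network wide, let the tools expand per element" idea, here is a hedged Python sketch. It is not how Plexxi Control or Affinities are actually implemented; the policy shape and expansion logic are invented for the example:

```python
# Hypothetical example of expanding one network-wide policy into per-switch
# rules; not Plexxi Control's actual mechanism.
from typing import Dict, List

# The policy names endpoint groups, not switches or ports.
policy = {"name": "web-to-db", "src": "web", "dst": "db",
          "service_level": "low-latency"}

# The controller, not the operator, tracks where group members are attached.
attachments: Dict[str, List[str]] = {
    "web": ["leaf-1:eth1", "leaf-2:eth4"],
    "db":  ["leaf-3:eth2"],
}

def expand(policy: dict, attachments: Dict[str, List[str]]) -> Dict[str, List[str]]:
    """Compile one network-wide policy into rules for individual switches."""
    rules: Dict[str, List[str]] = {}
    for src in attachments[policy["src"]]:
        for dst in attachments[policy["dst"]]:
            switch = src.split(":")[0]
            rules.setdefault(switch, []).append(
                f"permit {src} -> {dst} ({policy['service_level']})")
    return rules

for switch, switch_rules in expand(policy, attachments).items():
    print(switch, switch_rules)
```

When a VM moves and its attachment point changes, only the attachment map changes; the policy itself stays the same and the per-element rules are simply recomputed.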

Centralized or federated CP&P provides huge benefits and potential. All the backend tools exist to make it safe, replicated, audited and logged, better than any legacy network system. We just have to get our minds past the change in control and stop expecting the element to be the master source of its own data. It is one of the many things we need to accept to allow the network to change.

The post Changing the way we configure and provision our networks appeared first on Plexxi.

More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
