AI-Defined Infrastructure | @CloudExpo #AI #DX #IoT #ArtificialIntelligence

The Foundation for New Generation Business Models and Applications

In 2016, artificial intelligence (AI) reached a new peak of attention. Research and advisory firm Tractica predicts that annual worldwide AI revenue will grow from $643.7 million in 2016 to $38.8 billion by 2025. Revenue for enterprise AI applications will increase from $358 million in 2016 to $31.2 billion by 2025, a compound annual growth rate (CAGR) of 64.3%. IT and business decision makers must therefore confront the potential of AI today. For every organization, this raises the question of which technologies and infrastructure it can leverage to operate an AI-ready enterprise stack.
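
As a quick sanity check on the quoted forecast, the implied compound annual growth rate from $358 million (2016) to $31.2 billion (2025) can be recomputed in a few lines of Python (a minimal sketch; the nine-year horizon is inferred from the dates above):

    # Sanity check: CAGR implied by the Tractica forecast quoted above.
    start, end = 358e6, 31.2e9   # enterprise AI revenue, 2016 and 2025
    years = 2025 - 2016          # nine compounding periods

    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # prints roughly 64.3%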

What Is Artificial Intelligence (AI)?
In 1955, Prof. John McCarthy described the goal of AI as developing "machines that behave as though they were intelligent." "Intelligent" in this context means a vigorous system: a raw IQ container that needs unstructured input to train its senses, a semantic understanding of the world to take further actions, and a detailed map of its context to act independently and transfer experience from one context to another. In short, a system equipped with everything necessary to develop, foster and maintain knowledge.

It is our responsibility to share our knowledge with these machines as we would share it with our children, spouses or colleagues. This is the only way to turn these machines, made of hardware and software, into something we would describe as "smart": helping them become more intelligent by learning on a daily basis and laying the groundwork for a self-learning system. Research distinguishes three types of AI:

  • Strong AI: A strong AI (or superintelligence) is a self-aware machine with real thoughts, feelings, consciousness and all the necessary links. Anyone looking forward to a reality à la "Her" or "Ex Machina" will have to keep waiting. Large neural networks have millions of neurons; brains have billions. Neural networks only simulate the electrical system of a brain, while the brain also has a chemical and potentially a quantum-mechanical system. The layered modelling of deep learning networks exists to simplify training; the brain has no such restriction. Neural networks are about as far from a thinking brain as a snail is from a supersonic jet. Thus, a strong AI does not exist yet and is very far away.
  • Narrow AI: Most AI business cases focus on solving particular, narrowly defined challenges. These narrow AIs are great at optimizing specific tasks, like recommending songs on Pandora or managing analyses to improve tomato growth in a greenhouse.
  • General AI: A general AI can handle tasks from different areas and origins, shortening training time by applying experience gathered in one area to another. This knowledge transfer is only possible if there is a semantic connection between the areas; the stronger and denser this connection, the faster and easier the knowledge transition. Compared to a narrow AI, a general AI has the knowledge and abilities to improve not only tomato growth in a greenhouse but also cucumbers, eggplants, peppers, radishes and kohlrabi. In short, a general AI is a system that can handle more than one specific task.

However, one thing is obvious: without technologies such as cloud computing, AI would not be booming today. Both cloud services and progress in machine intelligence have made it easier for organizations to apply AI-based functionality to interact more closely with their customers. Companies like Airbnb, Netflix, Uber and Expedia already use cloud-based systems for AI-relevant tasks that draw on intensive CPU/GPU utilization as well as services for comprehensive computing and analysis tasks.

As part of their AI strategy, companies should evaluate AI services from different cloud providers. Another part of that strategy should be an AI-defined infrastructure. The foundation for this kind of infrastructure is a general AI that unifies three typically human capabilities, empowering an organization to operate its IT and business processes autonomously:

  • Learning: The general AI receives best practices and reasoning from experts through ongoing learning units. The knowledge is taught in granular pieces, each consisting of a discrete part of a process. In the greenhouse context, the experts teach the AI each process step by step, e.g., how to grow cucumbers, eggplants or paprika. In doing so, they share their context-based knowledge with the AI, including "what has to be done" and "why it has to be done".
  • Understanding: By creating a semantic data graph, the general AI gains an understanding of the world in which the organization pursues its IT and business objectives. The semantic data graph of a greenhouse would thus consolidate different contexts (e.g., information, characteristics and specifics of the greenhouse, the cucumber culture, the eggplant culture and the paprika culture) and enrich it on an ongoing basis (compare learning). The organization's IT plays an important role here, since all data converge in it.
  • Solving: With machine reasoning, problems are solved in ambiguous and changing environments. The general AI reacts dynamically to the ever-changing context and selects the best course of action. Based on the trained knowledge (learning) and the semantic graph (understanding), the general AI can grow more than one type of vegetable in a greenhouse. This is ensured by the growing pool of trained knowledge pieces, from which the machine selects the best combinations for problem resolution. This kind of collaborative learning shortens process time task by task, although the number of possible permutations grows exponentially as knowledge is added. Connected to a knowledge core, the general AI continuously optimizes performance by eliminating unnecessary steps and even changing routes based on learning from other contexts. Thus, the bigger the semantic data graph gets, the better and more dynamically further types of vegetables can be cultivated. (A deliberately simplified sketch of this loop follows this list.)
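
To make the learning/understanding/solving loop concrete, here is a deliberately simplified Python sketch. All names and structures are illustrative assumptions, not any vendor's actual API: knowledge is stored as discrete, context-tagged pieces, a small semantic graph links related contexts, and "reasoning" is reduced to reusing the closest related context's knowledge when a new task arrives.

    # Toy illustration of the learning / understanding / solving loop.
    # Everything here is hypothetical; a real knowledge core is far richer.
    from collections import deque

    # Learning: discrete knowledge pieces, taught per context and task.
    knowledge = {
        ("cucumber", "grow"): ["prepare soil", "set 24C", "water daily"],
    }

    # Understanding: a semantic graph connecting related contexts.
    semantic_graph = {
        "cucumber": {"eggplant", "paprika"},
        "eggplant": {"cucumber"},
        "paprika": {"cucumber"},
    }

    def solve(context, task):
        """Solving: use trained knowledge directly, or transfer it from
        the semantically closest context via breadth-first search."""
        if (context, task) in knowledge:
            return knowledge[(context, task)]
        seen, queue = {context}, deque(semantic_graph.get(context, ()))
        while queue:
            neighbor = queue.popleft()
            if (neighbor, task) in knowledge:
                # Knowledge transfer: adapt the related context's process.
                return [f"adapt for {context}: {step}"
                        for step in knowledge[(neighbor, task)]]
            seen.add(neighbor)
            queue.extend(n for n in semantic_graph.get(neighbor, ())
                         if n not in seen)
        return ["no knowledge available; request expert teaching"]

    print(solve("eggplant", "grow"))  # transfers the cucumber process

The denser the semantic graph, the shorter such transfer paths become, which mirrors the point above: a larger graph makes additional vegetable types easier to add.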

What Requirements Does AI Place on Infrastructure Environments?
Right now, AI is the technology with the potential not only to improve existing infrastructure such as cloud environments but also to drive a new generation of infrastructure technologies. As a major technology trend, AI has influenced a new generation of development frameworks as well as a new generation of hardware for running scalable AI applications.

Mobile and IoT applications place only minor runtime requirements on an infrastructure; what matters there is providing appropriate backend services. AI applications, by contrast, expect not only sophisticated backend services but also runtime environments optimized for their GPU-intensive demands. AI applications challenge the infrastructure with simultaneous task processing in very short time cycles. GPUs in particular are employed to accelerate deep learning applications: GPU-optimized applications offload the compute-intensive parts of an application to the GPU and let the CPU handle the ordinary computations, accelerating the execution of the entire application. The advantage of a GPU over a CPU is reflected in their respective architectures. A CPU is designed primarily for serial data processing and has only a few cores, while a GPU has a parallel architecture with a vast number of small cores that process tasks simultaneously. According to NVIDIA, the application throughput of a GPU is 10 to 100 times higher than that of a CPU. An infrastructure should therefore be able to provide a deep learning framework such as TensorFlow or Torch across hundreds or thousands of nodes on demand, deployed immediately with the optimal GPU configuration.
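
To illustrate the CPU/GPU split described above, here is a minimal TensorFlow (1.x-era) sketch that pins the compute-heavy matrix multiplication to a GPU while input preparation stays on the CPU. It assumes a machine with at least one CUDA-capable GPU; soft placement lets it fall back to the CPU if none is present:

    # Minimal sketch: offload the compute-intensive kernel to the GPU.
    import tensorflow as tf

    with tf.device('/cpu:0'):
        a = tf.random_normal([4096, 4096])  # input preparation on the CPU
        b = tf.random_normal([4096, 4096])

    with tf.device('/gpu:0'):
        c = tf.matmul(a, b)  # compute-intensive kernel on the GPU

    config = tf.ConfigProto(allow_soft_placement=True,
                            log_device_placement=True)
    with tf.Session(config=config) as sess:
        sess.run(c)  # the placement log shows which device ran each op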

The following (non-exhaustive) list covers the requirements an infrastructure must meet to support AI applications:

  • Support of current frameworks: The infrastructure must support AI applications built on frameworks like TensorFlow, Caffe, Theano and Torch just as it supports web applications and backend processes. It should not focus exclusively on AI frameworks, but design its portfolio around developers' interests.
  • GPU-optimized environment: The infrastructure must ensure that every AI workload can be executed, which means supporting GPU environments that provide fast computational power. Microsoft was an early mover in this area with its N-series GPU instances.
  • Management environment and tools: One of the biggest gaps in current infrastructure environments is the lack of management tools for running AI frameworks. Direct interaction between the AI frameworks and the infrastructure is particularly necessary to ensure the best balance and thus deliver the best performance.
  • AI-integrated infrastructure services: Infrastructure providers must, and will, not only support AI functionality but integrate AI as a central part of their infrastructure and service stacks. This kind of AI-defined infrastructure will not only increase the intelligence of cloud services and applications but also simplify the setup and operation of the infrastructure for the customer.
  • Machine reasoning: Infrastructure providers that offer their customers technologies for machine reasoning help them solve problems in ambiguous and changing environments. With machine reasoning, the AI environment can react dynamically to an ever-changing context and select the best course of action by choosing the best combinations of knowledge for problem resolution. The results are then further optimized with machine learning algorithms.

Infrastructure Environments and Technologies for AI
Over the years, cloud platform providers have made enormous investments in AI functionality and services. The leading public cloud providers, Amazon, Microsoft and Google in particular, are at the forefront, and several PaaS providers have also extended their offerings with AI services. The current AI technology landscape consists of three main categories:

  • Cloud machine learning (ML) platforms: Technologies like AWS Machine Learning or Google Machine Learning make it possible to build machine learning models on proprietary technologies. Although Google Cloud ML builds on TensorFlow, most other cloud-based ML services cannot execute AI applications written in, for example, Theano, Torch, TensorFlow or Caffe.
  • AI cloud services: Technologies like Microsoft Cognitive Services, Google Cloud Vision or the Natural Language APIs expose complex AI capabilities behind a simple API call. This allows organizations to develop applications with AI capabilities without investing in and owning the necessary AI infrastructure (a short sketch of such a call follows this list).
  • Technologies for private and public cloud environments: Technologies like HIRO are designed to run on top of public cloud environments like Amazon Web Services as well as private clouds such as OpenStack or VMware. They enable organizations to develop and operate transcontextual AI-based business models based on a general AI.
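
As an illustration of the "AI cloud services" category, the following sketch calls Google Cloud Vision's public REST endpoint for label detection using only the Python standard library. The API key and image file name are placeholders; in practice, Google's official client libraries are the more idiomatic route:

    # Hedged sketch: label detection via the Cloud Vision REST API.
    import base64
    import json
    import urllib.request

    API_KEY = "YOUR_API_KEY"  # placeholder; created in the Google Cloud console
    ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

    with open("greenhouse.jpg", "rb") as f:  # placeholder image
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "requests": [{
            "image": {"content": image_b64},
            "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
        }]
    }

    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # labels with confidence scores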

Further AI-relevant categories and vendors include:

  • Machine Learning: RapidMiner, Context Relevant, H2O.ai, DataRPM, LiftIgniter, SparkBeyond, Yhat, Wise.io, Sense, GraphLab, Alpine, Nutonian
  • Conversational AI/Bots: Mindfield, Semantic Machines, Maluuba, Mobvoi, KITT.AI, Clara, Automat, Wit.ai, Cortical.io, Idibon, Luminoso
  • Vision: Clarifai, Chronocam, Orbital Insight, Pilot.ai, Captricity, Crokstyle
  • Auto: nuTonomy, Drive.ai, AImotive, Nauto, Nexar, Zoox
  • Robotics: UBTECH, Anki, Rokid, Dispatch
  • Cybersecurity: Cylance, Sift Science, SparkCognition, Deep Instinct, Shift Technology, Darktrace
  • BI & Analytics: DataRobot, Trifacta, Tamr, SigOpt, Paxata, Dataminr, CrowdFlower, Logz.io
  • Ad, Sales and CRM: TalkIQ, Deepgram, Persado, Appier, Chorus.ai, InsideSales.com, Drawbridge, ReSci, DigitalGenius
  • Healthcare: Freenome, CloudMedx, Zebra, Enlitic, twoXAR, iCarbonX, Atomwise, Deep Genomics, Babylon, Lunit
  • Text Analysis: Textio, Fido.ai, Narrative
  • IoT: Nanit, Konux, Verdigris, Sight Machine
  • Commerce: BloomReach, Mode.ai
  • Fintech & Insurance: Cape Analytics, Kensho, Numerai, AlphaSense, Kasisto

At the end of the day, the ongoing development of AI technologies will influence infrastructure environments and shift them from a merely supporting role towards a model in which AI applications receive the same first-class support as today's web applications and services.

The Future Is an AI-enabled Enterprise
An AI-enabled infrastructure is an essential part of today's enterprise stack and lays the foundation for the AI-enabled enterprise, because one thing is obvious: established companies face multiple challenges today, like the often-quoted war for talent or the inability of many large corporates to change effectively. But there is a still underestimated threat called competition, not from their own peers, but from high-tech companies like Amazon, Google and Facebook that are marching unstoppably into their markets. These high-tech companies invade the familiar competitive space of established companies with enormous financial resources and by hijacking the consumer life cycle.

Amazon is just one example of a company that has already started to cut out the middleman in its own supply chain. We can be sure that the business models of companies like DHL, UPS or FedEx will look different in the future (hint: Amazon Prime Air). Furthermore, Amazon has positioned itself to become a complete end-to-end provider of goods, digital as well as non-digital. It likely won't be long until Facebook gets its banking license: access to potential customers, enough information about its users and the necessary financial resources already exist. Established companies therefore need powerful answers if they still want to exist tomorrow.

AI is one of those answers in the corporate toolkit for overcoming these competitive threats. However, time is running out for established companies; the high-tech companies threaten to become uncatchable.

More Stories By Rene Buest

Rene Buest is Director of Market Research & Technology Evangelism at Arago. Prior to that he was Senior Analyst and Cloud Practice Lead at Crisp Research, Principal Analyst at New Age Disruption and a member of the worldwide Gigaom Research Analyst Network. During this time he was considered one of the top cloud computing analysts in Germany and worldwide, one of the world's top cloud computing influencers, and among the top 100 cloud computing experts on Twitter and Google+. Since the mid-90s he has focused on the strategic use of information technology in businesses, the impact of IT on our society, and disruptive technologies.

Rene Buest is the author of numerous professional technology articles. He regularly writes for well-known IT publications like Computerwoche, CIO Magazin, LANline and Silicon.de, and is cited in German and international media, including the New York Times, Forbes, Handelsblatt, Frankfurter Allgemeine Zeitung, Wirtschaftswoche, Computerwoche, CIO, Manager Magazin and Harvard Business Manager. He is also a speaker at and participant in expert panels. He is the founder of CloudUser.de and writes about cloud computing, IT infrastructure, technologies, management and strategies. He holds a diploma in computer engineering from Hochschule Bremen (Dipl.-Informatiker (FH)) as well as an M.Sc. in IT-Management and Information Systems from FHDW Paderborn.
