
    Processor Battle for Control over AI Workloads Drives Wave of Innovation

    A fierce contest between Intel and NVIDIA for control over artificial intelligence (AI) workloads is now under way in earnest.

    At its Data-Centric Innovation Summit this week, Intel laid out its plans to displace the graphics processing units (GPUs) that over the last few years have been widely employed to build and train AI models. Adoption of GPUs by cloud service providers to run AI workloads is enabling NVIDIA to mount one of the most serious challenges to Intel's dominance of core processors in recent memory.

    AI models today are based largely on machine learning algorithms and on deep learning algorithms, also known as neural networks. Intel still dominates when it comes to applications incorporating machine learning algorithms, but more complex deep learning algorithms are today more cost-effective to run on GPUs.

    To address that issue, the Intel approach to AI will span multiple classes of processors. This week, Intel revealed it is extending instruction sets in next-generation Xeon processors, code-named Cooper Lake and Ice Lake, to enable AI models based on both machine and deep learning algorithms to run as much as 11 times faster. Navin Shenoy, executive vice president and general manager of the Data Center Group at Intel, says Cooper Lake is due out at the end of 2019, while Ice Lake processors will become available in 2020.

    “We’re reinventing Xeon,” says Shenoy.
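
    The speed-up Shenoy describes comes largely from instruction-set extensions that target the reduced-precision multiply-accumulate loops at the heart of neural network inference. The sketch below is purely illustrative, not Intel's implementation: it shows the int8-times-int8, accumulate-into-int32 pattern that such instructions execute as a single fused operation, with arbitrary sizes and scale factors.

        # Illustrative only: the int8 multiply-accumulate pattern that
        # AI-oriented instruction-set extensions are designed to speed up.
        import numpy as np

        def quantize(x, scale):
            """Map float32 values to int8 using a simple symmetric scale."""
            return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

        rng = np.random.default_rng(0)
        activations = rng.standard_normal((1, 256)).astype(np.float32)  # layer input
        weights = rng.standard_normal((256, 64)).astype(np.float32)     # layer weights

        a_scale, w_scale = 0.05, 0.02          # arbitrary quantization scales
        a_q = quantize(activations, a_scale)
        w_q = quantize(weights, w_scale)

        # int8 x int8 products accumulated into int32 -- the work a fused
        # instruction performs per group of elements in hardware.
        acc = a_q.astype(np.int32) @ w_q.astype(np.int32)

        # Rescale to float32 and compare against the full-precision result.
        approx = acc.astype(np.float32) * (a_scale * w_scale)
        exact = activations @ weights
        print("max abs error:", np.abs(approx - exact).max())

    The point is not accuracy (the quantized result is only approximate) but that each output element reduces to a long run of small-integer operations, which is exactly the work the new instructions are built to fuse.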

    Shenoy says Intel is already generating $1 billion in Xeon revenue specifically from AI applications, a number it expects to increase substantially as the total addressable market for AI processors grows to a projected $10 billion by 2022, a 25 percent compound annual growth rate. Intel clearly expects Xeon-class processors to account for the biggest segment of that market.

    But Intel is also making major AI-related investments in other processors, including Intel Nervana processors optimized for deep learning algorithms and field-programmable gate arrays (FPGAs) based on technology Intel gained when it acquired Altera. Intel is betting that a mix of processors will better address the requirements of AI applications that need access to training, inference and, soon, learning engines. FPGAs deployed alongside Intel Xeon processors, for example, will be able to overcome some of the memory and I/O limitations that developers of AI models currently encounter when relying on GPUs.
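
    As a rough sanity check on those market figures, the starting size the 25 percent growth rate implies can be backed out from the 2022 target; the base year below is an assumption, since it is not specified in the figures cited here.

        # Back out the implied starting market size from a target value and a
        # compound annual growth rate. The 2018 base year is an assumption.
        target = 10e9        # $10 billion by 2022
        cagr = 0.25          # 25 percent compound annual growth
        years = 2022 - 2018  # assumed base year of 2018
        base = target / (1 + cagr) ** years
        print(f"implied market today: ${base / 1e9:.1f} billion")  # roughly $4.1 billion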

    In fact, those limitations are one of the primary reasons Google developed its own application-specific integrated circuit (ASIC), known as the tensor processing unit (TPU), to process deep learning algorithms. Google, like most cloud service providers, makes a range of Intel CPUs, GPUs and ASICs available for training and deploying AI engines, and it recently announced plans to make TPUs available both in the cloud and at the network edge.

    Intel, in the meantime, is betting that the open source nGraph compiler project will make it simple to deploy multiple types of AI engines on top of multiple classes of processors. Other critical ongoing investment areas identified by Intel this week include natural language processing, in the form of its NLP Architect library, and the Open Neural Network Exchange (ONNX) interoperability project.
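
    The interoperability idea behind projects such as ONNX and the nGraph compiler is that a model defined in one framework can be exported to a common graph format and then compiled for whatever processor is available. A minimal sketch, assuming PyTorch is installed; the model and file name are placeholders, not anything Intel demonstrated:

        # Export a toy PyTorch model to the framework-neutral ONNX format so a
        # backend compiler can target CPUs, FPGAs or other processors.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Linear(64, 128),
            nn.ReLU(),
            nn.Linear(128, 10),
        )
        model.eval()

        dummy_input = torch.randn(1, 64)  # example input that fixes the graph's shapes
        torch.onnx.export(model, dummy_input, "classifier.onnx")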

    Shenoy made it clear that Intel views machine and deep learning algorithms as core functionality that every application will require to one degree or another. To drive awareness of that fundamental shift, Intel is also pouring resources into an AI Academy that promises to teach traditional enterprise developers how to build and employ AI models.

    The one AI technology Intel appears to have no interest in at the moment is the GPU. Intel is developing GPUs for desktops, expected sometime in 2020, but as far as the data center is concerned, GPUs are not on the Intel roadmap.

    NVIDIA, however, as the leading provider of GPUs, continues to gain ground as usage of cloud services for building AI applications expands. NVIDIA is now concentrating on making AI software simpler to employ by packaging it in containers. Chris Kawalek, a senior product marketing manager at NVIDIA, says the goal is to make it simpler for data scientists to access and deploy AI technologies that come prepackaged in Docker containers.

    “We’re increasing the size of the ecosystem,” says Kawalek.
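
    In practice, those prepackaged containers are pulled from NVIDIA's NGC registry and run with a GPU-aware container runtime. A rough sketch using the Docker SDK for Python follows; the image tag is a placeholder, and the host is assumed to already have the NVIDIA container runtime and registry access configured:

        # Pull and run a prepackaged deep learning container (illustrative tag).
        import docker

        client = docker.from_env()
        repo, tag = "nvcr.io/nvidia/tensorflow", "18.08-py3"  # placeholder NGC image
        client.images.pull(repo, tag=tag)

        # The NVIDIA runtime exposes the host's GPUs inside the container.
        logs = client.containers.run(
            f"{repo}:{tag}",
            command='python -c "import tensorflow as tf; print(tf.__version__)"',
            runtime="nvidia",
            remove=True,
        )
        print(logs.decode())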

    NVIDIA is also expanding technology alliances with storage vendors such as NetApp to address I/O issues. NetApp and NVIDIA recently announced NetApp ONTAP AI, which integrates NetApp all-flash storage systems with NVIDIA DGX supercomputers. Octavian Tanase, senior vice president for ONTAP at NetApp, says the combined offering will make it simpler for organizations to set up IT infrastructure optimized for AI workloads.

    “It eliminates the guesswork,” says Tanase. “Data scientists don’t want to have to worry about storage.”

    Dell EMC, meanwhile, countered that move with a series of Dell EMC Ready Solutions for AI based on both Intel Xeon processors and NVIDIA GPUs. Jon Siegal, vice president of product marketing for Dell Technologies, says these systems are also designed to free data scientists, who are expensive to hire, from spending too much time on IT operations.

    “Data scientists are spending too much time on non-data science tasks,” says Siegal.

    Siegal also notes that FPGAs will be an important AI option in the future, which is why the PowerEdge C4140 allows customers to use GPUs today while providing an option to install FPGAs at a later date.

    Steve Conner, vice president of solutions engineering at Vantage Data Centers, a provider of hosting services, credits NVIDIA with filling a clear AI gap. But it remains to be seen whether GPUs will be needed to process AI workloads over the long term, says Conner. Intel and Advanced Micro Devices (AMD) are both working toward redesigning motherboards to run AI workloads optimally in ways that generate less heat and cost less to operate, says Conner.

    “GPUs are a great short-term solution,” says Conner.

    But continued reliance on GPUs at scale to run massive AI workloads will create significant cooling challenges that may require increased reliance on either water cooling in the data center or more esoteric approaches, such as liquid Teflon, that are just starting to emerge, says Conner.

    Regardless of the path chosen, the one thing that is clear is that AI represents the most significant opportunity to move IT forward in recent memory, says Charles King, principal analyst at Pund-IT. The challenge is reducing the cost of running AI workloads to the point where it becomes cost-effective to apply them pervasively, says King.

    “Time is still money,” says King.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
