It’s a given that artificial intelligence (AI) requires a high level of computing power to produce a worthwhile return on the investment. But the question remains whether this calls for specialized hardware and integrated software, or whether the enterprise is better off with a commodity systems approach.
Top vendors, of course, are pulling for the integrated angle, arguing that they are better able to tailor AI’s enormous potential to key enterprise applications. As with traditional data infrastructure, the thinking is that through tight integration of high-performance computing, high-density storage and advanced networking, organizations will be able to deploy and maintain AI’s support infrastructure at lower cost and with less in-house technical expertise.
HPE recently unveiled a suite of new AI-ready HPC platforms, anchored by the newest Apollo servers and the LTO-8 tape-based storage system. Systems range from the Apollo 70, HPE’s first ARM-based HPC platform, to the Apollo 2000 Gen10 server, which offers plug-and-play deployment and the latest Nvidia Tesla V100 GPU accelerators that support deep learning and video analytics. With the LTO-8 tape system, the company says it can offer up to 30 TB of compressed capacity per cartridge, plus a high degree of protection against ransomware attacks and other forms of cybercrime. (Disclosure: I provide content services to HPE.)
Meanwhile, Lenovo is out with a new AI-ready platform built around its latest ThinkSystem server and integrated management/orchestration suite. The ThinkSystem SD350 supports the V100 GPU as well as Intel’s Xeon Scalable processors, which Lenovo says provide a powerful base on which to build both training applications and those that require inference-driven data navigation. As well, the Lenovo Intelligent Computing Orchestrator (LiCO) offers an intuitive GUI and support for leading open source AI frameworks to monitor app development, schedule workloads and coordinate activity across third-party solutions. At the same time, the company is partnering with leading research organizations, such as North Carolina State University and University College London, to devise tailored AI solutions for key industry verticals and targeted scientific research.
Dell EMC is also delving into this area with bundled solutions built around its PowerEdge line, designed to let organizations quickly implement machine learning and deep learning in private and hybrid clouds. The packages are pre-tested and validated across the server, storage, network and services layers to provide deep data insights and a high degree of automation without sacrificing security and control across the data chain. As with HPE and Lenovo, Dell EMC has turned to the Nvidia V100 GPU as the main AI acceleration engine, backed by the NVLink interconnect.
In Europe, digital transformation specialist Atos has released a new line of scalable servers featuring a unique architecture that the company says optimizes AI for business-critical and in-memory applications. The BullSequana S server utilizes Intel Skylake CPUs with GPU acceleration plus high-capacity storage to allow enterprises to quickly add machine learning and other tools to SAP HANA applications. The server can be configured with up to 32 CPU/GPU combinations, for a total of 896 cores, and up to 64 TB of non-volatile RAM in support of real-time analytics. As well, it comes with 2 PB of internal storage, enough to support data lakes and scalable virtualized environments.
The main appeal of vendor platforms for AI development is that they allow the enterprise to embark on the next phase of IT within the confines of familiar technologies and long-established provider relationships. Few organizations have the knowledge to craft working intelligent data environments, and since speed is of the essence, it naturally makes sense to go with what you know.
At some point, however, costs will become a key factor in advanced computing architectures, and if all goes as planned, AI will foster a world in which computing expertise is no longer a prerequisite for building and deploying applications and services. At that point, the enterprise probably won’t need an integrated vendor solution for its AI-driven operations, though it will likely need help deploying whatever comes next.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.