Google, during its recent online Google I/O conference, unveiled a managed service that promises to make it easier to build artificial intelligence (AI) models while also setting the stage for the unification of machine learning and IT operations.
At the same time, Google also unveiled a Language Model for Dialogue Applications (LaMDA) that promises to make it possible for chatbots to engage in more open-ended conversations, and a multitask unified model (MUM) that can be employed to respond to natural language queries against related text, images, and videos in 75 different languages.
Finally, Google also announced its fourth-generation tensor processing units (TPUs) that on average will run AI models 2.7 times faster than the previous generation of TPUs.
The managed service, dubbed Vertex AI, is part of an effort to make all the disparate services Google provides for building AI models available via either a unified application programming interface (API) or a common user interface, depending on which an IT team prefers to employ, says Craig Wiley, director of product management for Google Cloud AI.
The goal is to reduce the friction organizations frequently encounter when attempting to move models from the experimentation phase into production environments, notes Wiley.
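To make that unified surface concrete, here is a minimal sketch of training and deploying a model through the Vertex AI Python SDK (google-cloud-aiplatform). The project ID, bucket path, and column names are hypothetical placeholders, not values from Google's announcement.

```python
from google.cloud import aiplatform

# One client library spans the whole workflow: data, training, deployment.
# Project, region, bucket, and column names below are illustrative only.
aiplatform.init(project="my-project", location="us-central1")

# Register a managed dataset from a CSV file in Cloud Storage.
dataset = aiplatform.TabularDataset.create(
    display_name="customer-churn",
    gcs_source="gs://my-bucket/churn.csv",
)

# Train an AutoML classification model on that dataset.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="churn-trainer",
    optimization_prediction_type="classification",
)
model = job.run(dataset=dataset, target_column="churned")

# Deploy the trained model to a managed prediction endpoint.
endpoint = model.deploy(machine_type="n1-standard-4")
```

The same datasets, jobs, models, and endpoints are equally visible in the Cloud Console user interface, which is the point of exposing one set of services through both an API and a UI.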
Also read: AIOps Trends & Benefits for 2021
Aligning MLOps and ITOps
Today most AI models are built by data scientists who often work in isolation from the rest of the IT organization, a practice known as machine learning operations (MLOps). However, when it comes time to deploy an AI model in a production environment, data scientists don’t always fully appreciate the need to align with the processes an IT operations team employs to deploy and update applications. Vertex AI is designed to meld the AI models a data science team creates with the artifacts a software development team creates to construct an application, adds Wiley. “ML is not a separate area of computer science,” he says.
Capabilities provided via Vertex AI that enable IT teams to achieve that goal include Vertex Vizier for tuning the hyperparameters of AI models; Vertex Feature Store for storing, sharing, and maintaining version control of the features used to build AI models; Vertex Experiments to streamline the deployment of AI models into production environments; Vertex Continuous Monitoring to observe the overall process; and Vertex Pipelines to manage the data pipelines used to build AI models.
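As one illustration, the sketch below shows how a hyperparameter tuning job backed by Vizier might be launched through the same Python SDK. The training container image, parameter ranges, and metric name are assumptions for illustration only.

```python
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-bucket/staging",
)

# A custom training job wrapping an (assumed) trainer container image.
worker_pool_specs = [{
    "machine_spec": {"machine_type": "n1-standard-4"},
    "replica_count": 1,
    "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
}]
custom_job = aiplatform.CustomJob(
    display_name="churn-training",
    worker_pool_specs=worker_pool_specs,
)

# Vizier searches the parameter space to maximize the reported metric.
tuning_job = aiplatform.HyperparameterTuningJob(
    display_name="churn-tuning",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
        "batch_size": hpt.DiscreteParameterSpec(values=[32, 64, 128], scale="linear"),
    },
    max_trial_count=20,
    parallel_trial_count=4,
)
tuning_job.run()
```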
Overall, Google claims that AI models built using Vertex AI will require nearly 80% fewer lines of code to train than rival platforms.
It’s unclear at what rate MLOps will converge with traditional IT operations, but at this point that convergence is all but inevitable. If every application is going to be infused, to varying degrees, with AI models, it’s not practical to construct an AI model in a way that isn’t aligned with the DevOps processes that organizations use to build modern applications. AI models are also subject to drift, which means DevOps teams will also need to replace AI models as new data sources become available or the assumptions used to construct the AI model are no longer valid.
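Detecting that drift does not require anything exotic: a scheduled job that compares live feature distributions against the training distribution is often enough to know when retraining is due. The following self-contained Python sketch uses a two-sample Kolmogorov-Smirnov test; the data is synthetic and the significance threshold is an assumption.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.05):
    """Return True if the live distribution differs significantly
    from the training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Example: synthetic training data vs. a live sample whose mean has shifted.
rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.4, scale=1.0, size=1000)

if feature_drifted(train, live):
    print("Drift detected: schedule retraining and redeployment")
```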
Ultimately, organizations will come to view AI models as just one more type of artifact that needs to be managed like any other. That artifact may have been constructed differently than traditional software, but that does not mean it should be managed any differently than any other software artifact.
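In practice, that means a trained model gets registered, versioned, and promoted through the same delivery pipeline as any other build output. A minimal sketch using the Vertex AI SDK, with illustrative names and an assumed prebuilt scikit-learn serving image, might look like this:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Register the exported model files as a versioned, deployable artifact,
# much as a CI/CD pipeline would publish any other build output.
model = aiplatform.Model.upload(
    display_name="churn-model",
    artifact_uri="gs://my-bucket/models/churn/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-24:latest"
    ),
)

# Promotion to production is then an ordinary deployment step.
endpoint = model.deploy(machine_type="n1-standard-4")
```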
Also read: Natural Language Processing Will Make Business Intelligence Apps More Accessible