Serverless computing has emerged as a way for IT organizations to run and scale their IT infrastructure more cost effectively. It does so in two important ways.
The first is that organizations pay only for the computing they actually consume. A cloud service provider typically bases the cost of a serverless computing framework on the number of requests for services made and the duration of the workload, rounded up to the nearest millisecond. At a rate of, for example, $0.00001667 per GB-second used, a serverless computing provider can enable IT teams to save a significant amount of money on workloads that only need to run for short periods of time.
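As a rough illustration, the Python sketch below estimates a monthly bill under that pricing model. The $0.00001667 per GB-second rate is the figure quoted above; the per-request charge and the sample workload numbers are assumptions for the sketch, and actual provider pricing will vary.

```python
# Back-of-the-envelope estimate of a usage-based serverless bill.
# The per-GB-second rate matches the figure quoted in the article; the
# per-request charge and workload figures below are illustrative assumptions.
import math

PRICE_PER_GB_SECOND = 0.00001667        # compute charge quoted above
PRICE_PER_REQUEST = 0.20 / 1_000_000    # assumed request charge

def estimate_monthly_cost(requests: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate one function's monthly bill: request charges plus GB-seconds consumed."""
    billed_ms = math.ceil(avg_duration_ms)  # duration rounds up to the nearest millisecond
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# Example: 5 million invocations a month, 120 ms each, 512 MB of memory -> roughly $6
print(f"${estimate_monthly_cost(5_000_000, 120, 512):.2f}")
```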
The second way serverless computing proves cost effective is by reducing the amount of code a developer needs to include in an application. For example, rather than embedding an analytics capability within an application, a developer can add a small amount of code, known as a function, that invokes an external serverless computing service to execute that task. Not only does that approach reduce the amount of infrastructure the application consumes, it also enables developers to build applications faster.
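A minimal sketch of that pattern is shown below, assuming an AWS Lambda backend and the boto3 SDK for Python; the function name and payload are hypothetical.

```python
# Sketch of offloading work to an external serverless function instead of
# bundling the capability into the application itself.
import json
import boto3

lambda_client = boto3.client("lambda")

def record_page_view(event_data: dict) -> None:
    """Hand analytics processing off to a cloud function (hypothetical name)."""
    lambda_client.invoke(
        FunctionName="analytics-processor",              # hypothetical function name
        InvocationType="Event",                          # asynchronous: fire and forget
        Payload=json.dumps(event_data).encode("utf-8"),
    )
```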
Creating Serverless Efficiencies
At a time when more organizations are focused on reducing IT costs as a result of the economic downturn brought on by the COVID-19 pandemic, it’s not surprising to see IT organizations employing these capabilities with greater frequency. A report published last month by CloudHealth by VMware, a provider of IT monitoring tools, based on the consumption of IT resources by 500 organizations, finds that usage of serverless computing frameworks increased 13.5% from January through September of 2020.
While the same report notes that spending on cloud services increased during the period, the percentage of the budget allocated to cloud computing resources declined 4.5%. Increased reliance on serverless computing, along with other cloud optimization techniques, played a role in enabling organizations to consume cloud computing resources more efficiently, says CloudHealth by VMware CTO Joe Kinsella. “There’s been a lot of budget overrun in the cloud,” he notes.
The reason serverless computing is not used more broadly to contain cloud costs is that it is designed around small, discrete bits of code known as functions. Building an application made up of thousands of functions is not especially practical: it can be done in theory, but the overhead required to consistently orchestrate thousands of functions in the right sequence is significant. Most long-running application workloads will instead be deployed as part of a monolithic or microservices-based application that employs functions to invoke external computing resources for very specific, well-defined tasks.
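The sketch below shows what one of those narrowly scoped functions might look like, assuming an AWS Lambda-style handler; the task (normalizing an address record) and the event shape are hypothetical.

```python
# A function that performs one well-defined task while the long-running
# application logic remains in a conventional monolith or microservice.
def handler(event, context):
    """Lambda-style entry point; the event shape here is an assumption for the sketch."""
    record = event.get("record", {})
    normalized = {
        "street": record.get("street", "").strip().title(),
        "postal_code": record.get("postal_code", "").replace(" ", ""),
        "country": record.get("country", "US").upper(),
    }
    return {"statusCode": 200, "body": normalized}
```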
Serverless Computing Platforms
The most widely employed serverless computing framework today is the Lambda service provided by Amazon Web Services (AWS). However, rival cloud service providers have embraced open source serverless computing frameworks, typically deployed on top of a Kubernetes cluster, as an alternative to the proprietary Lambda service that locks customers into AWS. Interoperability among these open source frameworks is achieved mainly through Knative, open source middleware originally developed by Google that is now widely supported by cloud service providers other than AWS.
TriggerMesh, a provider of integration software based on Knative, has a platform that allows applications to invoke AWS Lambda services via Knative. This capability makes it possible for IT organizations to create event-driven applications that can trigger functions in real time across a multicloud computing environment, says TriggerMesh CEO Mark Hinkle.
Rather than relying on traditional batch-oriented updates to applications that typically occur once a day, Hinkle notes, it becomes possible to continuously update multiple applications in real time without consuming a massive amount of compute resources. “You can make it as little as you need,” he says.
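A sketch of that event-driven pattern is shown below, assuming Knative delivers CloudEvents over HTTP to a small Python (Flask) service; the event fields and the apply_update helper are illustrative assumptions.

```python
# Minimal receiver for incremental, event-driven updates: each change is
# applied as it arrives rather than waiting for a nightly batch job.
from flask import Flask, request

app = Flask(__name__)

def apply_update(event_type: str, payload: bytes) -> None:
    # Hypothetical helper: push the incremental change into the downstream application.
    print(f"applying {event_type}: {payload!r}")

@app.route("/", methods=["POST"])
def receive_event():
    # Knative delivers CloudEvents in binary mode: metadata arrives in ce-* headers.
    event_type = request.headers.get("ce-type", "unknown")
    apply_update(event_type, request.get_data())
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```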
Of course, serverless computing frameworks will also have a major impact on IT in ways that organizations don’t directly invoke. Vendia, for example, has created a service for processing transactions that makes use of NoSQL databases running on a serverless computing framework, invoked as needed. The service is intended to provide a faster alternative to blockchain databases, which don’t scale especially efficiently, and Vendia is making a case for using functions to trigger transactions that can be processed in milliseconds across a distributed network of cloud services, says Shruthi Rao, chief business officer for Vendia.
“We enable organizations to share both data and code across a different ecosystem,” says Rao.
While a serverless computing framework plays a key role in enabling those transactions to be processed, the organizations employing the Vendia service never directly interact with it.
What’s Next for Serverless Computing
While serverless computing has been around for more than a decade, the concept didn’t really start gaining traction until AWS unveiled Lambda in 2014. Other cloud service providers followed suit over the next two years. Fast forward six-plus years and it becomes clear just how long it takes for IT organizations to absorb a fundamental change to the way they build and deploy applications.
However, as microservices-based applications are deployed more widely across the enterprise, there’s no doubt serverless computing will play a larger role in enterprise IT environments in the months and years ahead. The challenge now is not so much understanding how serverless computing works as determining how best to employ it for the use case at hand.