During its online Red Hat Summit this week, Red Hat announced it is donating software valued at $551 million to Boston University (BU) to advance open source software development and hybrid cloud computing.
The software contribution is the largest Red Hat has ever made to any single institution, said Hugh Brock, director of research for Red Hat. “The BU relationship is unique,” he said.
Red Hat is also renewing, for five years, an existing collaborative research and development initiative valued at $20 million. The partnership is rooted in a Massachusetts Green High Performance Computing Center (MGHPCC) initiative that includes BU, Harvard University, and the University of Massachusetts. The ultimate goal is to build a hybrid cloud computing environment based on high-performance computing (HPC) platforms that researchers anywhere in the world could access as a cloud service, said Brock.
Researchers would no longer need to be concerned about the amount of HPC resources that any one institution might make available to them. All HPC platforms would be accessed via a common pool of infrastructure that could scale up or out as required.
Finding a Workload Solution
Red Hat’s research and development effort comes at a critical time for enterprise IT organizations. Most IT teams today are managing at least one cloud in addition to an on-premises IT environment. Over time, IT teams are expanding the number of cloud platforms they employ, largely to meet the unique requirements of different classes of workloads.
The challenge IT teams face today is that each platform is managed in isolation from all the others, typically by a dedicated team using a console optimized for that single platform. Each platform added to the IT firmament within an enterprise therefore winds up increasing the total cost of IT, as additional management tools are acquired along with specialists who know how to employ them.
In the wake of the economic downturn brought on by the COVID-19 pandemic, IT teams are now trying to serve two masters. Organizations want to deploy workloads wherever they see fit while, at the same time, reducing costs by centralizing the management of what is becoming a very extended enterprise.
Today, HPC environments run some of the most complex workloads, including AI models based on machine learning algorithms that consume massive amounts of data. Research that enables those classes of applications to run across a highly distributed computing environment will inevitably trickle down to enterprise IT environments.
Utilizing Application Environments
In the meantime, application environments will continue to become more complex. Microservices-based applications built on containers, Kubernetes, and serverless computing frameworks, for example, are being deployed with greater frequency alongside legacy applications built around batch-oriented processes.
Modern applications, in contrast, make greater use of event-driven architectures to process and analyze data in near real time, at the point where it is created and consumed. The unified open source management framework required to manage such a diverse portfolio of applications, one that can be easily extended to a wide range of emerging and legacy computing platforms, does not yet exist. It is clear, however, that some of the best minds in the world are working on building it.
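To make the batch-versus-event-driven contrast concrete, here is a minimal sketch in Python using only the standard library. The record data and function names are hypothetical, invented purely for illustration; a real deployment would sit behind a message broker rather than an in-process queue.

```python
import queue
import threading
import time

# Legacy, batch-oriented style: records accumulate and are
# processed together in one scheduled run.
def run_batch(records):
    results = [r.upper() for r in records]  # process the whole set at once
    print(f"batch processed {len(results)} records")

# Event-driven style: each record is handled the moment it arrives.
def run_event_consumer(events):
    while True:
        record = events.get()   # block until an event arrives
        if record is None:      # sentinel value signals shutdown
            break
        print(f"event handled in near real time: {record.upper()}")

if __name__ == "__main__":
    # Batch: wait for the full data set, then process it.
    run_batch(["alpha", "beta", "gamma"])

    # Event-driven: a consumer thread reacts to each event as produced.
    events = queue.Queue()
    consumer = threading.Thread(target=run_event_consumer, args=(events,))
    consumer.start()
    for record in ["alpha", "beta", "gamma"]:
        events.put(record)      # the producer emits an event
        time.sleep(0.1)         # simulate events arriving over time
    events.put(None)            # tell the consumer to stop
    consumer.join()
```

The design tension is the same one that makes unified management hard at scale: the batch job cannot start until its entire input exists, while the event consumer produces results continuously, so the two styles place very different demands on the platforms that run them.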