The ongoing question in data center hardware convergence is whether to adopt simple convergence or go all the way to full hyperconvergence.
Naturally, different solutions offer different features, but the general rule of thumb is that converged infrastructure (CI) is fairly loosely integrated, so each component (compute, storage, networking) can be deployed and managed independently, albeit in a more cohesive manner than in traditional solutions. Hyperconverged infrastructure (HCI), by contrast, features fully modular hardware that supports software-defined architectures and federated resource consumption.
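The distinction is easier to see in code. Below is a minimal Python sketch of the two management models; every class and attribute name here is hypothetical, not any vendor's API. In the CI model each tier keeps its own lifecycle, while in the HCI model resources are pooled across identical nodes:

```python
from dataclasses import dataclass

# Hypothetical models: real CI and HCI stacks expose vendor-specific
# management APIs; none of these names come from an actual product.

@dataclass
class ComputeNode:
    cores: int
    ram_gb: int

@dataclass
class StorageArray:
    capacity_tb: float

@dataclass
class NetworkFabric:
    ports: int

class ConvergedStack:
    """CI: tiers are validated together but keep independent lifecycles."""
    def __init__(self, compute: ComputeNode, storage: StorageArray,
                 network: NetworkFabric):
        self.compute = compute    # each tier can be upgraded or
        self.storage = storage    # repurposed on its own, e.g. the
        self.network = network    # array folded back into legacy kit

class HyperconvergedCluster:
    """HCI: identical nodes feed one software-defined resource pool."""
    def __init__(self):
        self.nodes: list[tuple[ComputeNode, float]] = []

    def add_node(self, node: ComputeNode, local_storage_tb: float):
        # Scaling is node-at-a-time: compute and storage grow together
        # and cannot be decoupled from the cluster software.
        self.nodes.append((node, local_storage_tb))

    @property
    def pooled_storage_tb(self) -> float:
        # Storage is federated across nodes rather than a separate array.
        return sum(tb for _, tb in self.nodes)

ci = ConvergedStack(ComputeNode(64, 512), StorageArray(100.0), NetworkFabric(48))
hci = HyperconvergedCluster()
hci.add_node(ComputeNode(32, 256), local_storage_tb=20.0)
hci.add_node(ComputeNode(32, 256), local_storage_tb=20.0)
print(ci.storage.capacity_tb)     # 100.0 -- managed as its own tier
print(hci.pooled_storage_tb)      # 40.0  -- one federated pool
```

The pooled model is what drives the cost dynamics discussed next: an HCI node cannot be peeled away from the cluster software the way a CI storage array can be redeployed on its own.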
A key differentiator between the two approaches is start-up costs, says Todd Pavone, president, Dell EMC Consulting Services at Dell Technologies. In an interview with business.com, he notes that converged solutions can generally be implemented using hardware that was already budgeted for expansion, and can therefore be folded into traditional infrastructure should the program not succeed. HCI, on the other hand, cannot be decoupled from software, which means initial costs are higher and there is less flexibility in repurposing the hardware. In both cases, however, the real savings are derived from lower ongoing support and maintenance costs.
But if a little convergence is good for the enterprise, why would anyone hesitate to go all-in on HCI? One of the key factors is disruption. A recent survey by ESG showed that most IT executives have a strong desire to maintain existing infrastructure, processes and workflows even as they try to make their data centers more cloud-like. CI allows them to meet new scalability, reliability and performance requirements while preserving the ability to repurpose resources independently of newly virtualized environments. Meanwhile, HCI is seen as an effective solution for VDI, email and other tier-2 workloads, particularly at remote sites and branch offices where space is limited.
So what is the typical enterprise to do, given the various pros and cons of each approach? NetApp’s Dhruv Dhumatkar advises organizations to turn the traditional deployment model on its head: instead of launching a technology first and then figuring out how to use it, take stock of application needs, business objectives and stakeholder requirements, and then craft the infrastructure that best fulfills all of those criteria. Do you need to simplify delivery of private cloud or IT resources? HCI is probably your best bet. Interested in flexible deployment options and high levels of customization? Try CI. Either way, it helps to deploy best-of-breed solutions that offer top-notch vendor support and guidance.
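Reduced to its essentials, that requirements-first triage looks something like the toy Python function below. The three inputs are illustrative placeholders for a real assessment, not an exhaustive checklist:

```python
def recommend_architecture(simplify_private_cloud: bool,
                           need_customization: bool,
                           remote_or_branch_site: bool) -> str:
    """Toy requirements-first triage; the flags stand in for a real
    review of applications, business objectives and stakeholders."""
    if simplify_private_cloud or remote_or_branch_site:
        return "HCI"            # simplified, appliance-like delivery
    if need_customization:
        return "CI"             # independent tiers, flexible deployment
    return "evaluate both"      # no clear signal; pilot before committing

print(recommend_architecture(True, False, False))    # -> HCI
print(recommend_architecture(False, True, False))    # -> CI
```

The point is not the code but the ordering: requirements feed the architecture decision, rather than the architecture dictating which requirements get met.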
Regardless of what kind of hardware is in play, however, the most important aspect of any data environment is the people using and maintaining it. Proper training in new data environments is the best way to achieve optimal results, particularly when embarking on entirely new work paradigms like DevOps and automated, intent-based infrastructure provisioning.
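For readers encountering intent-based provisioning for the first time, the core idea is declarative: operators state a desired end state and a controller continuously reconciles reality toward it. Here is a minimal sketch, with hypothetical names throughout (Kubernetes is the best-known production example of the pattern):

```python
# Minimal reconcile loop illustrating intent-based provisioning.
# All names are hypothetical; production controllers add drift
# detection, retries, rate limiting and much more.

def provision(service: str) -> None:
    print(f"starting one instance of {service}")

def deprovision(service: str) -> None:
    print(f"stopping one instance of {service}")

def reconcile(desired: dict, actual: dict) -> None:
    """Drive the actual state toward the operator's declared intent."""
    for service, want in desired.items():
        have = actual.get(service, 0)
        for _ in range(want - have):      # scale up toward intent
            provision(service)
        for _ in range(have - want):      # scale down toward intent
            deprovision(service)
        actual[service] = want

desired = {"web": 3, "cache": 2}   # the declared end state
actual = {"web": 1}                # what is currently running
reconcile(desired, actual)         # converges actual to desired
```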
After all, even the fastest sports car in the world is of little use to those who don’t know how to drive.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.