
Hyper-convergence: a quick look at data centre infrastructures

Hyper-convergence - a shift in data centre infrastructures 

When it comes to data centre infrastructures, it's safe to say they are incredibly complex systems. Tens, if not hundreds, of thousands of pages have probably been written on the subject in an attempt to cover everything there is to know about their design and inner workings. But in recent years we've seen a rise in the use of 'hyper-converged' systems, which are often billed as the next big thing in data centre infrastructure design. Perhaps you're wondering whether hyper-convergence is right for you or your organisation, or maybe you aren't sure what it is at all. Today, we're attempting to 'hyper-converge' all the information out there into a quick, easy-to-digest article that'll give you a baseline understanding of what it is and how it differs from other data centre infrastructures.


What is hyper-convergence and how is it different? 

In the grand scheme of things, hyper-convergence isn't anything new on the data centre scene; it is essentially the 21st-century equivalent of what, 40+ years ago, was more commonly known as the mainframe.

By this we mean that hyper-convergence is, at heart, a traditional data centre infrastructure that has been 'compressed' from multiple pieces of hardware into just one or two 'units', which use software to perform many of the tasks and functions previously handled by the hardware they replace (or down-size). The end result is a system capable of delivering specific utilities to multiple end users. But, despite what first impressions might suggest, is it really the right upgrade for everyone, regardless of their current data centre set-up?


So what are the different types of data centre infrastructures and how do they compare? 

 

Traditional

The first type of data centre infrastructure is the traditional architecture, which has been the go-to design for roughly the last 30 years and is arguably the closest modern-day relative of the mainframe. Often termed the DIY option, these set-ups keep most of the necessary hardware separate and so tend to have larger footprints. In the long run they are prone to becoming less efficient as stacks are forced to scale out, eventually introducing latency, higher costs and more maintenance.


Since multiple pieces of hardware form the constituent parts of the system, and everything is hand-picked by the administrator, the system can be fully controlled in every aspect: price per piece (since hardware can be purchased individually from different vendors), security protocols, policies, resource management and so on. But with this freedom comes a trade-off: component validation. Because the components are not all sourced from the same vendor, administrators are left with a great deal more work, both at deployment and in the long run. Everything needs to be validated for compatibility manually, in house, which usually means carefully planning and testing proposed set-ups. On top of that, management, configuration and troubleshooting have to be done per device, which can be quite time-consuming, especially for smaller teams.

Another commonly cited drawback is that these architectures are designed as 'silos'. This can make scalability harder for organisations with smaller budgets, as stepping up capacity is not as simple as expanding storage or compute alone - scaling one element is tied to the entire 'silo' structure. As a result, systems are often built to be future-proof from day one, which can mean spending well over the initial budget - difficult to justify in sectors and businesses where cash flow or funding is hard to come by. It can also lead to over-estimating future needs, and therefore over-spending on unnecessary or redundant equipment.

Converged  

Converged infrastructure seeks to combat many of the issues with traditional infrastructures by combining key, previously separate elements (commonly compute, storage and network) into a single, coherent system with a substantially smaller footprint.


With the whole system designed and built by a single vendor, the hardware is validated 'out of the box', removing the need to dedicate IT resources to configuration, forward planning and testing. This is often the preferred solution for businesses looking to save time, money and resources.

The approach can still prove limiting, however, as scaling versus budget once again comes into play - rather than being locked into 'silos', organisations are instead locked into 'blocks', which can make upgrading tricky to get right in the long run. Perhaps a small increase in capacity is needed to accommodate a handful of new employees, but purchasing a whole new block goes far beyond those requirements. Unfortunately, it's block or no block, and the line here is fairly solid. There is the option of buying a different block from another vendor, one closer by design to the administrator's actual requirements, but this raises the issue of compatibility with existing blocks - which, before long, could lead to a situation not dissimilar to the one that arises with traditional infrastructure hardware.

That said, converged systems remove a great deal of hassle from the design and deployment stages, and still offer the freedom to fine-tune each component (to meet certain policies or requirements) whilst saving time and resources compared to traditional infrastructures. This makes them a great solution for schools, smaller businesses or specific departments within larger enterprises, where end-user requirements aren't likely to see any real increase, since employee or student numbers are limited by the size of the facilities in which they operate. A mid-sized secondary school with a capacity of around 1,200 students will, in ten years' time, still have a capacity of around 1,200 students - unless, of course, the site is extended or existing rooms are re-purposed. Even then, new blocks can be purchased and integrated to meet the increase in end-user numbers, which will generally scale in much larger increments.

Hyper-converged

Hyper-converged infrastructures are, in effect, the spiritual successors to converged infrastructures, condensing the system even further by merging the software and hardware of a conventional converged system into the most compact solution available (short of cloud services). Occupying only one or two units, as opposed to entire racks, and using x86-based compute with software-defined storage, a hyper-converged system combines almost everything - storage, servers, networking and virtualisation - into a cohesive whole. This allows for advanced system management and swift deployment, making it ideal for smaller IT operations that need a cost-effective, flexible solution with easy scalability and centralised support.
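To make that scale-out model a little more concrete, here is a minimal sketch in Python (with entirely hypothetical node specifications) of the basic idea: each node bundles compute, memory and storage, so cluster capacity only ever grows in whole-node increments, with every resource scaling together.

# A minimal sketch of how hyper-converged scaling works: each node bundles
# compute, memory and storage, and the cluster grows by adding whole nodes.
# The node specification below is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    cpu_cores: int
    ram_gb: int
    storage_tb: float

def cluster_capacity(nodes: list[Node]) -> dict:
    """Aggregate capacity of the cluster - every resource scales together."""
    return {
        "cpu_cores": sum(n.cpu_cores for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
    }

# Example: start with two identical nodes, then scale out by adding one more.
standard_node = Node(cpu_cores=32, ram_gb=256, storage_tb=8.0)
cluster = [standard_node, standard_node]
print(cluster_capacity(cluster))                    # initial capacity
print(cluster_capacity(cluster + [standard_node]))  # after adding a node

The point of the model is simply that you can't expand storage without also buying more compute, and vice versa - which is both the convenience and the constraint of hyper-convergence.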


As with the other infrastructures, it still has its drawbacks. In condensing all of this hardware and replacing many of its functions with software, the raw power of the unit itself is more limited, meaning use case and workload type are key factors in choosing this architecture over the others. Equally, depending on which company supplies the hyper-converged system, there may be additional software costs that would not apply to the two other infrastructures, which could negate the savings on hardware.

Although hyper-converged systems are marketed as ideal for those who want a smaller starting point that they can then scale in as many steps as needed, in certain situations it can be wiser to invest in a larger converged system if end-user demand is expected to grow. Depending on circumstances, upgrading one step at a time with a hyper-converged system could end up costing more over an extended period than a steeper initial upfront investment.
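As a rough illustration of that trade-off, the short Python sketch below compares the cumulative cost of buying one extra hyper-converged node per year against a single, larger converged block bought upfront. The prices are invented purely for the arithmetic; real quotes will vary by vendor and workload.

# Rough cost comparison: scaling a hyper-converged cluster one node at a
# time versus buying a larger converged block upfront.
# All figures are hypothetical and for illustration only.
HCI_NODE_COST = 25_000          # cost per additional hyper-converged node
CONVERGED_BLOCK_COST = 120_000  # upfront cost of a larger converged block

def hci_total_cost(nodes_purchased: int) -> int:
    """Cumulative spend after buying a given number of nodes."""
    return nodes_purchased * HCI_NODE_COST

# Buying one node per year for six years:
for year in range(1, 7):
    total = hci_total_cost(year)
    cheaper = "HCI" if total < CONVERGED_BLOCK_COST else "converged block"
    print(f"Year {year}: HCI spend {total:>7,} vs block {CONVERGED_BLOCK_COST:,} -> {cheaper} cheaper so far")

In this made-up example the step-by-step approach is cheaper for the first few years but overtakes the upfront investment by year five - exactly the kind of break-even point worth estimating before choosing between the two.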

There's also software-defined storage to consider. Despite its advantages, it still has its pitfalls: firmware updates and security patches, difficult-to-identify bottlenecks, standardisation, and the relative difficulty of getting support for software versus hardware are all worth thinking about, since each comes with its own challenges.

The Verdict

When it comes to IT, unfortunately, there's never a definitive answer to whether solution X will solve problem Y. Data centre infrastructures are no different - almost every decision has to be made on a case-by-case basis. But that's why we have experts on hand to help, whether you're starting a new company and need a brand-new IT suite or Future Proofing your School's IT.

If you're looking to better understand what systems you may need, and which could offer you the optimal price-performance ratios, then get in touch with our helpful staff today. With over 30 years of experience in the industry, our IT experts are ready to offer you your perfect solution.

You can get in touch by filling out the form below or by giving us a call on 02392 322500. Or feel free to leave us a question in the comments below and we'll get back to you as soon as we can.

Posted in Tech


Published on 25 Mar 2020

Last updated on 25 Mar 2020
