
How do virtual IT solutions store data?

For several years, IT administrators and managers have been under pressure to streamline the costs and hardware in their data centres. As a result, more and more servers in data centres are now run as virtual machines (VMs), meaning fewer physical servers and lower energy, hardware and maintenance costs.

SDS and hyper-converged solutions

To manage the increasing numbers of VMs and virtual servers, many organisations turn to a Software-Defined Storage (SDS) solution, which establishes a software layer to manage and combine all the necessary data centre components, such as compute, servers, storage, networking and security. Whilst these software solutions can theoretically run on a wide range of hardware products, in practice it is wise to use hardware that has already been tested and approved by the software vendor.
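To make the idea concrete, here is a minimal sketch of an SDS control layer, assuming hypothetical class and method names rather than any vendor’s actual API: mixed hardware is registered with a controller, which pools its capacity and hands out virtual volumes.

```python
# Minimal sketch of the SDS idea: a software layer that pools capacity
# from heterogeneous hardware and hands out virtual volumes to VMs.
# All class and method names here are illustrative, not a vendor API.

class Backend:
    """A physical device or array contributed to the storage pool."""
    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb

class SDSController:
    """The software layer: one logical pool, whatever sits beneath it."""
    def __init__(self):
        self.backends: list[Backend] = []
        self.volumes: list[int] = []   # provisioned volume sizes in GB

    def add_backend(self, backend: Backend) -> None:
        self.backends.append(backend)

    @property
    def pool_capacity_gb(self) -> int:
        return sum(b.capacity_gb for b in self.backends)

    def provision_volume(self, size_gb: int) -> str:
        # A VM receives a virtual volume; it never sees the physical device.
        if sum(self.volumes) + size_gb > self.pool_capacity_gb:
            raise RuntimeError("pool exhausted")
        self.volumes.append(size_gb)
        return f"vvol-{len(self.volumes)}"

# Mixed hardware, one logical pool:
sds = SDSController()
sds.add_backend(Backend("vendor-array", 4000))
sds.add_backend(Backend("x86-server-disks", 2000))
print(sds.provision_volume(500), "from a", sds.pool_capacity_gb, "GB pool")
```

That indirection is the whole point of SDS, and it is also why recovering lost data later means unpicking the mapping between virtual volumes and physical devices.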

On the other hand, hyper-converged infrastructure systems are the latest development in the effort to make IT more efficient and less costly. These solutions integrate several technologies seamlessly and manage them as a single system. Hyper-converged systems merge previously separate concepts, such as converged infrastructure, traditional storage and SDS, into a single, server-based solution. Like an SDS solution, a hyper-converged solution is based on virtualisation and software-based management, but it is coupled with the hardware too. Simply put, hyper-converged systems are infrastructure systems with a software-centric architecture, integrating compute (CPU), storage, networking, virtualisation resources and other technologies in a single hardware box, all supplied by one vendor.

From a technical standpoint, users of these solutions gain significant benefits by managing all infrastructure resources and VMs from a single point of administration. What’s more, all data centre resources are brought into the resource stack, so only a single shared resource pool is used. By delivering virtualisation, storage, compute, networking, management and data protection in a scalable appliance, a company can manage a complex infrastructure with ease.

Companies using hyper-converged solutions can also avoid spending on expensive proprietary hardware, as these systems run on cheaper and more readily available x86 commodity hardware. Unlike integrated systems, upgrades are therefore fairly straightforward, and failed hardware can be replaced in much smaller units. Overall, these solutions don’t need as much storage or bandwidth as integrated solutions, which saves not only hardware costs but also energy consumption.

Hyper-converged storage vs software-defined storage

Where is the data actually stored?

There isn’t a simple answer to this question, as it depends heavily on the product used. In a nutshell, SDS and hyper-converged solutions consist of several different data structure layers. User data is located in the deepest layer, whilst other technologies make up the data layers on top of it. As outlined in the graphic above, four layers make up the final data structure. Starting from the top: the highest data layer is the one created by the SDS controller, which holds the information about the virtual storage arrays. The next layer is the virtualisation layer created by the hypervisor, and beneath this lies the server layer. Lastly, the bottom layer is the physical media itself.

In contrast to SDS, in a hyper-converged solution the layer created by the hypervisor is the highest layer of the data structure. The information for the SDS controller software sits underneath this layer, followed by a layer from one of the attached nodes, and lastly the user data sits inside this ‘container’. This is challenging enough to understand in itself, but there’s more…
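Before moving on, the two layer orderings can be sketched as simple stacks. The labels below are descriptive paraphrases of the layers named above, not vendor terminology; the key point is that a recovery engineer has to work down through every layer before reaching the user data.

```python
# Simplified stacks of the data-structure layers described above.
# A recovery has to work down through every layer above the user data.

SDS_LAYERS = [
    "SDS controller (virtual storage array information)",  # top
    "virtualisation layer created by the hypervisor",
    "server layer",
    "physical media holding the user data",                # bottom
]

HYPERCONVERGED_LAYERS = [
    "layer created by the hypervisor",                     # top
    "SDS controller software information",
    "layer from one of the attached nodes",
    "user data inside this 'container'",                   # bottom
]

def recovery_path(label: str, layers: list[str]) -> None:
    """Print the order in which the layers must be peeled back."""
    print(label)
    for depth, layer in enumerate(layers, start=1):
        print(f"  layer {depth}: {layer}")

recovery_path("SDS:", SDS_LAYERS)
recovery_path("Hyper-converged:", HYPERCONVERGED_LAYERS)
```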

Challenges of proprietary systems

Another characteristic of an SDS or hyper-converged solution is that several of them use proprietary file systems. For example, NetApp storage solutions use their own WAFL (Write Anywhere File Layout) system, which was created specifically for their Data ONTAP OS and optimised for use in networking environments. NetApp also offers two other operating systems, each with their own advantages. VMware’s vSAN has used its own on-disk file system (VSAN FS) since version 6 of the SDS solution. Dell EMC offers VMware vSAN as a hypervisor-converged storage technology for its PowerEdge server products, and its big data storage solution (Isilon) has yet another file system, Isilon OneFS, in which metadata is spread homogeneously across the many nodes attached to the system.
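For a recovery engineer, the first triage step is simply identifying which on-disk format is in play. The sketch below restates the vendor examples above as a lookup table; it is illustrative only and far from exhaustive.

```python
# Triage table restating the vendor examples above: each platform
# implies a different proprietary on-disk layout. Illustrative only.

PROPRIETARY_FILESYSTEMS = {
    "NetApp Data ONTAP": "WAFL (Write Anywhere File Layout)",
    "VMware vSAN v6+":   "vSAN on-disk file system (VSAN FS)",
    "Dell EMC Isilon":   "OneFS (metadata spread across all nodes)",
}

def triage(platform: str) -> str:
    """Return the expected file system, or flag an unknown platform."""
    fs = PROPRIETARY_FILESYSTEMS.get(platform)
    return fs if fs else f"unknown platform {platform!r}: manual analysis needed"

print(triage("Dell EMC Isilon"))
print(triage("SomeOtherSDS"))
```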

As you can see, there are many different vendors who use different file systems and OS versions. This makes these solutions much more difficult to work with when it comes to getting lost data back, as each one can be highly complex and unique in its architecture.

Is data recovery in SDS or hyper-converged solutions possible?

Lost data can be recovered from high-end storage systems; however, it’s not an easy feat, especially with multiple data layers. Engineers at Kroll Ontrack were once called in to attend to an almost brand-new VMware vSAN system that had completely failed because of just one SSD, which was used as the system’s cache memory. VMware started offering the vSAN option for vSphere ESXi servers to organise and manage storage back in March 2014, but nevertheless, this particular system failed just a few months later.

The vSAN system in question combined the applications and data saved in VMs into a joint, clustered shared-storage datastore. All connected host computers and their hard drives were part of this joint datastore, which means that in the event of a hardware fault or data loss, an additional information level has to be dealt with when working to get the lost data back. This particular system consisted of 15 hard drives and three SSDs, but with the breakdown of this one SSD, three host computers/nodes failed, leading to the temporary loss of four large VMs.
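A simplified model helps explain why one failed SSD can take several hosts’ storage offline: in a vSAN-style design, a flash device serves as the cache tier in front of a group of capacity drives, so when the cache device dies, the healthy capacity drives behind it become unreachable too. The sketch below only illustrates that dependency; it is not VMware’s actual implementation.

```python
# Rough illustration of the cache-tier dependency in a vSAN-style setup.
# Capacity drives sit behind a cache SSD; losing the SSD makes the data
# behind it unreachable. Not VMware's actual implementation.

class DiskGroup:
    def __init__(self, cache_ssd_ok: bool, capacity_drives: list[str]):
        self.cache_ssd_ok = cache_ssd_ok
        self.capacity_drives = capacity_drives

    def readable_drives(self) -> list[str]:
        # A failed cache device takes the whole group offline, even
        # though the capacity drives themselves are still healthy.
        return self.capacity_drives if self.cache_ssd_ok else []

groups = [
    DiskGroup(cache_ssd_ok=True,  capacity_drives=["hdd1", "hdd2", "hdd3"]),
    DiskGroup(cache_ssd_ok=False, capacity_drives=["hdd4", "hdd5", "hdd6"]),
]

available = [d for g in groups for d in g.readable_drives()]
print("drives still readable:", available)   # hdd4-6 are lost with the SSD
```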

In this particular situation, new software tools had to be developed to find the descriptor and log files necessary for identifying and assembling the data. The datastores functioned as containers, so the links to the VMs they contained had to be identified and then reconstructed. Thanks to tools built to match the unique system architecture, it became possible to work out how the VMs were saved in the vSAN datastore and distributed across the affected hard drives. This allowed the data recovery engineers working on the case to find the necessary descriptor and log files much faster, making the recovery process significantly easier to manage. With these tools at hand, the specialists were able to recover the VMs and all data stored on the vSAN system, but it was not without its challenges.
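At a very high level, the workflow amounts to locating the descriptor and log files that record where each VM’s fragments live, then stitching the fragments back together in the recorded order. The sketch below is a hypothetical outline of that reassembly step, with an invented descriptor structure and toy data; the real on-disk formats are proprietary and far more involved.

```python
# Hypothetical outline of the reassembly step: descriptor files record
# where each VM's fragments live, and the VM is rebuilt in that order.
# The Descriptor structure and the data below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Descriptor:
    vm_name: str
    fragments: list[tuple[str, int, int]]  # (drive id, offset, length)

def reassemble(descriptor: Descriptor, drives: dict[str, bytes]) -> bytes:
    """Stitch a VM image back together in the order the descriptor records."""
    parts = []
    for drive_id, offset, length in descriptor.fragments:
        parts.append(drives[drive_id][offset:offset + length])
    return b"".join(parts)

# Toy example: one VM spread across two drives.
drives = {"hdd1": b"....HELLO", "hdd2": b"WORLD...."}
desc = Descriptor("vm-01", fragments=[("hdd1", 4, 5), ("hdd2", 0, 5)])
print(reassemble(desc, drives))  # b'HELLOWORLD'
```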

We can conclude from this example that while data recovery is possible for SDS and hyper-converged solutions, it takes specialist tools and detailed analysis, and success is not guaranteed. The number of layers that actually need restoring depends on the specific product and technologies used, which means each case of data loss will present its own unique problems. It is therefore important not only to have a disaster recovery plan in place to cover data loss in these systems, but also to document your architecture and data storage practices so it is easier to recover your data in the event of a system failure.

Does your organisation use SDS or hyper-converged storage? Have you ever lost data in these systems and, if so, what happened? Let us know by commenting below, or tweet @DrDataRecovery.
