
The what and why of virtualization

  • December 28, 2023

A computer, a network, a server: if it’s part of your IT infrastructure, you can virtualize it today. But what exactly is virtualization, and why is it so ubiquitous?

From the virtual machine to the virtual network: in today’s IT world, physical hardware is increasingly taking a back seat. Virtualization has been popular for some time, but why exactly? After all, the concept is not new: IBM experimented with virtualizing its mainframes as early as the 1960s and was shipping supporting software to customers by 1970. Yet the real breakthrough took another 30 years to arrive. There are good reasons why virtualization has become an indispensable part of the modern IT environment.

Wasted system resources

At its core, virtualization solves an old efficiency problem. Without virtualization, each application typically runs on its own server. Consider having one server for email, one for the company’s CRM system, and one for a specific legacy application you’ve built. Each server has its own CPU, RAM and storage for that task, and in most cases it is too powerful for the application running on it. For example, the mail server may run at 30 percent of its capacity, the CRM server at 40 percent, while the legacy application needs only 20 percent.

Virtualization solves an old efficiency problem.

Simply lumping all the applications together on one server is not an option. The mail server might run on Windows Server, the CRM application on one Linux distribution, and the legacy application on another. Virtualization solves this problem.

Virtual servers

Through virtualization, you divide a single physical server into three virtual servers and place the desired operating system and application on each one. The applications still think they are running on their own hardware, but nothing could be further from the truth. In our example, we consolidate three servers onto one physical machine: the mail server, the CRM application and the legacy application each run in their own environment on the same hardware, which is now 90 percent utilized. The other two physical servers become unnecessary.
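The arithmetic behind this consolidation can be sketched in a few lines. The workload names and utilization figures are the illustrative ones from the example above, not measurements:

```python
# Illustrative utilization figures from the example: three underused servers.
workloads = {"mail": 0.30, "crm": 0.40, "legacy": 0.20}

# Consolidated onto one physical server, the combined load is the sum.
combined = sum(workloads.values())
print(f"Combined utilization: {combined:.0%}")  # 90% on a single server

# Two of the three physical servers become unnecessary.
servers_saved = len(workloads) - 1
print(f"Physical servers saved: {servers_saved}")
```

In practice you would leave headroom rather than target 90 percent, but the principle is the same: one well-utilized machine instead of three idle ones.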

The hypervisor

To create virtual machines on a physical server (or PC), you need a special software layer: the hypervisor. A hypervisor pools all of a server’s physical resources (CPU, RAM, storage, network) and distributes them among the virtual machines. Each virtual machine is given access to a share of the underlying physical hardware through the hypervisor. In the server context, the hypervisor often runs directly on the hardware, without an additional operating system; this is called a bare-metal (type 1) hypervisor. However, you can also install an operating system such as Windows or Linux on your server or PC and run a hosted (type 2) hypervisor on top of it. Oracle’s VirtualBox is a well-known example of the latter.
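The bookkeeping a hypervisor does can be illustrated with a toy sketch. This is not how a real hypervisor (KVM, Hyper-V, ESXi) works internally; the class, VM names and resource numbers are purely illustrative:

```python
# Toy sketch: a "hypervisor" that hands out slices of the physical
# hardware to virtual machines and refuses to overcommit.
class Hypervisor:
    def __init__(self, cpus, ram_gb):
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # Refuse to hand out more than the physical hardware provides.
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError(f"not enough free resources for {name}")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

host = Hypervisor(cpus=16, ram_gb=64)
host.create_vm("mail", cpus=4, ram_gb=16)    # runs Windows Server
host.create_vm("crm", cpus=8, ram_gb=32)     # runs one Linux distribution
host.create_vm("legacy", cpus=4, ram_gb=16)  # runs another Linux distribution
print(host.free_cpus, host.free_ram_gb)      # 0 0: hardware fully allocated
```

Each VM sees only its own slice; the others are invisible to it, just as the article describes.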

A hypervisor works closely with the system’s processor, which must therefore support hardware virtualization; in modern processors (Intel VT-x, AMD-V) this is usually the case. When an application in a virtual machine wants to perform an operation, the hypervisor routes the instructions to the underlying hardware very efficiently. For the application in the virtual machine itself, there is little to no difference in speed or efficiency compared to the same application running directly on the hardware.

Efficiency gains

Virtualized systems have many similarities with physical ones. Once an application or a person has access to a virtual system, the user experience is much the same. At the same time, virtualized systems are isolated from each other, even though they run on the same physical hardware: the virtual mail server cannot see what the virtual server for the legacy application is doing, and vice versa.

Virtualization lowers the bar for building and running applications. Where previously an organization had to order a new server to introduce a new application, today it is sufficient to add a virtual machine to an existing server. So you can get started straight away.

In its simplest form, virtualization ensures that virtual machines can be used to run different applications together on the same physical server, resulting in significant efficiency gains.

Safe and flexible

Virtualization also brings security and flexibility. When you run a Windows virtual machine on your own system through VirtualBox, you will notice that the virtual machine takes the form of one or more large files on the physical machine. You can simply copy those files. This is what makes so-called snapshots possible: you create a copy of the files containing the virtualized computer at different points in time and thus build up extensive backups.
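The “VM as a file” idea can be sketched as follows. The disk image is simulated with a small text file, and the file names are invented for illustration; real hypervisors typically use differencing disks rather than full copies, but the rollback principle is the same:

```python
# Sketch: snapshot a VM's disk image by copying it, then roll back
# by restoring the copy. The .vdi files here are stand-ins, not real images.
import shutil
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())
disk = workdir / "mailserver.vdi"
disk.write_text("state: healthy")

# Take a snapshot: a point-in-time copy of the VM's disk file.
snapshot = workdir / "mailserver-snapshot.vdi"
shutil.copy(disk, snapshot)

# Something goes wrong inside the VM...
disk.write_text("state: corrupted")

# ...so we roll back to the slightly older but working snapshot.
shutil.copy(snapshot, disk)
print(disk.read_text())  # state: healthy
```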

Is something going wrong? Then you replace the broken virtual machine with a slightly older version of itself that still works perfectly. And because virtual machines are not tied to specific physical hardware, you can run them anywhere. Does a physical server fail? Then simply run the VMs on another one.

There are also arguments for virtualization from an IT perspective. Managing virtual machines is much easier to centralize than managing multiple physical servers. You can maintain, troubleshoot, and update virtual machines from a central console.

Because virtual machines behave like computers and servers, their functionality is almost unlimited. You can create, modify, and delete VMs at will, as long as the underlying physical hardware has enough power. Many business applications therefore run in VMs. Containers are a more modern alternative with important similarities, but also big differences. We have already discussed the distinction between VMs and containers in detail.

PCs in a server

The flexibility of virtualization has driven the rise of desktop virtual machines. After all, you don’t have to limit yourself to virtualizing servers for specific applications. On a powerful server you can virtualize many Windows 10/11 computers with enough power for office work, and employees can use these virtual computers instead of physical desktops or laptops. On a smaller scale, the Virtual Machine Platform in Windows 11 lets you run Android and Linux apps on your PC.

This has advantages: in theory, the virtual computer can be accessed from anywhere. You can connect via your laptop at home or via a so-called thin client at work: a very light and inexpensive computer that is just powerful enough to connect to a virtual machine (VM). Input from your mouse and keyboard travels to the virtual machine via the thin client, and the output returns to your screen. Without a thin client, you connect to a VM via a client agent. Citrix, VMware and Red Hat are well-known companies that offer desktop virtualization for Windows, Linux and macOS.

The flexibility of virtualization has driven the rise of desktop virtual machines.

Desktop virtualization initially took place primarily on company-owned servers on site, but today we are witnessing a shift to the cloud. Windows 365, with its Cloud PC, is a good example of Windows virtual PCs running on cloud servers. A good connection to the server hosting the virtual machine is essential at all times.

Storage and network virtualization

We can go one step further and virtualize storage or the network. Storage virtualization relies on a hypervisor-like software layer to combine data located in different places into a single data source. This gives applications access to all data, regardless of its source, storage location or format.
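The “single data source” idea can be sketched with Python’s `ChainMap`, which presents several separate mappings as one. The store names and file entries below are invented for illustration; real storage virtualization operates on block or file storage, not dictionaries:

```python
# Sketch: three separate storage locations presented as one logical pool.
from collections import ChainMap

san = {"orders.db": "on SAN"}
nas = {"archive.zip": "on NAS"}
local = {"tmp.log": "on local disk"}

# The virtualization layer hides where each item physically lives.
pool = ChainMap(san, nas, local)
print(pool["archive.zip"])  # on NAS: the location is transparent to the app
print(pool["orders.db"])    # on SAN
```

The application asks the pool for a name and neither knows nor cares which physical store answers.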

Network virtualization is a bit more complex. It lumps all of the underlying network components together: a hypervisor treats servers, switches and firewalls as system resources. You can then create virtual, segmented networks on top of it without having to pull new cables. The underlying physical infrastructure remains one network, but the networks above it are separate. This is useful, for example, to keep an IoT network and a guest network apart from the regular company network.
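The segmentation described above can be sketched as a membership check: one physical network, several isolated virtual segments. The port and segment names are invented, and real network virtualization uses VLAN tags or overlays rather than Python sets:

```python
# Sketch: virtual segments on one shared physical network.
# Devices can talk only if some virtual network contains both of them.
virtual_networks = {
    "office": {"port1", "port2"},
    "iot":    {"port3"},
    "guest":  {"port4"},
}

def can_communicate(a, b):
    return any(a in seg and b in seg for seg in virtual_networks.values())

print(can_communicate("port1", "port2"))  # True: same office segment
print(can_communicate("port1", "port4"))  # False: guest is isolated
```

All four ports share the same cables and switches underneath; the isolation exists only in the virtual layer, which is exactly the point.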

Virtualization therefore forms the basis of the cloud computing model. The provider supplies its infrastructure, and the customer can use it without needing its own physical servers. This happens to varying degrees, depending on how far up the pyramid the customer wants to climb.

From virtualization to HCI

The essence of virtualization is simple. All of a device’s physical hardware is aggregated, and a hypervisor then distributes the available physical hardware across virtual machines. You can do whatever you want on these virtual machines.

What if you want to be even more efficient: can you combine the hardware of different servers and run virtual machines on top? Absolutely, but this introduces additional complexity. Traditional virtualization takes place within a single physical device, where all the hardware components communicate directly with each other. If you want to combine the computing power of one server with the storage of a second and run a virtual machine on top, you need different technology. This cutting and pasting of system resources is called hyperconverged infrastructure (HCI) and is a logical evolution of classic server virtualization.

This article was originally published on August 16, 2021. The text has been updated with the latest information.

Source: IT Daily
