[clug] Why virtual x86 machines?

Hugh Fisher hugo.fisher at gmail.com
Fri Aug 21 13:26:55 UTC 2020


On Fri, Aug 21, 2020 at 12:49 AM Randall Crook <rcrook9190 at gmail.com> wrote:
>
> In my case it's a matter of abstraction.

Up front, I want to be clear that I'm not telling anyone they are Doing
It Wrong. I'm interested in why we, the computing industry, went down
a particular development path.

> You have a multi-system application that runs over a number of systems
> specifically for security: splitting workloads over multiple virtual
> machines adds a layer of security supplied by the hypervisor.
>
> Now suppose you want to test this application after a code change that
> affects everything from the kernel up. Having to re-install the OS on
> multiple bits of hardware and then do regression testing etc. is time
> consuming and could cost a lot.
>
> So you can automate the creation, configuration and testing of the
> entire end-to-end environment in the "cloud" using multiple virtual
> machines. Using tools like Ansible and standardized hypervisor APIs you
> can build the entire ecosystem in minutes and run automated tests
> against it. When you're done and have the test results, just delete
> the lot.
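
If I follow, the "standardized hypervisor APIs" part is the sort of
thing the libvirt bindings give you. Very roughly, and with the domain
XML reduced to an illustrative stub (so don't expect this exact snippet
to boot anything as-is):

    # Sketch only: create a throwaway transient guest through the
    # libvirt API, run tests against it, then delete the lot.
    # The domain XML below is a stub, not a complete working definition.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>throwaway-test</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
    </domain>
    """

    conn = libvirt.open('qemu:///system')  # connect to the local hypervisor
    dom = conn.createXML(DOMAIN_XML, 0)    # create and start a transient guest
    print("running:", dom.name())
    # ... point the automated test suite at the guest here ...
    dom.destroy()                          # power off; the transient guest is gone
    conn.close()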

It seems to me that automated creation and configuration of Linux
systems has never needed virtualization. There are well-established
methods at all levels, from DHCP/PXE, to Ansible as discussed recently
on this list, to the all-singing, all-dancing solutions sold by Red Hat.
Bob Edwards at ANU Computer Science has been automatically managing
hundreds of non-virtual Linux machines for a couple of decades now.
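
To make that concrete: from the configuration-management side the run
looks exactly the same whether the inventory lists lab machines or
guests. A trivial illustration, with the inventory and playbook names
invented:

    # Trivial illustration: kick off the same Ansible playbook regardless
    # of whether the hosts in the inventory are bare metal or VMs.
    # "hosts.ini" and "site.yml" are invented names.
    import subprocess

    subprocess.run(
        ["ansible-playbook", "-i", "hosts.ini", "site.yml"],
        check=True,  # raise if any play fails
    )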

What do virtualization and hypervisors make possible, or qualitatively
different, compared with what we already had?

> On top of that, each instance of the operating system has to deal with
> the hardware it's running on. When it's abstracted via virtualization,
> only the hypervisor needs to know the real hardware; the guests only
> need to know the hypervisor. So you are not locked into buying IBM,
> because only the hypervisor needs to handle changes in hardware as you
> refresh and switch vendors, not every single install of Linux or Windows.

Isn't abstracting the hardware kind of the whole point of having an
operating system? Debian Linux currently has official support for ten
different CPU families, from retro MIPS and PowerPC to IBM mainframes.

And while I'm not a kernel developer, I'm fairly sure that if, say, the
host has an SSD but the hypervisor presents an 'abstract' spinning-disk
device to the guest OS(es), Really Bad Things will happen.
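
From inside the guest you can at least see what it has been told. A
quick check (assuming the usual virtio device name 'vda'):

    # Does the kernel think this block device spins? 1 = rotational
    # (spinning disk), 0 = non-rotational (SSD). "vda" is the typical
    # virtio disk name inside a guest; adjust for your setup.
    with open("/sys/block/vda/queue/rotational") as f:
        print("rotational:", f.read().strip())

As I understand it, hypervisors can pass some of the truth through, e.g.
libvirt's discard='unmap' so the guest's TRIM commands reach the real
SSD, but the guest still only knows what it's told.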

> Another consideration is better utilization of expensive infrastructure.
> Why buy a server for every database, web and file server?

And this is where I see mainframe reasoning being applied to
microprocessors: we need to make better utilization of expensive
infrastructure, but why is the infrastructure expensive in the first
place? Why not buy a cheap server for every database?

Repeating myself, this isn't meant to be a critique of you personally
or your company / organisation.

-- 

        cheers,
        Hugh Fisher


