[clug] Why virtual x86 machines?
hugo.fisher at gmail.com
Fri Aug 21 13:45:33 UTC 2020
On Fri, Aug 21, 2020 at 7:18 AM Michael Still <mikal at stillhq.com> wrote:
> There are also bin packing efficiencies which aren't being accounted for in the original post. As an example, Google at the start of the GFC did an analysis and found something like 25% of their corporate servers (not the web facing stuff) were not doing _anything_at_all_. They were machines which had simply been forgotten about and were idling away happily. Those numbers are not uncommon for enterprises. VMs give me a way to pack many "machines" onto a single real machine and if some of them are idle it doesn't really matter because I just keep packing VMs on until the underlying hardware hits a certain satisfying level of utilization.
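For concreteness, the packing described in the quote is essentially first-fit bin packing with a utilization cap per host. A minimal sketch — the capacity, the 80% target, and the VM sizes are all made-up illustrative numbers, not from any real scheduler:

```python
# First-fit bin packing of VMs onto hosts, stopping at a target
# utilization per host. All numbers are illustrative.

HOST_CAPACITY = 32        # e.g. vCPUs per physical host (assumed)
TARGET_UTILIZATION = 0.8  # keep packing until a host is ~80% full

def pack(vm_sizes, capacity=HOST_CAPACITY, target=TARGET_UTILIZATION):
    """Place each VM on the first host with room under the target;
    open a new host when none fits. Returns a list of hosts, each
    a list of VM sizes."""
    limit = capacity * target
    hosts = []
    for vm in vm_sizes:
        for host in hosts:
            if sum(host) + vm <= limit:
                host.append(vm)
                break
        else:
            hosts.append([vm])
    return hosts

if __name__ == "__main__":
    vms = [4, 8, 2, 2, 16, 4, 1, 1, 8]
    for i, h in enumerate(pack(vms)):
        print(f"host {i}: {h} -> {sum(h)}/{HOST_CAPACITY} vCPUs")
```

Nine "machines", many mostly idle, end up on two physical hosts instead of nine — which is the efficiency argument in a nutshell.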
Upfront declaration: I'm not telling anyone they are Doing It Wrong, and I
don't mean to criticise individuals or organisations. I'm interested
in why the computing industry has gone down a particular development
path, one where virtual x86 machines have become important.
If a machine is idling away happily, so what? Why do we think it
worthwhile or necessary to reach a certain level of utilization?
I assume it's not hardcore Protestant theology, "the Devil finds work
for idle CPUs".
For environmental and monetary reasons, if you have expensive CPUs it is
worthwhile to run the minimum number of systems at as high a
utilization as you can manage. But it seems to me that this is an
industry choice, not the only way to do things.
The alternative is demonstrated by my phone. Like most people I have a
GHz CPU with gigabytes of RAM and storage in my pocket. It's idle a
lot of the time, but I don't feel at all guilty about this, because
these phone computer systems are designed for irregular, varying workloads.
Modern phones, which are mostly running Linux and the rest a variant
of BSD Unix, can switch into and out of low power mode in fractions of
a second. The OS can switch on or off individual hardware units within
each chip. This doesn't stop them from being extremely fast: current
generation ARM CPUs in phones have single-threaded performance
comparable to or better than many Intel CPUs, and multithreaded
performance isn't far behind.
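On a stock Linux box you can actually watch this power-mode machinery through sysfs. A small sketch using the standard cpuidle layout — note that some kernels, notably inside many VMs, don't expose cpuidle at all, in which case this returns an empty list:

```python
from pathlib import Path

def idle_states(cpu=0):
    """List the idle (C-)states the kernel can use for one CPU,
    with total time spent in each. Assumes the standard Linux
    cpuidle sysfs layout; returns [] where it is absent
    (e.g. many virtual machines)."""
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpuidle")
    if not base.is_dir():
        return []
    states = []
    for state in sorted(base.glob("state*")):
        name = (state / "name").read_text().strip()
        usec = int((state / "time").read_text())  # residency in microseconds
        states.append((name, usec))
    return states

if __name__ == "__main__":
    for name, usec in idle_states():
        print(f"{name:10s} {usec / 1e6:10.1f} s")
```

On most machines the deepest states dominate the residency figures, which is exactly the point: the hardware spends most of its life nearly switched off.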
I'm really curious as to why similar technology isn't being used in
data centres. (Or if it is, why we don't hear more about it.)