[clug] Why virtual x86 machines?
steve at mcinerney.email
Fri Aug 21 22:36:50 UTC 2020
On 2020-08-21 23:26, Hugh Fisher via linux wrote:
> On Fri, Aug 21, 2020 at 12:49 AM Randall Crook <rcrook9190 at gmail.com>
>> In my case it's a matter of abstraction.
> Upfront, I want to be clear that I'm not telling anyone you are Doing
> It Wrong.
You can if you like. That doesn't make you correct, though. :-)
> I'm interested in why we, the computing industry, went down
> a particular development path.
Rather than being limited to answers from a mailing list, I'd suggest some
research on "why use virtualisation" may be a worthwhile path, if this
genuinely is your interest.
And to repeat a prior answer, it's cheaper. Hugely cheaper.
> It seems to me that automated creation and configuration of Linux
> systems has never needed virtualization. There are well established
You would be so very misguided here.
I'll give you a real example. We have 3 servers; they're a bit old and
not particularly amazing. Each server is 1 or 2RU (I forget), so DC
costs there are quite minimal.
We have a working mini cloud on those. Storage, CPU, Memory all the
things we need.
With those 3 servers, we can:
* Enable a group of about 15 developers to individually, and in
isolation, spin up, test, and destroy a virtual system that replicates
about 8 servers.
* They can also spin up hundreds of virtual devices - that is the key
part of their work - where each device in its physical form costs
thousands of dollars.
They can respec those machines on the fly: more disk, more CPU, more
memory, more networks, connections to different networks, funky and scary
routing, less CPU, less memory. All by changing an "8" to a "12" in a
config file.
In your physical example, we would need 120 physical servers, plus 500+
physical devices.
3 vs 620+
And don't forget the storage, networking, etc costs with that - your 620
servers would be insanely expensive to run, and a nightmare to manage.
Have fun justifying that cost differential to your management.
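Some back-of-envelope arithmetic makes the gap concrete. The dollar figures below are invented for illustration; the 3 hypervisor hosts would really cost more per box than a cheap server, but even at several times the per-box cost the comparison barely moves:

```python
# Illustrative only: assume a flat annual cost per physical box
# (power, rack space, support). All figures are made-up assumptions.
annual_cost_per_server = 3000

virtual_fleet = 3 * annual_cost_per_server      # 3 hypervisor hosts
physical_fleet = 620 * annual_cost_per_server   # 620+ individual boxes

print(f"virtual:  ${virtual_fleet:,}/year")
print(f"physical: ${physical_fleet:,}/year")
print(f"roughly  {physical_fleet // virtual_fleet}x more expensive")
```

Even if you triple the assumed cost of each hypervisor host, the physical fleet is still around 70 times more expensive to run - before counting the networking, storage, and admin time.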
And that excludes how FAR more efficient the devs are when they don't
have to pay any attention to the hardware at all. If they need more
resources, they just configure it.
If they forget about their servers, I just destroy them. If folks leave,
we don't have their hardware sitting idle waiting for a new starter.
Want to add your server to a different network to try something out?
That'll be a 2 hour wait for me to drive to a DC, find the right server,
add network cables, switch management etc.
We haven't even touched on hardware failure. In virtualisation land,
done well, it's almost invisible. Good luck to your productivity if a
critical physical server fails. Oh? You have failover servers? Are you
suggesting it's not 620+ servers, but 1240+? Plus load balancers on top
of that.
> What does virtualization and hypervisors make possible, or
> qualitatively different, than before?
And revisit the many other replies that have already answered this
question.
>> Another consideration is better utilization of expensive
>> Why buy a server for every database, web and file server.
> And this is where I see mainframe reasoning being applied to
> microprocessors: we need to make better utilization of expensive
> infrastructure, but why is the infrastructure expensive in the first
> place? Why not buy a cheap server for every database?
Because it isn't cheaper.
I can buy one expensive server, and host dozens of database servers on
it, for a fraction of the cost of those as individual cheap servers.
tl;dr Your logic here is arse about.
It's not one expensive server being best utilised. It's an expensive
amount of money in LOTS of servers (your view) that can be consolidated
into a far cheaper amount in a much smaller number of servers
(virtualisation). Which is why and how this drive came about in the
early 2000s: it was to get rid of the racks and racks and racks of
servers and consolidate down into half a rack or so. Because that was
vastly cheaper.