From adrian.blake at ieee.org  Sun Aug  9 17:09:50 2020
From: adrian.blake at ieee.org (Adrian Blake)
Date: Sun, 9 Aug 2020 20:09:50 +0300
Subject: [clug] ABC iview
Message-ID:

All,

Not specifically linux related but ....

Because I am in Estonia at the moment I use nordvpn to access our ABC's iview. It works, but performance is poor and slow. The feed stops and starts, often at less than 30 second intervals. Viewing SBS is ok, not perfect, but 1 or 2 pauses per hour.

I have tried stopping and starting nordvpn and sometimes that helps, at least for a while. Restarting the computer, Ubuntu latest, will help for a while ... 10 minutes.

Of course it is most likely related to iview. Has anyone else experienced this difficulty, and does anyone have a solution?

Adrian

-- 
Adrian Blake, VK2ALF
101 Mulach St
Cooma, NSW, 2630
Australia
Mobile +61 407232978

Pärna tn 3
Otepää
Valga
Estonia
Mobile +372 51971441


From hugo.fisher at gmail.com  Thu Aug 20 11:09:11 2020
From: hugo.fisher at gmail.com (Hugh Fisher)
Date: Thu, 20 Aug 2020 21:09:11 +1000
Subject: [clug] Why virtual x86 machines?
Message-ID:

Inspired by the questions about KVM, I've been doing some reading on virtual machines and containers and some of the other new abstraction & protection mechanisms being used today. I like to write things down to clarify my thinking, and am posting this to the list in the hope that people with more knowledge will correct me if I'm wrong. And I do have questions, at the end.

First up, I'm not including the Java Virtual Machine, or the similar bytecode-like systems used in .NET, Python, etc. Those are designed for user level programs, not OS kernels. And I'm not including emulation/simulation, where machine instructions are interpreted by another program, because then it's turtles all the way down: a 6502 Apple II running ProDOS can be emulated by a program on an M68030 Macintosh running System 7, which itself is being emulated by a program running on a PowerPC Macintosh running Mac OS X ...

So, a virtual machine, usually associated with a hypervisor and guest operating system kernels, executes as many machine instructions as possible on the actual CPU hardware. (Using the old definition that you can kick hardware, but only swear at software. And just skip over microcode.)

From my old Andy Tanenbaum textbook, the first virtual machine in widespread use was VM/370 for IBM mainframes, around 1970. I think the history is important because of a question I'll bring up later.

A 370 series IBM mainframe, ancestor of the backward-compatible z/OS mainframes still sold today, could easily cost a million dollars. A 370 mainframe would run an entire bank financial system, or an entire airline reservation network. Which was awkward if a new release of the operating system was due and you wanted to test that all your software would still work. Shut down everything while you reboot into a beta OS? Buy another million dollar mainframe just for testing?

VM/370 was what today we call a hypervisor: it could run multiple guest operating systems side by side on a single CPU, providing each operating system its own "virtual 370". Now the bank could run VM/370 on its single mainframe, with say 90% of machine resources allocated to the guest production OS and the rest given to whatever the developers wanted.

This was a major technical achievement. Then, like now, the operating system distinguished 'user mode' from 'kernel' or 'privileged' or 'system' mode.
User mode machine instructions could not modify virtual memory page tables, issue DMA instructions to IO hardware, and so on. Only kernel code could do that. So unlike a regular operating system, the hypervisor had to work with guest operating system kernels executing privileged machine instructions. The guest kernels didn't know that they were running on a virtual 370, so it was up to the hypervisor to ensure that if, say, one guest OS disabled interrupts, this wouldn't shut down every other guest.

Once IBM got VM/370 to work, it was a big hit. It was so popular both inside and outside IBM that some new instructions and microcode modifications were added to the 370 machine architecture to make IO and memory paging within the guest operating systems more efficient.

And IBM then developed CMS, a hypervisor-aware operating system kernel designed to run only on VM/370. A conventional OS protects multiple users from affecting each other, whether deliberate or accidental. CMS was a single user OS, and VM/370 gave every user their own copy on their own virtual 370. Even if there was a kernel exploit in the CMS operating system (not the hypervisor), the only person you could attack would be yourself. CMS was a smaller and simpler operating system because it didn't duplicate functions that VM/370 was already doing.

Now fast forward to the 21st century. If you
    cat /proc/cpuinfo
on an x86 Linux system and you see 'vmx' in the output, you have the Intel virtual machine hardware extensions. The original x86 architecture had Ring 0 for privileged machine instructions as used by operating system kernels. The virtual hardware extensions add Ring -1 for a hypervisor such as VMware, which can run multiple guest Linux or MS Win kernels side by side. Each of these thinks it is running with Ring 0 privilege and can update page tables, issue IO instructions to PCI slots or disk controllers, and so on.

So Intel virtual x86 is just like VM/370. Except ... x86 computers don't cost a million dollars.

So my most important question: why bother? Just buy another CPU.

I did a quick price comparison on www.mwave.com.au. The cheapest Intel Xeon is about $4,000 and it's possible to spend $14,000 if you want to. For those amounts of money you could buy anything from a shoebox to a shipping container full of Raspberry Pis, complete 64 bit GHz systems with RAM and ports. Or if you have to stay within the x86 family, Intel Celerons are at least five times cheaper than Xeons. Looking instead at power budget, the cheapest Xeon CPU consumes as many watts as five entire Raspberry Pis.

Looking at these prices I understand why Intel want us to virtualise x86 CPUs and run multiple guest operating systems. I don't see why anyone else would want to.

But since datacentres and cloud systems do use hypervisors I must be missing something. Anyone want to explain?

Second question: are there custom Linux kernels designed to run on hypervisors? Not a Container OS, which I think is something else, but like CMS, designed to be single user or otherwise not duplicate what the hypervisor is already doing?

And lastly, I'm assuming that there's nothing in virtual x86 design and implementation that VM/370 didn't already do. Am I wrong? What new and interesting uses for hypervisors have been thought of?

-- 

cheers,
Hugh Fisher

From stephen.hocking at gmail.com  Thu Aug 20 11:46:25 2020
From: stephen.hocking at gmail.com (Stephen Hocking)
Date: Thu, 20 Aug 2020 21:46:25 +1000
Subject: [clug] Why virtual x86 machines?
In-Reply-To: References: Message-ID: Well, you don't need a Xeon to run virtual machines - most desktop processors are quite happy to. I'm running a few VMs on a Ryzen 7 over in my rack, a couple on my macbook pro, some more on my mac mini more on various Linux laptops. They're a great way of prototyping stuff and then being able to throw it away. The snapshot ability of various hypervisors makes that method of experimentation quite easy. On Thu, 20 Aug 2020 at 21:09, Hugh Fisher via linux wrote: > > Inspired by the questions about KVM, I've been doing some reading on > virtual machines and containers and some of the other new abstraction > & protection mechanisms being used today. I like to write things down > to clarify my thinking, and am posting this to the list in the hope > that people with more knowledge will correct me if I'm wrong. And I do > have questions, at the end. > > First up I'm not including the Java Virtual Machine, or the similar > bytecode like systems used in .NET, Python, etc. Those are designed > for user level programs, not OS kernels. And I'm not including > emulation/simulation where machine instructions are interpreted by > another program, because then it's turtles all the way down. A 6502 > Apple II running ProDOS can be emulated by a program on a M68030 > Macintosh running System 7 which itself is being emulated by a program > running on a PowerPC Macintosh running MacOS X ... > > So, a virtual machine, usually associated with a hypervisor and guest > operating system kernels, executes as many as possible machine > instructions on the actual CPU hardware. (Using the old definition > that you can kick hardware, but only swear at software. And just skip > over microcode.) > > From my old Andy Tanenbaum textbook the first virtual machine in > widespread use was VM/370 for IBM mainframes, around 1970. I think the > history is important because of a question I'll bring up later. > > A 370 series IBM mainframe, ancestor of the backwardly compatible zOS > mainframes still sold today, could easily cost a million dollars. A > 370 mainframe would run an entire bank financial system, or an entire > airline reservation network. Which was awkward if a new release of the > operating system was due and you wanted to test that all your software > would still work. Shut down everything while you reboot into a beta > OS? Buy another million dollar mainframe just for testing? > > VM/370 was what today we call a hypervisor, that could run multiple > guest operating systems side by side on a single CPU, providing each > operating system its own "virtual 370". Now the bank could run VM/370 > on its single mainframe, with say 90% of machine resources allocated > to the guest production OS and the rest given to whatever the > developers wanted. > > This was a major technical achievement. Then, like now, the operating > system distinguished 'user mode' from 'kernel' or 'privileged' or > 'system' mode. User mode machine instructions could not modify virtual > memory page tables, issue DMA instructions to IO hardware, and so on. > Only kernel code could do that. So unlike a regular operating system > the hypervisor had to work with guest operating system kernels > executing privileged machine instructions. The guest kernels didn't > know that they were running on a virtual 370, so it was up to the > hypervisor to ensure that if, say, one guest OS disabled interrupts, > this wouldn't shut down every other guest. > > Once IBM got VM/370 to work, it was a big hit. 
It was so popular both > inside and outside IBM that some new instructions and microcode > modifications were added to the 370 machine architecture to make IO > and memory paging within the guest operating systems more efficient. > > And IBM then developed CMS, a hypervisor-aware operating system kernel > designed to run only on VM/370. A conventional OS protects multiple > users from affecting each other, whether deliberate or accidental. CMS > was a single user OS, and VM/370 gave every user their own copy on > their own virtual 370. Even if there was a kernel exploit in the CMS > operating system (not the hypervisor), the only person you could > attack would be yourself. CMS was a smaller and simpler operating > system because it didn't duplicate functions that VM/370 was already > doing. > > Now fast forward to the 21st century. If you > cat /proc/cpuinfo > on an x86 Linux system and you see 'vmx' in the output, you have the > Intel virtual machine hardware extensions. The original x86 > architecture had Ring 0 for privileged machine instructions as used by > operating system kernels. The virtual hardware extensions add Ring -1 > for a hypervisor such as VMWare, which can run multiple guest Linux or > MS Win kernels side by side. Each of these thinks it is running with > Ring 0 privilege and can update page tables, issue IO instructions to > PCI slots or disk controllers, and so on. > > So Intel virtual x86 is just like VM/370. Except ... x86 computers > don't cost a million dollars. > > So my most important question, why bother? Just buy another CPU. > > I did a quick price comparison on www.mwave.com.au. The cheapest Intel > Xeon is about $4,000 and it's possible to spend $14,000 if you want > to. For those amounts of money you could buy a shoebox up to shipping > container full of Raspberry Pis, complete 64 bit Ghz systems with RAM > and ports. Or if you have to stay within the x86 family, Intel > Celerons are at least five times cheaper than Xeons. Looking instead > at power budget, the cheapest Xeon CPU consumes as many watts as five > entire Raspberry Pis. > > Looking at these prices I understand why Intel want us to virtualise > x86 CPUs and run multiple guest operating systems. I don't see why > anyone else would want to. > > But since datacentres and cloud systems do use hypervisors I must be > missing something. Anyone want to explain? > > > Second question, are there custom Linux kernels designed to run on > hypervisors? Not a Container OS, which I think is something else, but > like CMS designed to be single user or otherwise not duplicate what > the hypervisor is already doing? > > > And lastly I'm assuming that there's nothing in virtual x86 design and > implementation that VM/370 didn't already do. Am I wrong? What new and > interesting uses for hypervisors have been thought of? > > > -- > > cheers, > Hugh Fisher > > -- > linux mailing list > linux at lists.samba.org > https://lists.samba.org/mailman/listinfo/linux -- "I and the public know what all schoolchildren learn Those to whom evil is done Do evil in return" W.H. Auden, "September 1, 1939" From steve at mcinerney.email Thu Aug 20 12:05:05 2020 From: steve at mcinerney.email (Steve McInerney) Date: Thu, 20 Aug 2020 22:05:05 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: Message-ID: <160edc0e52552fff02405777b82a9dbd@mcinerney.email> What's the max cores/ram on a pi? Can I run multiple VMs with 4, 8, 24 cpus and 8/16/32/64/128 G ram? 10GiB nic for disk IO? Separate nics for other traffic? 
Do pi's come in nice neat rack units to make racking easy in a DC? Dual power etc etc. If I need local storage, do they come with built in raid? What about OOB/LOM/IDRAC et al? etc etc I don't know much about pi's beyond a quick google - hence the questions, but that's the sort of things I'm looking for in large and small cloud usage hardware. If Pi's can do all the above, then sure, they're a useful alternate, please point me at some built hardware and costings! Cheers! Steve On 2020-08-20 21:09, Hugh Fisher via linux wrote: > Inspired by the questions about KVM, I've been doing some reading on > virtual machines and containers and some of the other new abstraction > & protection mechanisms being used today. I like to write things down > to clarify my thinking, and am posting this to the list in the hope > that people with more knowledge will correct me if I'm wrong. And I do > have questions, at the end. > > First up I'm not including the Java Virtual Machine, or the similar > bytecode like systems used in .NET, Python, etc. Those are designed > for user level programs, not OS kernels. And I'm not including > emulation/simulation where machine instructions are interpreted by > another program, because then it's turtles all the way down. A 6502 > Apple II running ProDOS can be emulated by a program on a M68030 > Macintosh running System 7 which itself is being emulated by a program > running on a PowerPC Macintosh running MacOS X ... > > So, a virtual machine, usually associated with a hypervisor and guest > operating system kernels, executes as many as possible machine > instructions on the actual CPU hardware. (Using the old definition > that you can kick hardware, but only swear at software. And just skip > over microcode.) > > From my old Andy Tanenbaum textbook the first virtual machine in > widespread use was VM/370 for IBM mainframes, around 1970. I think the > history is important because of a question I'll bring up later. > > A 370 series IBM mainframe, ancestor of the backwardly compatible zOS > mainframes still sold today, could easily cost a million dollars. A > 370 mainframe would run an entire bank financial system, or an entire > airline reservation network. Which was awkward if a new release of the > operating system was due and you wanted to test that all your software > would still work. Shut down everything while you reboot into a beta > OS? Buy another million dollar mainframe just for testing? > > VM/370 was what today we call a hypervisor, that could run multiple > guest operating systems side by side on a single CPU, providing each > operating system its own "virtual 370". Now the bank could run VM/370 > on its single mainframe, with say 90% of machine resources allocated > to the guest production OS and the rest given to whatever the > developers wanted. > > This was a major technical achievement. Then, like now, the operating > system distinguished 'user mode' from 'kernel' or 'privileged' or > 'system' mode. User mode machine instructions could not modify virtual > memory page tables, issue DMA instructions to IO hardware, and so on. > Only kernel code could do that. So unlike a regular operating system > the hypervisor had to work with guest operating system kernels > executing privileged machine instructions. The guest kernels didn't > know that they were running on a virtual 370, so it was up to the > hypervisor to ensure that if, say, one guest OS disabled interrupts, > this wouldn't shut down every other guest. 
> > Once IBM got VM/370 to work, it was a big hit. It was so popular both > inside and outside IBM that some new instructions and microcode > modifications were added to the 370 machine architecture to make IO > and memory paging within the guest operating systems more efficient. > > And IBM then developed CMS, a hypervisor-aware operating system kernel > designed to run only on VM/370. A conventional OS protects multiple > users from affecting each other, whether deliberate or accidental. CMS > was a single user OS, and VM/370 gave every user their own copy on > their own virtual 370. Even if there was a kernel exploit in the CMS > operating system (not the hypervisor), the only person you could > attack would be yourself. CMS was a smaller and simpler operating > system because it didn't duplicate functions that VM/370 was already > doing. > > Now fast forward to the 21st century. If you > cat /proc/cpuinfo > on an x86 Linux system and you see 'vmx' in the output, you have the > Intel virtual machine hardware extensions. The original x86 > architecture had Ring 0 for privileged machine instructions as used by > operating system kernels. The virtual hardware extensions add Ring -1 > for a hypervisor such as VMWare, which can run multiple guest Linux or > MS Win kernels side by side. Each of these thinks it is running with > Ring 0 privilege and can update page tables, issue IO instructions to > PCI slots or disk controllers, and so on. > > So Intel virtual x86 is just like VM/370. Except ... x86 computers > don't cost a million dollars. > > So my most important question, why bother? Just buy another CPU. > > I did a quick price comparison on www.mwave.com.au. The cheapest Intel > Xeon is about $4,000 and it's possible to spend $14,000 if you want > to. For those amounts of money you could buy a shoebox up to shipping > container full of Raspberry Pis, complete 64 bit Ghz systems with RAM > and ports. Or if you have to stay within the x86 family, Intel > Celerons are at least five times cheaper than Xeons. Looking instead > at power budget, the cheapest Xeon CPU consumes as many watts as five > entire Raspberry Pis. > > Looking at these prices I understand why Intel want us to virtualise > x86 CPUs and run multiple guest operating systems. I don't see why > anyone else would want to. > > But since datacentres and cloud systems do use hypervisors I must be > missing something. Anyone want to explain? > > > Second question, are there custom Linux kernels designed to run on > hypervisors? Not a Container OS, which I think is something else, but > like CMS designed to be single user or otherwise not duplicate what > the hypervisor is already doing? > > > And lastly I'm assuming that there's nothing in virtual x86 design and > implementation that VM/370 didn't already do. Am I wrong? What new and > interesting uses for hypervisors have been thought of? > > > -- > > cheers, > Hugh Fisher From rossb at fwi.net.au Thu Aug 20 12:44:27 2020 From: rossb at fwi.net.au (Brenton Ross) Date: Thu, 20 Aug 2020 22:44:27 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: Message-ID: First I must say "good work" on your research - that was quite interesting. On the question of "Why bother?" I will describe my system and try to give my reasons. The computer is a laptop with a 6 core [12 thread] CPU, 64GB of RAM and a couple of large SSDs. [It wasn't cheap.] On this machine I have Ubuntu as the host OS. Its main job is to run a set of virtual machines using KVM QEMU and libvirt. 
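Day to day that mostly means driving libvirt with virsh from the host (or virt-manager when I want a GUI). Roughly like this -- the domain names here are made up:

    virsh list --all            # every defined VM and whether it is running
    virsh start fedora-dev      # boot the development VM
    virsh shutdown centos-srv   # cleanly stop the server VM when it isn't needed
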
There is a server VM running Centos, a general purpose VM running Scientific Linux, and currently Fedora on which I am doing some software development. There are several other VMs which currently are not running. They use a variety of Linux versions. [No Windows VM.] I spend my time working on several different software projects and they each seem to need a different variety of Linux. One project uses a lot of computer vision software which seems to be best supported on Debian or Ubuntu. Another project requires a Red Hat based OS and needs to be stable so that uses Scientific Linux. For another project I want something that has more up-to-date software, so that one gets Fedora. By putting the projects on separate VMs I can then install various packages without having the projects mess each other up. One of the nice things about this configuration is that if some project needs more memory or a few more CPUs it can be quickly adjusted to use more of the overall available resources. The alternatives to using VMs would be to multiboot, but then I could only have one going at a time; or to have a flock of separate machines which would be quite difficult to use. You mentioned "single user". Each of my VMs has at least two users, one is an administrator and the other is me or some alias. Some have more users for testing. [I use LDAP to try and keep things a bit under control.] I will admit there are a few down-sides: Keeping them all up-to-date takes a bit more time and having yum, dnf and apt is "interesting". Having multiple different desk top environments is fun, especially when its been a while since they were last used. However, I don't think I would like to go back to just a single machine. That's my use case. Brenton On Thu, 2020-08-20 at 21:09 +1000, Hugh Fisher via linux wrote: > Inspired by the questions about KVM, I've been doing some reading on > virtual machines and containers and some of the other new abstraction > & protection mechanisms being used today. I like to write things down > to clarify my thinking, and am posting this to the list in the hope > that people with more knowledge will correct me if I'm wrong. And I > do > have questions, at the end. > > First up I'm not including the Java Virtual Machine, or the similar > bytecode like systems used in .NET, Python, etc. Those are designed > for user level programs, not OS kernels. And I'm not including > emulation/simulation where machine instructions are interpreted by > another program, because then it's turtles all the way down. A 6502 > Apple II running ProDOS can be emulated by a program on a M68030 > Macintosh running System 7 which itself is being emulated by a > program > running on a PowerPC Macintosh running MacOS X ... > > So, a virtual machine, usually associated with a hypervisor and guest > operating system kernels, executes as many as possible machine > instructions on the actual CPU hardware. (Using the old definition > that you can kick hardware, but only swear at software. And just skip > over microcode.) > > From my old Andy Tanenbaum textbook the first virtual machine in > widespread use was VM/370 for IBM mainframes, around 1970. I think > the > history is important because of a question I'll bring up later. > > A 370 series IBM mainframe, ancestor of the backwardly compatible zOS > mainframes still sold today, could easily cost a million dollars. A > 370 mainframe would run an entire bank financial system, or an entire > airline reservation network. 
Which was awkward if a new release of > the > operating system was due and you wanted to test that all your > software > would still work. Shut down everything while you reboot into a beta > OS? Buy another million dollar mainframe just for testing? > > VM/370 was what today we call a hypervisor, that could run multiple > guest operating systems side by side on a single CPU, providing each > operating system its own "virtual 370". Now the bank could run VM/370 > on its single mainframe, with say 90% of machine resources allocated > to the guest production OS and the rest given to whatever the > developers wanted. > > This was a major technical achievement. Then, like now, the operating > system distinguished 'user mode' from 'kernel' or 'privileged' or > 'system' mode. User mode machine instructions could not modify > virtual > memory page tables, issue DMA instructions to IO hardware, and so on. > Only kernel code could do that. So unlike a regular operating system > the hypervisor had to work with guest operating system kernels > executing privileged machine instructions. The guest kernels didn't > know that they were running on a virtual 370, so it was up to the > hypervisor to ensure that if, say, one guest OS disabled interrupts, > this wouldn't shut down every other guest. > > Once IBM got VM/370 to work, it was a big hit. It was so popular both > inside and outside IBM that some new instructions and microcode > modifications were added to the 370 machine architecture to make IO > and memory paging within the guest operating systems more efficient. > > And IBM then developed CMS, a hypervisor-aware operating system > kernel > designed to run only on VM/370. A conventional OS protects multiple > users from affecting each other, whether deliberate or accidental. > CMS > was a single user OS, and VM/370 gave every user their own copy on > their own virtual 370. Even if there was a kernel exploit in the CMS > operating system (not the hypervisor), the only person you could > attack would be yourself. CMS was a smaller and simpler operating > system because it didn't duplicate functions that VM/370 was already > doing. > > Now fast forward to the 21st century. If you > cat /proc/cpuinfo > on an x86 Linux system and you see 'vmx' in the output, you have the > Intel virtual machine hardware extensions. The original x86 > architecture had Ring 0 for privileged machine instructions as used > by > operating system kernels. The virtual hardware extensions add Ring -1 > for a hypervisor such as VMWare, which can run multiple guest Linux > or > MS Win kernels side by side. Each of these thinks it is running with > Ring 0 privilege and can update page tables, issue IO instructions to > PCI slots or disk controllers, and so on. > > So Intel virtual x86 is just like VM/370. Except ... x86 computers > don't cost a million dollars. > > So my most important question, why bother? Just buy another CPU. > > I did a quick price comparison on www.mwave.com.au. The cheapest > Intel > Xeon is about $4,000 and it's possible to spend $14,000 if you want > to. For those amounts of money you could buy a shoebox up to shipping > container full of Raspberry Pis, complete 64 bit Ghz systems with RAM > and ports. Or if you have to stay within the x86 family, Intel > Celerons are at least five times cheaper than Xeons. Looking instead > at power budget, the cheapest Xeon CPU consumes as many watts as five > entire Raspberry Pis. 
> > Looking at these prices I understand why Intel want us to virtualise > x86 CPUs and run multiple guest operating systems. I don't see why > anyone else would want to. > > But since datacentres and cloud systems do use hypervisors I must be > missing something. Anyone want to explain? > > > Second question, are there custom Linux kernels designed to run on > hypervisors? Not a Container OS, which I think is something else, but > like CMS designed to be single user or otherwise not duplicate what > the hypervisor is already doing? > > > And lastly I'm assuming that there's nothing in virtual x86 design > and > implementation that VM/370 didn't already do. Am I wrong? What new > and > interesting uses for hypervisors have been thought of? > > > -- > > cheers, > Hugh Fisher > From savillep at protonmail.com Thu Aug 20 12:49:13 2020 From: savillep at protonmail.com (Peter Saville) Date: Thu, 20 Aug 2020 12:49:13 +0000 Subject: [clug] Why virtual x86 machines? In-Reply-To: <160edc0e52552fff02405777b82a9dbd@mcinerney.email> References: <160edc0e52552fff02405777b82a9dbd@mcinerney.email> Message-ID: <0c8d90bd-dbbb-76fb-d81f-5d3d0a9dde64@protonmail.com> nice post, enjoyed it. the thought of un-boxing, wiring, networking and troubleshooting $14,000 worth a rPi's. that's a hard no from me... Not exactly sure about the CMS question, and correct me if I'm wrong, but KVM is built into the linux kernel so it might look like a type 2 hypervisor, but is a native hypervisor that talks directly to hardware. Cheers, Pete On 20/8/20 10:05 pm, Steve McInerney via linux wrote: > What's the max cores/ram on a pi? > Can I run multiple VMs with 4, 8, 24 cpus and 8/16/32/64/128 G ram? > 10GiB nic for disk IO? > Separate nics for other traffic? > Do pi's come in nice neat rack units to make racking easy in a DC? Dual > power etc etc. > If I need local storage, do they come with built in raid? > What about OOB/LOM/IDRAC et al? > etc etc > > I don't know much about pi's beyond a quick google - hence the > questions, but that's the sort of things I'm looking for in large and > small cloud usage hardware. > If Pi's can do all the above, then sure, they're a useful alternate, > please point me at some built hardware and costings! > > Cheers! > Steve > > On 2020-08-20 21:09, Hugh Fisher via linux wrote: > >> Inspired by the questions about KVM, I've been doing some reading on >> virtual machines and containers and some of the other new abstraction >> & protection mechanisms being used today. I like to write things down >> to clarify my thinking, and am posting this to the list in the hope >> that people with more knowledge will correct me if I'm wrong. And I do >> have questions, at the end. >> >> First up I'm not including the Java Virtual Machine, or the similar >> bytecode like systems used in .NET, Python, etc. Those are designed >> for user level programs, not OS kernels. And I'm not including >> emulation/simulation where machine instructions are interpreted by >> another program, because then it's turtles all the way down. A 6502 >> Apple II running ProDOS can be emulated by a program on a M68030 >> Macintosh running System 7 which itself is being emulated by a program >> running on a PowerPC Macintosh running MacOS X ... >> >> So, a virtual machine, usually associated with a hypervisor and guest >> operating system kernels, executes as many as possible machine >> instructions on the actual CPU hardware. (Using the old definition >> that you can kick hardware, but only swear at software. 
And just skip >> over microcode.) >> >> From my old Andy Tanenbaum textbook the first virtual machine in >> widespread use was VM/370 for IBM mainframes, around 1970. I think the >> history is important because of a question I'll bring up later. >> >> A 370 series IBM mainframe, ancestor of the backwardly compatible zOS >> mainframes still sold today, could easily cost a million dollars. A >> 370 mainframe would run an entire bank financial system, or an entire >> airline reservation network. Which was awkward if a new release of the >> operating system was due and you wanted to test that all your software >> would still work. Shut down everything while you reboot into a beta >> OS? Buy another million dollar mainframe just for testing? >> >> VM/370 was what today we call a hypervisor, that could run multiple >> guest operating systems side by side on a single CPU, providing each >> operating system its own "virtual 370". Now the bank could run VM/370 >> on its single mainframe, with say 90% of machine resources allocated >> to the guest production OS and the rest given to whatever the >> developers wanted. >> >> This was a major technical achievement. Then, like now, the operating >> system distinguished 'user mode' from 'kernel' or 'privileged' or >> 'system' mode. User mode machine instructions could not modify virtual >> memory page tables, issue DMA instructions to IO hardware, and so on. >> Only kernel code could do that. So unlike a regular operating system >> the hypervisor had to work with guest operating system kernels >> executing privileged machine instructions. The guest kernels didn't >> know that they were running on a virtual 370, so it was up to the >> hypervisor to ensure that if, say, one guest OS disabled interrupts, >> this wouldn't shut down every other guest. >> >> Once IBM got VM/370 to work, it was a big hit. It was so popular both >> inside and outside IBM that some new instructions and microcode >> modifications were added to the 370 machine architecture to make IO >> and memory paging within the guest operating systems more efficient. >> >> And IBM then developed CMS, a hypervisor-aware operating system kernel >> designed to run only on VM/370. A conventional OS protects multiple >> users from affecting each other, whether deliberate or accidental. CMS >> was a single user OS, and VM/370 gave every user their own copy on >> their own virtual 370. Even if there was a kernel exploit in the CMS >> operating system (not the hypervisor), the only person you could >> attack would be yourself. CMS was a smaller and simpler operating >> system because it didn't duplicate functions that VM/370 was already >> doing. >> >> Now fast forward to the 21st century. If you >> cat /proc/cpuinfo >> on an x86 Linux system and you see 'vmx' in the output, you have the >> Intel virtual machine hardware extensions. The original x86 >> architecture had Ring 0 for privileged machine instructions as used by >> operating system kernels. The virtual hardware extensions add Ring -1 >> for a hypervisor such as VMWare, which can run multiple guest Linux or >> MS Win kernels side by side. Each of these thinks it is running with >> Ring 0 privilege and can update page tables, issue IO instructions to >> PCI slots or disk controllers, and so on. >> >> So Intel virtual x86 is just like VM/370. Except ... x86 computers >> don't cost a million dollars. >> >> So my most important question, why bother? Just buy another CPU. >> >> I did a quick price comparison on >> www.mwave.com.au >> . 
The cheapest Intel >> Xeon is about $4,000 and it's possible to spend $14,000 if you want >> to. For those amounts of money you could buy a shoebox up to shipping >> container full of Raspberry Pis, complete 64 bit Ghz systems with RAM >> and ports. Or if you have to stay within the x86 family, Intel >> Celerons are at least five times cheaper than Xeons. Looking instead >> at power budget, the cheapest Xeon CPU consumes as many watts as five >> entire Raspberry Pis. >> >> Looking at these prices I understand why Intel want us to virtualise >> x86 CPUs and run multiple guest operating systems. I don't see why >> anyone else would want to. >> >> But since datacentres and cloud systems do use hypervisors I must be >> missing something. Anyone want to explain? >> >> Second question, are there custom Linux kernels designed to run on >> hypervisors? Not a Container OS, which I think is something else, but >> like CMS designed to be single user or otherwise not duplicate what >> the hypervisor is already doing? >> >> And lastly I'm assuming that there's nothing in virtual x86 design and >> implementation that VM/370 didn't already do. Am I wrong? What new and >> interesting uses for hypervisors have been thought of? >> >> -- >> >> cheers, >> Hugh Fisher > > -- > linux mailing list > linux at lists.samba.org > > https://lists.samba.org/mailman/listinfo/linux From rcrook9190 at gmail.com Thu Aug 20 14:49:08 2020 From: rcrook9190 at gmail.com (Randall Crook) Date: Fri, 21 Aug 2020 00:49:08 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: Message-ID: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> In my case it's a matter of abstraction. You have a multi systems application that runs over a number of systems specifically for security. Splitting work loads over multiple virtual machines to add a layer of security supplied by the hypervisor. Now if you want to test this application after code modification that effects everything from the kernel up. Having to re-install the OS on multiple bits of hardware and then do regression testing etc is time consuming and could cost a lot. So you can automate the creation, configuration and testing of the entire end to end environment in the "cloud" using multiple virtual machines. Using tools like ansible and standardized hypervisor APIs you can build in minutes the entire eco system and run automated test against it. When you're done and got the test result, just delete the lot. On top of that Each instance of the operating system has to deal with the hardware its running on. When its abstracted via virtualization, only the hypervisor needs to know the real hardware. All the guests only need to know the hypervisor. So you are not locked into buying IBM. Because only the hypervisor needs to handle changes in hardware as you refresh and switch vendors. Not every single install of linux, or windows. Another consideration is better utilization of expensive infrastructure. Why buy a server for every database, web and file server. And in these days of COVID and working from home, using VDI and virtualization you can give every one the same working environment no matter what PC or Mac they are using at home. But you can take it one step further. You can virtualize on a mobile phone. I have seen environments where they run multiple virtual linux machines and a couple of android on a single phone. Once more for creating isolated security zones on a single piece of hardware. Just a couple of reasons you want virtualization. Randall. 
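PS: to give a feel for the build-it, test-it, delete-the-lot cycle without any vendor tooling, the bare libvirt equivalent is only a couple of commands per guest. Names, sizes and image paths below are invented for the example:

    # create a throwaway guest from a prepared disk image
    virt-install --name test-web-01 --memory 2048 --vcpus 2 \
        --disk path=/var/lib/libvirt/images/test-web-01.qcow2 \
        --import --os-variant generic --noautoconsole

    # ... run the automated tests against it ...

    # then tear it down and reclaim the storage
    virsh destroy test-web-01
    virsh undefine test-web-01 --remove-all-storage

Ansible's libvirt and cloud modules wrap the same operations, so the whole environment can be described in one playbook and rebuilt on demand.
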
On 20/08/2020 9:09 pm, Hugh Fisher via linux wrote: > Inspired by the questions about KVM, I've been doing some reading on > virtual machines and containers and some of the other new abstraction > & protection mechanisms being used today. I like to write things down > to clarify my thinking, and am posting this to the list in the hope > that people with more knowledge will correct me if I'm wrong. And I do > have questions, at the end. > > First up I'm not including the Java Virtual Machine, or the similar > bytecode like systems used in .NET, Python, etc. Those are designed > for user level programs, not OS kernels. And I'm not including > emulation/simulation where machine instructions are interpreted by > another program, because then it's turtles all the way down. A 6502 > Apple II running ProDOS can be emulated by a program on a M68030 > Macintosh running System 7 which itself is being emulated by a program > running on a PowerPC Macintosh running MacOS X ... > > So, a virtual machine, usually associated with a hypervisor and guest > operating system kernels, executes as many as possible machine > instructions on the actual CPU hardware. (Using the old definition > that you can kick hardware, but only swear at software. And just skip > over microcode.) > > From my old Andy Tanenbaum textbook the first virtual machine in > widespread use was VM/370 for IBM mainframes, around 1970. I think the > history is important because of a question I'll bring up later. > > A 370 series IBM mainframe, ancestor of the backwardly compatible zOS > mainframes still sold today, could easily cost a million dollars. A > 370 mainframe would run an entire bank financial system, or an entire > airline reservation network. Which was awkward if a new release of the > operating system was due and you wanted to test that all your software > would still work. Shut down everything while you reboot into a beta > OS? Buy another million dollar mainframe just for testing? > > VM/370 was what today we call a hypervisor, that could run multiple > guest operating systems side by side on a single CPU, providing each > operating system its own "virtual 370". Now the bank could run VM/370 > on its single mainframe, with say 90% of machine resources allocated > to the guest production OS and the rest given to whatever the > developers wanted. > > This was a major technical achievement. Then, like now, the operating > system distinguished 'user mode' from 'kernel' or 'privileged' or > 'system' mode. User mode machine instructions could not modify virtual > memory page tables, issue DMA instructions to IO hardware, and so on. > Only kernel code could do that. So unlike a regular operating system > the hypervisor had to work with guest operating system kernels > executing privileged machine instructions. The guest kernels didn't > know that they were running on a virtual 370, so it was up to the > hypervisor to ensure that if, say, one guest OS disabled interrupts, > this wouldn't shut down every other guest. > > Once IBM got VM/370 to work, it was a big hit. It was so popular both > inside and outside IBM that some new instructions and microcode > modifications were added to the 370 machine architecture to make IO > and memory paging within the guest operating systems more efficient. > > And IBM then developed CMS, a hypervisor-aware operating system kernel > designed to run only on VM/370. A conventional OS protects multiple > users from affecting each other, whether deliberate or accidental. 
CMS > was a single user OS, and VM/370 gave every user their own copy on > their own virtual 370. Even if there was a kernel exploit in the CMS > operating system (not the hypervisor), the only person you could > attack would be yourself. CMS was a smaller and simpler operating > system because it didn't duplicate functions that VM/370 was already > doing. > > Now fast forward to the 21st century. If you > cat /proc/cpuinfo > on an x86 Linux system and you see 'vmx' in the output, you have the > Intel virtual machine hardware extensions. The original x86 > architecture had Ring 0 for privileged machine instructions as used by > operating system kernels. The virtual hardware extensions add Ring -1 > for a hypervisor such as VMWare, which can run multiple guest Linux or > MS Win kernels side by side. Each of these thinks it is running with > Ring 0 privilege and can update page tables, issue IO instructions to > PCI slots or disk controllers, and so on. > > So Intel virtual x86 is just like VM/370. Except ... x86 computers > don't cost a million dollars. > > So my most important question, why bother? Just buy another CPU. > > I did a quick price comparison on www.mwave.com.au. The cheapest Intel > Xeon is about $4,000 and it's possible to spend $14,000 if you want > to. For those amounts of money you could buy a shoebox up to shipping > container full of Raspberry Pis, complete 64 bit Ghz systems with RAM > and ports. Or if you have to stay within the x86 family, Intel > Celerons are at least five times cheaper than Xeons. Looking instead > at power budget, the cheapest Xeon CPU consumes as many watts as five > entire Raspberry Pis. > > Looking at these prices I understand why Intel want us to virtualise > x86 CPUs and run multiple guest operating systems. I don't see why > anyone else would want to. > > But since datacentres and cloud systems do use hypervisors I must be > missing something. Anyone want to explain? > > > Second question, are there custom Linux kernels designed to run on > hypervisors? Not a Container OS, which I think is something else, but > like CMS designed to be single user or otherwise not duplicate what > the hypervisor is already doing? > > > And lastly I'm assuming that there's nothing in virtual x86 design and > implementation that VM/370 didn't already do. Am I wrong? What new and > interesting uses for hypervisors have been thought of? > > -- Randall Crook From mikal at stillhq.com Thu Aug 20 21:17:52 2020 From: mikal at stillhq.com (Michael Still) Date: Fri, 21 Aug 2020 07:17:52 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> References: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> Message-ID: There are also bin packing efficiencies which aren't being accounted for in the original post. As an example, Google at the start of the GFC did an analysis and found something like 25% of their corporate servers (not the web facing stuff) were not doing _anything_at_all_. They were machines which had simply been forgotten about and were idling away happily. Those numbers are not uncommon for enterprises. VMs give me a way to pack many "machines" onto a single real machine and if some of them are idle it doesn't really matter because I just keep packing VMs on until the underlying hardware hits a certain satisfying level of utilization. OpenStack actually quantifies how much over subscription they think is reasonable -- the default is 16x CPU, 1.5x RAM. 
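In Nova those ratios are literally just config knobs on the compute nodes -- from memory something like the following in nova.conf, so treat the exact option names and defaults as approximate:

    cpu_allocation_ratio = 16.0   # schedule up to 16 vCPUs per physical CPU
    ram_allocation_ratio = 1.5    # promise guests 1.5x the RAM that's physically there

Operators tune those up or down depending on how bursty their workloads are.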
I'll leave thinking about the 1.5 times ram thing and KSM as an exercise for the reader. Michael On Fri, Aug 21, 2020 at 12:49 AM Randall Crook via linux < linux at lists.samba.org> wrote: > In my case it's a matter of abstraction. > > You have a multi systems application that runs over a number of systems > specifically for security. Splitting work loads over multiple virtual > machines to add a layer of security supplied by the hypervisor. > > Now if you want to test this application after code modification that > effects everything from the kernel up. Having to re-install the OS on > multiple bits of hardware and then do regression testing etc is time > consuming and could cost a lot. > > So you can automate the creation, configuration and testing of the > entire end to end environment in the "cloud" using multiple virtual > machines. Using tools like ansible and standardized hypervisor APIs you > can build in minutes the entire eco system and run automated test > against it. When you're done and got the test result, just delete the lot. > > On top of that Each instance of the operating system has to deal with > the hardware its running on. When its abstracted via virtualization, > only the hypervisor needs to know the real hardware. All the guests only > need to know the hypervisor. So you are not locked into buying IBM. > Because only the hypervisor needs to handle changes in hardware as you > refresh and switch vendors. Not every single install of linux, or windows. > > Another consideration is better utilization of expensive infrastructure. > Why buy a server for every database, web and file server. And in these > days of COVID and working from home, using VDI and virtualization you > can give every one the same working environment no matter what PC or Mac > they are using at home. > > But you can take it one step further. You can virtualize on a mobile > phone. I have seen environments where they run multiple virtual linux > machines and a couple of android on a single phone. Once more for > creating isolated security zones on a single piece of hardware. > > Just a couple of reasons you want virtualization. > > Randall. > > On 20/08/2020 9:09 pm, Hugh Fisher via linux wrote: > > Inspired by the questions about KVM, I've been doing some reading on > > virtual machines and containers and some of the other new abstraction > > & protection mechanisms being used today. I like to write things down > > to clarify my thinking, and am posting this to the list in the hope > > that people with more knowledge will correct me if I'm wrong. And I do > > have questions, at the end. > > > > First up I'm not including the Java Virtual Machine, or the similar > > bytecode like systems used in .NET, Python, etc. Those are designed > > for user level programs, not OS kernels. And I'm not including > > emulation/simulation where machine instructions are interpreted by > > another program, because then it's turtles all the way down. A 6502 > > Apple II running ProDOS can be emulated by a program on a M68030 > > Macintosh running System 7 which itself is being emulated by a program > > running on a PowerPC Macintosh running MacOS X ... > > > > So, a virtual machine, usually associated with a hypervisor and guest > > operating system kernels, executes as many as possible machine > > instructions on the actual CPU hardware. (Using the old definition > > that you can kick hardware, but only swear at software. And just skip > > over microcode.) 
> > > > From my old Andy Tanenbaum textbook the first virtual machine in > > widespread use was VM/370 for IBM mainframes, around 1970. I think the > > history is important because of a question I'll bring up later. > > > > A 370 series IBM mainframe, ancestor of the backwardly compatible zOS > > mainframes still sold today, could easily cost a million dollars. A > > 370 mainframe would run an entire bank financial system, or an entire > > airline reservation network. Which was awkward if a new release of the > > operating system was due and you wanted to test that all your software > > would still work. Shut down everything while you reboot into a beta > > OS? Buy another million dollar mainframe just for testing? > > > > VM/370 was what today we call a hypervisor, that could run multiple > > guest operating systems side by side on a single CPU, providing each > > operating system its own "virtual 370". Now the bank could run VM/370 > > on its single mainframe, with say 90% of machine resources allocated > > to the guest production OS and the rest given to whatever the > > developers wanted. > > > > This was a major technical achievement. Then, like now, the operating > > system distinguished 'user mode' from 'kernel' or 'privileged' or > > 'system' mode. User mode machine instructions could not modify virtual > > memory page tables, issue DMA instructions to IO hardware, and so on. > > Only kernel code could do that. So unlike a regular operating system > > the hypervisor had to work with guest operating system kernels > > executing privileged machine instructions. The guest kernels didn't > > know that they were running on a virtual 370, so it was up to the > > hypervisor to ensure that if, say, one guest OS disabled interrupts, > > this wouldn't shut down every other guest. > > > > Once IBM got VM/370 to work, it was a big hit. It was so popular both > > inside and outside IBM that some new instructions and microcode > > modifications were added to the 370 machine architecture to make IO > > and memory paging within the guest operating systems more efficient. > > > > And IBM then developed CMS, a hypervisor-aware operating system kernel > > designed to run only on VM/370. A conventional OS protects multiple > > users from affecting each other, whether deliberate or accidental. CMS > > was a single user OS, and VM/370 gave every user their own copy on > > their own virtual 370. Even if there was a kernel exploit in the CMS > > operating system (not the hypervisor), the only person you could > > attack would be yourself. CMS was a smaller and simpler operating > > system because it didn't duplicate functions that VM/370 was already > > doing. > > > > Now fast forward to the 21st century. If you > > cat /proc/cpuinfo > > on an x86 Linux system and you see 'vmx' in the output, you have the > > Intel virtual machine hardware extensions. The original x86 > > architecture had Ring 0 for privileged machine instructions as used by > > operating system kernels. The virtual hardware extensions add Ring -1 > > for a hypervisor such as VMWare, which can run multiple guest Linux or > > MS Win kernels side by side. Each of these thinks it is running with > > Ring 0 privilege and can update page tables, issue IO instructions to > > PCI slots or disk controllers, and so on. > > > > So Intel virtual x86 is just like VM/370. Except ... x86 computers > > don't cost a million dollars. > > > > So my most important question, why bother? Just buy another CPU. 
> > > > I did a quick price comparison on www.mwave.com.au. The cheapest Intel > > Xeon is about $4,000 and it's possible to spend $14,000 if you want > > to. For those amounts of money you could buy a shoebox up to shipping > > container full of Raspberry Pis, complete 64 bit Ghz systems with RAM > > and ports. Or if you have to stay within the x86 family, Intel > > Celerons are at least five times cheaper than Xeons. Looking instead > > at power budget, the cheapest Xeon CPU consumes as many watts as five > > entire Raspberry Pis. > > > > Looking at these prices I understand why Intel want us to virtualise > > x86 CPUs and run multiple guest operating systems. I don't see why > > anyone else would want to. > > > > But since datacentres and cloud systems do use hypervisors I must be > > missing something. Anyone want to explain? > > > > > > Second question, are there custom Linux kernels designed to run on > > hypervisors? Not a Container OS, which I think is something else, but > > like CMS designed to be single user or otherwise not duplicate what > > the hypervisor is already doing? > > > > > > And lastly I'm assuming that there's nothing in virtual x86 design and > > implementation that VM/370 didn't already do. Am I wrong? What new and > > interesting uses for hypervisors have been thought of? > > > > > -- > Randall Crook > > > -- > linux mailing list > linux at lists.samba.org > https://lists.samba.org/mailman/listinfo/linux > From hugo.fisher at gmail.com Fri Aug 21 01:03:51 2020 From: hugo.fisher at gmail.com (Hugh Fisher) Date: Fri, 21 Aug 2020 11:03:51 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: Message-ID: On Thu, Aug 20, 2020 at 10:45 PM Brenton Ross via linux wrote: > On the question of "Why bother?" I will describe my system and try to > give my reasons. > > The computer is a laptop with a 6 core [12 thread] CPU, 64GB of RAM and > a couple of large SSDs. [It wasn't cheap.] > > On this machine I have Ubuntu as the host OS. Its main job is to run a > set of virtual machines using KVM QEMU and libvirt. There is a server > VM running Centos, a general purpose VM running Scientific Linux, and > currently Fedora on which I am doing some software development. There > are several other VMs which currently are not running. They use a > variety of Linux versions. [No Windows VM.] > > I spend my time working on several different software projects and they > each seem to need a different variety of Linux. ... That is a really good use of virtual x86. Now I'm looking at my developer setup and thinking that I have too many machines, too many boot options. Instead I should have one really high powered laptop that runs Linux and MS Windows side by side instead of dual boot. And a third VM running as a Hackintosh. Do you have any experience with using GPUs in VMs? I do a lot of graphics. -- cheers, Hugh Fisher From hugo.fisher at gmail.com Fri Aug 21 01:06:39 2020 From: hugo.fisher at gmail.com (Hugh Fisher) Date: Fri, 21 Aug 2020 11:06:39 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: <0c8d90bd-dbbb-76fb-d81f-5d3d0a9dde64@protonmail.com> References: <160edc0e52552fff02405777b82a9dbd@mcinerney.email> <0c8d90bd-dbbb-76fb-d81f-5d3d0a9dde64@protonmail.com> Message-ID: On Thu, Aug 20, 2020 at 10:49 PM Peter Saville wrote: > > nice post, enjoyed it. the thought of un-boxing, wiring, networking and troubleshooting $14,000 worth a rPi's. that's a hard no from me... 
> > Not exactly sure about the CMS question, and correct me if I'm wrong, but KVM is built into the linux kernel so it might look like a type 2 hypervisor, but is a native hypervisor that talks directly to hardware. Hmmm, my understanding is that a hypervisor can run multiple kernels at once, eg MS Windows and Linux. Can KVM do this? I thought KVM provided isolation between processes but they were still sharing the same Linux kernel? -- cheers, Hugh Fisher From andrew at donnellan.id.au Fri Aug 21 01:11:11 2020 From: andrew at donnellan.id.au (Andrew Donnellan) Date: Fri, 21 Aug 2020 11:11:11 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: <160edc0e52552fff02405777b82a9dbd@mcinerney.email> <0c8d90bd-dbbb-76fb-d81f-5d3d0a9dde64@protonmail.com> Message-ID: On Fri, 21 Aug 2020, 11:07 Hugh Fisher via linux, wrote: > On Thu, Aug 20, 2020 at 10:49 PM Peter Saville > wrote: > > > > nice post, enjoyed it. the thought of un-boxing, wiring, networking and > troubleshooting $14,000 worth a rPi's. that's a hard no from me... > > > > Not exactly sure about the CMS question, and correct me if I'm wrong, > but KVM is built into the linux kernel so it might look like a type 2 > hypervisor, but is a native hypervisor that talks directly to hardware. > > Hmmm, my understanding is that a hypervisor can run multiple kernels > at once, eg MS Windows and Linux. Can KVM do this? I thought KVM > provided isolation between processes but they were still sharing the > same Linux kernel? > You're thinking containers (Docker, LXC, etc). KVM is a proper hypervisor. Andrew From rossb at fwi.net.au Fri Aug 21 01:44:07 2020 From: rossb at fwi.net.au (Brenton Ross) Date: Fri, 21 Aug 2020 11:44:07 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: Message-ID: <85e682d8e08e1f0186cde3f73a55b1d0461e9ed8.camel@fwi.net.au> The host machine also has a quite nice Nvidia GPU. I have not been able to work out how to access it from the VMs. The instructions I have found are way too scary for me to attempt. [If anyone has a straightforward way of doing it I would love to hear about it.] For the few things that need it [Blender and MeshLab] I just use the host machine. Brenton On Fri, 2020-08-21 at 11:03 +1000, Hugh Fisher wrote: > On Thu, Aug 20, 2020 at 10:45 PM Brenton Ross via linux > wrote: > > > On the question of "Why bother?" I will describe my system and try > > to > > give my reasons. > > > > The computer is a laptop with a 6 core [12 thread] CPU, 64GB of RAM > > and > > a couple of large SSDs. [It wasn't cheap.] > > > > On this machine I have Ubuntu as the host OS. Its main job is to > > run a > > set of virtual machines using KVM QEMU and libvirt. There is a > > server > > VM running Centos, a general purpose VM running Scientific Linux, > > and > > currently Fedora on which I am doing some software development. > > There > > are several other VMs which currently are not running. They use a > > variety of Linux versions. [No Windows VM.] > > > > I spend my time working on several different software projects and > > they > > each seem to need a different variety of Linux. > > ... > > That is a really good use of virtual x86. Now I'm looking at my > developer > setup and thinking that I have too many machines, too many boot > options. > Instead I should have one really high powered laptop that runs Linux > and > MS Windows side by side instead of dual boot. And a third VM running > as a Hackintosh. > > Do you have any experience with using GPUs in VMs? I do a lot of > graphics. 
> From hugo.fisher at gmail.com Fri Aug 21 13:26:55 2020 From: hugo.fisher at gmail.com (Hugh Fisher) Date: Fri, 21 Aug 2020 23:26:55 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> References: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> Message-ID: On Fri, Aug 21, 2020 at 12:49 AM Randall Crook wrote: > > In my case it's a matter of abstraction. Upfront, I want to be clear that I'm not telling anyone you are Doing It Wrong. I'm interested in why we, the computing industry, went down a particular development path. > You have a multi systems application that runs over a number of systems > specifically for security. Splitting work loads over multiple virtual > machines to add a layer of security supplied by the hypervisor. > > Now if you want to test this application after code modification that > effects everything from the kernel up. Having to re-install the OS on > multiple bits of hardware and then do regression testing etc is time > consuming and could cost a lot. > > So you can automate the creation, configuration and testing of the > entire end to end environment in the "cloud" using multiple virtual > machines. Using tools like ansible and standardized hypervisor APIs you > can build in minutes the entire eco system and run automated test > against it. When you're done and got the test result, just delete the lot. It seems to me that automated creation and configuration of Linux systems has never needed virtualization. There are well established methods at all levels, from DHCP/PXE, to Ansible as discussed recently on this list, to the all-singing all-dancing solutions sold by RedHat. Bob Edwards at ANU Computer Science has been automatically managing hundreds of non-virtual Linux machines for a couple of decades now. What does virtualization and hypervisors make possible, or qualitatively different, than before? > On top of that Each instance of the operating system has to deal with > the hardware its running on. When its abstracted via virtualization, > only the hypervisor needs to know the real hardware. All the guests only > need to know the hypervisor. So you are not locked into buying IBM. > Because only the hypervisor needs to handle changes in hardware as you > refresh and switch vendors. Not every single install of linux, or windows. Isn't abstracting the hardware kind of the whole point of having an operating system? Debian Linux currently has official support for ten different CPU families, from retro MIPS and PowerPC to IBM mainframes. And while I'm not a kernel developer, I'm fairly sure that if say a system has an SSD drive but the hypervisor provides an 'abstract' spinning disk device to the guest OS(es), that will cause Really Bad Things to happen. > Another consideration is better utilization of expensive infrastructure. > Why buy a server for every database, web and file server. And this is where I see mainframe reasoning being applied to microprocessors: we need to make better utilization of expensive infrastructure, but why is the infrastructure expensive in the first place? Why not buy a cheap server for every database? Repeating myself, this isn't meant to be a critique of you personally or your company / organisation. -- cheers, Hugh Fisher From hugo.fisher at gmail.com Fri Aug 21 13:45:33 2020 From: hugo.fisher at gmail.com (Hugh Fisher) Date: Fri, 21 Aug 2020 23:45:33 +1000 Subject: [clug] Why virtual x86 machines? 
In-Reply-To: References: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> Message-ID: On Fri, Aug 21, 2020 at 7:18 AM Michael Still wrote: > > There are also bin packing efficiencies which aren't being accounted for in the original post. As an example, Google at the start of the GFC did an analysis and found something like 25% of their corporate servers (not the web facing stuff) were not doing _anything_at_all_. They were machines which had simply been forgotten about and were idling away happily. Those numbers are not uncommon for enterprises. VMs give me a way to pack many "machines" onto a single real machine and if some of them are idle it doesn't really matter because I just keep packing VMs on until the underlying hardware hits a certain satisfying level of utilization. Upfront declaration: I'm not telling anyone you are Doing It Wrong, I don't mean to criticise individuals or organisations. I'm interested in why the computing industry has gone down a particular development path where virtual x86 has become important. If a machine is idling away happily, so what? Why do we think it worthwhile or necessary to reach a level of utilization? I assume it's not hardcore Protestant theology, "the Devil finds work for idle CPUs". For environmental / monetary reasons, if you have expensive CPUs it is worthwhile to have the minimum number of systems running with as much utilization as required. But it seems to me that this is an industry choice, not the only way to do things. The alternative is demonstrated by my phone. Like most people I have a Ghz CPU with gigabytes of RAM and storage in my pocket. It's idle a lot of the time, but I don't feel at all guilty about this because these phone computer systems are designed for irregular, varying workloads. Modern phones, which are mostly running Linux and the rest a variant of BSD Unix, can switch into and out of low power mode in fractions of a second. The OS can switch on or off individual hardware units within each chip. This doesn't stop them from being extremely fast: current generation ARM CPUs in phones have single threaded performance comparable or better than many Intel CPUs, and multithreaded isn't bad. I'm really curious as to why similar technology isn't being used in data centres. (Or if it is, why we don't hear more about it.) -- cheers, Hugh Fisher From andrew at donnellan.id.au Fri Aug 21 15:02:47 2020 From: andrew at donnellan.id.au (Andrew Donnellan) Date: Sat, 22 Aug 2020 01:02:47 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> Message-ID: On Fri, 21 Aug 2020 at 23:46, Hugh Fisher via linux wrote: > If a machine is idling away happily, so what? Why do we think it > worthwhile or necessary to reach a level of utilization? > You correctly identify "monetary reasons". For rather obvious reasons, the industry does indeed choose to have more money rather than choosing to have less money. > The alternative is demonstrated by my phone. Like most people I have a > Ghz CPU with gigabytes of RAM and storage in my pocket. It's idle a > lot of the time, but I don't feel at all guilty about this because > these phone computer systems are designed for irregular, varying > workloads. > > Modern phones, which are mostly running Linux and the rest a variant > of BSD Unix, can switch into and out of low power mode in fractions of > a second. The OS can switch on or off individual hardware units within > each chip. 
This doesn't stop them from being extremely fast: current > generation ARM CPUs in phones have single threaded performance > comparable or better than many Intel CPUs, and multithreaded isn't > bad. > > I'm really curious as to why similar technology isn't being used in > data centres. (Or if it is, why we don't hear more about it.) Every modern high performance CPU has various advanced power management capabilities, capable of sending various parts of the chip into different levels of sleep state. The reason you don't hear more about it is that it's so ubiquitous that it's impossible to buy a CPU that can't do that. Power management doesn't address the capital cost of hardware, or the operational limits of rack space, networking equipment, etc etc etc. -- Andrew Donnellan http://andrew.donnellan.id.au andrew at donnellan.id.au From andrew at donnellan.id.au Fri Aug 21 15:17:41 2020 From: andrew at donnellan.id.au (Andrew Donnellan) Date: Sat, 22 Aug 2020 01:17:41 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> Message-ID: On Fri, 21 Aug 2020 at 23:28, Hugh Fisher via linux wrote: > And while I'm not a kernel developer, I'm fairly sure that if say a > system has an SSD drive but the hypervisor provides an 'abstract' > spinning disk device to the guest OS(es), that will cause Really Bad > Things to happen. > Typically, in a paravirtualisation environment such as KVM, the guest will not be provided with an "abstract spinning disk device" but rather an abstract block storage device using a standard such as VirtIO. The guest cooperates with the hypervisor, knowing that it is a virtualised guest - the virtual device doesn't try to simulate the actual interface to a real storage device but rather serves as a quick pipe to get data from the guest into a buffer that the hypervisor can read from and work out how to write to disk in a way that makes sense to maximise performance for whatever the real hardware is. > And this is where I see mainframe reasoning being applied to > microprocessors: we need to make better utilization of expensive > infrastructure, but why is the infrastructure expensive in the first > place? Why not buy a cheap server for every database? > It's all about elasticity ( https://en.wikipedia.org/wiki/Elasticity_(cloud_computing)). If I want to spin up a new virtualised server on a VM host for whatever reason, I click a button or make an API call to deploy a new guest based on my existing image, and a few seconds later, I can have a working virtualised server. If I want to buy a cheap server for my new database, I have to buy the server, find a place to put the server, install the server, and then maintain this physical piece of hardware indefinitely, times however many databases I'm running. At a typical company, just getting the budget approved for one new server will take you months. You are correct in that this is "mainframe reasoning". Mainframe reasoning is absolutely fine! Mainframes are good, and they are the natural state of large-scale enterprise computing. (I do work for IBM, I am biased I guess...) An x86 cloud provider really is just renting out a system that on the whole is like a giant mainframe - but built out of smaller and cheaper chunks than an IBM Z installation. 
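To make the "API call, working server in seconds" point concrete, here is roughly what it looks like against libvirt + KVM with a virtio disk and NIC. This is only a sketch - the guest name, memory size and image path are invented, the qcow2 image has to already exist, and a real cloud layers image management and cloud-init on top of this:

#!/usr/bin/env python3
# Start a transient KVM guest whose disk and network interface are
# paravirtualised virtio devices, via the libvirt API.
# Assumes the libvirt Python bindings (python3-libvirt) and permission
# to talk to qemu:///system.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>scratch-vm</name>   <!-- hypothetical guest name -->
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <!-- virtio-blk: the guest sees an abstract block device, not a simulated IDE disk -->
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/scratch.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.createXML(DOMAIN_XML, 0)   # transient: gone once it shuts down
print("started guest:", dom.name())
conn.close()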
-- Andrew Donnellan http://andrew.donnellan.id.au andrew at donnellan.id.au From steve at mcinerney.email Fri Aug 21 22:36:50 2020 From: steve at mcinerney.email (Steve McInerney) Date: Sat, 22 Aug 2020 08:36:50 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> Message-ID: <9d5ac73cac7479e13dcbe8bbe18c60fb@mcinerney.email> On 2020-08-21 23:26, Hugh Fisher via linux wrote: > On Fri, Aug 21, 2020 at 12:49 AM Randall Crook > wrote: >> >> In my case it's a matter of abstraction. > > Upfront, I want to be clear that I'm not telling anyone you are Doing > It Wrong. You can if you like. That doesn't make you correct tho. :-) > I'm interested in why we, the computing industry, went down > a particular development path. Rather than being limited to answers from a maillist, suggest some research on "why use virtualisation" may be a worthwhile path, if this genuinely is your desire. And to repeat a prior answer, it's cheaper. Hugely cheaper. > It seems to me that automated creation and configuration of Linux > systems has never needed virtualization. There are well established You would be so very misguided here. I'll give you a real example. We have 3 servers, they're a bit old and not particularly amazing. Each server is 1 or 2RU (I forget), so quite minimal DC costs there. We have a working mini cloud on those. Storage, CPU, Memory all the things we need. With those 3 servers, we can: * Enable a group of about 15 developers to individually and in isolation, spin up, test, and destroy a virtual system that replicates about 8 servers. * They can also spin up hundreds of virtual devices - that is the key part of their work - where each device in it's physical form costs thousands of dollars. They can respec those machines on the fly, more disk, more cpu, more memory, more networks, connect to different networks, funky and scary routing, less cpu, less memory. All by changing an "8" to a "12" in a yaml file. In your physical example, we would need 120 Physical servers, plus 500+ physical devices. 3 vs 620+ And don't forget the storage, networking, etc costs with that - your 620 servers would be insanely expensive to run, and a nightmare to manage. Have fun justifing that cost differential to your management. And that excludes how FAR more efficient the devs are when they don't have to pay any attention to the hardware at all. If they need more resources, they just configure it. If they forget about their servers, I just destroy them. If folks leave, we dont have their hardware sitting idle waiting for a new starter. Want to add your server to a different network to try something out? That'll be a 2 hour wait for me to drive to a DC, find the right server, add network cables, switch management etc. We haven't even touched on hardware failure. In virtualisation land, done well, it's almost invisible. Good luck to your productivity if a critical physical server fails. Oh? you have failover servers? Are you suggesting it's not 620+ servers, it's now 1240? Plus loadbalancer appliances? > What does virtualization and hypervisors make possible, or > qualitatively different, than before? See above. And revisit the many other replies that have already answered this question. >> Another consideration is better utilization of expensive >> infrastructure. >> Why buy a server for every database, web and file server. 
> > And this is where I see mainframe reasoning being applied to > microprocessors: we need to make better utilization of expensive > infrastructure, but why is the infrastructure expensive in the first > place? Why not buy a cheap server for every database? Because it isn't. I can buy one expensive server, and host dozens of database servers on it, for a fraction of the cost of those as individual cheap servers. tl;dr Your logic here is arse about. It's not an expensive server best utilised. It's an expensive amount in LOTS of servers (your view), than can be consolidated into a far cheaper amount in a much smaller number of servers (virtualisation). Which is why and how this drive came about in the early 2000's. It was to get rid of the racks and racks and racks of servers, and consolidate down into half a rack or so. Because that was vastly cheaper. Cheers! Steve From robertc99 at gmail.com Sat Aug 22 00:34:16 2020 From: robertc99 at gmail.com (Robert Cohen) Date: Sat, 22 Aug 2020 10:34:16 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: Message-ID: <5e1b31bf-c95f-e189-5637-c840a0ac380f@gmail.com> On 20/08/2020 9:09 pm, Hugh Fisher via linux wrote: > \ > Looking at these prices I understand why Intel want us to virtualise > x86 CPUs and run multiple guest operating systems. I don't see why > anyone else would want to. > > But since datacentres and cloud systems do use hypervisors I must be > missing something. Anyone want to explain? > > For serious virtualisation eg cloud provider, its all about density. In a rack, you can stack say 10 blades with 2 CPU's each with 64 cores (128 with hyperthreading). On that you can run 2000+ small VM's (1-2 vcpu's). Or 200 largish VM's (8 VCPU). If you tried to do that without virtualisation, you'd need 200 racks for the small machines. Assuming 10 physical machine in a rack. From kim.holburn at gmail.com Sat Aug 22 00:41:16 2020 From: kim.holburn at gmail.com (Kim Holburn) Date: Sat, 22 Aug 2020 10:41:16 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> Message-ID: On 2020/08/21 11:45 pm, Hugh Fisher via linux wrote: > If a machine is idling away happily, so what? Why do we think it > worthwhile or necessary to reach a level of utilization? > > I assume it's not hardcore Protestant theology, "the Devil finds work > for idle CPUs". > > For environmental / monetary reasons, if you have expensive CPUs it is > worthwhile to have the minimum number of systems running with as much > utilization as required. But it seems to me that this is an industry > choice, not the only way to do things. For a long time I have used almost the opposite system to virtualisation, although virtualisation is growing on me. I have been working on home systems, especially home network security. To call home network security pretty poor is a major understatement. I like to have small servers to create network infrastructure. DNS, syslog, LDAP, things like that. I like them physically separate, on the basis that there are often ways to penetrate the virtual barriers and small so they don't end up loaded with too many applications. Also for homes, I don't like large noisy power hungry servers. RaspberryPis are perfect for this type of use. 
Kim -- Kim Holburn IT Network & Security Consultant T: +61 2 61402408 M: +61 404072753 mailto:kim at holburn.net aim://kimholburn skype://kholburn - PGP Public Key on request From hugo.fisher at gmail.com Sat Aug 22 10:25:12 2020 From: hugo.fisher at gmail.com (Hugh Fisher) Date: Sat, 22 Aug 2020 20:25:12 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> Message-ID: On Sat, Aug 22, 2020 at 1:17 AM Andrew Donnellan wrote: > Typically, in a paravirtualisation environment such as KVM, the guest will not be provided with an "abstract spinning disk device" but rather an abstract block storage device using a standard such as VirtIO. The guest cooperates with the hypervisor, knowing that it is a virtualised guest - the virtual device doesn't try to simulate the actual interface to a real storage device but rather serves as a quick pipe to get data from the guest into a buffer that the hypervisor can read from and work out how to write to disk in a way that makes sense to maximise performance for whatever the real hardware is. That answers my second question, about a version of Linux being designed for virtualization. Now i'm curious about how functionality is divided up. Are the hypervisor and/or guest custom Linux builds or regular distros? Virtual memory in the hypervisor, guest OS just sees flat physical memory space? Guest OS allocates storage through something like LVM? Feel free to tell me to RTFM, but name of distro or project please so I read the right FM? > It's all about elasticity (https://en.wikipedia.org/wiki/Elasticity_(cloud_computing)). > > If I want to spin up a new virtualised server on a VM host for whatever reason, I click a button or make an API call to deploy a new guest based on my existing image, and a few seconds later, I can have a working virtualised server. Improvement in speed and convenience becomes a qualitative change, new ways of working are practical. Got it. > If I want to buy a cheap server for my new database, I have to buy the server, find a place to put the server, install the server, and then maintain this physical piece of hardware indefinitely, times however many databases I'm running. At a typical company, just getting the budget approved for one new server will take you months. > > You are correct in that this is "mainframe reasoning". Mainframe reasoning is absolutely fine! Mainframes are good, and they are the natural state of large-scale enterprise computing. (I do work for IBM, I am biased I guess...) An x86 cloud provider really is just renting out a system that on the whole is like a giant mainframe - but built out of smaller and cheaper chunks than an IBM Z installation. I will try to think that way from now on. Not individual servers, a very modular mainframe. -- cheers, Hugh Fisher From kim.holburn at gmail.com Sun Aug 23 07:40:17 2020 From: kim.holburn at gmail.com (Kim Holburn) Date: Sun, 23 Aug 2020 17:40:17 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: <399a2e57-6455-73f6-1b37-d59ce01d3f8d@gmail.com> Message-ID: Thanks, I'll have a look. Doesn't really solve the problem of noise and power usage for home machines. On 2020/08/23 9:27 am, David C wrote: > On that, have a look at firecracker - KVM with a truly minimal and > security focussed VM interface. > > On Sat, 22 Aug 2020, 10:41 am Kim Holburn via linux, > > wrote: > > > > On 2020/08/21 11:45 pm, Hugh Fisher via linux wrote: > > If a machine is idling away happily, so what? 
Why do we think it > > worthwhile or necessary to reach a level of utilization? > > > > I assume it's not hardcore Protestant theology, "the Devil finds work > > for idle CPUs". > > > > For environmental / monetary reasons, if you have expensive CPUs it is > > worthwhile to have the minimum number of systems running with as much > > utilization as required. But it seems to me that this is an industry > > choice, not the only way to do things. > > For a long time I have used almost the opposite system to > virtualisation, although virtualisation is growing on me. > > I have been working on home systems, especially home network security. > To call home network security pretty poor is a major understatement.? I > like to have small servers to create network infrastructure.? DNS, > syslog, LDAP, things like that.? I like them physically separate, on the > basis that there are often ways to penetrate the virtual barriers and > small so they don't end up loaded with too many applications.? Also for > homes, I don't like large noisy power hungry servers.? RaspberryPis are > perfect for this type of use. > > > Kim > > -- > Kim Holburn > IT Network & Security Consultant > T: +61 2 61402408? M: +61 404072753 > mailto:kim at holburn.net ? aim://kimholburn > skype://kholburn - PGP Public Key on request > > > -- > linux mailing list > linux at lists.samba.org > https://lists.samba.org/mailman/listinfo/linux > From sjenkin at canb.auug.org.au Tue Aug 25 06:45:50 2020 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Tue, 25 Aug 2020 16:45:50 +1000 Subject: [clug] Why virtual x86 machines? In-Reply-To: References: Message-ID: > On 20 Aug 2020, at 21:09, Hugh Fisher via linux wrote: > > > Q.1 > (Raspberry Pi vs Xeon) Looking at these prices I understand why Intel want us to virtualise x86 CPUs and run multiple guest operating systems. > I don't see why anyone else would want to. > > But since datacentres and cloud systems do use hypervisors I must be missing something. > Anyone want to explain? > > Q.2 > Second question, > are there custom Linux kernels designed to run on hypervisors? > Not a Container OS, which I think is something else, but like CMS designed to be single user or otherwise not duplicate what the hypervisor is already doing? > > Q. 3 > And lastly I'm assuming that there's nothing in virtual x86 design and implementation that VM/370 didn't already do. > Am I wrong? > What new and interesting uses for hypervisors have been thought of? > > -- > > cheers, > Hugh Fisher Hugh, On Q3: You?re right about VM/370 from the 1970?s implementing most of the current functionality. Large-scale networking across multiple datacenters (eg. AWS, Azure, Google) is only 21st Century. IBM ?mainframes? focus on a different market to x86 servers, large single compute facilities, including storage & DB's - IBM have always emphasised ?RAS? - Reliability Availability Serviceability - and mostly provided it via hardware. Systems don?t need to be taken off-line very often, even for most hardware maintenance - one reason they?re popular for 24/7 systems (Police, Airline bookings, Credit Cards, Banking, ?). Google were the first Internet scale operation to address ?RAS? and 100% notional uptime using cheap, imperfect hardware, with software + network providing High Availability & ?RAS? functionality. In 1990, IBM did clusters with NUMA (Non Uniform Memory Architecture) and called them a ?Sysplex? 
I don't follow IBM & z-Series, but know it's possible to network z-Series & connect an x86 cabinet (for Java JVMs) to a Sysplex. The version I saw had to be managed by the z-Series.

VM/370 provided Virtual Machines via a single machine, later a single "sysplex", later extended with remote facilities (share a workload with another datacentre within a "Metro" area (200km) - targeted at banks & financial institutions). VMware & friends commercially, and AWS, Azure & Google with their own proprietary services, allow large fleets of x86 servers, multiple 10-50 MWatt datacentres and Tbps interconnects. They have a very different set of Operations, Administration and Management issues to address than a z-Series Sysplex. When you've got 100,000 servers, everything has to be automated; nothing can be manual or "Joe (alone) knows how to do that". VMware & friends can take snapshots of running instances, live migrate them to another physical host and much more. AWS etc can manage running instances transparently for clients - physical hosts fail or overload and customer workloads will get moved all the time. I've no experience of the FOSS VM cluster management software.

None of that functionality was needed by VM/370, even with Sysplexes. The "Hub & Spoke" compute model didn't embrace even 10k servers.

=========

I wasn't aware that IBM's CMS was either "single user" or that it's dependent on the hypervisor, i.e. wasn't a "full" O/S itself.

In the 1970's, I used VM/CMS for work. We only had 10s of users, not the 1,000s others used. But we were able to share files - it was our source code editing system & I think build; vaguely remember being able to submit batch jobs to the Target system, DOS/VS. I know I could edit / browse and print files across the whole system, but can't remember the Access Control scheme.

The security boundary of "single user" in CMS is almost unique: "Plan 9" circa 1990 created a "local universe" per user, per login. It's the closest I can think of. Because of shared filesystems in CMS, a rogue CMS process or malware inside a full O/S image executing alongside might not be able to access the process / memory space, but there'll be a vector for viruses to move around, via the filesystem.

However... After the x86 "Spectre & Meltdown" bugs - which only leaked information AFAIK - just how "secure" is that security barrier? Joanna Rutkowska's work & "Qubes" show that making VMs secure (preventing both escaping their enclosures and leaking information) is very subtle, very hard. Qubes is based on Linux on a Xen hypervisor.

The notion of a standalone hypervisor is included in Wikipedia's Comparison page as "No host OS". [not full list] - Xen (runs in "dom0") - SUN/ Oracle "VM server" (not VirtualBox)

VMware ESX is tagged as "No host OS" - though I always thought it was a heavily modified early Linux kernel, with a vestigial set of commands. I know the one time I ran it, I could SSH into the hypervisor environment. Xen and "dom0" is different again - the hypervisor is incorporated into a Linux kernel, but is really separate. KVM is a little different again - it's a set of kernel modules that provide the capability to run "user mode hosts", but need a user-mode program to run guest O/Ss. QEMU/KVM is a more accurate description: KVM itself does no device or processor emulation, it exposes the CPU's hardware virtualisation support to user space, and QEMU (versions 0.10.1 and later) is one such userspace host.
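To illustrate that split, here is about the smallest possible "userspace host": talking to the KVM module directly through /dev/kvm, the same interface QEMU uses. A sketch only - it just asks the module for its API version (ioctl 0xAE00 in the kernel's KVM ABI, expected to return 12) rather than building a whole VM, and it assumes the user can open /dev/kvm (kvm group membership or root):

#!/usr/bin/env python3
# Query the KVM API version via /dev/kvm, the entry point a userspace
# host such as QEMU uses before creating VMs, vCPUs and guest memory.
import fcntl
import os

KVM_GET_API_VERSION = 0xAE00   # _IO(KVMIO, 0x00)

fd = os.open("/dev/kvm", os.O_RDWR)
try:
    print("KVM API version:", fcntl.ioctl(fd, KVM_GET_API_VERSION))
finally:
    os.close(fd)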
Amazon AWS has relied on the security barriers of hypervisors to keep customer instances isolated. In the last two years, AWS added their own hardware, "Nitro", to manage servers and allow them to sell "bare metal", not just O/S instances. Presumably "Nitro" shares design elements with "Trusted Execution Environments".

=========

Answer to Q2: I was unable to find another execution environment (an O/S equivalent) that relied on services from a hypervisor. The L4/ OKL4/ seL4 people might disagree, but then what about Rusty's "lguest"? The Guest O/S uses the services of the Host.

=========

The Wiki comparison page mentions OKL4 (Open Kernel Labs, L4), but not Rusty Russell's "lguest", which was dropped from the kernel after v4.14. If you've never seen it, it was very elegant & small - not general purpose, only creating VMs of the current Linux kernel. Ideal for kernel developers :)

L4Linux - a kernel that can run virtualised on top of the L4 microkernel. Does that make L4 a hypervisor? [The UNSW L4 people strongly hold that view.]

=========

"libvirt" was mentioned in the thread as well. This was a major addition to the Linux virtualisation toolset - a single interface for virtualisation functions and management tools. libvirt is an open-source API, daemon and management tool for managing platform virtualization. It can be used to manage KVM, Xen, VMware ESXi, QEMU and other virtualization technologies.

=========

On Q1: Cheap CPU cycles (R-Pi) vs "Big Iron" (now Xeon, or z-Series for some people) - this question goes back to the 1960's, DEC et al and "mini-computers" vs "mainframes".

In the 1950's & 60's, there were only "computers". With the 360-series, IBM, by sales volume, grew to be larger than all the other computer vendors combined. In the 1960's, it was IBM and "the seven dwarfs", morphing in the 1970's to IBM and "the BUNCH" - Burroughs, Univac, NCR, CDC, Honeywell.

DEC with the PDP-8 in the 1960's showed there was a market for cheaper, smaller computers, especially for embedded systems and control tasks. These got named "mini-computers". Intel pioneered the microprocessor which led to "commercial grade" Personal Computers (PC) within a decade. [IBM PC in 1981. Not the first micro, but with IBM's imprimatur, PCs became "proper" computers for business, not the preserve of hobbyists.]

The background driver is Bell's Law of Computing Classes - the application of Moore's Law to whole systems: every decade-ish, the smallest viable compute platform reduces in cost by a factor of 10-1000. As parts prices go down and performance increases, manufacturers need to choose to build cheaper machines or faster machines - they are forced to "go high" or "go low". [Full paper + extracts at end. Image from the 2007 paper.]

The Raspberry Pi - an ARM processor almost in a SoC (System on a Chip) - is a Single Board Computer (SBC). It compares directly to 2005 x86 PCs, but uses 1W - 5W. Is it A Good Idea to lash a few thousand ARM processors together and run a datacentre on them? Maybe. ARM processors aren't "super computers", though they achieve higher MIPS / Watt than x86, especially high-end Xeon. Over the last decade, there's been quite a few start-ups building exactly that architecture.

The R-Pi was the first ARM computer to sell "at scale" - it was far from the first SBC or microchip embedded system and not the largest, fastest or cheapest. It's ironic that the R-Pi comes from the UK, entirely designed, manufactured and supported there. The ARM is a licensed CPU design, sold to chip designers for inclusion in their silicon. Hard Disks often include an ARM processor in their silicon. If you want the definitive "over-spec'd"
CPU (99% idle), it's these.

ARM stands for "Acorn RISC Machine" - as in the BBC Acorn - a UK designed CPU and microcomputer for home and educational use. This is the irony of the R-Pi: they've reinvented the BBC micro at a cheaper price point & it went global. ARM SBCs, exemplified by the R-Pi, abound, but many other CPU types can be found. [MIPS, ATmega, ...] Embedded Linux Distributions, Hackerboards - no idea if the site is commercial or not.

I'd rephrase your Q1 to: - Where do Raspberry Pis excel? - Where do large x86 systems running VMs excel? - In the overlap region, how to decide which to use?

Engineering Questions start with: - how much money do you have to spend? - what's going to make you happy? or, What are you trying to achieve or optimise? - how much time & money do you have to keep this running, once built?

For a hobbyist around their own home, DIY embedded "appliances" using a favourite SBC + Distro is a great way to "solve a problem", including learning new technologies. If this is for a work environment where a lot of people will have to depend, for years, on any hardware / software selection, external factors such as maintenance and paid support will likely dominate, due to the wages cost of unplanned outages. To scale up, the units have to be standardised, so they are replaceable by identical units. If you end up with more than a few SBCs, they'll need to be networked to be managed, and very quickly they'll need a single monitoring & management console - which gets complex and tricky.

For serious general purpose compute power, x86 CPUs are still the benchmark, though apparently AMD is overtaking Intel in bang-for-buck at the moment. If you're a large enterprise, running a large fleet of x86 physical hosts supporting a plethora of platforms, DBs and licensed software, x86 will work best using VMs and commercial management solutions. Upgrades, extensions and maintenance can be "safely" performed live during the day. Using more advanced management tools, multi-site operations are possible, given a sufficiently capable network.

As Brenton noted, he's got a beefy laptop that he uses to develop across multiple environments. It's cheaper, easier and more reliable for him to have a single "commodity" laptop to do that, rather than lashing together a series of SBCs in an ad-hoc fashion. His interest is writing & testing software, not lashing together hardware in new ways.

The SAMBA developers used standard VM images to run their testing - on modest hardware it was possible to store, not run, a standard image of every version of Microsoft SMB-supporting products. That's not possible with a fleet of SBCs. [I presume they had a licensing deal, possibly a simple DevNet subscription.]

Answering Q1: What to buy, R-Pi or x86 + VMs, depends on what tasks you need performed, what constraints they'll be under (power, heat, load, ...) and the Quality of Service to be delivered. Notably, "the best" solution now will probably not be "the best" in 3-5 years when the system needs "refreshing". Technology has changed substantially every decade until now and, while the rate is slower, is still changing.

Which isn't a definitive answer, but I hope it provides a framework for decision making.

all my best steve jenkin

====================

There are people lashing together hundreds or thousands of ARM processors to create "high performance, low power" systems. I'm sure Google, AWS and Facebook have all taken a look at this. Parallella & their "Epiphany IV"
2016 Adapteva

====================

2007 - Gordon Bell [PDF]

Bell's Law accounts for the formation, evolution, and death of computer classes. For classes to form and evolve, all technologies need to evolve in scale, size, and performance, though at comparable, but their own rates! The universal nature of stored program computers is such that a computer may be programmed to replicate function from another class. Hence, over time, one class may subsume or kill off another class. Market demand for a class and among all classes is fairly elastic. In 2010, the number of units sold in a class varies from 10s, for computers costing around $100 million, to billions for small form factor devices e.g. cell phones selling for o($100). Costs decline by increasing volume through manufacturing learning curves. Finally, computing resources including processing, memory, and network are fungible and can be traded off at various levels of a computing hierarchy e.g. data can be held personally or provided globally and held on the web.

Chart with 4 trajectories: a. Supercomputer: "the largest computers of the day" b. Constant price, increasing performance. c. Sub-class formation (cheaper, constant performance) d. New, "minimal priced" computers: smallest, useful computer, new apps

1. Computers are born i.e. classes come into existence through intense, competitive, entrepreneurial action over a period of 2-3 years to occupy a price range. 2. A computer class, determined by a unique price range, evolves in functionality and gradually expanding price range of 10, maintains a stable market. This is followed by a similar lower priced sub-class that expands the range another factor of 5 to 10. 3. Semiconductor density and packaging inherently enable performance increase to support a trajectory of increasing price and function. 4. Approximately every decade a new computer class forms as a new "minimal" computer, either through using fewer components or use of a small fractional part of the state-of-the-art chips. 5. Computer classes die or are overtaken by lower priced, more rapidly evolving general purpose computers as the less expensive alternatives operating alone, combined into multiple shared memory micro-processors, and multiple computer clusters.

====================

--
Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin

From sjenkin at canb.auug.org.au Sat Aug 29 07:46:13 2020
From: sjenkin at canb.auug.org.au (steve jenkin)
Date: Sat, 29 Aug 2020 17:46:13 +1000
Subject: [clug] Open Source Developers in CBR & region and LCA 2021
Message-ID: <2D838498-1BA9-483C-B908-35E7AB3EAC0D@canb.auug.org.au>

Is there an opportunity to have a strictly local event that complied with local Covid-safe rules and perhaps only ran for a day or two? Videos could still be streamed as part of the on-line event. It's not very different to the TED-x concept. It could be advertised in the local paper (CBR Times) if still extant, garnering more local interest.

There are multiple Open Source developers in town, both full-time and volunteer. They'd be a good start to putting together 6-8 speakers needed for a single day. Does anyone know if there's a list of local FOSS developers, or a simple way to put one together?

There is interest about Open Source within ANU's CECS, not just the professional support staff.
It shouldn?t be hard to get something small going with their help - not unlike the AUUG Summer Conferences of old. steve ========== linux.conf.au 2021 Moves Online We have decided that it is for the best to postpone LCA2021 to LCA2022. We have secured the 2022 dates with the Australian National University events booking team. We will be running LCA Online! ========== -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From rossb at fwi.net.au Sun Aug 30 05:47:13 2020 From: rossb at fwi.net.au (Brenton Ross) Date: Sun, 30 Aug 2020 15:47:13 +1000 Subject: [clug] Open Source Developers in CBR & region and LCA 2021 In-Reply-To: <4A06104A-916A-4939-9C32-040709D56D77@canb.auug.org.au> References: <2D838498-1BA9-483C-B908-35E7AB3EAC0D@canb.auug.org.au> <930301d1303f5e8ebe37f67992dd24c1263489b4.camel@fwi.net.au> <4A06104A-916A-4939-9C32-040709D56D77@canb.auug.org.au> Message-ID: <684c3d9b0359a5606dc1bac8b368cdfb235354d9.camel@fwi.net.au> It would seem that the main aim is for some social interaction. I know I miss the monthly CLUG meetings, so I understand. However, I am not sure about group meetings in the current situation. Many of us are in the age group most at risk from the virus and might be unwilling to attend. While we are all hoping for a vaccine to be available soon, I think we may have long wait until personal social interactions that we have become accustomed to return. Even when there is a vaccine there are still a lot cases where "in person" interactions are difficult. Maybe a group with a lot of IT skills could come up with a solution ;) Perhaps we can start by turning this email group into a "watercooler" by encouraging people to just start threads on whatever FOSS projects they are working on. Don't just wait until you have a problem to start a thread. What does the group think of this idea ? Other things floating around in my mind include video [preferably peer to peer], chat, and maybe telepresence. Brenton On Sun, 2020-08-30 at 11:05 +1000, steve jenkin wrote: > > On 30 Aug 2020, at 00:05, Brenton Ross wrote: > > > > I suspect my problem is that you have not nominated an objective > > for this event. > > What would its purpose be ? > > Brenton, > > Thanks for the reply, good question. Hadn?t thought that through. > > 1. Replace LCA with a local physical meeting, scaled down to a day > and 6-8 speakers. > Why? Something for people to look forward, provide a forum to > interact with peers. > > 2. Potentially provide a group venue to watch LCA streamed events > live, then socialise. > I think just one or two days, not across the full LCA > programme. > It won?t be as interesting or rewarding as attending the big > conference for a full week. > > 3. Provide a local Open Source event that includes sessions for the > general public. 
> - could run a small tutorial on building and using a Raspberry > Pi > - ditto for Arduino > - not sure if it?s necessary these days to show people how to > install Ubuntu / Fedora on an old PC > > Alastair D?Silva did a workshop for Arduino in 2019 > > > cheers > steve > > -- > Steve Jenkin, IT Systems and Design > 0412 786 915 (+61 412 786 915) > PO Box 38, Kippax ACT 2615, AUSTRALIA > > mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin > From Clug at goproject.info Sun Aug 30 07:37:39 2020 From: Clug at goproject.info (George at Clug) Date: Sun, 30 Aug 2020 17:37:39 +1000 Subject: [clug] Open Source Developers in CBR & region and LCA 2021 In-Reply-To: <684c3d9b0359a5606dc1bac8b368cdfb235354d9.camel@fwi.net.au> References: <684c3d9b0359a5606dc1bac8b368cdfb235354d9.camel@fwi.net.au> Message-ID: On Sunday, 30-08-2020 at 15:47 Brenton Ross via linux wrote: > It would seem that the main aim is for some social interaction. I know > I miss the monthly CLUG meetings, so I understand. +1, though my reasons are more due to family reasons (dad's taxi), than COVID-19. > > However, I am not sure about group meetings in the current situation. > Many of us are in the age group most at risk from the virus and might > be unwilling to attend. Sadly it would seem to be that way. Adding up the number of Sweden's COVID-19 deaths below 50 years of age, I counted 72, compared to the total of 5,821 deaths. https://www.statista.com/statistics/1107913/number-of-coronavirus-deaths-in-sweden-by-age-groups/ https://www.worldometers.info/coronavirus/country/sweden/ Deaths: 5,821 > > While we are all hoping for a vaccine to be available soon, I think we > may have long wait until personal social interactions that we have > become accustomed to return. Even when there is a vaccine there are > still a lot cases where "in person" interactions are difficult. > > Maybe a group with a lot of IT skills could come up with a solution ;) > > Perhaps we can start by turning this email group into a "watercooler" > by encouraging people to just start threads on whatever FOSS projects > they are working on. Don't just wait until you have a problem to start > a thread. What does the group think of this idea ? +1 (not that I have ever had any success trying to get people to converse via this email group, maybe we would need another email group so we don't clog up people's in-boxes) > > Other things floating around in my mind include video [preferably peer > to peer], chat, and maybe telepresence. +1 I have been reading up on BigBlueButton. While it is not without its issues, I have used it and find does a great job. It has nice break out rooms. There are other opensource video conferencing solutions, too. Openmeetings comes to mind. We would have to get our own pizza (or choice of food), and eat together, as one suggestion. Does the ANU have video conferencing systems that people can use? How do people feel about security in today's world? https://arstechnica.com/tech-policy/2020/08/unredacted-suit-shows-googles-own-engineers-confused-by-privacy-settings/ https://www.theguardian.com/technology/2020/apr/08/zoom-privacy-video-chat-alternatives https://www.consumerreports.org/video-conferencing-services/videoconferencing-privacy-issues-google-microsoft-webex/ It's Not Just Zoom. Google Meet, Microsoft Teams, and Webex Have Privacy Issues, Too. CR evaluated videoconferencing privacy policies and found these services may collect more data than consumers realize By Allen St. 
John, April 30, 2020 https://www.lifesize.com/en/video-conferencing-blog/video-conferencing-privacy We could all use voice changers and wear Anonymous masks ? (please note I am joking, this is satire) https://www.amazon.com/PomeMall-Anonymous-Guy-Mask-White/dp/B07S6CYM5T > > Brenton > > > > On Sun, 2020-08-30 at 11:05 +1000, steve jenkin wrote: > > > On 30 Aug 2020, at 00:05, Brenton Ross wrote: > > > > > > I suspect my problem is that you have not nominated an objective > > > for this event. > > > What would its purpose be ? > > > > Brenton, > > > > Thanks for the reply, good question. Hadn?t thought that through. > > > > 1. Replace LCA with a local physical meeting, scaled down to a day > > and 6-8 speakers. > > Why? Something for people to look forward, provide a forum to > > interact with peers. > > > > 2. Potentially provide a group venue to watch LCA streamed events > > live, then socialise. > > I think just one or two days, not across the full LCA > > programme. > > It won?t be as interesting or rewarding as attending the big > > conference for a full week. > > > > 3. Provide a local Open Source event that includes sessions for the > > general public. > > - could run a small tutorial on building and using a Raspberry > > Pi > > - ditto for Arduino > > - not sure if it?s necessary these days to show people how to > > install Ubuntu / Fedora on an old PC > > > > Alastair D?Silva did a workshop for Arduino in 2019 > > > > > > cheers > > steve > > > > -- > > Steve Jenkin, IT Systems and Design > > 0412 786 915 (+61 412 786 915) > > PO Box 38, Kippax ACT 2615, AUSTRALIA > > > > mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin > > > > > -- > linux mailing list > linux at lists.samba.org > https://lists.samba.org/mailman/listinfo/linux >