[clug] How to make my server robust for booting

George at Clug Clug at goproject.info
Thu Sep 12 02:01:11 UTC 2019


Tony,

I started to ask myself, is it possible to just have grub and /boot on two RAID1 drives, then have root (/) on the RAID 6 array?  I guess so.  Since grub and /boot are very small, and usually read-only, using standard SSDs should not be too much of an issue?
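
For what it's worth, a minimal sketch of that layout with mdadm, just to make the idea concrete (all the device names and partition numbers below are assumptions, not a recommendation):

# /boot mirrored across the two small SSDs
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# root (/) on a six-drive RAID 6
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[c-h]1
mkfs.ext4 /dev/md0    # mounted as /boot
mkfs.ext4 /dev/md1    # mounted as /

GRUB would then still need to be installed to both of the RAID-1 disks, as per the notes further down.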

I would be curious about your final configuration and success (particularly if you're game enough to do some testing by manually degrading the system, i.e. remove a drive from each RAID and see if you can still boot, then replace the drive and see if you can still boot).
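
If you do get to that test, the usual way to degrade and then heal an array from the command line is something like this (a sketch only; /dev/md0 and /dev/sdb1 are assumed names):

# simulate a failed drive by marking a member faulty and removing it
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# try a reboot here to prove the degraded array still boots
# then put the member (or its replacement) back and watch it rebuild
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat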

All this takes so much time. I once spent about a week replacing each of the 5400 RPM drives with 7200 RPM drives in a six-drive RAID, and then changing from RAID 5 to RAID 6.  Each drive took about 8 hours to resync, and I was not always around when a resync completed to immediately start the next drive, though I did my best to time things.
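
One thing that would have helped with the timing: mdadm can block until a resync finishes, so the next swap can be scripted rather than watched for. A rough sketch (the array and member names are just assumptions):

# re-add the freshly swapped drive, then wait for its resync to complete
mdadm --manage /dev/md1 --add /dev/sdc2
mdadm --wait /dev/md1
# optionally raise the kernel's resync speed ceiling (value is in KiB/s)
echo 200000 > /proc/sys/dev/raid/speed_limit_max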

I like stability and therefore like simplicity. I do not have a good understanding of UEFI, and it has given me issues in the past. BIOS is so simple that I can swap my drives between different physical computers without any issues. UEFI is supposed to be more secure, yet I still deem it insecure.

Below are some notes I had from my RAID testing some time ago.

http://www.texsoft.it/index.php?c=hardware&m=hw.storage.boot-raid-squeeze

The RAID-1 device is mounted as /boot, in order to allow a safe boot even if the first disk fails or is removed. The root filesystem and the swap partition use RAID-6 devices. It's possible to use RAID-5 instead of RAID-6, but in production environments it is safer to use RAID-6 as it guarantees a higher level of redundancy.

The Debian installer does not seem to recognize that GRUB should be installed on each disk participating in the RAID-1 array; it just installs to the first disk, /dev/sda. It's possible to install it manually on the other disks with the grub-install command (see further on):

Install GRUB to all disks

The Debian installer installs GRUB just to the first disk, /dev/sda. To allow booting from the other disks in the array, GRUB must be manually installed to each disk with the grub-install command:

grub-install /dev/sdb
grub-install /dev/sdc
grub-install /dev/sdd
grub-install /dev/sde
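
The same thing can be scripted, and on Debian/Ubuntu the package itself can be told which disks to install to (a sketch; the disk list is an assumption):

# install the boot code to every member disk in one go
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    grub-install "$disk"
done
# or let the package ask which disks to install to (Debian/Ubuntu)
dpkg-reconfigure grub-pc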

cat /proc/mdstat 

http://askubuntu.com/questions/355727/how-to-install-ubuntu-server-with-uefi-and-raid1-lvm

mount /dev/sda1 /boot/efi
grub-install /dev/sda1
umount /dev/sda1
mount /dev/sdb1 /boot/efi
grub-install /dev/sdb1
reboot

--------------------------------
http://www.thomas-krenn.com/en/wiki/Restoring_UEFI_Boot_Entry_after_Ubuntu_Update
Problem

The system being used has two hard drives installed that are mirrored with Linux Software RAID 1 (md). In this case, two EFI System Partitions (ESPs) are required, in our case /dev/sda1 and /dev/sdb1, with a FAT32 file system. To our knowledge, EFI System Partitions cannot be mirrored with Linux Software RAID. Therefore, only one EFI System Partition is created, on the first hard disk (/dev/sda1), and mounted as /boot/efi. The following procedure has been chosen in order to allow booting from the second drive:

mount | grep sda1
sudo umount /boot/efi
sudo mkfs.vfat /dev/sdb1
sudo parted /dev/sdb set 1 boot on
sudo mount /dev/sdb1 /boot/efi
sudo grub-install --bootloader-id ubuntu-hdd2 /dev/sdb
sudo umount /boot/efi
sudo mount /boot/efi

This creates another UEFI BIOS entry called "ubuntu-hdd2". 
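
If I remember rightly, efibootmgr will show whether the new entry actually made it into the firmware, and lets you reorder it if needed (the entry numbers below are placeholders):

# list the firmware boot entries; "ubuntu-hdd2" should appear alongside "ubuntu"
efibootmgr -v
# reorder so the second disk's entry is tried after the first, e.g.
# efibootmgr -o 0000,0001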

=====================================================================================
http://tedytirta.com/2013/10/01/root-disk-mirroring-on-debian-linux/

It gets slightly more amusing on UEFI, where the installer needs to be smart enough to create (or reuse) the EFI System partition on each device [2] for the bootloader but NOT for the grub.cfg [3], otherwise we have separate grub.cfgs on each ESP to update when there are kernel updates.

And if a disk fails and is replaced, grub-install works on BIOS, but it doesn't work on UEFI because it will only install a bootloader if the ESP is mounted in the right location.
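
Which I take to mean that after a disk replacement on UEFI you have to rebuild the ESP and mount it at /boot/efi yourself before grub-install will do anything useful; roughly along these lines (the device names and bootloader id are assumptions):

# recreate the ESP on the replacement disk and point grub-install at it
mkfs.vfat /dev/sdb1
mount /dev/sdb1 /boot/efi
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu-hdd2 /dev/sdb
umount /boot/efi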

So until every duck is in a row, I think we can hardly expect to make a degraded system bootable without any human intervention.

http://www.gocit.vn/bai-viet/set-up-software-raid1-on-ubuntu-incl-grub2-configuration/
6 Preparing GRUB2 (Part 1)

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:
    grub-install /dev/sda
    grub-install /dev/sdb

Now we reboot the system and hope that it boots ok from our RAID arrays:
    reboot
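
After the reboot it is worth checking that all the arrays actually came back with every member present before calling it done:

# every array should show all members, e.g. [UU] rather than [U_]
cat /proc/mdstat
mdadm --detail /dev/md0
mdadm --detail /dev/md1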






On Thursday, 12-09-2019 at 10:42 Tony Lewis via linux wrote:
> On 12/9/19 10:21 am, George at Clug via linux wrote:
> > This is how I did it once, a long time ago.  I had RAID1 for boot
> > drives (i.e. grub, /boot and OS) and RAID 6 for data (virtual
> > machines).
> > https://unix.stackexchange.com/questions/230349/how-to-correctly-install-grub-on-a-soft-raid-1
> >   If the two disks are /dev/sda and /dev/sdb, run both grub-install
> > /dev/sda and grub-install /dev/sdb. Then both drives will be able to
> > boot alone.
> 
> Thanks for the link.  From that, it recommends making sure root is not 
> hardcoded as /dev/hd0, which it isn't; it uses /dev/mapper/md1_crypt.
> 
> So it looks like it should work in the real world.  I'll try it when I 
> get that far.
> 
> 
> 
> -- 
> linux mailing list
> linux at lists.samba.org
> https://lists.samba.org/mailman/listinfo/linux
> 


