[clug] f14->f15 upgrade: now unclean boot (mdadm issue)
Eyal Lebedinsky
eyal at eyal.emu.id.au
Wed Mar 7 04:05:15 MST 2012
The upgrade (using preupgrade) was a nightmare (and I still have f15->f16 to do).
I slowly picked up the pieces and now have one issue.
I boot from sda and then have a RAID /dev/md0 where most of my data lives as /data1.
The bootup attempts to mount /data1 before mdadm has finished assembling /dev/md0.
At least this is how it looks to me. Does anyone understand what is happening and how to
fix it (short of *not* auto-mounting /data1 and doing it in rc.local)?
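For reference, the rc.local workaround I would rather avoid looks roughly like this
(a sketch only; ext4 and the noauto option are what I expect to use, the exact paths
are from memory):

  # /etc/fstab -- stop systemd from trying to mount it during boot
  /dev/md0   /data1   ext4   noauto   0 0

  # /etc/rc.d/rc.local (must be executable) -- mount it late,
  # once md0 has long since been assembled
  #!/bin/sh
  mount /dev/md0 /data1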
It seems to be a common problem (if one trusts the talk on the web). Is this also the case
in f16? If not, then I will suffer quietly until the next upgrade (this weekend, if one
believed the weather forecast... not anymore, a clear weekend is expected now) since I
do not reboot this server very often.
Here is the log around that time:
[ 14.980801] mdadm: sending ioctl 800c0910 to a partition!
[ 14.981484] mdadm: sending ioctl 800c0910 to a partition!
[ 14.981487] mdadm: sending ioctl 800c0910 to a partition!
[ 14.981493] mdadm: sending ioctl 1261 to a partition!
[ 14.981494] mdadm: sending ioctl 1261 to a partition!
[ 14.981672] mdadm: sending ioctl 1261 to a partition!
[ 14.981674] mdadm: sending ioctl 1261 to a partition!
[ 14.987718] mdadm: sending ioctl 1261 to a partition!
[ 14.987725] mdadm: sending ioctl 800c0910 to a partition!
[ 14.987744] mdadm: sending ioctl 1261 to a partition!
...
[ 23.750596] mdadm: sending ioctl 800c0910 to a partition!
[ 23.750599] mdadm: sending ioctl 1261 to a partition!
[ 23.750615] mdadm: sending ioctl 1261 to a partition!
[ 23.750628] mdadm: sending ioctl 1261 to a partition!
[ 23.750641] mdadm: sending ioctl 1261 to a partition!
[ 23.750646] mdadm: sending ioctl 1261 to a partition!
[ 23.750648] mdadm: sending ioctl 1261 to a partition!
[ 23.750664] mdadm: sending ioctl 1261 to a partition!
[ 23.750669] mdadm: sending ioctl 1261 to a partition!
[ 23.750675] mdadm: sending ioctl 1261 to a partition!
[ 23.782169] mtp-probe[748]: bus: 3, device: 2 was not an MTP device
[ 23.783177] mtp-probe[658]: bus: 6, device: 2 was not an MTP device
[ 23.784118] mtp-probe[659]: bus: 6, device: 3 was not an MTP device
[ 23.785022] mtp-probe[749]: bus: 4, device: 2 was not an MTP device
[ 23.796395] md: bind<sdg1>
[ 23.800054] md: bind<sde1>
[ 23.803426] md: bind<sdf1>
[ 23.807301] md: bind<sdd1>
[ 23.811194] md: bind<sdb1>
[ 23.815536] md: bind<sdc1>
[ 23.873241] EXT4-fs (md0): unable to read superblock
[ 23.880180] mount[867]: mount: wrong fs type, bad option, bad superblock on /dev/md0,
[ 23.880525] systemd[1]: data1.mount mount process exited, code=exited status=32
[ 23.881648] mount[867]: missing codepage or helper program, or other error
[ 23.882328] mount[867]: (could this be the IDE device where you in fact use
[ 23.882986] mount[867]: ide-scsi so that sr0 or sda or so is needed?)
[ 23.883656] mount[867]: In some cases useful info is found in syslog - try
[ 23.884342] mount[867]: dmesg | tail or so
[ 23.894132] systemd[1]: Job fedora-autorelabel-mark.service/start failed with result 'dependency'.
[ 23.894957] systemd[1]: Job fedora-autorelabel.service/start failed with result 'dependency'.
[ 23.894963] systemd[1]: Job local-fs.target/start failed with result 'dependency'.
[ 23.894968] systemd[1]: Triggering OnFailure= dependencies of local-fs.target.
[ 23.897923] systemd[1]: Job boot.mount/start failed with result 'dependency'.
[ 23.898738] systemd[1]: Job fsck at dev-disk-by\x2duuid-f9f345f0\x2d39d7\x2d4b2b\x2d8ed0\x2d57b25da62edf.service/start failed with result 'dependency'.
[ 23.899556] systemd[1]: Unit data1.mount entered failed state.
Above is the failure line leading to an emergency shell.
[ 23.990638] async_tx: api initialized (async)
[ 24.152290] xor: automatically using best checksumming function: generic_sse
[ 24.158008] generic_sse: 9248.000 MB/sec
[ 24.159064] xor: using function: generic_sse (9248.000 MB/sec)
[ 24.374029] raid6: int64x1 1949 MB/s
[ 24.392017] raid6: int64x2 2625 MB/s
[ 24.410011] raid6: int64x4 1996 MB/s
[ 24.428017] raid6: int64x8 1695 MB/s
[ 24.446022] raid6: sse2x1 4449 MB/s
[ 24.464020] raid6: sse2x2 6343 MB/s
[ 24.482019] raid6: sse2x4 7300 MB/s
[ 24.483084] raid6: using algorithm sse2x4 (7300 MB/s)
[ 24.645247] md: raid6 personality registered for level 6
[ 24.646319] md: raid5 personality registered for level 5
[ 24.647381] md: raid4 personality registered for level 4
[ 24.648613] bio: create slab <bio-1> at 1
[ 24.649720] md/raid:md0: device sdc1 operational as raid disk 1
[ 24.650813] md/raid:md0: device sdb1 operational as raid disk 0
[ 24.651905] md/raid:md0: device sdd1 operational as raid disk 2
[ 24.652997] md/raid:md0: device sdf1 operational as raid disk 4
[ 24.654084] md/raid:md0: device sde1 operational as raid disk 3
[ 24.655144] md/raid:md0: device sdg1 operational as raid disk 5
[ 24.656670] md/raid:md0: allocated 6384kB
[ 24.657769] md/raid:md0: raid level 6 active with 6 out of 6 devices, algorithm 2
[ 24.658817] RAID conf printout:
[ 24.658819] --- level:6 rd:6 wd:6
[ 24.658820] disk 0, o:1, dev:sdb1
[ 24.658822] disk 1, o:1, dev:sdc1
[ 24.658824] disk 2, o:1, dev:sdd1
[ 24.658825] disk 3, o:1, dev:sde1
[ 24.658826] disk 4, o:1, dev:sdf1
[ 24.658828] disk 5, o:1, dev:sdg1
[ 24.658963] created bitmap (15 pages) for device md0
[ 24.660378] md0: bitmap initialized from disk: read 1/1 pages, set 0 of 29809 bits
[ 24.688848] md0: detected capacity change from 0 to 8001578598400
[ 24.751155] systemd[1]: Startup finished in 2s 879ms 556us (kernel) + 15s 113ms 278us (initrd) + 6s 758ms 257us (userspace) = 24s 751ms 91us.
At this point I log in, mount /data1 (no problem) and exit so the boot can complete. Note how the
device only finished coming up just above.
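For the record, the manual recovery from the emergency shell is roughly this (from memory):

  cat /proc/mdstat     # confirm md0 is assembled and clean
  mount /data1         # the fstab entry mounts fine now
  exit                 # leave the shell, the boot continues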
My mount:
[ 37.378259] md0: unknown partition table
[ 37.601528] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
I exit, and now the bootup continues:
[ 39.959976] Adding 16458588k swap on /dev/sda2. Priority:0 extents:1 across:16458588k
...
--
Eyal Lebedinsky (eyal at eyal.emu.id.au)