[clug] LVM recovery -- Canberra LUG a last resort!

Daniel Rose drose at nla.gov.au
Sun Nov 4 08:27:59 GMT 2007


Mark Triggs wrote:
> Daniel Rose <drose at nla.gov.au> writes:
>   
>> All I think I've altered on disk are these bytes near the beginning of
>> the partition.
>>
>> [...binary...]LABELONE  [...binary...]  LVM2
>> 0016oVVgdWIVKqB0yPXk2Acu5ZxcIgMj5JD [...binary...]
>>
>> By doing
>> pvcreate -u 6oVVgd-WIVK-qB0y-PXk2-Acu5-ZxcI-gMj5JD /dev/hdd2
>> Software RAID md superblock detected on /dev/hdd2. Wipe it? [y/n] n
>> Physical volume "/dev/hdd2" successfully created
>>
>> As suggested by someone somewhere on the Internet.
>>     
> [...]
>   
>> So... I'm not sure what's going on.  Further along the disk can be found
>> text files detailing the construction, first with seqno1, 2 and finally
>> this (edited) promise of potential success:
>>
>> main {
>>     
> [...]
>
>   
>> # Generated by LVM2: Tue Nov 21 18:40:21 2006
>>
>>
>> It seems to me that, given this information, it should not be
>> particularly difficult to reconstruct the volume group.
>>     
>
>
> This article:
>
>   http://www.linuxjournal.com/article/8874
>
> describes restoring the LVM metadata using the stored information from
> the beginning of the raw disk.  A half-baked test with a USB device here
> seemed to work, but don't blame me if you lose your data ;o)
>   
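For the record, the recipe from that article, as best I can reconstruct
it, amounts to the following, with the metadata text carved off the disk
saved into a file called main (the article pairs --uuid with
--restorefile; I haven't verified every flag):

#     pvcreate --uuid 6oVVgd-WIVK-qB0y-PXk2-Acu5-ZxcI-gMj5JD \
          --restorefile main /dev/hdd2
#     vgcfgrestore -f main main
#     vgchange -a y main
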
Well, it doesn't work for me:

#    cat /proc/mdstat
Personalities :
md1 : inactive hdd1[0]
      104320 blocks
      
md2 : inactive hdd2[0]
      78043648 blocks

#       mdadm -A -s /dev/md2
mdadm: device /dev/md2 already active - cannot assemble it
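
My guess is that the kernel already holds md1/md2 (inactive but still
registered), so mdadm refuses to assemble over them and nothing else can
open the member disks. Stopping the arrays first and forcing the
single-disk halves up might get further; untested guesswork on my part:

#     mdadm --stop /dev/md1 /dev/md2    # release hdd1/hdd2
#     mdadm --assemble --run /dev/md2 /dev/hdd2    # --run starts a degraded array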


And:

#     vgcfgrestore -f main main
  Couldn't find device with uuid '6oVVgd-WIVK-qB0y-PXk2-Acu5-ZxcI-gMj5JD'.
  Couldn't find all physical volumes for volume group main.
  Restore failed.

#     sudo pvcreate -u 6oVVgd-WIVK-qB0y-PXk2-Acu5-ZxcI-gMj5JD /dev/hdd2
  Can't open /dev/hdd2 exclusively.  Mounted filesystem?
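
That exclusive-open failure is almost certainly md again: even an
inactive array keeps its members claimed, so nothing else can grab
/dev/hdd2. If I'm right, stopping the array should let pvcreate through
(again, a guess):

#     mdadm --stop /dev/md2
#     pvcreate --uuid 6oVVgd-WIVK-qB0y-PXk2-Acu5-ZxcI-gMj5JD --restorefile main /dev/hdd2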

#     vgcfgrestore -f main main
  Restored volume group main

See how the pvcreate apparently failed, but now vgcfgrestore is happy to
report a restored group?
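
My best guess at the paradox: vgcfgrestore just parses the file named by
-f and writes the metadata onto any PV whose UUID it can locate, and the
earlier pvcreate (quoted above) had already written the label, so this
second attempt wasn't actually needed. For reference, the usual
invocation points at LVM's own backup copy rather than a hand-carved
file:

#     vgcfgrestore -f /etc/lvm/backup/main main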

#     vgscan
  Reading all physical volumes.  This may take a while...
  No volume groups found

Unhappy.  Hm...

pvscan and pvdisplay also come up empty-handed.  I suspect this is
abnormal behaviour; however, I seem to be getting closer!
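
One thing worth ruling out is lvm.conf's device filter: if /dev/hdd* is
being skipped, or md still owns the partition, the scans will come up
empty no matter what's on disk. Verbose scanning should at least show
which devices were examined:

#     pvscan -vv 2>&1 | grep -i hdd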



#     vgchange main -a y
  device-mapper: reload ioctl failed: Invalid argument
  device-mapper: reload ioctl failed: Invalid argument
  2 logical volume(s) in volume group "main" now active
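
Those reload failures usually mean the mapping table handed to the
kernel names a device it can't open, or one of the wrong size. The
tables device-mapper actually received can be inspected directly
(dmsetup should be on the box if LVM2 is):

#     dmsetup table main-root
#     dmsetup info -c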

#     lvscan
  ACTIVE            '/dev/main/root' [74.12 GB] inherit
  ACTIVE            '/dev/main/swap' [256.00 MB] inherit

#     ls /dev/mapper/main-root          
/dev/mapper/main-root

#     mount /dev/mapper/main-root /wow
mount: /dev/mapper//main-root: can't read superblock

#     cat /dev/mapper/main-root
#
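
If the reload never succeeded, the node can exist with no live table
behind it, which would explain the instant EOF from cat. The reported
size ought to settle it; speculative, but cheap to check:

#     blockdev --getsize64 /dev/mapper/main-root
#     dmsetup status main-root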

So I've got the device node, but there's nothing behind it; that's
probably because /proc/mdstat still reports the arrays as inactive.
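
If that diagnosis is right, the ordering probably matters: tear down the
half-built LVM mappings, get md assembled and running, and only then
re-activate the volume group on top. Something like this, though I
stress it's a sketch rather than a tested recipe:

#     vgchange -a n main    # drop the broken mappings
#     mdadm --stop /dev/md2
#     mdadm --assemble --run /dev/md2 /dev/hdd2
#     pvscan    # LVM should now find the label, on md2 or hdd2
#     vgchange -a y main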


