Help With Restoring - system disk exercise experience

Tim Moore timothymoore at
Fri Jan 10 20:47:32 EST 2003

Here's my experience with a recent full restore exercise on a system/boot disk,
which is among the more interesting scenarios.

Rsync Backup Overview
I have a script based on Mike Rubel's rsync/snapshot backup notes,
with two primary differences:
   1. Files that no longer exist on the client are not deleted from the backup.
      This trades true incremental backup behavior for slightly more storage.
   2. Each incremental backup is touched and tagged with its original root time.
      This provides iterations limited only by the backup server's capacity.

I worked for Network Appliance when remote mirroring was relatively new, and was
particularly delighted to replicate the functionality so easily thanks to Mike
and so many others.

Anyway, for each client the backup server pull cycle is basically:
  cd snapshot
  if (client) cp -al client client.old
  rsync -a -e ssh --stats --exclude-from=$client.excludes client:/ client
  touch timestamp(s)

and over time:
# find snapshot -maxdepth 1 -ls
244801    4 drwxr-xr-x  29 root  root  4096 Jan  3 12:48 snapshot/dell
441635    4 drwxr-xr-x  29 root  root  4096 Dec 28 21:30
1420716   4 drwxr-xr-x  29 root  root  4096 Dec 23 14:24
# ls -F snapshot/dell
A/  backup/  bootblock.446  etc/     lib/         opt@            sbin/  vid/
C/  big/     cdr/           floppy/  lost+found/  root/           tmp@  
E/  bin/     cdrom/         home/    mnt/         rsync.exclude   usr/  
Z/  boot/    dev/           kits/    net/         rsync.exclude~  var/   zip/

Excludes are /proc, /tmp, /var/tmp, /var/spool, .netscape/cache, plus
client-specific entries.
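Such an exclude list could live in the per-client file fed to rsync (the
rsync.exclude visible in the listing above); the exact contents aren't shown in
the post, but an illustrative version would be:

```
/proc
/tmp
/var/tmp
/var/spool
.netscape/cache
```

Leading-slash patterns are anchored at the transfer root (the client's /),
while the unanchored .netscape/cache matches in any directory, skipping every
user's browser cache.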

Full Drive Restore Exercise
The restore test assumed total drive failure.  The process was more tedious
than I had previously thought through:

1. Power cycle the server to attach a new drive.
2. Manually replicate the partition structure on the new drive.  You would need
   to have previously saved each logical drive's map or have a really good memory
      fdisk -l /dev/sda | sed 's/^/#/' >> /etc/fstab
3. Individually create a file system or initialize swap on each partition.
4. Individually mount and copy each data partition, with special attention to /
   for directories that are mount points, e.g.
      mount /dev/sdd4 /mnt; cd /snapshot/dell/usr; cp -a . /mnt; umount /mnt
5. Restore the boot sector if applicable.
6. Power cycle the server to remove the newly rebuilt drive.
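Steps 2-5 above can be sketched roughly as follows.  The device names
(/dev/sdd*), the saved partition map (dell.sfdisk), and the snapshot paths are
illustrative assumptions, not the author's exact commands; the boot-sector step
assumes the first 446 bytes were saved beforehand, as the bootblock.446 file in
the listing suggests:

```shell
#!/bin/sh
# Hedged restore sketch -- edit devices and paths before any real use.

# Step 4 helper: mount a partition and copy one snapshot subtree onto it
restore_partition() {
    dev="$1"; snap="$2"
    mount "$dev" /mnt
    (cd "$snap" && cp -a . /mnt)
    umount /mnt
}

restore_drive() {
    # 2. Recreate the partition table from a previously saved dump
    #    (e.g. one made earlier with: sfdisk -d /dev/sda > dell.sfdisk)
    sfdisk /dev/sdd < /root/dell.sfdisk &&

    # 3. Create file systems / swap, one partition at a time
    mkfs.ext3 /dev/sdd1 &&
    mkswap    /dev/sdd2 &&
    mkfs.ext3 /dev/sdd4 &&

    # 4. Copy / first, then the partitions mounted under it
    restore_partition /dev/sdd1 /snapshot/dell &&
    restore_partition /dev/sdd4 /snapshot/dell/usr &&

    # 5. Put back the saved 446-byte boot-loader block
    dd if=/snapshot/dell/bootblock.446 of=/dev/sdd bs=446 count=1
}

# Invoke restore_drive by hand; it rewrites /dev/sdd.
```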

If server power cycles were undesirable, I could have worked on the client
(drive swap, minimal boot, restore the network, network copies).  Tedious or
not, the good news is the backup clone booted and functioned perfectly.

If anyone can improve on the restore steps, I'd like to hear back.  The
development time to automate them does not seem worth it unless there were
enough clients to make complete system disk failure likely.  I'm glad I did
the exercise, as there is now a defined, albeit labor-intensive, process for
the worst case.


"Dr. Poo" wrote:
> I'm very interested in this...
> My question(s) are why did you have to install a minimal system?
> Could you have just booted up with network and rsync and just rsynced to a
> freshly partitioned/formatted disk?
> What command did you use (to clarify) to make the original rsyncs of your
> disk, and what were the steps necessary and the command you used (to
> clarify) to make the restore? (I guess that includes the install of a base
> system.. did you use the same system, say redhat 7.2?)
> I'm curious because i have a full rsync of a production server on my
> development box, and i put it in Lilo to boot up with, disconnected the
> network (so no conflicts occur, would any?) and tried to boot, but it
> failed... (i don't have access right now to the error message).
> The production box is i686 and so is the development box... Not the same
> processor though.... would that make a difference?
> Do you think if i just tried to install a minimal system to a partition and
> copied the / tree it would work??


More information about the rsync mailing list