[clug] Linux Backups: dump vs ...

Stephen Granger sgranger at stepsoft.com.au
Wed Apr 20 05:19:23 GMT 2005


I've been looking into some disaster recovery/backup options for our
Red Hat servers, and dump was our first choice. However, it seems that
dump may not be the best solution on an ext(?) filesystem, according to
the post below. Do others share this opinion of dump? I've always found
it highly regarded on other Unixes. The post is from a couple of years
ago, on an early 2.4 kernel; maybe things have improved since then?

I'm thinking a process of using kickstart and restoring from tars is the
best way to rebuild a Red Hat machine in under 30 minutes. Any other
options? I'd prefer not to consider Norton Ghost; it seems a bit too evil.
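The kickstart-plus-tar idea above might look something like the sketch below. It's a hypothetical illustration, not from the post: the temp directories stand in for the live root and the freshly kickstarted machine, and all the paths and the "hostname=web01" content are made up for the example.

```shell
# Hypothetical sketch: archive a tree with tar, then restore it onto a
# fresh install, the pattern a kickstart %post script could follow.
SRC=$(mktemp -d)   # stands in for the live filesystem
DST=$(mktemp -d)   # stands in for the freshly kickstarted machine

mkdir -p "$SRC/etc"
echo "hostname=web01" > "$SRC/etc/config"

# -p preserves permissions; --numeric-owner avoids uid/gid remapping
# when the rebuilt box doesn't yet have identical /etc/passwd entries.
tar -cpzf "$SRC.tar.gz" --numeric-owner -C "$SRC" .

# On the rebuilt machine (e.g. from a kickstart %post script):
tar -xpzf "$SRC.tar.gz" --numeric-owner -C "$DST"

cat "$DST/etc/config"    # restored file comes back intact
```

On a real system you'd archive / while excluding /proc, /sys, /tmp and the like, but the create/restore pattern is the same.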


######################### dump/restore: Not Recommended!

The dump and restore programs are Linux equivalents to the UNIX programs
of the same name. As such, many system administrators with UNIX
experience may feel that dump and restore are viable candidates for a
good backup program under Red Hat Linux. Unfortunately, the design of
the Linux kernel has moved ahead of dump's design. Here is Linus
Torvalds' comment on the subject:

(the original thread)

From:	 Linus Torvalds
To:	 Neil Conway
Subject: Re: [PATCH] SMP race in ext2 - metadata corruption.
Date:	 Fri, 27 Apr 2001 09:59:46 -0700 (PDT)
Cc:	 Kernel Mailing List <linux-kernel At vger Dot kernel Dot org>

[ linux-kernel added back as a cc ]

On Fri, 27 Apr 2001, Neil Conway wrote:
> I'm surprised that dump is deprecated (by you at least ;-)).  What
> to use instead for backups on machines that can't umount disks
> regularly?

Note that dump simply won't work reliably at all even in 2.4.x: the
buffer cache and the page cache (where all the actual data is) are not
coherent. This is only going to get even worse in 2.5.x, when the
directories are moved into the page cache as well.

So anybody who depends on "dump" getting backups right is already
playing Russian roulette with their backups.  It's not at all guaranteed
to get the right results - you may end up having stale data in the
buffer cache that ends up being "backed up".

Dump was a stupid program in the first place. Leave it behind.

> I've always thought "tar" was a bit undesirable (updates atimes or
> ctimes for example).

Right now, the cpio/tar/xxx solutions are definitely the best ones, and
will work on multiple filesystems (another limitation of "dump").
Whatever problems they have, they are still better than the
_guaranteed_(*)  data corruptions of "dump".

However, it may be that in the long run it would be advantageous to have
a "filesystem maintenance interface" for doing things like backups and
defragmentation..


(*) Dump may work fine for you a thousand times. But it _will_ fail
under the right circumstances. And there is nothing you can do about it.

Given this problem, the use of dump/restore is strongly discouraged.
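On the atime complaint about tar raised in the quoted thread: GNU tar has an --atime-preserve option that restores each file's access time after reading it (at the cost of bumping ctime on most systems). A minimal, hypothetical illustration, with made-up paths:

```shell
# Hypothetical sketch: back up with GNU tar while preserving atimes.
WORK=$(mktemp -d)
echo "some data" > "$WORK/file"

# --atime-preserve puts the access time back after tar reads the file.
tar -cpf "$WORK/backup.tar" --atime-preserve -C "$WORK" file

tar -tf "$WORK/backup.tar"   # lists: file
```

It doesn't remove the ctime side effect Neil Conway alludes to, but it does address the atime one.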

This post mentions a few solutions.

For DR there is also http://www.mondorescue.org/; it made it into this list.

and then there is this one

Here's a tool to build a complete Linux system in under 15 minutes.
Oh... just for Debian :) though it does mention Kickstart for Red Hat.

Mr IBM doesn't mention too many downfalls of dump, though he does note
it's only good for ext2 and ext3... the slow/poor-performing standard
filesystems that we are running. If we were to switch to ReiserFS we
couldn't use dump.

My favourite quote
"Go forth and back up"

