rsyncing *to* live system
mbp at sourcefrog.net
Wed Sep 10 13:03:37 EST 2003
On 26 Aug 2003 jw schultz <jw at pegasys.ws> wrote:
> On Wed, Aug 27, 2003 at 09:25:41AM +1200, Steve Wray wrote:
> > Hi there,
> > I have been asked to develop a system for keeping
> > a bunch of machines remotely configured and updated.
> > The client has asked for this to be implemented using rsync.
> > The machines involved are located at remote sites and
> > physical access by competent personnel is non-trivial.
> > And the systems are running Debian.
> > I am a little concerned at the prospect of using rsync to
> > replace running binaries, open files and system libraries.
> > I've searched for an example where rsync has been used in this way.
> > So far I have found nothing; people use it to backup a live
> > filesystem; we are tasked with doing the reverse (sort of).
> > And there are people who use rsync to replicate systems (rolling out
> > a bunch of identical boxes; typically these receive the rsync
> > *before* they go live not after).
> > So, can anyone please give me arguments or reasons for
> > or against using rsync in this way? References to sites
> > which currently use rsync in this way would be much appreciated.
> There are some difficulties that can occur depending on how
> you structure your filesystems.
> It is possible to produce temporary dangling symlinks.
> Rsync may remove the destination of the link before
> the symlink is updated or deleted (see --delete-after); or
> if rsync creates or updates a symlink before the destination
> is created.
> You can get inter-file inconsistencies. The file sets are not
> updated atomically so different config files and binaries
> may be updated at slightly different times. Because rsync
> processes the file list in lexical order, the size of the window
> depends on how far apart files are in the directory
> hierarchy: files in the same directory have small windows,
> but files in different subtrees will have a somewhat larger one.
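The dangling-symlink hazard JW describes is easy to reproduce locally. Here is a minimal sketch with no rsync transfer involved; the names (demo/, libfoo.so) are purely illustrative. It shows what a program sees in the window after a link's target has been removed but before the link itself is updated:

```shell
# Illustrative layout: a versioned library with a convenience symlink.
mkdir -p demo/lib
echo 'v1' > demo/lib/libfoo.so.1
ln -s lib/libfoo.so.1 demo/libfoo.so

# Simulate the transfer removing the link target before the
# symlink is updated or deleted:
rm demo/lib/libfoo.so.1

# The symlink still exists, but it now dangles:
test -L demo/libfoo.so && echo "link exists"
test -e demo/libfoo.so || echo "but its target is gone"

rm -rf demo
```

Running with --delete-after narrows this window by postponing deletions until after all transfers, but it does not close it entirely.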
Here is an example of a bad case: a program depends on a shared
library, and needs to be recompiled when a new version of the library
is released. Your transfer upgrades the program before it updates the
library (or vice versa) and the program crashes.
I agree with JW and will just add that the inter-file inconsistencies
could be far worse if the transfer is ever interrupted due to e.g. a
network outage. If you interrupt it at the right (or wrong) time it
is possible that rsync will no longer be able to run.
dpkg knows how to upgrade software in a safe and sane way, avoiding
all these problems. Let it do its job. By all means use rsync to
transfer the packages, but then run apt or dpkg.
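That division of labour might look something like the following sketch. The host name, staging directory, and package directory here are hypothetical placeholders, not part of the original discussion:

```shell
#!/bin/sh
# Sketch: stage .deb files with rsync, then let dpkg perform the
# actual upgrade. Nothing live is overwritten by the transfer itself.
set -e

REMOTE="admin@remote-box"        # hypothetical remote host
STAGING="/var/cache/local-debs"  # hypothetical staging directory

# 1. Transfer only the package files; an interrupted run leaves the
#    running system untouched.
rsync -av --delete ./debs/ "$REMOTE:$STAGING/"

# 2. Install recursively with dpkg, which handles dependency
#    ordering, maintainer scripts, and daemon restarts safely.
ssh "$REMOTE" "dpkg -i -R $STAGING"
```

The same staging directory can also be pointed at by a local apt source on the remote machine if you prefer apt's dependency resolution over a plain recursive dpkg install.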
In addition, once you upgrade software, you will want to restart
daemons to make sure the upgraded code is actually used. dpkg
handles that too.