Berkeley DB

Chuck Wolber chuckw at
Sat Jul 17 17:24:02 GMT 2004

We are rsync'ing large (hundreds of GB) and constantly changing Berkeley DB 
(aka Sleepycat) datasets (the RPM database uses the same thing, but its 
dataset is extremely small). When a change occurs (insert, update, delete, 
etc.) in a BDB, it tends to propagate through the binary database files, 
so rsync has to re-transfer a large quantity of otherwise unchanged data 
(much like the recent example of why you shouldn't gzip large files before 
rsync'ing them).
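The gzip comparison can be made concrete with a quick sketch. This is not rsync's actual rolling-checksum algorithm, just a same-offset fixed-block comparison on synthetic data (the block size, record format, and mutation offset are all illustrative choices):

```python
# Rough illustration: count how many fixed-size blocks differ between
# two versions of a file. A one-byte edit to the raw file dirties a
# single block, but the same edit made before compression perturbs the
# compressed stream from that point onward, so far more blocks change.
import zlib

BLOCK = 1024  # stand-in for rsync's transfer block size

def changed_blocks(old: bytes, new: bytes) -> int:
    """Count fixed-size blocks that differ between two byte strings."""
    n = (max(len(old), len(new)) + BLOCK - 1) // BLOCK
    return sum(old[i * BLOCK:(i + 1) * BLOCK] != new[i * BLOCK:(i + 1) * BLOCK]
               for i in range(n))

# Synthetic "database": 20,000 small records, highly compressible.
data = b"".join(b"key=%06d value=payload\n" % i for i in range(20000))
mid = len(data) // 2
mutated = data[:mid] + bytes([data[mid] ^ 1]) + data[mid + 1:]  # flip one byte

raw_diff = changed_blocks(data, mutated)
comp_diff = changed_blocks(zlib.compress(data), zlib.compress(mutated))

print(raw_diff)   # only the block containing the edit
print(comp_diff)  # many blocks: compressed output diverges after the edit
```

A BDB write is worse than this sketch suggests, since a single logical change can rewrite B-tree pages scattered across the file rather than one contiguous region.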

Are there any known methods for making rsync backups of these databases 
more efficient?


 Quantum Linux Laboratories, LLC.
 ACCELERATING Business with Open Technology

 "The measure of the restoration lies in the extent to which we apply 
  social values more noble than mere monetary profit." - FDR
