strange behavior of --inplace on ZFS

Hendrik Visage hvjunk at gmail.com
Thu Mar 6 14:45:22 MST 2014


Question: what OS is hosting the source and destination folders, and are both sides ZFS?

I'd like to see the stats rsync reports for what it transferred; also add the
-S (--sparse) flag as an extra set of tests.

The other question that would be interesting (both with and without -S)
is the case where you take a file created with dd if=/dev/urandom but
change some places with dd if=/dev/zero (i.e. the reverse of test case A,
which creates with dd if=/dev/zero and changes with dd if=/dev/urandom).

If you are on Solaris, also check the impact of a test case that uses
mkfile rather than dd if=/dev/zero.
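The reverse case above can be set up like this (a local sketch using a 1 MiB file instead of 100 MB; the seek offset is arbitrary, and the rsync/snapshot steps from Pavel's original test would wrap around it):

```shell
#!/bin/sh
# Sketch of the "reverse" test case: start from random data, then
# overwrite one block with zeros (the mirror image of test case A).
# Sizes and the seek offset are illustrative only.
set -e
dir=$(mktemp -d)

# Initial file: 1 MiB of random data.
dd if=/dev/urandom of="$dir/testfile" bs=1024 count=1024 2>/dev/null

# Change: 1 KiB of zeros at an arbitrary offset, without truncating.
dd if=/dev/zero of="$dir/testfile" bs=1024 count=1 seek=512 \
    conv=notrunc 2>/dev/null

# File length is unchanged; only block contents differ, which is what
# the rsync --inplace (and, separately, -S) runs would be measured on.
ls -l "$dir/testfile"
```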


On Thu, Mar 6, 2014 at 11:17 PM,  <devzero at web.de> wrote:
> Hi Pavel,
>
> maybe that's related to zfs compression?
>
> on compressed zfs filesystem, zeroes are not written to disk.
>
> # dd if=/dev/zero of=test.dat bs=1024k count=100
>
> /zfspool # ls -la
> total 8
> drwxr-xr-x  3 root root         4 Feb 26 10:18 .
> drwxr-xr-x 27 root root      4096 Mar 29  2013 ..
> drwxr-xr-x 25 root root        25 Mar 29  2013 backup
> -rw-r--r--  1 root root 104857600 Feb 26 10:18 test.dat
>
> /zfspool # du -k test.dat
> 1       test.dat
>
> /zfspool # du -k --apparent-size test.dat
> 102400  test.dat
>
> despite that, space calculation on compressed fs is a difficult thing...
>
> if that gives no pointer, i think that question is better placed on a zfs mailing list.
>
> regards
> roland
>
>
>
> List:       rsync
> Subject:    strange behavior of --inplace on ZFS
> From:       Pavel Herrmann <morpheus.ibis () gmail ! com>
> Date:       2014-02-25 3:26:03
> Message-ID: 5129524.61kVAFkjCM () bloomfield
> [Download message RAW]
>
> Hi
>
> I am extending my ZFS+rsync backup to handle large files (think virtual
> machine disk images) efficiently. However, during testing I found very
> strange behavior of the --inplace flag (which seems to be what I am
> looking for).
>
> what I did: create a 100MB file, rsync, snapshot, change 1K at a random
> location, rsync, snapshot, change 1K at another random location, repeat a
> couple of times, then run `zfs list` to see how large my volume actually is.
>
> the strange thing here is that the resulting size was wildly different
> depending on how I created the initial file. all modifications were done by the
> same command, namely
> dd if=/dev/urandom of=testfile count=1 bs=1024 seek=some_num conv=notrunc
>
> situation A:
> file was created by running
> dd if=/dev/zero of=testfile bs=1024 count=102400
> the resulting size of the volume is approximately 100MB times the number of
> snapshots
>
> situation B:
> file was created by running
> dd if=/dev/urandom of=testfile count=102400 bs=1024
> the resulting size of the volume is just a bit over 100MB
>
> the rsync command used was
> rsync -aHAv --delete --inplace root at remote:/test/ .
>
> rsync on backup machine (the destination) is 3.1.0, remote has 3.0.9
>
> there is no compression or dedup enabled on the zfs volume
>
> anyone seen this behavior before? is it a bug? can I avoid it? can I make
> rsync give me disk IO statistics to confirm?
>
> regards
> Pavel Herrmann
> --
> Please use reply-all for most replies to avoid omitting the mailing list.
> To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
> Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


More information about the rsync mailing list