Fwd: transferring large encrypted images.

Xen list at xenhideout.nl
Wed Oct 14 09:58:59 UTC 2015


On Tue, 13 Oct 2015, Selva Nair wrote:

> On Tue, Oct 13, 2015 at 5:03 PM, Xen <list at xenhideout.nl> wrote:

>> Sure if the files are small and not encrypted. Or, not constantly changing
>> (with their encryption).
>
>
> Not so. For small files there is no advantage, as any change may change the
> whole file. It's for large files where only a few blocks change that the
> delta algorithm saves transfer time. And that's exactly where eCryptfs
> would help.
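
(For reference, that delta saving is roughly what rsync's delta transfer already does over a remote connection; the path and host below are made up:)

    # only the changed blocks of the large container file get re-sent;
    # --inplace updates the existing destination file rather than rewriting it
    rsync -av --inplace --partial /srv/images/disk.img backup@example.net:/backups/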

But you said "you can keep the image in the filesystem" or something like 
that. Now, if I were backing up a single filesystem, obviously there 
won't normally be encrypted images in it. But that means you can't use the 
outer layer (the lower layer, as you call it) of eCryptfs, because it will 
probably use a randomized cipher/encryption.

That means you need to use the decrypted files, i.e. operate from a mounted 
eCryptfs. In that case there is no advantage to eCryptfs; you might 
just as well encrypt the entire volume/partition/system.

Depending, perhaps, on whether you need user home directories that are 
encrypted separately from the rest, but so be it.
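
To make the lower/upper layer distinction concrete, a rough sketch (all paths and the host are made up) of operating from a mounted eCryptfs versus backing up the lower, encrypted files:

    # the lower directory holds the encrypted files; the mount point shows them decrypted
    sudo mount -t ecryptfs /srv/lower /srv/plain   # prompts interactively for key and cipher options
    # backing up the decrypted view (what I mean by "using the decrypted files"):
    rsync -av /srv/plain/ backup@example.net:/backups/plain/
    # backing up the encrypted lower files instead:
    rsync -av /srv/lower/ backup@example.net:/backups/lower/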

> You don't like file level encryption but that is exactly what you have been
> asking about. You can't move out of Windows but still want a scalable,
> stable solution. It's all a contradiction in terms.

Euhm, no. Unless I'm mistaken, I have been asking about block-container 
encryption, but perhaps that is the same thing to you? A container file is 
still a file.

Anyway, Duplicity is the only system I've heard of (I had heard about it 
before), and now that I've read up on it, it seems to work well. I don't 
like GnuPG, but there you have it. On the other hand, restoring Linux would 
require a live session with Duplicity after manually creating the filesystems 
and then chrooting and hopefully restoring the boot manager; all fine and 
simple.
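
Roughly, that restore path would look something like this (the device names, URL and bootloader are just assumptions for the sketch):

    # from a live session
    mkfs.ext4 /dev/sda2
    mount /dev/sda2 /mnt
    duplicity restore scp://user@backuphost//backups/root /mnt
    mount --bind /dev /mnt/dev; mount --bind /proc /mnt/proc; mount --bind /sys /mnt/sys
    chroot /mnt grub-install /dev/sda
    chroot /mnt update-grub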

But that means you need to run a Linux system, as you say. Which has its 
own drawbacks ;-). The point of even backing up a system like that kinda 
disappears. But all in all, these are filesystem deltas of real 
unencrypted files. It doesn't use rsync (by default, it doesn't have to) 
but it uses the rsync algorithm to create diffs. And the incremental diffs 
are stored remotely.
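
In practice that looks something like this (paths and host are placeholders); the first run makes a full backup, later runs make encrypted incrementals against it, and only those incrementals travel over the wire:

    duplicity /home scp://user@backuphost//backups/home               # full backup on the first run
    duplicity incremental /home scp://user@backuphost//backups/home   # later runs: encrypted diffs only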

Well, that's what my Windows software does too. You see, it's all the same 
in that regard. Perhaps it creates huge diffs - that might be a flaw of 
the software. Duplicity creates a lot of temp files or uses a lot of temp 
space; I take that to mean it first creates the tarball locally. So 
what you have is a system that merely facilitates the transfer process and 
makes it more intuitive to transfer to a remote location.

But that means Duplicity does what I do: I create encrypted "tarballs" 
and encrypted "diffs" between those tarballs and the newest "filesystem", 
and both are currently stored remotely through scp and/or rsync.
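
Just to illustrate the kind of thing I mean (the actual tooling differs, and all names here are made up), using librsync's rdiff for the "diffs":

    tar -cf full.tar /data
    gpg -c full.tar                            # symmetric encryption -> full.tar.gpg
    scp full.tar.gpg user@backuphost:/backups/
    # later: an rsync-algorithm delta of the current state against the old tarball
    tar -cf current.tar /data
    rdiff signature full.tar full.sig
    rdiff delta full.sig current.tar diff-20151014
    gpg -c diff-20151014
    scp diff-20151014.gpg user@backuphost:/backups/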

I could mount a remote filesystem (such as WebDAV, or whatever) and write 
to it directly, and apart from some failure modes (what if I have 
a network error?) it would do exactly the same thing in a better or 
more pleasant way. Except that mounting remote filesystems by default also 
gives away the location etc.
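
For example, something like this (davfs2, with a made-up URL) would let the backup job write straight to the remote side, a network error mid-write being the failure mode I mean:

    sudo mount -t davfs https://backuphost/dav/ /mnt/backup
    cp diff-20151014.gpg /mnt/backup/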

What I might do is create a network share on a host I reasonably trust 
(a VPS) and store backups there while it automatically rsyncs them 
to a different host. All it requires then is for the writes to that 
network share to succeed reasonably. I could have a script (some cron 
thing perhaps) that just checks whether the sync is running and, if not, 
fires up a regular rsync job.
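
Something along these lines, I suppose (the paths, host and the process name to check are all just placeholders):

    #!/bin/sh
    # if the automatic sync to the VPS share is not running, fall back to a plain rsync job
    if ! pgrep -f "rsync.*vps-share" > /dev/null; then
        rsync -az /backups/ backup@vps.example.net:/srv/share/backups/
    fi
    # run hourly from cron, e.g.:
    # 0 * * * * /usr/local/bin/backup-watchdog.sh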

I guess I'll go about fixing that....

Regards.


