Hlink node data for 2282618 already has path=...

Karl O. Pinc kop at meme.com
Sun Jun 24 14:57:26 MDT 2012


On 06/24/2012 10:40:11 AM, Brian J. Murrell wrote:
> So, like a lot of people, I am using --link-dest to do backups.  My
> backup target is ext4, which has a hard link limit of about 64K.  I
> do end up trying to create too many links at some point, though, and
> get the following sequence of events:
<snip>
> 
> It seems awfully heavy-handed for this "Too many links" condition to
> be treated as an outright error.  Surely it's sufficient to just fail
> to hard-link and move on, yes?

Typically the pathnames that run out of hard links are those that
are already hard linked in the source file system, at least if
you're also using -H to ensure that your backup is a real copy of
the source system.

The problem is that "just fail(ing) to hard link" can cause
subsequent backups to fail, depending on the state in which the
failure leaves the pathnames in question.  I say pathnames, plural,
because I'm assuming the source system has hard linking going on at
the point where the problem occurs, not to mention the unknown state
of the pathnames in the initial rsync.  Although "moving on" does
get you something, you're not going to get a good backup without
some manual work.

Speaking as a sysadmin, I want some predictability in my backup
system, and even a small problem with a backup will put me to work
until I resolve the issue.

Agreed, a "mostly good" backup is better than no backup
at all. Even better though would be to have an actual
good backup all the time.  This could be done by
making --link-dest an "atomic(ish)" operation;
either all of the source pathhames sharing an inode
would be --link-dest-ed or none of them would.
The kludgey way to do this is to stat(2) before
each --link-dest to see if there's enough free
links before proceeding.  I say kludgey because
of the possibility of race conditions.
(I also have not looked into how you'd discover
the hard link limit for the filesystem in question.)  An
alternate approach is to, on "too many links"
error, "undo" all the --link-dests
on the "problem paths".  (I've looked at
the code and, at first glance, this seems
like it might be sane.  I posted an email,
of unknown clarity, to this list on this subject
and never heard back.)  The goal is to have
--link-dest work when possible and when not
to copy the source and have the same set of
hard links on the destination side as the source
side.
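
As a rough illustration (this is not rsync code; the names are
mine), here is what that stat(2)-before-link pre-check could look
like in C.  On POSIX systems pathconf(_PC_LINK_MAX) is one way to
discover the filesystem's link limit, which partly answers my
parenthetical question above.  The comments mark the race window I'm
worried about: the check and the link(2) are separate system calls.

#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Return 1 if `path` appears to have room for another hard link,
 * 0 if its link count has already reached the filesystem's limit,
 * and -1 on error. */
static int link_would_fit(const char *path)
{
    struct stat st;
    long max_links;

    if (stat(path, &st) != 0)
        return -1;

    errno = 0;
    max_links = pathconf(path, _PC_LINK_MAX);
    if (max_links < 0)
        return errno ? -1 : 1;  /* no limit reported: assume it fits */

    return (long)st.st_nlink < max_links;
}

int main(int argc, char **argv)
{
    int fit;

    if (argc != 3) {
        fprintf(stderr, "usage: %s existing-file new-link-name\n",
                argv[0]);
        return 2;
    }

    fit = link_would_fit(argv[1]);
    if (fit < 0) {
        perror("stat/pathconf");
        return 1;
    }
    if (fit == 0) {
        /* This is where a backup tool would fall back to copying
         * the file instead of hard linking it. */
        fprintf(stderr, "link count at limit; would copy instead\n");
        return 1;
    }

    /* Race window: other links can be created between the check
     * above and the link(2) below, so EMLINK still has to be
     * handled. */
    if (link(argv[1], argv[2]) != 0) {
        if (errno == EMLINK)
            fprintf(stderr, "lost the race: too many links anyway\n");
        else
            perror("link");
        return 1;
    }
    return 0;
}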

Note that this is all hot air on my part.  I've
no time now to try to code anything -- even assuming
that a patch on this matter would be accepted.

Apologies if the above is not clearly written.
I have limited opportunity to write and
thought a quick brain dump would be better than
nothing.

Regards,

Karl <kop at meme.com>
Free Software:  "You don't pay back, you pay forward."
                 -- Robert A. Heinlein

