...failed: too many links (31)
c182driver1 at gideon.org
Sun Feb 19 08:57:12 MST 2012
On Wed, 30 Jun 2010 01:43:02 +0000, Andrew Gideon wrote:
> I've thought of two solutions: (1) deliberately breaking linking (and
> therefore wasting disk space) or (2) using a different file system.
> This is running on CentOS 5, so xfs was there to be tried. I've had
> positive experiences with xfs in the past, and from what I have read
> this limit does not exist in that file system. I've tried it out, and -
> so far - the problem has been avoided. There are inodes with up to
> 32868 links at the moment on the xfs copy of this volume.
> I'm curious, though, what thoughts others might have.
> I did wonder, for example, whether rsync should, when faced with this
> error, fall back on creating a copy. But should rsync include behavior
> that exists only to work around a file system limit? Perhaps only as a
> command line option (ie. definitely not the default behavior)?
I know it's been a while, but I thought I'd follow up on this.
First: The problem is occurring with yum databases. A change was
introduced a while back that saves space under /var/lib/yum by hard-
linking at least some files (e.g. the changed_by files). This isn't the
only situation where our backups fail with "too many links", but it is
the most reliable failure.
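For what it's worth, the per-filesystem link limit can be probed at
runtime rather than guessed; a minimal Python sketch (the helper name is
mine, and PC_LINK_MAX support varies by platform):

```python
import os

def max_links(path):
    """Return the maximum hard-link count the filesystem holding `path`
    allows per inode (LINK_MAX), or None if it cannot be determined."""
    try:
        return os.pathconf(path, "PC_LINK_MAX")
    except (OSError, ValueError):
        return None

# ext3 typically reports a limit around 32000 here; xfs reports a much
# larger value, which matches the behavior described above.
print(max_links("."))
```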
The yum change included a fallback: if the linking fails, a new file is
created instead. I mention this because I'm wondering (see above) whether
this is an appropriate solution for rsync as well. Apparently it is for
yum.
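A link-with-copy-fallback like yum's is only a few lines; a minimal
sketch in Python (the function name and the EMLINK check are my own
illustration, not code from rsync or yum):

```python
import errno
import os
import shutil

def link_or_copy(src, dst):
    """Try to hard-link src to dst; if the filesystem's per-inode link
    limit is hit (EMLINK), fall back to making an independent copy."""
    try:
        os.link(src, dst)
        return "linked"
    except OSError as e:
        if e.errno != errno.EMLINK:
            raise
        # Wastes disk space, but the backup succeeds.
        shutil.copy2(src, dst)
        return "copied"
```

A real rsync option would presumably also have to cope with the copy no
longer being updated when the other links change, which may be why it
would have to stay off by default.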
Second: xfs does seem to eliminate this issue completely. I don't quite
trust xfs as much as I do ext3, so we're only using it where the "too
many links" problem occurred. But as our systems are upgraded to the new
yum, that will be more and more of our backup volumes.
So I'm still wondering if an rsync-centric solution, perhaps similar to
yum's fallback, is appropriate.