error allocating core memory buffers (code 22) at util2.c(106) [sender=3.1.2]

John A Hawkinson jhawk at
Mon Apr 10 23:37:44 UTC 2017


    I'm in the middle of recovering from a tactical error copying
around a Mac OS X 10.10.5 Time Machine backup (it turns out Apple's
instructions aren't great...). I had rsync running for the past 6
hours repairing permissions/ACLs on 1.5 TB of data (not copying the
data itself), and then it died in the middle with:

.L....og.... 2015-03-11-094807/platinum-bar2/usr/local/mysql ->
ERROR: out of memory in expand_item_list [sender]
rsync error: error allocating core memory buffers (code 22) at util2.c(106) [sender=3.1.2]
rsync: [sender] write error: Broken pipe (32)

It was invoked as 

rsync -iaHJAX platinum-barzoom/Backups.backupdb/pb3/ platinum-barratry/x/Backups.backupdb/pb3/

I suspect the situation will be different the next time around, and I'm
also not really inclined to try to wait another 7 hours for it to fail.
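Next time I'll probably sanity-check with a dry run first. Something like the following (--dry-run and --stats are stock rsync options) walks the file list and prints totals without modifying the destination, so it might reproduce a sender-side memory problem without committing to another full run:

```shell
# Enumerate the transfer and print --stats totals without touching the
# destination. Paths are the same volumes as the failing invocation above.
rsync -iaHJAX --dry-run --stats \
    platinum-barzoom/Backups.backupdb/pb3/ \
    platinum-barratry/x/Backups.backupdb/pb3/
```

No idea whether the dry run would be much faster here, since the failed run was only fixing metadata anyway, but at least it's read-only on the destination.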

Process limits were:

bash-3.2# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited

Based on "df -i", there are about 200 million inodes in the scope of the copy:
bash-3.2# df -i 
Filesystem                        512-blocks       Used  Available Capacity   iused     ifree %iused  Mounted on
/dev/disk2s2                      3906353072 3879969656   26383416   100% 484996205   3297927   99%   /Volumes/platinum-barzoom
/dev/disk1s2                      9766869344 3327435312 6439434032    35% 207964705 402464627   34%   /Volumes/platinum-barratry

Because this is a Time Machine backup, with 66 snapshots of a
1 TB disk consuming about 1.5 TB, there were a *lot* of hard links. Many
were hard links of directories rather than of individual files, so it's
a little challenging to estimate the ratio of files to hard links.
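For the record, a rough lower bound I could compute with plain find (nothing rsync-specific) is the number of regular files carrying more than one link:

```shell
# Count regular files with more than one hard link. -links +1 is
# standard find syntax; Apple's directory hard links (the HFS+ oddity
# Time Machine relies on) are not counted here, which is part of why
# the ratio is hard to pin down.
find /Volumes/platinum-barzoom/Backups.backupdb -type f -links +1 | wc -l
```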

Are there any useful tips here?
Is it worth filing a bug report on this thin record?
I guess I can turn on core dumps and raise the stack size limit
(or remove it entirely)...

Although it doesn't seem to have segfaulted, so I'm not sure having
core dumps enabled would have helped?
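In case I do retry, the per-shell knobs would be something like this (ulimit changes affect only the current shell and its children; I believe the stack hard limit on OS X is finite, hence "hard" rather than "unlimited"):

```shell
# Run these in the same shell that will invoke rsync; the settings are
# inherited by child processes only.
ulimit -c unlimited   # enable core dumps (was 0 blocks)
ulimit -s hard        # raise stack soft limit to the hard limit (was 8192 KB)
ulimit -n 4096        # more open file descriptors (was 256)
# ...then re-run the rsync command above in this shell.
```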

This was rsync 3.1.2 as installed via Homebrew, which appears to have
applied 3 local patches, including:

    apply "patches/fileflags.diff",

p.s.: If I had to start over, I would have spent less time by just
deleting the data and recopying it, rather than trying to fix up the
metadata and deal with magic Apple stuff like the inability to modify
symlinks inside a top-level Backups.backupdb directory of a Time Machine
HFS volume (though you can move the top-level directory into another
directory, modify the symlinks inside, and then move it back). This has
been an "interesting" experience.


--jhawk at
  John Hawkinson
