[ccache] Long pauses due to automatic cache clean-up
rdiezmail-2006 at yahoo.de
Sat Apr 17 16:28:11 UTC 2021
First of all, I have been using ccache for years. Many thanks for this great tool.
I don't use ccache for overnight builds, because it does not really matter if those builds take longer. I am using ccache when (re-)building interactively during the day.
Normally, ccache is great: many rebuilds are instant. But sometimes, the build pauses for a long time, say 20 seconds. There is no CPU usage during these pauses.
I have had slowdowns because of unrelated local network problems, but in the meantime, I have come to the conclusion that these particular pauses are
due to ccache automatically pruning its cache.
It would be nice if someone could confirm that they have seen such pauses on their computers.
My cache is not particularly big, but I am using a conventional (rotational) desktop hard disk. There are no syslog messages about any disk issues.
The ccache configuration is:
max_size = 20G
max_files = 100000
I am using Ubuntu 20.04.2 on a standard ext4 filesystem; the only mount options I have added in /etc/fstab are "noatime,commit=30".
The ccache version on Ubuntu is probably rather old: 3.7.7, whereas the newest seems to be 4.x, such as 4.2.1.
I was surprised that completely clearing the cache with option '--clear' can take a very long time, 12 minutes the last time around, because ccache
probably does not need to calculate anything beforehand; it just has to delete all the files. But I haven't benchmarked the filesystem itself, so perhaps it's just
Linux being slow at deleting so many small files.
Apparently, I am not the first one to talk about low performance when cleaning the cache. For example, I found this statement in the mailing list:
"the current cleanups that can take over a half hour to run and hammer a hard drive mercilessly"
It is not clear from the documentation how ccache manages automatic cache cleaning/pruning during a parallel build. What happens if you are building
in parallel, with "make -j32", and all 32 concurrent instances decide to clean the cache at the same time? Will all of them attempt to delete the same
files at the same time? I have seen that there are 16 buckets, but there are bound to be some collisions with a sufficiently large parallel factor.
I wonder if there is a way to increase the number of buckets, from 16 to, say, 1024. There is no reason to have so few of them.
I have also been thinking of triggering a manual cache clean-up on start-up, after logging on, while I try to wake up and find the coffee machine. I
could set the clean targets a little lower than usual, say 10 % less than the usual max_size and max_files configuration settings, so that there is
enough space left for the day. This way, no more automatic clean-up will probably happen during the day. I could always set up a cron job that runs
more frequently. The idea is to prevent a cache pruning pause during interactive development.
Is that a good strategy? Or are there better ways?
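If that strategy makes sense, the morning clean-up could be a crontab entry along these lines. This is only a sketch: it assumes that the per-option environment variables CCACHE_MAXSIZE and CCACHE_MAXFILES are honoured by "ccache --cleanup" (worth verifying on 3.7.7), and the 18G/90000 values are simply my 20G/100000 limits minus roughly 10 %:

```
# Hypothetical crontab entry: pre-emptive cache clean-up on weekday
# mornings, with limits lowered ~10% via environment overrides so the
# ccache configuration file on disk is never modified.
0 7 * * 1-5  CCACHE_MAXSIZE=18G CCACHE_MAXFILES=90000 ccache --cleanup
```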
Option limit_multiple is not taken into account for manual cleanup, and lowering it would probably not trigger a cleanup anyway, so I would have to
parse the config file to find out the current values of max_size and max_files, reduce them by some factor like 10 %, and then run "ccache --cleanup"
with those values, if that is possible.
I am worried that passing --max-size modifies the configuration file, so I may have to revert the configuration file afterwards. And hope that no
normal compilations start during this time, depending on how long the coffee machine needs to warm up. Or maybe I could use a second configuration
file for the same cache with CCACHE_CONFIGPATH, or some other such hack.
Is there some script able to parse the ccache config file that I could use as an example?
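I have not found one, but assuming the config file keeps the plain "key = value" format shown above, a small shell sketch of the whole idea could look like this. The 10 % reduction and the environment-variable override at the end are my own assumptions, not behaviour I have verified; the sample config here stands in for ~/.ccache/ccache.conf:

```shell
#!/bin/sh
# Hypothetical sketch: parse max_size and max_files out of a ccache
# config file, lower both by 10 %, and print the cleanup command that
# a wrapper could run. Assumes a size given in whole GiB ("20G").

conf=$(mktemp)
cat > "$conf" <<'EOF'
max_size = 20G
max_files = 100000
EOF

# Print the value of a "key = value" line from the given file.
get_opt() {
    awk -F= -v key="$2" \
        '$1 ~ "^[ \t]*" key "[ \t]*$" { gsub(/[ \t]/, "", $2); print $2 }' \
        "$1"
}

max_size=$(get_opt "$conf" max_size)     # e.g. "20G"
max_files=$(get_opt "$conf" max_files)   # e.g. "100000"

# Reduce both limits by 10 % (integer arithmetic on the numeric part).
new_size="$(( ${max_size%G} * 9 / 10 ))G"
new_files=$(( max_files * 9 / 10 ))

# Run the cleanup with the reduced limits via the per-option environment
# variables, leaving the config file on disk untouched.
echo "CCACHE_MAXSIZE=$new_size CCACHE_MAXFILES=$new_files ccache --cleanup"

rm -f "$conf"
```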
Thanks in advance,