[ccache] Using a shared ccache in cmake environment (linux)

Steffen Dettmer steffen.dettmer at gmail.com
Mon Mar 16 19:26:44 UTC 2020


Hi,

I have set up ccache with a shared cache for some of our projects and
I would like to learn how to do it correctly.

Projects here have 1-2 MLOC in 3-10k files (mostly C++) and are built
via cmake files. We have around 20 devs active typically plus Jenkins
and mostly they compile the same inputs (HEADs of active branches with
only their few changes) in various configurations (targets,
debug/release...); in total the build output can be 5-30 GB. I made
some tests and found ccache to be very efficient (reducing total build
duration by a factor of ten or so, depending on many factors of
course). However, I still have some issues.

Setup:
We use cmake wrapper scripts that export:
- CCACHE_BASEDIR
- CCACHE_SLOPPINESS=file_macro,time_macros
- CCACHE_CPP2=set

and have a ccache.conf like:
  max_size = 25.0G
  # find $CCACHE_DIR -type d | xargs chmod g+s
  cache_dir=/local/users/zcone-pisint/tmp/ccache
  hard_link=false
  umask=002

The straight case ("ccache -Cz && make clean all && make clean all &&
ccache -s") works as expected: the first build is slow, the second
very fast, with 50% hits.

Is this so far reasonable?
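
(For comparison: instead of wrapper scripts, cmake can also be pointed
at ccache via the compiler launcher variables; a sketch, assuming
CMake >= 3.13 for -S/-B and >= 3.4 for the launcher variables, with an
example build directory name:)

```shell
# Hypothetical invocation; source/build paths are examples.
cmake -S . -B build \
  -DCMAKE_C_COMPILER_LAUNCHER=ccache \
  -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
cmake --build build -- -j 25
```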

As a workaround for a special, unrelated issue we currently redefine
__FILE__ (and are trying to remove that redefinition). I understand
that ccache still works thanks to CCACHE_BASEDIR even with __FILE__
used inside files. Is that correct?

I understood that CCACHE_SLOPPINESS=file_macro means that cached
results may be used even if __FILE__ is different, i.e. possibly using
a __FILE__ from another user (fine for our use cases), is this correct?
NB: unfortunately cmake uses absolute paths, so __FILE__ contains
user-specific information (currently we redefine it not to do so, but
we might drop this, because it harms other things).
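
(If we do drop the redefinition, one alternative I am aware of is the
compiler-level prefix mapping in GCC >= 8 and recent Clang, which
strips the user-specific part of __FILE__ without touching the source;
a sketch, assuming the per-user checkout root is $PWD:)

```shell
# Map the absolute checkout prefix to "." inside __FILE__ expansions,
# so object files and diagnostics no longer embed user-specific paths.
# This flag would be appended to CMAKE_CXX_FLAGS or similar.
CXXFLAGS="-fmacro-prefix-map=$PWD=."
```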

How to find a reasonable max_size?
For now I just arbitrarily picked 25 GB (approximately the build tree
size) and I never saw it "full" according to ccache -s.

On build servers we usually run "make -j 25" (24 cores). Often,
several such jobs are run by different users (and Jenkins; sometimes
400 compiler processes or even more). I assume ccache safely handles
parallel invocation, is this correct?

Some of our teammates only have slow (old) laptops. They benefit from
using a network-shared ccache. Technically, they "mount -t cifs" the
cache_dir (NFS is firewalled, unfortunately). We have different
Ubuntu/Mint/Debian/Devuan machines, but exactly the same compilers
(our own toolchains).
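
(For completeness, the mount is roughly like this sketch; the server,
share and user names are invented placeholders:)

```shell
# Hypothetical CIFS mount of the shared cache; //cacheserver/ccache and
# "builder" are placeholders. dir_mode=2775 keeps the setgid bit so new
# subdirectories stay group-owned, matching the chmod g+s / umask=002
# setup mentioned above.
sudo mount -t cifs //cacheserver/ccache /local/users/zcone-pisint/tmp/ccache \
  -o user=builder,uid="$(id -u)",gid="$(id -g)",file_mode=0664,dir_mode=2775
```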

Is sharing via CIFS possible at all, or could it have bad effects?

One issue that occurs from time to time is that the ccache -s stats
become zero (all values except max cache size are 0). At first I
didn't notice because the stats are shared, so I assumed someone had
zeroed them, but with alternate directories we found that it sometimes
happens without anyone running "ccache -z". "du -hs $CCACHE_DIR" still
shows gigabytes used. We haven't found a cause yet, but several
candidates exist.

Can ccache be used on CIFS?
Are cache and/or stats version dependent?
I tried to deploy the same ccache everywhere (3.7.7, now
3.7.7+8_ge65d6c92), but maybe there is some host somewhere with an
older version, hard to say.

A few times we noticed that ccache -s reports a few GB size but
"du -hs" reports 40 or 50 GB, although "max_size = 25.0G". Is this
expected? It could be a follow-up problem of the one before.

I'm also still facing cmake issues (using "physical" and "logical"
paths in several mixed combinations). Complex topic.

Any information / hints / pointers are appreciated!


Best regards,
Steffen
