[ccache] FATAL: Could not create ... (permission denied?)

Andy Lutomirski luto at amacapital.net
Wed May 30 13:19:43 MDT 2012


On Wed, May 30, 2012 at 11:52 AM, Joel Rosdahl <joel at rosdahl.net> wrote:
> On 30 May 2012 20:09, Andy Lutomirski <luto at amacapital.net> wrote:
>> Looks like a straight-up race: one ccache invocation is cleaning up
>> files that are still in use by another invocation.  I bet the
>> precompiled headers triggered it because they take a long time to
>> generate.
>
> Yes, spot on.
>
> When doing cleanup in one of the 16 subdirectories, ccache deletes
> temporary files that are older than one hour in a pre-step, on the
> assumption that no compilation takes more than one hour. It doesn't
> look like that's what is happening to you, though.
>
> The next step is that the oldest 20% of the files in the subdirectory
> are removed. It would be strange if at least 80% of the files in the
> subdirectory had been put there (or used, which results in an updated
> mtime) after the compilation of the precompiled header began. Unless
> your max cache size limit is way too low, that is... Could this be the
> case? (Still, ccache should handle this more gracefully, of course.)
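
(For concreteness, those two steps might look roughly like the
following Python sketch. This is illustrative only, not ccache's actual
code; the ".tmp" suffix and the function name are made up.)

    import os, time

    ONE_HOUR = 3600

    def clean_subdir(path, keep_fraction=0.8):
        now = time.time()
        entries = []
        for name in os.listdir(path):
            full = os.path.join(path, name)
            st = os.stat(full)
            # Pre-step: delete temporary files older than one hour,
            # assuming no compilation runs longer than that.
            if name.endswith(".tmp") and now - st.st_mtime > ONE_HOUR:
                os.unlink(full)
                continue
            entries.append((st.st_mtime, full))
        # Main step: remove the oldest 20% of the remaining files,
        # ordered by mtime (a cache hit refreshes a file's mtime).
        entries.sort()  # oldest mtime first
        n_remove = len(entries) - int(len(entries) * keep_fraction)
        for _, full in entries[:n_remove]:
            os.unlink(full)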

Almost certainly true.  My code generates impressively large object
files (this is C++, and I suspect that object file size can grow worse
than exponentially in source file size), and in pch mode it does so
quickly.

My max cache size is 1 GB.  A non-pch build generates 844 MB of cache.

The weird thing is that the pch build fails too early for this to make
much sense: cache usage is under 105 MB when ccache fails.
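
(One hedged guess: your description above suggests cleanup is triggered
per subdirectory, so each of the 16 subdirectories would get roughly a
1/16 share of the limit.  If that assumption holds, the trigger point
is well below 1 GB:)

    # Assumed per-subdirectory share of a 1 GB limit (16 subdirectories):
    limit_mb = 1024
    per_subdir_mb = limit_mb / 16
    print(per_subdir_mb)  # 64.0 -- a few large pch results hashing into
                          # one subdirectory could exceed this while the
                          # whole cache is still around 105 MB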

My pch build took 11 seconds or so.  I do have multiple slow gcc
invocations possibly running in parallel, though: is it possible that
they interact to purge far more files than needed?  With the max size
bumped to 5 GB, the build succeeds, and cache usage passes 100 MB in
less time than it took the build to fail with the 1 GB setting.
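
(If several parallel compilers each push the same subdirectory over its
share in quick succession, the resulting cleanup passes could compound:
each pass drops the oldest 20% of whatever it sees.  A toy simulation,
purely hypothetical and not ccache's code:)

    def cleanup_pass(mtimes):
        """Remove the oldest 20% of files (sorted by mtime)."""
        mtimes = sorted(mtimes)            # oldest first
        return mtimes[len(mtimes) // 5:]   # drop the oldest fifth

    files = list(range(1000))              # fake mtimes for 1000 cached files
    for k in range(1, 5):
        files = cleanup_pass(files)
        print(f"after {k} overlapping cleanup(s): "
              f"{1000 - len(files)} of 1000 files purged")
    # after 4 back-to-back passes, ~59% of the cache is gone, not 20%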

--Andy

