strip setuid/setgid bits on backup (was Re: small security-related rsync extension)
dwd at bell-labs.com
Mon Jul 22 09:18:01 EST 2002
I'm catching up on a couple weeks of rsync messages, and I haven't seen
anybody explain in this thread the real problem with .nfs files and
executables. With a NFS cluster of machines (at least pre-NFSv4), a
software distribution system does have to rename executables that might be
running (as opposed to deleting them) because a *different* client might be
running an executable, and otherwise whenever that client needs to page
information in from the executable file the program will crash because the
file is gone. This is because NFS cannot preserve normal Unix "remove on
last close" semantics across all clients. The .nfs* acrobatics only solve
the problem on a single client; if a file is deleted on the server or by
another client, the server does not know that another client is still using
the file and blows away the inode. I participated at the beginning of the
NFSv4 discussions in the IETF specifically to raise this complaint. The
protocol designers didn't seem to think it was a very big problem, but I
believe other issues pushed them in a direction that should now let NFSv4
client implementers solve the problem without resorting to .nfs* files:
clients can keep the server informed about which files they have open, so
the server preserves the inodes until all clients are through with the
file.
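The "remove on last close" behavior that NFS cannot guarantee across
clients is easy to see on any local Unix filesystem; a minimal shell
sketch:

```shell
# Local demonstration of Unix "remove on last close" semantics: a file
# unlinked while a descriptor is open stays readable until the last
# close. Pre-NFSv4 NFS cannot preserve this across clients, which is
# what forces the .nfs* rename acrobatics.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"   # hold the file open on descriptor 3
rm "$tmp"        # unlink; the inode survives while fd 3 is open
cat <&3          # the data is still readable
exec 3<&-        # last close; now the inode is freed
```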
In my software distribution system (http://www.bell-labs.com/nsbd/) I
solve this problem outside of rsync, and rather than saving all files
I enable the administrator to specify which programs are likely to be kept
running overnight (since I do almost all of my updates overnight). NSBD
uses rsync with the --compare-dest option to send all updates for a package
to a temporary directory, and then before moving all the files to their
final location it first renames just the selected executables to backup
directories. That isn't perfect, though: if updates occur two nights in a
row and somebody runs a program longer than that, the second backup blows
away the first. I've wanted to replace that scheme with one that keeps as
many backups as needed over a selected time period (a week, say, depending
on the program), but I haven't gotten around to it.
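Roughly (with made-up paths and file names; this isn't NSBD's actual
code), the stage-then-rename flow looks like this:

```shell
# Rough local illustration of the scheme described above: stage changed
# files with --compare-dest, rename a possibly-running executable aside,
# then move the new files into place. All paths here are illustrative.
SRC=$(mktemp -d)     # stands in for the distribution master
DEST=$(mktemp -d)    # the live installation
STAGE=$(mktemp -d)   # temporary directory receiving the update
BACKUP=$(mktemp -d)  # backup directory for renamed executables

mkdir -p "$SRC/bin" "$DEST/bin"
echo "old version" > "$DEST/bin/longrunner"
echo "new version" > "$SRC/bin/longrunner"

# 1. Only files that differ from the live tree land in $STAGE.
rsync -a --compare-dest="$DEST/" "$SRC/" "$STAGE/"

# 2. Rename the old executable; a running process keeps its inode.
mv "$DEST/bin/longrunner" "$BACKUP/longrunner"

# 3. Move the staged copy into its final location.
mv "$STAGE/bin/longrunner" "$DEST/bin/longrunner"
```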
So, rsyncing directly to the NFS server as Martin suggests will not solve
the problem. Backups still need to be kept of executables that might be
updated. Personally I think raw rsync has several other problems as a
software distribution mechanism, but the solution that somebody else
suggested of using a --backup-dir that's mode 700 (on the same NFS
filesystem) should be sufficient for this setuid security vulnerability Dan
is worried about. Won't that work for you, Dan?
- Dave Dykstra
On Sun, Jul 21, 2002 at 08:35:33PM +1000, Martin Pool wrote:
> On 19 Jul 2002, tim.conway at philips.com wrote:
> > On Fri, 19 Jul 2002, Dan Stromberg wrote:
> > > Many apologies. If we update on the nfs server, as we've intended all
> > > along, we should have no .nfs* files.
> .nfs files are created on the server, but they are created *by* a command
> from the client. The client sends a RENAME op rather than UNLINK if the
> dentry is still in use.
> > Well, here's one thing that could make them, even if they're being created
> > only directly, not over NFS.
> > I'm watching the directory you're syncing into.
> > I open the file while it's still there.
> > You delete it, and I've got my .nfs* file.
> (Why not just exploit the hole directly?)
> Yes, but as I said the same problem exists with any tool run on the client:
> cp, rpm, ...
> It really is an interesting bug, but it's just not an rsync bug. I might
> send mail (crediting Dan) to the Linux NFS client maintainer and see what
> they say.