[clug] Setting permission bit with a mount option

Brett Worth brett.worth at gmail.com
Mon Mar 18 07:03:02 UTC 2019

On 18/3/19 9:01 am, George at Clug via linux wrote:
> You said, "cluster management software", I am curious about your
> cluster.  It is an area I have not as yet needed to work with.  Would you
> like to give a talk on clustering?  I would definitely be interested.

I could give a small talk but unfortunately I have a standing engagement on the nights
CLUG is held.

> What is it that you are attempting to mount?  (i.e. is it a share, a
> partition?  You mentioned a "cluster", so I am guessing you're mounting
> a share of some kind?)

It's actually a node-local filesystem on an NVMe M.2 SSD.  The cluster software looks to
see if there's a mountable filesystem there and if not it will partition/mkfs/mount it
automatically based on the XML filesystem definition.  The XML does allow you to define
the "options" column in the fstab, which is what led to my original post.

As I said I can easily do this with a chmod command later in the boot process so this was
really just an academic question to see if it could be done in the /etc/fstab file.
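For the record, a few filesystems do accept a permission-setting mount option -- tmpfs
and the FAT family take mode=/umask= -- but the ext* on-disk filesystems don't, which is
why nothing in the options column helps here.  A sketch, with hypothetical device and
mountpoint names:

```
# Works: tmpfs takes its root mode from the mount options.
tmpfs           /scratch  tmpfs  mode=1777,size=1g  0 0

# Works: vfat has no on-disk permissions, so the driver synthesises them.
/dev/sdb1       /mnt/usb  vfat   umask=022          0 0

# Does not work: ext4 stores the mode in the root inode on disk,
# so there is no "mode=" option to override it at mount time.
/dev/nvme0n1p1  /data     ext4   defaults           0 0
```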

> Can you provide us with an example of your fstab line that is doing
> the mount which is having the issue?

That would not be useful since nothing I've tried will work.

> If my research and understanding is correct, all permissions are
> stored in the partition inside the directory file, and mounting has
> nothing to do with permissions. 

The permissions are kept in the inode.  There's a field in the inode that says what the
permissions are.  You can get more fine-grained control with extended attributes.
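A quick way to see that the mode lives with the inode rather than the name is GNU stat
(a sketch using a throwaway file in /tmp; the path is just an example):

```shell
# Create a throwaway file and inspect the permission bits stored in its inode.
f=$(mktemp /tmp/perm_demo.XXXXXX)

chmod 640 "$f"              # set the mode field in the inode
stat -c '%a %A' "$f"        # prints: 640 -rw-r-----

chmod u+x "$f"              # flip one bit in the same inode field
stat -c '%a' "$f"           # prints: 740

rm -f "$f"
```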

> Not sure how this affects/works with
> the root partition, I guess the root directory inode has a special way
> to be identified by the linux file system without requiring a file
> name, and after that all other child directories are identified by the
> directory's filename?

The only place the name of the item exists is in the directory.  In simple terms the
directory only contains the inode number and name.  So when you create a file or a
directory you are allocated a new inode which has a unique number (within the one
filesystem) and then any directory entry that wants to reference it just uses that number.

Here's an example:

brettw at carbon:/tmp/test$ echo Hello World > file_1
brettw at carbon:/tmp/test$ ls -li
total 4
7677967 -rw-rw-r-- 1 brettw brettw 12 Mar 18 17:35 file_1
brettw at carbon:/tmp/test$ cp file_1 file_2
brettw at carbon:/tmp/test$ ls -li
total 8
7677967 -rw-rw-r-- 1 brettw brettw 12 Mar 18 17:35 file_1
7677975 -rw-rw-r-- 1 brettw brettw 12 Mar 18 17:36 file_2
brettw at carbon:/tmp/test$ ln file_1 file_3
brettw at carbon:/tmp/test$ ls -li
total 12
7677967 -rw-rw-r-- 2 brettw brettw 12 Mar 18 17:35 file_1
7677975 -rw-rw-r-- 1 brettw brettw 12 Mar 18 17:36 file_2
7677967 -rw-rw-r-- 2 brettw brettw 12 Mar 18 17:35 file_3

Note that the inode number is the same for file_1 and file_3.  file_2 is a copy so it
gets its own inode.  Also note that the 3rd column is 2 for the hard-linked files.  This
is the link count for that inode.  When you delete a file the directory entry gets deleted
and the link count is decremented.  If the link count hits zero the inode is freed along
with any disk blocks it has assigned to it.  (There's also an "in memory" link count but I
won't go into that.)
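The link-count behaviour is easy to reproduce yourself (a sketch using throwaway files
in a temp directory; the names are just examples):

```shell
# Hard-link a file and watch the inode's link count go up and down.
d=$(mktemp -d)
echo data > "$d/a"

ln "$d/a" "$d/b"            # second directory entry, same inode
stat -c '%h' "$d/a"         # prints: 2

rm "$d/b"                   # entry removed, link count decremented
stat -c '%h' "$d/a"         # prints: 1

rm -r "$d"
```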

Now I append to file_1 and show that this is actually the same thing as file_3:

brettw at carbon:/tmp/test$ echo Another Line >> file_1
brettw at carbon:/tmp/test$ cat file_1
Hello World
Another Line
brettw at carbon:/tmp/test$ cat file_2
Hello World
brettw at carbon:/tmp/test$ cat file_3
Hello World
Another Line
brettw at carbon:/tmp/test$

As for the root directory of the filesystem: it's just the first inode and the top of the
directory tree, so it doesn't need a name of its own.
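You can look at the root inode directly; on ext2/3/4 it is conventionally inode 2
(other filesystems may use a different number):

```shell
# Print the inode number of a filesystem's root directory.
stat -c '%i' /              # on ext4 this is typically 2
```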

> Are you running SELinux?

Not in this instance.

> Do all users/groups exist, on all relevant systems?  (I ask this as
> when dealing with shares, I have had issues when the sharing server
> and the mounting client did not have the same groups and users)

This cluster does have shared filesystems which are NFS mounts.

Usually the users are authenticated using some network method.  e.g. LDAP or IPA.  
Sometimes the user information is applied directly to the nodes via puppet or ansible or
the like.

I had some fun in the early '90s when the permissions of the directory used as a
mountpoint would be honored even though you could not see them!!  So you had to make sure
the permissions on the hidden directory were 777 from the start or all sorts of
unfathomable shenanigans would ensue.  You could still control the permissions above that
but it was double dipping.   (That was MIPS RISCos)

(Who started this trend of very long emails?  It wasn't me. :-) )

  /) _ _ _/_/ / / /  _ _//
 /_)/</= / / (_(_/()/< ///
