[SCM] Samba Shared Repository - branch master updated

Anoop C S anoopcs at samba.org
Fri Feb 24 05:38:01 UTC 2023


The branch, master has been updated
       via  62ea6ae8c9d doc/vfs_ceph: document ceph:filesystem parameter
       via  5e49d4b4311 doc/vfs_ceph: update confusing default hint for ceph:user_id param
       via  98388e2bf0d vfs_ceph: add support to select ceph file system
       via  e936e4d786a vfs_ceph: cache ceph mounts based on share configuration params
       via  df073c7eedc vfs_ceph: split ceph mount logic into a new function
      from  5cb8805811e python: fix mutable default arguments

https://git.samba.org/?p=samba.git;a=shortlog;h=master


- Log -----------------------------------------------------------------
commit 62ea6ae8c9d3341cb65185124225b55a4175e7ae
Author: John Mulligan <jmulligan at redhat.com>
Date:   Wed Feb 15 16:09:48 2023 -0500

    doc/vfs_ceph: document ceph:filesystem parameter
    
    Document how the `ceph:filesystem` parameter allows one to select
    the cephfs file system to use for the share.
    
    Signed-off-by: John Mulligan <jmulligan at redhat.com>
    Reviewed-by: Guenther Deschner <gd at samba.org>
    Reviewed-by: Anoop C S <anoopcs at samba.org>
    
    Autobuild-User(master): Anoop C S <anoopcs at samba.org>
    Autobuild-Date(master): Fri Feb 24 05:37:57 UTC 2023 on atb-devel-224

commit 5e49d4b4311e9a7059bffd52152a6464d7da49e4
Author: John Mulligan <jmulligan at redhat.com>
Date:   Wed Feb 15 16:04:07 2023 -0500

    doc/vfs_ceph: update confusing default hint for ceph:user_id param
    
    Previously, the manpage hinted that the `ceph:user_id` parameter derived
    a default value from the process id. This implies that it refers to
    the PID, but this is not what libcephfs actually does. Rather, this
    param is used to derive the `client.<id>` authorization name ceph uses.
    This mistake probably originates in a comment in the libcephfs header.
    
    When I first started using the vfs_ceph module this confused me, as I
    didn't know what value to use to get access to my cluster. Eventually,
    after reading both docs and code I found that it does not use the pid
    but defaults to a value in the ceph client library (typically "admin").
    
    Therefore, if you are using commands like `ceph fs authorize x
    client.foo` or `ceph auth get client.bar` to authorize a client, you
    would supply smb.conf with `ceph:user_id = foo` or `ceph:user_id = bar`
    respectively. These IDs then need corresponding entries in your
    ceph keyring file.
    
    Signed-off-by: John Mulligan <jmulligan at redhat.com>
    Reviewed-by: Guenther Deschner <gd at samba.org>
    Reviewed-by: Anoop C S <anoopcs at samba.org>
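
As a concrete illustration of the commit message above, a hypothetical
smb.conf share for a cephx entity created with `ceph fs authorize cephfs
client.samba / rw` could look like the following sketch (the share name,
path, and entity name are made up):

```ini
[cephshare]
    path = /
    vfs objects = ceph
    ; matches the cephx entity client.samba, whose key must
    ; be present in the ceph keyring file
    ceph:user_id = samba
    ceph:config_file = /etc/ceph/ceph.conf
```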

commit 98388e2bf0d35405c974fa7d9671a36e055ed9a6
Author: John Mulligan <jmulligan at redhat.com>
Date:   Wed Feb 15 15:21:43 2023 -0500

    vfs_ceph: add support to select ceph file system
    
    Add a config parameter `ceph:filesystem` that will be passed to
    ceph_select_filesystem when provided. This allows shares on a single
    smbd to access multiple different cephfs file systems.
    
    The ceph_select_filesystem call was added as part of ceph 'nautilus'
    (v14), released on 2019-03-19 and EOLed on 2021-06-30.
    Since ceph 'pacific' (v16), multiple file system support is stable
    and ready to use. At the time of this commit, 'pacific' is the oldest
    actively supported version of ceph upstream.
    
    Since samba tests building on Ubuntu 18.04, which has ceph packages
    older than v14, a waf check is added to test for the presence of
    ceph_select_filesystem and disable its use on these very old
    platforms.
    
    Signed-off-by: John Mulligan <jmulligan at redhat.com>
    Reviewed-by: Guenther Deschner <gd at samba.org>
    Reviewed-by: Anoop C S <anoopcs at samba.org>
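
As a sketch of how the new parameter combines with the existing ones, two
shares on a single smbd could point at different cephfs file systems
(all share, path, and file system names below are hypothetical):

```ini
[projects]
    vfs objects = ceph
    path = /projects
    ceph:user_id = samba
    ceph:filesystem = fs1

[scratch]
    vfs objects = ceph
    path = /scratch
    ceph:user_id = samba
    ceph:filesystem = fs2
```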

commit e936e4d786aeb76a8be82d82690c868f67f48d9e
Author: John Mulligan <jmulligan at redhat.com>
Date:   Wed Feb 15 15:12:12 2023 -0500

    vfs_ceph: cache ceph mounts based on share configuration params
    
    Previously, the vfs_ceph module kept one global cached mount, which
    made it impossible to support multiple ceph clusters or file systems.
    Add a mount cache in a similar fashion to the connection cache found
    in the vfs_glusterfs module. The vfs_ceph cache uses a single "cookie"
    value, a unique string built from config params, to identify each
    cache entry. Shares that produce the same cookie share a cephfs mount
    object, and a reference count tracks how many shares are using each
    cache entry.
    
    Signed-off-by: John Mulligan <jmulligan at redhat.com>
    Reviewed-by: Guenther Deschner <gd at samba.org>
    Reviewed-by: Anoop C S <anoopcs at samba.org>

commit df073c7eedc61fe1c95b2e26a18915b2a0c77552
Author: John Mulligan <jmulligan at redhat.com>
Date:   Wed Feb 15 14:22:00 2023 -0500

    vfs_ceph: split ceph mount logic into a new function
    
    This new function is dedicated solely to setting up a libcephfs
    mount. Handling the cmount global and samba connection params remains
    in cephwrap_connect. This change will later be used to replace the
    single global cached connection with improved connection handling.
    
    Signed-off-by: John Mulligan <jmulligan at redhat.com>
    Reviewed-by: Guenther Deschner <gd at samba.org>
    Reviewed-by: Anoop C S <anoopcs at samba.org>

-----------------------------------------------------------------------

Summary of changes:
 docs-xml/manpages/vfs_ceph.8.xml |  19 ++-
 source3/modules/vfs_ceph.c       | 251 +++++++++++++++++++++++++++++----------
 source3/wscript                  |   2 +
 3 files changed, 209 insertions(+), 63 deletions(-)


Changeset truncated at 500 lines:

diff --git a/docs-xml/manpages/vfs_ceph.8.xml b/docs-xml/manpages/vfs_ceph.8.xml
index 5b32122113a..47b5523b9a1 100644
--- a/docs-xml/manpages/vfs_ceph.8.xml
+++ b/docs-xml/manpages/vfs_ceph.8.xml
@@ -118,8 +118,8 @@
 		<listitem>
 		<para>
 			Allows one to explicitly set the client ID used for the
-			CephFS mount handle. Empty by default (CephFS uses the
-			process id).
+			CephFS mount handle. Empty by default (use the libcephfs
+			client default).
 		</para>
 		<para>
 			Example: ceph:user_id = samba
@@ -127,6 +127,21 @@
 		</listitem>
 		</varlistentry>
 
+		<varlistentry>
+		<term>ceph:filesystem = fs_name</term>
+		<listitem>
+		<para>
+			Allows one to explicitly select the CephFS file system
+			to use when the Ceph cluster supports more than one
+			file system. Empty by default (use the default file system
+			of the Ceph cluster).
+		</para>
+		<para>
+			Example: ceph:filesystem = myfs2
+		</para>
+		</listitem>
+		</varlistentry>
+
 	</variablelist>
 
 </refsect1>
diff --git a/source3/modules/vfs_ceph.c b/source3/modules/vfs_ceph.c
index c5ee9d5967c..3978f8f589b 100644
--- a/source3/modules/vfs_ceph.c
+++ b/source3/modules/vfs_ceph.c
@@ -66,128 +66,257 @@
 	return _res \
 
 /*
- * We mount only one file system and then all shares are assumed to be in that.
- * FIXME: If we want to support more than one FS, then we have to deal with
- * this differently.
+ * Track unique connections, as virtual mounts, to cephfs file systems.
+ * Individual mounts will be set on the handle->data attribute, but
+ * the mounts themselves will be shared so as not to spawn extra mounts
+ * to the same cephfs.
  *
- * So, cmount tells us if we have been this way before and whether
- * we need to mount ceph and cmount_cnt tells us how many times we have
- * connected
+ * Individual mounts are IDed by a 'cookie' value that is a string built
+ * from identifying parameters found in smb.conf.
  */
-static struct ceph_mount_info * cmount = NULL;
-static uint32_t cmount_cnt = 0;
 
-/* Check for NULL pointer parameters in cephwrap_* functions */
+static struct cephmount_cached {
+	char *cookie;
+	uint32_t count;
+	struct ceph_mount_info *mount;
+	struct cephmount_cached *next, *prev;
+} *cephmount_cached;
 
-/* We don't want to have NULL function pointers lying around.  Someone
-   is sure to try and execute them.  These stubs are used to prevent
-   this possibility. */
+static int cephmount_cache_add(const char *cookie,
+			       struct ceph_mount_info *mount)
+{
+	struct cephmount_cached *entry = NULL;
+
+	entry = talloc_zero(NULL, struct cephmount_cached);
+	if (entry == NULL) {
+		errno = ENOMEM;
+		return -1;
+	}
+
+	entry->cookie = talloc_strdup(entry, cookie);
+	if (entry->cookie == NULL) {
+		talloc_free(entry);
+		errno = ENOMEM;
+		return -1;
+	}
+
+	entry->mount = mount;
+	entry->count = 1;
+
+	DBG_DEBUG("adding mount cache entry for %s\n", entry->cookie);
+	DLIST_ADD(cephmount_cached, entry);
+	return 0;
+}
 
-static int cephwrap_connect(struct vfs_handle_struct *handle,  const char *service, const char *user)
+static struct ceph_mount_info *cephmount_cache_update(const char *cookie)
 {
-	int ret;
-	char buf[256];
-	int snum = SNUM(handle->conn);
-	const char *conf_file;
-	const char *user_id;
+	struct cephmount_cached *entry = NULL;
+
+	for (entry = cephmount_cached; entry; entry = entry->next) {
+		if (strcmp(entry->cookie, cookie) == 0) {
+			entry->count++;
+			DBG_DEBUG("updated mount cache: count is [%"
+				  PRIu32 "]\n", entry->count);
+			return entry->mount;
+		}
+	}
 
-	if (cmount) {
-		handle->data = cmount; /* We have been here before */
-		cmount_cnt++;
-		return 0;
+	errno = ENOENT;
+	return NULL;
+}
+
+static int cephmount_cache_remove(struct ceph_mount_info *mount)
+{
+	struct cephmount_cached *entry = NULL;
+
+	for (entry = cephmount_cached; entry; entry = entry->next) {
+		if (entry->mount == mount) {
+			if (--entry->count) {
+				DBG_DEBUG("updated mount cache: count is [%"
+					  PRIu32 "]\n", entry->count);
+				return entry->count;
+			}
+
+			DBG_DEBUG("removing mount cache entry for %s\n",
+				  entry->cookie);
+			DLIST_REMOVE(cephmount_cached, entry);
+			talloc_free(entry);
+			return 0;
+		}
 	}
+	errno = ENOENT;
+	return -1;
+}
 
+static char *cephmount_get_cookie(TALLOC_CTX * mem_ctx, const int snum)
+{
+	const char *conf_file =
+	    lp_parm_const_string(snum, "ceph", "config_file", ".");
+	const char *user_id = lp_parm_const_string(snum, "ceph", "user_id", "");
+	const char *fsname =
+	    lp_parm_const_string(snum, "ceph", "filesystem", "");
+	return talloc_asprintf(mem_ctx, "(%s/%s/%s)", conf_file, user_id,
+			       fsname);
+}
+
+static int cephmount_select_fs(struct ceph_mount_info *mnt, const char *fsname)
+{
+	/*
+	 * ceph_select_filesystem was added in ceph 'nautilus' (v14).
+	 * Earlier versions of libcephfs will lack that API function.
+	 * At the time of this writing (Feb 2023) all versions of ceph
+	 * supported by ceph upstream have this function.
+	 */
+#if defined(HAVE_CEPH_SELECT_FILESYSTEM)
+	DBG_DEBUG("[CEPH] calling: ceph_select_filesystem with %s\n", fsname);
+	return ceph_select_filesystem(mnt, fsname);
+#else
+	DBG_ERR("[CEPH] ceph_select_filesystem not available\n");
+	return -ENOTSUP;
+#endif
+}
+
+static struct ceph_mount_info *cephmount_mount_fs(const int snum)
+{
+	int ret;
+	char buf[256];
+	struct ceph_mount_info *mnt = NULL;
 	/* if config_file and/or user_id are NULL, ceph will use defaults */
-	conf_file = lp_parm_const_string(snum, "ceph", "config_file", NULL);
-	user_id = lp_parm_const_string(snum, "ceph", "user_id", NULL);
+	const char *conf_file =
+	    lp_parm_const_string(snum, "ceph", "config_file", NULL);
+	const char *user_id =
+	    lp_parm_const_string(snum, "ceph", "user_id", NULL);
+	const char *fsname =
+	    lp_parm_const_string(snum, "ceph", "filesystem", NULL);
 
 	DBG_DEBUG("[CEPH] calling: ceph_create\n");
-	ret = ceph_create(&cmount, user_id);
+	ret = ceph_create(&mnt, user_id);
 	if (ret) {
-		goto err_out;
+		errno = -ret;
+		return NULL;
 	}
 
 	DBG_DEBUG("[CEPH] calling: ceph_conf_read_file with %s\n",
 		  (conf_file == NULL ? "default path" : conf_file));
-	ret = ceph_conf_read_file(cmount, conf_file);
+	ret = ceph_conf_read_file(mnt, conf_file);
 	if (ret) {
 		goto err_cm_release;
 	}
 
 	DBG_DEBUG("[CEPH] calling: ceph_conf_get\n");
-	ret = ceph_conf_get(cmount, "log file", buf, sizeof(buf));
+	ret = ceph_conf_get(mnt, "log file", buf, sizeof(buf));
 	if (ret < 0) {
 		goto err_cm_release;
 	}
 
 	/* libcephfs disables POSIX ACL support by default, enable it... */
-	ret = ceph_conf_set(cmount, "client_acl_type", "posix_acl");
+	ret = ceph_conf_set(mnt, "client_acl_type", "posix_acl");
 	if (ret < 0) {
 		goto err_cm_release;
 	}
 	/* tell libcephfs to perform local permission checks */
-	ret = ceph_conf_set(cmount, "fuse_default_permissions", "false");
+	ret = ceph_conf_set(mnt, "fuse_default_permissions", "false");
 	if (ret < 0) {
 		goto err_cm_release;
 	}
+	/*
+	 * select a cephfs file system to use:
+	 * In ceph, multiple file system support has been stable since 'pacific'.
+	 * Permit different shares to access different file systems.
+	 */
+	if (fsname != NULL) {
+		ret = cephmount_select_fs(mnt, fsname);
+		if (ret < 0) {
+			goto err_cm_release;
+		}
+	}
 
 	DBG_DEBUG("[CEPH] calling: ceph_mount\n");
-	ret = ceph_mount(cmount, NULL);
-	if (ret < 0) {
-		goto err_cm_release;
+	ret = ceph_mount(mnt, NULL);
+	if (ret >= 0) {
+		goto cm_done;
 	}
 
+      err_cm_release:
+	ceph_release(mnt);
+	mnt = NULL;
+	DBG_DEBUG("[CEPH] Error mounting fs: %s\n", strerror(-ret));
+      cm_done:
 	/*
-	 * encode mount context/state into our vfs/connection holding structure
-	 * cmount is a ceph_mount_t*
+	 * Handle the error correctly. Ceph returns -errno.
 	 */
-	handle->data = cmount;
-	cmount_cnt++;
+	if (ret) {
+		errno = -ret;
+	}
+	return mnt;
+}
 
-	/*
-	 * Unless we have an async implementation of getxattrat turn this off.
-	 */
-	lp_do_parameter(SNUM(handle->conn), "smbd async dosmode", "false");
+/* Check for NULL pointer parameters in cephwrap_* functions */
 
-	return 0;
+/* We don't want to have NULL function pointers lying around.  Someone
+   is sure to try and execute them.  These stubs are used to prevent
+   this possibility. */
 
-err_cm_release:
-	ceph_release(cmount);
-	cmount = NULL;
-err_out:
+static int cephwrap_connect(struct vfs_handle_struct *handle,
+			    const char *service, const char *user)
+{
+	int ret = 0;
+	struct ceph_mount_info *cmount = NULL;
+	int snum = SNUM(handle->conn);
+	char *cookie = cephmount_get_cookie(handle, snum);
+	if (cookie == NULL) {
+		return -1;
+	}
+
+	cmount = cephmount_cache_update(cookie);
+	if (cmount != NULL) {
+		goto connect_ok;
+	}
+
+	cmount = cephmount_mount_fs(snum);
+	if (cmount == NULL) {
+		ret = -1;
+		goto connect_fail;
+	}
+	ret = cephmount_cache_add(cookie, cmount);
+	if (ret) {
+		goto connect_fail;
+	}
+
+      connect_ok:
+	handle->data = cmount;
 	/*
-	 * Handle the error correctly. Ceph returns -errno.
+	 * Unless we have an async implementation of getxattrat turn this off.
 	 */
-	DBG_DEBUG("[CEPH] Error return: %s\n", strerror(-ret));
-	WRAP_RETURN(ret);
+	lp_do_parameter(SNUM(handle->conn), "smbd async dosmode", "false");
+      connect_fail:
+	talloc_free(cookie);
+	return ret;
 }
 
 static void cephwrap_disconnect(struct vfs_handle_struct *handle)
 {
-	int ret;
-
-	if (!cmount) {
-		DBG_ERR("[CEPH] Error, ceph not mounted\n");
+	int ret = cephmount_cache_remove(handle->data);
+	if (ret < 0) {
+		DBG_ERR("failed to remove ceph mount from cache: %s\n",
+			strerror(errno));
 		return;
 	}
-
-	/* Should we unmount/shutdown? Only if the last disconnect? */
-	if (--cmount_cnt) {
-		DBG_DEBUG("[CEPH] Not shuting down CEPH because still more connections\n");
+	if (ret > 0) {
+		DBG_DEBUG("mount cache entry still in use\n");
 		return;
 	}
 
-	ret = ceph_unmount(cmount);
+	ret = ceph_unmount(handle->data);
 	if (ret < 0) {
 		DBG_ERR("[CEPH] failed to unmount: %s\n", strerror(-ret));
 	}
 
-	ret = ceph_release(cmount);
+	ret = ceph_release(handle->data);
 	if (ret < 0) {
 		DBG_ERR("[CEPH] failed to release: %s\n", strerror(-ret));
 	}
-
-	cmount = NULL;  /* Make it safe */
+	handle->data = NULL;
 }
 
 /* Disk operations */
diff --git a/source3/wscript b/source3/wscript
index 0dbf02ed530..71378796344 100644
--- a/source3/wscript
+++ b/source3/wscript
@@ -1698,6 +1698,8 @@ int main() {
             conf.CHECK_FUNCS_IN('ceph_statx', 'cephfs',
                                 headers='cephfs/libcephfs.h')):
             conf.DEFINE('HAVE_CEPH', '1')
+            conf.CHECK_FUNCS_IN('ceph_select_filesystem', 'cephfs',
+                                headers='cephfs/libcephfs.h')
         else:
             Logs.warn('''Ceph support disabled due to --without-acl-support
                       or lack of ceph_statx support''')


-- 
Samba Shared Repository


