PANIC internal error: samba 4.12.5 CTDB cluster

David Rivera rivera.david87 at gmail.com
Fri Jul 24 20:13:10 UTC 2020


Hi,

We recently moved to using samba as our file server (about two weeks ago).
Today we experienced a panic on all 3 CTDB cluster members that made all
shares unavailable (samba-ctdb-node0-panic-backtrace-internal-error.log
attached). I've also attached two additional panic backtraces found in the
smbd log (a "Bad talloc magic value" panic and a lock order violation).

We are using samba 4.12.5 built from source and have a 3-node CTDB cluster
with Ceph as the storage backend, mounted using the Ceph kernel client
(Linux kernel 5.7.7). Our SMB clients are Windows XP, Windows 7 and Windows
10 machines connecting through Microsoft DFS (Windows DCs) and storing a
range of file types, including Microsoft Office documents and shared
Microsoft Access databases. We have been running into issues with Windows XP
client sessions hanging and making their locked files inaccessible, but for
now we have been able to work around this by killing the associated smbd
process.
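
The workaround looks roughly like this on the affected node (the client IP
and PID below are placeholders; paths assume our /usr/local/samba prefix):

# /usr/local/samba/bin/smbstatus -p | grep <client IP>
# kill <PID of the hung smbd>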

We've compiled samba on CentOS 8 [CentOS Linux release 8.2.2004 (Core)]
using the following commands on all 3 nodes:

# PKG_CONFIG_PATH="/usr/lib/pkgconfig/:${PKG_CONFIG_PATH}" ./configure \
      --with-cluster-support --enable-ceph-reclock \
      --with-shared-modules=idmap_rid,idmap_tdb2,idmap_ad --without-ad-dc
# make -j 4
# make install
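
After installing we sanity-checked each build along these lines, to confirm
the version and that cluster support was compiled in (paths assume the
default /usr/local/samba prefix; we expect CLUSTER_SUPPORT to show up in the
build options):

# /usr/local/samba/sbin/smbd --version
# /usr/local/samba/sbin/smbd -b | grep -i cluster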

Our smb.conf is as follows:

[global]
        client min protocol = NT1
        clustering = Yes
        dedicated keytab file = /etc/krb5.keytab
        kerberos method = secrets and keytab
        netbios name = CTDB
        realm = DOMAIN1.COM
        reset on zero vc = Yes
        security = ADS
        server min protocol = NT1
        template shell = /bin/bash
        username map = /usr/local/samba/etc/user.map
        winbind nss info = rfc2307
        winbind refresh tickets = Yes
        workgroup = DOMAIN1
        idmap config domain5:unix_primary_group = yes
        idmap config domain5:unix_nss_info = no
        idmap config domain5:range = 50000-59999
        idmap config domain5:schema_mode = rfc2307
        idmap config domain5:backend = ad
        idmap config domain4:unix_primary_group = yes
        idmap config domain4:unix_nss_info = no
        idmap config domain4:range = 40000-49999
        idmap config domain4:schema_mode = rfc2307
        idmap config domain4:backend = ad
        idmap config domain3:unix_primary_group = yes
        idmap config domain3:unix_nss_info = no
        idmap config domain3:range = 30000-39999
        idmap config domain3:schema_mode = rfc2307
        idmap config domain3:backend = ad
        idmap config domain2:unix_primary_group = yes
        idmap config domain2:unix_nss_info = no
        idmap config domain2:range = 20000-29999
        idmap config domain2:schema_mode = rfc2307
        idmap config domain2:backend = ad
        idmap config domain1:unix_primary_group = yes
        idmap config domain1:unix_nss_info = no
        idmap config domain1:range = 10000-99999
        idmap config domain1:schema_mode = rfc2307
        idmap config domain1:backend = ad
        idmap config * : range = 3000-7999
        idmap config * : backend = tdb
        kernel share modes = No
        map acl inherit = Yes
        posix locking = No
        vfs objects = acl_xattr
        ## Used during testing, turned off for production
        #server multi channel support = yes
        #interfaces = "10.20.10.224;capability=RSS" "10.20.10.225;capability=RSS" "10.20.10.226;capability=RSS"

# Multiple shares defined this way
[share1]
        allocation roundup size = 4096
        comment = Share1
        # CephFS mount on /srv/samba
        path = /srv/samba/shares/share1
        read only = No
        vfs objects = acl_xattr ceph_snapshots io_uring

# Test share
[test]
        allocation roundup size = 4096
        comment = Test Share
        path = /srv/samba/shares/test
        smb encrypt = desired
        vfs objects = acl_xattr recycle ceph_snapshots io_uring
        recycle:exclude = thumbs.db,*.ldb,~$*
        recycle:touch = Yes
        recycle:versions = Yes
        recycle:keeptree = Yes
        recycle:repository = ../recycle/test
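
We can re-run testparm on each node and attach the full output if that would
help, roughly:

# /usr/local/samba/bin/testparm -s /usr/local/samba/etc/smb.conf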

Here is our CTDB configuration file:

[logging]
        location = syslog
        log level = NOTICE

[cluster]
        recovery lock = !/usr/local/samba/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.samba rados.samba.conf ctdb.lock
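
For completeness, we check the cluster and recovery-lock state from any node
with the following (path again assumes the default /usr/local/samba prefix):

# /usr/local/samba/bin/ctdb status
# /usr/local/samba/bin/ctdb getreclock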

Please let me know how I can help figure out the cause of these panics.
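
In the meantime we're happy to raise the smbd debug level or capture a
fuller backtrace the next time a process wedges or panics, e.g. (the PID is
a placeholder and gdb would need to be installed on the nodes):

# /usr/local/samba/bin/smbcontrol smbd debug 10
# gdb --batch -ex 'thread apply all bt full' -p <PID of the affected smbd>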

Thank you,
David
-------------- next part --------------
A non-text attachment was scrubbed...
Name: samba-ctdb-node0-panic-backtrace-internal-error.log
Type: application/octet-stream
Size: 4800 bytes
Desc: not available
URL: <http://lists.samba.org/pipermail/samba-technical/attachments/20200724/df763968/samba-ctdb-node0-panic-backtrace-internal-error.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: samba-ctdb-node0-panic-backtrace-Bad-talloc-magic-value.log
Type: application/octet-stream
Size: 3367 bytes
Desc: not available
URL: <http://lists.samba.org/pipermail/samba-technical/attachments/20200724/df763968/samba-ctdb-node0-panic-backtrace-Bad-talloc-magic-value.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: samba-ctdb-node0-panic-backtrace-lock-order-violation.log
Type: application/octet-stream
Size: 6186 bytes
Desc: not available
URL: <http://lists.samba.org/pipermail/samba-technical/attachments/20200724/df763968/samba-ctdb-node0-panic-backtrace-lock-order-violation.obj>

