[Samba] sssd on a DC

Jonathan Hunter jmhunter1 at gmail.com
Sat May 9 19:36:35 MDT 2015

Thanks all.

I have manually copied idmap.ldb across to my other DCs, and run "net
cache flush", which I think does the same as the above.
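A sketch of that manual copy, assuming the default paths for a
source-built Samba (packaged installs will differ - check with 'smbd -b'):

```shell
# Copy the ID mappings from this DC to the other one. The path
# /usr/local/samba/private/ is an assumption (default for source builds).
rsync -a /usr/local/samba/private/idmap.ldb \
    dc2.domain.tld:/usr/local/samba/private/idmap.ldb

# On the receiving DC, flush cached mappings and reset the sysvol ACLs:
ssh dc2.domain.tld 'net cache flush && samba-tool ntacl sysvolreset'
```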

Things seem to be working much better in this respect now. I still
think it's a temporary fix, i.e. I'll need to manually copy idmap.ldb
across and flush the cache again if the set of users editing files in
sysvol changes - but for now it has at least solved the immediate
issue for me, so thank you.

For reference, the command I was using to test out group policy was:
C:\> gpupdate /force /wait:-1

which tells gpupdate to wait until the very end before returning -
much easier to see any error messages this way.

On 10 May 2015 at 02:22, Achim Gottinger <achim at ag-web.biz> wrote:
> Hello Jonathan and Rowland,
> Am 09.05.2015 um 17:46 schrieb Rowland Penny:
>> On 09/05/15 18:20, Jonathan Hunter wrote:
>>> Hi,
>>> I have a query about the use of sssd on a Samba4 DC. Background is as
>>> follows:
>>> I have two DCs and would like to synchronise files between the two
>>> machines. This is for sysvol replication - I am using lsyncd (
>>> https://code.google.com/p/lsyncd/ ) to trigger an rsync whenever files
>>> change.
>>> However I have hit a predictable problem, which is that since there is
>>> no synchronised UID mapping between the two servers (they are both DCs
>>> so rid mapping won't work), when I update a group policy using the
>>> Windows tools, and the rsync job runs in response, my client machines
>>> aren't able to successfully apply the policy when they reboot /
>>> gpupdate. Error messages from Windows are along the lines of:
>>> "The processing of Group Policy failed. Windows attempted to read the
>>> file \\domain.tld\sysvol\domain.tld\Policies\{g-u-i-d}\gpt.ini from a
>>> domain controller and was not successful. Group Policy settings may
>>> not be applied until this event is resolved. [...]"
>>> I can run "samba-tool ntacl sysvolreset" on the offending DC and that
>>> fixes things straight away.. but only until a client requests a GPO
>>> from the *other* DC, at which point the file ownerships on /that/ one
>>> are wrong, and I have to repeat the process again.
>>> After some previous advice (thanks Rowland) I think the best solution
>>> for me would be to install and configure sssd on the DCs to
>>> synchronise the UIDs and GIDs, at which point rsync in this way should
>>> work just fine.. However, I don't know enough about keytabs and
>>> suchlike.
>>> I have sssd configured at a basic level, but am getting a strange error.
>>>  From the log files of sssd, things look fine up to this point:
>>> [resolv_getsrv_send] (0x0100): Trying to resolve SRV record of
>>> '_ldap._tcp.domain.tld'
>>> but the rest only seems to work about 50% of the time, i.e. only when
>>> the above line resolves to the *other* DC.
>>> I have been testing sssd on DC1 first of all. When the above DNS query
>>> resolves to DC1, I get:
>>> [be_resolve_server_process] (0x0200): Found address for server
>>> dc1.domain.tld: [] TTL 900
>>> [ldap_child_get_tgt_sync] (0x0100): Principal name is: [DC1$@DOMAIN.TLD]
>>> [...]
>>> [sasl_bind_send] (0x0100): Executing sasl bind mech: gssapi, user: DC1$
>>> [sasl_bind_send] (0x0020): ldap_sasl_bind failed (-2)[Local error]
>>> [sasl_bind_send] (0x0080): Extended failure message: [SASL(-1):
>>> generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code
>>> may provide more information (Server not found in Kerberos database)]
>>> [fo_set_port_status] (0x0100): Marking port 389 of server
>>> 'dc1.domain.tld' as 'not working'
>>> However, it's perfectly happy when the query resolves to DC2:
>>> [be_resolve_server_process] (0x0200): Found address for server
>>> dc2.domain.tld: [] TTL 900
>>> [ldap_child_get_tgt_sync] (0x0100): Principal name is: [DC1$@DOMAIN.TLD]
>>> [...]
>>> [sasl_bind_send] (0x0100): Executing sasl bind mech: gssapi, user: DC1$
>>> [child_sig_handler] (0x0100): child [8505] finished successfully.
>>> [fo_set_port_status] (0x0100): Marking port 389 of server
>>> 'dc2.domain.tld' as 'working'
>>> [set_server_common_status] (0x0100): Marking server 'dc2.domain.tld'
>>> as 'working'
>>> At first I thought it was something to do with the keytab file (which
>>> is a bit of a black box to me and I don't quite understand); I even
>>> extracted the keytab for DC1 and told sssd to use it directly, but I'm
>>> confused as to why DC1 would have a problem authenticating against
>>> itself, whereas DC2 is quite happy for it to do so.
>>> I used:
>>> # samba-tool domain exportkeytab /etc/krb5-dc1.keytab --principal=DC1\$
>>> and added to sssd.conf:
>>> krb5_keytab=/etc/krb5-dc1.keytab
>>> I suspect this is a samba query, not sssd, given the log messages
>>> above. Can anyone help suggest further debug commands / tests I can
>>> run?
>>> Both machines are CentOS 6.6; samba 4.1 compiled from source.
>>> Many thanks
>>> Jonathan
>> I think what you are hitting here is the problem where idmap.ldb is not
>> synced between DCs. This means that a Windows group can have different IDs
>> on different DCs. A cure is to copy idmap.ldb from the first DC to the
>> second DC and then run 'samba-tool ntacl sysvolreset'. Unfortunately, this
>> doesn't seem to work with the latest Samba 4.2.x, and it also seems that
>> this will not be fixed.
>> If this doesn't work, then I would recommend taking your problem to the
>> sssd list - they know more about sssd; after all, they write it :-)
>> Rowland
> Copying idmap.ldb still works with 4.2.x. If one uses winbindd, the cache
> file /var/cache/samba/gencache.tdb (the location of that file may differ
> depending on the build options) must be removed after copying idmap.ldb.
> achim~
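Achim's copy-and-flush sequence might look like this on a packaged build.
All paths and the init script name here are assumptions - verify the real
cache and private directories with 'smbd -b':

```shell
# Run on the DC that receives the copy of idmap.ldb:
service samba stop                     # init script name varies per distro
rsync -a dc1.domain.tld:/var/lib/samba/private/idmap.ldb \
    /var/lib/samba/private/idmap.ldb
rm -f /var/cache/samba/gencache.tdb    # drop the stale winbindd cache
service samba start
samba-tool ntacl sysvolreset           # re-apply sysvol ACLs with the new IDs
```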
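For completeness, the keytab export discussed above takes '--principal='
with an equals sign. A sketch, with the realm assumed from the thread:

```shell
# Export the DC1 machine account key into a dedicated keytab:
samba-tool domain exportkeytab /etc/krb5-dc1.keytab --principal=DC1\$

# Inspect the result to confirm the DC1$ entry is present:
klist -k /etc/krb5-dc1.keytab
```

sssd is then pointed at it with krb5_keytab=/etc/krb5-dc1.keytab in
sssd.conf, as quoted earlier.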

"If we knew what it was we were doing, it would not be called
research, would it?"
      - Albert Einstein
