[SCM] Samba Shared Repository - branch master updated

Andrew Bartlett abartlet at samba.org
Mon Jun 6 10:26:03 UTC 2016


The branch, master has been updated
       via  4d26210 dsdb: Cache our local invocation_id at the start of each request
       via  ee80da5 repl: Retry replication of the schema on WERR_DS_DRA_SCHEMA_MISMATCH
       via  db15993 dbcheck: Find and fix a missing Deleted Objects container
       via  e070af2 selftest: Only mark runtime dbcheck as flapping
       via  0f28c96 repl: Remove duplicated delete of sAMAccountType
       via  de8fb1f dsdb: Allow "cn" to be missing on schema import
       via  c6ed444 selftest: Make replica_sync test pass against Windows 2012R2
       via  ca2e038 samba-tool domain join: Set drsuapi.DRSUAPI_DRS_GET_ANC during initial repl
       via  d103aab dsdb: Only search the provided partition for the object GUID
       via  65a35ac repl: Pass in the full partition DN to dsdb_replicated_objects_convert()
       via  5967852 repl: Allow GetNCChanges DRSUAPI_EXOP_REPL_OBJ to succeed against a deleted object
       via  21b4f67 dsdb: Remove incorrect RDN attid check in replmd_replPropertyMetaDataCtr1_verify
       via  5c6a977 selftest: Use random OIDs from under the Samba OID arc
      from  93dcca2 ctdb-recovery: Update timeout and number of retries during recovery

https://git.samba.org/?p=samba.git;a=shortlog;h=master


- Log -----------------------------------------------------------------
commit 4d26210806419ef5710f38d0fa478e9f52243484
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Fri May 27 10:22:15 2016 +1200

    dsdb: Cache our local invocation_id at the start of each request
    
    This avoids fetching it over and over again
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>
    
    Autobuild-User(master): Andrew Bartlett <abartlet at samba.org>
    Autobuild-Date(master): Mon Jun  6 12:25:14 CEST 2016 on sn-devel-144

commit ee80da56b96721170a3349531292827888e0c3e2
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Wed May 4 16:19:09 2016 +1200

    repl: Retry replication of the schema on WERR_DS_DRA_SCHEMA_MISMATCH
    
    When the schema changes, this makes us replicate the schema first and
    then go back to what we originally asked to replicate.  This should
    make replication much more robust after schema changes.
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>
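The retry behaviour this commit describes can be sketched roughly as follows. This is an illustrative outline only, not Samba code; the exception class and function names are hypothetical stand-ins for the WERR_DS_DRA_SCHEMA_MISMATCH handling in drepl_out_helpers.c.

```python
class SchemaMismatch(Exception):
    """Hypothetical stand-in for WERR_DS_DRA_SCHEMA_MISMATCH."""


def replicate_with_schema_retry(replicate, schema_nc, target_nc):
    """Replicate target_nc; on a schema mismatch, pull the schema
    partition first, then redo the original request once."""
    try:
        return replicate(target_nc)
    except SchemaMismatch:
        # Divert to the schema NC so we have all the schema objects
        # we need, then return to what was originally asked for.
        replicate(schema_nc)
        return replicate(target_nc)
```

A caller would pass in whatever performs one GetNCChanges-style pull; the point is only the fetch-schema-then-retry ordering.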

commit db15993401f927fd2fcea1687c4155dce2272aa8
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Thu Mar 24 20:12:55 2016 +1300

    dbcheck: Find and fix a missing Deleted Objects container
    
    Older Samba versions could delete this container.  This patch tries
    very hard to put back the original object, with the original GUID, so
    that if another replica has the correct container, we merge rather
    than conflict.
    
    The existing "wrong dn" check can then put any deleted objects
    under this container correctly.
    
    Pair-programmed-with: Garming Sam <garming at catalyst.net.nz>
    Pair-programmed-with: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>
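The merge-not-conflict trick above hinges on re-using the objectGUID recorded in the NC's wellKnownObjects link when re-adding the container. A minimal sketch of the LDIF construction, modelled on the dbchecker.py hunk further down in this mail (the helper function itself is hypothetical):

```python
def deleted_objects_ldif(nc_dn, original_guid=None):
    """Build the add-LDIF for a re-created Deleted Objects container.

    Supplying the GUID from the old wellKnownObjects value means a
    replica that still has the real container will merge with this
    object on replication rather than raising a conflict.
    """
    dn = "CN=Deleted Objects,%s" % nc_dn
    guid_suffix = ""
    if original_guid is not None:
        # Keep the original objectGUID so replicas merge, not conflict.
        guid_suffix = "\nobjectGUID: %s" % original_guid
    return """dn: %s
objectClass: top
objectClass: container
description: Container for deleted objects
isDeleted: TRUE
isCriticalSystemObject: TRUE
showInAdvancedViewOnly: TRUE
systemFlags: -1946157056%s""" % (dn, guid_suffix)
```

In the real patch this string is handed to samdb.add_ldif() under the relax and provision controls.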

commit e070af22ac7a0fdcb4c275233d0af7bb49c6fb4d
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Mon May 30 14:22:23 2016 +1200

    selftest: Only mark runtime dbcheck as flapping
    
    The dbcheck-oldrelease.sh based tests do not flap
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>

commit 0f28c96d88cd9762e335d49b2ebf7d0ebdcbca48
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Wed May 25 12:41:35 2016 +1200

    repl: Remove duplicated delete of sAMAccountType
    
    This caused the version number in replPropertyMetaData to be bumped
    twice, incorrectly: the delete is already implied by the attribute
    not being in the list of attributes to be preserved.
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>

commit de8fb1f1edb7de8fca5bff5a5a2818f15de6fff2
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Fri May 27 10:28:26 2016 +1200

    dsdb: Allow "cn" to be missing on schema import
    
    This avoids a segfault when we remove the duplication of this value from dsdb_convert_object_ex()
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>

commit c6ed444ac76631c437eea7b4418fbf7c3932f9c0
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Thu Mar 24 11:30:04 2016 +1300

    selftest: Make replica_sync test pass against Windows 2012R2
    
    The tests that invoke --local will not pass against Windows, naturally, but otherwise the suite works
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>

commit ca2e038aa5221d80a2a84aa1d65c3c246502ddc5
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Wed Dec 9 17:04:14 2015 +1300

    samba-tool domain join: Set drsuapi.DRSUAPI_DRS_GET_ANC during initial repl
    
    This is needed so that we get parents before children.
    
    We need this to ensure that we always know the correct parent for a
    new child object, rather than just trusting the DN string.
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>
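The flag handling in the join.py hunk below can be illustrated as a small sketch. The flag values here are taken from MS-DRSR's DRS_OPTIONS and should be treated as assumptions in this sketch; DRS_GET_ANC asks the server to send parent objects before their children, so the client never has to guess a new child's parent from its DN string alone.

```python
# Assumed DRS_OPTIONS bits (per MS-DRSR); illustrative only.
DRSUAPI_DRS_CRITICAL_ONLY = 0x00000400
DRSUAPI_DRS_GET_ANC       = 0x00000800


def initial_repl_flags(base_flags):
    # The join code now sets GET_ANC for the initial replication...
    return base_flags | DRSUAPI_DRS_GET_ANC


def after_critical_pass(flags):
    # ...and after the critical-objects pass clears only CRITICAL_ONLY;
    # the patch stops XOR-ing GET_ANC away at this point, so parents
    # keep arriving before children for the rest of the join.
    return flags & ~DRSUAPI_DRS_CRITICAL_ONLY
```

The corresponding one-line change in python/samba/join.py (dropping `| drsuapi.DRSUAPI_DRS_GET_ANC` from the XOR) appears in the diff below.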

commit d103aabcf59cf376b21b7714d14878359fb3e157
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Thu Mar 24 20:18:19 2016 +1300

    dsdb: Only search the provided partition for the object GUID
    
    We know which NC this is in, so save searching all the backends.
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>

commit 65a35acbf33dd6dfd6ba6b2fb76580126829b597
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Mon Mar 21 15:49:33 2016 +1300

    repl: Pass in the full partition DN to dsdb_replicated_objects_convert()
    
    When we were processing an EXOP, we would pass in a DN specific to that
    operation, but this stopped repl_meta_data from finding the parent object.
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>

commit 5967852aa085cd13784b4f994122a78d2ab64845
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Wed May 25 12:36:59 2016 +1200

    repl: Allow GetNCChanges DRSUAPI_EXOP_REPL_OBJ to succeed against a deleted object
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>

commit 21b4f67acd51af3ab1da069933a64c328ab57998
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Sun May 15 21:31:27 2016 +1200

    dsdb: Remove incorrect RDN attid check in replmd_replPropertyMetaDataCtr1_verify
    
    On Windows, the RDN is not sorted last.  Additionally, the check should
    have been against dsdb_attribute_get_attid(), not just ->attributeID_id.
    
    By including this fix in earlier versions of Samba, we allow Samba
    databases from Samba 4.5, where this will be sorted in the same way
    as Windows 2012R2, to be backported.
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=11842

commit 5c6a97769c956f8cc1b3f0ff697e9dc5d0892587
Author: Andrew Bartlett <abartlet at samba.org>
Date:   Mon Apr 18 14:58:10 2016 +1200

    selftest: Use random OIDs from under the Samba OID arc
    
    The urgent_replication.py test used the OID of uid, and this caused
    other tests to fail.
    
    The other random OIDs should have been from under our arc, not under
    iso.member-body.us.
    
    We split up the range a little, in the tests that create multiple
    OIDs, to avoid some of the birthday paradox.
    
    Signed-off-by: Andrew Bartlett <abartlet at samba.org>
    Reviewed-by: Garming Sam <garming at catalyst.net.nz>
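The birthday-paradox concern mentioned above is easy to quantify. A quick illustration (not Samba code) of why splitting the OID range helps: the probability that k random draws from a pool of n values collide grows surprisingly fast, so giving each test its own sub-range shrinks the effective k per pool.

```python
def collision_probability(k, n):
    """P(at least one collision) among k uniform random draws
    from a pool of n values (the birthday problem)."""
    p_unique = 1.0
    for i in range(k):
        # Probability the (i+1)-th draw avoids all earlier draws.
        p_unique *= (n - i) / n
    return 1.0 - p_unique
```

For the classic case, collision_probability(23, 365) is just over one half, which is why even a modest number of randomly chosen OIDs in a shared range will eventually clash across test runs.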

-----------------------------------------------------------------------

Summary of changes:
 python/samba/dbchecker.py                          | 152 ++++++++++++++--
 python/samba/join.py                               |   5 +-
 selftest/flapping                                  |   2 +-
 source4/dsdb/common/util.c                         |   2 +-
 source4/dsdb/repl/drepl_out_helpers.c              | 119 +++++++++++-
 source4/dsdb/repl/drepl_service.h                  |   7 +
 source4/dsdb/repl/replicated_objects.c             |  10 +-
 source4/dsdb/samdb/ldb_modules/objectclass.c       |   8 +
 source4/dsdb/samdb/ldb_modules/repl_meta_data.c    |  61 ++-----
 source4/dsdb/schema/schema_init.c                  |  20 ++
 source4/dsdb/tests/python/dsdb_schema_info.py      |   2 +-
 source4/dsdb/tests/python/ldap_schema.py           |  12 +-
 source4/dsdb/tests/python/ldap_syntaxes.py         |   8 +-
 source4/dsdb/tests/python/sec_descriptor.py        |   2 +-
 source4/dsdb/tests/python/urgent_replication.py    |   5 +-
 source4/libnet/libnet_become_dc.c                  |  11 +-
 source4/libnet/libnet_vampire.c                    |  28 ++-
 .../expected-deleted_objects-after-dbcheck.ldif    |  10 +
 source4/setup/schema_samba4.ldif                   |  11 ++
 source4/torture/drs/python/repl_schema.py          |  68 +++++--
 source4/torture/drs/python/replica_sync.py         | 202 ++++++++++++---------
 source4/torture/drs/rpc/dssync.c                   |   5 +-
 source4/torture/drs/rpc/msds_intid.c               |   2 +-
 testprogs/blackbox/dbcheck-oldrelease.sh           |  46 +++++
 24 files changed, 606 insertions(+), 192 deletions(-)
 create mode 100644 source4/selftest/provisions/release-4-1-0rc3/expected-deleted_objects-after-dbcheck.ldif


Changeset truncated at 500 lines:

diff --git a/python/samba/dbchecker.py b/python/samba/dbchecker.py
index bcefc26..75eff51 100644
--- a/python/samba/dbchecker.py
+++ b/python/samba/dbchecker.py
@@ -69,6 +69,7 @@ class dbcheck(object):
         self.fix_replmetadata_wrong_attid = False
         self.fix_replmetadata_unsorted_attid = False
         self.fix_deleted_deleted_objects = False
+        self.fix_incorrect_deleted_objects = False
         self.fix_dn = False
         self.fix_base64_userparameters = False
         self.fix_utf8_userparameters = False
@@ -84,6 +85,7 @@ class dbcheck(object):
         self.class_schemaIDGUID = {}
         self.wellknown_sds = get_wellknown_sds(self.samdb)
         self.fix_all_missing_objectclass = False
+        self.fix_missing_deleted_objects = False
 
         self.dn_set = set()
 
@@ -115,27 +117,31 @@ class dbcheck(object):
                 self.write_ncs = None
 
         res = self.samdb.search(base="", scope=ldb.SCOPE_BASE, attrs=['namingContexts'])
+        self.deleted_objects_containers = []
+        self.ncs_lacking_deleted_containers = []
         try:
-            ncs = res[0]["namingContexts"]
-            self.deleted_objects_containers = []
-            for nc in ncs:
-                try:
-                    dn = self.samdb.get_wellknown_dn(ldb.Dn(self.samdb, nc),
-                                                     dsdb.DS_GUID_DELETED_OBJECTS_CONTAINER)
-                    self.deleted_objects_containers.append(dn)
-                except KeyError:
-                    pass
+            self.ncs = res[0]["namingContexts"]
         except KeyError:
             pass
         except IndexError:
             pass
 
+        for nc in self.ncs:
+            try:
+                dn = self.samdb.get_wellknown_dn(ldb.Dn(self.samdb, nc),
+                                                 dsdb.DS_GUID_DELETED_OBJECTS_CONTAINER)
+                self.deleted_objects_containers.append(dn)
+            except KeyError:
+                self.ncs_lacking_deleted_containers.append(ldb.Dn(self.samdb, nc))
+
     def check_database(self, DN=None, scope=ldb.SCOPE_SUBTREE, controls=[], attrs=['*']):
         '''perform a database check, returning the number of errors found'''
         res = self.samdb.search(base=DN, scope=scope, attrs=['dn'], controls=controls)
         self.report('Checking %u objects' % len(res))
         error_count = 0
 
+        error_count += self.check_deleted_objects_containers()
+
         for object in res:
             self.dn_set.add(str(object.dn))
             error_count += self.check_object(object.dn, attrs=attrs)
@@ -149,6 +155,105 @@ class dbcheck(object):
         self.report('Checked %u objects (%u errors)' % (len(res), error_count))
         return error_count
 
+
+    def check_deleted_objects_containers(self):
+        """This function only fixes conflicts on the Deleted Objects
+        containers, not the attributes"""
+        error_count = 0
+        for nc in self.ncs_lacking_deleted_containers:
+            if nc == self.schema_dn:
+                continue
+            error_count += 1
+            self.report("ERROR: NC %s lacks a reference to a Deleted Objects container" % nc)
+            if not self.confirm_all('Fix missing Deleted Objects container for %s?' % (nc), 'fix_missing_deleted_objects'):
+                continue
+
+            dn = ldb.Dn(self.samdb, "CN=Deleted Objects")
+            dn.add_base(nc)
+
+            conflict_dn = None
+            try:
+                # If something already exists here, add a conflict
+                res = self.samdb.search(base=dn, scope=ldb.SCOPE_BASE, attrs=[],
+                                        controls=["show_deleted:1", "extended_dn:1:1",
+                                                  "show_recycled:1", "reveal_internals:0"])
+                if len(res) != 0:
+                    guid = res[0].dn.get_extended_component("GUID")
+                    conflict_dn = ldb.Dn(self.samdb,
+                                         "CN=Deleted Objects\\0ACNF:%s" % str(misc.GUID(guid)))
+                    conflict_dn.add_base(nc)
+
+            except ldb.LdbError, (enum, estr):
+                if enum == ldb.ERR_NO_SUCH_OBJECT:
+                    pass
+                else:
+                    self.report("Couldn't check for conflicting Deleted Objects container: %s" % estr)
+                    return 1
+
+            if conflict_dn is not None:
+                try:
+                    self.samdb.rename(dn, conflict_dn, ["show_deleted:1", "relax:0", "show_recycled:1"])
+                except ldb.LdbError, (enum, estr):
+                    self.report("Couldn't move old Deleted Objects placeholder: %s to %s: %s" % (dn, conflict_dn, estr))
+                    return 1
+
+            # Refresh wellKnownObjects links
+            res = self.samdb.search(base=nc, scope=ldb.SCOPE_BASE,
+                                    attrs=['wellKnownObjects'],
+                                    controls=["show_deleted:1", "extended_dn:0",
+                                              "show_recycled:1", "reveal_internals:0"])
+            if len(res) != 1:
+                self.report("wellKnownObjects was not found for NC %s" % nc)
+                return 1
+
+            # Prevent duplicate deleted objects containers just in case
+            wko = res[0]["wellKnownObjects"]
+            listwko = []
+            proposed_objectguid = None
+            for o in wko:
+                dsdb_dn = dsdb_Dn(self.samdb, o, dsdb.DSDB_SYNTAX_BINARY_DN)
+                if self.is_deleted_objects_dn(dsdb_dn):
+                    self.report("wellKnownObjects had duplicate Deleted Objects value %s" % o)
+                    # We really want to put this back in the same spot
+                    # as the original one, so that on replication we
+                    # merge, rather than conflict.
+                    proposed_objectguid = dsdb_dn.dn.get_extended_component("GUID")
+                listwko.append(o)
+
+            if proposed_objectguid is not None:
+                guid_suffix = "\nobjectGUID: %s" % str(misc.GUID(proposed_objectguid))
+            else:
+                wko_prefix = "B:32:%s" % dsdb.DS_GUID_DELETED_OBJECTS_CONTAINER
+                listwko.append('%s:%s' % (wko_prefix, dn))
+                guid_suffix = ""
+
+            # Insert a brand new Deleted Objects container
+            self.samdb.add_ldif("""dn: %s
+objectClass: top
+objectClass: container
+description: Container for deleted objects
+isDeleted: TRUE
+isCriticalSystemObject: TRUE
+showInAdvancedViewOnly: TRUE
+systemFlags: -1946157056%s""" % (dn, guid_suffix),
+                                controls=["relax:0", "provision:0"])
+
+            delta = ldb.Message()
+            delta.dn = ldb.Dn(self.samdb, str(res[0]["dn"]))
+            delta["wellKnownObjects"] = ldb.MessageElement(listwko,
+                                                           ldb.FLAG_MOD_REPLACE,
+                                                           "wellKnownObjects")
+
+            # Insert the link to the brand new container
+            if self.do_modify(delta, ["relax:0"],
+                              "NC %s lacks Deleted Objects WKGUID" % nc,
+                              validate=False):
+                self.report("Added %s well known guid link" % dn)
+
+            self.deleted_objects_containers.append(dn)
+
+        return error_count
+
     def report(self, msg):
         '''print a message unless quiet is set'''
         if not self.quiet:
@@ -1143,21 +1248,31 @@ newSuperior: %s""" % (str(from_dn), str(to_rdn), str(to_base)))
         if "description" not in obj:
             self.report("ERROR: description not present on Deleted Objects container %s" % obj.dn)
             faulty = True
-        if "showInAdvancedViewOnly" not in obj:
+        if "showInAdvancedViewOnly" not in obj or obj['showInAdvancedViewOnly'][0].upper() == 'FALSE':
             self.report("ERROR: showInAdvancedViewOnly not present on Deleted Objects container %s" % obj.dn)
             faulty = True
         if "objectCategory" not in obj:
             self.report("ERROR: objectCategory not present on Deleted Objects container %s" % obj.dn)
             faulty = True
-        if "isCriticalSystemObject" not in obj:
+        if "isCriticalSystemObject" not in obj or obj['isCriticalSystemObject'][0].upper() == 'FALSE':
             self.report("ERROR: isCriticalSystemObject not present on Deleted Objects container %s" % obj.dn)
             faulty = True
         if "isRecycled" in obj:
             self.report("ERROR: isRecycled present on Deleted Objects container %s" % obj.dn)
             faulty = True
+        if "isDeleted" in obj and obj['isDeleted'][0].upper() == 'FALSE':
+            self.report("ERROR: isDeleted not set on Deleted Objects container %s" % obj.dn)
+            faulty = True
+        if "objectClass" not in obj or (len(obj['objectClass']) != 2 or
+                                        obj['objectClass'][0] != 'top' or
+                                        obj['objectClass'][1] != 'container'):
+            self.report("ERROR: objectClass incorrectly set on Deleted Objects container %s" % obj.dn)
+            faulty = True
+        if "systemFlags" not in obj or obj['systemFlags'][0] != '-1946157056':
+            self.report("ERROR: systemFlags incorrectly set on Deleted Objects container %s" % obj.dn)
+            faulty = True
         return faulty
 
-
     def err_deleted_deleted_objects(self, obj):
         nmsg = ldb.Message()
         nmsg.dn = dn = obj.dn
@@ -1173,6 +1288,10 @@ newSuperior: %s""" % (str(from_dn), str(to_rdn), str(to_base)))
         if "isRecycled" in obj:
             nmsg["isRecycled"] = ldb.MessageElement("TRUE", ldb.FLAG_MOD_DELETE, "isRecycled")
 
+        nmsg["isDeleted"] = ldb.MessageElement("TRUE", ldb.FLAG_MOD_REPLACE, "isDeleted")
+        nmsg["systemFlags"] = ldb.MessageElement("-1946157056", ldb.FLAG_MOD_REPLACE, "systemFlags")
+        nmsg["objectClass"] = ldb.MessageElement(["top", "container"], ldb.FLAG_MOD_REPLACE, "objectClass")
+
         if not self.confirm_all('Fix Deleted Objects container %s by restoring default attributes?'
                                 % (dn), 'fix_deleted_deleted_objects'):
             self.report('Not fixing missing/incorrect attributes on %s\n' % (dn))
@@ -1281,9 +1400,12 @@ newSuperior: %s""" % (str(from_dn), str(to_rdn), str(to_base)))
         nc_dn = self.samdb.get_nc_root(obj.dn)
         try:
             deleted_objects_dn = self.samdb.get_wellknown_dn(nc_dn,
-                                                 samba.dsdb.DS_GUID_DELETED_OBJECTS_CONTAINER)
-        except KeyError, e:
-            deleted_objects_dn = ldb.Dn(self.samdb, "CN=Deleted Objects,%s" % nc_dn)
+                                                             samba.dsdb.DS_GUID_DELETED_OBJECTS_CONTAINER)
+        except KeyError:
+            # We have no deleted objects DN for schema, and we check for this above for the other
+            # NCs
+            deleted_objects_dn = None
+
 
         object_rdn_attr = None
         object_rdn_val = None
diff --git a/python/samba/join.py b/python/samba/join.py
index ebfd63e..103e4d9 100644
--- a/python/samba/join.py
+++ b/python/samba/join.py
@@ -833,7 +833,7 @@ class dc_join(object):
                     repl.replicate(ctx.base_dn, source_dsa_invocation_id,
                                 destination_dsa_guid, rodc=ctx.RODC,
                                 replica_flags=ctx.domain_replica_flags)
-                    ctx.domain_replica_flags ^= drsuapi.DRSUAPI_DRS_CRITICAL_ONLY | drsuapi.DRSUAPI_DRS_GET_ANC
+                    ctx.domain_replica_flags ^= drsuapi.DRSUAPI_DRS_CRITICAL_ONLY
                 else:
                     ctx.domain_replica_flags |= drsuapi.DRSUAPI_DRS_GET_ANC
                 repl.replicate(ctx.base_dn, source_dsa_invocation_id,
@@ -1189,6 +1189,7 @@ def join_DC(logger=None, server=None, creds=None, lp=None, site=None, netbios_na
     ctx.replica_flags = (drsuapi.DRSUAPI_DRS_WRIT_REP |
                          drsuapi.DRSUAPI_DRS_INIT_SYNC |
                          drsuapi.DRSUAPI_DRS_PER_SYNC |
+                         drsuapi.DRSUAPI_DRS_GET_ANC |
                          drsuapi.DRSUAPI_DRS_FULL_SYNC_IN_PROGRESS |
                          drsuapi.DRSUAPI_DRS_NEVER_SYNCED)
     ctx.domain_replica_flags = ctx.replica_flags
@@ -1213,6 +1214,7 @@ def join_clone(logger=None, server=None, creds=None, lp=None,
     ctx.replica_flags = (drsuapi.DRSUAPI_DRS_WRIT_REP |
                          drsuapi.DRSUAPI_DRS_INIT_SYNC |
                          drsuapi.DRSUAPI_DRS_PER_SYNC |
+                         drsuapi.DRSUAPI_DRS_GET_ANC |
                          drsuapi.DRSUAPI_DRS_FULL_SYNC_IN_PROGRESS |
                          drsuapi.DRSUAPI_DRS_NEVER_SYNCED)
     if not include_secrets:
@@ -1268,6 +1270,7 @@ def join_subdomain(logger=None, server=None, creds=None, lp=None, site=None,
     ctx.replica_flags = (drsuapi.DRSUAPI_DRS_WRIT_REP |
                          drsuapi.DRSUAPI_DRS_INIT_SYNC |
                          drsuapi.DRSUAPI_DRS_PER_SYNC |
+                         drsuapi.DRSUAPI_DRS_GET_ANC |
                          drsuapi.DRSUAPI_DRS_FULL_SYNC_IN_PROGRESS |
                          drsuapi.DRSUAPI_DRS_NEVER_SYNCED)
     ctx.domain_replica_flags = ctx.replica_flags
diff --git a/selftest/flapping b/selftest/flapping
index d5716a5..8422e20 100644
--- a/selftest/flapping
+++ b/selftest/flapping
@@ -28,7 +28,7 @@
 ^samba4.tombstone_reanimation.python # flakey on sn-devel
 ^samba4.drs.delete_object.python # flakey on sn-devel
 ^samba4.blackbox.samba_tool_demote # flakey on sn-devel
-^samba4.blackbox.dbcheck # flakey on sn-devel
+^samba4.blackbox.dbcheck\( # flakey on sn-devel
 ^samba4.smb2.create.mkdir-dup\(ad_dc_ntvfs\) # This test (for bug 11486) involves a race, not always protected against in the NTVFS file server
 ^samba4.winbind.struct.domain_info.ad_member # flakey on sn-devel-104 and sn-devel-144
 #
diff --git a/source4/dsdb/common/util.c b/source4/dsdb/common/util.c
index 84d3c6b..cfa1376 100644
--- a/source4/dsdb/common/util.c
+++ b/source4/dsdb/common/util.c
@@ -3984,7 +3984,7 @@ int dsdb_load_udv_v2(struct ldb_context *samdb, struct ldb_dn *dn, TALLOC_CTX *m
 	static const struct timeval tv1970;
 	NTTIME nt1970 = timeval_to_nttime(&tv1970);
 
-	ret = ldb_search(samdb, mem_ctx, &r, dn, LDB_SCOPE_BASE, attrs, NULL);
+	ret = dsdb_search_dn(samdb, mem_ctx, &r, dn, attrs, DSDB_SEARCH_SHOW_RECYCLED|DSDB_SEARCH_SHOW_DELETED);
 	if (ret != LDB_SUCCESS) {
 		return ret;
 	}
diff --git a/source4/dsdb/repl/drepl_out_helpers.c b/source4/dsdb/repl/drepl_out_helpers.c
index a1e8dcb..54f44c6 100644
--- a/source4/dsdb/repl/drepl_out_helpers.c
+++ b/source4/dsdb/repl/drepl_out_helpers.c
@@ -671,6 +671,8 @@ static void dreplsrv_op_pull_source_apply_changes_trigger(struct tevent_req *req
 	WERROR status;
 	NTSTATUS nt_status;
 	uint32_t dsdb_repl_flags = 0;
+	struct ldb_dn *nc_root = NULL;
+	int ret;
 
 	switch (ctr_level) {
 	case 1:
@@ -744,9 +746,22 @@ static void dreplsrv_op_pull_source_apply_changes_trigger(struct tevent_req *req
 		dsdb_repl_flags |= DSDB_REPL_FLAG_EXPECT_NO_SECRETS;
 	}
 
+	if (state->op->extended_op != DRSUAPI_EXOP_NONE) {
+		ret = dsdb_find_nc_root(service->samdb, partition,
+					partition->dn, &nc_root);
+		if (ret != LDB_SUCCESS) {
+			DEBUG(0,(__location__ ": Failed to find nc_root for %s\n",
+				 ldb_dn_get_linearized(partition->dn)));
+			tevent_req_nterror(req, NT_STATUS_INTERNAL_ERROR);
+			return;
+		}
+	} else {
+		nc_root = partition->dn;
+	}
+
 	status = dsdb_replicated_objects_convert(service->samdb,
 						 working_schema ? working_schema : schema,
-						 partition->nc.dn,
+						 nc_root,
 						 mapping_ctr,
 						 object_count,
 						 first_object,
@@ -757,7 +772,93 @@ static void dreplsrv_op_pull_source_apply_changes_trigger(struct tevent_req *req
 						 &drsuapi->gensec_skey,
 						 dsdb_repl_flags,
 						 state, &objects);
-	if (!W_ERROR_IS_OK(status)) {
+
+	if (W_ERROR_EQUAL(status, WERR_DS_DRA_SCHEMA_MISMATCH)
+	    && state->op->source_dsa_retry == NULL) {
+		struct dreplsrv_partition *p;
+
+		/*
+		 * Change info sync or extended operation into a fetch
+		 * of the schema partition, so we get all the schema
+		 * objects we need.
+		 *
+		 * We don't want to re-do the remote exop,
+		 * unless it was REPL_SECRET so we set the
+		 * fallback operation to just be a fetch of
+		 * the relevent partition.
+		 */
+
+
+		if (state->op->extended_op == DRSUAPI_EXOP_REPL_SECRET) {
+			state->op->extended_op_retry = state->op->extended_op;
+		} else {
+			state->op->extended_op_retry = DRSUAPI_EXOP_NONE;
+		}
+		state->op->extended_op = DRSUAPI_EXOP_NONE;
+
+		if (ldb_dn_compare(nc_root, partition->dn) == 0) {
+			state->op->source_dsa_retry = state->op->source_dsa;
+		} else {
+			status = dreplsrv_partition_find_for_nc(service,
+								NULL, NULL,
+								ldb_dn_get_linearized(nc_root),
+								&p);
+			if (!W_ERROR_IS_OK(status)) {
+				DEBUG(2, ("Failed to find requested Naming Context for %s: %s",
+					  ldb_dn_get_linearized(nc_root),
+					  win_errstr(status)));
+				nt_status = werror_to_ntstatus(status);
+				tevent_req_nterror(req, nt_status);
+				return;
+			}
+			status = dreplsrv_partition_source_dsa_by_guid(p,
+								       &state->op->source_dsa->repsFrom1->source_dsa_obj_guid,
+								       &state->op->source_dsa_retry);
+
+			if (!W_ERROR_IS_OK(status)) {
+				struct GUID_txt_buf str;
+				DEBUG(2, ("Failed to find requested source DSA for %s and %s: %s",
+					  ldb_dn_get_linearized(nc_root),
+					  GUID_buf_string(&state->op->source_dsa->repsFrom1->source_dsa_obj_guid, &str),
+					  win_errstr(status)));
+				nt_status = werror_to_ntstatus(status);
+				tevent_req_nterror(req, nt_status);
+				return;
+			}
+		}
+
+		/* Find schmea naming context to be synchronized first */
+		status = dreplsrv_partition_find_for_nc(service,
+							NULL, NULL,
+							ldb_dn_get_linearized(schema_dn),
+							&p);
+		if (!W_ERROR_IS_OK(status)) {
+			DEBUG(2, ("Failed to find requested Naming Context for schema: %s",
+				  win_errstr(status)));
+			nt_status = werror_to_ntstatus(status);
+			tevent_req_nterror(req, nt_status);
+			return;
+		}
+
+		status = dreplsrv_partition_source_dsa_by_guid(p,
+							       &state->op->source_dsa->repsFrom1->source_dsa_obj_guid,
+							       &state->op->source_dsa);
+		if (!W_ERROR_IS_OK(status)) {
+			struct GUID_txt_buf str;
+			DEBUG(2, ("Failed to find requested source DSA for %s and %s: %s",
+				  ldb_dn_get_linearized(schema_dn),
+				  GUID_buf_string(&state->op->source_dsa->repsFrom1->source_dsa_obj_guid, &str),
+				  win_errstr(status)));
+			nt_status = werror_to_ntstatus(status);
+			tevent_req_nterror(req, nt_status);
+			return;
+		}
+		DEBUG(4,("Wrong schema when applying reply GetNCChanges, retrying\n"));
+
+		dreplsrv_op_pull_source_get_changes_trigger(req);
+		return;
+
+	} else if (!W_ERROR_IS_OK(status)) {
 		nt_status = werror_to_ntstatus(WERR_BAD_NET_RESP);
 		DEBUG(0,("Failed to convert objects: %s/%s\n",
 			  win_errstr(status), nt_errstr(nt_status)));
@@ -770,6 +871,7 @@ static void dreplsrv_op_pull_source_apply_changes_trigger(struct tevent_req *req
 						objects,
 						&state->op->source_dsa->notify_uSN);
 	talloc_free(objects);
+
 	if (!W_ERROR_IS_OK(status)) {
 		nt_status = werror_to_ntstatus(WERR_BAD_NET_RESP);
 		DEBUG(0,("Failed to commit objects: %s/%s\n",
@@ -791,6 +893,19 @@ static void dreplsrv_op_pull_source_apply_changes_trigger(struct tevent_req *req
 		return;
 	}
 
+	/*
+	 * If we had to divert via doing some other thing, such as
+	 * pulling the schema, then go back and do the original
+	 * operation once we are done.
+	 */
+	if (state->op->source_dsa_retry != NULL) {
+		state->op->source_dsa = state->op->source_dsa_retry;
+		state->op->extended_op = state->op->extended_op_retry;
+		state->op->source_dsa_retry = NULL;
+		dreplsrv_op_pull_source_get_changes_trigger(req);
+		return;
+	}
+
 	if (state->op->extended_op != DRSUAPI_EXOP_NONE ||
 	    state->op->service->am_rodc) {
 		/*
diff --git a/source4/dsdb/repl/drepl_service.h b/source4/dsdb/repl/drepl_service.h
index edba4c4..317fa87 100644
--- a/source4/dsdb/repl/drepl_service.h
+++ b/source4/dsdb/repl/drepl_service.h
@@ -130,6 +130,13 @@ struct dreplsrv_out_operation {
 	enum drsuapi_DsExtendedError extended_ret;
 	dreplsrv_extended_callback_t callback;
 	void *cb_data;
+
+	/*
+	 * Used when we have to re-try with a different NC, eg for
+	 * EXOP retry or to get a current schema first
+	 */
+	struct dreplsrv_partition_source_dsa *source_dsa_retry;
+	enum drsuapi_DsExtendedOperation extended_op_retry;
 };
 
 struct dreplsrv_notify_operation {
diff --git a/source4/dsdb/repl/replicated_objects.c b/source4/dsdb/repl/replicated_objects.c
index 0ac2b54..44a766b 100644
--- a/source4/dsdb/repl/replicated_objects.c
+++ b/source4/dsdb/repl/replicated_objects.c
@@ -639,7 +639,7 @@ WERROR dsdb_convert_object_ex(struct ldb_context *ldb,
 
 WERROR dsdb_replicated_objects_convert(struct ldb_context *ldb,
 				       const struct dsdb_schema *schema,
-				       const char *partition_dn_str,
+				       struct ldb_dn *partition_dn,
 				       const struct drsuapi_DsReplicaOIDMapping_Ctr *mapping_ctr,
 				       uint32_t object_count,
 				       const struct drsuapi_DsReplicaObjectListItemEx *first_object,
@@ -653,7 +653,6 @@ WERROR dsdb_replicated_objects_convert(struct ldb_context *ldb,
 				       struct dsdb_extended_replicated_objects **objects)
 {
 	WERROR status;
-	struct ldb_dn *partition_dn;
 	struct dsdb_schema_prefixmap *pfm_remote;
 	struct dsdb_extended_replicated_objects *out;
 	const struct drsuapi_DsReplicaObjectListItemEx *cur;
@@ -671,9 +670,6 @@ WERROR dsdb_replicated_objects_convert(struct ldb_context *ldb,
 	schema = talloc_reference(out, schema);
 	W_ERROR_HAVE_NO_MEMORY(schema);
 
-	partition_dn = ldb_dn_new(out, ldb, partition_dn_str);
-	W_ERROR_HAVE_NO_MEMORY_AND_FREE(partition_dn, out);
-
 	status = dsdb_schema_pfm_from_drsuapi_pfm(mapping_ctr, true,
 						  out, &pfm_remote, NULL);


-- 
Samba Shared Repository



More information about the samba-cvs mailing list