[SCM] Samba Shared Repository - branch v4-7-test updated

Karolin Seeger kseeger at samba.org
Thu Dec 14 15:33:02 UTC 2017


The branch, v4-7-test has been updated
       via  40ac029 winbindd: idmap_rid: error code for failing id-to-sid mapping request
       via  e92edf0 winbindd: idmap_rid: don't rely on the static domain list
       via  5724d65 winbindd: pass domain SID to wbint_UnixIDs2Sids
       via  5ffedec winbindd: add domain SID to idmap mapping domains
       via  dce01ac pthreadpool: Add a test for the race condition fixed in the last commit
       via  b418ab3 pthreadpool: Fix starvation after fork
       via  2530053 third_party: Fix a typo in the option name
       via  364f1c5 third_party: Link the aesni-intel library with -z noexecstack
      from  2d1d00b s3: libsmb: Fix reversing of oldname/newname paths when creating a reparse point symlink on Windows from smbclient.

https://git.samba.org/?p=samba.git;a=shortlog;h=v4-7-test


- Log -----------------------------------------------------------------
commit 40ac02968ec88acca9ca7cf0345aa5b4b9ba39b4
Author: Ralph Boehme <slow at samba.org>
Date:   Mon Oct 9 13:29:05 2017 +0200

    winbindd: idmap_rid: error code for failing id-to-sid mapping request
    
    NT_STATUS_NO_SUCH_DOMAIN triggers complete request failure in the parent
    winbindd. By returning NT_STATUS_NONE_MAPPED, winbindd lets the individual
    mapping fail but keeps processing any remaining mapping requests.
    
    Bug: https://bugzilla.samba.org/show_bug.cgi?id=13052
    
    Signed-off-by: Ralph Boehme <slow at samba.org>
    Reviewed-by: Volker Lendecke <vl at samba.org>
    
    Autobuild-User(master): Ralph Böhme <slow at samba.org>
    Autobuild-Date(master): Tue Oct 10 19:57:37 CEST 2017 on sn-devel-144
    
    (cherry picked from commit 490c35df35bad6c2f1c4acd2f056d6fdc480ec1f)
    
    Autobuild-User(v4-7-test): Karolin Seeger <kseeger at samba.org>
    Autobuild-Date(v4-7-test): Thu Dec 14 16:32:49 CET 2017 on sn-devel-144

commit e92edf0db5b4f58666b7826448f242f2ceda3d26
Author: Ralph Boehme <slow at samba.org>
Date:   Mon Sep 25 15:42:08 2017 +0200

    winbindd: idmap_rid: don't rely on the static domain list
    
    The domain list in the idmap child is inherited from the parent winbindd
    process and may not contain all domains if the enumeration of trusted
    domains hasn't finished before the first winbind request that triggers
    the idmap child fork arrives.
    
    The previous commits added the domain SID as an additional argument to
    the wbint_UnixIDs2Sids request, storing the domain SID in struct
    idmap_domain.
    
    Bug: https://bugzilla.samba.org/show_bug.cgi?id=13052
    
    Signed-off-by: Ralph Boehme <slow at samba.org>
    Reviewed-by: Volker Lendecke <vl at samba.org>
    (cherry picked from commit 108675c4cf4c3d5bd29468255743423a56bd1471)

commit 5724d651c450a1980ec5899e1afa870060c7ae59
Author: Ralph Boehme <slow at samba.org>
Date:   Mon Sep 25 15:39:39 2017 +0200

    winbindd: pass domain SID to wbint_UnixIDs2Sids
    
    This makes the domain SID available to the idmap child for
    wbint_UnixIDs2Sids mapping requests. It is not used anywhere yet; that
    comes in the next commit.
    
    Bug: https://bugzilla.samba.org/show_bug.cgi?id=13052
    
    Signed-off-by: Ralph Boehme <slow at samba.org>
    Reviewed-by: Volker Lendecke <vl at samba.org>
    (cherry picked from commit 71f99cb132f4c26f9febac6cb7dcd79f4940216a)

commit 5ffedec128d6087eecc447a3ea7ef1e507e08b41
Author: Ralph Boehme <slow at samba.org>
Date:   Mon Sep 25 13:25:57 2017 +0200

    winbindd: add domain SID to idmap mapping domains
    
    Fetch the domain SID for every domain in the idmap-domain map. This is
    in preparation for passing the domain SID as an additional argument to
    xid2sid requests to the idmap child.
    
    Bug: https://bugzilla.samba.org/show_bug.cgi?id=13052
    
    Signed-off-by: Ralph Boehme <slow at samba.org>
    Reviewed-by: Volker Lendecke <vl at samba.org>
    (cherry picked from commit 59438bfd3d3551195582cf88bd1109c3cbc7e12a)

commit dce01acabed7fc8e8ad35369971f0cfd2ddefbc3
Author: Volker Lendecke <vl at samba.org>
Date:   Wed Nov 29 18:55:21 2017 +0100

    pthreadpool: Add a test for the race condition fixed in the last commit
    
    Bug: https://bugzilla.samba.org/show_bug.cgi?id=13179
    Signed-off-by: Volker Lendecke <vl at samba.org>
    Reviewed-by: Jeremy Allison <jra at samba.org>
    (cherry picked from commit 53f7bbca0451e4f57cdbe8ab4f67f601fe8d40c1)

commit b418ab369876f6aff08ee564a7f07d9e5dc0bcc0
Author: Volker Lendecke <vl at samba.org>
Date:   Wed Nov 29 16:45:40 2017 +0100

    pthreadpool: Fix starvation after fork
    
    After the race is before the race:
    
    1) Create an idle thread
    2) Add a job: This won't create a thread anymore
    3) Immediately fork
    
    The idle thread is signalled twice before it actually wakes up: both
    pthreadpool_add_job and pthreadpool_prepare_pool call cond_signal, for
    different reasons. We must look at pool->prefork_cond first, because
    otherwise we would end up in a blocking job deep within a fork call; the
    helper thread must take its fingers off the condvar as quickly as
    possible. This means that after the fork there is no idle thread around
    anymore that would pick up the job submitted in 2). So we must keep the
    idle threads around across the fork.
    
    The quick solution of re-creating one helper thread in pthreadpool_parent
    has a fatal flaw: what do we do if that pthread_create call fails? We're
    deep in an application calling fork(), and doing fancy signalling from
    there is really something we must avoid.
    
    This has one potential performance issue: If we have hundreds of idle
    threads during the fork (do we ever have that?), the call to
    pthread_mutex_lock on the fork_mutex from pthreadpool_server (the helper
    thread) will probably cause a thundering herd when the _parent call
    unlocks the fork_mutex. The solution for this is to keep just one idle
    thread around, but that adds code that is not strictly required
    functionally for now.
    
    More detailed explanation from Jeremy:
    
    First, understanding the problem the test reproduces:
    
    add a job (num_jobs = 1) -> creates thread to run it.
    job finishes, thread sticks around (num_idle = 1).
    num_jobs is now zero (initial job finished).
    
    a) Idle thread is now waiting on pool->condvar inside
    pthreadpool_server() in pthread_cond_timedwait().
    
    Now, add another job ->
    
    	pthreadpool_add_job()
    		-> pthreadpool_put_job()
    			This adds the job to the queue.
    		Oh, there is an idle thread so don't
    		create one, do:
    
    		pthread_cond_signal(&pool->condvar);
    
    		and return.
    
    Now call fork *before* idle thread in (a) wakes from
    the signaling of pool->condvar.
    
    In the parent (child is irrelevant):
    
    Go into: pthreadpool_prepare() ->
    		pthreadpool_prepare_pool()
    
    		Set the variable to tell idle threads to exit:
    
    		pool->prefork_cond = &prefork_cond;
    
    		then wake them up with:
    
    		pthread_cond_signal(&pool->condvar);
    
    		This does nothing as the idle thread
    		is already awoken.
    
    b) Idle thread wakes up and does:
    
    		Reduce idle thread count (num_idle = 0)
    
    		pool->num_idle -= 1;
    
    		Check if we're in the middle of a fork.
    
    		if (pool->prefork_cond != NULL) {
    
    			Yes we are, tell pthreadpool_prepare()
    			we are exiting.
    
    			pthread_cond_signal(pool->prefork_cond);
    
    			And exit.
    
    			pthreadpool_server_exit(pool);
    			return NULL;
    		}
    
    So we come back from the fork in the parent with num_jobs = 1,
    a job on the queue but no idle threads - and the code that
    creates a new thread on job submission was skipped because
    an idle thread existed at point (a).
    
    OK, assuming that the previous explanation is correct, the
    fix is to create a new pthreadpool context mutex:
    
    pool->fork_mutex
    
    and in pthreadpool_server(), when an idle thread wakes up and
    notices we're in the prepare fork state, it puts itself to
    sleep by waiting on the new pool->fork_mutex.
    
    And in pthreadpool_prepare_pool(), instead of waiting for
    the idle threads to exit, hold the pool->fork_mutex and
    signal each idle thread in turn, and wait for the pool->num_idle
    to go to zero - which means they're all blocked waiting on
    pool->fork_mutex.
    
    When the parent continues, pthreadpool_parent()
    unlocks the pool->fork_mutex and all the previously
    'idle' threads wake up (and you mention the thundering
    herd problem, which is as you say vanishingly small :-)
    and pick up any remaining job.
    
    Bug: https://bugzilla.samba.org/show_bug.cgi?id=13179
    Signed-off-by: Volker Lendecke <vl at samba.org>
    Reviewed-by: Jeremy Allison <jra at samba.org>
    (cherry picked from commit f6858505aec9f1004aeaffa83f21e58868749d65)

commit 253005363e36b3f30fb59f3f3b8dadc490891dfe
Author: Andreas Schneider <asn at samba.org>
Date:   Mon Dec 4 11:00:10 2017 +0100

    third_party: Fix a typo in the option name
    
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=13174
    
    Signed-off-by: Andreas Schneider <asn at samba.org>
    Reviewed-by: Alexander Bokovoy <ab at samba.org>
    (cherry picked from commit e7e68958025937f97554cd956ca482dfe507f803)

commit 364f1c52423fd2d84eb3282e90efa06268f648a2
Author: Björn Baumbach <bb at sernet.de>
Date:   Mon Dec 4 10:49:19 2017 +0100

    third_party: Link the aesni-intel library with -z noexecstack
    
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=13174
    
    Signed-off-by: Björn Baumbach <bb at sernet.de>
    Reviewed-by: Andreas Schneider <asn at samba.org>
    Reviewed-by: Alexander Bokovoy <ab at samba.org>
    (cherry picked from commit 6015cfad6ebf46b9f311a069dd960ff5af5bdcd8)

-----------------------------------------------------------------------

Summary of changes:
 lib/pthreadpool/pthreadpool.c        |  93 +++++++++++++++----
 lib/pthreadpool/tests.c              |  82 +++++++++++++++++
 librpc/idl/winbind.idl               |   1 +
 source3/include/idmap.h              |   5 ++
 source3/winbindd/idmap.c             |   4 +-
 source3/winbindd/idmap_proto.h       |   3 +-
 source3/winbindd/idmap_rid.c         |  10 +--
 source3/winbindd/wb_xids2sids.c      | 167 +++++++++++++++++++++++++++++++----
 source3/winbindd/winbindd_dual_srv.c |   3 +-
 third_party/aesni-intel/wscript      |   7 +-
 10 files changed, 330 insertions(+), 45 deletions(-)


Changeset truncated at 500 lines:

diff --git a/lib/pthreadpool/pthreadpool.c b/lib/pthreadpool/pthreadpool.c
index 309aba9..b70694a 100644
--- a/lib/pthreadpool/pthreadpool.c
+++ b/lib/pthreadpool/pthreadpool.c
@@ -91,11 +91,19 @@ struct pthreadpool {
 	int num_idle;
 
 	/*
-	 * Condition variable indicating that we should quickly go
-	 * away making way for fork() without anybody waiting on
-	 * pool->condvar.
+	 * Condition variable indicating that helper threads should
+	 * quickly go away making way for fork() without anybody
+	 * waiting on pool->condvar.
 	 */
 	pthread_cond_t *prefork_cond;
+
+	/*
+	 * Waiting position for helper threads while fork is
+	 * running. The forking thread will have locked it, and all
+	 * idle helper threads will sit here until after the fork,
+	 * where the forking thread will unlock it again.
+	 */
+	pthread_mutex_t fork_mutex;
 };
 
 static pthread_mutex_t pthreadpools_mutex = PTHREAD_MUTEX_INITIALIZER;
@@ -151,6 +159,15 @@ int pthreadpool_init(unsigned max_threads, struct pthreadpool **presult,
 		return ret;
 	}
 
+	ret = pthread_mutex_init(&pool->fork_mutex, NULL);
+	if (ret != 0) {
+		pthread_cond_destroy(&pool->condvar);
+		pthread_mutex_destroy(&pool->mutex);
+		free(pool->jobs);
+		free(pool);
+		return ret;
+	}
+
 	pool->shutdown = false;
 	pool->num_threads = 0;
 	pool->max_threads = max_threads;
@@ -159,6 +176,7 @@ int pthreadpool_init(unsigned max_threads, struct pthreadpool **presult,
 
 	ret = pthread_mutex_lock(&pthreadpools_mutex);
 	if (ret != 0) {
+		pthread_mutex_destroy(&pool->fork_mutex);
 		pthread_cond_destroy(&pool->condvar);
 		pthread_mutex_destroy(&pool->mutex);
 		free(pool->jobs);
@@ -179,18 +197,26 @@ int pthreadpool_init(unsigned max_threads, struct pthreadpool **presult,
 
 static void pthreadpool_prepare_pool(struct pthreadpool *pool)
 {
-	pthread_cond_t prefork_cond = PTHREAD_COND_INITIALIZER;
 	int ret;
 
+	ret = pthread_mutex_lock(&pool->fork_mutex);
+	assert(ret == 0);
+
 	ret = pthread_mutex_lock(&pool->mutex);
 	assert(ret == 0);
 
 	while (pool->num_idle != 0) {
+		int num_idle = pool->num_idle;
+		pthread_cond_t prefork_cond;
+
+		ret = pthread_cond_init(&prefork_cond, NULL);
+		assert(ret == 0);
+
 		/*
-		 * Exit all idle threads, which are all blocked in
-		 * pool->condvar. In the child we can destroy the
-		 * pool, which would result in undefined behaviour in
-		 * the pthread_cond_destroy(pool->condvar). glibc just
+		 * Push all idle threads off pool->condvar. In the
+		 * child we can destroy the pool, which would result
+		 * in undefined behaviour in the
+		 * pthread_cond_destroy(pool->condvar). glibc just
 		 * blocks here.
 		 */
 		pool->prefork_cond = &prefork_cond;
@@ -198,14 +224,16 @@ static void pthreadpool_prepare_pool(struct pthreadpool *pool)
 		ret = pthread_cond_signal(&pool->condvar);
 		assert(ret == 0);
 
-		ret = pthread_cond_wait(&prefork_cond, &pool->mutex);
-		assert(ret == 0);
+		while (pool->num_idle == num_idle) {
+			ret = pthread_cond_wait(&prefork_cond, &pool->mutex);
+			assert(ret == 0);
+		}
 
 		pool->prefork_cond = NULL;
-	}
 
-	ret = pthread_cond_destroy(&prefork_cond);
-	assert(ret == 0);
+		ret = pthread_cond_destroy(&prefork_cond);
+		assert(ret == 0);
+	}
 
 	/*
 	 * Probably it's well-defined somewhere: What happens to
@@ -246,6 +274,8 @@ static void pthreadpool_parent(void)
 		assert(ret == 0);
 		ret = pthread_mutex_unlock(&pool->mutex);
 		assert(ret == 0);
+		ret = pthread_mutex_unlock(&pool->fork_mutex);
+		assert(ret == 0);
 	}
 
 	ret = pthread_mutex_unlock(&pthreadpools_mutex);
@@ -268,8 +298,12 @@ static void pthreadpool_child(void)
 
 		ret = pthread_cond_init(&pool->condvar, NULL);
 		assert(ret == 0);
+
 		ret = pthread_mutex_unlock(&pool->mutex);
 		assert(ret == 0);
+
+		ret = pthread_mutex_unlock(&pool->fork_mutex);
+		assert(ret == 0);
 	}
 
 	ret = pthread_mutex_unlock(&pthreadpools_mutex);
@@ -284,7 +318,7 @@ static void pthreadpool_prep_atfork(void)
 
 static int pthreadpool_free(struct pthreadpool *pool)
 {
-	int ret, ret1;
+	int ret, ret1, ret2;
 
 	ret = pthread_mutex_lock(&pthreadpools_mutex);
 	if (ret != 0) {
@@ -296,6 +330,7 @@ static int pthreadpool_free(struct pthreadpool *pool)
 
 	ret = pthread_mutex_destroy(&pool->mutex);
 	ret1 = pthread_cond_destroy(&pool->condvar);
+	ret2 = pthread_mutex_destroy(&pool->fork_mutex);
 
 	if (ret != 0) {
 		return ret;
@@ -303,6 +338,9 @@ static int pthreadpool_free(struct pthreadpool *pool)
 	if (ret1 != 0) {
 		return ret1;
 	}
+	if (ret2 != 0) {
+		return ret2;
+	}
 
 	free(pool->jobs);
 	free(pool);
@@ -467,11 +505,30 @@ static void *pthreadpool_server(void *arg)
 				/*
 				 * Me must allow fork() to continue
 				 * without anybody waiting on
-				 * &pool->condvar.
+				 * &pool->condvar. Tell
+				 * pthreadpool_prepare_pool that we
+				 * got that message.
 				 */
-				pthread_cond_signal(pool->prefork_cond);
-				pthreadpool_server_exit(pool);
-				return NULL;
+
+				res = pthread_cond_signal(pool->prefork_cond);
+				assert(res == 0);
+
+				res = pthread_mutex_unlock(&pool->mutex);
+				assert(res == 0);
+
+				/*
+				 * pthreadpool_prepare_pool has
+				 * already locked this mutex across
+				 * the fork. This makes us wait
+				 * without sitting in a condvar.
+				 */
+				res = pthread_mutex_lock(&pool->fork_mutex);
+				assert(res == 0);
+				res = pthread_mutex_unlock(&pool->fork_mutex);
+				assert(res == 0);
+
+				res = pthread_mutex_lock(&pool->mutex);
+				assert(res == 0);
 			}
 
 			if (res == ETIMEDOUT) {
diff --git a/lib/pthreadpool/tests.c b/lib/pthreadpool/tests.c
index 9991182..0ea285d 100644
--- a/lib/pthreadpool/tests.c
+++ b/lib/pthreadpool/tests.c
@@ -300,6 +300,82 @@ static int test_busyfork(void)
 	return 0;
 }
 
+static int test_busyfork2(void)
+{
+	struct pthreadpool_pipe *p;
+	pid_t child;
+	int ret, jobnum;
+	struct pollfd pfd;
+
+	ret = pthreadpool_pipe_init(1, &p);
+	if (ret != 0) {
+		fprintf(stderr, "pthreadpool_pipe_init failed: %s\n",
+			strerror(ret));
+		return -1;
+	}
+
+	ret = pthreadpool_pipe_add_job(p, 1, busyfork_job, NULL);
+	if (ret != 0) {
+		fprintf(stderr, "pthreadpool_add_job failed: %s\n",
+			strerror(ret));
+		return -1;
+	}
+
+	ret = pthreadpool_pipe_finished_jobs(p, &jobnum, 1);
+	if (ret != 1) {
+		fprintf(stderr, "pthreadpool_pipe_finished_jobs failed\n");
+		return -1;
+	}
+
+	ret = poll(NULL, 0, 10);
+	if (ret == -1) {
+		perror("poll failed");
+		return -1;
+	}
+
+	ret = pthreadpool_pipe_add_job(p, 1, busyfork_job, NULL);
+	if (ret != 0) {
+		fprintf(stderr, "pthreadpool_add_job failed: %s\n",
+			strerror(ret));
+		return -1;
+	}
+
+	/*
+	 * Do the fork right after the add_job. This tests a race
+	 * where the atfork prepare handler gets all idle threads off
+	 * the condvar. If we are faster doing the fork than the
+	 * existing idle thread could get out of idle and take the
+	 * job, after the fork we end up with no threads to take care
+	 * of the job.
+	 */
+
+	child = fork();
+	if (child < 0) {
+		perror("fork failed");
+		return -1;
+	}
+
+	if (child == 0) {
+		exit(0);
+	}
+
+	pfd = (struct pollfd) {
+		.fd = pthreadpool_pipe_signal_fd(p),
+		.events = POLLIN|POLLERR
+	};
+
+	do {
+		ret = poll(&pfd, 1, 5000);
+	} while ((ret == -1) && (errno == EINTR));
+
+	if (ret == 0) {
+		fprintf(stderr, "job unfinished after 5 seconds\n");
+		return -1;
+	}
+
+	return 0;
+}
+
 static void test_tevent_wait(void *private_data)
 {
 	int *timeout = private_data;
@@ -415,6 +491,12 @@ int main(void)
 		return 1;
 	}
 
+	ret = test_busyfork2();
+	if (ret != 0) {
+		fprintf(stderr, "test_busyfork2 failed\n");
+		return 1;
+	}
+
 	printf("success\n");
 	return 0;
 }
diff --git a/librpc/idl/winbind.idl b/librpc/idl/winbind.idl
index 737d66a..f5e3507 100644
--- a/librpc/idl/winbind.idl
+++ b/librpc/idl/winbind.idl
@@ -58,6 +58,7 @@ interface winbind
 
     NTSTATUS wbint_UnixIDs2Sids(
 	[in,string,charset(UTF8)] char *domain_name,
+	[in] dom_sid domain_sid,
 	[in] uint32 num_ids,
 	[in,out] unixid xids[num_ids],
 	[out] dom_sid sids[num_ids]
diff --git a/source3/include/idmap.h b/source3/include/idmap.h
index 75d2e45..8d80643 100644
--- a/source3/include/idmap.h
+++ b/source3/include/idmap.h
@@ -37,6 +37,11 @@ struct wbint_userinfo;
 
 struct idmap_domain {
 	const char *name;
+	/*
+	 * dom_sid is currently only initialized in the unixids_to_sids request,
+	 * so don't rely on this being filled out everywhere!
+	 */
+	struct dom_sid dom_sid;
 	struct idmap_methods *methods;
 	NTSTATUS (*query_user)(struct idmap_domain *domain,
 			       struct wbint_userinfo *info);
diff --git a/source3/winbindd/idmap.c b/source3/winbindd/idmap.c
index 70f35ae..cee065b 100644
--- a/source3/winbindd/idmap.c
+++ b/source3/winbindd/idmap.c
@@ -605,7 +605,8 @@ NTSTATUS idmap_allocate_gid(struct unixid *id)
 }
 
 NTSTATUS idmap_backend_unixids_to_sids(struct id_map **maps,
-				       const char *domain_name)
+				       const char *domain_name,
+				       struct dom_sid domain_sid)
 {
 	struct idmap_domain *dom = NULL;
 	NTSTATUS status;
@@ -626,6 +627,7 @@ NTSTATUS idmap_backend_unixids_to_sids(struct id_map **maps,
 		return NT_STATUS_NONE_MAPPED;
 	}
 
+	dom->dom_sid = domain_sid;
 	status = dom->methods->unixids_to_sids(dom, maps);
 
 	DBG_DEBUG("unixid_to_sids for domain %s returned %s\n",
diff --git a/source3/winbindd/idmap_proto.h b/source3/winbindd/idmap_proto.h
index 596c22f..a51560d 100644
--- a/source3/winbindd/idmap_proto.h
+++ b/source3/winbindd/idmap_proto.h
@@ -35,7 +35,8 @@ void idmap_close(void);
 NTSTATUS idmap_allocate_uid(struct unixid *id);
 NTSTATUS idmap_allocate_gid(struct unixid *id);
 NTSTATUS idmap_backend_unixids_to_sids(struct id_map **maps,
-				       const char *domain_name);
+				       const char *domain_name,
+				       struct dom_sid domain_sid);
 struct idmap_domain *idmap_find_domain(const char *domname);
 
 /* The following definitions come from winbindd/idmap_nss.c  */
diff --git a/source3/winbindd/idmap_rid.c b/source3/winbindd/idmap_rid.c
index 10088b5..b066ba3 100644
--- a/source3/winbindd/idmap_rid.c
+++ b/source3/winbindd/idmap_rid.c
@@ -54,7 +54,6 @@ static NTSTATUS idmap_rid_initialize(struct idmap_domain *dom)
 
 static NTSTATUS idmap_rid_id_to_sid(struct idmap_domain *dom, struct id_map *map)
 {
-	struct winbindd_domain *domain;
 	struct idmap_rid_context *ctx;
 
 	ctx = talloc_get_type(dom->private_data, struct idmap_rid_context);
@@ -66,12 +65,13 @@ static NTSTATUS idmap_rid_id_to_sid(struct idmap_domain *dom, struct id_map *map
 		return NT_STATUS_NONE_MAPPED;
 	}
 
-	domain = find_domain_from_name_noinit(dom->name);
-	if (domain == NULL ) {
-		return NT_STATUS_NO_SUCH_DOMAIN;
+	if (is_null_sid(&dom->dom_sid)) {
+		DBG_INFO("idmap domain '%s' without SID\n", dom->name);
+		return NT_STATUS_NONE_MAPPED;
 	}
 
-	sid_compose(map->sid, &domain->sid, map->xid.id - dom->low_id + ctx->base_rid);
+	sid_compose(map->sid, &dom->dom_sid,
+		    map->xid.id - dom->low_id + ctx->base_rid);
 
 	map->status = ID_MAPPED;
 	map->xid.type = ID_TYPE_BOTH;
diff --git a/source3/winbindd/wb_xids2sids.c b/source3/winbindd/wb_xids2sids.c
index 15e94c4..a2a4493 100644
--- a/source3/winbindd/wb_xids2sids.c
+++ b/source3/winbindd/wb_xids2sids.c
@@ -23,11 +23,13 @@
 #include "idmap_cache.h"
 #include "librpc/gen_ndr/ndr_winbind_c.h"
 #include "librpc/gen_ndr/ndr_netlogon.h"
+#include "passdb/lookup_sid.h"
 
 struct wb_xids2sids_dom_map {
 	unsigned low_id;
 	unsigned high_id;
 	const char *name;
+	struct dom_sid sid;
 };
 
 /*
@@ -93,6 +95,7 @@ static bool wb_xids2sids_add_dom(const char *domname,
 		dom_maps = tmp;
 
 		map = &dom_maps[num_maps];
+		ZERO_STRUCTP(map);
 		map->name = talloc_move(dom_maps, &name);
 	}
 
@@ -102,30 +105,138 @@ static bool wb_xids2sids_add_dom(const char *domname,
 	return false;
 }
 
-static void wb_xids2sids_init_dom_maps(void)
+struct wb_xids2sids_init_dom_maps_state {
+	struct tevent_context *ev;
+	struct tevent_req *req;
+	size_t dom_idx;
+};
+
+static void wb_xids2sids_init_dom_maps_lookupname_next(
+	struct wb_xids2sids_init_dom_maps_state *state);
+
+static void wb_xids2sids_init_dom_maps_lookupname_done(
+	struct tevent_req *subreq);
+
+static struct tevent_req *wb_xids2sids_init_dom_maps_send(
+	TALLOC_CTX *mem_ctx, struct tevent_context *ev)
 {
-	if (dom_maps != NULL) {
-		return;
+	struct tevent_req *req = NULL;
+	struct wb_xids2sids_init_dom_maps_state *state = NULL;
+
+	req = tevent_req_create(mem_ctx, &state,
+				struct wb_xids2sids_init_dom_maps_state);
+	if (req == NULL) {
+		return NULL;
 	}
+	*state = (struct wb_xids2sids_init_dom_maps_state) {
+		.ev = ev,
+		.req = req,
+		.dom_idx = 0,
+	};
 
+	if (dom_maps != NULL) {
+		tevent_req_done(req);
+		return tevent_req_post(req, ev);
+	}
 	/*
 	 * Put the passdb idmap domain first. We always need to try
 	 * there first.
 	 */
 
-	dom_maps = talloc_array(NULL, struct wb_xids2sids_dom_map, 1);
-	if (dom_maps == NULL) {
-		return;
+	dom_maps = talloc_zero_array(NULL, struct wb_xids2sids_dom_map, 1);
+	if (tevent_req_nomem(dom_maps, req)) {
+		return tevent_req_post(req, ev);
 	}
 	dom_maps[0].low_id = 0;
 	dom_maps[0].high_id = UINT_MAX;
 	dom_maps[0].name = talloc_strdup(dom_maps, get_global_sam_name());
-	if (dom_maps[0].name == NULL) {
+	if (tevent_req_nomem(dom_maps[0].name, req)) {
 		TALLOC_FREE(dom_maps);
-		return;
+		return tevent_req_post(req, ev);
 	}
 
 	lp_scan_idmap_domains(wb_xids2sids_add_dom, NULL);
+
+	wb_xids2sids_init_dom_maps_lookupname_next(state);
+	if (!tevent_req_is_in_progress(req)) {
+		tevent_req_post(req, ev);
+	}
+	return req;
+}
+
+static void wb_xids2sids_init_dom_maps_lookupname_next(
+	struct wb_xids2sids_init_dom_maps_state *state)
+{
+	struct tevent_req *subreq = NULL;
+
+	if (state->dom_idx == talloc_array_length(dom_maps)) {
+		tevent_req_done(state->req);
+		return;
+	}
+
+	if (strequal(dom_maps[state->dom_idx].name, "*")) {
+		state->dom_idx++;
+		if (state->dom_idx == talloc_array_length(dom_maps)) {
+			tevent_req_done(state->req);
+			return;
+		}
+	}
+
+	subreq = wb_lookupname_send(state,
+				    state->ev,


-- 
Samba Shared Repository


