impersonation part 4 (Re: [Patches] Preparation for tevent impersonation (part1))

Stefan Metzmacher metze at samba.org
Wed Jul 11 22:47:16 UTC 2018


Hi,

here's another patchset that implements some cleanups in the pthreadpool
code, which make it easier to implement the required pthreadpool wrapper
for per thread impersonation.

I'll do some private autobuilds with this tomorrow, but a very similar
state already passed a few times.

Please review :-) Note that some of this is already reviewed by Ralph;
I need to port the review tags from his branch to mine.

Thanks!
metze


Am 17.06.2018 um 22:59 schrieb Stefan Metzmacher via samba-technical:
> Am 14.06.2018 um 22:12 schrieb Jeremy Allison via samba-technical:
>> On Thu, Jun 14, 2018 at 08:13:07PM +0200, Stefan Metzmacher wrote:
>>> Hi Jeremy,
>>>
>>> here's the almost finished patchset that actually implements the
>>> impersonation for tevent. (The pthreadpool_tevent impersonation will
>>> follow later as part4).
>>>
>>> This is (as part2) already reviewed by Ralph, but we want to
>>> add some tests in tevent for the new wrapper infrastructure
>>> and add useful debug messages to the
>>> smbd_impersonate_{conn_vuid,conn_sess,root,guest}_create() wrappers.
>>> We'll do that once everything is otherwise complete, working and reviewed.
>>
>> Wow, this is a dense patchset :-). I'll study carefully,
>> and send feedback.
> 
> Here's the update on top of the current part2.
> 
> It also passed some autobuilds and is ready and reviewed code-wise.
> Just the debug messages in [PATCH 35/37] FIXUP: add debuggging to
> impersonation will be improved and squashed.
> 
> metze
> 

-------------- next part --------------
From 9eda624b496e7185d51960d9d638de8228de39a4 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 20 Apr 2018 15:05:44 +0200
Subject: [PATCH 01/20] pthreadpool: correctly handle
 pthreadpool_tevent_register_ev() failures

It returns errno values instead of setting 'errno'.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
Reviewed-by: Ralph Boehme <slow at samba.org>
---
 lib/pthreadpool/pthreadpool_tevent.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/lib/pthreadpool/pthreadpool_tevent.c b/lib/pthreadpool/pthreadpool_tevent.c
index 493083406ab0..b2292ef214c9 100644
--- a/lib/pthreadpool/pthreadpool_tevent.c
+++ b/lib/pthreadpool/pthreadpool_tevent.c
@@ -289,8 +289,7 @@ struct tevent_req *pthreadpool_tevent_job_send(
 	}
 
 	ret = pthreadpool_tevent_register_ev(pool, ev);
-	if (ret != 0) {
-		tevent_req_error(req, errno);
+	if (tevent_req_error(req, ret)) {
 		return tevent_req_post(req, ev);
 	}
 
-- 
2.17.1


From dd68233959342f2d193999fa42a58f81d4c78203 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 00:04:48 +0200
Subject: [PATCH 02/20] pthreadpool: use unsigned for num_idle, num_threads and
 max_threads

These values can never be negative.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/pthreadpool/pthreadpool.c b/lib/pthreadpool/pthreadpool.c
index 92a88c9ca849..0ab6f63dbf42 100644
--- a/lib/pthreadpool/pthreadpool.c
+++ b/lib/pthreadpool/pthreadpool.c
@@ -78,17 +78,17 @@ struct pthreadpool {
 	/*
 	 * maximum number of threads
 	 */
-	int max_threads;
+	unsigned max_threads;
 
 	/*
 	 * Number of threads
 	 */
-	int num_threads;
+	unsigned num_threads;
 
 	/*
 	 * Number of idle threads
 	 */
-	int num_idle;
+	unsigned num_idle;
 
 	/*
 	 * Condition variable indicating that helper threads should
@@ -206,7 +206,7 @@ static void pthreadpool_prepare_pool(struct pthreadpool *pool)
 	assert(ret == 0);
 
 	while (pool->num_idle != 0) {
-		int num_idle = pool->num_idle;
+		unsigned num_idle = pool->num_idle;
 		pthread_cond_t prefork_cond;
 
 		ret = pthread_cond_init(&prefork_cond, NULL);
-- 
2.17.1


From 04c214a59bef17c5440508cee97270347df702d3 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 00:10:08 +0200
Subject: [PATCH 03/20] pthreadpool: explicitly use max_thread=unlimited for
 pthreadpool_tevent_init() tests

Currently 0 also means unlimited, but that will change soon:
0 will mean no threads at all and strict sync processing.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/tests.c        | 3 ++-
 lib/pthreadpool/tests_cmocka.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/pthreadpool/tests.c b/lib/pthreadpool/tests.c
index f0ae0aa4a93b..d57754603a23 100644
--- a/lib/pthreadpool/tests.c
+++ b/lib/pthreadpool/tests.c
@@ -3,6 +3,7 @@
 #include <poll.h>
 #include <errno.h>
 #include <stdlib.h>
+#include <limits.h>
 #include <pthread.h>
 #include <unistd.h>
 #include <sys/types.h>
@@ -407,7 +408,7 @@ static int test_tevent_1(void)
 			strerror(ret));
 		return ret;
 	}
-	ret = pthreadpool_tevent_init(ev, 0, &pool);
+	ret = pthreadpool_tevent_init(ev, UINT_MAX, &pool);
 	if (ret != 0) {
 		fprintf(stderr, "pthreadpool_tevent_init failed: %s\n",
 			strerror(ret));
diff --git a/lib/pthreadpool/tests_cmocka.c b/lib/pthreadpool/tests_cmocka.c
index 9753d212e1ce..33952442e1d6 100644
--- a/lib/pthreadpool/tests_cmocka.c
+++ b/lib/pthreadpool/tests_cmocka.c
@@ -22,6 +22,7 @@
 #include <setjmp.h>
 #include <stdlib.h>
 #include <string.h>
+#include <limits.h>
 
 #include <talloc.h>
 #include <tevent.h>
@@ -46,7 +47,7 @@ static int setup_pthreadpool_tevent(void **state)
 	t->ev = tevent_context_init(t);
 	assert_non_null(t->ev);
 
-	ret = pthreadpool_tevent_init(t->ev, 0, &t->pool);
+	ret = pthreadpool_tevent_init(t->ev, UINT_MAX, &t->pool);
 	assert_return_code(ret, 0);
 
 	*state = t;
-- 
2.17.1


From 696e7eec6773a7fb08408473797c80ef24943934 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 00:10:08 +0200
Subject: [PATCH 04/20] s3:messages: explicitly use max_thread=unlimited for
 pthreadpool_tevent_init() in messaging_dgm_init()

Currently 0 also means unlimited, but that will change soon:
0 will mean no threads at all and strict sync processing.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 source3/lib/messages_dgm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/source3/lib/messages_dgm.c b/source3/lib/messages_dgm.c
index 0ad8f46e09f1..daaad9619e08 100644
--- a/source3/lib/messages_dgm.c
+++ b/source3/lib/messages_dgm.c
@@ -1072,7 +1072,7 @@ int messaging_dgm_init(struct tevent_context *ev,
 
 	ctx->have_dgm_context = &have_dgm_context;
 
-	ret = pthreadpool_tevent_init(ctx, 0, &ctx->pool);
+	ret = pthreadpool_tevent_init(ctx, UINT_MAX, &ctx->pool);
 	if (ret != 0) {
 		DBG_WARNING("pthreadpool_tevent_init failed: %s\n",
 			    strerror(ret));
-- 
2.17.1


From e904290c5fb3722f6e31b848a972fb833e1dbfb9 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 00:27:39 +0200
Subject: [PATCH 05/20] pthreadpool: consistently use unlock_res for
 pthread_mutex_unlock() in pthreadpool_add_job()

This makes further restructuring easier to implement and understand.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/lib/pthreadpool/pthreadpool.c b/lib/pthreadpool/pthreadpool.c
index 0ab6f63dbf42..0f695bcd769e 100644
--- a/lib/pthreadpool/pthreadpool.c
+++ b/lib/pthreadpool/pthreadpool.c
@@ -632,6 +632,7 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 			void (*fn)(void *private_data), void *private_data)
 {
 	int res;
+	int unlock_res;
 
 	res = pthread_mutex_lock(&pool->mutex);
 	if (res != 0) {
@@ -643,8 +644,8 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 		 * Protect against the pool being shut down while
 		 * trying to add a job
 		 */
-		res = pthread_mutex_unlock(&pool->mutex);
-		assert(res == 0);
+		unlock_res = pthread_mutex_unlock(&pool->mutex);
+		assert(unlock_res == 0);
 		return EINVAL;
 	}
 
@@ -652,13 +653,12 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 	 * Add job to the end of the queue
 	 */
 	if (!pthreadpool_put_job(pool, job_id, fn, private_data)) {
-		res = pthread_mutex_unlock(&pool->mutex);
-		assert(res == 0);
+		unlock_res = pthread_mutex_unlock(&pool->mutex);
+		assert(unlock_res == 0);
 		return ENOMEM;
 	}
 
 	if (pool->num_idle > 0) {
-		int unlock_res;
 		/*
 		 * We have idle threads, wake one.
 		 */
@@ -676,15 +676,15 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 		/*
 		 * No more new threads, we just queue the request
 		 */
-		res = pthread_mutex_unlock(&pool->mutex);
-		assert(res == 0);
+		unlock_res = pthread_mutex_unlock(&pool->mutex);
+		assert(unlock_res == 0);
 		return 0;
 	}
 
 	res = pthreadpool_create_thread(pool);
 	if (res == 0) {
-		res = pthread_mutex_unlock(&pool->mutex);
-		assert(res == 0);
+		unlock_res = pthread_mutex_unlock(&pool->mutex);
+		assert(unlock_res == 0);
 		return 0;
 	}
 
@@ -693,8 +693,8 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 		 * At least one thread is still available, let
 		 * that one run the queued job.
 		 */
-		res = pthread_mutex_unlock(&pool->mutex);
-		assert(res == 0);
+		unlock_res = pthread_mutex_unlock(&pool->mutex);
+		assert(unlock_res == 0);
 		return 0;
 	}
 
@@ -704,8 +704,8 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 	 */
 	pthreadpool_undo_put_job(pool);
 
-	res = pthread_mutex_unlock(&pool->mutex);
-	assert(res == 0);
+	unlock_res = pthread_mutex_unlock(&pool->mutex);
+	assert(unlock_res == 0);
 
 	fn(private_data);
 	res = pool->signal_fn(job_id, fn, private_data,
-- 
2.17.1


From 397accd09536b8caed303306cd9b583a8b0168b4 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 00:29:53 +0200
Subject: [PATCH 06/20] pthreadpool: use strict sync processing only with
 max_threads=0

Otherwise it is an error if not even a single thread can be created.

This gives a much saner behaviour and doesn't end up with
unexpected sync processing.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/lib/pthreadpool/pthreadpool.c b/lib/pthreadpool/pthreadpool.c
index 0f695bcd769e..31ff02dd5836 100644
--- a/lib/pthreadpool/pthreadpool.c
+++ b/lib/pthreadpool/pthreadpool.c
@@ -77,6 +77,7 @@ struct pthreadpool {
 
 	/*
 	 * maximum number of threads
+	 * 0 means no real threads, only strict sync processing.
 	 */
 	unsigned max_threads;
 
@@ -649,6 +650,19 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 		return EINVAL;
 	}
 
+	if (pool->max_threads == 0) {
+		unlock_res = pthread_mutex_unlock(&pool->mutex);
+		assert(unlock_res == 0);
+
+		/*
+		 * If no threads are allowed, we do strict sync processing.
+		 */
+		fn(private_data);
+		res = pool->signal_fn(job_id, fn, private_data,
+				      pool->signal_fn_private_data);
+		return res;
+	}
+
 	/*
 	 * Add job to the end of the queue
 	 */
@@ -671,8 +685,7 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 		return res;
 	}
 
-	if ((pool->max_threads != 0) &&
-	    (pool->num_threads >= pool->max_threads)) {
+	if (pool->num_threads >= pool->max_threads) {
 		/*
 		 * No more new threads, we just queue the request
 		 */
@@ -707,8 +720,5 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 	unlock_res = pthread_mutex_unlock(&pool->mutex);
 	assert(unlock_res == 0);
 
-	fn(private_data);
-	res = pool->signal_fn(job_id, fn, private_data,
-			      pool->signal_fn_private_data);
 	return res;
 }
-- 
2.17.1


From db77483f34243a2fc5c9ea81ce5daddb99869fa5 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Wed, 20 Jun 2018 15:38:08 +0200
Subject: [PATCH 07/20] pthreadpool: use talloc_zero() in tests_cmocka.c
 setup_pthreadpool_tevent()

This was found with valgrind.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/tests_cmocka.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/pthreadpool/tests_cmocka.c b/lib/pthreadpool/tests_cmocka.c
index 33952442e1d6..d80b8c8f775a 100644
--- a/lib/pthreadpool/tests_cmocka.c
+++ b/lib/pthreadpool/tests_cmocka.c
@@ -41,7 +41,7 @@ static int setup_pthreadpool_tevent(void **state)
 	struct pthreadpool_tevent_test *t;
 	int ret;
 
-	t = talloc(NULL, struct pthreadpool_tevent_test);
+	t = talloc_zero(NULL, struct pthreadpool_tevent_test);
 	assert_non_null(t);
 
 	t->ev = tevent_context_init(t);
-- 
2.17.1


From 5f7033bffc80a5d0ca80fbe054773fa7d17c6c9c Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Thu, 21 Jun 2018 12:40:30 +0200
Subject: [PATCH 08/20] pthreadpool: fix helgrind error in pthreadpool_free()

We need to pthread_mutex_lock/unlock the pool mutex
before we can destroy it.

The following test would trigger this.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/lib/pthreadpool/pthreadpool.c b/lib/pthreadpool/pthreadpool.c
index 31ff02dd5836..58ea857ded5b 100644
--- a/lib/pthreadpool/pthreadpool.c
+++ b/lib/pthreadpool/pthreadpool.c
@@ -329,6 +329,11 @@ static int pthreadpool_free(struct pthreadpool *pool)
 	ret = pthread_mutex_unlock(&pthreadpools_mutex);
 	assert(ret == 0);
 
+	ret = pthread_mutex_lock(&pool->mutex);
+	assert(ret == 0);
+	ret = pthread_mutex_unlock(&pool->mutex);
+	assert(ret == 0);
+
 	ret = pthread_mutex_destroy(&pool->mutex);
 	ret1 = pthread_cond_destroy(&pool->condvar);
 	ret2 = pthread_mutex_destroy(&pool->fork_mutex);
-- 
2.17.1


From 288b00c4f9c83c0adf330980a751fc13373c86dc Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 08:39:36 +0200
Subject: [PATCH 09/20] pthreadpool: expand test_create() to check unlimited,
 sync and one thread pool

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/tests_cmocka.c | 83 ++++++++++++++++++++++++++++------
 1 file changed, 70 insertions(+), 13 deletions(-)

diff --git a/lib/pthreadpool/tests_cmocka.c b/lib/pthreadpool/tests_cmocka.c
index d80b8c8f775a..677800892f65 100644
--- a/lib/pthreadpool/tests_cmocka.c
+++ b/lib/pthreadpool/tests_cmocka.c
@@ -33,7 +33,9 @@
 
 struct pthreadpool_tevent_test {
 	struct tevent_context *ev;
-	struct pthreadpool_tevent *pool;
+	struct pthreadpool_tevent *upool;
+	struct pthreadpool_tevent *spool;
+	struct pthreadpool_tevent *opool;
 };
 
 static int setup_pthreadpool_tevent(void **state)
@@ -47,7 +49,13 @@ static int setup_pthreadpool_tevent(void **state)
 	t->ev = tevent_context_init(t);
 	assert_non_null(t->ev);
 
-	ret = pthreadpool_tevent_init(t->ev, UINT_MAX, &t->pool);
+	ret = pthreadpool_tevent_init(t->ev, UINT_MAX, &t->upool);
+	assert_return_code(ret, 0);
+
+	ret = pthreadpool_tevent_init(t->ev, 1, &t->opool);
+	assert_return_code(ret, 0);
+
+	ret = pthreadpool_tevent_init(t->ev, 0, &t->spool);
 	assert_return_code(ret, 0);
 
 	*state = t;
@@ -91,20 +99,27 @@ static void test_job_threadid(void *ptr)
 
 static int test_create_do(struct tevent_context *ev,
 			  struct pthreadpool_tevent *pool,
+			  bool *executed,
 			  bool *in_main_thread)
 {
 	struct tevent_req *req;
-	pthread_t main_thread, worker_thread;
+	pthread_t zero_thread;
+	pthread_t main_thread;
+	pthread_t worker_thread;
 	bool ok;
 	int ret;
 
+	*executed = false;
+	*in_main_thread = false;
+
+	memset(&zero_thread, 0, sizeof(zero_thread));
 	main_thread = pthread_self();
+	worker_thread = zero_thread;
 
 	req = pthreadpool_tevent_job_send(
 		ev, ev, pool, test_job_threadid, &worker_thread);
 	if (req == NULL) {
 		fprintf(stderr, "pthreadpool_tevent_job_send failed\n");
-		TALLOC_FREE(ev);
 		return ENOMEM;
 	}
 
@@ -113,20 +128,21 @@ static int test_create_do(struct tevent_context *ev,
 		ret = errno;
 		fprintf(stderr, "tevent_req_poll failed: %s\n",
 			strerror(ret));
-		TALLOC_FREE(ev);
+		*executed = !pthread_equal(worker_thread, zero_thread);
+		*in_main_thread = pthread_equal(worker_thread, main_thread);
 		return ret;
 	}
 
 
 	ret = pthreadpool_tevent_job_recv(req);
 	TALLOC_FREE(req);
+	*executed = !pthread_equal(worker_thread, zero_thread);
+	*in_main_thread = pthread_equal(worker_thread, main_thread);
 	if (ret != 0) {
 		fprintf(stderr, "tevent_req_recv failed: %s\n",
 			strerror(ret));
-		TALLOC_FREE(ev);
 		return ret;
 	}
-	*in_main_thread = pthread_equal(worker_thread, main_thread);
 
 	return 0;
 }
@@ -134,6 +150,7 @@ static int test_create_do(struct tevent_context *ev,
 static void test_create(void **state)
 {
 	struct pthreadpool_tevent_test *t = *state;
+	bool executed;
 	bool in_main_thread;
 	int ret;
 
@@ -142,16 +159,34 @@ static void test_create(void **state)
 	 * this job will run in the sync fallback in the main thread.
 	 */
 	will_return(__wrap_pthread_create, EAGAIN);
-	ret = test_create_do(t->ev, t->pool, &in_main_thread);
-	assert_return_code(ret, 0);
+	ret = test_create_do(t->ev, t->upool, &executed, &in_main_thread);
+	assert_int_equal(ret, EAGAIN);
+	assert_false(executed);
+	assert_false(in_main_thread);
+
+	/*
+	 * The sync pool won't trigger pthread_create();
+	 * it will be triggered by the one-thread pool instead.
+	 */
+	will_return(__wrap_pthread_create, EAGAIN);
+
+	ret = test_create_do(t->ev, t->spool, &executed, &in_main_thread);
+	assert_int_equal(ret, 0);
+	assert_true(executed);
 	assert_true(in_main_thread);
 
+	ret = test_create_do(t->ev, t->opool, &executed, &in_main_thread);
+	assert_int_equal(ret, EAGAIN);
+	assert_false(executed);
+	assert_false(in_main_thread);
+
 	/*
 	 * When a thread can be created, the job will run in the worker thread.
 	 */
 	will_return(__wrap_pthread_create, 0);
-	ret = test_create_do(t->ev, t->pool, &in_main_thread);
-	assert_return_code(ret, 0);
+	ret = test_create_do(t->ev, t->upool, &executed, &in_main_thread);
+	assert_int_equal(ret, 0);
+	assert_true(executed);
 	assert_false(in_main_thread);
 
 	poll(NULL, 0, 10);
@@ -161,8 +196,30 @@ static void test_create(void **state)
 	 * running another job will also use the worker thread, even
 	 * if a new thread cannot be created.
 	 */
-	ret = test_create_do(t->ev, t->pool, &in_main_thread);
-	assert_return_code(ret, 0);
+	ret = test_create_do(t->ev, t->upool, &executed, &in_main_thread);
+	assert_int_equal(ret, 0);
+	assert_true(executed);
+	assert_false(in_main_thread);
+
+	/*
+	 * When a thread can be created, the job will run in the worker thread.
+	 */
+	will_return(__wrap_pthread_create, 0);
+	ret = test_create_do(t->ev, t->opool, &executed, &in_main_thread);
+	assert_int_equal(ret, 0);
+	assert_true(executed);
+	assert_false(in_main_thread);
+
+	poll(NULL, 0, 10);
+
+	/*
+	 * Worker thread will still be active for a second; immediately
+	 * running another job will also use the worker thread, even
+	 * if a new thread cannot be created.
+	 */
+	ret = test_create_do(t->ev, t->opool, &executed, &in_main_thread);
+	assert_int_equal(ret, 0);
+	assert_true(executed);
 	assert_false(in_main_thread);
 }
 
-- 
2.17.1


From 9332052642d657645994866641457090dc3d6c15 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 00:49:33 +0200
Subject: [PATCH 10/20] pthreadpool: add pthreadpool_max_threads() and
 pthreadpool_queued_jobs() helpers

These can be used to implement some kind of flow control in the caller.
E.g. as long as pthreadpool_queued_jobs() is lower than
pthreadpool_max_threads(), it is fine to prepare new jobs.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool.c      | 23 +++++++++++++++++++++++
 lib/pthreadpool/pthreadpool.h      | 21 +++++++++++++++++++++
 lib/pthreadpool/pthreadpool_sync.c | 10 ++++++++++
 3 files changed, 54 insertions(+)

diff --git a/lib/pthreadpool/pthreadpool.c b/lib/pthreadpool/pthreadpool.c
index 58ea857ded5b..6c51bc5272b3 100644
--- a/lib/pthreadpool/pthreadpool.c
+++ b/lib/pthreadpool/pthreadpool.c
@@ -196,6 +196,29 @@ int pthreadpool_init(unsigned max_threads, struct pthreadpool **presult,
 	return 0;
 }
 
+size_t pthreadpool_max_threads(struct pthreadpool *pool)
+{
+	return pool->max_threads;
+}
+
+size_t pthreadpool_queued_jobs(struct pthreadpool *pool)
+{
+	int res;
+	int unlock_res;
+	size_t ret;
+
+	res = pthread_mutex_lock(&pool->mutex);
+	if (res != 0) {
+		return 0;
+	}
+
+	ret = pool->num_jobs;
+
+	unlock_res = pthread_mutex_unlock(&pool->mutex);
+	assert(unlock_res == 0);
+	return ret;
+}
+
 static void pthreadpool_prepare_pool(struct pthreadpool *pool)
 {
 	int ret;
diff --git a/lib/pthreadpool/pthreadpool.h b/lib/pthreadpool/pthreadpool.h
index defbe5a9f623..cb8baffebb18 100644
--- a/lib/pthreadpool/pthreadpool.h
+++ b/lib/pthreadpool/pthreadpool.h
@@ -50,6 +50,27 @@ int pthreadpool_init(unsigned max_threads, struct pthreadpool **presult,
 				      void *private_data),
 		     void *signal_fn_private_data);
 
+/**
+ * @brief Get the max threads value of pthreadpool
+ *
+ * @note This can be 0 for strict sync processing.
+ *
+ * @param[in]	pool		The pool
+ * @return			number of possible threads
+ */
+size_t pthreadpool_max_threads(struct pthreadpool *pool);
+
+/**
+ * @brief The number of queued jobs of pthreadpool
+ *
+ * This is the number of jobs added by pthreadpool_add_job(),
+ * which are not yet processed by a thread.
+ *
+ * @param[in]	pool		The pool
+ * @return			The number of jobs
+ */
+size_t pthreadpool_queued_jobs(struct pthreadpool *pool);
+
 /**
  * @brief Destroy a pthreadpool
  *
diff --git a/lib/pthreadpool/pthreadpool_sync.c b/lib/pthreadpool/pthreadpool_sync.c
index d9a95f53c611..a7dce580951b 100644
--- a/lib/pthreadpool/pthreadpool_sync.c
+++ b/lib/pthreadpool/pthreadpool_sync.c
@@ -52,6 +52,16 @@ int pthreadpool_init(unsigned max_threads, struct pthreadpool **presult,
 	return 0;
 }
 
+size_t pthreadpool_max_threads(struct pthreadpool *pool)
+{
+	return 0;
+}
+
+size_t pthreadpool_queued_jobs(struct pthreadpool *pool)
+{
+	return 0;
+}
+
 int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 			void (*fn)(void *private_data), void *private_data)
 {
-- 
2.17.1


From d15918ac654355fc55e96e50e6e192383a256cc9 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 01:01:42 +0200
Subject: [PATCH 11/20] pthreadpool: add pthreadpool_tevent_max_threads() and
 pthreadpool_tevent_queued_jobs()

These can be used to implement some kind of flow control in the caller.
E.g. as long as pthreadpool_tevent_queued_jobs() is lower than
pthreadpool_tevent_max_threads(), it is fine to prepare new jobs.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool_tevent.c | 18 ++++++++++++++++++
 lib/pthreadpool/pthreadpool_tevent.h |  3 +++
 2 files changed, 21 insertions(+)

diff --git a/lib/pthreadpool/pthreadpool_tevent.c b/lib/pthreadpool/pthreadpool_tevent.c
index b2292ef214c9..db6d4b2f2e83 100644
--- a/lib/pthreadpool/pthreadpool_tevent.c
+++ b/lib/pthreadpool/pthreadpool_tevent.c
@@ -102,6 +102,24 @@ int pthreadpool_tevent_init(TALLOC_CTX *mem_ctx, unsigned max_threads,
 	return 0;
 }
 
+size_t pthreadpool_tevent_max_threads(struct pthreadpool_tevent *pool)
+{
+	if (pool->pool == NULL) {
+		return 0;
+	}
+
+	return pthreadpool_max_threads(pool->pool);
+}
+
+size_t pthreadpool_tevent_queued_jobs(struct pthreadpool_tevent *pool)
+{
+	if (pool->pool == NULL) {
+		return 0;
+	}
+
+	return pthreadpool_queued_jobs(pool->pool);
+}
+
 static int pthreadpool_tevent_destructor(struct pthreadpool_tevent *pool)
 {
 	struct pthreadpool_tevent_job_state *state, *next;
diff --git a/lib/pthreadpool/pthreadpool_tevent.h b/lib/pthreadpool/pthreadpool_tevent.h
index de74a347e25c..10d3a71c40a7 100644
--- a/lib/pthreadpool/pthreadpool_tevent.h
+++ b/lib/pthreadpool/pthreadpool_tevent.h
@@ -27,6 +27,9 @@ struct pthreadpool_tevent;
 int pthreadpool_tevent_init(TALLOC_CTX *mem_ctx, unsigned max_threads,
 			    struct pthreadpool_tevent **presult);
 
+size_t pthreadpool_tevent_max_threads(struct pthreadpool_tevent *pool);
+size_t pthreadpool_tevent_queued_jobs(struct pthreadpool_tevent *pool);
+
 struct tevent_req *pthreadpool_tevent_job_send(
 	TALLOC_CTX *mem_ctx, struct tevent_context *ev,
 	struct pthreadpool_tevent *pool,
-- 
2.17.1


From 1152cb2bb7ef0d6804e3527a86c2fdcf9292e5c5 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 20 Apr 2018 15:00:31 +0200
Subject: [PATCH 12/20] pthreadpool: add pthreadpool_cancel_job()

Signed-off-by: Stefan Metzmacher <metze at samba.org>
Reviewed-by: Ralph Boehme <slow at samba.org>
---
 lib/pthreadpool/pthreadpool.c      | 46 ++++++++++++++++++++++++++++++
 lib/pthreadpool/pthreadpool.h      | 26 +++++++++++++++++
 lib/pthreadpool/pthreadpool_sync.c |  6 ++++
 3 files changed, 78 insertions(+)

diff --git a/lib/pthreadpool/pthreadpool.c b/lib/pthreadpool/pthreadpool.c
index 6c51bc5272b3..55ea36ed0234 100644
--- a/lib/pthreadpool/pthreadpool.c
+++ b/lib/pthreadpool/pthreadpool.c
@@ -750,3 +750,49 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 
 	return res;
 }
+
+size_t pthreadpool_cancel_job(struct pthreadpool *pool, int job_id,
+			      void (*fn)(void *private_data), void *private_data)
+{
+	int res;
+	size_t i, j;
+	size_t num = 0;
+
+	res = pthread_mutex_lock(&pool->mutex);
+	if (res != 0) {
+		return res;
+	}
+
+	for (i = 0, j = 0; i < pool->num_jobs; i++) {
+		size_t idx = (pool->head + i) % pool->jobs_array_len;
+		size_t new_idx = (pool->head + j) % pool->jobs_array_len;
+		struct pthreadpool_job *job = &pool->jobs[idx];
+
+		if ((job->private_data == private_data) &&
+		    (job->id == job_id) &&
+		    (job->fn == fn))
+		{
+			/*
+			 * Just skip the entry.
+			 */
+			num++;
+			continue;
+		}
+
+		/*
+		 * If we already removed one or more jobs (so j will be smaller
+		 * than i), we need to fill possible gaps in the logical list.
+		 */
+		if (j < i) {
+			pool->jobs[new_idx] = *job;
+		}
+		j++;
+	}
+
+	pool->num_jobs -= num;
+
+	res = pthread_mutex_unlock(&pool->mutex);
+	assert(res == 0);
+
+	return num;
+}
diff --git a/lib/pthreadpool/pthreadpool.h b/lib/pthreadpool/pthreadpool.h
index cb8baffebb18..dd1f9718b236 100644
--- a/lib/pthreadpool/pthreadpool.h
+++ b/lib/pthreadpool/pthreadpool.h
@@ -103,4 +103,30 @@ int pthreadpool_destroy(struct pthreadpool *pool);
 int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 			void (*fn)(void *private_data), void *private_data);
 
+/**
+ * @brief Try to cancel a job in a pthreadpool
+ *
+ * This tries to cancel a job in a pthreadpool. The same
+ * arguments that were given to pthreadpool_add_job()
+ * need to be passed.
+ *
+ * The combination of id, fn, private_data might not be unique.
+ * So the function tries to cancel as many matching jobs as possible.
+ * Note that once a job is scheduled in a thread it's too late to
+ * cancel it.
+ *
+ * Canceled jobs that weren't started yet won't be reported via a
+ * pool's signal_fn.
+ *
+ * @param[in]	pool		The pool to run the job on
+ * @param[in]	job_id		A custom identifier
+ * @param[in]	fn		The function to run asynchronously
+ * @param[in]	private_data	Pointer passed to fn
+ * @return			The number of canceled jobs
+ *
+ * @see pthreadpool_add_job()
+ */
+size_t pthreadpool_cancel_job(struct pthreadpool *pool, int job_id,
+			      void (*fn)(void *private_data), void *private_data);
+
 #endif
diff --git a/lib/pthreadpool/pthreadpool_sync.c b/lib/pthreadpool/pthreadpool_sync.c
index a7dce580951b..837abac54d7c 100644
--- a/lib/pthreadpool/pthreadpool_sync.c
+++ b/lib/pthreadpool/pthreadpool_sync.c
@@ -71,6 +71,12 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 			       pool->signal_fn_private_data);
 }
 
+size_t pthreadpool_cancel_job(struct pthreadpool *pool, int job_id,
+			      void (*fn)(void *private_data), void *private_data)
+{
+	return 0;
+}
+
 int pthreadpool_destroy(struct pthreadpool *pool)
 {
 	free(pool);
-- 
2.17.1


From ed9dd2327c6a08f99dab8964e750b884dd31b46b Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 20 Apr 2018 17:12:07 +0200
Subject: [PATCH 13/20] pthreadpool: don't process further jobs when shutting
 down

Signed-off-by: Stefan Metzmacher <metze at samba.org>
Reviewed-by: Ralph Boehme <slow at samba.org>
---
 lib/pthreadpool/pthreadpool.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/lib/pthreadpool/pthreadpool.c b/lib/pthreadpool/pthreadpool.c
index 55ea36ed0234..1ef6dccee621 100644
--- a/lib/pthreadpool/pthreadpool.c
+++ b/lib/pthreadpool/pthreadpool.c
@@ -444,6 +444,10 @@ static void pthreadpool_server_exit(struct pthreadpool *pool)
 static bool pthreadpool_get_job(struct pthreadpool *p,
 				struct pthreadpool_job *job)
 {
+	if (p->shutdown) {
+		return false;
+	}
+
 	if (p->num_jobs == 0) {
 		return false;
 	}
@@ -601,10 +605,9 @@ static void *pthreadpool_server(void *arg)
 			}
 		}
 
-		if ((pool->num_jobs == 0) && pool->shutdown) {
+		if (pool->shutdown) {
 			/*
-			 * No more work to do and we're asked to shut down, so
-			 * exit
+			 * we're asked to shut down, so exit
 			 */
 			pthreadpool_server_exit(pool);
 			return NULL;
-- 
2.17.1


From 1fa2b231c93ab731af7a757a04e7adc9e3c0f79f Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Wed, 25 Apr 2018 14:03:30 +0200
Subject: [PATCH 14/20] pthreadpool: split out a pthreadpool_stop() from
 pthreadpool_destroy()

This can be used in combination with pthreadpool_cancel_job() to
implement a multi step shutdown of the pool.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
Reviewed-by: Ralph Boehme <slow at samba.org>
---
 lib/pthreadpool/pthreadpool.c      | 116 ++++++++++++++++++++++-------
 lib/pthreadpool/pthreadpool.h      |  26 +++++++
 lib/pthreadpool/pthreadpool_sync.c |  13 ++++
 3 files changed, 129 insertions(+), 26 deletions(-)

diff --git a/lib/pthreadpool/pthreadpool.c b/lib/pthreadpool/pthreadpool.c
index 1ef6dccee621..610cfb02f154 100644
--- a/lib/pthreadpool/pthreadpool.c
+++ b/lib/pthreadpool/pthreadpool.c
@@ -71,9 +71,16 @@ struct pthreadpool {
 	void *signal_fn_private_data;
 
 	/*
-	 * indicator to worker threads that they should shut down
+	 * indicator to worker threads to stop processing further jobs
+	 * and exit.
 	 */
-	bool shutdown;
+	bool stopped;
+
+	/*
+	 * indicator to the last worker thread to free the pool
+	 * resources.
+	 */
+	bool destroyed;
 
 	/*
 	 * maximum number of threads
@@ -169,7 +176,8 @@ int pthreadpool_init(unsigned max_threads, struct pthreadpool **presult,
 		return ret;
 	}
 
-	pool->shutdown = false;
+	pool->stopped = false;
+	pool->destroyed = false;
 	pool->num_threads = 0;
 	pool->max_threads = max_threads;
 	pool->num_idle = 0;
@@ -198,6 +206,10 @@ int pthreadpool_init(unsigned max_threads, struct pthreadpool **presult,
 
 size_t pthreadpool_max_threads(struct pthreadpool *pool)
 {
+	if (pool->stopped) {
+		return 0;
+	}
+
 	return pool->max_threads;
 }
 
@@ -207,8 +219,18 @@ size_t pthreadpool_queued_jobs(struct pthreadpool *pool)
 	int unlock_res;
 	size_t ret;
 
+	if (pool->stopped) {
+		return 0;
+	}
+
 	res = pthread_mutex_lock(&pool->mutex);
 	if (res != 0) {
+		return res;
+	}
+
+	if (pool->stopped) {
+		unlock_res = pthread_mutex_unlock(&pool->mutex);
+		assert(unlock_res == 0);
 		return 0;
 	}
 
@@ -378,11 +400,33 @@ static int pthreadpool_free(struct pthreadpool *pool)
 }
 
 /*
- * Destroy a thread pool. Wake up all idle threads for exit. The last
- * one will free the pool.
+ * Stop a thread pool. Wake up all idle threads for exit.
  */
 
-int pthreadpool_destroy(struct pthreadpool *pool)
+static int pthreadpool_stop_locked(struct pthreadpool *pool)
+{
+	int ret;
+
+	pool->stopped = true;
+
+	if (pool->num_threads == 0) {
+		return 0;
+	}
+
+	/*
+	 * We have active threads, tell them to finish.
+	 */
+
+	ret = pthread_cond_broadcast(&pool->condvar);
+
+	return ret;
+}
+
+/*
+ * Stop a thread pool. Wake up all idle threads for exit.
+ */
+
+int pthreadpool_stop(struct pthreadpool *pool)
 {
 	int ret, ret1;
 
@@ -391,34 +435,50 @@ int pthreadpool_destroy(struct pthreadpool *pool)
 		return ret;
 	}
 
-	if (pool->shutdown) {
-		ret = pthread_mutex_unlock(&pool->mutex);
-		assert(ret == 0);
-		return EBUSY;
+	if (!pool->stopped) {
+		ret = pthreadpool_stop_locked(pool);
 	}
 
-	pool->shutdown = true;
+	ret1 = pthread_mutex_unlock(&pool->mutex);
+	assert(ret1 == 0);
 
-	if (pool->num_threads == 0) {
-		ret = pthread_mutex_unlock(&pool->mutex);
-		assert(ret == 0);
+	return ret;
+}
+
+/*
+ * Destroy a thread pool. Wake up all idle threads for exit. The last
+ * one will free the pool.
+ */
+
+int pthreadpool_destroy(struct pthreadpool *pool)
+{
+	int ret, ret1;
+	bool free_it;
+
+	assert(!pool->destroyed);
 
-		ret = pthreadpool_free(pool);
+	ret = pthread_mutex_lock(&pool->mutex);
+	if (ret != 0) {
 		return ret;
 	}
 
-	/*
-	 * We have active threads, tell them to finish.
-	 */
+	pool->destroyed = true;
 
-	ret = pthread_cond_broadcast(&pool->condvar);
+	if (!pool->stopped) {
+		ret = pthreadpool_stop_locked(pool);
+	}
+
+	free_it = (pool->num_threads == 0);
 
 	ret1 = pthread_mutex_unlock(&pool->mutex);
 	assert(ret1 == 0);
 
+	if (free_it) {
+		pthreadpool_free(pool);
+	}
+
 	return ret;
 }
-
 /*
  * Prepare for pthread_exit(), pool->mutex must be locked and will be
  * unlocked here. This is a bit of a layering violation, but here we
@@ -431,7 +491,7 @@ static void pthreadpool_server_exit(struct pthreadpool *pool)
 
 	pool->num_threads -= 1;
 
-	free_it = (pool->shutdown && (pool->num_threads == 0));
+	free_it = (pool->destroyed && (pool->num_threads == 0));
 
 	ret = pthread_mutex_unlock(&pool->mutex);
 	assert(ret == 0);
@@ -444,7 +504,7 @@ static void pthreadpool_server_exit(struct pthreadpool *pool)
 static bool pthreadpool_get_job(struct pthreadpool *p,
 				struct pthreadpool_job *job)
 {
-	if (p->shutdown) {
+	if (p->stopped) {
 		return false;
 	}
 
@@ -527,7 +587,7 @@ static void *pthreadpool_server(void *arg)
 		clock_gettime(CLOCK_REALTIME, &ts);
 		ts.tv_sec += 1;
 
-		while ((pool->num_jobs == 0) && !pool->shutdown) {
+		while ((pool->num_jobs == 0) && !pool->stopped) {
 
 			pool->num_idle += 1;
 			res = pthread_cond_timedwait(
@@ -605,9 +665,9 @@ static void *pthreadpool_server(void *arg)
 			}
 		}
 
-		if (pool->shutdown) {
+		if (pool->stopped) {
 			/*
-			 * we're asked to shut down, so exit
+			 * we're asked to stop processing jobs, so exit
 			 */
 			pthreadpool_server_exit(pool);
 			return NULL;
@@ -666,12 +726,14 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 	int res;
 	int unlock_res;
 
+	assert(!pool->destroyed);
+
 	res = pthread_mutex_lock(&pool->mutex);
 	if (res != 0) {
 		return res;
 	}
 
-	if (pool->shutdown) {
+	if (pool->stopped) {
 		/*
 		 * Protect against the pool being shut down while
 		 * trying to add a job
@@ -761,6 +823,8 @@ size_t pthreadpool_cancel_job(struct pthreadpool *pool, int job_id,
 	size_t i, j;
 	size_t num = 0;
 
+	assert(!pool->destroyed);
+
 	res = pthread_mutex_lock(&pool->mutex);
 	if (res != 0) {
 		return res;
diff --git a/lib/pthreadpool/pthreadpool.h b/lib/pthreadpool/pthreadpool.h
index dd1f9718b236..b4733580e07b 100644
--- a/lib/pthreadpool/pthreadpool.h
+++ b/lib/pthreadpool/pthreadpool.h
@@ -71,9 +71,31 @@ size_t pthreadpool_max_threads(struct pthreadpool *pool);
  */
 size_t pthreadpool_queued_jobs(struct pthreadpool *pool);
 
+/**
+ * @brief Stop a pthreadpool
+ *
+ * Stop a pthreadpool. If jobs are submitted, but not yet active in
+ * a thread, they won't get executed. If a job has already been
+ * submitted to a thread, the job function will continue running, and
+ * the signal function might still be called.
+ *
+ * This allows a multi-step shutdown using pthreadpool_stop(),
+ * pthreadpool_cancel_job() and pthreadpool_destroy().
+ *
+ * @param[in]	pool		The pool to stop
+ * @return			success: 0, failure: errno
+ *
+ * @see pthreadpool_cancel_job()
+ * @see pthreadpool_destroy()
+ */
+int pthreadpool_stop(struct pthreadpool *pool);
+
 /**
  * @brief Destroy a pthreadpool
  *
+ * This basically implies pthreadpool_stop() if the pool
+ * isn't already stopped.
+ *
  * Destroy a pthreadpool. If jobs are submitted, but not yet active in
  * a thread, they won't get executed. If a job has already been
  * submitted to a thread, the job function will continue running, and
@@ -84,6 +106,8 @@ size_t pthreadpool_queued_jobs(struct pthreadpool *pool);
  *
  * @param[in]	pool		The pool to destroy
  * @return			success: 0, failure: errno
+ *
+ * @see pthreadpool_stop()
  */
 int pthreadpool_destroy(struct pthreadpool *pool);
 
@@ -125,6 +149,8 @@ int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
  * @return			The number of canceled jobs
  *
  * @see pthreadpool_add_job()
+ * @see pthreadpool_stop()
+ * @see pthreadpool_destroy()
  */
 size_t pthreadpool_cancel_job(struct pthreadpool *pool, int job_id,
 			      void (*fn)(void *private_data), void *private_data);
diff --git a/lib/pthreadpool/pthreadpool_sync.c b/lib/pthreadpool/pthreadpool_sync.c
index 837abac54d7c..48e6a0ddb604 100644
--- a/lib/pthreadpool/pthreadpool_sync.c
+++ b/lib/pthreadpool/pthreadpool_sync.c
@@ -22,6 +22,8 @@
 #include "pthreadpool.h"
 
 struct pthreadpool {
+	bool stopped;
+
 	/*
 	 * Indicate job completion
 	 */
@@ -45,6 +47,7 @@ int pthreadpool_init(unsigned max_threads, struct pthreadpool **presult,
 	if (pool == NULL) {
 		return ENOMEM;
 	}
+	pool->stopped = false;
 	pool->signal_fn = signal_fn;
 	pool->signal_fn_private_data = signal_fn_private_data;
 
@@ -65,6 +68,10 @@ size_t pthreadpool_queued_jobs(struct pthreadpool *pool)
 int pthreadpool_add_job(struct pthreadpool *pool, int job_id,
 			void (*fn)(void *private_data), void *private_data)
 {
+	if (pool->stopped) {
+		return EINVAL;
+	}
+
 	fn(private_data);
 
 	return pool->signal_fn(job_id, fn, private_data,
@@ -77,6 +84,12 @@ size_t pthreadpool_cancel_job(struct pthreadpool *pool, int job_id,
 	return 0;
 }
 
+int pthreadpool_stop(struct pthreadpool *pool)
+{
+	pool->stopped = true;
+	return 0;
+}
+
 int pthreadpool_destroy(struct pthreadpool *pool)
 {
 	free(pool);
-- 
2.17.1


From 6a612d2a60c61d42197c84c28f2f07b6c1d4d229 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 20 Apr 2018 15:07:08 +0200
Subject: [PATCH 15/20] pthreadpool: let pthreadpool_tevent_job_send() fail
 with an invalid pool

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool_tevent.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/lib/pthreadpool/pthreadpool_tevent.c b/lib/pthreadpool/pthreadpool_tevent.c
index db6d4b2f2e83..0e890cb7ce52 100644
--- a/lib/pthreadpool/pthreadpool_tevent.c
+++ b/lib/pthreadpool/pthreadpool_tevent.c
@@ -301,6 +301,15 @@ struct tevent_req *pthreadpool_tevent_job_send(
 	state->fn = fn;
 	state->private_data = private_data;
 
+	if (pool == NULL) {
+		tevent_req_error(req, EINVAL);
+		return tevent_req_post(req, ev);
+	}
+	if (pool->pool == NULL) {
+		tevent_req_error(req, EINVAL);
+		return tevent_req_post(req, ev);
+	}
+
 	state->im = tevent_create_immediate(state);
 	if (tevent_req_nomem(state->im, req)) {
 		return tevent_req_post(req, ev);
-- 
2.17.1


From df8e38921a5d698b0c86e6b386a5cef42e69121a Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 01:39:47 +0200
Subject: [PATCH 16/20] pthreadpool: split out pthreadpool_tevent_job from
 pthreadpool_tevent_job_state

This makes it much easier to handle orphaned jobs:
we either wait for the tevent immediate event to trigger
or we just keep leaking the memory.

The next commits will improve this further.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool_tevent.c | 238 +++++++++++++++++++--------
 1 file changed, 172 insertions(+), 66 deletions(-)

diff --git a/lib/pthreadpool/pthreadpool_tevent.c b/lib/pthreadpool/pthreadpool_tevent.c
index 0e890cb7ce52..7c8015d2f594 100644
--- a/lib/pthreadpool/pthreadpool_tevent.c
+++ b/lib/pthreadpool/pthreadpool_tevent.c
@@ -57,15 +57,21 @@ struct pthreadpool_tevent {
 	struct pthreadpool *pool;
 	struct pthreadpool_tevent_glue *glue_list;
 
-	struct pthreadpool_tevent_job_state *jobs;
+	struct pthreadpool_tevent_job *jobs;
 };
 
 struct pthreadpool_tevent_job_state {
-	struct pthreadpool_tevent_job_state *prev, *next;
-	struct pthreadpool_tevent *pool;
 	struct tevent_context *ev;
-	struct tevent_immediate *im;
 	struct tevent_req *req;
+	struct pthreadpool_tevent_job *job;
+};
+
+struct pthreadpool_tevent_job {
+	struct pthreadpool_tevent_job *prev, *next;
+
+	struct pthreadpool_tevent *pool;
+	struct pthreadpool_tevent_job_state *state;
+	struct tevent_immediate *im;
 
 	void (*fn)(void *private_data);
 	void *private_data;
@@ -73,6 +79,8 @@ struct pthreadpool_tevent_job_state {
 
 static int pthreadpool_tevent_destructor(struct pthreadpool_tevent *pool);
 
+static void pthreadpool_tevent_job_orphan(struct pthreadpool_tevent_job *job);
+
 static int pthreadpool_tevent_job_signal(int jobid,
 					 void (*job_fn)(void *private_data),
 					 void *job_private_data,
@@ -122,7 +130,8 @@ size_t pthreadpool_tevent_queued_jobs(struct pthreadpool_tevent *pool)
 
 static int pthreadpool_tevent_destructor(struct pthreadpool_tevent *pool)
 {
-	struct pthreadpool_tevent_job_state *state, *next;
+	struct pthreadpool_tevent_job *job = NULL;
+	struct pthreadpool_tevent_job *njob = NULL;
 	struct pthreadpool_tevent_glue *glue = NULL;
 	int ret;
 
@@ -132,10 +141,11 @@ static int pthreadpool_tevent_destructor(struct pthreadpool_tevent *pool)
 	}
 	pool->pool = NULL;
 
-	for (state = pool->jobs; state != NULL; state = next) {
-		next = state->next;
-		DLIST_REMOVE(pool->jobs, state);
-		state->pool = NULL;
+	for (job = pool->jobs; job != NULL; job = njob) {
+		njob = job->next;
+
+		/* This removes the job from the list */
+		pthreadpool_tevent_job_orphan(job);
 	}
 
 	/*
@@ -258,27 +268,120 @@ static void pthreadpool_tevent_job_done(struct tevent_context *ctx,
 					struct tevent_immediate *im,
 					void *private_data);
 
-static int pthreadpool_tevent_job_state_destructor(
-	struct pthreadpool_tevent_job_state *state)
+static int pthreadpool_tevent_job_destructor(struct pthreadpool_tevent_job *job)
 {
-	if (state->pool == NULL) {
-		return 0;
+	/*
+	 * We should never be called with job->state != NULL.
+	 * Only pthreadpool_tevent_job_orphan() will call TALLOC_FREE(job)
+	 * after detaching from the request state and pool list.
+	 */
+	if (job->state != NULL) {
+		abort();
+	}
+
+	/*
+	 * If the job is not finished (job->im still there)
+	 * and it's still attached to the pool,
+	 * we try to cancel it (before it starts).
+	 */
+	if (job->im != NULL && job->pool != NULL) {
+		size_t num;
+
+		num = pthreadpool_cancel_job(job->pool->pool, 0,
+					     pthreadpool_tevent_job_fn,
+					     job);
+		if (num != 0) {
+			/*
+			 * It was not too late to cancel the request.
+			 *
+			 * We can remove job->im, as it will never be used.
+			 */
+			TALLOC_FREE(job->im);
+		}
+	}
+
+	/*
+	 * pthreadpool_tevent_job_orphan() already removed
+	 * it from pool->jobs. And we don't need to try
+	 * pthreadpool_cancel_job() again.
+	 */
+	job->pool = NULL;
+
+	if (job->im != NULL) {
+		/*
+		 * job->im still there means we need to wait for the
+		 * immediate event to be triggered or just leak the memory.
+		 */
+		return -1;
+	}
+
+	return 0;
+}
+
+static void pthreadpool_tevent_job_orphan(struct pthreadpool_tevent_job *job)
+{
+	/*
+	 * We're the only function that sets
+	 * job->state = NULL;
+	 */
+	if (job->state == NULL) {
+		abort();
 	}
 
 	/*
-	 * We should never be called with state->req == NULL,
-	 * state->pool must be cleared before the 2nd talloc_free().
+	 * We need to reparent to a long term context
+	 * and detach from the request state.
+	 * The destructor may keep the memory
+	 * and leak it for now.
 	 */
-	if (state->req == NULL) {
+	(void)talloc_reparent(job->state, NULL, job);
+	job->state->job = NULL;
+	job->state = NULL;
+
+	/*
+	 * job->pool will only be set to NULL
+	 * in the first destructor run.
+	 */
+	if (job->pool == NULL) {
 		abort();
 	}
 
+	/*
+	 * Detach it from the pool.
+	 *
+	 * The job might still be running,
+	 * so we keep job->pool.
+	 * The destructor will set it to NULL
+	 * after trying pthreadpool_cancel_job()
+	 */
+	DLIST_REMOVE(job->pool->jobs, job);
+
+	TALLOC_FREE(job);
+}
+
+static void pthreadpool_tevent_job_cleanup(struct tevent_req *req,
+					   enum tevent_req_state req_state)
+{
+	struct pthreadpool_tevent_job_state *state =
+		tevent_req_data(req,
+		struct pthreadpool_tevent_job_state);
+
+	if (state->job == NULL) {
+		/*
+		 * The job request is not scheduled in the pool
+		 * The job is either not yet scheduled
+		 * in the pool or already gone.
+		return;
+	}
+
 	/*
 	 * We need to reparent to a long term context.
+	 * The destructor may keep the memory
+	 * and leak it for now.
 	 */
-	(void)talloc_reparent(state->req, NULL, state);
-	state->req = NULL;
-	return -1;
+	pthreadpool_tevent_job_orphan(state->job);
+	state->job = NULL; /* not needed but looks better */
+	return;
 }
 
 struct tevent_req *pthreadpool_tevent_job_send(
@@ -286,8 +389,9 @@ struct tevent_req *pthreadpool_tevent_job_send(
 	struct pthreadpool_tevent *pool,
 	void (*fn)(void *private_data), void *private_data)
 {
-	struct tevent_req *req;
-	struct pthreadpool_tevent_job_state *state;
+	struct tevent_req *req = NULL;
+	struct pthreadpool_tevent_job_state *state = NULL;
+	struct pthreadpool_tevent_job *job = NULL;
 	int ret;
 
 	req = tevent_req_create(mem_ctx, &state,
@@ -295,11 +399,10 @@ struct tevent_req *pthreadpool_tevent_job_send(
 	if (req == NULL) {
 		return NULL;
 	}
-	state->pool = pool;
 	state->ev = ev;
 	state->req = req;
-	state->fn = fn;
-	state->private_data = private_data;
+
+	tevent_req_set_cleanup_fn(req, pthreadpool_tevent_job_cleanup);
 
 	if (pool == NULL) {
 		tevent_req_error(req, EINVAL);
@@ -310,39 +413,44 @@ struct tevent_req *pthreadpool_tevent_job_send(
 		return tevent_req_post(req, ev);
 	}
 
-	state->im = tevent_create_immediate(state);
-	if (tevent_req_nomem(state->im, req)) {
+	ret = pthreadpool_tevent_register_ev(pool, ev);
+	if (tevent_req_error(req, ret)) {
 		return tevent_req_post(req, ev);
 	}
 
-	ret = pthreadpool_tevent_register_ev(pool, ev);
-	if (tevent_req_error(req, ret)) {
+	job = talloc_zero(state, struct pthreadpool_tevent_job);
+	if (tevent_req_nomem(job, req)) {
 		return tevent_req_post(req, ev);
 	}
+	job->pool = pool;
+	job->fn = fn;
+	job->private_data = private_data;
+	job->im = tevent_create_immediate(state->job);
+	if (tevent_req_nomem(job->im, req)) {
+		return tevent_req_post(req, ev);
+	}
+	talloc_set_destructor(job, pthreadpool_tevent_job_destructor);
+	DLIST_ADD_END(job->pool->jobs, job);
+	job->state = state;
+	state->job = job;
 
-	ret = pthreadpool_add_job(pool->pool, 0,
+	ret = pthreadpool_add_job(job->pool->pool, 0,
 				  pthreadpool_tevent_job_fn,
-				  state);
+				  job);
 	if (tevent_req_error(req, ret)) {
 		return tevent_req_post(req, ev);
 	}
 
-	/*
-	 * Once the job is scheduled, we need to protect
-	 * our memory.
-	 */
-	talloc_set_destructor(state, pthreadpool_tevent_job_state_destructor);
-
-	DLIST_ADD_END(pool->jobs, state);
-
 	return req;
 }
 
 static void pthreadpool_tevent_job_fn(void *private_data)
 {
-	struct pthreadpool_tevent_job_state *state = talloc_get_type_abort(
-		private_data, struct pthreadpool_tevent_job_state);
-	state->fn(state->private_data);
+	struct pthreadpool_tevent_job *job =
+		talloc_get_type_abort(private_data,
+		struct pthreadpool_tevent_job);
+
+	job->fn(job->private_data);
 }
 
 static int pthreadpool_tevent_job_signal(int jobid,
@@ -350,18 +458,20 @@ static int pthreadpool_tevent_job_signal(int jobid,
 					 void *job_private_data,
 					 void *private_data)
 {
-	struct pthreadpool_tevent_job_state *state = talloc_get_type_abort(
-		job_private_data, struct pthreadpool_tevent_job_state);
+	struct pthreadpool_tevent_job *job =
+		talloc_get_type_abort(job_private_data,
+		struct pthreadpool_tevent_job);
+	struct pthreadpool_tevent_job_state *state = job->state;
 	struct tevent_threaded_context *tctx = NULL;
 	struct pthreadpool_tevent_glue *g = NULL;
 
-	if (state->pool == NULL) {
-		/* The pthreadpool_tevent is already gone */
+	if (state == NULL) {
+		/* Request already gone */
 		return 0;
 	}
 
 #ifdef HAVE_PTHREAD
-	for (g = state->pool->glue_list; g != NULL; g = g->next) {
+	for (g = job->pool->glue_list; g != NULL; g = g->next) {
 		if (g->ev == state->ev) {
 			tctx = g->tctx;
 			break;
@@ -375,14 +485,14 @@ static int pthreadpool_tevent_job_signal(int jobid,
 
 	if (tctx != NULL) {
 		/* with HAVE_PTHREAD */
-		tevent_threaded_schedule_immediate(tctx, state->im,
+		tevent_threaded_schedule_immediate(tctx, job->im,
 						   pthreadpool_tevent_job_done,
-						   state);
+						   job);
 	} else {
 		/* without HAVE_PTHREAD */
-		tevent_schedule_immediate(state->im, state->ev,
+		tevent_schedule_immediate(job->im, state->ev,
 					  pthreadpool_tevent_job_done,
-					  state);
+					  job);
 	}
 
 	return 0;
@@ -392,27 +502,23 @@ static void pthreadpool_tevent_job_done(struct tevent_context *ctx,
 					struct tevent_immediate *im,
 					void *private_data)
 {
-	struct pthreadpool_tevent_job_state *state = talloc_get_type_abort(
-		private_data, struct pthreadpool_tevent_job_state);
+	struct pthreadpool_tevent_job *job =
+		talloc_get_type_abort(private_data,
+		struct pthreadpool_tevent_job);
+	struct pthreadpool_tevent_job_state *state = job->state;
 
-	if (state->pool != NULL) {
-		DLIST_REMOVE(state->pool->jobs, state);
-		state->pool = NULL;
-	}
+	TALLOC_FREE(job->im);
 
-	if (state->req == NULL) {
-		/*
-		 * There was a talloc_free() state->req
-		 * while the job was pending,
-		 * which mean we're reparented on a longterm
-		 * talloc context.
-		 *
-		 * We just cleanup here...
-		 */
-		talloc_free(state);
+	if (state == NULL) {
+		/* Request already gone */
+		TALLOC_FREE(job);
 		return;
 	}
 
+	/*
+	 * pthreadpool_tevent_job_cleanup()
+	 * will destroy the job.
+	 */
 	tevent_req_done(state->req);
 }
 
-- 
2.17.1


From f24af288f76f1a4bb764bfeac6660f4b26d6cd57 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Wed, 25 Apr 2018 14:43:22 +0200
Subject: [PATCH 17/20] pthreadpool: add pthreadpool_tevent_job_cancel()

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool_tevent.c | 40 ++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/lib/pthreadpool/pthreadpool_tevent.c b/lib/pthreadpool/pthreadpool_tevent.c
index 7c8015d2f594..bfd178c09a67 100644
--- a/lib/pthreadpool/pthreadpool_tevent.c
+++ b/lib/pthreadpool/pthreadpool_tevent.c
@@ -267,6 +267,7 @@ static void pthreadpool_tevent_job_fn(void *private_data);
 static void pthreadpool_tevent_job_done(struct tevent_context *ctx,
 					struct tevent_immediate *im,
 					void *private_data);
+static bool pthreadpool_tevent_job_cancel(struct tevent_req *req);
 
 static int pthreadpool_tevent_job_destructor(struct pthreadpool_tevent_job *job)
 {
@@ -441,6 +442,7 @@ struct tevent_req *pthreadpool_tevent_job_send(
 		return tevent_req_post(req, ev);
 	}
 
+	tevent_req_set_cancel_fn(req, pthreadpool_tevent_job_cancel);
 	return req;
 }
 
@@ -522,6 +524,44 @@ static void pthreadpool_tevent_job_done(struct tevent_context *ctx,
 	tevent_req_done(state->req);
 }
 
+static bool pthreadpool_tevent_job_cancel(struct tevent_req *req)
+{
+	struct pthreadpool_tevent_job_state *state =
+		tevent_req_data(req,
+		struct pthreadpool_tevent_job_state);
+	struct pthreadpool_tevent_job *job = state->job;
+	size_t num;
+
+	if (job == NULL) {
+		return false;
+	}
+
+	num = pthreadpool_cancel_job(job->pool->pool, 0,
+				     pthreadpool_tevent_job_fn,
+				     job);
+	if (num == 0) {
+		/*
+		 * It was too late to cancel the request.
+		 */
+		return false;
+	}
+
+	/*
+	 * It was not too late to cancel the request.
+	 *
+	 * We can remove job->im, as it will never be used.
+	 */
+	TALLOC_FREE(job->im);
+
+	/*
+	 * pthreadpool_tevent_job_cleanup()
+	 * will destroy the job.
+	 */
+	tevent_req_defer_callback(req, state->ev);
+	tevent_req_error(req, ECANCELED);
+	return true;
+}
+
 int pthreadpool_tevent_job_recv(struct tevent_req *req)
 {
 	return tevent_req_simple_recv_unix(req);
-- 
2.17.1


From 97c1093d7aaf01f06fa8641088b9b9f8922e242f Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Wed, 25 Apr 2018 20:25:21 +0200
Subject: [PATCH 18/20] pthreadpool: make use of pthreadpool_stop() in
 pthreadpool_tevent_destructor()

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool_tevent.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/lib/pthreadpool/pthreadpool_tevent.c b/lib/pthreadpool/pthreadpool_tevent.c
index bfd178c09a67..e73965370587 100644
--- a/lib/pthreadpool/pthreadpool_tevent.c
+++ b/lib/pthreadpool/pthreadpool_tevent.c
@@ -135,11 +135,10 @@ static int pthreadpool_tevent_destructor(struct pthreadpool_tevent *pool)
 	struct pthreadpool_tevent_glue *glue = NULL;
 	int ret;
 
-	ret = pthreadpool_destroy(pool->pool);
+	ret = pthreadpool_stop(pool->pool);
 	if (ret != 0) {
 		return ret;
 	}
-	pool->pool = NULL;
 
 	for (job = pool->jobs; job != NULL; job = njob) {
 		njob = job->next;
@@ -159,6 +158,12 @@ static int pthreadpool_tevent_destructor(struct pthreadpool_tevent *pool)
 	}
 	pool->glue_list = NULL;
 
+	ret = pthreadpool_destroy(pool->pool);
+	if (ret != 0) {
+		return ret;
+	}
+	pool->pool = NULL;
+
 	return 0;
 }
 
-- 
2.17.1


From d51c24c76420b836f55d15d16cb22dceb4013eb2 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Wed, 20 Jun 2018 13:38:19 +0200
Subject: [PATCH 19/20] pthreadpool: maintain a global list of orphaned
 pthreadpool_tevent_jobs

Instead of leaking the memory forever, we retry the cleanup,
if other pthreadpool_tevent_*() functions are used.

pthreadpool_tevent_cleanup_orphaned_jobs() could also be called
by external callers.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool_tevent.c | 41 ++++++++++++++++++++++++++++
 lib/pthreadpool/pthreadpool_tevent.h |  2 ++
 2 files changed, 43 insertions(+)

diff --git a/lib/pthreadpool/pthreadpool_tevent.c b/lib/pthreadpool/pthreadpool_tevent.c
index e73965370587..5da1f22e91c4 100644
--- a/lib/pthreadpool/pthreadpool_tevent.c
+++ b/lib/pthreadpool/pthreadpool_tevent.c
@@ -81,6 +81,24 @@ static int pthreadpool_tevent_destructor(struct pthreadpool_tevent *pool);
 
 static void pthreadpool_tevent_job_orphan(struct pthreadpool_tevent_job *job);
 
+static struct pthreadpool_tevent_job *orphaned_jobs;
+
+void pthreadpool_tevent_cleanup_orphaned_jobs(void)
+{
+	struct pthreadpool_tevent_job *job = NULL;
+	struct pthreadpool_tevent_job *njob = NULL;
+
+	for (job = orphaned_jobs; job != NULL; job = njob) {
+		njob = job->next;
+
+		/*
+		 * The job destructor keeps the job alive
+		 * (and in the list) or removes it from the list.
+		 */
+		TALLOC_FREE(job);
+	}
+}
+
 static int pthreadpool_tevent_job_signal(int jobid,
 					 void (*job_fn)(void *private_data),
 					 void *job_private_data,
@@ -92,6 +110,8 @@ int pthreadpool_tevent_init(TALLOC_CTX *mem_ctx, unsigned max_threads,
 	struct pthreadpool_tevent *pool;
 	int ret;
 
+	pthreadpool_tevent_cleanup_orphaned_jobs();
+
 	pool = talloc_zero(mem_ctx, struct pthreadpool_tevent);
 	if (pool == NULL) {
 		return ENOMEM;
@@ -164,6 +184,8 @@ static int pthreadpool_tevent_destructor(struct pthreadpool_tevent *pool)
 	}
 	pool->pool = NULL;
 
+	pthreadpool_tevent_cleanup_orphaned_jobs();
+
 	return 0;
 }
 
@@ -317,10 +339,18 @@ static int pthreadpool_tevent_job_destructor(struct pthreadpool_tevent_job *job)
 		/*
 		 * job->im still there means we need to wait for the
 		 * immediate event to be triggered or just leak the memory.
+		 *
+		 * Move it to the orphaned list if it's not already there.
 		 */
 		return -1;
 	}
 
+	/*
+	 * Finally remove from the orphaned_jobs list
+	 * and let talloc destroy us.
+	 */
+	DLIST_REMOVE(orphaned_jobs, job);
+
 	return 0;
 }
 
@@ -362,6 +392,15 @@ static void pthreadpool_tevent_job_orphan(struct pthreadpool_tevent_job *job)
 	 */
 	DLIST_REMOVE(job->pool->jobs, job);
 
+	/*
+	 * Add it to the list of orphaned jobs,
+	 * which may be cleaned up later.
+	 *
+	 * The destructor removes it from the list
+	 * when possible, or it denies the free
+	 * and keeps it in the list.
+	 */
+	DLIST_ADD_END(orphaned_jobs, job);
 	TALLOC_FREE(job);
 }
 
@@ -400,6 +439,8 @@ struct tevent_req *pthreadpool_tevent_job_send(
 	struct pthreadpool_tevent_job *job = NULL;
 	int ret;
 
+	pthreadpool_tevent_cleanup_orphaned_jobs();
+
 	req = tevent_req_create(mem_ctx, &state,
 				struct pthreadpool_tevent_job_state);
 	if (req == NULL) {
diff --git a/lib/pthreadpool/pthreadpool_tevent.h b/lib/pthreadpool/pthreadpool_tevent.h
index 10d3a71c40a7..fdb86e23757a 100644
--- a/lib/pthreadpool/pthreadpool_tevent.h
+++ b/lib/pthreadpool/pthreadpool_tevent.h
@@ -22,6 +22,8 @@
 
 #include <tevent.h>
 
+void pthreadpool_tevent_cleanup_orphaned_jobs(void);
+
 struct pthreadpool_tevent;
 
 int pthreadpool_tevent_init(TALLOC_CTX *mem_ctx, unsigned max_threads,
-- 
2.17.1


From 4afa8b5206b8c50ce0535b47282d95e2294bf989 Mon Sep 17 00:00:00 2001
From: Stefan Metzmacher <metze at samba.org>
Date: Fri, 22 Jun 2018 17:11:53 +0200
Subject: [PATCH 20/20] pthreadpool: allocate glue->tctx on glue as memory
 context.

This means it will go away together with the glue and the event context.

Signed-off-by: Stefan Metzmacher <metze at samba.org>
---
 lib/pthreadpool/pthreadpool_tevent.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/pthreadpool/pthreadpool_tevent.c b/lib/pthreadpool/pthreadpool_tevent.c
index 5da1f22e91c4..e7e17d3bf0f7 100644
--- a/lib/pthreadpool/pthreadpool_tevent.c
+++ b/lib/pthreadpool/pthreadpool_tevent.c
@@ -278,7 +278,7 @@ static int pthreadpool_tevent_register_ev(struct pthreadpool_tevent *pool,
 	glue->ev_link = ev_link;
 
 #ifdef HAVE_PTHREAD
-	glue->tctx = tevent_threaded_context_create(pool, ev);
+	glue->tctx = tevent_threaded_context_create(glue, ev);
 	if (glue->tctx == NULL) {
 		TALLOC_FREE(ev_link);
 		TALLOC_FREE(glue);
-- 
2.17.1
