[PATCH] winbindd: control number of winbindd's client connections

Volker Lendecke Volker.Lendecke at SerNet.DE
Mon Jul 13 08:04:40 UTC 2015


Hi!

On Mon, Jul 06, 2015 at 09:40:32PM +0300, Uri Simchoni wrote:
> Attached pls find V4 of this patch, handling the case where winbindd
> runs out of file descriptors during heavy load. This patch takes a
> different approach suggested by Volker:
> 1. The assumption is that we can estimate a limit on the number of
> winbindd clients (estimate number of processes using winbindd, at most
> 1 client per process).
> 2. This limit should be configured into "winbind max clients"
> 3. Winbindd shall modify its fd limit accordingly

I'm not 100% sure I like winbindd modifying that limit
itself. Isn't that more a task for the surrounding init
scripts? I know we do it in smbd too, so this is more a
request for other opinions.

> 4. Add code that closes the winbindd side of the socket immediately if
> the client closes the socket (up until now, while a request was queued
> or being processed, the client socket was not sampled) - This ensures
> one socket per client

These patches look really good. I have some very minor
cosmetic comments.

> 5. Take from previous patch the changes in client side - avoid retries
> 6. Take from previous patch the change to enforce "winbind request
> timeout" at all times, not just when the number of client connections
> exceeds the limit.

Here I'm not sure how much load this will bring. Walking
10,000 winbind connections every 5 seconds might be
visible. I haven't really looked -- is it possible to keep
the client list sorted by what we're looking for using
DLIST_PROMOTE?
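
A rough sketch of that idea (not part of the patchset; it assumes the
head->prev-points-to-tail convention of Samba's lib/util/dlinklist.h and a
last_access timestamp on struct winbindd_cli_state -- client_touch() and
scrub_timed_out_clients() are hypothetical helpers):

/* Assumed context: the usual winbindd includes plus dlinklist.h. */
#include "includes.h"
#include "lib/util/dlinklist.h"

/* On any client activity, move the client to the head of the list, so the
   list stays ordered from most to least recently active. */
static void client_touch(struct winbindd_cli_state **clients,
			 struct winbindd_cli_state *state)
{
	state->last_access = time(NULL);
	DLIST_PROMOTE(*clients, state);
}

/* The periodic scrubber can then walk from the tail (the oldest client)
   and stop at the first entry that has not timed out, instead of visiting
   every connection on every run. */
static void scrub_timed_out_clients(struct winbindd_cli_state **clients,
				    time_t timeout)
{
	time_t cutoff = time(NULL) - timeout;

	while (*clients != NULL) {
		/* dlinklist convention: the head's prev is the tail */
		struct winbindd_cli_state *oldest = (*clients)->prev;

		if (oldest->last_access >= cutoff) {
			break;	/* everything remaining is newer */
		}
		DLIST_REMOVE(*clients, oldest);
		TALLOC_FREE(oldest); /* real code would go via remove_client() */
	}
}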

> While the previous patch set put a hard limit on the number of
> file descriptors, at the cost of suspending new connection acceptance,
> this patch set never stops accepting connections, at the cost of
> staying with the soft limit (but it has provisions to ensure that a
> carefully planned system will not reach the limit).
> 
> The patch set:
> V4 1/7 - Set winbindd process file descriptor limit according to
> "winbind max clients" and "winbind max domain connections"
> V4 2/7 - Monitor client socket when request is queued / processed
> V4 3/7+4/7 - Refine 2/7 by emitting different debug messages according
> to type of client activity while socket is being monitored
> V4 5/7 - Same as V3 5/6 (changes to client side)
> V4 6/7 - Same as V3 6/6 (request timeout)
> V4 7/7 - Change to documentation, clarifying that "winbind max
> clients" is not a hard limit.

Attached please find a patchset with a few Reviewed-by's. Two
patches in between are "my comments"; feel free to squash them.

Thanks a lot for looking at this!

Volker

-- 
SerNet GmbH, Bahnhofsallee 1b, 37081 Göttingen
phone: +49-551-370000-0, fax: +49-551-370000-9
AG Göttingen, HRB 2816, GF: Dr. Johannes Loxen
http://www.sernet.de, mailto:kontakt at sernet.de
-------------- next part --------------
From f5ba620ccfd3a04bd7887815d8c07eaf2068269c Mon Sep 17 00:00:00 2001
From: Uri Simchoni <urisimchoni at gmail.com>
Date: Mon, 22 Jun 2015 06:38:04 +0300
Subject: [PATCH 1/9] winbindd: set file descriptor limit according to
 configuration

Set the winbindd process file descriptor limit according to
the values that affect it in the configuration:
- Maximum number of clients
- Number of outgoing connections per domain
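
As a rough worked example (numbers chosen for illustration, not taken from
the patch): with "winbind max clients = 500", "winbind max domain
connections = 1" and trusted domains allowed, the requested limit works out
to 500 + 500/10 + 1 * 10 = 560 descriptors, plus the MAX_OPEN_FUDGEFACTOR
slack added on top.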

Signed-off-by: Uri Simchoni <urisimchoni at gmail.com>

VL -- I'm not sure here. See mail. Comments?
---
 source3/include/local.h     |    4 ++++
 source3/winbindd/winbindd.c |   31 +++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/source3/include/local.h b/source3/include/local.h
index 5963eb0..7f97d4e 100644
--- a/source3/include/local.h
+++ b/source3/include/local.h
@@ -204,4 +204,8 @@
 /* Maximum size of RPC data we will accept for one call. */
 #define MAX_RPC_DATA_SIZE (15*1024*1024)
 
+/* A guestimate of how many domains winbindd will be contacting */
+#ifndef WINBIND_MAX_DOMAINS_HINT
+#define WINBIND_MAX_DOMAINS_HINT 10
+#endif
 #endif
diff --git a/source3/winbindd/winbindd.c b/source3/winbindd/winbindd.c
index 4ffe79c..defc9cc 100644
--- a/source3/winbindd/winbindd.c
+++ b/source3/winbindd/winbindd.c
@@ -48,6 +48,7 @@
 
 static bool client_is_idle(struct winbindd_cli_state *state);
 static void remove_client(struct winbindd_cli_state *state);
+static void winbindd_setup_max_fds(void);
 
 static bool opt_nocache = False;
 static bool interactive = False;
@@ -145,6 +146,7 @@ static bool reload_services_file(const char *lfile)
 
 	reopen_logs();
 	load_interfaces();
+	winbindd_setup_max_fds();
 
 	return(ret);
 }
@@ -1130,6 +1132,35 @@ char *get_winbind_priv_pipe_dir(void)
 	return state_path(WINBINDD_PRIV_SOCKET_SUBDIR);
 }
 
+static void winbindd_setup_max_fds(void)
+{
+	int num_fds = MAX_OPEN_FUDGEFACTOR;
+	int actual_fds;
+
+	num_fds += lp_winbind_max_clients();
+	/* Add some more to account for 2 sockets open
+	   when the client transitions from unprivileged
+	   to privileged socket
+	*/
+	num_fds += lp_winbind_max_clients() / 10;
+
+	/* Add one socket per child process
+	   (yeah there are child processes other than the
+	   domain children but only domain children can vary
+	   with configuration
+	*/
+	num_fds += lp_winbind_max_domain_connections() *
+		   (lp_allow_trusted_domains() ? WINBIND_MAX_DOMAINS_HINT : 1);
+
+	actual_fds = set_maxfiles(num_fds);
+
+	if (actual_fds < num_fds) {
+		DEBUG(1, ("winbindd_setup_max_fds: Information only: "
+			  "requested %d open files, %d are available.\n",
+			  num_fds, actual_fds));
+	}
+}
+
 static bool winbindd_setup_listeners(void)
 {
 	struct winbindd_listen_state *pub_state = NULL;
-- 
1.7.9.5


From a9bbb1ae5a005267893aaedf375ca580bd65047c Mon Sep 17 00:00:00 2001
From: Uri Simchoni <urisimchoni at gmail.com>
Date: Thu, 25 Jun 2015 08:59:20 +0300
Subject: [PATCH 2/9] winbindd: cleanup client connection if the client closes
 the connection

This patch allows for early cleanup of client connections if the client
has given up.
Before this patch, any received request would be processed, and only upon
transmitting the result would winbindd find out that the client is no
longer with us, possibly leading to a situation where the same client
retries over and over and drives up the number of client connections.

This patch monitors the client socket for readability while the request
is being processed, and closes the client connection if the socket
becomes readable. The client is not supposed to be writing anything to
the socket while it is waiting, so readability means either that the client
has closed the connection, or that it has broken the protocol.

Signed-off-by: Uri Simchoni <urisimchoni at gmail.com>
Reviewed-by: Volker Lendecke <vl at samba.org>
---
 source3/winbindd/winbindd.c |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/source3/winbindd/winbindd.c b/source3/winbindd/winbindd.c
index defc9cc..a03b5b4 100644
--- a/source3/winbindd/winbindd.c
+++ b/source3/winbindd/winbindd.c
@@ -42,6 +42,7 @@
 #include "source4/lib/messaging/irpc.h"
 #include "source4/lib/messaging/messaging.h"
 #include "lib/param/param.h"
+#include "lib/async_req/async_sock.h"
 
 #undef DBGC_CLASS
 #define DBGC_CLASS DBGC_WINBIND
@@ -809,11 +810,15 @@ static void request_finished(struct winbindd_cli_state *state);
 
 static void winbind_client_request_read(struct tevent_req *req);
 static void winbind_client_response_written(struct tevent_req *req);
+static void winbind_client_activity(struct tevent_req *req);
 
 static void request_finished(struct winbindd_cli_state *state)
 {
 	struct tevent_req *req;
 
+	/* free client socket monitoring request */
+	TALLOC_FREE(state->io_req);
+
 	TALLOC_FREE(state->request);
 
 	req = wb_resp_write_send(state, winbind_event_context(),
@@ -966,9 +971,32 @@ static void winbind_client_request_read(struct tevent_req *req)
 		remove_client(state);
 		return;
 	}
+
+	req = wait_for_read_send(state, winbind_event_context(), state->sock);
+	if (req == NULL) {
+		DEBUG(0, ("winbind_client_request_read[%d:%s]:"
+			  " wait_for_read_send failed - removing client\n",
+			  (int)state->pid, state->cmd_name));
+		remove_client(state);
+		return;
+	}
+	tevent_req_set_callback(req, winbind_client_activity, state);
+	state->io_req = req;
+
 	process_request(state);
 }
 
+static void winbind_client_activity(struct tevent_req *req)
+{
+	struct winbindd_cli_state *state =
+	    tevent_req_callback_data(req, struct winbindd_cli_state);
+	int err;
+
+	wait_for_read_recv(req, &err);
+
+	remove_client(state);
+}
+
 /* Remove a client connection from client connection list */
 
 static void remove_client(struct winbindd_cli_state *state)
-- 
1.7.9.5


From 7e6d8f5832875235d805570cb349f81a1ae9d5ea Mon Sep 17 00:00:00 2001
From: Uri Simchoni <urisimchoni at gmail.com>
Date: Thu, 25 Jun 2015 09:46:24 +0300
Subject: [PATCH 3/9] async_req: check for errors when monitoring socket for
 readability

Add an option to wait_for_read_send(), so that the request, upon
calling back, reports whether the socket actually contains data
or is in an EOF/error state. EOF is signalled via the EPIPE error.

This is useful for clients which do not expect data to arrive but
wait for readability to detect a closed socket (i.e. they do not
intend to actually read the socket when it's readable). Actual data
arrival would indicate a bug in this case, so the check can
be used to print an error message.
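
For readers unfamiliar with the idiom, here is a minimal standalone sketch
of the MSG_PEEK check the patch introduces (peek_socket_state() is a
hypothetical helper, not Samba code, and it leaves out the ENOTSOCK special
case handled further down in the series):

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Classify a socket that poll()/tevent reported as readable, without
   consuming any data: real data, orderly shutdown, or error. */
static int peek_socket_state(int fd)
{
	char c;
	ssize_t nread;

	do {
		nread = recv(fd, &c, 1, MSG_PEEK);
	} while ((nread == -1) && (errno == EINTR));

	if (nread == 0) {
		errno = EPIPE;	/* peer closed -- the patch maps EOF to EPIPE */
		return 0;
	}
	if (nread == -1) {
		return -1;	/* genuine socket error, errno already set */
	}
	return 1;		/* at least one byte is pending */
}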

Signed-off-by: Uri Simchoni <urisimchoni at gmail.com>
---
 lib/async_req/async_sock.c  |   32 ++++++++++++++++++++++++++++++--
 lib/async_req/async_sock.h  |    4 ++--
 source3/smbd/process.c      |    4 ++--
 source3/winbindd/winbindd.c |    3 ++-
 4 files changed, 36 insertions(+), 7 deletions(-)

diff --git a/lib/async_req/async_sock.c b/lib/async_req/async_sock.c
index d2cda15..cee49ea 100644
--- a/lib/async_req/async_sock.c
+++ b/lib/async_req/async_sock.c
@@ -534,6 +534,8 @@ ssize_t read_packet_recv(struct tevent_req *req, TALLOC_CTX *mem_ctx,
 
 struct wait_for_read_state {
 	struct tevent_fd *fde;
+	int fd;
+	bool check_errors;
 };
 
 static void wait_for_read_cleanup(struct tevent_req *req,
@@ -544,8 +546,8 @@ static void wait_for_read_done(struct tevent_context *ev,
 			       void *private_data);
 
 struct tevent_req *wait_for_read_send(TALLOC_CTX *mem_ctx,
-				      struct tevent_context *ev,
-				      int fd)
+				      struct tevent_context *ev, int fd,
+				      bool check_errors)
 {
 	struct tevent_req *req;
 	struct wait_for_read_state *state;
@@ -562,6 +564,9 @@ struct tevent_req *wait_for_read_send(TALLOC_CTX *mem_ctx,
 	if (tevent_req_nomem(state->fde, req)) {
 		return tevent_req_post(req, ev);
 	}
+
+	state->fd = fd;
+	state->check_errors = check_errors;
 	return req;
 }
 
@@ -581,8 +586,31 @@ static void wait_for_read_done(struct tevent_context *ev,
 {
 	struct tevent_req *req = talloc_get_type_abort(
 		private_data, struct tevent_req);
+	struct wait_for_read_state *state =
+	    tevent_req_data(req, struct wait_for_read_state);
+	ssize_t nread;
+	char c;
 
 	if (flags & TEVENT_FD_READ) {
+		if (state->check_errors) {
+			nread = recv(state->fd, &c, 1, MSG_PEEK);
+
+			if (nread == 0) {
+				tevent_req_error(req, EPIPE);
+				return;
+			}
+
+			if ((nread == -1) && (errno == EINTR)) {
+				/* come back later */
+				return;
+			}
+
+			if ((nread == -1) && (errno != ENOTSOCK)) {
+				tevent_req_error(req, errno);
+				return;
+			}
+		}
+
 		tevent_req_done(req);
 	}
 }
diff --git a/lib/async_req/async_sock.h b/lib/async_req/async_sock.h
index 1b76fab..abbf822 100644
--- a/lib/async_req/async_sock.h
+++ b/lib/async_req/async_sock.h
@@ -53,8 +53,8 @@ ssize_t read_packet_recv(struct tevent_req *req, TALLOC_CTX *mem_ctx,
 			 uint8_t **pbuf, int *perrno);
 
 struct tevent_req *wait_for_read_send(TALLOC_CTX *mem_ctx,
-				      struct tevent_context *ev,
-				      int fd);
+				      struct tevent_context *ev, int fd,
+				      bool check_errors);
 bool wait_for_read_recv(struct tevent_req *req, int *perr);
 
 #endif
diff --git a/source3/smbd/process.c b/source3/smbd/process.c
index c83f3bc..4d047a9 100644
--- a/source3/smbd/process.c
+++ b/source3/smbd/process.c
@@ -2846,7 +2846,7 @@ static struct tevent_req *smbd_echo_read_send(
 	state->ev = ev;
 	state->xconn = xconn;
 
-	subreq = wait_for_read_send(state, ev, xconn->transport.sock);
+	subreq = wait_for_read_send(state, ev, xconn->transport.sock, false);
 	if (tevent_req_nomem(subreq, req)) {
 		return tevent_req_post(req, ev);
 	}
@@ -2921,7 +2921,7 @@ static void smbd_echo_read_waited(struct tevent_req *subreq)
 		}
 
 		subreq = wait_for_read_send(state, state->ev,
-					    xconn->transport.sock);
+					    xconn->transport.sock, false);
 		if (tevent_req_nomem(subreq, req)) {
 			return;
 		}
diff --git a/source3/winbindd/winbindd.c b/source3/winbindd/winbindd.c
index a03b5b4..07faadf 100644
--- a/source3/winbindd/winbindd.c
+++ b/source3/winbindd/winbindd.c
@@ -972,7 +972,8 @@ static void winbind_client_request_read(struct tevent_req *req)
 		return;
 	}
 
-	req = wait_for_read_send(state, winbind_event_context(), state->sock);
+	req = wait_for_read_send(state, winbind_event_context(), state->sock,
+				 false);
 	if (req == NULL) {
 		DEBUG(0, ("winbind_client_request_read[%d:%s]:"
 			  " wait_for_read_send failed - removing client\n",
-- 
1.7.9.5


From 0c36d0dcfb9282c779e4fad6a025b7cae65b681e Mon Sep 17 00:00:00 2001
From: Volker Lendecke <vl at samba.org>
Date: Mon, 13 Jul 2015 09:57:08 +0200
Subject: [PATCH 4/9] VL: Simplify logic in wait_for_read_done

Deep indentations and complex boolean expressions are always difficult
for me to understand...
---
 lib/async_req/async_sock.c |   49 +++++++++++++++++++++++++++-----------------
 1 file changed, 30 insertions(+), 19 deletions(-)

diff --git a/lib/async_req/async_sock.c b/lib/async_req/async_sock.c
index cee49ea..bc3780c 100644
--- a/lib/async_req/async_sock.c
+++ b/lib/async_req/async_sock.c
@@ -591,28 +591,39 @@ static void wait_for_read_done(struct tevent_context *ev,
 	ssize_t nread;
 	char c;
 
-	if (flags & TEVENT_FD_READ) {
-		if (state->check_errors) {
-			nread = recv(state->fd, &c, 1, MSG_PEEK);
-
-			if (nread == 0) {
-				tevent_req_error(req, EPIPE);
-				return;
-			}
-
-			if ((nread == -1) && (errno == EINTR)) {
-				/* come back later */
-				return;
-			}
-
-			if ((nread == -1) && (errno != ENOTSOCK)) {
-				tevent_req_error(req, errno);
-				return;
-			}
-		}
+	if ((flags & TEVENT_FD_READ) == 0) {
+		return;
+	}
 
+	if (!state->check_errors) {
 		tevent_req_done(req);
+		return;
 	}
+
+	nread = recv(state->fd, &c, 1, MSG_PEEK);
+
+	if (nread == 0) {
+		tevent_req_error(req, EPIPE);
+		return;
+	}
+
+	if ((nread == -1) && (errno == EINTR)) {
+		/* come back later */
+		return;
+	}
+
+	if ((nread == -1) && (errno == ENOTSOCK)) {
+		/* Ignore this specific error on pipes */
+		tevent_req_done(req);
+		return;
+	}
+
+	if (nread == -1) {
+		tevent_req_error(req, errno);
+		return;
+	}
+
+	tevent_req_done(req);
 }
 
 bool wait_for_read_recv(struct tevent_req *req, int *perr)
-- 
1.7.9.5


From 40228ff3e31854cd8209e05f4f4bfb5de3458720 Mon Sep 17 00:00:00 2001
From: Uri Simchoni <urisimchoni at gmail.com>
Date: Thu, 25 Jun 2015 10:12:37 +0300
Subject: [PATCH 5/9] winbindd: verify that client has closed the connection

A recent change was to remove a client if the client socket
becomes readable. This change adds a check to determine the
source of the readability (actual data, a closed connection,
or some other error), and a suitable debug message is printed.

Signed-off-by: Uri Simchoni <urisimchoni at gmail.com>
---
 source3/winbindd/winbindd.c |   20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/source3/winbindd/winbindd.c b/source3/winbindd/winbindd.c
index 07faadf..5ddc1ef 100644
--- a/source3/winbindd/winbindd.c
+++ b/source3/winbindd/winbindd.c
@@ -973,7 +973,7 @@ static void winbind_client_request_read(struct tevent_req *req)
 	}
 
 	req = wait_for_read_send(state, winbind_event_context(), state->sock,
-				 false);
+				 true);
 	if (req == NULL) {
 		DEBUG(0, ("winbind_client_request_read[%d:%s]:"
 			  " wait_for_read_send failed - removing client\n",
@@ -992,8 +992,24 @@ static void winbind_client_activity(struct tevent_req *req)
 	struct winbindd_cli_state *state =
 	    tevent_req_callback_data(req, struct winbindd_cli_state);
 	int err;
+	bool has_data;
 
-	wait_for_read_recv(req, &err);
+	has_data = wait_for_read_recv(req, &err);
+
+	if (!has_data && (err == EPIPE)) {
+		DEBUG(6, ("winbind_client_activity[%d:%s]:"
+			  " client has closed connection - removing client\n",
+			  (int)state->pid, state->cmd_name));
+	} else if (!has_data) {
+		DEBUG(2, ("winbind_client_activity[%d:%s]:"
+			  " client socket error"
+			  " (%s) - removing client\n",
+			  (int)state->pid, state->cmd_name, strerror(err)));
+	} else {
+		DEBUG(0, ("winbind_client_activity[%d:%s]:"
+			  " unexpectd data from client - removing client\n",
+			  (int)state->pid, state->cmd_name));
+	}
 
 	remove_client(state);
 }
-- 
1.7.9.5


From 7eaf344b730e99d9e59b3b044f80bc784e935789 Mon Sep 17 00:00:00 2001
From: Volker Lendecke <vl at samba.org>
Date: Mon, 13 Jul 2015 10:00:47 +0200
Subject: [PATCH 6/9] VL: Simplify logic in winbind_client_activity

---
 source3/winbindd/winbindd.c |   26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/source3/winbindd/winbindd.c b/source3/winbindd/winbindd.c
index 5ddc1ef..fb47781 100644
--- a/source3/winbindd/winbindd.c
+++ b/source3/winbindd/winbindd.c
@@ -996,19 +996,23 @@ static void winbind_client_activity(struct tevent_req *req)
 
 	has_data = wait_for_read_recv(req, &err);
 
-	if (!has_data && (err == EPIPE)) {
-		DEBUG(6, ("winbind_client_activity[%d:%s]:"
-			  " client has closed connection - removing client\n",
-			  (int)state->pid, state->cmd_name));
-	} else if (!has_data) {
-		DEBUG(2, ("winbind_client_activity[%d:%s]:"
-			  " client socket error"
-			  " (%s) - removing client\n",
-			  (int)state->pid, state->cmd_name, strerror(err)));
-	} else {
+	if (has_data) {
 		DEBUG(0, ("winbind_client_activity[%d:%s]:"
-			  " unexpectd data from client - removing client\n",
+			  "unexpected data from client - removing client\n",
 			  (int)state->pid, state->cmd_name));
+	} else {
+		if (err == EPIPE) {
+			DEBUG(6, ("winbind_client_activity[%d:%s]: "
+				  "client has closed connection - removing "
+				  "client\n",
+				  (int)state->pid, state->cmd_name));
+		} else {
+			DEBUG(2, ("winbind_client_activity[%d:%s]: "
+				  "client socket error (%s) - removing "
+				  "client\n",
+				  (int)state->pid, state->cmd_name,
+				  strerror(err)));
+		}
 	}
 
 	remove_client(state);
-- 
1.7.9.5


From b1ce4d0b5f31e8da27a760a0a0ba21c5b7a65dc4 Mon Sep 17 00:00:00 2001
From: Uri Simchoni <urisimchoni at gmail.com>
Date: Wed, 3 Jun 2015 00:36:27 +0300
Subject: [PATCH 7/9] winbind client: avoid vicious cycle created by client
 retry

This patch cancels the retry policy of the winbind client.

When winbindd fails to respond to a request within 30 seconds,
the winbind client closes the connection and retries up to 10
times.

In some cases, delayed response is a result of multiple
requests from multiple clients piling up on the winbind domain
child process. Retrying just piles more and more requests,
creating a vicious cycle.

Even in the case of a single request taking long to complete,
there's no point in retrying because the retry request would just
wait for the current request to complete. Better to wait patiently.

There's one possible benefit in the retry, namely that winbindd typically
caches the results, so a retry might pick up a cached result; the net
effect of the retries is roughly that of a 300-second timeout. A more
straightforward way to get a 300-second timeout is to modify the timeout
itself, so it is changed from 30 seconds to 300 seconds.

(IMHO 300 seconds is too much, but we have "winbind request timeout"
with a default of 60 to make sure the request completes or fails
within 60 seconds)

Signed-off-by: Uri Simchoni <urisimchoni at gmail.com>
Reviewed-by: Jeremy Allison <jra at samba.org>
Reviewed-by: Volker Lendecke <vl at samba.org>
---
 nsswitch/wb_common.c |   26 +++++++++-----------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/nsswitch/wb_common.c b/nsswitch/wb_common.c
index 3194df1..139f010 100644
--- a/nsswitch/wb_common.c
+++ b/nsswitch/wb_common.c
@@ -535,7 +535,7 @@ static int winbind_read_sock(struct winbindd_context *ctx,
 
 		if (ret == 0) {
 			/* Not ready for read yet... */
-			if (total_time >= 30) {
+			if (total_time >= 300) {
 				/* Timeout */
 				winbind_close_sock(ctx);
 				return -1;
@@ -719,20 +719,16 @@ NSS_STATUS winbindd_request_response(struct winbindd_context *ctx,
 				     struct winbindd_response *response)
 {
 	NSS_STATUS status = NSS_STATUS_UNAVAIL;
-	int count = 0;
 	struct winbindd_context *wb_ctx = ctx;
 
 	if (ctx == NULL) {
 		wb_ctx = &wb_global_ctx;
 	}
 
-	while ((status == NSS_STATUS_UNAVAIL) && (count < 10)) {
-		status = winbindd_send_request(wb_ctx, req_type, 0, request);
-		if (status != NSS_STATUS_SUCCESS)
-			return(status);
-		status = winbindd_get_response(wb_ctx, response);
-		count += 1;
-	}
+	status = winbindd_send_request(wb_ctx, req_type, 0, request);
+	if (status != NSS_STATUS_SUCCESS)
+		return (status);
+	status = winbindd_get_response(wb_ctx, response);
 
 	return status;
 }
@@ -743,20 +739,16 @@ NSS_STATUS winbindd_priv_request_response(struct winbindd_context *ctx,
 					  struct winbindd_response *response)
 {
 	NSS_STATUS status = NSS_STATUS_UNAVAIL;
-	int count = 0;
 	struct winbindd_context *wb_ctx = ctx;
 
 	if (ctx == NULL) {
 		wb_ctx = &wb_global_ctx;
 	}
 
-	while ((status == NSS_STATUS_UNAVAIL) && (count < 10)) {
-		status = winbindd_send_request(wb_ctx, req_type, 1, request);
-		if (status != NSS_STATUS_SUCCESS)
-			return(status);
-		status = winbindd_get_response(wb_ctx, response);
-		count += 1;
-	}
+	status = winbindd_send_request(wb_ctx, req_type, 1, request);
+	if (status != NSS_STATUS_SUCCESS)
+		return (status);
+	status = winbindd_get_response(wb_ctx, response);
 
 	return status;
 }
-- 
1.7.9.5


From 59611ea1208b326087a7d700bf2ac3b40ffab847 Mon Sep 17 00:00:00 2001
From: Uri Simchoni <urisimchoni at gmail.com>
Date: Mon, 6 Jul 2015 21:29:17 +0300
Subject: [PATCH 8/9] winbindd: periodically remove timed out clients

Periodically scan the winbind client list and close connections
whose client is idle or whose request is taking too long to
complete.

Signed-off-by: Uri Simchoni <urisimchoni at gmail.com>
Reviewed-by: Jeremy Allison <jra at samba.org>

VL -- how much load does this generate with 10k clients?
---
 source3/winbindd/winbindd.c |   18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/source3/winbindd/winbindd.c b/source3/winbindd/winbindd.c
index fb47781..66ea68b 100644
--- a/source3/winbindd/winbindd.c
+++ b/source3/winbindd/winbindd.c
@@ -47,6 +47,8 @@
 #undef DBGC_CLASS
 #define DBGC_CLASS DBGC_WINBIND
 
+#define SCRUB_CLIENTS_INTERVAL 5
+
 static bool client_is_idle(struct winbindd_cli_state *state);
 static void remove_client(struct winbindd_cli_state *state);
 static void winbindd_setup_max_fds(void);
@@ -1143,6 +1145,20 @@ static void remove_timed_out_clients(void)
 	}
 }
 
+static void winbindd_scrub_clients_handler(struct tevent_context *ev,
+					   struct tevent_timer *te,
+					   struct timeval current_time,
+					   void *private_data)
+{
+	remove_timed_out_clients();
+	if (tevent_add_timer(ev, ev,
+			     timeval_current_ofs(SCRUB_CLIENTS_INTERVAL, 0),
+			     winbindd_scrub_clients_handler, NULL) == NULL) {
+		DEBUG(0, ("winbindd: failed to reschedule client scrubber\n"));
+		exit(1);
+	}
+}
+
 struct winbindd_listen_state {
 	bool privileged;
 	int fd;
@@ -1276,6 +1292,8 @@ static bool winbindd_setup_listeners(void)
 	}
 	tevent_fd_set_auto_close(fde);
 
+	winbindd_scrub_clients_handler(winbind_event_context(), NULL,
+				       timeval_current(), NULL);
 	return true;
 failed:
 	TALLOC_FREE(pub_state);
-- 
1.7.9.5


From f2e94696667af1153816133961d68299eef35d1d Mon Sep 17 00:00:00 2001
From: Uri Simchoni <urisimchoni at gmail.com>
Date: Mon, 6 Jul 2015 12:13:15 +0300
Subject: [PATCH 9/9] doc: clarify "winbind max clients"

Add clarification about the nature of the "winbind max clients" parameter.

Signed-off-by: Uri Simchoni <urisimchoni at gmail.com>
Reviewed-by: Volker Lendecke <vl at samba.org>
---
 docs-xml/smbdotconf/winbind/winbindmaxclients.xml |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs-xml/smbdotconf/winbind/winbindmaxclients.xml b/docs-xml/smbdotconf/winbind/winbindmaxclients.xml
index d37ee25..c11d0b6 100644
--- a/docs-xml/smbdotconf/winbind/winbindmaxclients.xml
+++ b/docs-xml/smbdotconf/winbind/winbindmaxclients.xml
@@ -6,6 +6,12 @@
 	<para>This parameter specifies the maximum number of clients
 	the <citerefentry><refentrytitle>winbindd</refentrytitle>
 	<manvolnum>8</manvolnum></citerefentry> daemon can connect with.
+	The parameter is not a hard limit.
+	The <citerefentry><refentrytitle>winbindd</refentrytitle>
+	<manvolnum>8</manvolnum></citerefentry> daemon configures
+	itself to be able to accept at least that many connections,
+	and if the limit is reached, an attempt is made to disconnect
+	idle clients.
 	</para>
 </description>
 
-- 
1.7.9.5


