[SCM] Samba Shared Repository - branch v4-20-test updated

Jule Anger janger at samba.org
Thu Apr 11 12:25:01 UTC 2024


The branch, v4-20-test has been updated
       via  077f39baf7c libcli/http: Detect unsupported Transfer-encoding type
       via  2fb1bf0205f selftest: Add new test for testing non-chunk transfer encoding
       via  30bf3d1430f selftest: fix potential reference before assigned error
       via  a70e3a36c82 libcli/http: Handle http chunked transfer encoding
       via  7e17e4809d5 tests: add test for chunked encoding with http cli library
       via  26206392153 libcli/http: Optimise reading for content-length
       via  71eac5a065f selftest: Add basic content-length http tests
       via  19250e13ab6 Add simple http_client for use in black box tests (in following commits)
      from  eaefe50327d VERSION: Bump version up to Samba 4.20.1...

https://git.samba.org/?p=samba.git;a=shortlog;h=v4-20-test


- Log -----------------------------------------------------------------
commit 077f39baf7cc7f4e4ee8709d48b1cb23b8736c1c
Author: Noel Power <noel.power at suse.com>
Date:   Thu Mar 28 10:48:58 2024 +0000

    libcli/http: Detect unsupported Transfer-encoding type
    
    Also removes the knownfail for the test that now passes
    
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=15611
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Andrew Bartlett <abartlet at samba.org>
    (cherry picked from commit a18c53a9b98e2e8dea08cf0ef08efc59e58ec137)
    
    Autobuild-User(v4-20-test): Jule Anger <janger at samba.org>
    Autobuild-Date(v4-20-test): Thu Apr 11 12:24:08 UTC 2024 on atb-devel-224

commit 2fb1bf0205f9b5f72d8e1f51e55cf86997639a46
Author: Noel Power <noel.power at suse.com>
Date:   Thu Mar 28 09:16:33 2024 +0000

    selftest: Add new test for testing non-chunk transfer encoding
    
    And add a known fail because there is a bug :-(
    
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=15611
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Andrew Bartlett <abartlet at samba.org>
    (cherry picked from commit 93709d31590d4ca25fbac813b9e499755b81ddb5)

commit 30bf3d1430f96a42c7b90ef215daa33b427da8b9
Author: Noel Power <noel.power at suse.com>
Date:   Thu Mar 28 09:09:02 2024 +0000

    selftest: fix potential reference before assigned error
    
    This would only happen if the test failed (and the message would be
    incorrect, as 'e', the exception to be stringified, doesn't exist).
    
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=15611
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Andrew Bartlett <abartlet at samba.org>
    (cherry picked from commit efdbf0511e0a89f865210170001fbebf17a45278)

commit a70e3a36c8244a324f5e8fa7b138dae5684055e0
Author: Noel Power <noel.power at suse.com>
Date:   Mon Mar 25 19:44:10 2024 +0000

    libcli/http: Handle http chunked transfer encoding
    
    Also removes the knownfail for the chunked transfer test
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Andrew Bartlett <abartlet at samba.org>
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=15611
    (cherry picked from commit 03240c91fb6ffcf5afe47c14a1ba7a8bc12f2348)
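    [Editorial note: for context, the chunked transfer coding handled by this
    commit frames the body as a series of hex-length-prefixed chunks, each
    terminated by CRLF, ending with a zero-size chunk. A minimal illustrative
    decoder (a sketch of the wire format only, not the Samba C implementation):

    ```python
    def decode_chunked(data: bytes) -> bytes:
        """Decode an HTTP/1.1 chunked-encoded body (illustrative only)."""
        body = b""
        while True:
            # Each chunk starts with its size in hex, terminated by CRLF.
            size_line, _, rest = data.partition(b"\r\n")
            size = int(size_line.split(b";")[0], 16)  # ignore chunk extensions
            if size == 0:
                # A zero-size chunk marks the end of the body.
                return body
            body += rest[:size]
            # Each chunk's data is followed by its own CRLF terminator.
            data = rest[size + 2:]

    wire = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
    print(decode_chunked(wire))  # b'Wikipedia'
    ```
    ]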

commit 7e17e4809d593e1ce2d51583a351b38300a20e2a
Author: Noel Power <noel.power at suse.com>
Date:   Thu Sep 23 12:18:22 2021 +0100

    tests: add test for chunked encoding with http cli library
    
    Adds an http test client to exercise the http client library
    and a blackbox test to run the client. This client is built
    only when selftest is enabled.
    
    Also adds a knownfail for the test.
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Andrew Bartlett <abartlet at samba.org>
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=15611
    (cherry picked from commit 30acd609f560352d3edb0c931b9a864110025b2c)

commit 26206392153248fb2be1ec95a2e3ac14f9356125
Author: Noel Power <noel.power at suse.com>
Date:   Fri Mar 22 08:55:49 2024 +0000

    libcli/http: Optimise reading for content-length
    
    Instead of reading byte-by-byte, use the content length we already
    know to read the whole body at once.
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Andrew Bartlett <abartlet at samba.org>
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=15611
    (cherry picked from commit 5f03d84e3b52bf5a31a0f885cb83bdcb48ec96f7)
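    [Editorial note: the idea behind this optimisation, sketched in Python
    rather than the Samba C code, is to request all outstanding bytes in one
    read instead of growing the buffer one byte at a time:

    ```python
    import io

    def read_body(stream, content_length: int) -> bytes:
        """Read exactly content_length bytes using large reads rather than
        one byte per call (illustrative sketch, not the Samba implementation)."""
        parts = []
        remaining = content_length
        while remaining > 0:
            # Ask for everything still outstanding in one call; the stream
            # may return less, so loop until the full body has arrived.
            chunk = stream.read(remaining)
            if not chunk:
                raise EOFError("stream closed before full body was read")
            parts.append(chunk)
            remaining -= len(chunk)
        return b"".join(parts)

    stream = io.BytesIO(b"hello world")
    print(read_body(stream, 11))  # b'hello world'
    ```
    ]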

commit 71eac5a065fac4023601b067b850d209a7dec149
Author: Noel Power <noel.power at suse.com>
Date:   Mon Mar 25 16:25:55 2024 +0000

    selftest: Add basic content-length http tests
    
    A very simple test of a basic http request/response, plus some checks
    to ensure the http response doesn't exceed the max response length set
    by the client call.
    
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=15611
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Andrew Bartlett <abartlet at samba.org>
    (cherry picked from commit 74cdebeae3d1bc35eea96b51b9491f6c52844b10)
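    [Editorial note: the kind of check these tests exercise can be sketched as
    follows; the names and the limit are hypothetical, not the Samba API. A
    response announcing a Content-Length larger than the client's configured
    maximum is rejected before the body is read:

    ```python
    MAX_CONTENT_LENGTH = 1024  # hypothetical client-side limit

    def accept_response(headers: dict) -> bool:
        """Return True if the announced body fits within the client's
        max content length (illustrative only)."""
        try:
            length = int(headers.get("Content-Length", "0"))
        except ValueError:
            return False  # malformed Content-Length header
        return 0 <= length <= MAX_CONTENT_LENGTH

    print(accept_response({"Content-Length": "512"}))   # True
    print(accept_response({"Content-Length": "4096"}))  # False
    ```
    ]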

commit 19250e13ab6c654405baf7c7d9c18f514ceade0f
Author: Noel Power <noel.power at suse.com>
Date:   Mon Mar 25 19:21:54 2024 +0000

    Add simple http_client for use in black box tests (in following commits)
    
    BUG: https://bugzilla.samba.org/show_bug.cgi?id=15611
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Andrew Bartlett <abartlet at samba.org>
    (cherry picked from commit cd6c075476c820b4fe8bdc10a24d8fc8ac74e9c9)

-----------------------------------------------------------------------

Summary of changes:
 libcli/http/http.c                          | 309 +++++++++++++++++++--
 libcli/http/http_internal.h                 |   4 +
 python/samba/tests/blackbox/http_chunk.py   | 129 +++++++++
 python/samba/tests/blackbox/http_content.py |  95 +++++++
 selftest/tests.py                           |   2 +
 source4/client/http_test.c                  | 401 ++++++++++++++++++++++++++++
 source4/wscript_build                       |   5 +
 7 files changed, 925 insertions(+), 20 deletions(-)
 create mode 100644 python/samba/tests/blackbox/http_chunk.py
 create mode 100644 python/samba/tests/blackbox/http_content.py
 create mode 100644 source4/client/http_test.c


Changeset truncated at 500 lines:

diff --git a/libcli/http/http.c b/libcli/http/http.c
index 96c573af137..6f22214f706 100644
--- a/libcli/http/http.c
+++ b/libcli/http/http.c
@@ -28,16 +28,28 @@
 
 #undef strcasecmp
 
+enum http_body_type {
+	BODY_NONE = 0,
+	BODY_CONTENT_LENGTH,
+	BODY_CHUNKED,
+	BODY_ERROR = -1
+};
+
 /**
  * Determines if a response should have a body.
- * @return 1 if the response MUST have a body; 0 if the response MUST NOT have
- *     a body. Returns -1 on error.
+ * @return 2 if response MUST use chunked encoding,
+ *         1 if the response MUST have a body;
+ *         0 if the response MUST NOT have a body.
+ * Returns -1 on error.
  */
-static int http_response_needs_body(struct http_request *req)
+static enum http_body_type http_response_needs_body(
+					struct http_request *req)
 {
 	struct http_header *h = NULL;
 
-	if (!req) return -1;
+	if (!req) {
+		return BODY_ERROR;
+	}
 
 	for (h = req->headers; h != NULL; h = h->next) {
 		int cmp;
@@ -45,6 +57,18 @@ static int http_response_needs_body(struct http_request *req)
 		char c;
 		unsigned long long v;
 
+		cmp = strcasecmp(h->key, "Transfer-Encoding");
+		if (cmp == 0) {
+			cmp = strcasecmp(h->value, "chunked");
+			if (cmp == 0) {
+				return BODY_CHUNKED;
+			}
+			/* unsupported Transfer-Encoding type */
+			DBG_ERR("Unsupported transfer encoding type %s\n",
+				h->value);
+			return BODY_ERROR;
+		}
+
 		cmp = strcasecmp(h->key, "Content-Length");
 		if (cmp != 0) {
 			continue;
@@ -52,20 +76,25 @@ static int http_response_needs_body(struct http_request *req)
 
 		n = sscanf(h->value, "%llu%c", &v, &c);
 		if (n != 1) {
-			return -1;
+			return BODY_ERROR;
 		}
 
 		req->remaining_content_length = v;
 
 		if (v != 0) {
-			return 1;
+			return BODY_CONTENT_LENGTH;
 		}
 
-		return 0;
+		return BODY_NONE;
 	}
 
-	return 0;
+	return BODY_NONE;
 }
+struct http_chunk
+{
+	struct http_chunk *prev, *next;
+	DATA_BLOB blob;
+};
 
 struct http_read_response_state {
 	enum http_parser_state	parser_state;
@@ -73,6 +102,7 @@ struct http_read_response_state {
 	uint64_t		max_content_length;
 	DATA_BLOB		buffer;
 	struct http_request	*response;
+	struct http_chunk	*chunks;
 };
 
 /**
@@ -86,7 +116,7 @@ static enum http_read_status http_parse_headers(struct http_read_response_state
 	char			*key = NULL;
 	char			*value = NULL;
 	int			n = 0;
-	int			ret;
+	enum http_body_type	ret;
 
 	/* Sanity checks */
 	if (!state || !state->response) {
@@ -119,19 +149,24 @@ static enum http_read_status http_parse_headers(struct http_read_response_state
 
 		ret = http_response_needs_body(state->response);
 		switch (ret) {
-		case 1:
+		case BODY_CHUNKED:
+			DEBUG(11, ("%s: need to process chunks... %d\n", __func__,
+				   state->response->response_code));
+			state->parser_state = HTTP_READING_CHUNK_SIZE;
+			break;
+		case BODY_CONTENT_LENGTH:
 			if (state->response->remaining_content_length <= state->max_content_length) {
 				DEBUG(11, ("%s: Start of read body\n", __func__));
 				state->parser_state = HTTP_READING_BODY;
 				break;
 			}
 			FALL_THROUGH;
-		case 0:
+		case BODY_NONE:
 			DEBUG(11, ("%s: Skipping body for code %d\n", __func__,
 				   state->response->response_code));
 			state->parser_state = HTTP_READING_DONE;
 			break;
-		case -1:
+		case BODY_ERROR:
 			DEBUG(0, ("%s_: Error in http_response_needs_body\n", __func__));
 			TALLOC_FREE(line);
 			return HTTP_DATA_CORRUPTED;
@@ -162,6 +197,141 @@ error:
 	return status;
 }
 
+static bool http_response_process_chunks(struct http_read_response_state *state)
+{
+	struct http_chunk *chunk = NULL;
+	struct http_request *resp = state->response;
+
+	for (chunk = state->chunks; chunk; chunk = chunk->next) {
+		DBG_DEBUG("processing chunk of size %zi\n",
+			  chunk->blob.length);
+		if (resp->body.data == NULL) {
+			resp->body = chunk->blob;
+			chunk->blob = data_blob_null;
+			talloc_steal(resp, resp->body.data);
+			continue;
+		}
+
+		resp->body.data =
+			talloc_realloc(resp,
+				resp->body.data,
+				uint8_t,
+				resp->body.length + chunk->blob.length);
+		if (!resp->body.data) {
+				return false;
+		}
+		memcpy(resp->body.data + resp->body.length,
+		       chunk->blob.data,
+		       chunk->blob.length);
+
+		resp->body.length += chunk->blob.length;
+
+		TALLOC_FREE(chunk->blob.data);
+		chunk->blob = data_blob_null;
+	}
+	return true;
+}
+
+static enum http_read_status http_read_chunk_term(struct http_read_response_state *state)
+{
+	enum http_read_status	status = HTTP_ALL_DATA_READ;
+	char			*ptr = NULL;
+	char			*line = NULL;
+
+	/* Sanity checks */
+	if (!state || !state->response) {
+		DBG_ERR("%s: Invalid Parameter\n", __func__);
+		return HTTP_DATA_CORRUPTED;
+	}
+
+	line = talloc_strndup(state, (char *)state->buffer.data, state->buffer.length);
+	if (!line) {
+		DBG_ERR("%s: Memory error\n", __func__);
+		return HTTP_DATA_CORRUPTED;
+	}
+	ptr = strstr(line, "\r\n");
+	if (ptr == NULL) {
+		TALLOC_FREE(line);
+		return HTTP_MORE_DATA_EXPECTED;
+	}
+
+	if (strncmp(line, "\r\n", 2) == 0) {
+		/* chunk terminator */
+		if (state->parser_state == HTTP_READING_FINAL_CHUNK_TERM) {
+			if (http_response_process_chunks(state) == false) {
+				status = HTTP_DATA_CORRUPTED;
+				goto out;
+			}
+			state->parser_state = HTTP_READING_DONE;
+		} else {
+			state->parser_state = HTTP_READING_CHUNK_SIZE;
+		}
+		status = HTTP_ALL_DATA_READ;
+		goto out;
+	}
+
+	status = HTTP_DATA_CORRUPTED;
+out:
+	TALLOC_FREE(line);
+	return status;
+}
+
+static enum http_read_status http_read_chunk_size(struct http_read_response_state *state)
+{
+	enum http_read_status	status = HTTP_ALL_DATA_READ;
+	char			*ptr = NULL;
+	char			*line = NULL;
+	char			*value = NULL;
+	int			n = 0;
+	unsigned long long v;
+
+	/* Sanity checks */
+	if (!state || !state->response) {
+		DBG_ERR("%s: Invalid Parameter\n", __func__);
+		return HTTP_DATA_CORRUPTED;
+	}
+
+	line = talloc_strndup(state, (char *)state->buffer.data, state->buffer.length);
+	if (!line) {
+		DBG_ERR("%s: Memory error\n", __func__);
+		return HTTP_DATA_CORRUPTED;
+	}
+	ptr = strstr(line, "\r\n");
+	if (ptr == NULL) {
+		TALLOC_FREE(line);
+		return HTTP_MORE_DATA_EXPECTED;
+	}
+
+	n = sscanf(line, "%m[^\r\n]\r\n", &value);
+	if (n != 1) {
+		DBG_ERR("%s: Error parsing chunk size '%s'\n", __func__, line);
+		status = HTTP_DATA_CORRUPTED;
+		goto out;
+	}
+
+	DBG_DEBUG("Got chunk size string %s\n", value);
+	n = sscanf(value, "%llx", &v);
+	if (n != 1) {
+		DBG_ERR("%s: Error parsing chunk size '%s'\n", __func__, line);
+		status = HTTP_DATA_CORRUPTED;
+		goto out;
+	}
+	DBG_DEBUG("Got chunk size %llu 0x%llx\n", v, v);
+	if (v == 0) {
+		state->parser_state = HTTP_READING_FINAL_CHUNK_TERM;
+	} else {
+		state->parser_state = HTTP_READING_CHUNK;
+	}
+	state->response->remaining_content_length = v;
+	status = HTTP_ALL_DATA_READ;
+out:
+	if (value) {
+		free(value);
+	}
+	TALLOC_FREE(line);
+	return status;
+}
+
 /**
  * Parses the first line of a HTTP response
  */
@@ -301,6 +471,55 @@ static enum http_read_status http_read_body(struct http_read_response_state *sta
 	return HTTP_ALL_DATA_READ;
 }
 
+static enum http_read_status http_read_chunk(struct http_read_response_state *state)
+{
+	struct http_request *resp = state->response;
+	struct http_chunk *chunk = NULL;
+	size_t total = 0;
+	size_t prev = 0;
+
+	if (state->buffer.length < resp->remaining_content_length) {
+		return HTTP_MORE_DATA_EXPECTED;
+	}
+
+	for (chunk = state->chunks; chunk; chunk = chunk->next) {
+		total += chunk->blob.length;
+	}
+
+	prev = total;
+	total = total + state->buffer.length;
+	if (total < prev) {
+		DBG_ERR("adding chunklen %zu to buf len %zu "
+			"will overflow\n",
+			state->buffer.length,
+			prev);
+		return HTTP_DATA_CORRUPTED;
+	}
+	if (total > state->max_content_length)  {
+		DBG_DEBUG("size %zu exceeds "
+			  "max content len %"PRIu64" skipping body\n",
+			  total,
+			  state->max_content_length);
+		state->parser_state = HTTP_READING_DONE;
+		goto out;
+	}
+
+	/* chunk read */
+	chunk = talloc_zero(state, struct http_chunk);
+	if (chunk == NULL) {
+		DBG_ERR("%s: Memory error\n", __func__);
+		return HTTP_DATA_CORRUPTED;
+	}
+	chunk->blob = state->buffer;
+	talloc_steal(chunk, chunk->blob.data);
+	DLIST_ADD_END(state->chunks, chunk);
+	state->parser_state = HTTP_READING_CHUNK_TERM;
+out:
+	state->buffer = data_blob_null;
+	resp->remaining_content_length = 0;
+	return HTTP_ALL_DATA_READ;
+}
+
 static enum http_read_status http_read_trailer(struct http_read_response_state *state)
 {
 	enum http_read_status status = HTTP_DATA_CORRUPTED;
@@ -323,6 +542,16 @@ static enum http_read_status http_parse_buffer(struct http_read_response_state *
 		case HTTP_READING_BODY:
 			return http_read_body(state);
 			break;
+		case HTTP_READING_FINAL_CHUNK_TERM:
+		case HTTP_READING_CHUNK_TERM:
+			return http_read_chunk_term(state);
+			break;
+		case HTTP_READING_CHUNK_SIZE:
+			return http_read_chunk_size(state);
+			break;
+		case HTTP_READING_CHUNK:
+			return http_read_chunk(state);
+			break;
 		case HTTP_READING_TRAILER:
 			return http_read_trailer(state);
 			break;
@@ -527,20 +756,60 @@ static int http_read_response_next_vector(struct tstream_context *stream,
 				*_count = 1;
 			}
 			break;
-		case HTTP_MORE_DATA_EXPECTED:
-			/* TODO Optimize, allocating byte by byte */
-			state->buffer.data = talloc_realloc(state, state->buffer.data,
-							    uint8_t, state->buffer.length + 1);
+		case HTTP_MORE_DATA_EXPECTED: {
+			size_t toread = 1;
+			size_t total;
+			if (state->parser_state == HTTP_READING_BODY ||
+			    state->parser_state == HTTP_READING_CHUNK) {
+				struct http_request *resp = state->response;
+				toread = resp->remaining_content_length -
+					 state->buffer.length;
+			}
+
+			total = toread + state->buffer.length;
+
+			if (total < state->buffer.length)  {
+				DBG_ERR("adding %zu to buf len %zu "
+					"will overflow\n",
+					toread,
+					state->buffer.length);
+					return -1;
+			}
+
+			/*
+			 * test if content-length message exceeds the
+			 * specified max_content_length
+			 * Note: This check won't be hit at the moment
+			 *       due to an existing check in parse_headers
+			 *       which will skip the body. Check is here
+			 *       for completeness and to cater for future
+			 *       code changes.
+			 */
+			if (state->parser_state == HTTP_READING_BODY) {
+				if (total > state->max_content_length)  {
+					DBG_ERR("content size %zu exceeds "
+						"max content len %"PRIu64"\n",
+						total,
+						state->max_content_length);
+					return -1;
+				}
+			}
+
+			state->buffer.data =
+				talloc_realloc(state, state->buffer.data,
+					       uint8_t,
+					       state->buffer.length + toread);
 			if (!state->buffer.data) {
 				return -1;
 			}
-			state->buffer.length++;
+			state->buffer.length += toread;
 			vector[0].iov_base = (void *)(state->buffer.data +
-						      state->buffer.length - 1);
-			vector[0].iov_len = 1;
+					     state->buffer.length - toread);
+			vector[0].iov_len = toread;
 			*_vector = vector;
 			*_count = 1;
 			break;
+		}
 		case HTTP_DATA_CORRUPTED:
 		case HTTP_REQUEST_CANCELED:
 		case HTTP_DATA_TOO_LONG:
@@ -603,7 +872,7 @@ static void http_read_response_done(struct tevent_req *subreq)
 {
 	NTSTATUS			status;
 	struct tevent_req		*req;
-	int				ret;
+	enum http_body_type		ret;
 	int				sys_errno;
 
 	if (!subreq) {
diff --git a/libcli/http/http_internal.h b/libcli/http/http_internal.h
index ec17f7e2850..786ace62d84 100644
--- a/libcli/http/http_internal.h
+++ b/libcli/http/http_internal.h
@@ -28,6 +28,10 @@ enum http_parser_state {
 	HTTP_READING_BODY,
 	HTTP_READING_TRAILER,
 	HTTP_READING_DONE,
+	HTTP_READING_CHUNK_SIZE,
+	HTTP_READING_CHUNK,
+	HTTP_READING_CHUNK_TERM,
+	HTTP_READING_FINAL_CHUNK_TERM,
 };
 
 enum http_read_status {
diff --git a/python/samba/tests/blackbox/http_chunk.py b/python/samba/tests/blackbox/http_chunk.py
new file mode 100644
index 00000000000..6745c8cb392
--- /dev/null
+++ b/python/samba/tests/blackbox/http_chunk.py
@@ -0,0 +1,129 @@
+# Blackbox tests for http_test
+#
+# Copyright (C) Noel Power noel.power at suse.com
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+import os
+import time
+import threading
+import logging
+import json
+from http.server import HTTPServer, BaseHTTPRequestHandler
+from samba.logger import get_samba_logger
+from samba.tests import BlackboxTestCase, BlackboxProcessError
+
+logger = get_samba_logger(name=__name__)
+COMMAND = "bin/http_test"
+def make_chunks(msg, chunk_size):
+    chunks = []
+    while len(msg) > chunk_size:
+        chunk = msg[:chunk_size]
+        chunks.append(chunk)
+        msg = msg[chunk_size:]
+    if len(msg):
+        chunks.append(msg)
+    return chunks
+
+# simple handler, spits back the 'path' passed in
+# GET or POST and a chunked encoded http response
+# where the chunk size is 10 octets
+class ChunkHTTPRequestHandler(BaseHTTPRequestHandler):
+    def handle_req(self):
+        msg = bytes(self.path, encoding="utf-8")
+        chunks = make_chunks(msg, 10)
+
+        self.send_response(200)
+        self.send_header('content-type', 'application/json; charset=UTF-8')
+        if self.path == "usegziptransferencoding":
+            self.send_header('Transfer-Encoding', 'gzip')
+        else:
+            self.send_header('Transfer-Encoding', 'chunked')
+        self.end_headers()
+        resp = bytes()
+        for chunk in chunks:
+            resp = resp + ("%x" % len(chunk)).encode("utf-8") + b'\r\n' + chunk + b'\r\n'


-- 
Samba Shared Repository

