[SCM] Samba Shared Repository - branch master updated

Noel Power npower at samba.org
Mon Nov 5 22:05:02 UTC 2018


The branch, master has been updated
       via  e355a6b s4/selftest: enable samba.tests.samba_tool.gpo for PY3
       via  fc047c2 python/samba/gp_parse: PY2/PY3 Decode only when necessary
       via  6476ef5 python/samba/tests/samba_tool: PY2/PY3 compat port for test
       via  1659684 python/samba/gp_parse: Fix multiple encode steps with write_section
       via  19a459b python/samba/netcmd: misc PY2/PY3 compat changes for gpo.py
       via  54e2bb7 python/samba/gp_parse: remove unused code
       via  df578e1 python/samba/gp_parse: Use csv.reader for parsing csv files
       via  cf79e6a python/samba/gp_parse: PY2/PY3 compat porting for gp_init.py
       via  d40ef73 python/samba/gp_parse: PY3 open file non-binary mode for write_binary
       via  388bddf python/samba/gp_parse: PY3 file -> open
       via  0934fc1 python/samba/gp_parse: PY2/PY3 compat changes for __init__.py
      from  27df4f0 ctdb-recovery: Ban a node that causes recovery failure

https://git.samba.org/?p=samba.git;a=shortlog;h=master


- Log -----------------------------------------------------------------
commit e355a6bc59624f9328a6dbf33b335c57e9c8e10f
Author: Noel Power <noel.power at suse.com>
Date:   Tue Sep 4 20:33:35 2018 +0100

    s4/selftest: enable samba.tests.samba_tool.gpo for PY3
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>
    
    Autobuild-User(master): Noel Power <npower at samba.org>
    Autobuild-Date(master): Mon Nov  5 23:04:48 CET 2018 on sn-devel-144

commit fc047c2cf458e08e2689ff27677d8622f4507c82
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 17:01:17 2018 +0100

    python/samba/gp_parse: PY2/PY3 Decode only when necessary
    
    In python2 we decode str types in load_xml; in python3 these are
    already str instances, which cannot be decoded.
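
    A minimal sketch of that guard (ensure_text is a hypothetical helper;
    the patch does the isinstance check inline, with text_type imported
    from samba.compat as in the patch):

        from samba.compat import text_type   # unicode on PY2, str on PY3

        def ensure_text(value, encoding='utf-8'):
            # Only byte strings still need decoding; real text passes through.
            if not isinstance(value, text_type):
                return value.decode(encoding)
            return value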
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

commit 6476ef589ee093640e93f6c673b432fcab576888
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 16:14:32 2018 +0100

    python/samba/tests/samba_tool: PY2/PY3 compat port for test
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

commit 16596842a62bec0a9d974c48d64000e3c079254e
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 15:23:01 2018 +0100

    python/samba/gp_parse: Fix multiple encode steps with write_section
    
    In python2, as far as I can see, GptTmplInfParser.write_binary more
    or less works by accident.
    
    write_binary creates a writer for the 'utf8' codec; such a writer
    should consume unicode and emit utf8-encoded bytes. This writer
    is passed to each of the sections managed by GptTmplInfParser as
    follows:
    
        def write_binary(self, filename):
            with codecs.open(filename, 'wb+',
                             self.encoding) as f:
                for s in self.sections:
                    self.sections[s].write_section(s, f)
    
    And each section type itself encodes its result to 'utf-16-le',
    e.g.
        class UnicodeParam(AbstractParam):
             def write_section(self, header, fp):
                fp.write(u'[Unicode]\r\nUnicode=yes\r\n'.encode(self.encoding))
    
    But this makes little sense: it seems like sections are encoded with
    one encoding, yet the whole file is supposed to be encoded as utf8?
    Also, having an encoding per ParamType doesn't seem correct.
    
    Bizarrely, in PY2 this works and actually encodes the whole file as
    utf-16le. In PY3 you can't do this, as the writer wants to deal with
    strings, not bytes (after the extra encode phase in 'write_section').
    
    So, the changes here remove the unnecessary encoding in each
    'write_section' method; additionally, in GptTmplInfParser.write_binary
    the codecs.open call now uses the correct codec (e.g. 'utf-16-le') to
    write.
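
    After the change, each write_section emits plain unicode and the
    codecs writer does all of the encoding. A simplified standalone
    sketch of that shape (names shortened, not the real classes):

        import codecs

        class Section(object):
            def __init__(self, params):
                self.param_list = params

            def write_section(self, header, fp):
                # Write text only; the file object owns the encoding.
                fp.write(u'[%s]\r\n' % header)
                for key, val in self.param_list:
                    fp.write(u'%s = %s\r\n' % (key, val))

        with codecs.open('GptTmpl.inf', 'wb+', 'utf-16le') as f:
            section = Section([(u'MinimumPasswordAge', u'1')])
            section.write_section(u'System Access', f)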
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

commit 19a459bac3932427afc65661d06dd7a6eda8865e
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 14:54:24 2018 +0100

    python/samba/netcmd: misc PY2/PY3 compat changes for gpo.py
    
    Fixes (a rough sketch of these patterns follows below):
    1) various ldb bytes values that should be displayed as strings in PY3
    2) sorting of lists of xml Elements in PY3
    3) various files need to be opened in binary mode (to accept binary
       data)
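
    For illustration (hypothetical, self-contained snippet, not the
    actual patch):

        # 1) ldb returns bytes values in PY3; turn them into text before
        #    displaying them (the patch wraps the ldb values in str())
        raw = b'\\\\samdom.example.com\\sysvol'   # stand-in for an ldb value
        unc = raw.decode('utf-8')

        # 2) PY3 will not sort lists of dicts (or XML Elements) directly;
        #    give sort() an explicit key
        dirlist = [{'name': 'b.inf'}, {'name': 'a.pol'}]
        dirlist.sort(key=lambda x: x['name'])

        # 3) open local files in binary mode when the payload is bytes
        with open('GptTmpl.inf.SAMBABACKUP', 'wb') as f:
            f.write(raw)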
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

commit 54e2bb707bc998e9017ab3818f8df45c95fad3ce
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 14:39:11 2018 +0100

    python/samba/gp_parse: remove unused code
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

commit df578e1554630f6781d40d4820c9026bb7b01d2d
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 14:18:16 2018 +0100

    python/samba/gp_parse: Use csv.reader for parsing csv files
    
    The previous version here was using UnicodeReader, which was
    wrapping the UTF8Recoder class and passing that to csv.reader.
    It looks like the intention was to read a bytestream in a
    certain encoding and then re-encode it to a different encoding,
    with UnicodeReader then creating unicode from the newly encoded
    stream. This is unnecessary: we know the encoding of the bytestream,
    and codecs.getreader will happily consume the bytestream and give
    back unicode. The unicode can be fed directly into csv.reader.
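
    A minimal standalone sketch of that approach (illustrative data, not
    the actual parser):

        import codecs
        import csv
        from io import BytesIO

        contents = u'Machine Name,Policy Target\r\n,System\r\n'.encode('utf-8')

        # codecs.getreader('utf-8') wraps the byte stream and yields unicode,
        # which csv.reader consumes directly.
        reader = csv.reader(codecs.getreader('utf-8')(BytesIO(contents)))
        header = next(reader)
        lines = [row for row in reader]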
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

commit cf79e6ae1516a76cf8b7403a8372fbc6dc3a3a6a
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 13:00:59 2018 +0100

    python/samba/gp_parse: PY2/PY3 compat porting for gp_init.py
    
    Fixes (a sketch of these patterns follows below):
    1) use compat versions of ConfigParser and StringIO
    2) fix sorting of lists of XML Elements
    3) the file needs to be opened in binary mode, as the write_pretty_xml
       routine uses a BytesIO() object
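
    For illustration, a minimal sketch of the idea (the patch uses the
    samba.compat wrappers; the generic try/except below shows the same
    pattern, and the file and element names are made up):

        try:                                  # PY3
            from configparser import ConfigParser
            from io import StringIO
        except ImportError:                   # PY2
            from ConfigParser import ConfigParser
            from StringIO import StringIO

        from io import BytesIO
        from xml.etree.ElementTree import Element, ElementTree

        # 1) one ConfigParser/StringIO import that works on both versions
        config = ConfigParser()
        text_buf = StringIO()

        # 2) sorting Elements needs an explicit key on PY3
        elems = [Element('B'), Element('A')]
        elems.sort(key=lambda x: x.tag)

        # 3) pretty-printed XML goes through a BytesIO object, so the
        #    destination file has to be opened in binary mode
        xml_buf = BytesIO()
        ElementTree(Element('UnknownFile')).write(xml_buf, encoding='utf-8',
                                                  xml_declaration=True)
        with open('out.xml', 'wb') as f:
            f.write(xml_buf.getvalue())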
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

commit d40ef736d5ee34ad5b575dc32f89d0f4cc1885b8
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 12:52:30 2018 +0100

    python/samba/gp_parse: PY3 open file non-binary mode for write_binary
    
    Although this is unintuitive, it's because we are writing unicode,
    not bytes (in both PY2 and PY3). Using the 'b' mode causes an error
    in PY3.
    
    In PY3 we can define the encoding, but not in PY2.
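
    A minimal Python 3 oriented sketch of the idea (illustrative file
    name and data):

        import csv
        import io

        header = [u'Machine Name', u'Policy Target']
        lines = [{u'Machine Name': u'', u'Policy Target': u'System'}]

        # Text mode, not 'b': csv.writer emits text and the io layer
        # encodes it, so "binary" here effectively means "utf-8".
        with io.open('audit.csv', 'w', encoding='utf-8', newline='') as f:
            writer = csv.writer(f, quoting=csv.QUOTE_MINIMAL)
            writer.writerow(header)
            for line in lines:
                writer.writerow([line[x] for x in header])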
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

commit 388bddf4a6471f17f32af209bec36713f0b75d20
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 12:46:44 2018 +0100

    python/samba/gp_parse: PY3 file -> open
    
    'file' no longer exists in PY3; replace it with 'open'.
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

commit 0934fc14ef6ed94a100caa8a622b370d41bc1182
Author: Noel Power <noel.power at suse.com>
Date:   Wed Sep 5 12:36:00 2018 +0100

    python/samba/gp_parse: PY2/PY3 compat changes for __init__.py
    
    Fixes (a sketch of these patterns follows below):
    
    1) sorting of xml.etree.ElementTree.Element: in PY2, sort()
       seems to sort lists of these; in PY3 this no longer works.
       Choose the tag as the sort key so that at least in PY3
       there is a consistent sort (it probably won't match how things
       are sorted in PY2, but nothing seems to depend on that)
    2) md5 requires bytes
    3) tostring returns bytes in PY3; adjust the code for that
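
    A rough standalone sketch of these patterns (illustrative values; the
    patch itself uses samba.compat.get_bytes rather than a plain encode):

        from hashlib import md5
        from xml.etree.ElementTree import Element, tostring

        elements = [Element('UserList'), Element('AclList')]

        # 1) Elements are not orderable in PY3; sort by tag instead
        elements.sort(key=lambda x: x.tag)

        # 2) md5 wants bytes, so encode text input first
        identifier = md5(u'SomeEntityName'.encode('utf-8')).hexdigest()

        # 3) tostring returns bytes in PY3, so any post-processing
        #    (e.g. replace) must also operate on bytes
        output_xml = tostring(elements[0]).replace(b'UserList', b'AclList')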
    
    Signed-off-by: Noel Power <noel.power at suse.com>
    Reviewed-by: Douglas Bagnall <douglas.bagnall at catalyst.net.nz>

-----------------------------------------------------------------------

Summary of changes:
 python/samba/gp_parse/__init__.py    | 14 +++---
 python/samba/gp_parse/gp_csv.py      | 94 +++++-------------------------------
 python/samba/gp_parse/gp_inf.py      | 31 ++++++------
 python/samba/gp_parse/gp_pol.py      |  4 +-
 python/samba/netcmd/gpo.py           | 26 +++++-----
 python/samba/tests/samba_tool/gpo.py |  4 +-
 source4/selftest/tests.py            |  4 +-
 7 files changed, 56 insertions(+), 121 deletions(-)


Changeset truncated at 500 lines:

diff --git a/python/samba/gp_parse/__init__.py b/python/samba/gp_parse/__init__.py
index 80fbee6..8ddd52d 100644
--- a/python/samba/gp_parse/__init__.py
+++ b/python/samba/gp_parse/__init__.py
@@ -21,6 +21,7 @@ from xml.dom import minidom
 from io import BytesIO
 from xml.etree.ElementTree import ElementTree, fromstring, tostring
 from hashlib import md5
+from samba.compat import get_bytes
 
 
 ENTITY_USER_ID = 0
@@ -60,7 +61,7 @@ class GPParser(object):
         pass
 
     def write_xml(self, filename):
-        with file(filename, 'w') as f:
+        with open(filename, 'w') as f:
             f.write('<?xml version="1.0" encoding="utf-8"?><UnknownFile/>')
 
     def load_xml(self, filename):
@@ -81,7 +82,7 @@ class GPParser(object):
         handle.write(minidom_parsed.toprettyxml(encoding=self.output_encoding))
 
     def new_xml_entity(self, name, ent_type):
-        identifier = md5(name).hexdigest()
+        identifier = md5(get_bytes(name)).hexdigest()
 
         type_str = entity_type_to_string(ent_type)
 
@@ -99,7 +100,7 @@ class GPParser(object):
 
         # Locate all user_id and all ACLs
         user_ids = root.findall('.//*[@user_id="TRUE"]')
-        user_ids.sort()
+        user_ids.sort(key = lambda x: x.tag)
 
         for elem in user_ids:
             old_text = elem.text
@@ -117,7 +118,7 @@ class GPParser(object):
                 global_entities.update([(old_text, elem.text)])
 
         acls = root.findall('.//*[@acl="TRUE"]')
-        acls.sort()
+        acls.sort(key = lambda x: x.tag)
 
         for elem in acls:
             old_text = elem.text
@@ -136,7 +137,7 @@ class GPParser(object):
                 global_entities.update([(old_text, elem.text)])
 
         share_paths = root.findall('.//*[@network_path="TRUE"]')
-        share_paths.sort()
+        share_paths.sort(key = lambda x: x.tag)
 
         for elem in share_paths:
             old_text = elem.text
@@ -171,7 +172,8 @@ class GPParser(object):
         output_xml = tostring(root)
 
         for ent in entities:
-            output_xml = output_xml.replace(ent[0].replace('&', '&'), ent[0])
+            entb = get_bytes(ent[0])
+            output_xml = output_xml.replace(entb.replace(b'&', b'&'), entb)
 
         with open(out_file, 'wb') as f:
             f.write(output_xml)
diff --git a/python/samba/gp_parse/gp_csv.py b/python/samba/gp_parse/gp_csv.py
index b19f84c..9e188db 100644
--- a/python/samba/gp_parse/gp_csv.py
+++ b/python/samba/gp_parse/gp_csv.py
@@ -23,9 +23,9 @@ import io
 
 from io import BytesIO
 from xml.etree.ElementTree import Element, SubElement
-
+from samba.compat import PY3
 from samba.gp_parse import GPParser
-
+from samba.compat import text_type
 # [MS-GPAC] Group Policy Audit Configuration
 class GPAuditCsvParser(GPParser):
     encoding = 'utf-8'
@@ -34,10 +34,9 @@ class GPAuditCsvParser(GPParser):
 
     def parse(self, contents):
         self.lines = []
-        reader = UnicodeReader(BytesIO(contents),
-                               encoding=self.encoding)
+        reader = csv.reader(codecs.getreader(self.encoding)(BytesIO(contents)))
 
-        self.header = reader.next()
+        self.header = next(reader)
         for row in reader:
             line = {}
             for i, x in enumerate(row):
@@ -47,7 +46,7 @@ class GPAuditCsvParser(GPParser):
             # print line
 
     def write_xml(self, filename):
-        with file(filename, 'wb') as f:
+        with open(filename, 'wb') as f:
             root = Element('CsvFile')
             child = SubElement(root, 'Row')
             for e in self.header:
@@ -83,90 +82,23 @@ class GPAuditCsvParser(GPParser):
                 header = False
                 self.header = []
                 for v in r.findall('Value'):
-                    self.header.append(v.text.decode(self.output_encoding))
+                    if not isinstance(v.text, text_type):
+                        v.text = v.text.decode(self.output_encoding)
+                    self.header.append(v.text)
             else:
                 line = {}
                 for i, v in enumerate(r.findall('Value')):
                     line[self.header[i]] = v.text if v.text is not None else ''
-                    line[self.header[i]] = line[self.header[i]].decode(self.output_encoding)
+                    if not isinstance(self.header[i], text_type):
+                        line[self.header[i]] = line[self.header[i]].decode(self.output_encoding)
 
                 self.lines.append(line)
 
     def write_binary(self, filename):
-        with file(filename, 'wb') as f:
-            # This should be using a unicode writer, but it seems to be in the
-            # right encoding at least by default.
-            #
-            # writer = UnicodeWriter(f, quoting=csv.QUOTE_MINIMAL)
+        from io import open
+        with open(filename, 'w', self.encoding) as f:
+            # In this case "binary" means "utf-8", so we let Python do that.
             writer = csv.writer(f, quoting=csv.QUOTE_MINIMAL)
             writer.writerow(self.header)
             for line in self.lines:
                 writer.writerow([line[x] for x in self.header])
-
-
-# The following classes come from the Python documentation
-# https://docs.python.org/3.0/library/csv.html
-
-
-class UTF8Recoder:
-    """
-    Iterator that reads an encoded stream and reencodes the input to UTF-8
-    """
-    def __init__(self, f, encoding):
-        self.reader = codecs.getreader(encoding)(f)
-
-    def __iter__(self):
-        return self
-
-    def next(self):
-        return next(self.reader).encode("utf-8")
-
-    __next__ = next
-
-class UnicodeReader:
-    """
-    A CSV reader which will iterate over lines in the CSV file "f",
-    which is encoded in the given encoding.
-    """
-
-    def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
-        f = UTF8Recoder(f, encoding)
-        self.reader = csv.reader(f, dialect=dialect, **kwds)
-
-    def next(self):
-        row = next(self.reader)
-        return [unicode(s, "utf-8") for s in row]
-
-    def __iter__(self):
-        return self
-
-    __next__ = next
-
-class UnicodeWriter:
-    """
-    A CSV writer which will write rows to CSV file "f",
-    which is encoded in the given encoding.
-    """
-
-    def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
-        # Redirect output to a queue
-        self.queue = io.StringIO()
-        self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
-        self.stream = f
-        self.encoder = codecs.getincrementalencoder(encoding)()
-
-    def writerow(self, row):
-        self.writer.writerow([s.encode("utf-8") for s in row])
-        # Fetch UTF-8 output from the queue ...
-        data = self.queue.getvalue()
-        data = data.decode("utf-8")
-        # ... and reencode it into the target encoding
-        data = self.encoder.encode(data)
-        # write to the target stream
-        self.stream.write(data)
-        # empty queue
-        self.queue.truncate(0)
-
-    def writerows(self, rows):
-        for row in rows:
-            self.writerow(row)
diff --git a/python/samba/gp_parse/gp_inf.py b/python/samba/gp_parse/gp_inf.py
index e4bed26..79e2815 100644
--- a/python/samba/gp_parse/gp_inf.py
+++ b/python/samba/gp_parse/gp_inf.py
@@ -29,6 +29,7 @@ from samba.gp_parse import GPParser
 # [MS-GPSB] Security Protocol Extension
 class GptTmplInfParser(GPParser):
     sections = None
+    encoding = 'utf-16le'
 
     class AbstractParam:
         __metaclass__ = ABCMeta
@@ -67,10 +68,10 @@ class GptTmplInfParser(GPParser):
         def write_section(self, header, fp):
             if len(self.param_list) ==  0:
                 return
-            fp.write((u'[%s]\r\n' % header).encode(self.encoding))
+            fp.write(u'[%s]\r\n' % header)
             for key_out, val_out in self.param_list:
-                fp.write((u'%s = %s\r\n' % (key_out,
-                                            val_out)).encode(self.encoding))
+                fp.write(u'%s = %s\r\n' % (key_out,
+                                           val_out))
 
         def build_xml(self, xml_parent):
             for key_ini, val_ini in self.param_list:
@@ -99,9 +100,9 @@ class GptTmplInfParser(GPParser):
         def write_section(self, header, fp):
             if len(self.param_list) ==  0:
                 return
-            fp.write((u'[%s]\r\n' % header).encode(self.encoding))
+            fp.write(u'[%s]\r\n' % header)
             for param in self.param_list:
-                fp.write((u'%s\r\n' % param).encode(self.encoding))
+                fp.write(u'%s\r\n' % param)
 
         def build_xml(self, xml_parent):
             for val_ini in self.param_list:
@@ -129,10 +130,10 @@ class GptTmplInfParser(GPParser):
         def write_section(self, header, fp):
             if len(self.param_list) ==  0:
                 return
-            fp.write((u'[%s]\r\n' % header).encode(self.encoding))
+            fp.write(u'[%s]\r\n' % header)
             for key_out, val in self.param_list:
                 val_out = u','.join(val)
-                fp.write((u'%s = %s\r\n' % (key_out, val_out)).encode(self.encoding))
+                fp.write(u'%s = %s\r\n' % (key_out, val_out))
 
         def build_xml(self, xml_parent):
             for key_ini, sid_list in self.param_list:
@@ -188,9 +189,9 @@ class GptTmplInfParser(GPParser):
         def write_section(self, header, fp):
             if len(self.param_list) ==  0:
                 return
-            fp.write((u'[%s]\r\n' % header).encode(self.encoding))
+            fp.write(u'[%s]\r\n' % header)
             for param in self.param_list:
-                fp.write((u'"%s",%s,"%s"\r\n' % tuple(param)).encode(self.encoding))
+                fp.write(u'"%s",%s,"%s"\r\n' % tuple(param))
 
         def build_xml(self, xml_parent):
             for name_mode_acl in self.param_list:
@@ -225,12 +226,12 @@ class GptTmplInfParser(GPParser):
         def write_section(self, header, fp):
             if len(self.param_list) ==  0:
                 return
-            fp.write((u'[%s]\r\n' % header).encode(self.encoding))
+            fp.write(u'[%s]\r\n' % header)
 
             for key, val in self.param_list:
                 key_out = u'__'.join(key)
                 val_out = u','.join(val)
-                fp.write((u'%s = %s\r\n' % (key_out, val_out)).encode(self.encoding))
+                fp.write(u'%s = %s\r\n' % (key_out, val_out))
 
         def build_xml(self, xml_parent):
             for key_ini, sid_list in self.param_list:
@@ -266,7 +267,7 @@ class GptTmplInfParser(GPParser):
             pass
 
         def write_section(self, header, fp):
-            fp.write(u'[Unicode]\r\nUnicode=yes\r\n'.encode(self.encoding))
+            fp.write(u'[Unicode]\r\nUnicode=yes\r\n')
 
         def build_xml(self, xml_parent):
             # We do not bother storing this field
@@ -283,7 +284,7 @@ class GptTmplInfParser(GPParser):
 
         def write_section(self, header, fp):
             out = u'[Version]\r\nsignature="$CHICAGO$"\r\nRevision=1\r\n'
-            fp.write(out.encode(self.encoding))
+            fp.write(out)
 
         def build_xml(self, xml_parent):
             # We do not bother storing this field
@@ -332,12 +333,12 @@ class GptTmplInfParser(GPParser):
 
     def write_binary(self, filename):
         with codecs.open(filename, 'wb+',
-                         self.output_encoding) as f:
+                         self.encoding) as f:
             for s in self.sections:
                 self.sections[s].write_section(s, f)
 
     def write_xml(self, filename):
-        with file(filename, 'w') as f:
+        with open(filename, 'wb') as f:
             root = Element('GptTmplInfFile')
 
             for sec_inf in self.sections:
diff --git a/python/samba/gp_parse/gp_pol.py b/python/samba/gp_parse/gp_pol.py
index 5635747..67ecd58 100644
--- a/python/samba/gp_parse/gp_pol.py
+++ b/python/samba/gp_parse/gp_pol.py
@@ -99,7 +99,7 @@ class GPPolParser(GPParser):
         # print self.pol_file.__ndr_print__()
 
     def write_xml(self, filename):
-        with file(filename, 'w') as f:
+        with open(filename, 'wb') as f:
             root = Element('PolFile')
             root.attrib['signature'] = self.pol_file.header.signature
             root.attrib['version'] = str(self.pol_file.header.version)
@@ -142,6 +142,6 @@ class GPPolParser(GPParser):
         # self.load_xml(fromstring(contents))
 
     def write_binary(self, filename):
-        with file(filename, 'wb') as f:
+        with open(filename, 'wb') as f:
             binary_data = ndr_pack(self.pol_file)
             f.write(binary_data)
diff --git a/python/samba/netcmd/gpo.py b/python/samba/netcmd/gpo.py
index 6a7efe8..4d5fc88 100644
--- a/python/samba/netcmd/gpo.py
+++ b/python/samba/netcmd/gpo.py
@@ -212,7 +212,7 @@ def del_gpo_link(samdb, container_dn, gpo):
     found = False
     gpo_dn = str(get_gpo_dn(samdb, gpo))
     if 'gPLink' in msg:
-        gplist = parse_gplink(msg['gPLink'][0])
+        gplist = parse_gplink(str(msg['gPLink'][0]))
         for g in gplist:
             if g['dn'].lower() == gpo_dn.lower():
                 gplist.remove(g)
@@ -284,7 +284,7 @@ def backup_directory_remote_to_local(conn, remotedir, localdir):
         l_dir = l_dirs.pop()
 
         dirlist = conn.list(r_dir, attribs=attr_flags)
-        dirlist.sort()
+        dirlist.sort(key=lambda x : x['name'])
         for e in dirlist:
             r_name = r_dir + '\\' + e['name']
             l_name = os.path.join(l_dir, e['name'])
@@ -295,7 +295,7 @@ def backup_directory_remote_to_local(conn, remotedir, localdir):
                 os.mkdir(l_name)
             else:
                 data = conn.loadfile(r_name)
-                with file(l_name + SUFFIX, 'w') as f:
+                with open(l_name + SUFFIX, 'wb') as f:
                     f.write(data)
 
                 parser = find_parser(e['name'])
@@ -319,7 +319,7 @@ def copy_directory_remote_to_local(conn, remotedir, localdir):
         l_dir = l_dirs.pop()
 
         dirlist = conn.list(r_dir, attribs=attr_flags)
-        dirlist.sort()
+        dirlist.sort(key=lambda x : x['name'])
         for e in dirlist:
             r_name = r_dir + '\\' + e['name']
             l_name = os.path.join(l_dir, e['name'])
@@ -330,7 +330,7 @@ def copy_directory_remote_to_local(conn, remotedir, localdir):
                 os.mkdir(l_name)
             else:
                 data = conn.loadfile(r_name)
-                open(l_name, 'w').write(data)
+                open(l_name, 'wb').write(data)
 
 
 def copy_directory_local_to_remote(conn, localdir, remotedir,
@@ -358,7 +358,7 @@ def copy_directory_local_to_remote(conn, localdir, remotedir,
                     if not ignore_existing:
                         raise
             else:
-                data = open(l_name, 'r').read()
+                data = open(l_name, 'rb').read()
                 conn.savefile(r_name, data)
 
 
@@ -467,7 +467,7 @@ class cmd_list(Command):
         while True:
             msg = self.samdb.search(base=dn, scope=ldb.SCOPE_BASE, attrs=['gPLink', 'gPOptions'])[0]
             if 'gPLink' in msg:
-                glist = parse_gplink(msg['gPLink'][0])
+                glist = parse_gplink(str(msg['gPLink'][0]))
                 for g in glist:
                     if not inherit and not (g['options'] & dsdb.GPLINK_OPT_ENFORCE):
                         continue
@@ -609,7 +609,7 @@ class cmd_getlink(Command):
 
         if msg['gPLink']:
             self.outf.write("GPO(s) linked to DN %s\n" % container_dn)
-            gplist = parse_gplink(msg['gPLink'][0])
+            gplist = parse_gplink(str(msg['gPLink'][0]))
             for g in gplist:
                 msg = get_gpo_info(self.samdb, dn=g['dn'])
                 self.outf.write("    GPO     : %s\n" % msg[0]['name'][0])
@@ -675,7 +675,7 @@ class cmd_setlink(Command):
         # Update existing GPlinks or Add new one
         existing_gplink = False
         if 'gPLink' in msg:
-            gplist = parse_gplink(msg['gPLink'][0])
+            gplist = parse_gplink(str(msg['gPLink'][0]))
             existing_gplink = True
             found = False
             for g in gplist:
@@ -921,7 +921,7 @@ class cmd_fetch(Command):
             raise CommandError("GPO '%s' does not exist" % gpo)
 
         # verify UNC path
-        unc = msg['gPCFileSysPath'][0]
+        unc = str(msg['gPCFileSysPath'][0])
         try:
             [dom_name, service, sharepath] = parse_unc(unc)
         except ValueError:
@@ -1003,7 +1003,7 @@ class cmd_backup(Command):
             raise CommandError("GPO '%s' does not exist" % gpo)
 
         # verify UNC path
-        unc = msg['gPCFileSysPath'][0]
+        unc = str(msg['gPCFileSysPath'][0])
         try:
             [dom_name, service, sharepath] = parse_unc(unc)
         except ValueError:
@@ -1445,7 +1445,7 @@ class cmd_del(Command):
         # Check if valid GPO
         try:
             msg = get_gpo_info(self.samdb, gpo=gpo)[0]
-            unc_path = msg['gPCFileSysPath'][0]
+            unc_path = str(msg['gPCFileSysPath'][0])
         except Exception:
             raise CommandError("GPO '%s' does not exist" % gpo)
 
@@ -1522,7 +1522,7 @@ class cmd_aclcheck(Command):
 
         for m in msg:
             # verify UNC path
-            unc = m['gPCFileSysPath'][0]
+            unc = str(m['gPCFileSysPath'][0])
             try:
                 [dom_name, service, sharepath] = parse_unc(unc)
             except ValueError:
diff --git a/python/samba/tests/samba_tool/gpo.py b/python/samba/tests/samba_tool/gpo.py
index 7938a07..a760a98 100644
--- a/python/samba/tests/samba_tool/gpo.py
+++ b/python/samba/tests/samba_tool/gpo.py
@@ -68,7 +68,7 @@ def has_difference(path1, path2, binary=True, xml=True, sortlines=False):
             else:
                 if (l_name.endswith('.xml') and xml or
                     l_name.endswith('.SAMBABACKUP') and binary):
-                    if open(l_name).read() != open(r_name).read():
+                    if open(l_name, "rb").read() != open(r_name, "rb").read():
                         return l_name
 
     return None
@@ -343,7 +343,7 @@ class GpoCmdTestCase(SambaToolCmdTest):
 
         alt_entity_file = os.path.join(new_path, 'entities')
         with open(alt_entity_file, 'wb') as f:
-            f.write('''<!ENTITY SAMBA__NETWORK_PATH__82419dafed126a07d6b96c66fc943735__ "\\\\samdom.example.com">
+            f.write(b'''<!ENTITY SAMBA__NETWORK_PATH__82419dafed126a07d6b96c66fc943735__ "\\\\samdom.example.com">
 <!ENTITY SAMBA__NETWORK_PATH__0484cd41ded45a0728333a9c5e5ef619__ "\\\\samdom">
 <!ENTITY SAMBA____SDDL_ACL____4ce8277be3f630300cbcf80a80e21cf4__ "D:PAR(A;CI;KA;;;BA)(A;CIIO;KA;;;CO)(A;CI;KA;;;SY)(A;CI;KR;;;S-1-16-0)">
 <!ENTITY SAMBA____USER_ID_____d0970f5a1e19cb803f916c203d5c39c4__ "*S-1-5-113">
diff --git a/source4/selftest/tests.py b/source4/selftest/tests.py
index 3737efb..c4b7d18 100755
--- a/source4/selftest/tests.py
+++ b/source4/selftest/tests.py
@@ -650,8 +650,8 @@ for env in ["ad_dc:local", "ad_dc_ntvfs:local", "fl2000dc:local", "fl2003dc:loca
 # We run this test against both AD DC implemetnations because it is
 # the only test we have of GPO get/set behaviour, and this involves
 # the file server as well as the LDAP server.
-planpythontestsuite("ad_dc_ntvfs:local", "samba.tests.samba_tool.gpo")
-planpythontestsuite("ad_dc:local", "samba.tests.samba_tool.gpo")
+planpythontestsuite("ad_dc_ntvfs:local", "samba.tests.samba_tool.gpo",  py3_compatible=True)
+planpythontestsuite("ad_dc:local", "samba.tests.samba_tool.gpo", py3_compatible=True)
 
 planpythontestsuite("ad_dc_ntvfs:local", "samba.tests.samba_tool.processes", py3_compatible=True)
 planpythontestsuite("ad_dc_ntvfs:local", "samba.tests.samba_tool.user", py3_compatible=True)


-- 
Samba Shared Repository


