We shouldn't get a node with a zero code, and there's probably nothing
to do but stop.
CID 1517261 (#1-2 of 2): Bad bit shift operation
(BAD_SHIFT)11. negative_shift: In expression j >> offset - k,
shifting by a negative amount has undefined behavior. The shift
amount, offset - k, is -3.
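The general shape of the fix, sketched with illustrative names (not the actual Samba code): make the impossible case an explicit early return, so the shift amount can never be negative.

#include <stdint.h>

/* Illustrative only: refuse to shift by a negative amount. */
static int decode_symbol(uint32_t j, int offset, int k, uint32_t *out)
{
        if (offset < k) {
                /* can't happen with a well-formed tree, but stop anyway */
                return -1;
        }
        *out = j >> (offset - k);
        return 0;
}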
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Jeremy Allison <jra@samba.org>
Autobuild-User(master): Jeremy Allison <jra@samba.org>
Autobuild-Date(master): Mon Dec 19 23:29:04 UTC 2022 on sn-devel-184
Because we just wrote the intermediate representation to have no zero
distances, we can be sure it doesn't, but Coverity doesn't know. If
distance is zero, `bitlen_nonzero_16(distance)` would be bad.
CID 1517278 (#1 of 1): Bad bit shift operation
(BAD_SHIFT)41. large_shift: In expression 1 << code_dist, left
shifting by more than 31 bits has undefined behavior. The shift
amount, code_dist, is 65535.
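A sketch of the kind of guard that makes the invariant visible (the body of bitlen_nonzero_16() here is an illustrative stand-in, not Samba's implementation):

#include <stdint.h>

/* Illustrative stand-in: bit length of a value that must not be zero. */
static inline unsigned bitlen_nonzero_16(uint16_t x)
{
        /* undefined for x == 0, just like the real thing */
        return 32u - (unsigned)__builtin_clz(x);
}

/* Guard the impossible zero distance so the value that later feeds
 * "1 << code_dist" can never be huge. */
static int encode_distance(uint16_t distance, unsigned *code_dist)
{
        if (distance == 0) {
                return -1;      /* can't happen, but stop if it does */
        }
        *code_dist = bitlen_nonzero_16(distance);
        return 0;
}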
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Jeremy Allison <jra@samba.org>
Very long matches would be written instead as very very long matches.
We can't in fact hit this because we have a MAX_MATCH_LENGTH defined
as 64M, but if we could, it might make certain 2GB+ strings impossible
to compress.
CID 1517275 (#1 of 1): Unintended sign extension
(SIGN_EXTENSION)sign_extension: Suspicious implicit sign extension:
intermediate[i + 2UL] with type uint16_t (16 bits, unsigned) is
promoted in intermediate[i + 2UL] << 16 to type int (32 bits, signed),
then sign-extended to type unsigned long (64 bits, unsigned). If
intermediate[i + 2UL] << 16 is greater than 0x7FFFFFFF, the upper bits
of the result will all be 1.
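A sketch of the usual fix for this class of warning, with an illustrative layout (not the real intermediate representation): cast the uint16_t to an unsigned 32-bit type before shifting, so the shifted value is never a signed int that can be sign-extended.

#include <stddef.h>
#include <stdint.h>

/* Illustrative only: assemble a 32-bit value from two 16-bit halves
 * without going through a signed int. */
static uint32_t read_u32_from_u16s(const uint16_t *intermediate, size_t i)
{
        return ((uint32_t)intermediate[i + 2] << 16) | intermediate[i + 1];
}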
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Jeremy Allison <jra@samba.org>
None of our test vectors are 18446744073709551615 bytes long, which
means we can know an `expected_length == returned_length` check will
catch the case where the compression function returns -1 for error. We
know that, but Coverity doesn't.
It's the same thing over and over again, in two different patterns:
>>> CID 1517301: Memory - corruptions (OVERRUN)
>>> Calling "memcmp" with "original.data" and "original.length" is
suspicious because of the very large index, 18446744073709551615. The index
may be due to a negative parameter being interpreted as unsigned.
393 if (original.length != decomp_written ||
394 memcmp(decompressed.data,
395 original.data,
396 original.length) != 0) {
397 debug_message("\033[1;31mgot %zd, expected %zu\033[0m\n",
398 decomp_written,
*** CID 1517299: Memory - corruptions (OVERRUN)
/lib/compression/tests/test_lzxpress_plain.c: 296 in
test_lzxpress_plain_decompress_more_compressed_files()
290 debug_start_timer();
291 written = lzxpress_decompress(p.compressed.data,
292 p.compressed.length,
293 dest,
294 p.decompressed.length);
295 debug_end_timer("decompress", p.decompressed.length);
>>> CID 1517299: Memory - corruptions (OVERRUN)
>>> Calling "memcmp" with "p.decompressed.data" and
"p.decompressed.length" is suspicious because of the very large index,
18446744073709551615. The index may be due to a negative parameter being
interpreted as unsigned.
296 if (written == p.decompressed.length &&
297 memcmp(dest, p.decompressed.data, p.decompressed.length)
== 0) {
298 debug_message("\033[1;32mdecompressed %s!
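A sketch of the check ordering that addresses this, with hypothetical names: treat a negative return as an error before it can ever be used as a memcmp() length.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>

/* Illustrative only: -1 from the (de)compressor must never be
 * reinterpreted as an 18446744073709551615-byte length. */
static bool output_matches(ssize_t written,
                           const uint8_t *got,
                           const uint8_t *expected,
                           size_t expected_length)
{
        if (written < 0) {
                return false;   /* compression or decompression failed */
        }
        if ((size_t)written != expected_length) {
                return false;
        }
        return memcmp(got, expected, expected_length) == 0;
}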
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Jeremy Allison <jra@samba.org>
We know that code is non-zero, because it comes from the combination of
the intermediate representation and the symbol tables that were generated
at the same time. But Coverity doesn't know that, and it thinks we could
be doing undefined things in the subsequent shift.
CID 1517302: Integer handling issues (BAD_SHIFT)
In expression "1 << code_bit_len", shifting by a negative amount has
undefined behavior.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Jeremy Allison <jra@samba.org>
The struct write_context bit_len attribute is always between 0 and 31,
but if the next patches are applied without this, SUSE GCC -O3 will
worry thusly:
../../lib/compression/lzxpress_huffman.c: In function
‘lzxpress_huffman_compress’:
../../lib/compression/lzxpress_huffman.c:953:5: error: assuming signed
overflow does not occur when simplifying conditional to constant
[-Werror=strict-overflow]
if (wc->bit_len > 16) {
^
cc1: all warnings being treated as errors
Inspection tells us that the invariant holds. Nevertheless, we can
safely use an unsigned type and insist that overflow or underflow is
bad.
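The shape of the change, sketched with a hypothetical struct (the real struct write_context has more members):

#include <stdint.h>

/* Hypothetical sketch: an unsigned counter removes the signed overflow
 * that GCC's strict-overflow pass was worried about. */
struct write_context_sketch {
        uint32_t bit_len;       /* was signed; always in [0, 31] */
        uint32_t bits;
};

static void flush_if_needed(struct write_context_sketch *wc)
{
        if (wc->bit_len > 16) {
                /* ... write out 16 bits ... */
                wc->bit_len -= 16;
        }
}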
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Jeremy Allison <jra@samba.org>
The 'compressed' string can be about 9/8 the size of the decompressed
string, but we didn't allow enough memory in the fuzz target for that.
Then, when it failed, we didn't check for the error.
Credit to OSSFuzz.
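A sketch of the two fixes, with a hypothetical compressor prototype standing in for the real call: size the output buffer for the ~9/8 worst case, and check the return value before using the result.

#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>

/* Purely illustrative prototype; the real fuzz target calls the lzxpress
 * plain compressor. */
ssize_t hypothetical_compress(const uint8_t *in, size_t in_len,
                              uint8_t *out, size_t out_len);

int fuzz_sketch(const uint8_t *buf, size_t len)
{
        size_t max_out = len + len / 8 + 64;    /* ~9/8, plus slack */
        uint8_t *out = malloc(max_out);
        ssize_t written;

        if (out == NULL) {
                return 0;
        }
        written = hypothetical_compress(buf, len, out, max_out);
        if (written >= 0) {
                /* only use the output when compression succeeded */
        }
        free(out);
        return 0;
}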
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Jeremy Allison <jra@samba.org>
We had
output[output_pos - distance];
where output_pos and distance are size_t and distance can be greater
than output_pos (because it refers to a place in the previous block).
The underflow is defined, leading to a big number, and when
sizeof(size_t) == sizeof(uint8_t *) the subsequent overflow works as
expected. But if size_t is smaller than a pointer, bad things will
happen.
This was found by OSSFuzz with
'UBSAN_OPTIONS=print_stacktrace=1:silence_unsigned_overflow=1'.
Credit to OSSFuzz.
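One way to avoid the wrap, sketched with hypothetical names (not necessarily the shape of the actual fix): validate the distance against the total number of bytes already produced, so the size_t subtraction can never wrap around, whatever the relative sizes of size_t and pointers.

#include <stddef.h>
#include <stdint.h>

/* Illustrative only. */
static const uint8_t *match_source(const uint8_t *output,
                                   size_t bytes_written,
                                   size_t distance)
{
        if (distance == 0 || distance > bytes_written) {
                return NULL;    /* malformed input */
        }
        return output + (bytes_written - distance);
}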
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
The linker has to relocate the pointers in the array at startup; this
change saves that work. I know we have bigger .data blobs, but every
bit counts :-)
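The technique, sketched with made-up contents: an array of fixed-width char arrays is pure read-only data, while an array of string pointers needs a relocation per element at startup.

/* Each element is a pointer the dynamic linker must relocate at startup. */
static const char *month_ptrs[] = {
        "Jan", "Feb", "Mar",
};

/* No pointers, so no relocations; the whole table can live in .rodata. */
static const char month_names[][4] = {
        "Jan", "Feb", "Mar",
};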
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
With the proper NULL checks we don't need the stackframe; use a
passed-in context instead.
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
The first call of talloc_asprintf_append() did not have a NULL check.
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
This adds intermediate NULL checks via talloc_asprintf_addbuf()
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
I wanted to use this in debug.c, but that would have meant polluting
debug's deps with a lot of stuff. Also, looking through uses of
talloc_asprintf_append(), very many of those don't do NULL checks
properly and could benefit from the _addbuf() flavor. We can add a
vasprintf variant later if the need shows up.
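The intended usage pattern, as I read the new helper's contract (the prototype is spelled out here for illustration; it normally comes from Samba's util headers): a failed allocation turns the string into NULL and later calls become no-ops, so one check at the end replaces a check per append.

#include <talloc.h>

/* Assumed prototype of the new helper. */
void talloc_asprintf_addbuf(char **ps, const char *fmt, ...);

static char *describe(TALLOC_CTX *mem_ctx, int a, int b)
{
        char *s = talloc_strdup(mem_ctx, "");

        talloc_asprintf_addbuf(&s, "a=%d ", a);
        talloc_asprintf_addbuf(&s, "b=%d", b);

        return s;       /* NULL if any allocation failed */
}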
Signed-off-by: Volker Lendecke <vl@samba.org>
Reviewed-by: Jeremy Allison <jra@samba.org>
This allows admins to disable enctypes completely if required.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15237
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
In order to allow better upgrades we need the default value for smb.conf to
stay the same even if the effective default value of the software changes in
the future.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15237
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
This is not squashed in order to allow easier backports...
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15237
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
We no longer have support for des encryption types in the kerberos
libraries anyway.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15237
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
aes encryption types are always supported.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15237
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Pair-Programmed-With: Joseph Sutton <josephsutton@catalyst.net.nz>
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
This matches the Windows registry key
HKEY_LOCAL_MACHINE\System\CurrentControlSet\services\KDC\DefaultDomainSupportedEncTypes
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15237
Pair-Programmed-With: Andrew Bartlett <abartlet@samba.org>
Signed-off-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Stefan Metzmacher <metze@samba.org>
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15240
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
AES is supported by Windows Server >= 2008R2, Windows (Client) >= 7 and Samba >= 4.0,
so there's no reason to allow md5 clients by default.
However, some third-party domain members may need it.
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15240
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
AES is supported by Windows >= 2008R2 and Samba >= 4.0 so there's no
reason to allow md5 servers by default.
Note the change in netlogon_creds_cli_context_global() is only cosmetic,
but avoids confusion while reading the code. Check with:
git show -U35 libcli/auth/netlogon_creds_cli.c
BUG: https://bugzilla.samba.org/show_bug.cgi?id=15240
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Andrew Bartlett <abartlet@samba.org>
Reviewed-by: Ralph Boehme <slow@samba.org>
<sys/stat.h> was missing from the compression library tests, which resulted
in the following compile-time error:
../../lib/compression/tests/test_lzx_huffman.c: In function
‘datablob_from_file’:
../../lib/compression/tests/test_lzx_huffman.c:383:21: error:
storage size of ‘s’ isn’t known
383 | struct stat s;
| ^
../../lib/compression/tests/test_lzx_huffman.c:389:15: warning:
implicit declaration of function ‘fstat’ [-Wimplicit-function-declaration]
389 | ret = fstat(fileno(fh), &s);
| ^~~~~
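The fix is simply to include the header that declares struct stat and fstat(); a minimal illustration of the pattern the test uses:

#include <stdio.h>
#include <sys/stat.h>   /* struct stat, fstat() -- the missing include */

static long file_size(FILE *fh)
{
        struct stat s;
        int ret = fstat(fileno(fh), &s);

        if (ret != 0) {
                return -1;
        }
        return (long)s.st_size;
}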
Signed-off-by: Anoop C S <anoopcs@samba.org>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Volker Lendecke <vl@samba.org>
Autobuild-User(master): Volker Lendecke <vl@samba.org>
Autobuild-Date(master): Tue Dec 6 11:39:16 UTC 2022 on sn-devel-184
lib/compression/tests/test_lzx_huffman.c: In function ‘test_lzxpress_huffman_overlong_matches’:
lib/compression/tests/test_lzx_huffman.c:1013:35: error: ‘j’ may be used uninitialized [-Werror=maybe-uninitialized]
1013 | assert_int_equal(score, i * j);
| ^
lib/compression/tests/test_lzx_huffman.c:979:19: note: ‘j’ was declared here
979 | size_t i, j;
| ^
lib/compression/tests/test_lzx_huffman.c: In function ‘test_lzxpress_huffman_overlong_matches_abc’:
lib/compression/tests/test_lzx_huffman.c:1059:39: error: ‘k’ may be used uninitialized [-Werror=maybe-uninitialized]
1059 | assert_int_equal(score, i * j * k);
| ^
lib/compression/tests/test_lzx_huffman.c:1020:22: note: ‘k’ was declared here
1020 | size_t i, j, k;
| ^
lib/compression/tests/test_lzx_huffman.c:1059:35: error: ‘j’ may be used uninitialized [-Werror=maybe-uninitialized]
1059 | assert_int_equal(score, i * j * k);
| ^
lib/compression/tests/test_lzx_huffman.c:1020:19: note: ‘j’ was declared here
1020 | size_t i, j, k;
| ^
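A reduced illustration of the warning, not the actual test code: the inner counters are only assigned inside the loops, so the compiler cannot prove they are initialized when they are read afterwards. Giving them an initial value, as the fix does, silences -Wmaybe-uninitialized.

#include <stddef.h>

static int score_matches(size_t width, size_t height)
{
        size_t i, j = 0;        /* was: size_t i, j; */
        size_t score = 0;

        for (i = 0; i < height; i++) {
                for (j = 0; j < width; j++) {
                        score++;
                }
        }
        return score == i * j;  /* reads j after the loops */
}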
Signed-off-by: Andreas Schneider <asn@samba.org>
Reviewed-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Autobuild-User(master): Andreas Schneider <asn@cryptomilk.org>
Autobuild-Date(master): Sun Dec 4 09:12:30 UTC 2022 on sn-devel-184
This uses the same hash table method as lzxpress_huffman, though the
code can't be directly reused because the sizes of the offsets are
different and there is no block processing step here.
This will worsen the compression ratio compared to the exhaustive
search we previously used, though we still perform better than
Windows. To put numbers on it, the test files used to compress to 0.91
of Windows' compression size, and now they compress to 0.96.
On the other hand this is many orders of magnitude faster. It is
difficult to say exactly how much faster -- while the testsuite time
has only improved 200-fold (from 7 minutes to 2 seconds), most of the
remaining 2 seconds is used in data generation and management, not
compression. OSSFuzz consistently finds new vectors that time out
after a minute; on these we'll see nearly an order of magnitude of
orders of magnitude improvement.
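A rough sketch of the idea, much simplified and with hypothetical names: hash the next three bytes into a small table that remembers where that trigram was last seen, and try only that single candidate instead of scanning the whole window.

#include <stddef.h>
#include <stdint.h>

#define HASH_BITS 14
#define HASH_SIZE (1u << HASH_BITS)

/* Hypothetical 3-byte hash into a 14-bit table. */
static uint16_t hash3(const uint8_t *p)
{
        uint32_t h = p[0] | ((uint32_t)p[1] << 8) | ((uint32_t)p[2] << 16);
        return (uint16_t)((h * 2654435761u) >> (32 - HASH_BITS));
}

/*
 * Find a match for data[pos..] against the position the table remembers.
 * table[] must be zero-initialized; we store pos + 1 so 0 means "empty".
 * Returns the match length (0 if none) and sets *distance on a match.
 */
static size_t find_match(const uint8_t *data, size_t pos, size_t len,
                         size_t table[HASH_SIZE], size_t *distance)
{
        size_t match_len = 0;
        size_t candidate;
        uint16_t h;

        if (pos + 3 > len) {
                return 0;
        }
        h = hash3(data + pos);
        candidate = table[h];
        table[h] = pos + 1;

        if (candidate != 0) {
                size_t prev = candidate - 1;
                size_t max = len - pos;

                while (match_len < max &&
                       data[prev + match_len] == data[pos + match_len]) {
                        match_len++;
                }
                *distance = pos - prev;
        }
        return match_len >= 3 ? match_len : 0;
}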
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Autobuild-User(master): Joseph Sutton <jsutton@samba.org>
Autobuild-Date(master): Fri Dec 2 00:00:04 UTC 2022 on sn-devel-184
This makes it easier to rework the encoding decision to depend on a
hash table match rather than the current exhaustive search.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
This will make it possible to move encoding operations into helper
functions, which will make it easier to restructure the code to use a
hash table for faster matching.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
These are based on (i.e. copied and pasted from) the LZ77 + Huffman
tests.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Everything that is in testdata/compression/lzxpress-huffman/ can also
be used for lzxpress plain tests, which is something we really need.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
We are going to change from a slow exact match algorithm to a fast
heuristic search that will not always get the same results as the
exhaustive search.
To be precise, a million zeros will compress to 112 rather than 93 bytes.
We don't insist on an exact size, because that is not an issue here.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Mainly so I can go
make bin/test_lzxpress_plain && bin/test_lzxpress_plain
valgrind bin/test_lzxpress_plain
rr bin/test_lzxpress_plain
rr replay
in a tight loop.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
If compiled on Windows using Cygwin, MSYS2, or similar, this will output
compressed versions of files exactly as specified by MS-XCA, if the
following conditions are met:
1. The file is > 300 bytes.
2. The compressed file is smaller than the decompressed file.
Otherwise it returns the data unchanged, without warning; that's just
how the API works.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Compression uses a 3-byte hash to remember LZ77 matches in a 14-bit table.
This script runs the hash over all 16M combinations, then again over
all ASCII combinations, counting collisions to find hot-spots.
If you think you have a better hash, you are probably right, but you
should try it here -- alter h() -- before committing to it. This one is
literally the first one I thought of.
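For reference, the same counting exercise rendered in C (the commit adds the script itself; the h() below is a made-up stand-in, not the hash actually being evaluated):

#include <stdint.h>
#include <stdio.h>

#define TABLE_BITS 14
#define TABLE_SIZE (1u << TABLE_BITS)

/* Made-up stand-in: hash three bytes into a 14-bit bucket. */
static unsigned h(unsigned a, unsigned b, unsigned c)
{
        return ((a << 6) ^ (b << 3) ^ c) & (TABLE_SIZE - 1);
}

int main(void)
{
        static unsigned counts[TABLE_SIZE];
        unsigned a, b, c, i, max = 0;

        /* all 16M three-byte combinations */
        for (a = 0; a < 256; a++) {
                for (b = 0; b < 256; b++) {
                        for (c = 0; c < 256; c++) {
                                counts[h(a, b, c)]++;
                        }
                }
        }
        for (i = 0; i < TABLE_SIZE; i++) {
                if (counts[i] > max) {
                        max = counts[i];
                }
        }
        /* a perfectly uniform hash puts 1024 entries in every bucket */
        printf("most crowded bucket: %u entries\n", max);
        return 0;
}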
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>
Huffman tree re-quantisation and perhaps other code paths are only
triggered by pathological data like this.
Signed-off-by: Douglas Bagnall <douglas.bagnall@catalyst.net.nz>
Reviewed-by: Joseph Sutton <josephsutton@catalyst.net.nz>