Hi, this patch makes the exported symbols explicit in the map file. Furthermore, it moves symbols only listed in internal headers into a special section, which makes them valid only for testing. I've tested it with abi-compliance-checker, and it detects the following missing from nettle: nettle_aeads, nettle_armors, nettle_ciphers, nettle_hashes, and from hogweed the nettle_secp_*r1 symbols, which seems expected.
Niels, I'm not sure however if that was your intention. Didn't you want to deprecate some of the _nettle symbols as well like _nettle_secp_256r1? These are however still present in the exported headers and unless we remove them, they seem to be part of the API and ABI.
I also attach the script which generated the files in case you'd like to play with it.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
Niels, I'm not sure however if that was your intention. Didn't you want to deprecate some of the _nettle symbols as well like _nettle_secp_256r1?
I was thinking of doing something simple, with nettle_* symbols going into the supported ABI (symbol version NETTLE_@LIBNETTLE_MAJOR@), and all _nettle_* symbols getting symbol version NETTLE_@LIBNETTLE_MAJOR@_@LIBNETTLE_MINOR@, which explicitly makes them *not* binary compatible between minor versions.
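For illustration, with the @...@ placeholders substituted (say major version 7, minor 0 — the numbers and symbol names here are only examples, not the actual map file), such a version script might look like:

```
NETTLE_7 {
  global:
    nettle_md5_init;
    nettle_md5_update;
    /* ... every advertised nettle_* symbol, listed explicitly ... */
};

NETTLE_7_0 {
  global:
    _nettle_secp_256r1;
    /* ... internal _nettle_* symbols; this node's name changes with
       every minor release, deliberately breaking binary compatibility
       for anyone who linked against them ... */
  local:
    *;
} NETTLE_7;
```

The trailing `NETTLE_7` makes the internal node depend on the stable one, and `local: *;` hides everything not matched by either node.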
I think it's helpful that ABI status corresponds to the names used in the source and header files, both for maintenance and for user documentation.
What do you think? Are there any of the current _nettle_* symbols that should be in the advertised API (and hence renamed)?
Regards, /Niels
On Fri, 2018-06-08 at 10:41 +0200, Niels Möller wrote:
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
Niels, I'm not sure however if that was your intention. Didn't you want to deprecate some of the _nettle symbols as well like _nettle_secp_256r1?
I was thinking of doing something simple, with nettle_* symbols going into the supported ABI (symbol version NETTLE_@LIBNETTLE_MAJOR@), and all _nettle_* symbols getting symbol version NETTLE_@LIBNETTLE_MAJOR@_@LIBNETTLE_MINOR@, which explicitly makes them *not* binary compatible between minor versions.
I think it's helpful that ABI status corresponds to the names used in the source and header files, both for maintenance and for user documentation.
If you mean removing them from the public headers and placing them in one (or multiple) internal ones, it makes sense to me. Otherwise it would be confusing why they are listed in headers if they are unavailable or break ABI.
What do you think? Are there any of the current _nettle_* symbols that should be in the advertised API (and hence renamed)?
I do not use any of them in gnutls, but searching at the debian code, I see: _nettle_md5_compress (sogo), _nettle_sha1_compress (filezilla/putty)
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
If you mean removing them from the public headers and placing them in one (or multiple) internal ones, it makes sense to me.
Sounds reasonable. Then it's harder to use them without realizing they're internal and not compatible over version changes (even though we ought to document the conventions).
What do you think? Are there any of the current _nettle_* symbols that should be in the advertised API (and hence renamed)?
I do not use any of them in gnutls, but searching at the debian code, I see: _nettle_md5_compress (sogo), _nettle_sha1_compress (filezilla/putty)
We could promote those to advertised ABI, then. I.e., linker symbols nettle_*_compress after the ABI change, while we could keep _nettle_*_compress as aliases in the header file, to not also break the API.
Regards, /Niels
On Fri, 2018-06-08 at 13:34 +0200, Niels Möller wrote:
Nikos Mavrogiannopoulos nmav@redhat.com writes:
If you mean removing them from the public headers and placing them in one (or multiple) internal ones, it makes sense to me.
Sounds reasonable. Then it's harder to use them without realizing they're internal and not compatible over version changes (even though we ought to document the conventions).
What do you think? Are there any of the current _nettle_* symbols that should be in the advertised API (and hence renamed)?
I do not use any of them in gnutls, but searching at the debian code, I see: _nettle_md5_compress (sogo), _nettle_sha1_compress (filezilla/putty)
We could promote those to advertised ABI, then. I.e., linker symbols nettle_*_compress after the ABI change, while we could keep _nettle_*_compress as aliases in the header file, to not also break the API.
I attach the current state. It does move all internal symbols into multiple internal headers and removes them from the exported list. The last patch renames _nettle_md5_compress and _nettle_sha1_compress and includes them in the exported list (quite ugly; maybe we skip that rename?).
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
I attach the current state. It does move all internal symbols into multiple internal headers
Makes sense to me.
The last patch renames _nettle_md5_compress and _nettle_sha1_compress and includes it into the exported list
I think this makes sense too. We should leave
  #define _nettle_md5_compress  nettle_md5_compress
  #define _nettle_sha1_compress nettle_sha1_compress
in the header files, for API (source level) backwards compatibility.
If we do that too, can we use glob patterns in the linker scripts, instead of listing all symbols explicitly? It might make sense to do this in three steps:
1. Rename the above compress functions.
2. Move declarations of internal functions to uninstalled headers.
3. Update the linker script to handle _nettle_*-symbols differently.
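If _nettle_* names are reserved for internals by convention, the script from step 3 could indeed rely on globs alone. A sketch, with illustrative version numbers following the NETTLE_@LIBNETTLE_MAJOR@ naming proposed earlier in the thread:

```
NETTLE_7 {
  global:
    nettle_*;      /* stable, advertised ABI */
};

NETTLE_INTERNAL_7_0 {
  global:
    _nettle_*;     /* internal: node renamed every minor release */
  local:
    *;             /* hide everything else */
} NETTLE_7;
```

Note that `nettle_*` does not match names beginning with `_nettle_`, so the two globs partition the symbols cleanly.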
Regards, /Niels
On Sun, 2018-06-17 at 19:55 +0200, Niels Möller wrote:
Nikos Mavrogiannopoulos nmav@redhat.com writes:
I attach the current state. It does move all internal symbols into multiple internal headers
Makes sense to me.
The last patch renames _nettle_md5_compress and _nettle_sha1_compress and includes it into the exported list
I think this makes sense too. We should leave
  #define _nettle_md5_compress  nettle_md5_compress
  #define _nettle_sha1_compress nettle_sha1_compress
in the header files, for API (source level) backwards compatibility.
Definitions are already in place.
If we do that too, can we use glob patterns in the linker scripts, instead of listing all symbols explicitly? It might make sense to do this in three steps:
I'd suggest against doing that because there will be no way to achieve symbol versioning. See the previous discussion at: https://lists.lysator.liu.se/pipermail/nettle-bugs/2018/007287.html
Rename the above compress functions.
Move declarations of internal functions to uninstalled headers.
These are already done.
regards, Nikos
On Mon, 2018-06-18 at 13:53 +0200, Nikos Mavrogiannopoulos wrote:
On Sun, 2018-06-17 at 19:55 +0200, Niels Möller wrote:
Nikos Mavrogiannopoulos nmav@redhat.com writes:
I attach the current state. It does move all internal symbols into multiple internal headers
Makes sense to me.
The last patch renames _nettle_md5_compress and _nettle_sha1_compress and includes it into the exported list
I think this makes sense too. We should leave
  #define _nettle_md5_compress  nettle_md5_compress
  #define _nettle_sha1_compress nettle_sha1_compress
in the header files, for API (source level) backwards compatibility.
Definitions are already in place.
If we do that too, can we use glob patterns in the linker scripts, instead of listing all symbols explicitly? It might make sense to do this in three steps:
I'd suggest against doing that because there will be no way to achieve symbol versioning. See the previous discussion at: https://lists.lysator.liu.se/pipermail/nettle-bugs/2018/007287.html
The attached version only exports symbols by wildcards and combines patches.
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
The attached version only exports symbols by wildcards and combines patches.
Thanks a lot. I've tried this out now (currently on master-updates branch for testing). I've noticed two problems:
1. x86_64 --enable-fat builds are broken, errors like
  /home/nisse/hack/nettle/fat-x86_64.c:178:34: error: ‘nettle_sha1_compress_sha_ni’ undeclared (first use in this function)
     nettle_sha1_compress_vec = nettle_sha1_compress_sha_ni;
We have to decide if the name of this assembly function should be "nettle_sha1_compress_sha_ni" (and if so, update the macro DECLARE_FAT_FUNC_VAR in fat-setup.h and all uses), or "_nettle_sha1_compress_sha_ni" (and update definitions of m4 macro fat_transform in x86_64/fat/sha1-compress*.asm). The latter alternative seems best to me.
2. The gnutls build fails with
  Submodule path 'tests/suite/tls-fuzzer/tlslite-ng': checked out 'd976188fe7fd7466dc5cf0818a4ef87e37381897'
  ./bootstrap
  ./bootstrap: Bootstrapping from checked-out gnutls sources...
  ./bootstrap: consider installing git-merge-changelog from gnulib
  ./bootstrap: getting gnulib files...
  ./bootstrap: getting translations into po/.reference for gnutls...
  ./bootstrap: line 702: rsync: command not found
  wget: relocation error: /lib64/libgnutls.so.30: symbol nettle_secp_521r1, version HOGWEED_4 not defined in file libhogweed.so.4 with link time reference
  make: *** [cfg.mk:63: autoreconf] Error 127
My best guess is that wget somehow got linked at runtime with the version of libhogweed under test (where the bump of soname and version numbers isn't done yet). If that's right, it should work if either the test scripts ensure that the system version of libhogweed is used here, or I check in a change to the soname to tell the linker that the development version under test is incompatible with the wget executable.
I think I'll first try to sort this out on the master-updates branch. If it turns out to be more complicated, I'll back out of these changes, and try to do the sha1_compress and md5_compress renaming first, to deal with fewer problems at a time.
Regards, /Niels
On Sun, Jul 8, 2018 at 8:36 AM Niels Möller nisse@lysator.liu.se wrote:
Nikos Mavrogiannopoulos nmav@redhat.com writes:
The attached version only exports symbols by wildcards and combines patches.
Thanks a lot. I've tried this out now (currently on master-updates branch for testing). I've noticed two problems:
x86_64 --enable-fat builds are broken, errors like
  /home/nisse/hack/nettle/fat-x86_64.c:178:34: error: ‘nettle_sha1_compress_sha_ni’ undeclared (first use in this function)
     nettle_sha1_compress_vec = nettle_sha1_compress_sha_ni;
We have to decide if the name of this assembly function should be "nettle_sha1_compress_sha_ni" (and if so, update the macro DECLARE_FAT_FUNC_VAR in fat-setup.h and all uses), or "_nettle_sha1_compress_sha_ni" (and update definitions of m4 macro fat_transform in x86_64/fat/sha1-compress*.asm). The latter alternative seems best to me.
That would also mean the latter would be part of the ABI, as we agreed on exporting symbols by wildcard. Looking at it further, it seems the fat subsystem was not made for wildcard exporting of symbols, as it creates new symbols which end up exported.
The gnutls build fails with
  Submodule path 'tests/suite/tls-fuzzer/tlslite-ng': checked out 'd976188fe7fd7466dc5cf0818a4ef87e37381897'
  ./bootstrap
  ./bootstrap: Bootstrapping from checked-out gnutls sources...
  ./bootstrap: consider installing git-merge-changelog from gnulib
  ./bootstrap: getting gnulib files...
  ./bootstrap: getting translations into po/.reference for gnutls...
  ./bootstrap: line 702: rsync: command not found
  wget: relocation error: /lib64/libgnutls.so.30: symbol nettle_secp_521r1, version HOGWEED_4 not defined in file libhogweed.so.4 with link time reference
  make: *** [cfg.mk:63: autoreconf] Error 127
My best guess is that wget somehow got linked at runtime with the version of libhogweed under test (where the bump of soname and version numbers isn't done yet). If that's right, it should work if either the test scripts ensure that the system version of libhogweed is used here, or I check in a change to the soname to tell the linker that the development version under test is incompatible with the wget executable.
I think I'll first try to sort this out on the master-updates branch. If it turns out to be more complicated, I'll back out of these changes, and try to do the sha1_compress and md5_compress renaming first, to deal with fewer problems at a time.
I've updated the image used for building gnutls. The f26 image is no longer updated and cannot build the current master.
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
On Sun, Jul 8, 2018 at 8:36 AM Niels Möller nisse@lysator.liu.se wrote:
We have to decide if the name of this assembly function should be "nettle_sha1_compress_sha_ni" (and if so, update the macro DECLARE_FAT_FUNC_VAR in fat-setup.h and all uses), or "_nettle_sha1_compress_sha_ni" (and update definitions of m4 macro fat_transform in x86_64/fat/sha1-compress*.asm). The latter alternative seems best to me.
That would also mean the latter would be part of the ABI, as we agreed on exporting symbols by wildcard. Looking at it further, it seems the fat subsystem was not made for wildcard exporting of symbols, as it creates new symbols which end up exported.
I went for the latter alternative, which gives us _nettle_sha1_compress_sha_ni and _nettle_sha1_compress_x86_64 (with NETTLE_INTERNAL_* symbol version, by the wildcard rule), and nettle_sha1_compress is a public symbol which jumps via a function pointer (or uses ifunc indirection, if glibc is changed to resolve libc ifunc relocations before other libraries).
I think that should be right; the set of public symbols is the same in a fat and non-fat build.
FYI, I also did a forced update of the master-updates branch, to fix a commit message typo.
  diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
  index abfb81a3..52f41c74 100644
  --- a/.gitlab-ci.yml
  +++ b/.gitlab-ci.yml
  @@ -1,7 +1,7 @@
   variables:
     BUILD_IMAGES_PROJECT: gnutls/build-images
  -  FEDORA_BUILD: buildenv-f26
  -  FEDORA_X86_BUILD: buildenv-f26-x86
  +  FEDORA_BUILD: buildenv-f28
  +  FEDORA_X86_BUILD: buildenv-f28-x86
     GET_SOURCES_ATTEMPTS: "3"

   # remove any pre-installed headers from nettle
Applied. Should hopefully make the gnutls build pass, leaving only the aarch64 ci failures.
Is it easy to add a (32-bit) arm build in .gitlab-ci? Any of real hardware, qemu virtual machine, or cross compile + qemu-user testing, would be good to have. Otherwise, I'll have to do some local testing to ensure that arm fat and non-fat builds still work.
Regards, /Niels
On Mon, Jul 9, 2018 at 11:44 AM, Niels Möller nisse@lysator.liu.se wrote:
Nikos Mavrogiannopoulos nmav@redhat.com writes:
On Sun, Jul 8, 2018 at 8:36 AM Niels Möller nisse@lysator.liu.se wrote:
We have to decide if the name of this assembly function should be "nettle_sha1_compress_sha_ni" (and if so, update the macro DECLARE_FAT_FUNC_VAR in fat-setup.h and all uses), or "_nettle_sha1_compress_sha_ni" (and update definitions of m4 macro fat_transform in x86_64/fat/sha1-compress*.asm). The latter alternative seems best to me.
That would also mean the latter would be part of the ABI, as we agreed on exporting symbols by wildcard. Looking at it further, it seems the fat subsystem was not made for wildcard exporting of symbols, as it creates new symbols which end up exported.
I went for the latter alternative, which gives us _nettle_sha1_compress_sha_ni and _nettle_sha1_compress_x86_64 (with NETTLE_INTERNAL_* symbol version, by the wildcard rule), and nettle_sha1_compress is a public symbol which jumps via a function pointer (or uses ifunc indirection, if glibc is changed to resolve libc ifunc relocations before other libraries).
I think that should be right; the set of public symbols is the same in a fat and non-fat build.
FYI, I also did a forced update of the master-updates branch, to fix a commit message typo.
That shouldn't be a problem.
  diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
  index abfb81a3..52f41c74 100644
  --- a/.gitlab-ci.yml
  +++ b/.gitlab-ci.yml
  @@ -1,7 +1,7 @@
   variables:
     BUILD_IMAGES_PROJECT: gnutls/build-images
  -  FEDORA_BUILD: buildenv-f26
  -  FEDORA_X86_BUILD: buildenv-f26-x86
  +  FEDORA_BUILD: buildenv-f28
  +  FEDORA_X86_BUILD: buildenv-f28-x86
     GET_SOURCES_ATTEMPTS: "3"

   # remove any pre-installed headers from nettle
Applied. Should hopefully make the gnutls build pass, leaving only the aarch64 ci failures.
Is it easy to add a (32-bit) arm build in .gitlab-ci? Any of real hardware, qemu virtual machine, or cross compile + qemu-user testing, would be good to have. Otherwise, I'll have to do some local testing to ensure that arm fat and non-fat builds still work.
Yes, and most likely we can re-use the images Michael added in gnutls. I'll look into it.
regards, Nikos
On Mon, Jul 9, 2018 at 11:44 AM, Niels Möller nisse@lysator.liu.se wrote:
  @@ -1,7 +1,7 @@
   variables:
     BUILD_IMAGES_PROJECT: gnutls/build-images
  -  FEDORA_BUILD: buildenv-f26
  -  FEDORA_X86_BUILD: buildenv-f26-x86
  +  FEDORA_BUILD: buildenv-f28
  +  FEDORA_X86_BUILD: buildenv-f28-x86
     GET_SOURCES_ATTEMPTS: "3"

   # remove any pre-installed headers from nettle
Applied. Should hopefully make the gnutls build pass, leaving only the aarch64 ci failures.
Is it easy to add a (32-bit) arm build in .gitlab-ci? Any of real hardware, qemu virtual machine, or cross compile + qemu-user testing, would be good to have. Otherwise, I'll have to do some local testing to ensure that arm fat and non-fat builds still work.
Patch is attached adding the builds from gnutls (mips,arm,aarch64).
The output is at: https://gitlab.com/nmav/nettle/pipelines/25504684
The fat-arm build fails with:
  fat-arm.c: In function 'fat_init':
  fat-arm.c:194:7: error: '_nettle_sha1_compress_vec' undeclared (first use in this function); did you mean 'nettle_sha1_compress_vec'?
         _nettle_sha1_compress_vec = _nettle_sha1_compress_armv6;
         ^~~~~~~~~~~~~~~~~~~~~~~~~
         nettle_sha1_compress_vec
  fat-arm.c:194:7: note: each undeclared identifier is reported only once for each function it appears in
  Makefile:258: recipe for target 'fat-arm.o' failed
  make[1]: *** [fat-arm.o] Error 1
btw. Note that fat-arm.c capabilities detection can be simplified with getauxval:
https://community.arm.com/android-community/b/android/posts/runtime-detectio...
I've modified gnutls to use that too: https://gitlab.com/gnutls/gnutls/blob/master/lib/accelerated/aarch64/aarch64...
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
The fat-arm build fails with:
  fat-arm.c: In function 'fat_init':
  fat-arm.c:194:7: error: '_nettle_sha1_compress_vec' undeclared (first use in this function); did you mean 'nettle_sha1_compress_vec'?
         _nettle_sha1_compress_vec = _nettle_sha1_compress_armv6;
         ^~~~~~~~~~~~~~~~~~~~~~~~~
         nettle_sha1_compress_vec
  fat-arm.c:194:7: note: each undeclared identifier is reported only once for each function it appears in
  Makefile:258: recipe for target 'fat-arm.o' failed
  make[1]: *** [fat-arm.o] Error 1
I've noticed the same in a local cross-compilation environment. I'm about to check in the fixes.
btw. Note that fat-arm.c capabilities detection can be simplified with getauxval:
https://community.arm.com/android-community/b/android/posts/runtime-detectio...
Ok, maybe we should revisit that. When I wrote the current code, which is a few years ago now, I think I considered using getauxval, but concluded that it was less portable than reading /proc/cpuinfo. (And nettle currently only has arm-specific code for 32-bit arm; portability considerations for 64-bit arm may be different since there's less old stuff to care about).
Regards, /Niels
On Thu, 2018-07-12 at 10:31 +0200, Niels Möller wrote:
btw. Note that fat-arm.c capabilities detection can be simplified with getauxval:
https://community.arm.com/android-community/b/android/posts/runtime-detection-of-cpu-features-on-an-armv8-a-cpu
Ok, maybe we should revisit that. When I wrote the current code, which is a few years ago now, I think I considered using getauxval, but concluded that it was less portable than reading /proc/cpuinfo. (And nettle currently only has arm-specific code for 32-bit arm; portability considerations for 64-bit arm may be different since there's less old stuff to care about).
The main problem with /proc/cpuinfo is that in the qemu-user environment it is not emulated, and thus you'll be reading the CI server's CPU information. We could work around that by setting multiple environment overrides during make check to ensure all code paths are run.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
Patch is attached adding the builds from gnutls (mips,arm,aarch64).
Excellent! Now applied to the master-updates branch, together with arm fat fixes.
Regards, /Niels
On Thu, 2018-07-12 at 10:37 +0200, Niels Möller wrote:
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
Patch is attached adding the builds from gnutls (mips,arm,aarch64).
Excellent! Now applied to the master-updates branch, together with arm fat fixes.
Did you push it? I don't seem to see a pipeline with the f28 build systems: https://gitlab.com/gnutls/nettle/pipelines
Nikos Mavrogiannopoulos nmav@redhat.com writes:
Did you push it? I don't seem to see a pipeline with the f28 build systems: https://gitlab.com/gnutls/nettle/pipelines
History info on https://gitlab.com/gnutls/nettle/commits/master-updates says "This project is mirrored from https://git.lysator.liu.se/nettle/nettle.git. Updated 19 minutes ago. This branch has diverged from upstream." ^^^^^^^^^^^^
So I'd guess my force update on that branch has confused the mirroring. Maybe you need to somehow force push the right version to the mirror version of that branch to get it back in sync.
Sorry about that.
Regards, /Niels
On Thu, 2018-07-12 at 15:28 +0200, Niels Möller wrote:
Nikos Mavrogiannopoulos nmav@redhat.com writes:
Did you push it? I don't seem to see a pipeline with the f28 build systems: https://gitlab.com/gnutls/nettle/pipelines
History info on https://gitlab.com/gnutls/nettle/commits/master-updates says "This project is mirrored from https://git.lysator.liu.se/nettle/nettle.git. Updated 19 minutes ago. This branch has diverged from upstream." ^^^^^^^^^^^^
I've deleted the branches on gitlab and now it seems to work. It seems that the static analyzer in F28 finds a few issues: https://gitlab.com/gnutls/nettle/-/jobs/81332560 (click on browse for artifacts)
They are memory leaks in the examples, and one which relates to mini-gmp.
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
I've deleted the branches on gitlab and now it seems to work. It seems that the static analyzer in F28 finds a few issues: https://gitlab.com/gnutls/nettle/-/jobs/81332560 (click on browse for artifacts)
We'll see how to deal with those failures, but I don't think they should block merging to the master branch.
There are also two "runner system failures", see https://gitlab.com/gnutls/nettle/-/jobs/81332562. Can that be restarted? It would be nice to see a successful arm build before merging to master.
They are memory leaks in the examples, and one which relates to mini-gmp.
I'll try to take care of the examples. The mini-gmp issue is more confusing.
Regards, /Niels
On Thu, 2018-07-12 at 18:04 +0200, Niels Möller wrote:
We'll see how to deal with those failures, but I don't think they should block merging to the master branch.
There are also two "runner system failures", see https://gitlab.com/gnutls/nettle/-/jobs/81332562. Can that be restarted? It would be nice to see a successful arm build before merging to master.
I've restarted it. It passes. If you have a gitlab account I will give you access to the repo.
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
It seems that the static analyzer in F28 finds a few issues: https://gitlab.com/gnutls/nettle/-/jobs/81332560 (click on browse for artifacts)
They are memory leaks in the examples, and one which relates to mini-gmp.
I fixed the leaks. Remaining issues listed on https://gnutls.gitlab.io/-/nettle/-/jobs/81425078/artifacts/scan-build-lib/2... (but will disappear in a few days; should we increase the expire_in parameter in .gitlab-ci.yml?).
I've had a closer look now, and I think both are of similar type. In eratosthenes.c, we have a bitmap initialized with
  static void
  vector_init (unsigned long *vector, unsigned long size)
  {
    unsigned long end = (size + BITS_PER_LONG - 1) / BITS_PER_LONG;
    unsigned long i;

    for (i = 0; i < end; i++)
      vector[i] = ~0UL;
  }
and later updated with the loop
  static void
  vector_clear_bits (unsigned long *vector, unsigned long step,
                     unsigned long start, unsigned long size)
  {
    unsigned long bit;

    for (bit = start; bit < size; bit += step)
      {
        unsigned long i = bit / BITS_PER_LONG;
        unsigned long mask = 1L << (bit % BITS_PER_LONG);

        vector[i] &= ~mask;
      }
  }
The value of the size argument is the same in both places (named "sieve_nbits" in the calling code). The static analyzer complains that vector[i] is not properly initialized. And it's kind-of right: In case (size + BITS_PER_LONG - 1) wraps around, the bitmap will not be initialized properly. But I would classify it as an out-of-bounds access, not use of uninitialized data.
I could add an overflow check in the vector_alloc function, which uses the same expression, and where it might make sense. I wonder if that would make the analyzer happy.
In eccdata.c, the allocation function is ecc_alloc, which allocates an array and loops over it to initialize all elements. The complaint is in ecc_pippenger_precompute, which allocates an array with
ecc->table = ecc_alloc (ecc->table_size);
and the complaint is the assignment of element 1,
ecc_set (&ecc->table[1], &ecc->g);
where ecc->table[1].x->_mp_alloc is claimed to be uninitialized.
This also is kind-of right; if the program is run with strange parameters, we might get ecc->table_size < 2 (and again, in that case, it's an out of bounds access).
I don't have that much experience with the static analyzer. Should I just add error handling for the corner cases, and see if that solves the problem?
Regards, /Niels
On Fri, 2018-07-13 at 10:08 +0200, Niels Möller wrote:
I've had a closer look now, and I think both are of similar type. In eratosthenes.c, we have a bitmap initialized with
  static void
  vector_init (unsigned long *vector, unsigned long size)
  {
    unsigned long end = (size + BITS_PER_LONG - 1) / BITS_PER_LONG;
    unsigned long i;

    for (i = 0; i < end; i++)
      vector[i] = ~0UL;
  }
and later updated with the loop
  static void
  vector_clear_bits (unsigned long *vector, unsigned long step,
                     unsigned long start, unsigned long size)
  {
    unsigned long bit;

    for (bit = start; bit < size; bit += step)
      {
        unsigned long i = bit / BITS_PER_LONG;
        unsigned long mask = 1L << (bit % BITS_PER_LONG);

        vector[i] &= ~mask;
      }
  }
The value of the size argument is the same in both places (named "sieve_nbits" in the calling code). The static analyzer complains that vector[i] is not properly initialized. And it's kind-of right: In case (size + BITS_PER_LONG - 1) wraps around, the bitmap will not be initialized properly. But I would classify it as an out-of-bounds access, not use of uninitialized data.
I could add an overflow check in the vector_alloc function, which uses the same expression, and where it might make sense. I wonder of that would make the analyzer happy.
In eccdata.c, the allocation function is ecc_alloc, which allocates an array and loops over it to initialize all elements. The complaint is in ecc_pippenger_precompute, which allocates an array with
ecc->table = ecc_alloc (ecc->table_size);
and the complaint is the assignment of element 1,
ecc_set (&ecc->table[1], &ecc->g);
where ecc->table[1].x->_mp_alloc is claimed to be uninitialized.
This also is kind-of right; if the program is run with strange parameters, we might get ecc->table_size < 2 (and again, in that case, it's an out of bounds access).
I don't have that much experience with the static analyzer. Should I just add error handling for the corner cases, and see if that solves the problem?
An assert() should be sufficient to inform it of the constraints.
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
An assert() should be sufficient to inform it of the constraints.
I've added two asserts for the eratosthenes.c warning; we'll see if that helps (an assert only in vector_alloc wasn't enough). For the other case, eccdata.c, I think some additional checks of the input arguments would be appropriate, in addition to an assert.
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes:
Nikos Mavrogiannopoulos nmav@redhat.com writes:
An assert() should be sufficient to inform it of the constraints.
I've added two asserts for the eratosthenes.c warning; we'll see if that helps (an assert only in vector_alloc wasn't enough).
Another analysis run has completed, and it still complains. See https://gnutls.gitlab.io/-/nettle/-/jobs/81674466/artifacts/scan-build-lib/2...
Any other suggestions?
Regards, /Niels
On Fri, Jul 13, 2018 at 9:26 PM, Niels Möller nisse@lysator.liu.se wrote:
nisse@lysator.liu.se (Niels Möller) writes:
Nikos Mavrogiannopoulos nmav@redhat.com writes:
An assert() should be sufficient to inform it of the constraints.
I've added two asserts for the eratosthenes.c warning; we'll see if that helps (an assert only in vector_alloc wasn't enough).
Another analysis run has completed, and it still complains. See https://gnutls.gitlab.io/-/nettle/-/jobs/81674466/artifacts/scan-build-lib/2...
Any other suggestions?
Given that it is only an example, maybe use #ifndef __clang_analyzer__ for the whole file?
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
Given that it is only an example maybe use #ifndef __clang_analyzer__ in the whole file?
Could be an option, if nothing else works. But one would need a dummy main function, otherwise we'd get link errors.
For eccdata.c we'd need something less drastic.
Regards, /Niels
nettle-bugs@lists.lysator.liu.se