Binaries linked by lld differ in structure from those linked by GNU ld,
so eu-elflint will always complain about them. Allow skipping those
checks by passing
%set_verify_elf_method lint=(skip|no)
Signed-off-by: Konstantin A. Lepikhov <lakostis@altlinux.ru>
This enhances ProvidedSymbols() in two ways:
* fixes the st_shndx check for the case when it is not $7 in
"readelf --wide --dyn-syms" output, as happens on some architectures
like ppc64le;
* filters out special symbols (__bss_start, _edata, _end, _fini, _init)
after stripping versioning.
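The column-independent approach can be sketched like this (a standalone
illustration against mock readelf output; the actual ProvidedSymbols()
code may differ):

```shell
# Instead of hardcoding $7, find which column the header labels "Ndx"
# and use that index for every symbol line (mock readelf output below).
awk '
    !ndx { for (i = 1; i <= NF; i++) if ($i == "Ndx") ndx = i; next }
    ndx  { print $ndx, $(ndx + 1) }
' <<'EOF'
   Num:    Value          Size Type    Bind   Vis      Ndx Name
     1: 0000000000001000    12 FUNC    GLOBAL DEFAULT   11 foo
     2: 0000000000000000     0 NOTYPE  GLOBAL DEFAULT  UND bar
EOF
```

This prints the section index and symbol name pairs ("11 foo",
"UND bar") regardless of how many columns precede Ndx.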
Co-Authored-by: Dmitry V. Levin <ldv@altlinux.org>
Most of the binaries on mipsel do not contain the stack section,
so this check is not relevant for that architecture. Moreover,
it fails, since it can't find the stack section, which is
(rightfully) not present.
eu-elflint does not understand mipsel binaries currently,
so here we disable that check, too.
Previously (going by grep's exit statuses):
0. if something strange was found in $rpath, an error was reported;
1. if nothing strange was found, no error was reported;
2. if grep itself failed, it looked like the previous case and no error
was reported (as if $rpath were valid).
Now, the pipe (and the whole script) would abort on grep's failure.
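The effect can be demonstrated with a minimal standalone snippet (this
shows generic shell pipefail behavior, not the actual verify-elf code):

```shell
set -o pipefail
# grep fails with status 2 (unreadable file); without pipefail the
# pipe's status would be that of the last command (cat, status 0),
# silently hiding the failure. With pipefail it becomes 2.
echo foo | grep foo /nonexistent-file 2>/dev/null | cat
echo "pipe status: $?"
```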
Now, if something fails, the pipe (and the function, and the whole
script) will fail.
If the exit codes are normal, they are handled according to our logic.
Previously, a failure (an exceptionally rare case) would lead to:
1. skipping the further exe_stack check and error_strict STACK "$f" 'STACK entry not found'
2. silently skipping the exe_stack check (which was impossible anyway because of the failure)
The pipe (and the function, and the whole script) would fail when the
resulting constructed rpath was empty. But usually at least
$RPM_VERIFY_ELF_LDD_RPATH is not empty, so the bad behavior has not
been observed.
However, when verify-elf is used standalone, $RPM_VERIFY_ELF_LDD_RPATH
is empty indeed, and this improvement becomes important.
Before this improvement:
$ RPM_BUILD_ROOT=/home /usr/lib/rpm/verify-elf /bin/pwd; echo $?
1
After this improvement:
$ VERIFY_ELF_UNRESOLVED=strict RPM_BUILD_ROOT=/home /usr/lib/rpm/verify-elf /bin/pwd; echo $?
0
$ VERIFY_ELF_UNRESOLVED=strict RPM_BUILD_ROOT=/home /usr/lib/rpm/verify-elf /usr/lib64/python3/site-packages/_cffi_backend.cpython-35m.so; echo $?
verify-elf: ERROR: /usr/lib64/python3/site-packages/_cffi_backend.cpython-35m.so: undefined symbol: PyExc_SystemError
....
1
When the given ELF file doesn't specify PT_INTERP, e.g. if it's a shared
library, we have to guess it. We used to try the program interpreter
of dump_ld_config, but it doesn't work well enough in multilib
environments, so extend the guess by trying the interpreters listed
in RTLDLIST from the /usr/bin/ldd script.
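The extraction can be sketched as follows (a hypothetical sketch run
against a stand-in copy of the ldd script; only the RTLDLIST variable
name itself comes from /usr/bin/ldd):

```shell
# Create a stand-in for /usr/bin/ldd containing the line we are after.
f=$(mktemp)
cat > "$f" <<'EOF'
#! /bin/bash
RTLDLIST="/lib/ld-linux.so.2 /lib64/ld-linux-x86-64.so.2"
EOF
# Pull the candidate interpreters out of the RTLDLIST assignment.
rtldlist=$(sed -n 's/^RTLDLIST=//p' "$f" | tr -d '"')
echo "$rtldlist"
rm -f "$f"
```

Each path in $rtldlist can then be tried in turn as the program
interpreter.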
Add a brp interface for checking contents of files in buildroot.
The initial revision of brp-check_contents scripts checks
for suspicious path names like /var/lib/run, complementing
the change introduced in sisyphus_check-0.8.43-alt1.
When scanning files for deps, debuginfo, fixup, and verify_elf,
ignore files whose type matches the "ELF * shared object, no machine, *"
pattern.
These are ELF files generated by GNU Guile; ignore them for now.
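The match can be illustrated with a shell case pattern (a sketch using
a canned file(1) string, not the actual scripts):

```shell
# A type string as file(1) might report it for a Guile object file.
filetype='ELF 32-bit MSB shared object, no machine, version 1 (embedded), dynamically linked, stripped'
case "$filetype" in
    'ELF '*' shared object, no machine, '*)
        echo 'skip: GNU Guile object' ;;
    *)
        echo 'scan for deps' ;;
esac
```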
In verify_unresolved(), do not pass requested preload objects to
$elf_ldd directly, because the latter is not an ELF interpreter but a
shell script. Instead, pass the list of objects that have to be
preloaded to the $elf_ldd script, which in turn assigns it to the
LD_PRELOAD variable in the ELF interpreter invocation.
Previously, verify_lfs checked whether the object has a valid
interpreter, but that check makes no sense for shared objects. The
verify_lfs check is valid for all ELFs linked with libraries supporting
both LFS and non-LFS APIs (libc.so and libz.so in our case).
From ld.so's documentation about the search order:
Using the directories specified in the DT_RPATH dynamic section
attribute of the binary if present and DT_RUNPATH attribute does not
exist. Use of DT_RPATH is deprecated.
Running pkg-config multiple times can produce too many cflags, most
of them being dups. With this change, I rely on pkg-config itself to
discard dups properly - pkg-config(1) manpage says that "duplicate
flags are merged (maintaining proper ordering)".
Hierarchical processing makes cpp.req more susceptible to "once-only
header" optimization. To demonstrate the problem, I've implemented
some debugging facilities. Here is how <gtk/gtk.h> is processed.
$ cpp.req -vv /usr/include/gtk-2.0/gtk/gtk.h
[...]
Include gdk/gdk.h
+ Push /usr/include/gtk-2.0/gdk/gdk.h
Include gdk/gdkapplaunchcontext.h
+ Push /usr/include/gtk-2.0/gdk/gdkapplaunchcontext.h
Include gio/gio.h
! Push /usr/include/glib-2.0/gio/gio.h
Include gio/giotypes.h
Push /usr/include/glib-2.0/gio/giotypes.h
Include gio/gioenums.h
Push /usr/include/glib-2.0/gio/gioenums.h
Include glib-object.h
Push /usr/include/glib-2.0/glib-object.h
Include gobject/gbinding.h
Push /usr/include/glib-2.0/gobject/gbinding.h
Include glib.h
Push /usr/include/glib-2.0/glib.h
[...]
+ Push /usr/include/gtk-2.0/gtk/gtkdebug.h
Include glib.h
Pop
[...]
recovered glib.h -> /usr/include/glib-2.0/glib.h
recovered stdarg.h -> /usr/lib64/gcc/x86_64-alt-linux/4.5.3/include/stdarg.h
recovered time.h -> /usr/include/time.h
recovered glib-object.h -> /usr/include/glib-2.0/glib-object.h
In the output, "Include" lines annotate "#include" instructions which
are about to be processed by cpp; "Push" and "Pop" annotate actual
file operations performed by cpp. Technically, "Include" annotations
are enabled via -dI option which installs cb_include callback in
gcc/c-ppoutput.c; "Push" and "Pop" are triggered in the guts of the
libcpp library. The library has hardcoded optimization against repeated
inclusions. According to "info cpp", "It remembers when a header file
has a wrapper #ifndef. If a subsequent #include specifies that header,
and the macro in the #ifndef is still defined, it does not bother to
rescan the file at all." (See should_stack_file in libcpp/files.c.)
This means that, normally, each "Include" should be followed by a
corresponding "Push". However, due to "once-only header" optimization,
some includes are not followed by a push. This means that the file
has already been pushed, and it happens to use a wrapper #ifndef.
Note that, in the output, this is exactly the case with <glib.h>.
Also note that, in the output, files internal to the package are marked
with "+" on the left. They are tracked down to the first non-packaged
file, which makes a dependency; these files are marked with "!". The
problem with <glib.h> is then that it gets first included in an
external file. Later it is also included in an internal file, but
a "Push" is not triggered. And since the internal file is subordinate
to <gtk/gtk.h> and is not going to be processed on its own, the
dependency on <glib.h> is lost.
To recover missing pushes, we have to associate each include with the
first-time push. In other words, we need a table which maintains a
(header -> filename) mapping; in the above example, the table will
contain (glib.h -> /usr/include/glib-2.0/glib.h). Using this table,
we can ascertain that each internal #include produced a result.
Now, this might still have some corner cases: includes with
non-canonical header names probably will not be recovered, and it is not
clear whether <foo.h> and "foo.h" should be processed differently.
It works well enough in simple cases, though.
I have to admit that cpp.req can be slow and often fails in an ugly
manner. To address these issues, this change introduces "hierarchical
processing". Consider the package libgtk+2-devel. Only a few header
files from this package can be included directly, and these files in
turn include other "private" headers which are protected against direct
inclusion. The idea is then that only those few files with the highest
rank have to be processed explicitly, and most of the "private" files
can be processed implicitly as they are included on behalf of
higher-ranking files.
To implement the idea, somehow we have to sort the files by their rank.
This probably has to involve some guesswork. However, assigning higher
ranks to shorter filenames seems to produce nice guesses. More precisely,
files are sorted by shorter directory names and then by shorter basenames.
Another possible criterion, which is not currently implemented, is to
also take into account the number of path components in a directory name.
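The ranking heuristic can be sketched like so (an illustration of the
sort order only, not the actual cpp.req code):

```shell
# Rank by directory-name length, then by basename length; shorter
# names sort first and are assumed to be the public entry points.
# Here gdk/gdk.h and gtk/gtk.h rank equally and both come before
# gtk/gtkaccessible.h.
while read -r f; do
    d=${f%/*}; b=${f##*/}
    printf '%d\t%d\t%s\n' "${#d}" "${#b}" "$f"
done <<'EOF' | sort -n -k1,1 -k2,2 | cut -f3
/usr/include/gtk-2.0/gtk/gtkaccessible.h
/usr/include/gtk-2.0/gtk/gtk.h
/usr/include/gtk-2.0/gdk/gdk.h
EOF
```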
The result is pretty amazing: the amount of time needed to process
libgtk+2-devel headers is reduced from 150s to 5s. Notably <gtk/gtk.h>
includes 241 packaged files. This is also due to other optimizations:
packaged files are excluded from dependencies early on, and each
required filename gets passed to FindPackage only once.
Hardlinking of identical .pyo and .pyc files has been split out of
brp-bytecompile_python into brp-hardlink_pyo_pyc, so that it also works
for python3 files (generated by the separate brp-bytecompile_python3).
Made it possible for third-party packages to ship their own brp-*
scripts. All existing brp-* scripts have been migrated to
/usr/lib/rpm/brp.d, and brp-alt has been taught to execute everything
from this directory in alphabetical order. All brp-* scripts are
required to have a three-digit prefix (specifying the execution order)
and a .brp suffix.
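The dispatch can be sketched as follows (a minimal sketch using a
throwaway directory; the real brp-alt logic may differ):

```shell
# Build a throwaway brp.d with two scripts ordered by numeric prefix.
brpdir=$(mktemp -d)
printf '#!/bin/sh\necho 100 ran\n' > "$brpdir/100-first.brp"
printf '#!/bin/sh\necho 200 ran\n' > "$brpdir/200-second.brp"
chmod +x "$brpdir"/*.brp
# Glob expansion is sorted, which yields the alphabetical order.
for brp in "$brpdir"/[0-9][0-9][0-9]-*.brp; do
    "$brp" || exit
done
rm -rf "$brpdir"
```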
Package only those /usr/lib/debug/* symlinks that complement the package
being processed and point to debuginfo regular files which are going to
be packaged along with these symlinks.
The most obvious consequence of this change is that library symlinks
intended for use by ld(1) will no longer cause their
/usr/lib/debug/usr/lib*/libNAME.so.debug counterparts to be packaged.
When the plain cpp check fails, cpp.req tries to process the same file
in C++ mode, which requires C++ support to be installed. As a result,
when C++ support is not installed, cpp.req clutters the log with vain
attempts to process files in C++ mode. This change reduces the noise by
checking whether C++ support is actually available.
Some header files have protection against being included into user
code directly. This means that, when processing such files, cpp
is going to fail, and some dependencies probably will be missing.
/usr/include/gtk-2.0/gtk/gtkaccessible.h:
20 #if defined(GTK_DISABLE_SINGLE_INCLUDES) && !defined (__GTK_H_INSIDE__) && !defined (GTK_COMPILATION)
21 #error "Only <gtk/gtk.h> can be included directly."
22 #endif
23
24 #ifndef __GTK_ACCESSIBLE_H__
25 #define __GTK_ACCESSIBLE_H__
26
27 #include <atk/atk.h>
28 #include <gtk/gtkwidget.h>
To remedy the problem, we should, as per the above example, process
gtk/gtk.h dependencies recursively. Dependencies which we now attribute
to gtk/gtk.h are: 1) files which are packaged within the same subpackage
- these dependencies will be optimized out later by rpm; 2) the first
file not packaged into this subpackage, which is atk/atk.h. Files below
atk/atk.h are not processed.
Packaged?   Stack
          +---------------------+
    +     | gtk/gtk.h           |
          +---------------------+
    +     | gtk/gtkaccessible.h | <- SPmark
          +---------------------+
    -     | atk/atk.h           |
          +---------------------+
          |         ...         |
Also note that packaged files in cpp output should not be identified by
filenames, since the filenames in the output may be non-canonical.
Therefore I use the standard Unix technique of identifying files by
(dev, ino).
/usr/include/boost/spirit/home/support/detail/lexer/containers/ptr_vector.hpp:
9 #include "../size_t.hpp"
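The (dev, ino) identification is easy to see with stat(1) (a generic
illustration, not the cpp.req code): two different spellings of the
same path yield the same pair.

```shell
# A canonical and a non-canonical path to the same file compare equal
# by (device, inode) even though the names differ.
a=$(stat -c '%d:%i' /etc/passwd)
b=$(stat -c '%d:%i' /etc/../etc/passwd)
[ "$a" = "$b" ] && echo 'same file'
```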