Mark Rutland
ad8110706f locking/atomic: scripts: generate kerneldoc comments
Currently the atomics are documented in Documentation/atomic_t.txt, and
have no kerneldoc comments. There are a sufficient number of gotchas
(e.g. semantics, noinstr-safety) that it would be nice to have comments
to call these out, and it would be nice to have kerneldoc comments such
that these can be collated.

While it's possible to derive the semantics from the code, this can be
painful given the amount of indirection we currently have (e.g. fallback
paths), and it's easy to be misled by naming, e.g.

* The unconditional void-returning ops *only* have relaxed variants
  without a _relaxed suffix, and can easily be mistaken for being fully
  ordered.

  It would be nice to give these a _relaxed() suffix, but this would
  result in significant churn throughout the kernel.

* Our naming of conditional and unconditional+test ops is rather
  inconsistent, and it can be difficult to derive the name of an
  operation, or to identify whether an op is conditional or
  unconditional+test.

  Some ops are clearly conditional:
  - dec_if_positive
  - add_unless
  - dec_unless_positive
  - inc_unless_negative

  Some ops are clearly unconditional+test:
  - sub_and_test
  - dec_and_test
  - inc_and_test

  However, what exactly those ops test is not obvious. A _test_zero suffix
  might be clearer.

  Others could be read ambiguously:
  - inc_not_zero	// conditional
  - add_negative	// unconditional+test

  It would probably be worth renaming these, e.g. to inc_unless_zero and
  add_test_negative.

As a step towards making this more consistent and easier to understand,
this patch adds kerneldoc comments for all generated *atomic*_*()
functions. These are generated from templates, with some common text
shared, making it easy to extend these in future if necessary.

I've tried to make these as consistent and clear as possible, and I've
deliberately ensured:

* All ops have their ordering explicitly mentioned in the short and long
  description.

* All test ops have "test" in their short description.

* All ops are described as an expression using their usual C operator.
  For example:

  andnot: "Atomically updates @v to (@v & ~@i)"
  inc:    "Atomically updates @v to (@v + 1)"

  Which may be clearer to non-native English speakers, and allows all
  the operations to be described in the same style.

* All conditional ops have their condition described as an expression
  using the usual C operators. For example:

  add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
  cmpxchg:    "If (@v == @old), atomically updates @v to @new"

  Which may be clearer to non-native English speakers, and allows all
  the operations to be described in the same style.

* All bitwise ops (and,andnot,or,xor) explicitly mention that they are
  bitwise in their short description, so that they are not mistaken for
  performing their logical equivalents.

* The noinstr safety of each op is explicitly described, with a
  description of whether or not to use the raw_ form of the op.
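
As an illustration of the result, the generated comment for
atomic_inc_and_test() ends up roughly of this shape (a sketch, not the
verbatim generated text):

  /**
   * atomic_inc_and_test() - atomic increment and test if zero with full ordering
   * @v: pointer to atomic_t
   *
   * Atomically updates @v to (@v + 1) with full ordering.
   *
   * Unsafe to use in noinstr code; use raw_atomic_inc_and_test() there.
   *
   * Return: @true if the resulting value of @v is zero, @false otherwise.
   */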

There should be no functional change as a result of this patch.

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-26-mark.rutland@arm.com
2023-06-05 09:57:23 +02:00
Mark Rutland
8aaf297a0d docs: scripts: kernel-doc: accept bitwise negation like ~@var
In some cases we'd like to indicate the bitwise negation of a parameter,
e.g.

  ~@var

This will be helpful for describing the atomic andnot operations, where
we'd like to write comments of the form:

  Atomically updates @v to (@v & ~@i)

Which kernel-doc currently transforms to:

  Atomically updates **v** to (**v** & ~**i**)

Rather than the preferable form:

  Atomically updates **v** to (**v** & **~i**)

This is similar to what we did for '!@var' in commit:

  ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")

This patch follows the same pattern that commit used to permit a '!'
prefix on a param ref, allowing a '~' prefix on a param ref, causing
kernel-doc to generate the preferred form above.

Suggested-by: Akira Yokosawa <akiyks@gmail.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-25-mark.rutland@arm.com
2023-06-05 09:57:23 +02:00
Mark Rutland
1d78814d41 locking/atomic: scripts: simplify raw_atomic*() definitions
Currently each ordering variant has several potential definitions,
with a mixture of preprocessor and C definitions, including several
copies of its C prototype, e.g.

| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
|       int ret = arch_atomic_fetch_andnot_relaxed(i, v);
|       __atomic_acquire_fence();
|       return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
|       return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Make this a bit simpler by defining the C prototype once, and writing
the various potential definitions as plain C code guarded by ifdeffery.
For example, the above becomes:

| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| #if defined(arch_atomic_fetch_andnot_acquire)
|         return arch_atomic_fetch_andnot_acquire(i, v);
| #elif defined(arch_atomic_fetch_andnot_relaxed)
|         int ret = arch_atomic_fetch_andnot_relaxed(i, v);
|         __atomic_acquire_fence();
|         return ret;
| #elif defined(arch_atomic_fetch_andnot)
|         return arch_atomic_fetch_andnot(i, v);
| #else
|         return raw_atomic_fetch_and_acquire(~i, v);
| #endif
| }

Which is far easier to read. As we now always have a single copy of the
C prototype wrapping all the potential definitions, there is an obvious
single location for kerneldoc comments.

At the same time, the fallbacks for raw_atomic*_xchg() are made to use
'new' rather than 'i' as the name of the new value. This is what the
existing fallback template used, and is more consistent with the
raw_atomic{_try,}cmpxchg() fallbacks.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-24-mark.rutland@arm.com
2023-06-05 09:57:22 +02:00
Mark Rutland
630399469f locking/atomic: scripts: simplify raw_atomic_long*() definitions
Currently, atomic-long is split into two sections, one defining the
raw_atomic_long_*() ops for CONFIG_64BIT, and one defining the
raw_atomic_long_*() ops for !CONFIG_64BIT.

With many lines elided, this looks like:

| #ifdef CONFIG_64BIT
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
|         return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| }
| ...
| #else /* CONFIG_64BIT */
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
|         return raw_atomic_try_cmpxchg(v, (int *)old, new);
| }
| ...
| #endif

The two definitions are spread far apart in the file, and duplicate the
prototype, making it hard to have a legible set of kerneldoc comments.

Make this simpler by defining the C prototype once, and writing the two
definitions inline. For example, the above becomes:

| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| #ifdef CONFIG_64BIT
|         return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| #else
|         return raw_atomic_try_cmpxchg(v, (int *)old, new);
| #endif
| }

As we now always have a single copy of the C prototype wrapping all the
potential definitions, there is an obvious single location for kerneldoc
comments. As a bonus, both the script and the generated file are
somewhat shorter.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-23-mark.rutland@arm.com
2023-06-05 09:57:22 +02:00
Mark Rutland
b916a8c765 locking/atomic: scripts: split pfx/name/sfx/order
Currently gen-atomic-long.sh's gen_proto_order_variant() function
combines the pfx/name/sfx/order variables immediately, unlike other
functions in gen-atomic-*.sh.

This is fine today, but subsequent patches will require the individual
pfx/name/sfx/order variables within gen-atomic-long.sh's
gen_proto_order_variant() function. In preparation for this, split the
variables in the style of other gen-atomic-*.sh scripts.

This results in no change to the generated headers, so there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-22-mark.rutland@arm.com
2023-06-05 09:57:21 +02:00
Mark Rutland
9257959a6e locking/atomic: scripts: restructure fallback ifdeffery
Currently the various ordering variants of an atomic operation are
defined in groups of full/acquire/release/relaxed ordering variants with
some shared ifdeffery and several potential definitions of each ordering
variant in different branches of the shared ifdeffery.

As an ordering variant can have several potential definitions down
different branches of the shared ifdeffery, it can be painful for a
human to find a relevant definition, and we don't have a good location
to place anything common to all definitions of an ordering variant (e.g.
kerneldoc).

Historically the grouping of full/acquire/release/relaxed ordering
variants was necessary as we filled in the missing atomics in the same
namespace as the architecture used. It would be easy to accidentally
define one ordering fallback in terms of another ordering fallback with
redundant barriers, and avoiding that would otherwise require a lot of
baroque ifdeffery.

With recent changes we no longer need to fill in the missing atomics in
the arch_atomic*_<op>() namespace, and only need to fill in the
raw_atomic*_<op>() namespace. Due to this, there's no risk of a
namespace collision, and we can define each raw_atomic*_<op> ordering
variant with its own ifdeffery checking for the arch_atomic*_<op>
ordering variants.

Restructure the fallbacks in this way, with each ordering variant having
its own ifdeffery of the form:

| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| 	__atomic_acquire_fence();
| 	return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| 	return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Note that where there's no relevant arch_atomic*_<op>() ordering
variant, we'll define the operation in terms of a distinct
raw_atomic*_<otherop>(), as this itself might have been filled in with a
fallback.

As we now generate the raw_atomic*_<op>() implementations directly, we
no longer need the trivial wrappers, so they are removed.

This makes the ifdeffery easier to follow, and will allow for further
improvements in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-21-mark.rutland@arm.com
2023-06-05 09:57:21 +02:00
Mark Rutland
1815da1718 locking/atomic: scripts: build raw_atomic_long*() directly
Now that arch_atomic*() usage is limited to the atomic headers, we no
longer have any users of arch_atomic_long_*(), and can generate
raw_atomic_long_*() directly.

Generate the raw_atomic_long_*() ops directly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-20-mark.rutland@arm.com
2023-06-05 09:57:20 +02:00
Mark Rutland
c9268ac615 locking/atomic: scripts: add trivial raw_atomic*_<op>()
Currently a number of arch_atomic*_<op>() functions are optional, and
where an arch does not provide a given arch_atomic*_<op>() we will
define an implementation of arch_atomic*_<op>() in
atomic-arch-fallback.h.

Filling in the missing ops requires special care as we want to select
the optimal definition of each op (e.g. preferentially defining ops in
terms of their relaxed form rather than their fully-ordered form). The
ifdeffery necessary for this requires us to group ordering variants
together, which can be a bit painful to read, and is painful for
kerneldoc generation.

It would be easier to handle this if we generated ops into a separate
namespace, as this would remove the need to take special care with the
ifdeffery, and allow each ordering variant to be generated separately.

This patch adds a new set of raw_atomic_<op>() definitions, which are
currently trivial wrappers of their arch_atomic_<op>() equivalent. This
will allow us to move treewide users of arch_atomic_<op>() over to the
raw atomic ops before we rework the fallback generation to generate
raw_atomic_<op>() directly.
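
For example, a trivial wrapper is of roughly this shape (a sketch, not
the exact generated code):

| static __always_inline int
| raw_atomic_fetch_add(int i, atomic_t *v)
| {
| 	return arch_atomic_fetch_add(i, v);
| }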

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-18-mark.rutland@arm.com
2023-06-05 09:57:19 +02:00
Mark Rutland
7ed7a15640 locking/atomic: scripts: factor out order template generation
Currently gen_proto_order_variants() hard codes the path for the templates used
for order fallbacks. Factor this out into a helper so that it can be reused
elsewhere.

This results in no change to the generated headers, so there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-17-mark.rutland@arm.com
2023-06-05 09:57:19 +02:00
Mark Rutland
e40e5298e6 locking/atomic: scripts: remove leftover "${mult}"
We removed cmpxchg_double() and variants in commit:

  b4cf83b2d1da40b2 ("arch: Remove cmpxchg_double")

Which removed the need for "${mult}" in the instrumentation logic.
Unfortunately we missed an instance of "${mult}".

There is no change to the generated header.
There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-16-mark.rutland@arm.com
2023-06-05 09:57:18 +02:00
Mark Rutland
a083ecc933 locking/atomic: scripts: remove bogus order parameter
At the start of gen_proto_order_variants(), the ${order} variable is not
yet defined, and will be substituted with an empty string.

Replace the current bogus use of ${order} with an empty string instead.

This results in no change to the generated headers.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-15-mark.rutland@arm.com
2023-06-05 09:57:18 +02:00
Mark Rutland
d12157efc8 locking/atomic: make atomic*_{cmp,}xchg optional
Most architectures define the atomic/atomic64 xchg and cmpxchg
operations in terms of arch_xchg and arch_cmpxchg respectively.

Add fallbacks for these cases and remove the trivial cases from arch
code. On some architectures the existing definitions are kept as these
are used to build other arch_atomic*() operations.
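
For example, the atomic_t xchg fallback is roughly of this shape (a
sketch, not the exact generated code):

| static __always_inline int
| arch_atomic_xchg(atomic_t *v, int new)
| {
| 	return arch_xchg(&v->counter, new);
| }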

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-5-mark.rutland@arm.com
2023-06-05 09:57:14 +02:00
Mark Rutland
14d72d4b6f locking/atomic: remove fallback comments
Currently a subset of the fallback templates have kerneldoc comments,
resulting in a haphazard set of generated kerneldoc comments as only
some operations have fallback templates to begin with.

We'd like to generate more consistent kerneldoc comments, and to do so
we'll need to restructure the way the fallback code is generated.

To minimize churn and to make it easier to restructure the fallback
code, this patch removes the existing kerneldoc comments from the
fallback templates. We can add new kerneldoc comments in subsequent
patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-3-mark.rutland@arm.com
2023-06-05 09:57:13 +02:00
Peter Zijlstra
febe950dbf arch: Remove cmpxchg_double
No moar users, remove the monster.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230531132323.991907085@infradead.org
2023-06-05 09:36:39 +02:00
Peter Zijlstra
8664645ade parisc: Raise minimal GCC version
64-bit targets need the __int128 type, which for PA-RISC means raising
the minimum GCC version to 11.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Helge Deller <deller@gmx.de>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/20230602143912.GI620383%40hirez.programming.kicks-ass.net
2023-06-05 09:36:37 +02:00
Peter Zijlstra
8c8b096a23 instrumentation: Wire up cmpxchg128()
Wire up the cmpxchg128 family in the atomic wrapper scripts.

These provide the generic cmpxchg128 family of functions from the
arch_-prefixed versions, adding explicit instrumentation where needed.
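
The generated wrappers are roughly of this shape (a sketch; the exact
instrumentation calls may differ):

  #define cmpxchg128(ptr, ...) \
  ({ \
  	typeof(ptr) __ai_ptr = (ptr); \
  	kcsan_mb(); \
  	instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
  	arch_cmpxchg128(__ai_ptr, __VA_ARGS__); \
  })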

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230531132323.519237070@infradead.org
2023-06-05 09:36:36 +02:00
Masahiro Yamada
feb843a469 kbuild: add $(CLANG_FLAGS) to KBUILD_CPPFLAGS
When preprocessing arch/*/kernel/vmlinux.lds.S, the target triple is
not passed to $(CPP) because we add it only to KBUILD_{C,A}FLAGS.

As a result, the linker script is preprocessed with predefined macros
for the build host instead of the target.

Assuming you use an x86 build machine, compare the following:

 $ clang -dM -E -x c /dev/null
 $ clang -dM -E -x c /dev/null -target aarch64-linux-gnu

There is presumably no actual problem because our linker scripts do not
rely on such predefined macros, but it is better to define correct ones.

Move $(CLANG_FLAGS) to KBUILD_CPPFLAGS, so that all *.c, *.S, *.lds.S
will be processed with the proper target triple.

[Note]
After the patch submission, we got an actual problem that needs this
commit. (CBL issue 1859)

Link: https://github.com/ClangBuiltLinux/linux/issues/1859
Reported-by: Tom Rini <trini@konsulko.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
2023-06-05 09:50:44 +09:00
Nathan Chancellor
cff6e7f50b kbuild: Add CLANG_FLAGS to as-instr
A future change will move CLANG_FLAGS from KBUILD_{A,C}FLAGS to
KBUILD_CPPFLAGS so that '--target' is available while preprocessing.
When that occurs, the following errors appear multiple times when
building ARCH=powerpc powernv_defconfig:

  ld.lld: error: vmlinux.a(arch/powerpc/kernel/head_64.o):(.text+0x12d4): relocation R_PPC64_ADDR16_HI out of range: -4611686018409717520 is not in [-2147483648, 2147483647]; references '__start___soft_mask_table'
  ld.lld: error: vmlinux.a(arch/powerpc/kernel/head_64.o):(.text+0x12e8): relocation R_PPC64_ADDR16_HI out of range: -4611686018409717392 is not in [-2147483648, 2147483647]; references '__stop___soft_mask_table'

Diffing the .o.cmd files reveals that -DHAVE_AS_ATHIGH=1 is not present
anymore, because as-instr only uses KBUILD_AFLAGS, which will no longer
contain '--target'.

Mirror Kconfig's as-instr and add CLANG_FLAGS explicitly to the
invocation to ensure the target information is always present.

Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-06-05 09:49:58 +09:00
Masahiro Yamada
2cb749466d modpost: detect section mismatch for R_ARM_REL32
For ARM, modpost fails to detect some types of section mismatches.

  [test code]

    .section .init.data,"aw"
    bar:
            .long 0

    .section .data,"aw"
    .globl foo
    foo:
            .long bar - .

It is apparently a bad reference, but modpost does not report anything.

The test code above produces the following relocations.

  Relocation section '.rel.data' at offset 0xe8 contains 1 entry:
   Offset     Info    Type            Sym.Value  Sym. Name
  00000000  00000403 R_ARM_REL32       00000000   .init.data

Currently, R_ARM_REL32 is just skipped.

Handle it like R_ARM_ABS32.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-06-04 01:37:41 +09:00
Masahiro Yamada
3310bae805 modpost: fix section_mismatch message for R_ARM_THM_{CALL,JUMP24,JUMP19}
addend_arm_rel() handles R_ARM_THM_CALL, R_ARM_THM_JUMP24, and
R_ARM_THM_JUMP19 incorrectly.

Here is some test code.

[test code for R_ARM_THM_JUMP24]

  .section .init.text,"ax"
  bar:
          bx      lr

  .section .text,"ax"
  .globl foo
  foo:
          b       bar

[test code for R_ARM_THM_CALL]

  .section .init.text,"ax"
  bar:
          bx      lr

  .section .text,"ax"
  .globl foo
  foo:
          push    {lr}
          bl      bar
          pop     {pc}

If you compile it with CONFIG_THUMB2_KERNEL=y, modpost will show the
symbol name as (unknown).

  WARNING: modpost: vmlinux.o: section mismatch in reference: foo (section: .text) -> (unknown) (section: .init.text)

(You need to use GNU linker instead of LLD to reproduce it.)

Fix the code to make modpost show the correct symbol name. I checked
arch/arm/kernel/module.c to learn the encoding of R_ARM_THM_CALL and
R_ARM_THM_JUMP24. The module loader does not support R_ARM_THM_JUMP19,
but I checked its encoding in the ARM ARM.

The '+4' is the compensation for pc-relative instruction. It is
documented in "ELF for the Arm Architecture" [1].

  "If the relocation is pc-relative then compensation for the PC bias
  (the PC value is 8 bytes ahead of the executing instruction in Arm
  state and 4 bytes in Thumb state) must be encoded in the relocation
  by the object producer."

[1]: https://github.com/ARM-software/abi-aa/blob/main/aaelf32/aaelf32.rst

Fixes: c9698e5cd6ad ("ARM: 7964/1: Detect section mismatches in thumb relocations")
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-06-04 01:37:41 +09:00
Masahiro Yamada
cd1824fb7a modpost: detect section mismatch for R_ARM_THM_{MOVW_ABS_NC,MOVT_ABS}
When CONFIG_THUMB2_KERNEL is enabled, modpost fails to detect some
types of section mismatches.

  [test code]

    #include <linux/init.h>

    int __initdata foo;
    int get_foo(void) { return foo; }

It is apparently a bad reference, but modpost does not report anything.

The test code above produces the following relocations.

  Relocation section '.rel.text' at offset 0x1e8 contains 2 entries:
   Offset     Info    Type            Sym.Value  Sym. Name
  00000000  0000052f R_ARM_THM_MOVW_AB 00000000   .LANCHOR0
  00000004  00000530 R_ARM_THM_MOVT_AB 00000000   .LANCHOR0

Currently, R_ARM_THM_MOVW_ABS_NC and R_ARM_THM_MOVT_ABS are just skipped.

Add code to handle them. I checked arch/arm/kernel/module.c to learn
how the offset is encoded in the instruction.

One more thing to note for Thumb instructions - the st_value is an odd
value, so you need to mask out bit 0 to get the offset. Otherwise, you
will get an off-by-one error in the nearest symbol look-up.

It is documented in "ELF for the ARM Architecture" [1]:

  In addition to the normal rules for symbol values the following rules
  shall also apply to symbols of type STT_FUNC:

   * If the symbol addresses an Arm instruction, its value is the
     address of the instruction (in a relocatable object, the offset
     of the instruction from the start of the section containing it).

   * If the symbol addresses a Thumb instruction, its value is the
     address of the instruction with bit zero set (in a relocatable
     object, the section offset with bit zero set).

   * For the purposes of relocation the value used shall be the address
     of the instruction (st_value & ~1).

[1]: https://github.com/ARM-software/abi-aa/blob/main/aaelf32/aaelf32.rst
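
In code, the masking amounts to something like this before the nearest
symbol look-up (illustrative; the variable name is hypothetical):

  /* Thumb st_value has bit 0 set; mask it off to get the real offset. */
  Elf_Addr offset = sym->st_value & ~1;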

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-06-04 01:36:27 +09:00
Masahiro Yamada
b1a9651d48 modpost: refactor find_fromsym() and find_tosym()
find_fromsym() and find_tosym() are similar - both of them iterate over
the .symtab section and return the nearest symbol.

The difference between them is that find_tosym() allows a negative
distance, but the distance must be less than 20.

Factor out the common part into find_nearest_sym().

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-06-02 22:45:59 +09:00
Masahiro Yamada
12ca2c67d7 modpost: detect section mismatch for R_ARM_{MOVW_ABS_NC,MOVT_ABS}
For ARM defconfig (i.e. multi_v7_defconfig), modpost fails to detect
some types of section mismatches.

  [test code]

    #include <linux/init.h>

    int __initdata foo;
    int get_foo(void) { return foo; }

It is apparently a bad reference, but modpost does not report anything.

The test code above produces the following relocations.

  Relocation section '.rel.text' at offset 0x200 contains 2 entries:
   Offset     Info    Type            Sym.Value  Sym. Name
  00000000  0000062b R_ARM_MOVW_ABS_NC 00000000   .LANCHOR0
  00000004  0000062c R_ARM_MOVT_ABS    00000000   .LANCHOR0

Currently, R_ARM_MOVW_ABS_NC and R_ARM_MOVT_ABS are just skipped.

Add code to handle them. I checked arch/arm/kernel/module.c to learn
how the offset is encoded in the instruction.

The symbol referenced by a relocation might be a local anchor. If
is_valid_name() returns false, search for a better symbol name.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-06-02 17:59:52 +09:00
Masahiro Yamada
56a24b8ce6 modpost: fix section mismatch message for R_ARM_{PC24,CALL,JUMP24}
addend_arm_rel() handles R_ARM_PC24, R_ARM_CALL, and R_ARM_JUMP24
incorrectly.

Here is some test code.

[test code for R_ARM_JUMP24]

  .section .init.text,"ax"
  bar:
          bx      lr

  .section .text,"ax"
  .globl foo
  foo:
          b       bar

[test code for R_ARM_CALL]

  .section .init.text,"ax"
  bar:
          bx      lr

  .section .text,"ax"
  .globl foo
  foo:
          push    {lr}
          bl      bar
          pop     {pc}

If you compile it with ARM multi_v7_defconfig, modpost will show the
symbol name as (unknown).

  WARNING: modpost: vmlinux.o: section mismatch in reference: foo (section: .text) -> (unknown) (section: .init.text)

(You need to use GNU linker instead of LLD to reproduce it.)

Fix the code to make modpost show the correct symbol name.

I imported (with adjustment) sign_extend32() from include/linux/bitops.h.
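
For reference, the helper is along these lines:

  static int32_t sign_extend32(int32_t value, int index)
  {
          uint8_t shift = 31 - index;

          return (int32_t)(value << shift) >> shift;
  }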

The '+8' is the compensation for the PC bias of pc-relative instructions. It is
documented in "ELF for the Arm Architecture" [1].

  "If the relocation is pc-relative then compensation for the PC bias
  (the PC value is 8 bytes ahead of the executing instruction in Arm
  state and 4 bytes in Thumb state) must be encoded in the relocation
  by the object producer."

[1]: https://github.com/ARM-software/abi-aa/blob/main/aaelf32/aaelf32.rst

Fixes: 56a974fa2d59 ("kbuild: make better section mismatch reports on arm")
Fixes: 6e2e340b59d2 ("ARM: 7324/1: modpost: Fix section warnings for ARM for many compilers")
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-06-02 17:59:52 +09:00
Masahiro Yamada
b7c63520f6 modpost: fix section mismatch message for R_ARM_ABS32
addend_arm_rel() handles R_ARM_ABS32 incorrectly.

Here is some test code.

  [test code 1]

    #include <linux/init.h>

    int __initdata foo;
    int get_foo(void) { return foo; }

If you compile it with ARM versatile_defconfig, modpost will show the
symbol name as (unknown).

  WARNING: modpost: vmlinux.o: section mismatch in reference: get_foo (section: .text) -> (unknown) (section: .init.data)

(You need to use GNU linker instead of LLD to reproduce it.)

If you compile it for other architectures, modpost will show the correct
symbol name.

  WARNING: modpost: vmlinux.o: section mismatch in reference: get_foo (section: .text) -> foo (section: .init.data)

For R_ARM_ABS32, addend_arm_rel() sets r->r_addend to a wrong value.

I just mimicked the code in arch/arm/kernel/module.c.

However, ARM presents a further difficulty.

Here is some test code.

  [test code 2]

    #include <linux/init.h>

    int __initdata foo;
    int get_foo(void) { return foo; }

    int __initdata bar;
    int get_bar(void) { return bar; }

With this commit applied, modpost will show the following messages
for ARM versatile_defconfig:

  WARNING: modpost: vmlinux.o: section mismatch in reference: get_foo (section: .text) -> foo (section: .init.data)
  WARNING: modpost: vmlinux.o: section mismatch in reference: get_bar (section: .text) -> foo (section: .init.data)

The reference from 'get_bar' to 'foo' seems wrong.

I have no solution for this because it is true at the assembly level.

In the following output, the relocation at 0x1c is no longer associated
with 'bar'. The two relocation entries point to the same symbol, and the
offset to 'bar' is encoded in the instruction 'ldr r0, [r3, #4]'.

  Disassembly of section .text:

  00000000 <get_foo>:
     0: e59f3004          ldr     r3, [pc, #4]   @ c <get_foo+0xc>
     4: e5930000          ldr     r0, [r3]
     8: e12fff1e          bx      lr
     c: 00000000          .word   0x00000000

  00000010 <get_bar>:
    10: e59f3004          ldr     r3, [pc, #4]   @ 1c <get_bar+0xc>
    14: e5930004          ldr     r0, [r3, #4]
    18: e12fff1e          bx      lr
    1c: 00000000          .word   0x00000000

  Relocation section '.rel.text' at offset 0x244 contains 2 entries:
   Offset     Info    Type            Sym.Value  Sym. Name
  0000000c  00000c02 R_ARM_ABS32       00000000   .init.data
  0000001c  00000c02 R_ARM_ABS32       00000000   .init.data

When find_elf_symbol() gets into a situation where relsym->st_name is
zero, there is no guarantee of getting the symbol name as written in C.

I am keeping the current logic because it is useful in many architectures,
but the symbol name is not always correct depending on the optimization.
I left some comments in find_tosym().

Fixes: 56a974fa2d59 ("kbuild: make better section mismatch reports on arm")
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-06-02 17:59:52 +09:00
Jialu Xu
82089b00ae scripts/tags.sh: improve compiled sources generation
Use grep instead of sed to generate the list of all compiled sources;
it is three times more efficient.

Signed-off-by: Jialu Xu <xujialu@vimux.org>
Tested-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20230601010402.71040-1-xujialu@vimux.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-06-01 19:01:57 +01:00
Miguel Ojeda
3ed03f4da0 rust: upgrade to Rust 1.68.2
This is the first upgrade to the Rust toolchain since the initial Rust
merge, from 1.62.0 to 1.68.2 (i.e. the latest).

# Context

The kernel currently supports only a single Rust version [1] (rather
than a minimum) given our usage of some "unstable" Rust features [2]
which do not promise backwards compatibility.

The goal is to reach a point where we can declare a minimum version for
the toolchain. For instance, by waiting for some of the features to be
stabilized. Therefore, the first minimum Rust version that the kernel
will support is "in the future".

# Upgrade policy

Given we will eventually need to reach that minimum version, it would be
ideal to upgrade the compiler from time to time to be as close as
possible to that goal and find any issues sooner. In the extreme, we
could upgrade as soon as a new Rust release is out. Of course, upgrading
so often is in stark contrast to what one normally would need for GCC
and LLVM, especially given the release schedule: 6 weeks for Rust vs.
half a year for LLVM and a year for GCC.

Having said that, there is no particular advantage to updating slowly
either: kernel developers in "stable" distributions are unlikely to be
able to use their distribution-provided Rust toolchain for the kernel
anyway [3]. Instead, by routinely upgrading to the latest release,
kernel developers using Linux distributions that track the latest Rust
release may be able to use those rather than Rust-provided ones,
especially if their package manager allows pinning / holding back /
downgrading the version for some days during windows where the versions
do not match. For instance, Arch, Fedora, Gentoo and openSUSE all
provide and track the latest version of Rust as it is released every 6
weeks.

Then, when the minimum version is reached, we will stop upgrading and
decide how wide the window of support will be. For instance, a year of
Rust versions. We will probably want to start small, and then widen it
over time, just like the kernel did originally for LLVM, see commit
3519c4d6e08e ("Documentation: add minimum clang/llvm version").

# Unstable features stabilized

This upgrade allows us to remove the following unstable features since
they were stabilized:

  - `feature(explicit_generic_args_with_impl_trait)` (1.63).
  - `feature(core_ffi_c)` (1.64).
  - `feature(generic_associated_types)` (1.65).
  - `feature(const_ptr_offset_from)` (1.65, *).
  - `feature(bench_black_box)` (1.66, *).
  - `feature(pin_macro)` (1.68).

The ones marked with `*` apply only to our old `rust` branch, not
mainline yet, i.e. only for code that we may potentially upstream.

With this patch applied, the only unstable feature allowed to be used
outside the `kernel` crate is `new_uninit`, though other code to be
upstreamed may increase the list.

Please see [2] for details.

# Other required changes

Since 1.63, `rustdoc` triggers the `broken_intra_doc_links` lint for
links pointing to exported (`#[macro_export]`) `macro_rules`. An issue
was opened upstream [4], but it turns out it is intended behavior. For
the moment, just add an explicit reference for each link. Later we can
revisit this if `rustdoc` removes the compatibility measure.

Nevertheless, this was helpful to discover a link that was pointing to
the wrong place unintentionally. Since that one was actually wrong, it
is fixed in a previous commit independently.

Another change was the addition of `cfg(no_rc)` and `cfg(no_sync)` in
upstream [5], thus remove our original changes for that.

Similarly, upstream now tests that it compiles successfully with
`#[cfg(not(no_global_oom_handling))]` [6], which allows us to get rid
of some changes, such as an `#[allow(dead_code)]`.

In addition, remove another `#[allow(dead_code)]` due to new uses
within the standard library.

Finally, add `try_extend_trusted` and move the code in `spec_extend.rs`
since upstream moved it for the infallible version.

# `alloc` upgrade and reviewing

There is a large number of changes, but the vast majority of them are
due to our `alloc` fork being upgraded at once.

There are two kinds of changes to be aware of: the ones coming from
upstream, which we should follow as closely as possible, and the updates
needed in our added fallible APIs to keep them matching the newer
infallible APIs coming from upstream.

Instead of taking a look at the diff of this patch, an alternative
approach is reviewing a diff of the changes between upstream `alloc` and
the kernel's. This makes it easy to inspect only the kernel additions,
especially to check if the fallible methods we already have still match
the infallible ones in the new version coming from upstream.

Another approach is reviewing the changes introduced in the additions in
the kernel fork between the two versions. This is useful to spot
potentially unintended changes to our additions.

To apply these approaches, one may follow steps similar to the following
to generate a pair of patches that show the differences between upstream
Rust and the kernel (for the subset of `alloc` we use) before and after
applying this patch:

    # Get the difference with respect to the old version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > old.patch
    git -C linux restore rust/alloc

    # Apply this patch.
    git -C linux am rust-upgrade.patch

    # Get the difference with respect to the new version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > new.patch
    git -C linux restore rust/alloc

Now one may check the `new.patch` to take a look at the additions (first
approach) or at the difference between those two patches (second
approach). For the latter, a side-by-side tool is recommended.

Link: https://rust-for-linux.com/rust-version-policy [1]
Link: https://github.com/Rust-for-Linux/linux/issues/2 [2]
Link: https://lore.kernel.org/rust-for-linux/CANiq72mT3bVDKdHgaea-6WiZazd8Mvurqmqegbe5JZxVyLR8Yg@mail.gmail.com/ [3]
Link: https://github.com/rust-lang/rust/issues/106142 [4]
Link: https://github.com/rust-lang/rust/pull/89891 [5]
Link: https://github.com/rust-lang/rust/pull/98652 [6]
Reviewed-by: Björn Roy Baron <bjorn3_gh@protonmail.com>
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-By: Martin Rodriguez Reboredo <yakoyoku@gmail.com>
Tested-by: Ariel Miculas <amiculas@cisco.com>
Tested-by: David Gow <davidgow@google.com>
Tested-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/20230418214347.324156-4-ojeda@kernel.org
[ Removed `feature(core_ffi_c)` from `uapi` ]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2023-05-31 17:35:03 +02:00
Kees Cook
d0f90841cb checkpatch: Check for strcpy and strncpy too
Warn about strcpy(), strncpy(), and strlcpy(). Suggest strscpy() and
include pointers to the open KSPP issues for each, which have further
details and replacement procedures.
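
The suggested conversion follows a pattern like this (illustrative):

  char buf[16];

  /* deprecated: may leave buf without NUL-termination */
  strncpy(buf, src, sizeof(buf));

  /* preferred: always NUL-terminates, returns -E2BIG on truncation */
  if (strscpy(buf, src, sizeof(buf)) < 0)
          /* handle truncation */;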

Cc: Andy Whitcroft <apw@canonical.com>
Cc: Joe Perches <joe@perches.com>
Cc: Dwaipayan Ray <dwaipayanray1@gmail.com>
Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230517201349.never.582-kees@kernel.org
2023-05-30 16:42:01 -07:00
Masahiro Yamada
1df380ff30 modpost: remove *_sections[] arrays
Use the PATTERNS() macro to remove unneeded array definitions.
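
PATTERNS() builds the NULL-terminated array in place, so the named
*_sections[] arrays are no longer needed. The macro is defined along
these lines:

  #define PATTERNS(...) \
          ({ \
                  static const char *const patterns[] = {__VA_ARGS__, NULL}; \
                  patterns; \
          })

  /* usage, instead of referencing a separate *_sections[] array: */
  if (match(fromsec, PATTERNS(ALL_INIT_SECTIONS)))
          ...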

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-05-28 20:40:17 +09:00
Masahiro Yamada
abc23979ac modpost: merge bad_tosec=ALL_EXIT_SECTIONS entries in sectioncheck table
There is no distinction between TEXT_TO_ANY_EXIT and DATA_TO_ANY_EXIT.
Just merge them.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-28 20:39:52 +09:00
Masahiro Yamada
d4323e8350 modpost: merge fromsec=DATA_SECTIONS entries in sectioncheck table
You can merge these entries.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-28 20:36:52 +09:00
Masahiro Yamada
a9bb3e5d57 modpost: remove is_shndx_special() check from section_rel(a)
This check is unneeded. Without it, sec_name() will return the null
string "", and then section_mismatch() will return immediately.

Anyway, special section indices rarely appear in these loops.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-28 20:35:16 +09:00
Masahiro Yamada
04ed3b4763 modpost: replace r->r_offset, r->r_addend with faddr, taddr
r_offset/r_addend holds the offset address from/to which a symbol is
referenced. It is unclear unless you are familiar with ELF.

Rename them to faddr, taddr, respectively. The prefix 'f' means 'from',
't' means 'to'.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-28 20:34:40 +09:00
Masahiro Yamada
a23e7584ec modpost: unify 'sym' and 'to' in default_mismatch_handler()
find_tosym() takes 'sym' and stores the return value in another
variable 'to'. You can use the same variable because we want to
replace the original one when appropriate.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-28 20:34:05 +09:00
Masahiro Yamada
05bb070467 modpost: remove unused argument from secref_whitelist()
secref_whitelist() does not use the argument 'mismatch'.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-28 20:32:04 +09:00
Masahiro Yamada
17b53f10ab Revert "modpost: skip ELF local symbols during section mismatch check"
This reverts commit a4d26f1a0958bb1c2b60c6f1e67c6f5d43e2647b.

The variable 'fromsym' never starts with ".L" since commit 87e5b1e8f257
("module: Sync code of is_arm_mapping_symbol()").

In other words, Pattern 6 is now dead code.

Previously, the .LANCHOR1 hid the symbols listed in Pattern 2.

87e5b1e8f257 provided a better solution.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-28 16:24:07 +09:00
Joel Granados
b8cbc0855a sysctl: Remove register_sysctl_table
This is part of the general push to deprecate register_sysctl_paths and
register_sysctl_table. After removing all the calling functions, we
remove both the register_sysctl_table function and the documentation
check that appeared in the check-sysctl-docs awk script.
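
For remaining callers, the conversion away from register_sysctl_table()
follows a pattern like this (the table names are hypothetical):

  /* before (deprecated): register a hierarchy of ctl_table directories */
  register_sysctl_table(my_root_table);

  /* after: register the leaf table with an explicit path */
  register_sysctl("dev/mydev", my_dev_table);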

We save 595 bytes with this change:

./scripts/bloat-o-meter vmlinux.1.refactor-base-paths vmlinux.2.remove-sysctl-table
add/remove: 2/8 grow/shrink: 1/0 up/down: 1154/-1749 (-595)
Function                                     old     new   delta
count_subheaders                               -     983    +983
unregister_sysctl_table                       29     184    +155
__pfx_count_subheaders                         -      16     +16
__pfx_unregister_sysctl_table.part            16       -     -16
__pfx_register_leaf_sysctl_tables.constprop   16       -     -16
__pfx_count_subheaders.part                   16       -     -16
__pfx___register_sysctl_base                  16       -     -16
unregister_sysctl_table.part                 136       -    -136
__register_sysctl_base                       478       -    -478
register_leaf_sysctl_tables.constprop        524       -    -524
count_subheaders.part                        547       -    -547
Total: Before=21257652, After=21257057, chg -0.00%

[mcgrof: remove register_leaf_sysctl_tables and append_path too and
 add bloat-o-meter stats]

Signed-off-by: Joel Granados <j.granados@samsung.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Acked-by: Christian Brauner <brauner@kernel.org>
2023-05-23 21:43:26 -07:00
Ahmed S. Darwish
e1b37563ca scripts/tags.sh: Resolve gtags empty index generation
gtags considers any file outside of its current working directory
"outside the source tree" and refuses to index it. For O= kernel builds,
or when "make" is invoked from a directory other then the kernel source
tree, gtags ignores the entire kernel source and generates an empty
index.

Force-set gtags current working directory to the kernel source tree.

Due to commit 9da0763bdd82 ("kbuild: Use relative path when building in
a subdir of the source tree"), if the kernel build is done in a
sub-directory of the kernel source tree, the kernel Makefile will set
the kernel's $srctree to ".." for shorter compile-time and run-time
warnings. Consequently, the list of files to be indexed will be in the
"../*" form, rendering all such paths invalid once gtags switches to the
kernel source tree as its current working directory.

If gtags indexing is requested and the build directory is not the kernel
source tree, index all files in absolute-path form.

Note, indexing in absolute-path form will not affect the generated
index, as paths in gtags indices are always relative to the gtags "root
directory" anyway (as evidenced by "gtags --dump").

Signed-off-by: Ahmed S. Darwish <darwi@linutronix.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
ac263349b9 modpost: rename find_elf_symbol() and find_elf_symbol2()
find_elf_symbol() and find_elf_symbol2() are not good names.

Rename them to find_tosym(), find_fromsym(), respectively.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
9990ca3587 modpost: pass section index to find_elf_symbol2()
find_elf_symbol2() converts the section index to the section name,
then compares the two strings in each iteration. This is slow.

It is faster to compare the section indices (i.e. integers) directly.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
dbf7cc2e4e modpost: pass 'tosec' down to default_mismatch_handler()
default_mismatch_handler() does not need to compute 'tosec' because
it is calculated by the caller.

Pass it down to default_mismatch_handler() instead of calling
sec_name() twice.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
856567d559 modpost: squash extable_mismatch_handler() into default_mismatch_handler()
Merging these two saves several lines of code. The extable section
mismatch is already distinguished by EXTABLE_TO_NON_TEXT.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
f4c35484e7 modpost: clean up is_executable_section()
SHF_EXECINSTR is a bit flag (#define SHF_EXECINSTR 0x4), so test the
masked flags against zero ('!= 0').

There is no good reason to stop modpost immediately even if a special
section index is given. You will get a section mismatch error anyway.

Also, change the return type to bool.
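
After the cleanup, the function is roughly (sketch):

  static bool is_executable_section(struct elf_info *elf, unsigned int secndx)
  {
          if (secndx >= elf->num_sections)
                  return false;

          return (elf->sechdrs[secndx].sh_flags & SHF_EXECINSTR) != 0;
  }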

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
fc5fa862c4 modpost: squash report_sec_mismatch() into default_mismatch_handler()
report_sec_mismatch() and default_mismatch_handler() are small enough
to be merged together.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
faee9defd8 modpost: squash report_extable_warnings() into extable_mismatch_handler()
Collect relevant code into one place to clarify all the cases are
covered by 'if () ... else if ... else ...'.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
6691e6f5fc modpost: remove get_pretty_name()
This removes the last user of get_pretty_name() - it is just used to
distinguish whether the symbol is a function or not, which is not
valuable information.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
6c90d36be3 modpost: remove fromsym info in __ex_table section mismatch warning
report_extable_warnings() prints "from" in a pretty form, but we know
it is always located in the __ex_table section, i.e. a collection of
struct exception_table_entry.

It is very likely to fail to get the symbol name and end up with a
meaningless message:

  ... in reference from the (unknown reference) (unknown) to ...

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
d0acc76a49 modpost: remove broken calculation of exception_table_entry size
find_extable_entry_size() is completely broken. It has awesome comments
about how to calculate sizeof(struct exception_table_entry).

It was based on these assumptions:

  - struct exception_table_entry has two fields
  - both of the fields have the same size

Then, we came up with this equation:

  (offset of the second field) * 2 == (size of struct)

It was true for all architectures when commit 52dc0595d540 ("modpost:
handle relocations mismatch in __ex_table.") was applied.

Our mathematics broke when commit 548acf19234d ("x86/mm: Expand the
exception table logic to allow new handling options") introduced the
third field.

Now, the definition of exception_table_entry is highly arch-dependent.

For x86, sizeof(struct exception_table_entry) is apparently 12, but
find_extable_entry_size() sets extable_entry_size to 8.
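
For illustration, x86's definition now has three int fields, roughly:

  struct exception_table_entry {
          int insn, fixup, data;
  };

  /* offsetof(fixup) == 4, so the old formula yields 4 * 2 == 8,
   * while sizeof(struct exception_table_entry) == 12. */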

I could fix it, but I do not see much value in this code.

extable_entry_size is used just for selecting a slightly different
error message.

If the first field ("insn") references a non-executable section,

    The relocation at %s+0x%lx references
    section "%s" which is not executable, IOW
    it is not possible for the kernel to fault
    at that address.  Something is seriously wrong
    and should be fixed.

If the second field ("fixup") references a non-executable section,

    The relocation at %s+0x%lx references
    section "%s" which is not executable, IOW
    the kernel will fault if it ever tries to
    jump to it.  Something is seriously wrong
    and should be fixed.

Merge the two error messages rather than adding even more complexity.

Change fatal() to error() to make it continue running and catch more
possible errors.

Fixes: 548acf19234d ("x86/mm: Expand the exception table logic to allow new handling options")
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-05-22 10:34:38 +09:00
Masahiro Yamada
64f140417d modpost: error out if addend_*_rel() is not implemented for REL arch
The section mismatch check relies on the relocation entries.

For REL, the addend value is implicit, so we need some code to compute
it. Currently, EM_386, EM_ARM, and EM_MIPS are supported. This commit
makes sure we have covered all the cases.

I believe the other architectures use RELA, where the explicit r_addend
field exists.
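
For reference, the two ELF relocation record formats differ only in the
explicit addend field:

  typedef struct {
          Elf32_Addr r_offset;
          Elf32_Word r_info;
  } Elf32_Rel;                    /* addend is implicit, read from the location */

  typedef struct {
          Elf32_Addr  r_offset;
          Elf32_Word  r_info;
          Elf32_Sword r_addend;   /* addend is explicit */
  } Elf32_Rela;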

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-05-22 10:34:38 +09:00
Andrew Davis
81d362732b kbuild: Disallow DTB overlays to be built from .dts named source files
As a follow-up to the series allowing DTB overlays to be built from .dtso
files: now that all overlays have been renamed, remove the ability to
build overlays from .dts files to prevent any files with the old name
from accidentally being added.

Signed-off-by: Andrew Davis <afd@ti.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Andy Shevchenko <andriy.shevchenko@intel.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
2023-05-22 10:34:37 +09:00