Use __always_inline for atomic fallback wrappers

When building for size (CC_OPTIMIZE_FOR_SIZE), some compilers appear to
be less inclined to inline even relatively small static inline functions
that are assumed to be inlinable, such as atomic ops. This can cause
problems, for example in UACCESS regions.

While the fallback wrappers aren't pure wrappers, they are trivial
nonetheless, and the function they wrap should determine the final
inlining policy.

For x86 tinyconfig we observe:
 - vmlinux baseline:   1315988
 - vmlinux with patch: 1315928 (-60 bytes)

[ tglx: Cherry-picked from KCSAN ]

Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
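For reference, __always_inline maps to the compiler's always_inline
attribute, which overrides the -Os inlining heuristics that a plain
"static inline" is subject to. A simplified sketch of the definition
(the real macro in include/linux/compiler_types.h carries additional
annotations, e.g. notrace):

    /* Simplified; see include/linux/compiler_types.h for the real definition. */
    #define __always_inline inline __attribute__((__always_inline__))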
cat <<EOF
/**
 * ${atomic}_add_negative - add and test if negative
 * @i: integer value to add
 * @v: pointer of type ${atomic}_t
 *
 * Atomically adds @i to @v and returns true
 * if the result is negative, or false when
 * result is greater than or equal to zero.
 */
static __always_inline bool
${atomic}_add_negative(${int} i, ${atomic}_t *v)
{
	return ${atomic}_add_return(i, v) < 0;
}
EOF
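For illustration, substituting ${atomic} = atomic and ${int} = int into
the template above produces a wrapper like the following (a sketch of
the expanded output; the actual headers are emitted by the generator
scripts under scripts/atomic/):

    /* Hypothetical expansion of the template for plain atomic_t */
    static __always_inline bool
    atomic_add_negative(int i, atomic_t *v)
    {
            return atomic_add_return(i, v) < 0;
    }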