Random number generator updates for Linux 6.2-rc1.
-----BEGIN PGP SIGNATURE-----
[base64 signature data omitted]
-----END PGP SIGNATURE-----

Merge tag 'random-6.2-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random

Pull random number generator updates from Jason Donenfeld:

 - Replace prandom_u32_max() and various open-coded variants of it.
   There is now a new family of functions that uses fast rejection
   sampling to choose properly uniformly random numbers within an
   interval:

       get_random_u32_below(ceil) - [0, ceil)
       get_random_u32_above(floor) - (floor, U32_MAX]
       get_random_u32_inclusive(floor, ceil) - [floor, ceil]

   Coccinelle was used to convert all current users of
   prandom_u32_max(), as well as many open-coded patterns, resulting in
   improvements throughout the tree.

   I'll have a "late" 6.2-rc1 pull for you that removes the now-unused
   prandom_u32_max() function, just in case any other trees add a new
   use case of it that needs to be converted. According to linux-next,
   there may be two trivial cases of prandom_u32_max() reintroductions
   that are fixable with a 's/.../.../'. So I'll have for you a final
   conversion patch doing that alongside the removal patch during the
   second week.

   This is a treewide change that touches many files throughout.

 - More consistent use of get_random_canary().

 - Updates to comments, documentation, tests, headers, and
   simplification in configuration.

 - The arch_get_random*_early() abstraction was only used by arm64 and
   wasn't entirely useful, so this has been replaced by code that works
   in all relevant contexts.

 - The kernel will use and manage random seeds in non-volatile EFI
   variables, refreshing a variable with a fresh seed when the RNG is
   initialized. The RNG GUID namespace is then hidden from efivarfs to
   prevent accidental leakage.

   These changes are split into random.c infrastructure code used in
   the EFI subsystem, in this pull request, and related support inside
   of EFISTUB, in Ard's EFI tree. These are co-dependent for full
   functionality, but the order of merging doesn't matter.

 - Part of the infrastructure added for the EFI support is also used
   for an improvement to the way vsprintf initializes its siphash key,
   replacing a sleep loop wart.

 - The hardware RNG framework now always calls its correct random.c
   input function, add_hwgenerator_randomness(), rather than sometimes
   going through helpers better suited for other cases.

 - The add_latent_entropy() function has long been called from the fork
   handler, but is a no-op when the latent entropy gcc plugin isn't
   used, which is fine for the purposes of latent entropy. But it was
   missing out on the cycle counter that was also being mixed in beside
   the latent entropy variable.
   So now, if the latent entropy gcc plugin isn't enabled,
   add_latent_entropy() will expand to a call to
   add_device_randomness(NULL, 0), which adds a cycle counter, without
   the absent latent entropy variable.

 - The RNG is now reseeded from a delayed worker, rather than on demand
   when used. Always running from a worker allows it to make use of the
   CPU RNG on platforms like S390x, whose instructions are too slow to
   do so from interrupts. It also has the effect of adding in new
   inputs more frequently with more regularity, amounting to a long
   term transcript of random values. Plus, it helps a bit with the
   upcoming vDSO implementation (which isn't yet ready for 6.2).

 - The jitter entropy algorithm now tries to execute on many different
   CPUs, round-robining, in hopes of hitting even more memory latencies
   and other unpredictable effects. It also will mix in a cycle counter
   when the entropy timer fires, in addition to being mixed in from the
   main loop, to account more explicitly for fluctuations in that timer
   firing. And the state it touches is now kept within the same cache
   line, so that it's assured that the different execution contexts
   will cause latencies.

* tag 'random-6.2-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random: (23 commits)
  random: include <linux/once.h> in the right header
  random: align entropy_timer_state to cache line
  random: mix in cycle counter when jitter timer fires
  random: spread out jitter callback to different CPUs
  random: remove extraneous period and add a missing one in comments
  efi: random: refresh non-volatile random seed when RNG is initialized
  vsprintf: initialize siphash key using notifier
  random: add back async readiness notifier
  random: reseed in delayed work rather than on-demand
  random: always mix cycle counter in add_latent_entropy()
  hw_random: use add_hwgenerator_randomness() for early entropy
  random: modernize documentation comment on get_random_bytes()
  random: adjust comment to account for removed function
  random: remove early archrandom abstraction
  random: use random.trust_{bootloader,cpu} command line option only
  stackprotector: actually use get_random_canary()
  stackprotector: move get_random_canary() into stackprotector.h
  treewide: use get_random_u32_inclusive() when possible
  treewide: use get_random_u32_{above,below}() instead of manual loop
  treewide: use get_random_u32_below() instead of deprecated function
  ...
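As a minimal sketch of the conversion pattern described above (these call
sites are invented for illustration, not taken from the diff below):

	#include <linux/random.h>

	/* Hypothetical driver code showing the treewide conversion. */
	static void example_conversions(void)
	{
		u32 idx, delay, byte;

		/* Was: idx = prandom_u32_max(16);           uniform in [0, 16)   */
		idx = get_random_u32_below(16);

		/* Was: delay = 100 + prandom_u32_max(401);  uniform in [100, 500] */
		delay = get_random_u32_inclusive(100, 500);

		/* Was: byte = 1 + prandom_u32_max(255);     uniform in [1, 255]  */
		byte = get_random_u32_inclusive(1, 255);
	}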
commit 268325bda5
@@ -4574,17 +4574,15 @@
 	ramdisk_start=	[RAM] RAM disk image start address
 
-	random.trust_cpu={on,off}
-			[KNL] Enable or disable trusting the use of the
-			CPU's random number generator (if available) to
-			fully seed the kernel's CRNG. Default is controlled
-			by CONFIG_RANDOM_TRUST_CPU.
+	random.trust_cpu=off
+			[KNL] Disable trusting the use of the CPU's
+			random number generator (if available) to
+			initialize the kernel's RNG.
 
-	random.trust_bootloader={on,off}
-			[KNL] Enable or disable trusting the use of a
-			seed passed by the bootloader (if available) to
-			fully seed the kernel's CRNG. Default is controlled
-			by CONFIG_RANDOM_TRUST_BOOTLOADER.
+	random.trust_bootloader=off
+			[KNL] Disable trusting the use of a seed
+			passed by the bootloader (if available) to
+			initialize the kernel's RNG.
 
 	randomize_kstack_offset=
 			[KNL] Enable or disable kernel stack offset
@@ -15,9 +15,6 @@
 #ifndef _ASM_STACKPROTECTOR_H
 #define _ASM_STACKPROTECTOR_H 1
 
-#include <linux/random.h>
-#include <linux/version.h>
-
 #include <asm/thread_info.h>
 
 extern unsigned long __stack_chk_guard;
@@ -30,11 +27,7 @@ extern unsigned long __stack_chk_guard;
  */
 static __always_inline void boot_init_stack_canary(void)
 {
-	unsigned long canary;
-
-	/* Try to get a semi random initial value. */
-	get_random_bytes(&canary, sizeof(canary));
-	canary ^= LINUX_VERSION_CODE;
+	unsigned long canary = get_random_canary();
 
 	current->stack_canary = canary;
 #ifndef CONFIG_STACKPROTECTOR_PER_TASK
@@ -371,7 +371,7 @@ static unsigned long sigpage_addr(const struct mm_struct *mm,
 
 	slots = ((last - first) >> PAGE_SHIFT) + 1;
 
-	offset = prandom_u32_max(slots);
+	offset = get_random_u32_below(slots);
 
 	addr = first + (offset << PAGE_SHIFT);
 
@@ -5,6 +5,7 @@
 #include <linux/arm-smccc.h>
 #include <linux/bug.h>
 #include <linux/kernel.h>
+#include <linux/irqflags.h>
 #include <asm/cpufeature.h>
 
 #define ARM_SMCCC_TRNG_MIN_VERSION	0x10000UL
@@ -58,6 +59,13 @@ static inline bool __arm64_rndrrs(unsigned long *v)
 	return ok;
 }
 
+static __always_inline bool __cpu_has_rng(void)
+{
+	if (unlikely(!system_capabilities_finalized() && !preemptible()))
+		return this_cpu_has_cap(ARM64_HAS_RNG);
+	return cpus_have_const_cap(ARM64_HAS_RNG);
+}
+
 static inline size_t __must_check arch_get_random_longs(unsigned long *v, size_t max_longs)
 {
 	/*
@@ -66,7 +74,7 @@ static inline size_t __must_check arch_get_random_longs(unsigned long *v, size_t
 	 * cpufeature code and with potential scheduling between CPUs
 	 * with and without the feature.
 	 */
-	if (max_longs && cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndr(v))
+	if (max_longs && __cpu_has_rng() && __arm64_rndr(v))
 		return 1;
 	return 0;
 }
@@ -108,7 +116,7 @@ static inline size_t __must_check arch_get_random_seed_longs(unsigned long *v, s
 	 * reseeded after each invocation. This is not a 100% fit but good
	 * enough to implement this API if no other entropy source exists.
 	 */
-	if (cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndrrs(v))
+	if (__cpu_has_rng() && __arm64_rndrrs(v))
 		return 1;
 
 	return 0;
@@ -121,40 +129,4 @@ static inline bool __init __early_cpu_has_rndr(void)
 	return (ftr >> ID_AA64ISAR0_EL1_RNDR_SHIFT) & 0xf;
 }
 
-static inline size_t __init __must_check
-arch_get_random_seed_longs_early(unsigned long *v, size_t max_longs)
-{
-	WARN_ON(system_state != SYSTEM_BOOTING);
-
-	if (!max_longs)
-		return 0;
-
-	if (smccc_trng_available) {
-		struct arm_smccc_res res;
-
-		max_longs = min_t(size_t, 3, max_longs);
-		arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND64, max_longs * 64, &res);
-		if ((int)res.a0 >= 0) {
-			switch (max_longs) {
-			case 3:
-				*v++ = res.a1;
-				fallthrough;
-			case 2:
-				*v++ = res.a2;
-				fallthrough;
-			case 1:
-				*v++ = res.a3;
-				break;
-			}
-			return max_longs;
-		}
-	}
-
-	if (__early_cpu_has_rndr() && __arm64_rndr(v))
-		return 1;
-
-	return 0;
-}
-#define arch_get_random_seed_longs_early arch_get_random_seed_longs_early
-
 #endif /* _ASM_ARCHRANDOM_H */
@@ -13,8 +13,6 @@
 #ifndef __ASM_STACKPROTECTOR_H
 #define __ASM_STACKPROTECTOR_H
 
-#include <linux/random.h>
-#include <linux/version.h>
 #include <asm/pointer_auth.h>
 
 extern unsigned long __stack_chk_guard;
@@ -28,12 +26,7 @@ extern unsigned long __stack_chk_guard;
 static __always_inline void boot_init_stack_canary(void)
 {
 #if defined(CONFIG_STACKPROTECTOR)
-	unsigned long canary;
-
-	/* Try to get a semi random initial value. */
-	get_random_bytes(&canary, sizeof(canary));
-	canary ^= LINUX_VERSION_CODE;
-	canary &= CANARY_MASK;
+	unsigned long canary = get_random_canary();
 
 	current->stack_canary = canary;
 	if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK))
@@ -593,7 +593,7 @@ unsigned long __get_wchan(struct task_struct *p)
 unsigned long arch_align_stack(unsigned long sp)
 {
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-		sp -= prandom_u32_max(PAGE_SIZE);
+		sp -= get_random_u32_below(PAGE_SIZE);
 	return sp & ~0xf;
 }
 
@@ -2,9 +2,6 @@
 #ifndef _ASM_STACKPROTECTOR_H
 #define _ASM_STACKPROTECTOR_H 1
 
-#include <linux/random.h>
-#include <linux/version.h>
-
 extern unsigned long __stack_chk_guard;
 
 /*
@@ -15,12 +12,7 @@ extern unsigned long __stack_chk_guard;
  */
 static __always_inline void boot_init_stack_canary(void)
 {
-	unsigned long canary;
-
-	/* Try to get a semi random initial value. */
-	get_random_bytes(&canary, sizeof(canary));
-	canary ^= LINUX_VERSION_CODE;
-	canary &= CANARY_MASK;
+	unsigned long canary = get_random_canary();
 
 	current->stack_canary = canary;
 	__stack_chk_guard = current->stack_canary;
@@ -294,7 +294,7 @@ unsigned long stack_top(void)
 unsigned long arch_align_stack(unsigned long sp)
 {
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-		sp -= prandom_u32_max(PAGE_SIZE);
+		sp -= get_random_u32_below(PAGE_SIZE);
 
 	return sp & STACK_ALIGN;
 }
@@ -78,7 +78,7 @@ static unsigned long vdso_base(void)
 	unsigned long base = STACK_TOP;
 
 	if (current->flags & PF_RANDOMIZE) {
-		base += prandom_u32_max(VDSO_RANDOMIZE_SIZE);
+		base += get_random_u32_below(VDSO_RANDOMIZE_SIZE);
 		base = PAGE_ALIGN(base);
 	}
 
@@ -15,9 +15,6 @@
 #ifndef _ASM_STACKPROTECTOR_H
 #define _ASM_STACKPROTECTOR_H 1
 
-#include <linux/random.h>
-#include <linux/version.h>
-
 extern unsigned long __stack_chk_guard;
 
 /*
@@ -28,11 +25,7 @@ extern unsigned long __stack_chk_guard;
  */
 static __always_inline void boot_init_stack_canary(void)
 {
-	unsigned long canary;
-
-	/* Try to get a semi random initial value. */
-	get_random_bytes(&canary, sizeof(canary));
-	canary ^= LINUX_VERSION_CODE;
+	unsigned long canary = get_random_canary();
 
 	current->stack_canary = canary;
 	__stack_chk_guard = current->stack_canary;
@@ -711,7 +711,7 @@ unsigned long mips_stack_top(void)
 unsigned long arch_align_stack(unsigned long sp)
 {
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-		sp -= prandom_u32_max(PAGE_SIZE);
+		sp -= get_random_u32_below(PAGE_SIZE);
 
 	return sp & ALMASK;
 }
@@ -79,7 +79,7 @@ static unsigned long vdso_base(void)
 	}
 
 	if (current->flags & PF_RANDOMIZE) {
-		base += prandom_u32_max(VDSO_RANDOMIZE_SIZE);
+		base += get_random_u32_below(VDSO_RANDOMIZE_SIZE);
 		base = PAGE_ALIGN(base);
 	}
 
@@ -75,7 +75,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
 
 	map_base = mm->mmap_base;
 	if (current->flags & PF_RANDOMIZE)
-		map_base -= prandom_u32_max(0x20) * PAGE_SIZE;
+		map_base -= get_random_u32_below(0x20) * PAGE_SIZE;
 
 	vdso_text_start = get_unmapped_area(NULL, map_base, vdso_text_len, 0, 0);
 
@@ -68,7 +68,6 @@ CONFIG_SERIAL_8250_CONSOLE=y
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_NONSTANDARD=y
 # CONFIG_NVRAM is not set
-CONFIG_RANDOM_TRUST_CPU=y
 CONFIG_SPI=y
 CONFIG_SPI_DEBUG=y
 CONFIG_SPI_BITBANG=y
@@ -77,8 +77,8 @@ static int __init crc_test_init(void)
 
 	pr_info("crc-vpmsum_test begins, %lu iterations\n", iterations);
 	for (i=0; i<iterations; i++) {
-		size_t offset = prandom_u32_max(16);
-		size_t len = prandom_u32_max(MAX_CRC_LENGTH);
+		size_t offset = get_random_u32_below(16);
+		size_t len = get_random_u32_below(MAX_CRC_LENGTH);
 
 		if (len <= offset)
 			continue;
@@ -7,8 +7,6 @@
 #ifndef _ASM_STACKPROTECTOR_H
 #define _ASM_STACKPROTECTOR_H
 
-#include <linux/random.h>
-#include <linux/version.h>
 #include <asm/reg.h>
 #include <asm/current.h>
 #include <asm/paca.h>
@@ -21,13 +19,7 @@
  */
 static __always_inline void boot_init_stack_canary(void)
 {
-	unsigned long canary;
-
-	/* Try to get a semi random initial value. */
-	canary = get_random_canary();
-	canary ^= mftb();
-	canary ^= LINUX_VERSION_CODE;
-	canary &= CANARY_MASK;
+	unsigned long canary = get_random_canary();
 
 	current->stack_canary = canary;
 #ifdef CONFIG_PPC64
@@ -2303,6 +2303,6 @@ void notrace __ppc64_runlatch_off(void)
 unsigned long arch_align_stack(unsigned long sp)
 {
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-		sp -= prandom_u32_max(PAGE_SIZE);
+		sp -= get_random_u32_below(PAGE_SIZE);
 	return sp & ~0xf;
 }
@@ -3,9 +3,6 @@
 #ifndef _ASM_RISCV_STACKPROTECTOR_H
 #define _ASM_RISCV_STACKPROTECTOR_H
 
-#include <linux/random.h>
-#include <linux/version.h>
-
 extern unsigned long __stack_chk_guard;
 
 /*
@@ -16,12 +13,7 @@ extern unsigned long __stack_chk_guard;
  */
 static __always_inline void boot_init_stack_canary(void)
 {
-	unsigned long canary;
-
-	/* Try to get a semi random initial value. */
-	get_random_bytes(&canary, sizeof(canary));
-	canary ^= LINUX_VERSION_CODE;
-	canary &= CANARY_MASK;
+	unsigned long canary = get_random_canary();
 
 	current->stack_canary = canary;
 	if (!IS_ENABLED(CONFIG_STACKPROTECTOR_PER_TASK))
@@ -573,8 +573,6 @@ CONFIG_VIRTIO_CONSOLE=m
 CONFIG_HW_RANDOM_VIRTIO=m
 CONFIG_HANGCHECK_TIMER=m
 CONFIG_TN3270_FS=y
-# CONFIG_RANDOM_TRUST_CPU is not set
-# CONFIG_RANDOM_TRUST_BOOTLOADER is not set
 CONFIG_PPS=m
 # CONFIG_PTP_1588_CLOCK is not set
 # CONFIG_HWMON is not set
@@ -563,8 +563,6 @@ CONFIG_VIRTIO_CONSOLE=m
 CONFIG_HW_RANDOM_VIRTIO=m
 CONFIG_HANGCHECK_TIMER=m
 CONFIG_TN3270_FS=y
-# CONFIG_RANDOM_TRUST_CPU is not set
-# CONFIG_RANDOM_TRUST_BOOTLOADER is not set
 # CONFIG_PTP_1588_CLOCK is not set
 # CONFIG_HWMON is not set
 CONFIG_WATCHDOG=y
@@ -58,7 +58,6 @@ CONFIG_ZFCP=y
 # CONFIG_VMCP is not set
 # CONFIG_MONWRITER is not set
 # CONFIG_S390_VMUR is not set
-# CONFIG_RANDOM_TRUST_BOOTLOADER is not set
 # CONFIG_HID is not set
 # CONFIG_VIRTIO_MENU is not set
 # CONFIG_VHOST_MENU is not set
@@ -224,7 +224,7 @@ unsigned long __get_wchan(struct task_struct *p)
 unsigned long arch_align_stack(unsigned long sp)
 {
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-		sp -= prandom_u32_max(PAGE_SIZE);
+		sp -= get_random_u32_below(PAGE_SIZE);
 	return sp & ~0xf;
 }
 
@@ -207,7 +207,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned long len)
 	end -= len;
 
 	if (end > start) {
-		offset = prandom_u32_max(((end - start) >> PAGE_SHIFT) + 1);
+		offset = get_random_u32_below(((end - start) >> PAGE_SHIFT) + 1);
 		addr = start + (offset << PAGE_SHIFT);
 	} else {
 		addr = start;
@@ -2,9 +2,6 @@
 #ifndef __ASM_SH_STACKPROTECTOR_H
 #define __ASM_SH_STACKPROTECTOR_H
 
-#include <linux/random.h>
-#include <linux/version.h>
-
 extern unsigned long __stack_chk_guard;
 
 /*
@@ -15,12 +12,7 @@ extern unsigned long __stack_chk_guard;
  */
 static __always_inline void boot_init_stack_canary(void)
 {
-	unsigned long canary;
-
-	/* Try to get a semi random initial value. */
-	get_random_bytes(&canary, sizeof(canary));
-	canary ^= LINUX_VERSION_CODE;
-	canary &= CANARY_MASK;
+	unsigned long canary = get_random_canary();
 
 	current->stack_canary = canary;
 	__stack_chk_guard = current->stack_canary;
@@ -354,7 +354,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned int len)
 	unsigned int offset;
 
 	/* This loses some more bits than a modulo, but is cheaper */
-	offset = prandom_u32_max(PTRS_PER_PTE);
+	offset = get_random_u32_below(PTRS_PER_PTE);
 	return start + (offset << PAGE_SHIFT);
 }
 
@@ -356,7 +356,7 @@ int singlestepping(void * t)
 unsigned long arch_align_stack(unsigned long sp)
 {
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-		sp -= prandom_u32_max(8192);
+		sp -= get_random_u32_below(8192);
 	return sp & ~0xf;
 }
 #endif
@@ -303,7 +303,7 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
 	end -= len;
 
 	if (end > start) {
-		offset = prandom_u32_max(((end - start) >> PAGE_SHIFT) + 1);
+		offset = get_random_u32_below(((end - start) >> PAGE_SHIFT) + 1);
 		addr = start + (offset << PAGE_SHIFT);
 	} else {
 		addr = start;
@@ -34,7 +34,6 @@
 #include <asm/percpu.h>
 #include <asm/desc.h>
 
-#include <linux/random.h>
 #include <linux/sched.h>
 
 /*
@@ -50,22 +49,11 @@
  */
 static __always_inline void boot_init_stack_canary(void)
 {
-	u64 canary;
-	u64 tsc;
+	unsigned long canary = get_random_canary();
 
 #ifdef CONFIG_X86_64
 	BUILD_BUG_ON(offsetof(struct fixed_percpu_data, stack_canary) != 40);
 #endif
-	/*
-	 * We both use the random pool and the current TSC as a source
-	 * of randomness. The TSC only matters for very early init,
-	 * there it already has some randomness on most systems. Later
-	 * on during the bootup the random pool has true entropy too.
-	 */
-	get_random_bytes(&canary, sizeof(canary));
-	tsc = rdtsc();
-	canary += tsc + (tsc << 32UL);
-	canary &= CANARY_MASK;
 
 	current->stack_canary = canary;
 #ifdef CONFIG_X86_64
@@ -22,9 +22,9 @@
 #include <linux/io.h>
 #include <linux/syscore_ops.h>
 #include <linux/pgtable.h>
+#include <linux/stackprotector.h>
 
 #include <asm/cmdline.h>
-#include <asm/stackprotector.h>
 #include <asm/perf_event.h>
 #include <asm/mmu_context.h>
 #include <asm/doublefault.h>
@@ -53,7 +53,7 @@ static unsigned long int get_module_load_offset(void)
 		 */
 		if (module_load_offset == 0)
 			module_load_offset =
-				(prandom_u32_max(1024) + 1) * PAGE_SIZE;
+				get_random_u32_inclusive(1, 1024) * PAGE_SIZE;
 		mutex_unlock(&module_kaslr_mutex);
 	}
 	return module_load_offset;
@@ -965,7 +965,7 @@ early_param("idle", idle_setup);
 unsigned long arch_align_stack(unsigned long sp)
 {
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
-		sp -= prandom_u32_max(8192);
+		sp -= get_random_u32_below(8192);
 	return sp & ~0xf;
 }
 
@@ -11,6 +11,7 @@
 #include <linux/smp.h>
 #include <linux/topology.h>
 #include <linux/pfn.h>
+#include <linux/stackprotector.h>
 #include <asm/sections.h>
 #include <asm/processor.h>
 #include <asm/desc.h>
@@ -21,7 +22,6 @@
 #include <asm/proto.h>
 #include <asm/cpumask.h>
 #include <asm/cpu.h>
-#include <asm/stackprotector.h>
 
 DEFINE_PER_CPU_READ_MOSTLY(int, cpu_number);
 EXPORT_PER_CPU_SYMBOL(cpu_number);
@@ -56,6 +56,7 @@
 #include <linux/numa.h>
 #include <linux/pgtable.h>
 #include <linux/overflow.h>
+#include <linux/stackprotector.h>
 
 #include <asm/acpi.h>
 #include <asm/desc.h>
@@ -136,10 +136,10 @@ static int pageattr_test(void)
 	failed += print_split(&sa);
 
 	for (i = 0; i < NTEST; i++) {
-		unsigned long pfn = prandom_u32_max(max_pfn_mapped);
+		unsigned long pfn = get_random_u32_below(max_pfn_mapped);
 
 		addr[i] = (unsigned long)__va(pfn << PAGE_SHIFT);
-		len[i] = prandom_u32_max(NPAGES);
+		len[i] = get_random_u32_below(NPAGES);
 		len[i] = min_t(unsigned long, len[i], max_pfn_mapped - pfn - 1);
 
 		if (len[i] == 0)
@@ -33,6 +33,7 @@
 #include <linux/edd.h>
 #include <linux/reboot.h>
 #include <linux/virtio_anchor.h>
+#include <linux/stackprotector.h>
 
 #include <xen/xen.h>
 #include <xen/events.h>
@@ -65,7 +66,6 @@
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 #include <asm/reboot.h>
-#include <asm/stackprotector.h>
 #include <asm/hypervisor.h>
 #include <asm/mach_traps.h>
 #include <asm/mwait.h>
@@ -14,9 +14,6 @@
 #ifndef _ASM_STACKPROTECTOR_H
 #define _ASM_STACKPROTECTOR_H 1
 
-#include <linux/random.h>
-#include <linux/version.h>
-
 extern unsigned long __stack_chk_guard;
 
 /*
@@ -27,11 +24,7 @@ extern unsigned long __stack_chk_guard;
  */
 static __always_inline void boot_init_stack_canary(void)
 {
-	unsigned long canary;
-
-	/* Try to get a semi random initial value. */
-	get_random_bytes(&canary, sizeof(canary));
-	canary ^= LINUX_VERSION_CODE;
+	unsigned long canary = get_random_canary();
 
 	current->stack_canary = canary;
 	__stack_chk_guard = current->stack_canary;
@@ -253,7 +253,7 @@ static int pkcs1pad_encrypt(struct akcipher_request *req)
 	ps_end = ctx->key_size - req->src_len - 2;
 	req_ctx->in_buf[0] = 0x02;
 	for (i = 1; i < ps_end; i++)
-		req_ctx->in_buf[i] = 1 + prandom_u32_max(255);
+		req_ctx->in_buf[i] = get_random_u32_inclusive(1, 255);
 	req_ctx->in_buf[ps_end] = 0x00;
 
 	pkcs1pad_sg_set_buf(req_ctx->in_sg, req_ctx->in_buf,
@@ -855,9 +855,9 @@ static int prepare_keybuf(const u8 *key, unsigned int ksize,
 /* Generate a random length in range [0, max_len], but prefer smaller values */
 static unsigned int generate_random_length(unsigned int max_len)
 {
-	unsigned int len = prandom_u32_max(max_len + 1);
+	unsigned int len = get_random_u32_below(max_len + 1);
 
-	switch (prandom_u32_max(4)) {
+	switch (get_random_u32_below(4)) {
 	case 0:
 		return len % 64;
 	case 1:
@@ -874,14 +874,14 @@ static void flip_random_bit(u8 *buf, size_t size)
 {
 	size_t bitpos;
 
-	bitpos = prandom_u32_max(size * 8);
+	bitpos = get_random_u32_below(size * 8);
 	buf[bitpos / 8] ^= 1 << (bitpos % 8);
 }
 
 /* Flip a random byte in the given nonempty data buffer */
 static void flip_random_byte(u8 *buf, size_t size)
 {
-	buf[prandom_u32_max(size)] ^= 0xff;
+	buf[get_random_u32_below(size)] ^= 0xff;
 }
 
 /* Sometimes make some random changes to the given nonempty data buffer */
@@ -891,15 +891,15 @@ static void mutate_buffer(u8 *buf, size_t size)
 	size_t i;
 
 	/* Sometimes flip some bits */
-	if (prandom_u32_max(4) == 0) {
-		num_flips = min_t(size_t, 1 << prandom_u32_max(8), size * 8);
+	if (get_random_u32_below(4) == 0) {
+		num_flips = min_t(size_t, 1 << get_random_u32_below(8), size * 8);
 		for (i = 0; i < num_flips; i++)
 			flip_random_bit(buf, size);
 	}
 
 	/* Sometimes flip some bytes */
-	if (prandom_u32_max(4) == 0) {
-		num_flips = min_t(size_t, 1 << prandom_u32_max(8), size);
+	if (get_random_u32_below(4) == 0) {
+		num_flips = min_t(size_t, 1 << get_random_u32_below(8), size);
 		for (i = 0; i < num_flips; i++)
 			flip_random_byte(buf, size);
 	}
@@ -915,11 +915,11 @@ static void generate_random_bytes(u8 *buf, size_t count)
 	if (count == 0)
 		return;
 
-	switch (prandom_u32_max(8)) { /* Choose a generation strategy */
+	switch (get_random_u32_below(8)) { /* Choose a generation strategy */
 	case 0:
 	case 1:
 		/* All the same byte, plus optional mutations */
-		switch (prandom_u32_max(4)) {
+		switch (get_random_u32_below(4)) {
 		case 0:
 			b = 0x00;
 			break;
@@ -959,24 +959,24 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs,
 		unsigned int this_len;
 		const char *flushtype_str;
 
-		if (div == &divs[max_divs - 1] || prandom_u32_max(2) == 0)
+		if (div == &divs[max_divs - 1] || get_random_u32_below(2) == 0)
 			this_len = remaining;
 		else
-			this_len = 1 + prandom_u32_max(remaining);
+			this_len = get_random_u32_inclusive(1, remaining);
 		div->proportion_of_total = this_len;
 
-		if (prandom_u32_max(4) == 0)
-			div->offset = (PAGE_SIZE - 128) + prandom_u32_max(128);
-		else if (prandom_u32_max(2) == 0)
-			div->offset = prandom_u32_max(32);
+		if (get_random_u32_below(4) == 0)
+			div->offset = get_random_u32_inclusive(PAGE_SIZE - 128, PAGE_SIZE - 1);
+		else if (get_random_u32_below(2) == 0)
+			div->offset = get_random_u32_below(32);
 		else
-			div->offset = prandom_u32_max(PAGE_SIZE);
-		if (prandom_u32_max(8) == 0)
+			div->offset = get_random_u32_below(PAGE_SIZE);
+		if (get_random_u32_below(8) == 0)
 			div->offset_relative_to_alignmask = true;
 
 		div->flush_type = FLUSH_TYPE_NONE;
 		if (gen_flushes) {
-			switch (prandom_u32_max(4)) {
+			switch (get_random_u32_below(4)) {
 			case 0:
 				div->flush_type = FLUSH_TYPE_REIMPORT;
 				break;
@@ -988,7 +988,7 @@ static char *generate_random_sgl_divisions(struct test_sg_division *divs,
 
 		if (div->flush_type != FLUSH_TYPE_NONE &&
 		    !(req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) &&
-		    prandom_u32_max(2) == 0)
+		    get_random_u32_below(2) == 0)
 			div->nosimd = true;
 
 		switch (div->flush_type) {
@@ -1035,7 +1035,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
 
 	p += scnprintf(p, end - p, "random:");
 
-	switch (prandom_u32_max(4)) {
+	switch (get_random_u32_below(4)) {
 	case 0:
 	case 1:
 		cfg->inplace_mode = OUT_OF_PLACE;
@@ -1050,12 +1050,12 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
 		break;
 	}
 
-	if (prandom_u32_max(2) == 0) {
+	if (get_random_u32_below(2) == 0) {
 		cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
 		p += scnprintf(p, end - p, " may_sleep");
 	}
 
-	switch (prandom_u32_max(4)) {
+	switch (get_random_u32_below(4)) {
 	case 0:
 		cfg->finalization_type = FINALIZATION_TYPE_FINAL;
 		p += scnprintf(p, end - p, " use_final");
@@ -1071,7 +1071,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
 	}
 
 	if (!(cfg->req_flags & CRYPTO_TFM_REQ_MAY_SLEEP) &&
-	    prandom_u32_max(2) == 0) {
+	    get_random_u32_below(2) == 0) {
 		cfg->nosimd = true;
 		p += scnprintf(p, end - p, " nosimd");
 	}
@@ -1084,7 +1084,7 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
 			  cfg->req_flags);
 	p += scnprintf(p, end - p, "]");
 
-	if (cfg->inplace_mode == OUT_OF_PLACE && prandom_u32_max(2) == 0) {
+	if (cfg->inplace_mode == OUT_OF_PLACE && get_random_u32_below(2) == 0) {
 		p += scnprintf(p, end - p, " dst_divs=[");
 		p = generate_random_sgl_divisions(cfg->dst_divs,
 						  ARRAY_SIZE(cfg->dst_divs),
@@ -1093,13 +1093,13 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
 		p += scnprintf(p, end - p, "]");
 	}
 
-	if (prandom_u32_max(2) == 0) {
-		cfg->iv_offset = 1 + prandom_u32_max(MAX_ALGAPI_ALIGNMASK);
+	if (get_random_u32_below(2) == 0) {
+		cfg->iv_offset = get_random_u32_inclusive(1, MAX_ALGAPI_ALIGNMASK);
 		p += scnprintf(p, end - p, " iv_offset=%u", cfg->iv_offset);
 	}
 
-	if (prandom_u32_max(2) == 0) {
-		cfg->key_offset = 1 + prandom_u32_max(MAX_ALGAPI_ALIGNMASK);
+	if (get_random_u32_below(2) == 0) {
+		cfg->key_offset = get_random_u32_inclusive(1, MAX_ALGAPI_ALIGNMASK);
 		p += scnprintf(p, end - p, " key_offset=%u", cfg->key_offset);
 	}
 
@@ -1652,8 +1652,8 @@ static void generate_random_hash_testvec(struct shash_desc *desc,
 	vec->ksize = 0;
 	if (maxkeysize) {
 		vec->ksize = maxkeysize;
-		if (prandom_u32_max(4) == 0)
-			vec->ksize = 1 + prandom_u32_max(maxkeysize);
+		if (get_random_u32_below(4) == 0)
+			vec->ksize = get_random_u32_inclusive(1, maxkeysize);
 		generate_random_bytes((u8 *)vec->key, vec->ksize);
 
 		vec->setkey_error = crypto_shash_setkey(desc->tfm, vec->key,
@@ -2218,13 +2218,13 @@ static void mutate_aead_message(struct aead_testvec *vec, bool aad_iv,
 	const unsigned int aad_tail_size = aad_iv ? ivsize : 0;
 	const unsigned int authsize = vec->clen - vec->plen;
 
-	if (prandom_u32_max(2) == 0 && vec->alen > aad_tail_size) {
+	if (get_random_u32_below(2) == 0 && vec->alen > aad_tail_size) {
 		/* Mutate the AAD */
 		flip_random_bit((u8 *)vec->assoc, vec->alen - aad_tail_size);
-		if (prandom_u32_max(2) == 0)
+		if (get_random_u32_below(2) == 0)
 			return;
 	}
-	if (prandom_u32_max(2) == 0) {
+	if (get_random_u32_below(2) == 0) {
 		/* Mutate auth tag (assuming it's at the end of ciphertext) */
 		flip_random_bit((u8 *)vec->ctext + vec->plen, authsize);
 	} else {
@@ -2249,7 +2249,7 @@ static void generate_aead_message(struct aead_request *req,
 	const unsigned int ivsize = crypto_aead_ivsize(tfm);
 	const unsigned int authsize = vec->clen - vec->plen;
 	const bool inauthentic = (authsize >= MIN_COLLISION_FREE_AUTHSIZE) &&
-				 (prefer_inauthentic || prandom_u32_max(4) == 0);
+				 (prefer_inauthentic || get_random_u32_below(4) == 0);
 
 	/* Generate the AAD. */
 	generate_random_bytes((u8 *)vec->assoc, vec->alen);
@@ -2257,7 +2257,7 @@ static void generate_aead_message(struct aead_request *req,
 		/* Avoid implementation-defined behavior. */
 		memcpy((u8 *)vec->assoc + vec->alen - ivsize, vec->iv, ivsize);
 
-	if (inauthentic && prandom_u32_max(2) == 0) {
+	if (inauthentic && get_random_u32_below(2) == 0) {
 		/* Generate a random ciphertext. */
 		generate_random_bytes((u8 *)vec->ctext, vec->clen);
 	} else {
@@ -2321,8 +2321,8 @@ static void generate_random_aead_testvec(struct aead_request *req,
 
 	/* Key: length in [0, maxkeysize], but usually choose maxkeysize */
 	vec->klen = maxkeysize;
-	if (prandom_u32_max(4) == 0)
-		vec->klen = prandom_u32_max(maxkeysize + 1);
+	if (get_random_u32_below(4) == 0)
+		vec->klen = get_random_u32_below(maxkeysize + 1);
 	generate_random_bytes((u8 *)vec->key, vec->klen);
 	vec->setkey_error = crypto_aead_setkey(tfm, vec->key, vec->klen);
 
@@ -2331,8 +2331,8 @@ static void generate_random_aead_testvec(struct aead_request *req,
 
 	/* Tag length: in [0, maxauthsize], but usually choose maxauthsize */
 	authsize = maxauthsize;
-	if (prandom_u32_max(4) == 0)
-		authsize = prandom_u32_max(maxauthsize + 1);
+	if (get_random_u32_below(4) == 0)
+		authsize = get_random_u32_below(maxauthsize + 1);
 	if (prefer_inauthentic && authsize < MIN_COLLISION_FREE_AUTHSIZE)
 		authsize = MIN_COLLISION_FREE_AUTHSIZE;
 	if (WARN_ON(authsize > maxdatasize))
@@ -2342,7 +2342,7 @@ static void generate_random_aead_testvec(struct aead_request *req,
 
 	/* AAD, plaintext, and ciphertext lengths */
 	total_len = generate_random_length(maxdatasize);
-	if (prandom_u32_max(4) == 0)
+	if (get_random_u32_below(4) == 0)
 		vec->alen = 0;
 	else
 		vec->alen = generate_random_length(total_len);
@@ -2958,8 +2958,8 @@ static void generate_random_cipher_testvec(struct skcipher_request *req,
 
 	/* Key: length in [0, maxkeysize], but usually choose maxkeysize */
 	vec->klen = maxkeysize;
-	if (prandom_u32_max(4) == 0)
-		vec->klen = prandom_u32_max(maxkeysize + 1);
+	if (get_random_u32_below(4) == 0)
+		vec->klen = get_random_u32_below(maxkeysize + 1);
 	generate_random_bytes((u8 *)vec->key, vec->klen);
 	vec->setkey_error = crypto_skcipher_setkey(tfm, vec->key, vec->klen);
 
@@ -781,7 +781,7 @@ static struct socket *drbd_wait_for_connect(struct drbd_connection *connection,
 
 	timeo = connect_int * HZ;
 	/* 28.5% random jitter */
-	timeo += prandom_u32_max(2) ? timeo / 7 : -timeo / 7;
+	timeo += get_random_u32_below(2) ? timeo / 7 : -timeo / 7;
 
 	err = wait_for_completion_interruptible_timeout(&ad->door_bell, timeo);
 	if (err <= 0)
@@ -1004,7 +1004,7 @@ retry:
 			drbd_warn(connection, "Error receiving initial packet\n");
 			sock_release(s);
 randomize:
-			if (prandom_u32_max(2))
+			if (get_random_u32_below(2))
 				goto retry;
 		}
 	}
@@ -129,7 +129,7 @@ enum mhi_pm_state {
 #define PRIMARY_CMD_RING		0
 #define MHI_DEV_WAKE_DB			127
 #define MHI_MAX_MTU			0xffff
-#define MHI_RANDOM_U32_NONZERO(bmsk)	(prandom_u32_max(bmsk) + 1)
+#define MHI_RANDOM_U32_NONZERO(bmsk)	(get_random_u32_inclusive(1, bmsk))
 
 enum mhi_er_type {
 	MHI_ER_TYPE_INVALID = 0x0,
@@ -423,40 +423,4 @@ config ADI
 	  and SSM (Silicon Secured Memory). Intended consumers of this
 	  driver include crash and makedumpfile.
 
-config RANDOM_TRUST_CPU
-	bool "Initialize RNG using CPU RNG instructions"
-	default y
-	help
-	  Initialize the RNG using random numbers supplied by the CPU's
-	  RNG instructions (e.g. RDRAND), if supported and available. These
-	  random numbers are never used directly, but are rather hashed into
-	  the main input pool, and this happens regardless of whether or not
-	  this option is enabled. Instead, this option controls whether the
-	  they are credited and hence can initialize the RNG. Additionally,
-	  other sources of randomness are always used, regardless of this
-	  setting. Enabling this implies trusting that the CPU can supply high
-	  quality and non-backdoored random numbers.
-
-	  Say Y here unless you have reason to mistrust your CPU or believe
-	  its RNG facilities may be faulty. This may also be configured at
-	  boot time with "random.trust_cpu=on/off".
-
-config RANDOM_TRUST_BOOTLOADER
-	bool "Initialize RNG using bootloader-supplied seed"
-	default y
-	help
-	  Initialize the RNG using a seed supplied by the bootloader or boot
-	  environment (e.g. EFI or a bootloader-generated device tree). This
-	  seed is not used directly, but is rather hashed into the main input
-	  pool, and this happens regardless of whether or not this option is
-	  enabled. Instead, this option controls whether the seed is credited
-	  and hence can initialize the RNG. Additionally, other sources of
-	  randomness are always used, regardless of this setting. Enabling
-	  this implies trusting that the bootloader can supply high quality and
-	  non-backdoored seeds.
-
-	  Say Y here unless you have reason to mistrust your bootloader or
-	  believe its RNG facilities may be faulty. This may also be configured
-	  at boot time with "random.trust_bootloader=on/off".
-
 endmenu
@@ -69,8 +69,10 @@ static void add_early_randomness(struct hwrng *rng)
 	mutex_lock(&reading_mutex);
 	bytes_read = rng_get_data(rng, rng_fillbuf, 32, 0);
 	mutex_unlock(&reading_mutex);
-	if (bytes_read > 0)
-		add_device_randomness(rng_fillbuf, bytes_read);
+	if (bytes_read > 0) {
+		size_t entropy = bytes_read * 8 * rng->quality / 1024;
+		add_hwgenerator_randomness(rng_fillbuf, bytes_read, entropy, false);
+	}
 }
 
 static inline void cleanup_rng(struct kref *kref)
@@ -528,7 +530,7 @@ static int hwrng_fillfn(void *unused)
 
 		/* Outside lock, sure, but y'know: randomness. */
 		add_hwgenerator_randomness((void *)rng_fillbuf, rc,
-					   entropy >> 10);
+					   entropy >> 10, true);
 	}
 	hwrng_fill = NULL;
 	return 0;
@ -53,6 +53,7 @@
|
||||
#include <linux/uaccess.h>
|
||||
#include <linux/suspend.h>
|
||||
#include <linux/siphash.h>
|
||||
#include <linux/sched/isolation.h>
|
||||
#include <crypto/chacha.h>
|
||||
#include <crypto/blake2s.h>
|
||||
#include <asm/processor.h>
|
||||
@ -84,6 +85,7 @@ static DEFINE_STATIC_KEY_FALSE(crng_is_ready);
|
||||
/* Various types of waiters for crng_init->CRNG_READY transition. */
|
||||
static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
|
||||
static struct fasync_struct *fasync;
|
||||
static ATOMIC_NOTIFIER_HEAD(random_ready_notifier);
|
||||
|
||||
/* Control how we warn userspace. */
|
||||
static struct ratelimit_state urandom_warning =
|
||||
@ -120,7 +122,7 @@ static void try_to_generate_entropy(void);
|
||||
* Wait for the input pool to be seeded and thus guaranteed to supply
|
||||
* cryptographically secure random numbers. This applies to: the /dev/urandom
|
||||
* device, the get_random_bytes function, and the get_random_{u8,u16,u32,u64,
|
||||
* int,long} family of functions. Using any of these functions without first
|
||||
* long} family of functions. Using any of these functions without first
|
||||
* calling this function forfeits the guarantee of security.
|
||||
*
|
||||
* Returns: 0 if the input pool has been seeded.
|
||||
@ -140,6 +142,26 @@ int wait_for_random_bytes(void)
|
||||
}
|
||||
EXPORT_SYMBOL(wait_for_random_bytes);
|
||||
|
||||
/*
|
||||
* Add a callback function that will be invoked when the crng is initialised,
|
||||
* or immediately if it already has been. Only use this is you are absolutely
|
||||
* sure it is required. Most users should instead be able to test
|
||||
* `rng_is_initialized()` on demand, or make use of `get_random_bytes_wait()`.
|
||||
*/
|
||||
int __cold execute_with_initialized_rng(struct notifier_block *nb)
|
||||
{
|
||||
unsigned long flags;
|
||||
int ret = 0;
|
||||
|
||||
spin_lock_irqsave(&random_ready_notifier.lock, flags);
|
||||
if (crng_ready())
|
||||
nb->notifier_call(nb, 0, NULL);
|
||||
else
|
||||
ret = raw_notifier_chain_register((struct raw_notifier_head *)&random_ready_notifier.head, nb);
|
||||
spin_unlock_irqrestore(&random_ready_notifier.lock, flags);
|
||||
return ret;
|
||||
}
|
||||
|
||||
#define warn_unseeded_randomness() \
|
||||
if (IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) && !crng_ready()) \
|
||||
printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", \
|
||||
@ -160,6 +182,9 @@ EXPORT_SYMBOL(wait_for_random_bytes);
|
||||
* u8 get_random_u8()
|
||||
* u16 get_random_u16()
|
||||
* u32 get_random_u32()
|
||||
* u32 get_random_u32_below(u32 ceil)
|
||||
* u32 get_random_u32_above(u32 floor)
|
||||
* u32 get_random_u32_inclusive(u32 floor, u32 ceil)
|
||||
* u64 get_random_u64()
|
||||
* unsigned long get_random_long()
|
||||
*
|
||||
@ -179,7 +204,6 @@ enum {
|
||||
|
||||
static struct {
|
||||
u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long));
|
||||
unsigned long birth;
|
||||
unsigned long generation;
|
||||
spinlock_t lock;
|
||||
} base_crng = {
|
||||
@ -197,16 +221,41 @@ static DEFINE_PER_CPU(struct crng, crngs) = {
|
||||
.lock = INIT_LOCAL_LOCK(crngs.lock),
|
||||
};
|
||||
|
||||
/*
|
||||
* Return the interval until the next reseeding, which is normally
|
||||
* CRNG_RESEED_INTERVAL, but during early boot, it is at an interval
|
||||
* proportional to the uptime.
|
||||
*/
|
||||
static unsigned int crng_reseed_interval(void)
|
||||
{
|
||||
static bool early_boot = true;
|
||||
|
||||
if (unlikely(READ_ONCE(early_boot))) {
|
||||
time64_t uptime = ktime_get_seconds();
|
||||
if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2)
|
||||
WRITE_ONCE(early_boot, false);
|
||||
else
|
||||
return max_t(unsigned int, CRNG_RESEED_START_INTERVAL,
|
||||
(unsigned int)uptime / 2 * HZ);
|
||||
}
|
||||
return CRNG_RESEED_INTERVAL;
|
||||
}
|
||||
|
||||
/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
|
||||
static void extract_entropy(void *buf, size_t len);
|
||||
|
||||
/* This extracts a new crng key from the input pool. */
|
||||
static void crng_reseed(void)
|
||||
static void crng_reseed(struct work_struct *work)
|
||||
{
|
||||
static DECLARE_DELAYED_WORK(next_reseed, crng_reseed);
|
||||
unsigned long flags;
|
||||
unsigned long next_gen;
|
||||
u8 key[CHACHA_KEY_SIZE];
|
||||
|
||||
/* Immediately schedule the next reseeding, so that it fires sooner rather than later. */
|
||||
if (likely(system_unbound_wq))
|
||||
queue_delayed_work(system_unbound_wq, &next_reseed, crng_reseed_interval());
|
||||
|
||||
extract_entropy(key, sizeof(key));
|
||||
|
||||
/*
|
||||
@ -221,7 +270,6 @@ static void crng_reseed(void)
|
||||
if (next_gen == ULONG_MAX)
|
||||
++next_gen;
|
||||
WRITE_ONCE(base_crng.generation, next_gen);
|
||||
WRITE_ONCE(base_crng.birth, jiffies);
|
||||
if (!static_branch_likely(&crng_is_ready))
|
||||
crng_init = CRNG_READY;
|
||||
spin_unlock_irqrestore(&base_crng.lock, flags);
|
||||
@ -260,26 +308,6 @@ static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE],
|
||||
memzero_explicit(first_block, sizeof(first_block));
|
||||
}
|
||||
|
||||
/*
|
||||
* Return the interval until the next reseeding, which is normally
|
||||
* CRNG_RESEED_INTERVAL, but during early boot, it is at an interval
|
||||
* proportional to the uptime.
|
||||
*/
|
||||
static unsigned int crng_reseed_interval(void)
|
||||
{
|
||||
static bool early_boot = true;
|
||||
|
||||
if (unlikely(READ_ONCE(early_boot))) {
|
||||
time64_t uptime = ktime_get_seconds();
|
||||
if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2)
|
||||
WRITE_ONCE(early_boot, false);
|
||||
else
|
||||
return max_t(unsigned int, CRNG_RESEED_START_INTERVAL,
|
||||
(unsigned int)uptime / 2 * HZ);
|
||||
}
|
||||
return CRNG_RESEED_INTERVAL;
|
||||
}
|
||||
|
||||
/*
|
||||
* This function returns a ChaCha state that you may use for generating
|
||||
* random data. It also returns up to 32 bytes on its own of random data
|
||||
@ -315,13 +343,6 @@ static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS],
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* If the base_crng is old enough, we reseed, which in turn bumps the
|
||||
* generation counter that we check below.
|
||||
*/
|
||||
if (unlikely(time_is_before_jiffies(READ_ONCE(base_crng.birth) + crng_reseed_interval())))
|
||||
crng_reseed();
|
||||
|
||||
local_lock_irqsave(&crngs.lock, flags);
|
||||
crng = raw_cpu_ptr(&crngs);
|
||||
|
||||
@ -383,11 +404,11 @@ static void _get_random_bytes(void *buf, size_t len)
|
||||
}
|
||||
|
||||
/*
|
||||
* This function is the exported kernel interface. It returns some number of
|
||||
* good random numbers, suitable for key generation, seeding TCP sequence
|
||||
* numbers, etc. In order to ensure that the randomness returned by this
|
||||
* function is okay, the function wait_for_random_bytes() should be called and
|
||||
* return 0 at least once at any point prior.
|
||||
* This returns random bytes in arbitrary quantities. The quality of the
|
||||
* random bytes is good as /dev/urandom. In order to ensure that the
|
||||
* randomness provided by this function is okay, the function
|
||||
* wait_for_random_bytes() should be called and return 0 at least once
|
||||
* at any point prior.
|
||||
*/
|
||||
void get_random_bytes(void *buf, size_t len)
|
||||
{
|
||||
@ -510,6 +531,41 @@ DEFINE_BATCHED_ENTROPY(u16)
|
||||
DEFINE_BATCHED_ENTROPY(u32)
|
||||
DEFINE_BATCHED_ENTROPY(u64)
|
||||
|
||||
u32 __get_random_u32_below(u32 ceil)
|
||||
{
|
||||
/*
|
||||
* This is the slow path for variable ceil. It is still fast, most of
|
||||
* the time, by doing traditional reciprocal multiplication and
|
||||
* opportunistically comparing the lower half to ceil itself, before
|
||||
* falling back to computing a larger bound, and then rejecting samples
|
||||
* whose lower half would indicate a range indivisible by ceil. The use
|
||||
* of `-ceil % ceil` is analogous to `2^32 % ceil`, but is computable
|
||||
* in 32-bits.
|
||||
*/
|
||||
u32 rand = get_random_u32();
|
||||
u64 mult;
|
||||
|
||||
/*
|
||||
* This function is technically undefined for ceil == 0, and in fact
|
||||
* for the non-underscored constant version in the header, we build bug
|
||||
* on that. But for the non-constant case, it's convenient to have that
|
||||
* evaluate to being a straight call to get_random_u32(), so that
|
||||
* get_random_u32_inclusive() can work over its whole range without
|
||||
* undefined behavior.
|
||||
*/
|
||||
if (unlikely(!ceil))
|
||||
return rand;
|
||||
|
||||
mult = (u64)ceil * rand;
|
||||
if (unlikely((u32)mult < ceil)) {
|
||||
u32 bound = -ceil % ceil;
|
||||
while (unlikely((u32)mult < bound))
|
||||
mult = (u64)ceil * get_random_u32();
|
||||
}
|
||||
return mult >> 32;
|
||||
}
|
||||
EXPORT_SYMBOL(__get_random_u32_below);
|
||||
|
||||
#ifdef CONFIG_SMP
|
||||
/*
|
||||
* This function is called when the CPU is coming up, with entry
|
||||
@ -660,9 +716,10 @@ static void __cold _credit_init_bits(size_t bits)
|
||||
} while (!try_cmpxchg(&input_pool.init_bits, &orig, new));
|
||||
|
||||
if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
|
||||
crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */
|
||||
crng_reseed(NULL); /* Sets crng_init to CRNG_READY under base_crng.lock. */
|
||||
if (static_key_initialized)
|
||||
execute_in_process_context(crng_set_ready, &set_ready);
|
||||
atomic_notifier_call_chain(&random_ready_notifier, 0, NULL);
|
||||
wake_up_interruptible(&crng_init_wait);
|
||||
kill_fasync(&fasync, SIGIO, POLL_IN);
|
||||
pr_notice("crng init done\n");
|
||||
@ -689,7 +746,7 @@ static void __cold _credit_init_bits(size_t bits)
|
||||
* the above entropy accumulation routines:
|
||||
*
|
||||
* void add_device_randomness(const void *buf, size_t len);
|
||||
* void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
|
||||
* void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy, bool sleep_after);
|
||||
* void add_bootloader_randomness(const void *buf, size_t len);
|
||||
* void add_vmfork_randomness(const void *unique_vm_id, size_t len);
|
||||
* void add_interrupt_randomness(int irq);
|
||||
@ -710,7 +767,7 @@ static void __cold _credit_init_bits(size_t bits)
|
||||
*
|
||||
* add_bootloader_randomness() is called by bootloader drivers, such as EFI
|
||||
* and device tree, and credits its input depending on whether or not the
|
||||
* configuration option CONFIG_RANDOM_TRUST_BOOTLOADER is set.
|
||||
* command line option 'random.trust_bootloader'.
|
||||
*
|
||||
* add_vmfork_randomness() adds a unique (but not necessarily secret) ID
|
||||
* representing the current instance of a VM to the pool, without crediting,
|
||||
@ -736,8 +793,8 @@ static void __cold _credit_init_bits(size_t bits)
|
||||
*
|
||||
**********************************************************************/
|
||||
|
||||
static bool trust_cpu __initdata = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
|
||||
static bool trust_bootloader __initdata = IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER);
|
||||
static bool trust_cpu __initdata = true;
|
||||
static bool trust_bootloader __initdata = true;
|
||||
static int __init parse_trust_cpu(char *arg)
|
||||
{
|
||||
return kstrtobool(arg, &trust_cpu);
|
||||
@ -768,7 +825,7 @@ static int random_pm_notification(struct notifier_block *nb, unsigned long actio
|
||||
if (crng_ready() && (action == PM_RESTORE_PREPARE ||
|
||||
(action == PM_POST_SUSPEND && !IS_ENABLED(CONFIG_PM_AUTOSLEEP) &&
|
||||
!IS_ENABLED(CONFIG_PM_USERSPACE_AUTOSLEEP)))) {
|
||||
crng_reseed();
|
||||
crng_reseed(NULL);
|
||||
pr_notice("crng reseeded on system resumption\n");
|
||||
}
|
||||
return 0;
|
||||
@ -791,13 +848,13 @@ void __init random_init_early(const char *command_line)
|
||||
#endif
|
||||
|
||||
for (i = 0, arch_bits = sizeof(entropy) * 8; i < ARRAY_SIZE(entropy);) {
|
||||
longs = arch_get_random_seed_longs_early(entropy, ARRAY_SIZE(entropy) - i);
|
||||
longs = arch_get_random_seed_longs(entropy, ARRAY_SIZE(entropy) - i);
|
||||
if (longs) {
|
||||
_mix_pool_bytes(entropy, sizeof(*entropy) * longs);
|
||||
i += longs;
|
||||
continue;
|
||||
}
|
||||
longs = arch_get_random_longs_early(entropy, ARRAY_SIZE(entropy) - i);
|
||||
longs = arch_get_random_longs(entropy, ARRAY_SIZE(entropy) - i);
|
||||
if (longs) {
|
||||
_mix_pool_bytes(entropy, sizeof(*entropy) * longs);
|
||||
i += longs;
|
||||
@ -812,7 +869,7 @@ void __init random_init_early(const char *command_line)
|
||||
|
||||
/* Reseed if already seeded by earlier phases. */
|
||||
if (crng_ready())
|
||||
crng_reseed();
|
||||
crng_reseed(NULL);
|
||||
else if (trust_cpu)
|
||||
_credit_init_bits(arch_bits);
|
||||
}
|
||||
@ -840,7 +897,7 @@ void __init random_init(void)
|
||||
|
||||
/* Reseed if already seeded by earlier phases. */
|
||||
if (crng_ready())
|
||||
crng_reseed();
|
||||
crng_reseed(NULL);
|
||||
|
||||
WARN_ON(register_pm_notifier(&pm_notifier));
|
||||
|
||||
@ -869,11 +926,11 @@ void add_device_randomness(const void *buf, size_t len)
|
||||
EXPORT_SYMBOL(add_device_randomness);
|
||||
|
||||
/*
|
||||
* Interface for in-kernel drivers of true hardware RNGs.
|
||||
* Those devices may produce endless random bits and will be throttled
|
||||
* when our pool is full.
|
||||
* Interface for in-kernel drivers of true hardware RNGs. Those devices
|
||||
* may produce endless random bits, so this function will sleep for
|
||||
* some amount of time after, if the sleep_after parameter is true.
|
||||
*/
|
||||
void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy)
|
||||
void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy, bool sleep_after)
|
||||
{
|
||||
mix_pool_bytes(buf, len);
|
||||
credit_init_bits(entropy);
|
||||
@ -882,14 +939,14 @@ void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy)
|
||||
* Throttle writing to once every reseed interval, unless we're not yet
|
||||
* initialized or no entropy is credited.
|
||||
*/
|
||||
if (!kthread_should_stop() && (crng_ready() || !entropy))
|
||||
if (sleep_after && !kthread_should_stop() && (crng_ready() || !entropy))
|
||||
schedule_timeout_interruptible(crng_reseed_interval());
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
|
||||
|
||||
/*
|
||||
* Handle random seed passed by bootloader, and credit it if
|
||||
* CONFIG_RANDOM_TRUST_BOOTLOADER is set.
|
||||
* Handle random seed passed by bootloader, and credit it depending
|
||||
* on the command line option 'random.trust_bootloader'.
|
||||
*/
|
||||
void __init add_bootloader_randomness(const void *buf, size_t len)
|
||||
{
|
||||
@ -910,7 +967,7 @@ void __cold add_vmfork_randomness(const void *unique_vm_id, size_t len)
|
||||
{
|
||||
add_device_randomness(unique_vm_id, len);
|
||||
if (crng_ready()) {
|
||||
crng_reseed();
|
||||
crng_reseed(NULL);
|
||||
pr_notice("crng reseeded due to virtual machine fork\n");
|
||||
}
|
||||
blocking_notifier_call_chain(&vmfork_chain, 0, NULL);
|
||||
@@ -1176,66 +1233,102 @@ void __cold rand_initialize_disk(struct gendisk *disk)
 struct entropy_timer_state {
 	unsigned long entropy;
 	struct timer_list timer;
-	unsigned int samples, samples_per_bit;
+	atomic_t samples;
+	unsigned int samples_per_bit;
 };

 /*
- * Each time the timer fires, we expect that we got an unpredictable
- * jump in the cycle counter. Even if the timer is running on another
- * CPU, the timer activity will be touching the stack of the CPU that is
- * generating entropy..
+ * Each time the timer fires, we expect that we got an unpredictable jump in
+ * the cycle counter. Even if the timer is running on another CPU, the timer
+ * activity will be touching the stack of the CPU that is generating entropy.
  *
- * Note that we don't re-arm the timer in the timer itself - we are
- * happy to be scheduled away, since that just makes the load more
- * complex, but we do not want the timer to keep ticking unless the
- * entropy loop is running.
+ * Note that we don't re-arm the timer in the timer itself - we are happy to be
+ * scheduled away, since that just makes the load more complex, but we do not
+ * want the timer to keep ticking unless the entropy loop is running.
  *
  * So the re-arming always happens in the entropy loop itself.
  */
 static void __cold entropy_timer(struct timer_list *timer)
 {
 	struct entropy_timer_state *state = container_of(timer, struct entropy_timer_state, timer);
+	unsigned long entropy = random_get_entropy();

-	if (++state->samples == state->samples_per_bit) {
+	mix_pool_bytes(&entropy, sizeof(entropy));
+	if (atomic_inc_return(&state->samples) % state->samples_per_bit == 0)
 		credit_init_bits(1);
-		state->samples = 0;
-	}
 }

 /*
- * If we have an actual cycle counter, see if we can
- * generate enough entropy with timing noise
+ * If we have an actual cycle counter, see if we can generate enough entropy
+ * with timing noise.
  */
 static void __cold try_to_generate_entropy(void)
 {
 	enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 15 };
-	struct entropy_timer_state stack;
+	u8 stack_bytes[sizeof(struct entropy_timer_state) + SMP_CACHE_BYTES - 1];
+	struct entropy_timer_state *stack = PTR_ALIGN((void *)stack_bytes, SMP_CACHE_BYTES);
 	unsigned int i, num_different = 0;
 	unsigned long last = random_get_entropy();
+	int cpu = -1;

 	for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
-		stack.entropy = random_get_entropy();
-		if (stack.entropy != last)
+		stack->entropy = random_get_entropy();
+		if (stack->entropy != last)
 			++num_different;
-		last = stack.entropy;
+		last = stack->entropy;
 	}
-	stack.samples_per_bit = DIV_ROUND_UP(NUM_TRIAL_SAMPLES, num_different + 1);
-	if (stack.samples_per_bit > MAX_SAMPLES_PER_BIT)
+	stack->samples_per_bit = DIV_ROUND_UP(NUM_TRIAL_SAMPLES, num_different + 1);
+	if (stack->samples_per_bit > MAX_SAMPLES_PER_BIT)
 		return;

-	stack.samples = 0;
-	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
+	atomic_set(&stack->samples, 0);
+	timer_setup_on_stack(&stack->timer, entropy_timer, 0);
 	while (!crng_ready() && !signal_pending(current)) {
-		if (!timer_pending(&stack.timer))
-			mod_timer(&stack.timer, jiffies);
-		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
-		schedule();
-		stack.entropy = random_get_entropy();
-	}
+		/*
+		 * Check !timer_pending() and then ensure that any previous callback has finished
+		 * executing by checking try_to_del_timer_sync(), before queueing the next one.
+		 */
+		if (!timer_pending(&stack->timer) && try_to_del_timer_sync(&stack->timer) >= 0) {
+			struct cpumask timer_cpus;
+			unsigned int num_cpus;

-	del_timer_sync(&stack.timer);
-	destroy_timer_on_stack(&stack.timer);
-	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
+			/*
+			 * Preemption must be disabled here, both to read the current CPU number
+			 * and to avoid scheduling a timer on a dead CPU.
+			 */
+			preempt_disable();
+
+			/* Only schedule callbacks on timer CPUs that are online. */
+			cpumask_and(&timer_cpus, housekeeping_cpumask(HK_TYPE_TIMER), cpu_online_mask);
+			num_cpus = cpumask_weight(&timer_cpus);
+			/* In very bizarre case of misconfiguration, fallback to all online. */
+			if (unlikely(num_cpus == 0)) {
+				timer_cpus = *cpu_online_mask;
+				num_cpus = cpumask_weight(&timer_cpus);
+			}
+
+			/* Basic CPU round-robin, which avoids the current CPU. */
+			do {
+				cpu = cpumask_next(cpu, &timer_cpus);
+				if (cpu == nr_cpumask_bits)
+					cpu = cpumask_first(&timer_cpus);
+			} while (cpu == smp_processor_id() && num_cpus > 1);
+
+			/* Expiring the timer at `jiffies` means it's the next tick. */
+			stack->timer.expires = jiffies;
+
+			add_timer_on(&stack->timer, cpu);
+
+			preempt_enable();
+		}
+		mix_pool_bytes(&stack->entropy, sizeof(stack->entropy));
+		schedule();
+		stack->entropy = random_get_entropy();
+	}
+	mix_pool_bytes(&stack->entropy, sizeof(stack->entropy));
+
+	del_timer_sync(&stack->timer);
+	destroy_timer_on_stack(&stack->timer);
 }

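The trial loop above sizes samples_per_bit from how often back-to-back cycle counter reads actually differ: the fewer distinct readings, the more timer samples are demanded per credited bit. A userspace model of that estimate, with clock_gettime() standing in for random_get_entropy() (assumption: a POSIX/Linux environment, not kernel code):

    #include <stdio.h>
    #include <time.h>

    #define NUM_TRIAL_SAMPLES 8192

    static unsigned long cycles(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC_RAW, &ts); /* stand-in cycle counter */
        return ts.tv_nsec;
    }

    int main(void)
    {
        unsigned long last = cycles(), cur;
        unsigned int i, num_different = 0, samples_per_bit;

        for (i = 0; i < NUM_TRIAL_SAMPLES - 1; ++i) {
            cur = cycles();
            if (cur != last)
                ++num_different;
            last = cur;
        }
        /* DIV_ROUND_UP(NUM_TRIAL_SAMPLES, num_different + 1), expanded */
        samples_per_bit = (NUM_TRIAL_SAMPLES + num_different) / (num_different + 1);
        printf("distinct=%u -> samples_per_bit=%u\n", num_different, samples_per_bit);
        return 0;
    }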
@@ -1432,7 +1525,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
 			return -EPERM;
 		if (!crng_ready())
 			return -ENODATA;
-		crng_reseed();
+		crng_reseed(NULL);
 		return 0;
 	default:
 		return -EINVAL;

@@ -400,7 +400,7 @@ static int __find_race(void *arg)
 		struct dma_fence *fence = dma_fence_get(data->fc.tail);
 		int seqno;

-		seqno = prandom_u32_max(data->fc.chain_length) + 1;
+		seqno = get_random_u32_inclusive(1, data->fc.chain_length);

 		err = dma_fence_chain_find_seqno(&fence, seqno);
 		if (err) {
@@ -429,7 +429,7 @@ static int __find_race(void *arg)
 		dma_fence_put(fence);

 signal:
-		seqno = prandom_u32_max(data->fc.chain_length - 1);
+		seqno = get_random_u32_below(data->fc.chain_length - 1);
 		dma_fence_signal(data->fc.fences[seqno]);
 		cond_resched();
 	}
@@ -637,7 +637,7 @@ static void randomise_fences(struct fence_chains *fc)
 	while (--count) {
 		unsigned int swp;

-		swp = prandom_u32_max(count + 1);
+		swp = get_random_u32_below(count + 1);
 		if (swp == count)
 			continue;

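Many conversions in this series have exactly this shape: prandom_u32_max(n) + 1 becomes get_random_u32_inclusive(1, n). A quick userspace check that both expressions cover the same interval [1, n]; below() here is only a model of the kernel primitive, not the real implementation:

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    static uint32_t below(uint32_t ceil) /* model of get_random_u32_below() */
    {
        /* random() yields [0, 2^31); multiply-shift maps it into [0, ceil) */
        return (uint32_t)(((uint64_t)ceil * (uint32_t)random()) >> 31);
    }

    static uint32_t inclusive(uint32_t floor, uint32_t ceil)
    {
        return floor + below(ceil - floor + 1);
    }

    int main(void)
    {
        for (int i = 0; i < 1000000; i++) {
            uint32_t o = below(8) + 1;      /* old idiom */
            uint32_t n = inclusive(1, 8);   /* new helper */
            assert(o >= 1 && o <= 8 && n >= 1 && n <= 8);
        }
        return 0;
    }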
@@ -337,6 +337,24 @@ static void __init efi_debugfs_init(void)
 static inline void efi_debugfs_init(void) {}
 #endif

+static void refresh_nv_rng_seed(struct work_struct *work)
+{
+	u8 seed[EFI_RANDOM_SEED_SIZE];
+
+	get_random_bytes(seed, sizeof(seed));
+	efi.set_variable(L"RandomSeed", &LINUX_EFI_RANDOM_SEED_TABLE_GUID,
+			 EFI_VARIABLE_NON_VOLATILE | EFI_VARIABLE_BOOTSERVICE_ACCESS |
+			 EFI_VARIABLE_RUNTIME_ACCESS, sizeof(seed), seed);
+	memzero_explicit(seed, sizeof(seed));
+}
+static int refresh_nv_rng_seed_notification(struct notifier_block *nb, unsigned long action, void *data)
+{
+	static DECLARE_WORK(work, refresh_nv_rng_seed);
+	schedule_work(&work);
+	return NOTIFY_DONE;
+}
+static struct notifier_block refresh_nv_rng_seed_nb = { .notifier_call = refresh_nv_rng_seed_notification };
+
 /*
  * We register the efi subsystem with the firmware subsystem and the
  * efivars subsystem with the efi subsystem, if the system was booted with
@@ -413,6 +431,7 @@ static int __init efisubsys_init(void)
 	platform_device_register_simple("efi_secret", 0, NULL, 0);
 #endif

+	execute_with_initialized_rng(&refresh_nv_rng_seed_nb);
 	return 0;

 err_remove_group:

@@ -2424,7 +2424,7 @@ gen8_dispatch_bsd_engine(struct drm_i915_private *dev_priv,
 	/* Check whether the file_priv has already selected one ring. */
 	if ((int)file_priv->bsd_engine < 0)
 		file_priv->bsd_engine =
-			prandom_u32_max(num_vcs_engines(dev_priv));
+			get_random_u32_below(num_vcs_engines(dev_priv));

 	return file_priv->bsd_engine;
 }

@@ -3689,7 +3689,7 @@ static void virtual_engine_initial_hint(struct virtual_engine *ve)
 	 * NB This does not force us to execute on this engine, it will just
 	 * typically be the first we inspect for submission.
 	 */
-	swp = prandom_u32_max(ve->num_siblings);
+	swp = get_random_u32_below(ve->num_siblings);
 	if (swp)
 		swap(ve->siblings[swp], ve->siblings[0]);
 }

@@ -38,7 +38,7 @@ static int __iopagetest(struct intel_memory_region *mem,
 			u8 value, resource_size_t offset,
 			const void *caller)
 {
-	int byte = prandom_u32_max(pagesize);
+	int byte = get_random_u32_below(pagesize);
 	u8 result[3];

 	memset_io(va, value, pagesize); /* or GPF! */
@@ -92,7 +92,7 @@ static int iopagetest(struct intel_memory_region *mem,
 static resource_size_t random_page(resource_size_t last)
 {
 	/* Limited to low 44b (16TiB), but should suffice for a spot check */
-	return prandom_u32_max(last >> PAGE_SHIFT) << PAGE_SHIFT;
+	return get_random_u32_below(last >> PAGE_SHIFT) << PAGE_SHIFT;
 }

 static int iomemtest(struct intel_memory_region *mem,

@@ -3807,7 +3807,7 @@ static int cma_alloc_any_port(enum rdma_ucm_port_space ps,

 	inet_get_local_port_range(net, &low, &high);
 	remaining = (high - low) + 1;
-	rover = prandom_u32_max(remaining) + low;
+	rover = get_random_u32_inclusive(low, remaining + low - 1);
 retry:
 	if (last_used_port != rover) {
 		struct rdma_bind_list *bind_list;

@@ -54,7 +54,7 @@ u32 c4iw_id_alloc(struct c4iw_id_table *alloc)

 	if (obj < alloc->max) {
 		if (alloc->flags & C4IW_ID_TABLE_F_RANDOM)
-			alloc->last += prandom_u32_max(RANDOM_SKIP);
+			alloc->last += get_random_u32_below(RANDOM_SKIP);
 		else
 			alloc->last = obj + 1;
 		if (alloc->last >= alloc->max)
@@ -85,7 +85,7 @@ int c4iw_id_table_alloc(struct c4iw_id_table *alloc, u32 start, u32 num,
 	alloc->start = start;
 	alloc->flags = flags;
 	if (flags & C4IW_ID_TABLE_F_RANDOM)
-		alloc->last = prandom_u32_max(RANDOM_SKIP);
+		alloc->last = get_random_u32_below(RANDOM_SKIP);
 	else
 		alloc->last = 0;
 	alloc->max = num;

@@ -41,9 +41,8 @@ static inline u16 get_ah_udp_sport(const struct rdma_ah_attr *ah_attr)
 	u16 sport;

 	if (!fl)
-		sport = prandom_u32_max(IB_ROCE_UDP_ENCAP_VALID_PORT_MAX + 1 -
-					IB_ROCE_UDP_ENCAP_VALID_PORT_MIN) +
-			IB_ROCE_UDP_ENCAP_VALID_PORT_MIN;
+		sport = get_random_u32_inclusive(IB_ROCE_UDP_ENCAP_VALID_PORT_MIN,
+						 IB_ROCE_UDP_ENCAP_VALID_PORT_MAX);
 	else
 		sport = rdma_flow_label_to_udp_sport(fl);

@@ -1517,7 +1517,7 @@ static void rtrs_clt_err_recovery_work(struct work_struct *work)
 	rtrs_clt_stop_and_destroy_conns(clt_path);
 	queue_delayed_work(rtrs_wq, &clt_path->reconnect_dwork,
 			   msecs_to_jiffies(delay_ms +
-					    prandom_u32_max(RTRS_RECONNECT_SEED)));
+					    get_random_u32_below(RTRS_RECONNECT_SEED)));
 }

 static struct rtrs_clt_path *alloc_path(struct rtrs_clt_sess *clt,

@@ -401,7 +401,7 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
 	}

 	if (bypass_torture_test(dc)) {
-		if (prandom_u32_max(4) == 3)
+		if (get_random_u32_below(4) == 3)
 			goto skip;
 		else
 			goto rescale;

@@ -872,7 +872,7 @@ static void precalculate_color(struct tpg_data *tpg, int k)
 	} else if (tpg->pattern == TPG_PAT_NOISE) {
 		r = g = b = get_random_u8();
 	} else if (k == TPG_COLOR_RANDOM) {
-		r = g = b = tpg->qual_offset + prandom_u32_max(196);
+		r = g = b = tpg->qual_offset + get_random_u32_below(196);
 	} else if (k >= TPG_COLOR_RAMP) {
 		r = g = b = k - TPG_COLOR_RAMP;
 	}
@@ -2286,7 +2286,7 @@ static void tpg_fill_params_extras(const struct tpg_data *tpg,
 	params->wss_width = tpg->crop.width;
 	params->wss_width = tpg_hscale_div(tpg, p, params->wss_width);
 	params->wss_random_offset =
-		params->twopixsize * prandom_u32_max(tpg->src_width / 2);
+		params->twopixsize * get_random_u32_below(tpg->src_width / 2);

 	if (tpg->crop.left < tpg->border.left) {
 		left_pillar_width = tpg->border.left - tpg->crop.left;
@@ -2495,9 +2495,9 @@ static void tpg_fill_plane_pattern(const struct tpg_data *tpg,
 		linestart_newer = tpg->black_line[p];
 	} else if (tpg->pattern == TPG_PAT_NOISE || tpg->qual == TPG_QUAL_NOISE) {
 		linestart_older = tpg->random_line[p] +
-				  twopixsize * prandom_u32_max(tpg->src_width / 2);
+				  twopixsize * get_random_u32_below(tpg->src_width / 2);
 		linestart_newer = tpg->random_line[p] +
-				  twopixsize * prandom_u32_max(tpg->src_width / 2);
+				  twopixsize * get_random_u32_below(tpg->src_width / 2);
 	} else {
 		unsigned frame_line_old =
 			(frame_line + mv_vert_old) % tpg->src_height;

@@ -188,11 +188,11 @@ static void vidtv_demod_update_stats(struct dvb_frontend *fe)
 	 * Also, usually, signal strength is a negative number in dBm.
 	 */
 	c->strength.stat[0].svalue = state->tuner_cnr;
-	c->strength.stat[0].svalue -= prandom_u32_max(state->tuner_cnr / 50);
+	c->strength.stat[0].svalue -= get_random_u32_below(state->tuner_cnr / 50);
 	c->strength.stat[0].svalue -= 68000; /* Adjust to a better range */

 	c->cnr.stat[0].svalue = state->tuner_cnr;
-	c->cnr.stat[0].svalue -= prandom_u32_max(state->tuner_cnr / 50);
+	c->cnr.stat[0].svalue -= get_random_u32_below(state->tuner_cnr / 50);
 }

 static int vidtv_demod_read_status(struct dvb_frontend *fe,
@@ -213,11 +213,11 @@ static int vidtv_demod_read_status(struct dvb_frontend *fe,

 	if (snr < cnr2qual->cnr_ok) {
 		/* eventually lose the TS lock */
-		if (prandom_u32_max(100) < config->drop_tslock_prob_on_low_snr)
+		if (get_random_u32_below(100) < config->drop_tslock_prob_on_low_snr)
 			state->status = 0;
 	} else {
 		/* recover if the signal improves */
-		if (prandom_u32_max(100) <
+		if (get_random_u32_below(100) <
 		    config->recover_tslock_prob_on_good_snr)
 			state->status = FE_HAS_SIGNAL |
 					FE_HAS_CARRIER |

@@ -693,7 +693,7 @@ static noinline_for_stack void vivid_thread_vid_cap_tick(struct vivid_dev *dev,

 	/* Drop a certain percentage of buffers. */
 	if (dev->perc_dropped_buffers &&
-	    prandom_u32_max(100) < dev->perc_dropped_buffers)
+	    get_random_u32_below(100) < dev->perc_dropped_buffers)
 		goto update_mv;

 	spin_lock(&dev->slock);

@@ -51,7 +51,7 @@ static void vivid_thread_vid_out_tick(struct vivid_dev *dev)

 	/* Drop a certain percentage of buffers. */
 	if (dev->perc_dropped_buffers &&
-	    prandom_u32_max(100) < dev->perc_dropped_buffers)
+	    get_random_u32_below(100) < dev->perc_dropped_buffers)
 		return;

 	spin_lock(&dev->slock);

@@ -94,8 +94,8 @@ retry:

 	if (data_blk == 0 && dev->radio_rds_loop)
 		vivid_radio_rds_init(dev);
-	if (perc && prandom_u32_max(100) < perc) {
-		switch (prandom_u32_max(4)) {
+	if (perc && get_random_u32_below(100) < perc) {
+		switch (get_random_u32_below(4)) {
 		case 0:
 			rds.block |= V4L2_RDS_BLOCK_CORRECTED;
 			break;

@@ -90,7 +90,7 @@ static void vivid_thread_sdr_cap_tick(struct vivid_dev *dev)

 	/* Drop a certain percentage of buffers. */
 	if (dev->perc_dropped_buffers &&
-	    prandom_u32_max(100) < dev->perc_dropped_buffers)
+	    get_random_u32_below(100) < dev->perc_dropped_buffers)
 		return;

 	spin_lock(&dev->slock);

@@ -221,7 +221,7 @@ static void vivid_fill_buff_noise(__s16 *tch_buf, int size)

 static inline int get_random_pressure(void)
 {
-	return prandom_u32_max(VIVID_PRESSURE_LIMIT);
+	return get_random_u32_below(VIVID_PRESSURE_LIMIT);
 }

 static void vivid_tch_buf_set(struct v4l2_pix_format *f,

@@ -97,8 +97,8 @@ static void mmc_should_fail_request(struct mmc_host *host,
 	    !should_fail(&host->fail_mmc_request, data->blksz * data->blocks))
 		return;

-	data->error = data_errors[prandom_u32_max(ARRAY_SIZE(data_errors))];
-	data->bytes_xfered = prandom_u32_max(data->bytes_xfered >> 9) << 9;
+	data->error = data_errors[get_random_u32_below(ARRAY_SIZE(data_errors))];
+	data->bytes_xfered = get_random_u32_below(data->bytes_xfered >> 9) << 9;
 }

 #else /* CONFIG_FAIL_MMC_REQUEST */

@@ -1858,7 +1858,7 @@ static void dw_mci_start_fault_timer(struct dw_mci *host)
 	 * Try to inject the error at random points during the data transfer.
 	 */
 	hrtimer_start(&host->fault_timer,
-		      ms_to_ktime(prandom_u32_max(25)),
+		      ms_to_ktime(get_random_u32_below(25)),
 		      HRTIMER_MODE_REL);
 }

@@ -1405,9 +1405,9 @@ static void ns_do_bit_flips(struct nandsim *ns, int num)
 	if (bitflips && get_random_u16() < (1 << 6)) {
 		int flips = 1;
 		if (bitflips > 1)
-			flips = prandom_u32_max(bitflips) + 1;
+			flips = get_random_u32_inclusive(1, bitflips);
 		while (flips--) {
-			int pos = prandom_u32_max(num * 8);
+			int pos = get_random_u32_below(num * 8);
 			ns->buf.byte[pos / 8] ^= (1 << (pos % 8));
 			NS_WARN("read_page: flipping bit %d in page %d "
 				"reading from %d ecc: corrected=%u failed=%u\n",

@@ -47,7 +47,7 @@ struct nand_ecc_test {
 static void single_bit_error_data(void *error_data, void *correct_data,
 				  size_t size)
 {
-	unsigned int offset = prandom_u32_max(size * BITS_PER_BYTE);
+	unsigned int offset = get_random_u32_below(size * BITS_PER_BYTE);

 	memcpy(error_data, correct_data, size);
 	__change_bit_le(offset, error_data);
@@ -58,9 +58,9 @@ static void double_bit_error_data(void *error_data, void *correct_data,
 {
 	unsigned int offset[2];

-	offset[0] = prandom_u32_max(size * BITS_PER_BYTE);
+	offset[0] = get_random_u32_below(size * BITS_PER_BYTE);
 	do {
-		offset[1] = prandom_u32_max(size * BITS_PER_BYTE);
+		offset[1] = get_random_u32_below(size * BITS_PER_BYTE);
 	} while (offset[0] == offset[1]);

 	memcpy(error_data, correct_data, size);
@@ -71,7 +71,7 @@ static void double_bit_error_data(void *error_data, void *correct_data,

 static unsigned int random_ecc_bit(size_t size)
 {
-	unsigned int offset = prandom_u32_max(3 * BITS_PER_BYTE);
+	unsigned int offset = get_random_u32_below(3 * BITS_PER_BYTE);

 	if (size == 256) {
 		/*
@@ -79,7 +79,7 @@ static unsigned int random_ecc_bit(size_t size)
 		 * and 17th bit) in ECC code for 256 byte data block
 		 */
 		while (offset == 16 || offset == 17)
-			offset = prandom_u32_max(3 * BITS_PER_BYTE);
+			offset = get_random_u32_below(3 * BITS_PER_BYTE);
 	}

 	return offset;

@@ -46,7 +46,7 @@ static int rand_eb(void)

 again:
 	/* Read or write up 2 eraseblocks at a time - hence 'ebcnt - 1' */
-	eb = prandom_u32_max(ebcnt - 1);
+	eb = get_random_u32_below(ebcnt - 1);
 	if (bbt[eb])
 		goto again;
 	return eb;
@@ -54,12 +54,12 @@ again:

 static int rand_offs(void)
 {
-	return prandom_u32_max(bufsize);
+	return get_random_u32_below(bufsize);
 }

 static int rand_len(int offs)
 {
-	return prandom_u32_max(bufsize - offs);
+	return get_random_u32_below(bufsize - offs);
 }

 static int do_read(void)
@@ -118,7 +118,7 @@ static int do_write(void)

 static int do_operation(void)
 {
-	if (prandom_u32_max(2))
+	if (get_random_u32_below(2))
 		return do_read();
 	else
 		return do_write();

@@ -590,7 +590,7 @@ int ubi_dbg_power_cut(struct ubi_device *ubi, int caller)

 	if (ubi->dbg.power_cut_max > ubi->dbg.power_cut_min) {
 		range = ubi->dbg.power_cut_max - ubi->dbg.power_cut_min;
-		ubi->dbg.power_cut_counter += prandom_u32_max(range);
+		ubi->dbg.power_cut_counter += get_random_u32_below(range);
 	}
 	return 0;
 }

@@ -73,7 +73,7 @@ static inline int ubi_dbg_is_bgt_disabled(const struct ubi_device *ubi)
 static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
 {
 	if (ubi->dbg.emulate_bitflips)
-		return !prandom_u32_max(200);
+		return !get_random_u32_below(200);
 	return 0;
 }

@@ -87,7 +87,7 @@ static inline int ubi_dbg_is_bitflip(const struct ubi_device *ubi)
 static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi)
 {
 	if (ubi->dbg.emulate_io_failures)
-		return !prandom_u32_max(500);
+		return !get_random_u32_below(500);
 	return 0;
 }

@@ -101,7 +101,7 @@ static inline int ubi_dbg_is_write_failure(const struct ubi_device *ubi)
 static inline int ubi_dbg_is_erase_failure(const struct ubi_device *ubi)
 {
 	if (ubi->dbg.emulate_io_failures)
-		return !prandom_u32_max(400);
+		return !get_random_u32_below(400);
 	return 0;
 }

@@ -4105,7 +4105,7 @@ static int cnic_cm_alloc_mem(struct cnic_dev *dev)
 	for (i = 0; i < MAX_CM_SK_TBL_SZ; i++)
 		atomic_set(&cp->csk_tbl[i].ref_count, 0);

-	port_id = prandom_u32_max(CNIC_LOCAL_PORT_RANGE);
+	port_id = get_random_u32_below(CNIC_LOCAL_PORT_RANGE);
 	if (cnic_init_id_tbl(&cp->csk_port_tbl, CNIC_LOCAL_PORT_RANGE,
 			     CNIC_LOCAL_PORT_MIN, port_id)) {
 		cnic_cm_free_mem(dev);

@@ -919,8 +919,8 @@ static int csk_wait_memory(struct chtls_dev *cdev,
 	current_timeo = *timeo_p;
 	noblock = (*timeo_p ? false : true);
 	if (csk_mem_free(cdev, sk)) {
-		current_timeo = prandom_u32_max(HZ / 5) + 2;
-		vm_wait = prandom_u32_max(HZ / 5) + 2;
+		current_timeo = get_random_u32_below(HZ / 5) + 2;
+		vm_wait = get_random_u32_below(HZ / 5) + 2;
 	}

 	add_wait_queue(sk_sleep(sk), &wait);

@@ -1760,7 +1760,7 @@ static int qca808x_phy_fast_retrain_config(struct phy_device *phydev)

 static int qca808x_phy_ms_random_seed_set(struct phy_device *phydev)
 {
-	u16 seed_value = prandom_u32_max(QCA808X_MASTER_SLAVE_SEED_RANGE);
+	u16 seed_value = get_random_u32_below(QCA808X_MASTER_SLAVE_SEED_RANGE);

 	return at803x_debug_reg_mask(phydev, QCA808X_PHY_DEBUG_LOCAL_SEED,
 				     QCA808X_MASTER_SLAVE_SEED_CFG,

@@ -16,7 +16,7 @@ static bool rnd_transmit(struct team *team, struct sk_buff *skb)
 	struct team_port *port;
 	int port_index;

-	port_index = prandom_u32_max(team->en_port_count);
+	port_index = get_random_u32_below(team->en_port_count);
 	port = team_get_port_by_index_rcu(team, port_index);
 	if (unlikely(!port))
 		goto drop;

@@ -285,8 +285,8 @@ static __init bool randomized_test(void)

 	for (i = 0; i < NUM_RAND_ROUTES; ++i) {
 		get_random_bytes(ip, 4);
-		cidr = prandom_u32_max(32) + 1;
-		peer = peers[prandom_u32_max(NUM_PEERS)];
+		cidr = get_random_u32_inclusive(1, 32);
+		peer = peers[get_random_u32_below(NUM_PEERS)];
 		if (wg_allowedips_insert_v4(&t, (struct in_addr *)ip, cidr,
 					    peer, &mutex) < 0) {
 			pr_err("allowedips random self-test malloc: FAIL\n");
@@ -300,7 +300,7 @@ static __init bool randomized_test(void)
 		for (j = 0; j < NUM_MUTATED_ROUTES; ++j) {
 			memcpy(mutated, ip, 4);
 			get_random_bytes(mutate_mask, 4);
-			mutate_amount = prandom_u32_max(32);
+			mutate_amount = get_random_u32_below(32);
 			for (k = 0; k < mutate_amount / 8; ++k)
 				mutate_mask[k] = 0xff;
 			mutate_mask[k] = 0xff
@@ -311,8 +311,8 @@ static __init bool randomized_test(void)
 				mutated[k] = (mutated[k] & mutate_mask[k]) |
 					     (~mutate_mask[k] &
 					      get_random_u8());
-			cidr = prandom_u32_max(32) + 1;
-			peer = peers[prandom_u32_max(NUM_PEERS)];
+			cidr = get_random_u32_inclusive(1, 32);
+			peer = peers[get_random_u32_below(NUM_PEERS)];
 			if (wg_allowedips_insert_v4(&t,
 						    (struct in_addr *)mutated,
 						    cidr, peer, &mutex) < 0) {
@@ -329,8 +329,8 @@ static __init bool randomized_test(void)

 	for (i = 0; i < NUM_RAND_ROUTES; ++i) {
 		get_random_bytes(ip, 16);
-		cidr = prandom_u32_max(128) + 1;
-		peer = peers[prandom_u32_max(NUM_PEERS)];
+		cidr = get_random_u32_inclusive(1, 128);
+		peer = peers[get_random_u32_below(NUM_PEERS)];
 		if (wg_allowedips_insert_v6(&t, (struct in6_addr *)ip, cidr,
 					    peer, &mutex) < 0) {
 			pr_err("allowedips random self-test malloc: FAIL\n");
@@ -344,7 +344,7 @@ static __init bool randomized_test(void)
 		for (j = 0; j < NUM_MUTATED_ROUTES; ++j) {
 			memcpy(mutated, ip, 16);
 			get_random_bytes(mutate_mask, 16);
-			mutate_amount = prandom_u32_max(128);
+			mutate_amount = get_random_u32_below(128);
 			for (k = 0; k < mutate_amount / 8; ++k)
 				mutate_mask[k] = 0xff;
 			mutate_mask[k] = 0xff
@@ -355,8 +355,8 @@ static __init bool randomized_test(void)
 				mutated[k] = (mutated[k] & mutate_mask[k]) |
 					     (~mutate_mask[k] &
 					      get_random_u8());
-			cidr = prandom_u32_max(128) + 1;
-			peer = peers[prandom_u32_max(NUM_PEERS)];
+			cidr = get_random_u32_inclusive(1, 128);
+			peer = peers[get_random_u32_below(NUM_PEERS)];
 			if (wg_allowedips_insert_v6(&t,
 						    (struct in6_addr *)mutated,
 						    cidr, peer, &mutex) < 0) {

@@ -147,7 +147,7 @@ void wg_timers_data_sent(struct wg_peer *peer)
 	if (!timer_pending(&peer->timer_new_handshake))
 		mod_peer_timer(peer, &peer->timer_new_handshake,
 			       jiffies + (KEEPALIVE_TIMEOUT + REKEY_TIMEOUT) * HZ +
-			       prandom_u32_max(REKEY_TIMEOUT_JITTER_MAX_JIFFIES));
+			       get_random_u32_below(REKEY_TIMEOUT_JITTER_MAX_JIFFIES));
 }

 /* Should be called after an authenticated data packet is received. */
@@ -183,7 +183,7 @@ void wg_timers_handshake_initiated(struct wg_peer *peer)
 {
 	mod_peer_timer(peer, &peer->timer_retransmit_handshake,
 		       jiffies + REKEY_TIMEOUT * HZ +
-		       prandom_u32_max(REKEY_TIMEOUT_JITTER_MAX_JIFFIES));
+		       get_random_u32_below(REKEY_TIMEOUT_JITTER_MAX_JIFFIES));
 }

 /* Should be called after a handshake response message is received and processed

@@ -1128,7 +1128,7 @@ static void brcmf_p2p_afx_handler(struct work_struct *work)
 	if (afx_hdl->is_listen && afx_hdl->my_listen_chan)
 		/* 100ms ~ 300ms */
 		err = brcmf_p2p_discover_listen(p2p, afx_hdl->my_listen_chan,
-						100 * (1 + prandom_u32_max(3)));
+						100 * get_random_u32_inclusive(1, 3));
 	else
 		err = brcmf_p2p_act_frm_search(p2p, afx_hdl->peer_listen_chan);

@@ -1099,7 +1099,7 @@ static void iwl_mvm_mac_ctxt_cmd_fill_ap(struct iwl_mvm *mvm,
 					  iwl_mvm_mac_ap_iterator, &data);

 	if (data.beacon_device_ts) {
-		u32 rand = prandom_u32_max(64 - 36) + 36;
+		u32 rand = get_random_u32_inclusive(36, 63);
 		mvmvif->ap_beacon_time = data.beacon_device_ts +
 			ieee80211_tu_to_usec(data.beacon_int * rand /
 					     100);

@@ -673,7 +673,7 @@ struct pci_dev *pci_p2pmem_find_many(struct device **clients, int num_clients)
 	}

 	if (dev_cnt)
-		pdev = pci_dev_get(closest_pdevs[prandom_u32_max(dev_cnt)]);
+		pdev = pci_dev_get(closest_pdevs[get_random_u32_below(dev_cnt)]);

 	for (i = 0; i < dev_cnt; i++)
 		pci_dev_put(closest_pdevs[i]);

@@ -48,7 +48,7 @@ unsigned int zfcp_fc_port_scan_backoff(void)
 {
 	if (!port_scan_backoff)
 		return 0;
-	return prandom_u32_max(port_scan_backoff);
+	return get_random_u32_below(port_scan_backoff);
 }

 static void zfcp_fc_port_scan_time(struct zfcp_adapter *adapter)

@@ -2233,7 +2233,7 @@ static void fcoe_ctlr_vn_restart(struct fcoe_ctlr *fip)

 	if (fip->probe_tries < FIP_VN_RLIM_COUNT) {
 		fip->probe_tries++;
-		wait = prandom_u32_max(FIP_VN_PROBE_WAIT);
+		wait = get_random_u32_below(FIP_VN_PROBE_WAIT);
 	} else
 		wait = FIP_VN_RLIM_INT;
 	mod_timer(&fip->timer, jiffies + msecs_to_jiffies(wait));
@@ -3125,7 +3125,7 @@ static void fcoe_ctlr_vn_timeout(struct fcoe_ctlr *fip)
 			  fcoe_all_vn2vn, 0);
 		fip->port_ka_time = jiffies +
 			msecs_to_jiffies(FIP_VN_BEACON_INT +
-					 prandom_u32_max(FIP_VN_BEACON_FUZZ));
+					 get_random_u32_below(FIP_VN_BEACON_FUZZ));
 	}
 	if (time_before(fip->port_ka_time, next_time))
 		next_time = fip->port_ka_time;

@@ -618,7 +618,7 @@ static int qedi_cm_alloc_mem(struct qedi_ctx *qedi)
 			     sizeof(struct qedi_endpoint *)), GFP_KERNEL);
 	if (!qedi->ep_tbl)
 		return -ENOMEM;
-	port_id = prandom_u32_max(QEDI_LOCAL_PORT_RANGE);
+	port_id = get_random_u32_below(QEDI_LOCAL_PORT_RANGE);
 	if (qedi_init_id_tbl(&qedi->lcl_port_tbl, QEDI_LOCAL_PORT_RANGE,
 			     QEDI_LOCAL_PORT_MIN, port_id)) {
 		qedi_cm_free_mem(qedi);

@@ -5702,16 +5702,16 @@ static int schedule_resp(struct scsi_cmnd *cmnd, struct sdebug_dev_info *devip,
 			u64 ns = jiffies_to_nsecs(delta_jiff);

 			if (sdebug_random && ns < U32_MAX) {
-				ns = prandom_u32_max((u32)ns);
+				ns = get_random_u32_below((u32)ns);
 			} else if (sdebug_random) {
 				ns >>= 12; /* scale to 4 usec precision */
 				if (ns < U32_MAX) /* over 4 hours max */
-					ns = prandom_u32_max((u32)ns);
+					ns = get_random_u32_below((u32)ns);
 				ns <<= 12;
 			}
 			kt = ns_to_ktime(ns);
 		} else { /* ndelay has a 4.2 second max */
-			kt = sdebug_random ? prandom_u32_max((u32)ndelay) :
+			kt = sdebug_random ? get_random_u32_below((u32)ndelay) :
 					     (u32)ndelay;
 			if (ndelay < INCLUSIVE_TIMING_MAX_NS) {
 				u64 d = ktime_get_boottime_ns() - ns_from_boot;

@@ -362,7 +362,7 @@ static int ceph_fill_fragtree(struct inode *inode,
 	if (nsplits != ci->i_fragtree_nsplits) {
 		update = true;
 	} else if (nsplits) {
-		i = prandom_u32_max(nsplits);
+		i = get_random_u32_below(nsplits);
 		id = le32_to_cpu(fragtree->splits[i].frag);
 		if (!__ceph_find_frag(ci, id))
 			update = true;

@@ -29,7 +29,7 @@ static int __mdsmap_get_random_mds(struct ceph_mdsmap *m, bool ignore_laggy)
 		return -1;

 	/* pick */
-	n = prandom_u32_max(n);
+	n = get_random_u32_below(n);
 	for (j = 0, i = 0; i < m->possible_max_rank; i++) {
 		if (CEPH_MDS_IS_READY(i, ignore_laggy))
 			j++;

@@ -277,7 +277,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent)
 	int best_ndir = inodes_per_group;
 	int best_group = -1;

-	parent_group = prandom_u32_max(ngroups);
+	parent_group = get_random_u32_below(ngroups);
 	for (i = 0; i < ngroups; i++) {
 		group = (parent_group + i) % ngroups;
 		desc = ext2_get_group_desc (sb, group, NULL);

@@ -465,7 +465,7 @@ static int find_group_orlov(struct super_block *sb, struct inode *parent,
 		ext4fs_dirhash(parent, qstr->name, qstr->len, &hinfo);
 		parent_group = hinfo.hash % ngroups;
 	} else
-		parent_group = prandom_u32_max(ngroups);
+		parent_group = get_random_u32_below(ngroups);
 	for (i = 0; i < ngroups; i++) {
 		g = (parent_group + i) % ngroups;
 		get_orlov_stats(sb, g, flex_size, &stats);

@@ -262,13 +262,7 @@ void ext4_stop_mmpd(struct ext4_sb_info *sbi)
  */
 static unsigned int mmp_new_seq(void)
 {
-	u32 new_seq;
-
-	do {
-		new_seq = get_random_u32();
-	} while (new_seq > EXT4_MMP_SEQ_MAX);
-
-	return new_seq;
+	return get_random_u32_below(EXT4_MMP_SEQ_MAX + 1);
 }

 /*

@@ -3778,7 +3778,7 @@ cont_thread:
 		}
 		if (!progress) {
 			elr->lr_next_sched = jiffies +
-				prandom_u32_max(EXT4_DEF_LI_MAX_START_DELAY * HZ);
+				get_random_u32_below(EXT4_DEF_LI_MAX_START_DELAY * HZ);
 		}
 		if (time_before(elr->lr_next_sched, next_wakeup))
 			next_wakeup = elr->lr_next_sched;
@@ -3925,8 +3925,7 @@ static struct ext4_li_request *ext4_li_request_new(struct super_block *sb,
 	 * spread the inode table initialization requests
 	 * better.
 	 */
-	elr->lr_next_sched = jiffies + prandom_u32_max(
-				EXT4_DEF_LI_MAX_START_DELAY * HZ);
+	elr->lr_next_sched = jiffies + get_random_u32_below(EXT4_DEF_LI_MAX_START_DELAY * HZ);
 	return elr;
 }

@@ -282,7 +282,7 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,

 	/* let's select beginning hot/small space first in no_heap mode*/
 	if (f2fs_need_rand_seg(sbi))
-		p->offset = prandom_u32_max(MAIN_SECS(sbi) * sbi->segs_per_sec);
+		p->offset = get_random_u32_below(MAIN_SECS(sbi) * sbi->segs_per_sec);
 	else if (test_opt(sbi, NOHEAP) &&
 		 (type == CURSEG_HOT_DATA || IS_NODESEG(type)))
 		p->offset = 0;

@@ -2534,7 +2534,7 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type)

 	sanity_check_seg_type(sbi, seg_type);
 	if (f2fs_need_rand_seg(sbi))
-		return prandom_u32_max(MAIN_SECS(sbi) * sbi->segs_per_sec);
+		return get_random_u32_below(MAIN_SECS(sbi) * sbi->segs_per_sec);

 	/* if segs_per_sec is large than 1, we need to keep original policy. */
 	if (__is_large_section(sbi))
@@ -2588,7 +2588,7 @@ static void new_curseg(struct f2fs_sb_info *sbi, int type, bool new_sec)
 	curseg->alloc_type = LFS;
 	if (F2FS_OPTION(sbi).fs_mode == FS_MODE_FRAGMENT_BLK)
 		curseg->fragment_remained_chunk =
-				prandom_u32_max(sbi->max_fragment_chunk) + 1;
+				get_random_u32_inclusive(1, sbi->max_fragment_chunk);
 }

 static int __next_free_blkoff(struct f2fs_sb_info *sbi,
@@ -2625,9 +2625,9 @@ static void __refresh_next_blkoff(struct f2fs_sb_info *sbi,
 		/* To allocate block chunks in different sizes, use random number */
 		if (--seg->fragment_remained_chunk <= 0) {
 			seg->fragment_remained_chunk =
-				prandom_u32_max(sbi->max_fragment_chunk) + 1;
+				get_random_u32_inclusive(1, sbi->max_fragment_chunk);
 			seg->next_blkoff +=
-				prandom_u32_max(sbi->max_fragment_hole) + 1;
+				get_random_u32_inclusive(1, sbi->max_fragment_hole);
 		}
 	}
 }

@@ -2467,7 +2467,7 @@ error_dump:

 static inline int chance(unsigned int n, unsigned int out_of)
 {
-	return !!(prandom_u32_max(out_of) + 1 <= n);
+	return !!(get_random_u32_below(out_of) + 1 <= n);

 }

@@ -2485,13 +2485,13 @@ static int power_cut_emulated(struct ubifs_info *c, int lnum, int write)
 		if (chance(1, 2)) {
 			d->pc_delay = 1;
 			/* Fail within 1 minute */
-			delay = prandom_u32_max(60000);
+			delay = get_random_u32_below(60000);
 			d->pc_timeout = jiffies;
 			d->pc_timeout += msecs_to_jiffies(delay);
 			ubifs_warn(c, "failing after %lums", delay);
 		} else {
 			d->pc_delay = 2;
-			delay = prandom_u32_max(10000);
+			delay = get_random_u32_below(10000);
 			/* Fail within 10000 operations */
 			d->pc_cnt_max = delay;
 			ubifs_warn(c, "failing after %lu calls", delay);
@@ -2571,7 +2571,7 @@ static int corrupt_data(const struct ubifs_info *c, const void *buf,
 	unsigned int from, to, ffs = chance(1, 2);
 	unsigned char *p = (void *)buf;

-	from = prandom_u32_max(len);
+	from = get_random_u32_below(len);
 	/* Corruption span max to end of write unit */
 	to = min(len, ALIGN(from + 1, c->max_write_size));

@@ -1970,28 +1970,28 @@ static int dbg_populate_lsave(struct ubifs_info *c)

 	if (!dbg_is_chk_gen(c))
 		return 0;
-	if (prandom_u32_max(4))
+	if (get_random_u32_below(4))
 		return 0;

 	for (i = 0; i < c->lsave_cnt; i++)
 		c->lsave[i] = c->main_first;

 	list_for_each_entry(lprops, &c->empty_list, list)
-		c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum;
+		c->lsave[get_random_u32_below(c->lsave_cnt)] = lprops->lnum;
 	list_for_each_entry(lprops, &c->freeable_list, list)
-		c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum;
+		c->lsave[get_random_u32_below(c->lsave_cnt)] = lprops->lnum;
 	list_for_each_entry(lprops, &c->frdi_idx_list, list)
-		c->lsave[prandom_u32_max(c->lsave_cnt)] = lprops->lnum;
+		c->lsave[get_random_u32_below(c->lsave_cnt)] = lprops->lnum;

 	heap = &c->lpt_heap[LPROPS_DIRTY_IDX - 1];
 	for (i = 0; i < heap->cnt; i++)
-		c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum;
+		c->lsave[get_random_u32_below(c->lsave_cnt)] = heap->arr[i]->lnum;
 	heap = &c->lpt_heap[LPROPS_DIRTY - 1];
 	for (i = 0; i < heap->cnt; i++)
-		c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum;
+		c->lsave[get_random_u32_below(c->lsave_cnt)] = heap->arr[i]->lnum;
 	heap = &c->lpt_heap[LPROPS_FREE - 1];
 	for (i = 0; i < heap->cnt; i++)
-		c->lsave[prandom_u32_max(c->lsave_cnt)] = heap->arr[i]->lnum;
+		c->lsave[get_random_u32_below(c->lsave_cnt)] = heap->arr[i]->lnum;

 	return 1;
 }

@@ -700,7 +700,7 @@ static int alloc_idx_lebs(struct ubifs_info *c, int cnt)
 		c->ilebs[c->ileb_cnt++] = lnum;
 		dbg_cmt("LEB %d", lnum);
 	}
-	if (dbg_is_chk_index(c) && !prandom_u32_max(8))
+	if (dbg_is_chk_index(c) && !get_random_u32_below(8))
 		return -ENOSPC;
 	return 0;
 }

@@ -1516,7 +1516,7 @@ xfs_alloc_ag_vextent_lastblock(

 #ifdef DEBUG
 	/* Randomly don't execute the first algorithm. */
-	if (prandom_u32_max(2))
+	if (get_random_u32_below(2))
 		return 0;
 #endif

@@ -636,7 +636,7 @@ xfs_ialloc_ag_alloc(
 	/* randomly do sparse inode allocations */
 	if (xfs_has_sparseinodes(tp->t_mountp) &&
 	    igeo->ialloc_min_blks < igeo->ialloc_blks)
-		do_sparse = prandom_u32_max(2);
+		do_sparse = get_random_u32_below(2);
 #endif

 	/*

@@ -279,7 +279,7 @@ xfs_errortag_test(

 	ASSERT(error_tag < XFS_ERRTAG_MAX);
 	randfactor = mp->m_errortag[error_tag];
-	if (!randfactor || prandom_u32_max(randfactor))
+	if (!randfactor || get_random_u32_below(randfactor))
 		return false;

 	xfs_warn_ratelimited(mp,

@@ -21,7 +21,7 @@
 /* Get a random number in [l, r) */
 static inline unsigned long damon_rand(unsigned long l, unsigned long r)
 {
-	return l + prandom_u32_max(r - l);
+	return l + get_random_u32_below(r - l);
 }

 /**

@@ -516,7 +516,7 @@ static inline int node_random(const nodemask_t *maskp)
 		bit = first_node(*maskp);
 		break;
 	default:
-		bit = find_nth_bit(maskp->bits, MAX_NUMNODES, prandom_u32_max(w));
+		bit = find_nth_bit(maskp->bits, MAX_NUMNODES, get_random_u32_below(w));
 		break;
 	}
 	return bit;

@@ -9,6 +9,7 @@
 #define _LINUX_PRANDOM_H

 #include <linux/types.h>
+#include <linux/once.h>
 #include <linux/percpu.h>
 #include <linux/random.h>

@@ -23,24 +24,10 @@ void prandom_seed_full_state(struct rnd_state __percpu *pcpu_state);
 #define prandom_init_once(pcpu_state) \
 	DO_ONCE(prandom_seed_full_state, (pcpu_state))

-/**
- * prandom_u32_max - returns a pseudo-random number in interval [0, ep_ro)
- * @ep_ro: right open interval endpoint
- *
- * Returns a pseudo-random number that is in interval [0, ep_ro). This is
- * useful when requesting a random index of an array containing ep_ro elements,
- * for example. The result is somewhat biased when ep_ro is not a power of 2,
- * so do not use this for cryptographic purposes.
- *
- * Returns: pseudo-random number in interval [0, ep_ro)
- */
+/* Deprecated: use get_random_u32_below() instead. */
 static inline u32 prandom_u32_max(u32 ep_ro)
 {
-	if (__builtin_constant_p(ep_ro <= 1U << 8) && ep_ro <= 1U << 8)
-		return (get_random_u8() * ep_ro) >> 8;
-	if (__builtin_constant_p(ep_ro <= 1U << 16) && ep_ro <= 1U << 16)
-		return (get_random_u16() * ep_ro) >> 16;
-	return ((u64)get_random_u32() * ep_ro) >> 32;
+	return get_random_u32_below(ep_ro);
 }

 /*

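The deprecated helper's old body was multiply-shift with no rejection step, which is why its kernel-doc warned about bias. A small counting demo (plain userspace C, no RNG needed) of the bias that remains when the bound is not a power of 2, here an 8-bit source mapped into 6 buckets:

    #include <stdio.h>

    int main(void)
    {
        unsigned counts[6] = { 0 };

        /* (x * 6) >> 8 maps each of the 256 source values to a bucket. */
        for (unsigned x = 0; x < 256; x++)
            counts[(x * 6) >> 8]++;
        /* Prints 43 43 42 43 43 42: buckets differ because 256 % 6 != 0. */
        for (int b = 0; b < 6; b++)
            printf("bucket %d: %u\n", b, counts[b]);
        return 0;
    }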
@@ -6,7 +6,6 @@
 #include <linux/bug.h>
 #include <linux/kernel.h>
 #include <linux/list.h>
-#include <linux/once.h>

 #include <uapi/linux/random.h>

@@ -17,16 +16,16 @@ void __init add_bootloader_randomness(const void *buf, size_t len);
 void add_input_randomness(unsigned int type, unsigned int code,
 			  unsigned int value) __latent_entropy;
 void add_interrupt_randomness(int irq) __latent_entropy;
-void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
+void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy, bool sleep_after);

-#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
 static inline void add_latent_entropy(void)
 {
+#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
 	add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy));
-}
 #else
-static inline void add_latent_entropy(void) { }
+	add_device_randomness(NULL, 0);
 #endif
+}

 #if IS_ENABLED(CONFIG_VMGENID)
 void add_vmfork_randomness(const void *unique_vm_id, size_t len);
@@ -51,29 +50,76 @@ static inline unsigned long get_random_long(void)
 #endif
 }

-/*
- * On 64-bit architectures, protect against non-terminated C string overflows
- * by zeroing out the first byte of the canary; this leaves 56 bits of entropy.
- */
-#ifdef CONFIG_64BIT
-# ifdef __LITTLE_ENDIAN
-# define CANARY_MASK 0xffffffffffffff00UL
-# else /* big endian, 64 bits: */
-# define CANARY_MASK 0x00ffffffffffffffUL
-# endif
-#else /* 32 bits: */
-# define CANARY_MASK 0xffffffffUL
-#endif
+u32 __get_random_u32_below(u32 ceil);

-static inline unsigned long get_random_canary(void)
+/*
+ * Returns a random integer in the interval [0, ceil), with uniform
+ * distribution, suitable for all uses. Fastest when ceil is a constant, but
+ * still fast for variable ceil as well.
+ */
+static inline u32 get_random_u32_below(u32 ceil)
 {
-	return get_random_long() & CANARY_MASK;
+	if (!__builtin_constant_p(ceil))
+		return __get_random_u32_below(ceil);
+
+	/*
+	 * For the fast path, below, all operations on ceil are precomputed by
+	 * the compiler, so this incurs no overhead for checking pow2, doing
+	 * divisions, or branching based on integer size. The resultant
+	 * algorithm does traditional reciprocal multiplication (typically
+	 * optimized by the compiler into shifts and adds), rejecting samples
+	 * whose lower half would indicate a range indivisible by ceil.
+	 */
+	BUILD_BUG_ON_MSG(!ceil, "get_random_u32_below() must take ceil > 0");
+	if (ceil <= 1)
+		return 0;
+	for (;;) {
+		if (ceil <= 1U << 8) {
+			u32 mult = ceil * get_random_u8();
+			if (likely(is_power_of_2(ceil) || (u8)mult >= (1U << 8) % ceil))
+				return mult >> 8;
+		} else if (ceil <= 1U << 16) {
+			u32 mult = ceil * get_random_u16();
+			if (likely(is_power_of_2(ceil) || (u16)mult >= (1U << 16) % ceil))
+				return mult >> 16;
+		} else {
+			u64 mult = (u64)ceil * get_random_u32();
+			if (likely(is_power_of_2(ceil) || (u32)mult >= -ceil % ceil))
+				return mult >> 32;
+		}
+	}
+}
+
+/*
+ * Returns a random integer in the interval (floor, U32_MAX], with uniform
+ * distribution, suitable for all uses. Fastest when floor is a constant, but
+ * still fast for variable floor as well.
+ */
+static inline u32 get_random_u32_above(u32 floor)
+{
+	BUILD_BUG_ON_MSG(__builtin_constant_p(floor) && floor == U32_MAX,
+			 "get_random_u32_above() must take floor < U32_MAX");
+	return floor + 1 + get_random_u32_below(U32_MAX - floor);
+}
+
+/*
+ * Returns a random integer in the interval [floor, ceil], with uniform
+ * distribution, suitable for all uses. Fastest when floor and ceil are
+ * constant, but still fast for variable floor and ceil as well.
+ */
+static inline u32 get_random_u32_inclusive(u32 floor, u32 ceil)
+{
+	BUILD_BUG_ON_MSG(__builtin_constant_p(floor) && __builtin_constant_p(ceil) &&
+			 (floor > ceil || ceil - floor == U32_MAX),
+			 "get_random_u32_inclusive() must take floor <= ceil");
+	return floor + get_random_u32_below(ceil - floor + 1);
 }

 void __init random_init_early(const char *command_line);
 void __init random_init(void);
 bool rng_is_initialized(void);
 int wait_for_random_bytes(void);
+int execute_with_initialized_rng(struct notifier_block *nb);

 /* Calls wait_for_random_bytes() and then calls get_random_bytes(buf, nbytes).
  * Returns the result of the call to wait_for_random_bytes. */
@@ -108,26 +154,6 @@ declare_get_random_var_wait(long, unsigned long)

 #include <asm/archrandom.h>

-/*
- * Called from the boot CPU during startup; not valid to call once
- * secondary CPUs are up and preemption is possible.
- */
-#ifndef arch_get_random_seed_longs_early
-static inline size_t __init arch_get_random_seed_longs_early(unsigned long *v, size_t max_longs)
-{
-	WARN_ON(system_state != SYSTEM_BOOTING);
-	return arch_get_random_seed_longs(v, max_longs);
-}
-#endif
-
-#ifndef arch_get_random_longs_early
-static inline bool __init arch_get_random_longs_early(unsigned long *v, size_t max_longs)
-{
-	WARN_ON(system_state != SYSTEM_BOOTING);
-	return arch_get_random_longs(v, max_longs);
-}
-#endif
-
 #ifdef CONFIG_SMP
 int random_prepare_cpu(unsigned int cpu);
 int random_online_cpu(unsigned int cpu);

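The constant-ceil fast path above is rejection sampling via reciprocal multiplication: take the high bits of ceil * random as the result, but retry whenever the low bits fall into the remainder region that would otherwise skew the distribution. A userspace model of the 32-bit branch, omitting the power-of-2 short-circuit and using a placeholder rng32() instead of get_random_u32():

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static uint32_t rng32(void) /* placeholder source, not get_random_u32() */
    {
        return ((uint32_t)(random() & 0xffff) << 16) | (random() & 0xffff);
    }

    static uint32_t below(uint32_t ceil)
    {
        for (;;) {
            uint64_t mult = (uint64_t)ceil * rng32();
            /* (0u - ceil) % ceil == (2^32 - ceil) % ceil, the kernel's -ceil % ceil */
            if ((uint32_t)mult >= (0u - ceil) % ceil)
                return mult >> 32;
        }
    }

    int main(void)
    {
        unsigned counts[6] = { 0 };

        srandom(1);
        for (int i = 0; i < 6000000; i++)
            counts[below(6)]++;
        for (int b = 0; b < 6; b++)
            printf("bucket %d: %u\n", b, counts[b]); /* ~1000000 each */
        return 0;
    }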
@@ -6,6 +6,25 @@
 #include <linux/sched.h>
 #include <linux/random.h>

+/*
+ * On 64-bit architectures, protect against non-terminated C string overflows
+ * by zeroing out the first byte of the canary; this leaves 56 bits of entropy.
+ */
+#ifdef CONFIG_64BIT
+# ifdef __LITTLE_ENDIAN
+# define CANARY_MASK 0xffffffffffffff00UL
+# else /* big endian, 64 bits: */
+# define CANARY_MASK 0x00ffffffffffffffUL
+# endif
+#else /* 32 bits: */
+# define CANARY_MASK 0xffffffffUL
+#endif
+
+static inline unsigned long get_random_canary(void)
+{
+	return get_random_long() & CANARY_MASK;
+}
+
 #if defined(CONFIG_STACKPROTECTOR) || defined(CONFIG_ARM64_PTR_AUTH)
 # include <asm/stackprotector.h>
 #else

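A tiny userspace illustration of what CANARY_MASK does on 64-bit little endian: the lowest-addressed byte of the canary becomes 0x00, so an overflowing C string copy stops writing at the first canary byte (the raw value here is made up):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t raw = 0x1122334455667788ULL; /* pretend get_random_long() */
        uint64_t canary = raw & 0xffffffffffffff00ULL;

        /* On little endian, the zeroed 0x00 byte sits at the lowest address. */
        printf("canary = %#llx\n", (unsigned long long)canary);
        return 0;
    }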
@@ -1032,7 +1032,7 @@ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
 	hdr->size = size;
 	hole = min_t(unsigned int, size - (proglen + sizeof(*hdr)),
 		     PAGE_SIZE - sizeof(*hdr));
-	start = prandom_u32_max(hole) & ~(alignment - 1);
+	start = get_random_u32_below(hole) & ~(alignment - 1);

 	/* Leave a random number of instructions before BPF code. */
 	*image_ptr = &hdr->image[start];
@@ -1094,7 +1094,7 @@ bpf_jit_binary_pack_alloc(unsigned int proglen, u8 **image_ptr,

 	hole = min_t(unsigned int, size - (proglen + sizeof(*ro_header)),
 		     BPF_PROG_CHUNK_SIZE - sizeof(*ro_header));
-	start = prandom_u32_max(hole) & ~(alignment - 1);
+	start = get_random_u32_below(hole) & ~(alignment - 1);

 	*image_ptr = &ro_header->image[start];
 	*rw_image = &(*rw_header)->image[start];