Commit Graph

Masahiro Yamada
3932618287 powerpc: replace #include <asm/export.h> with #include <linux/export.h>
Commit ddb5cdbafa ("kbuild: generate KSYMTAB entries by modpost")
deprecated <asm/export.h>, which is now a wrapper of <linux/export.h>.

Replace #include <asm/export.h> with #include <linux/export.h>.

After all the <asm/export.h> lines are converted, <asm/export.h> and
<asm-generic/export.h> will be removed.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
[mpe: Fixup selftests that stub asm/export.h]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230806150954.394189-2-masahiroy@kernel.org
2023-08-16 23:54:48 +10:00
Masahiro Yamada
3eb3f168e8 powerpc: remove unneeded #include <asm/export.h>
There is no EXPORT_SYMBOL line there, hence #include <asm/export.h>
is unneeded.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230806150954.394189-1-masahiroy@kernel.org
2023-08-16 23:54:48 +10:00
Nick Desaulniers
ae7936d232 powerpc/inst: add PPC_TLBILX_LPID
Clang didn't recognize the instruction tlbilxlpid. This was fixed in
clang-18 [0] then backported to clang-17 [1].  To support clang-16 and
older, rather than using that instruction bare in inline asm, add it to
ppc-opcode.h and use that macro as is done elsewhere for other
instructions.
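
For illustration only, the ppc-opcode.h pattern is roughly the sketch below; the
raw opcode value and the helper name are placeholders, not the actual encoding
from the patch:

  /* emit the raw opcode so toolchains that don't know the mnemonic accept it */
  #define PPC_RAW_TLBILX_LPID     (0x7c000024)    /* placeholder value */
  #define PPC_TLBILX_LPID         stringify_in_c(.long PPC_RAW_TLBILX_LPID)

  /* hypothetical user, instead of a bare asm volatile("tlbilxlpid") */
  static inline void example_tlbilx_lpid(void)
  {
          asm volatile(PPC_TLBILX_LPID : : : "memory");
  }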

Link: https://github.com/ClangBuiltLinux/linux/issues/1891
Link: https://github.com/llvm/llvm-project/issues/64080
Link: 53648ac1d0 [0]
Link: 0af7e5e54a [1]
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/llvm/202307211945.TSPcyOhh-lkp@intel.com/
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230803-ppc_tlbilxlpid-v3-1-ca84739bfd73@google.com
2023-08-16 23:54:48 +10:00
Xiongfeng Wang
0e1cd3d9f8 cxl: Use pci_find_vsec_capability() to simplify the code
The PCI core added pci_find_vsec_capability() to query VSEC capabilities.
We can use that core API to simplify the code.

The only logical change is that pci_find_vsec_capability() checks the
Vendor ID before finding the VSEC.

PCI spec rev 5.0 says in 7.9.5.2 Vendor-Specific Header:
  VSEC ID - This field is a vendor-defined ID number that indicates the
  nature and format of the VSEC structure
  Software must qualify the Vendor ID before interpreting this field.
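
As a rough sketch of the conversion (the VSEC ID below is a placeholder, not
necessarily the one the cxl driver uses):

  #include <linux/pci.h>

  #define EXAMPLE_VSEC_ID 0x1280  /* placeholder VSEC ID for illustration */

  /* the core helper walks the extended capability list and also checks the
   * Vendor ID, which is the only behavioural difference noted above */
  static u16 find_example_vsec(struct pci_dev *dev)
  {
          return pci_find_vsec_capability(dev, PCI_VENDOR_ID_IBM, EXAMPLE_VSEC_ID);
  }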

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Andrew Donnellan <ajd@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230804075630.186054-1-wangxiongfeng2@huawei.com
2023-08-16 23:54:48 +10:00
Christophe Leroy
6039fcd3fb powerpc/reg: Remove #ifdef around mtspr macro
That ifdef was introduced by commit 1458dd951f ("powerpc/8xx:
Handle CPU6 ERRATA directly in mtspr() macro") and left over by
commit 2a45addd21 ("powerpc/8xx: Remove CPU6 ERRATA Workaround")

Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/cf652e47ea9e453e89813611b6f76d0939a12063.1687344017.git.christophe.leroy@csgroup.eu
2023-08-16 23:54:48 +10:00
Christophe Leroy
0d5769f950 powerpc/step: Mark __copy_mem_out() and __emulate_dcbz() __always_inline
objtool reports the following two warnings:
  arch/powerpc/lib/sstep.o: warning: objtool: copy_mem_out+0x3c
    (.text+0x30c): call to __copy_mem_out() with UACCESS enabled
  arch/powerpc/lib/sstep.o: warning: objtool: emulate_dcbz+0x70
    (.text+0x4dc): call to __emulate_dcbz() with UACCESS enabled

Mark __copy_mem_out() and __emulate_dcbz() __always_inline
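
The shape of the fix, with a simplified body rather than the real sstep.c
signatures:

  /* __always_inline guarantees the helper is folded into its caller, so
   * objtool never sees an out-of-line call while user access is open */
  static __always_inline void __example_copy_mem_out(u8 *dst, const u8 *src,
                                                      int nb)
  {
          while (nb--)
                  *dst++ = *src++;        /* stands in for the real copy body */
  }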

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/f1d4a15da70190f8c2fcddb377bbc1e09827242c.1687343857.git.christophe.leroy@csgroup.eu
2023-08-16 23:54:48 +10:00
Christophe Leroy
7768716d2f powerpc/cpm2: Remove cpm2_map() and cpm2_unmap()
Since commit 449012daa9 ("[POWERPC] cpm2: Infrastructure code
cleanup.") cpm2_map() just returns the cpm2_immr pointer and
cpm2_unmap() does nothing.

Parts of the code already use cpm2_immr directly, so get rid
of cpm2_map() and cpm2_unmap() by using cpm2_immr directly, and avoid
going through local pointers that hide the pointed-to structure.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/9fe6ff7284e9f968b12abe7de7c08d7ea40e29d6.1691474658.git.christophe.leroy@csgroup.eu
2023-08-16 23:54:47 +10:00
Christophe Leroy
fbbf4280da powerpc/8xx: Remove immr_map() and immr_unmap()
Since commit fb533d0c5a ("[POWERPC] 8xx: Infrastructure code cleanup.")
immr_map() just returns the mpc8xx_immr pointer and immr_unmap()
does nothing.

Parts of the code already use mpc8xx_immr directly, so get rid
of immr_map() and immr_unmap() by using mpc8xx_immr directly, and avoid
going through local pointers that hide the pointed-to structure.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/633ed46f6015ff44d5599258647ea517f75d6a1d.1691474658.git.christophe.leroy@csgroup.eu
2023-08-16 23:54:47 +10:00
Christophe Leroy
cb888cdf74 powerpc: Remove CONFIG_PCI_8260
CONFIG_PCI_8260 is not used anymore, remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/19a4c07466ce8b80f287a06eadcc80c4ab1d2c9e.1691474658.git.christophe.leroy@csgroup.eu
2023-08-16 23:54:47 +10:00
Christophe Leroy
fecc436a97 powerpc/include: Remove mpc8260.h and m82xx_pci.h
SIU_INT_IRQ1 is not used anywhere and __IO_BASE is defined in
asm/io.h

Remove m82xx_pci.h

Then the only thing remaining in mpc8260.h is MPC82XX_BCR_PLDP

Move MPC82XX_BCR_PLDP into asm/cpm2.h then remove mpc8260.h

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/afe23bf3624c389ff17e9789884c78c124b7b202.1691474658.git.christophe.leroy@csgroup.eu
2023-08-16 23:54:47 +10:00
Christophe Leroy
e6e077cb2a powerpc/include: Declare mpc8xx_immr in 8xx_immap.h
Do the same as for cpm2_immr: declare it at the same place
as its type immap_t, that is in 8xx_immap.h instead of fs_pd.h.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/62d490b65899c2f2667ca7045c5f0fad9cbda458.1691474658.git.christophe.leroy@csgroup.eu
2023-08-16 23:54:47 +10:00
Christophe Leroy
60bc069c43 powerpc/include: Remove unneeded #include <asm/fs_pd.h>
tqm8xx_setup.c and fs_enet.h don't use any items provided by fs_pd.h

Remove unneeded #include <asm/fs_pd.h>

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/b056c4e986a4a7707fc1994304c34f7bd15d6871.1691474658.git.christophe.leroy@csgroup.eu
2023-08-16 23:54:47 +10:00
Zheng Zengkai
075a88d5eb ocxl: Use pci_dev_id() to simplify the code
The PCI core API pci_dev_id() can be used to get the BDF number for a PCI
device. We don't need to compose it manually. Use pci_dev_id() to
simplify the code a little bit.
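
Schematically (a sketch, not the exact hunk):

  #include <linux/pci.h>

  static u16 example_bdf(struct pci_dev *dev)
  {
          /* was: PCI_DEVID(dev->bus->number, dev->devfn) composed by hand */
          return pci_dev_id(dev);
  }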

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Acked-by: Andrew Donnellan <ajd@linux.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230811102039.17257-1-zhengzengkai@huawei.com
2023-08-16 23:54:47 +10:00
Arnd Bergmann
cd50430ceb macintosh/ams: mark ams_init() static
This is the module init function, which by definition is used only
locally, so mark it static to avoid a warning:

drivers/macintosh/ams/ams-core.c:179:12: error: no previous prototype for 'ams_init' [-Werror=missing-prototypes]
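
A minimal sketch of the change; module_init() only needs the symbol locally:

  #include <linux/module.h>

  static int __init ams_init(void)
  {
          /* probe/registration as before */
          return 0;
  }
  module_init(ams_init);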

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230810141947.1236730-7-arnd@kernel.org
2023-08-14 21:54:04 +10:00
Randy Dunlap
506e550a7d powerpc/pseries: PLPKS: undo kernel-doc comment notation
Don't use kernel-doc "/**" comment format for non-kernel-doc comments.
This prevents a kernel-doc warning:

  arch/powerpc/platforms/pseries/plpks.c:186: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
  * Label is combination of label attributes + name.
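
The fix is simply to open the comment with '/*' instead of '/**', e.g.:

  /*
   * Label is combination of label attributes + name.
   */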

Fixes: 2454a7af0f ("powerpc/pseries: define driver for Platform KeyStore")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: kernel test robot <lkp@intel.com>
Link: lore.kernel.org/r/202308040430.GxmPAnwZ-lkp@intel.com
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230810000740.23756-1-rdunlap@infradead.org
2023-08-14 21:54:04 +10:00
Christophe Leroy
4a9dd8f292 powerpc/radix: Move some functions into #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
With skiboot_defconfig, Clang reports:

  CC      arch/powerpc/mm/book3s64/radix_tlb.o
arch/powerpc/mm/book3s64/radix_tlb.c:419:20: error: unused function '_tlbie_pid_lpid' [-Werror,-Wunused-function]
static inline void _tlbie_pid_lpid(unsigned long pid, unsigned long lpid,
                   ^
arch/powerpc/mm/book3s64/radix_tlb.c:663:20: error: unused function '_tlbie_va_range_lpid' [-Werror,-Wunused-function]
static inline void _tlbie_va_range_lpid(unsigned long start, unsigned long end,
                   ^

This is because those functions are only called from functions
enclosed in #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE.

Move the following functions inside that #ifdef:
* __tlbie_pid_lpid(unsigned long pid,
* __tlbie_va_lpid(unsigned long va, unsigned long pid,
* fixup_tlbie_pid_lpid(unsigned long pid, unsigned long lpid)
* _tlbie_pid_lpid(unsigned long pid, unsigned long lpid,
* fixup_tlbie_va_range_lpid(unsigned long va,
* __tlbie_va_range_lpid(unsigned long start, unsigned long end,
* _tlbie_va_range_lpid(unsigned long start, unsigned long end,

Fixes: f0c6fbbb90 ("KVM: PPC: Book3S HV: Add support for H_RPT_INVALIDATE")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202307260802.Mjr99P5O-lkp@intel.com/
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/3d72efd39f986ee939d068af69fdce28bd600766.1691568093.git.christophe.leroy@csgroup.eu
2023-08-14 21:54:04 +10:00
Arnd Bergmann
ef73dcaa31 powerpc: xmon: remove unused variables
Randconfig testing with W=1 showed up these warnings that I'd like to enable
by default:

arch/powerpc/xmon/xmon.c: In function 'dump_tlb_book3e':
arch/powerpc/xmon/xmon.c:3833:42: error: variable 'lrat' set but not used [-Werror=unused-but-set-variable]
 3833 |  int i, tlb, ntlbs, pidsz, lpidsz, rasz, lrat = 0;
      |                                          ^~~~
arch/powerpc/xmon/xmon.c:3831:23: error: variable 'lpidmask' set but not used [-Werror=unused-but-set-variable]
 3831 |  u32 mmucfg, pidmask, lpidmask;
      |                       ^~~~~~~~
arch/powerpc/xmon/xmon.c:3831:14: error: variable 'pidmask' set but not used [-Werror=unused-but-set-variable]
 3831 |  u32 mmucfg, pidmask, lpidmask;
      |              ^~~~~~~

Just remove these as they have been unused since the code was added in 2010.

Fixes: 03247157f7 ("powerpc/book3e: Add TLB dump in xmon for Book3E")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230809131024.2039647-2-arnd@kernel.org
2023-08-14 21:54:04 +10:00
Arnd Bergmann
0f7ce21ab5 powerpc: mark more local variables as volatile
A while ago I created a2305e3de8 ("powerpc: mark local variables
around longjmp as volatile") in order to allow building powerpc with
-Wextra enabled on gcc-11.

I tried this again with gcc-13 and found two more of the same issues,
presumably based on slightly different optimization paths being taken
here:

arch/powerpc/xmon/xmon.c:3306:27: error: variable 'mm' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]
arch/powerpc/kexec/crash.c:353:22: error: variable 'i' might be clobbered by 'longjmp' or 'vfork' [-Werror=clobbered]

I checked a bunch of randconfigs and found only these two, so just
address them the same way as the others.
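
The pattern being applied is the classic setjmp/longjmp one; a generic
userspace sketch with illustrative names:

  #include <setjmp.h>

  static jmp_buf bus_error_jmp;

  void probe_address(int idx);    /* illustrative; may longjmp() on a fault */

  int count_accessible(int n)
  {
          /* volatile: the value must survive a longjmp() back to setjmp(),
           * which is also what silences gcc's -Wclobbered warning */
          volatile int i = 0;

          if (setjmp(bus_error_jmp) == 0) {
                  for (i = 0; i < n; i++)
                          probe_address(i);
          }
          return i;
  }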

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230809131024.2039647-1-arnd@kernel.org
2023-08-14 21:54:04 +10:00
Yuan Tan
0b5e06e9cb powerpc/pmac32: enable serial options by default in defconfig
Serial is a critical feature for logging and debugging, and the other
architectures enable serial by default.

Let's enable CONFIG_SERIAL_PMACZILOG and CONFIG_SERIAL_PMACZILOG_CONSOLE
by default.
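
That is, the pmac32 defconfig gains something like:

  CONFIG_SERIAL_PMACZILOG=y
  CONFIG_SERIAL_PMACZILOG_CONSOLE=y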

Signed-off-by: Yuan Tan <tanyuan@tinylab.org>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/bb7b5f9958b3e3a20f6573ff7ce7c5dc566e7e32.1690982937.git.tanyuan@tinylab.org
2023-08-14 21:54:04 +10:00
Michael Ellerman
15f63e306d Merge branch 'topic/cpu-smt' into next
Merge SMT changes we are sharing with the tip tree.
2023-08-14 21:46:03 +10:00
Michael Ellerman
d1099e2276 powerpc/pseries: Honour current SMT state when DLPAR onlining CPUs
Integrate with the generic SMT support, so that when a CPU is DLPAR
onlined it is brought up with the correct SMT mode.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230705145143.40545-11-ldufour@linux.ibm.com
2023-08-02 22:49:46 +10:00
Michael Ellerman
73c58e7e14 powerpc: Add HOTPLUG_SMT support
Add support for HOTPLUG_SMT, which enables the generic sysfs SMT support
files in /sys/devices/system/cpu/smt, as well as the "nosmt" boot
parameter.

Implement the recently added hooks to allow partial SMT states, allowing
any number of threads per core.

Tie the config symbol to HOTPLUG_CPU, which enables it on the major
platforms that support SMT. If other platforms want the SMT support,
that can be tweaked in future.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
[ldufour: remove topology_smt_supported]
[ldufour: remove topology_smt_threads_supported]
[ldufour: select CONFIG_SMT_NUM_THREADS_DYNAMIC]
[ldufour: update kernel-parameters.txt]
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Link: https://msgid.link/20230705145143.40545-10-ldufour@linux.ibm.com
2023-08-02 22:49:43 +10:00
Michael Ellerman
3b3a4d0fe5 powerpc/pseries: Initialise CPU hotplug callbacks earlier
As part of the generic HOTPLUG_SMT code, there is support for disabling
secondary SMT threads at boot time, by passing "nosmt" on the kernel
command line.

The way that is implemented is that the secondary threads are brought
partly online, and then taken back offline again. That is done to support
x86 CPUs that need certain initialisation done on all threads. However,
powerpc has similar needs; see commit d70a54e2d0 ("powerpc/powernv: Ignore
smt-enabled on Power8 and later").

For that to work the powerpc CPU hotplug callbacks need to be registered
before secondary CPUs are brought online, otherwise __cpu_disable()
fails due to smp_ops->cpu_disable being NULL.

So split the basic initialisation into pseries_cpu_hotplug_init() which
can be called early from setup_arch(). The DLPAR related initialisation
can still be done later, because it needs to do allocations.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230705145143.40545-9-ldufour@linux.ibm.com
2023-08-02 22:43:42 +10:00
Michael Ellerman
d82e6762b0 SMP core changes for powerpc
-----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAmTDdHMTHHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYoawDEACf7tZeCfwnkEVzBO45mKujBstQyV+v
 yATR7YRFG3Li9VaRP/rOrFxh+UPM2YbvQCgcT2BECl5gkBm7j/jgTiL47atN5VCo
 Sg9fDzjkf6+A1ac62kriQT0EyhYYN/V6DxByhqSLvxev9626ge93wBKmChohGOgj
 j66PEg5z+Luw9aTdyKm3cEuXGYyCmdWTgZ6z4f4VI+rVBXFUMiZl28eMlLH+eyVM
 +ntkDoNw00TqypSrcSr8WEzJc3SNAqUsLOW+Mogm4AJK8sKeFtdkjCLFDcV1dLmZ
 9Ak8OxeQGNhh538yCLYiNDu57xpQffS4E2t+Pusv8ZHc6ww5J8vsPq3p9323LbyF
 FXcREHM8W5cKLdy4hzqeqKx2YK4shquYXH10WIbVQXqbxMUD5wGCYuQTS12V+QW0
 1Ut48/RS3a6kaKvX475enFrR/wQNSTJkM+ajFSGY3eSNp9v9EmTepw3aY8RXw+Gr
 QRxCuM4MgaUazJ0RtbjbzSRK2r8MObj+aurfvMQZGFmkkw0DiUZNOnb5VPZSrqb+
 laNQKKAdG99NlTNyvLSSgYTX+FM0AbqKVWnIO4b+3wMvYxfiENKDWVzdQWksNz6U
 J5j7qDqZQ45kEow0Xn0lKxz1zKdcJLZPFjExrgYkRTaRp7VwcQ8st2Lbj1XVk7Wr
 wqXouAvhBSqWEA==
 =jF0X
 -----END PGP SIGNATURE-----

Merge tag 'smp-core-for-ppc-23-07-28' of https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into topic/cpu-smt

SMP core changes for powerpc
2023-08-02 22:39:48 +10:00
Laurent Dufour
7f96539437 powerpc/kexec: fix minor typo
The function name in the description was not correct.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202307251721.bUGcsCeQ-lkp@intel.com/
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230725132759.53975-1-ldufour@linux.ibm.com
2023-08-02 22:22:19 +10:00
Uwe Kleine-König
bbfa509d63 powerpc/ep8248e: Mark driver as non removable
Instead of resorting to BUG(), ensure that the driver isn't unbound by
suppressing its bind and unbind sysfs attributes. As the driver is
built-in, there is no way to remove a device once bound.

As a nice side effect, this allows dropping the remove function.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230726081442.461026-1-u.kleine-koenig@pengutronix.de
2023-08-02 22:22:19 +10:00
Arnd Bergmann
54f30b83fe powerpc: address missing-prototypes warnings
There are a few warnings in powerpc64 defconfig builds after -Wmissing-prototypes
gets promoted from W=1 to the default warning set:

arch/powerpc/mm/book3s64/pgtable.c:422:6: error: no previous prototype for 'arch_report_meminfo' [-Werror=missing-prototypes]
arch/powerpc/platforms/cell/ras.c:275:5: error: no previous prototype for 'cbe_sysreset_hack' [-Werror=missing-prototypes]
arch/powerpc/platforms/cell/spu_manage.c:29:21: error: no previous prototype for 'spu_devnode' [-Werror=missing-prototypes]
arch/powerpc/platforms/pasemi/time.c:12:17: error: no previous prototype for 'pas_get_boot_time' [-Werror=missing-prototypes]
arch/powerpc/platforms/powermac/feature.c:1532:13: error: no previous prototype for 'g5_phy_disable_cpu1' [-Werror=missing-prototypes]
arch/powerpc/platforms/86xx/pic.c:28:13: error: no previous prototype for 'mpc86xx_init_irq' [-Werror=missing-prototypes]
drivers/pci/pci-sysfs.c:936:13: error: no previous prototype for 'pci_adjust_legacy_attr' [-Werror=missing-prototypes]

Address these by including the right header files or marking the
functions static. The audit.c one is a bit tricky since compat_audit.h
cannot include regular kernel headers that have conflicting types on
32-bit powerpc.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[mpe: Drop change to __vmemmap_free() which only exists in mm]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230727122720.2558065-1-arnd@kernel.org
2023-08-02 22:22:19 +10:00
Benjamin Gray
c3062ede99 selftests/powerpc/ptrace: Declare test temporary variables as volatile
While the target is volatile, the temporary variables used to access the
target cast away the volatile. This is undefined behaviour, and a
compiler may optimise away/reorder these accesses, breaking the test.

This was observed with GCC 13.1.1, but it can be difficult to reproduce
because of the dependency on compiler behaviour.
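
Schematically, with hypothetical names, the problem and fix look like:

  volatile unsigned long target;          /* the watched variable */

  /* before: the cast drops the qualifier, so accesses through the pointer
   * are undefined and may be optimised away or reordered */
  unsigned long *bad = (unsigned long *)&target;

  /* after: keep the volatile qualifier on the temporary as well */
  volatile unsigned long *good = &target;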

Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230725005841.28854-5-bgray@linux.ibm.com
2023-08-02 22:22:19 +10:00
Benjamin Gray
fc6732a855 selftests/powerpc/ptrace: Fix typo in pid_max search error
pid_max_addr() searches for the 'pid_max' symbol in /proc/kallsyms, and
prints an error if it cannot find it. The error message has a typo,
calling it pix_max.

Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230725005841.28854-4-bgray@linux.ibm.com
2023-08-02 22:22:19 +10:00
Benjamin Gray
68877ff20a selftests/powerpc/ptrace: Explain why tests are skipped
Many tests require specific hardware features/configurations that a
typical machine might not have. As a result, it's common to see a test
is skipped. But it is tedious to find out why a test is skipped
when all it gives is the file location of the skip macro.

Convert SKIP_IF() to SKIP_IF_MSG(), with appropriate descriptions of why
the test is being skipped. This gives a general idea of why a test is
skipped, which can be looked into further if it doesn't make sense.
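
For example (hypothetical condition and message):

  SKIP_IF_MSG(!have_hwcap2(PPC_FEATURE2_ARCH_3_1),
              "Requires an ISA v3.1 (POWER10) CPU");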

Signed-off-by: Benjamin Gray <bgray@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230725005841.28854-3-bgray@linux.ibm.com
2023-08-02 22:22:19 +10:00
Rob Herring
81d7cac4d1 powerpc: Explicitly include correct DT includes
The DT of_device.h and of_platform.h date back to the separate
of_platform_bus_type before it was merged into the regular platform bus.
As part of that merge prepping Arm DT support 13 years ago, they
"temporarily" include each other. They also include platform_device.h
and of.h. As a result, there's a pretty much random mix of those include
files used throughout the tree. In order to detangle these headers and
replace the implicit includes with struct declarations, users need to
explicitly include the correct includes.

Signed-off-by: Rob Herring <robh@kernel.org>
[mpe: Fixup maple/setup.c which needs platform_device]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230724210247.778034-1-robh@kernel.org
2023-08-02 22:22:19 +10:00
Nicholas Piggin
e43c0a0c3c powerpc/64s/radix: combine final TLB flush and lazy tlb mm shootdown IPIs
This performs lazy tlb mm shootdown when doing the exit TLB flush when
all mm users go away and user mappings are removed, which avoids having
to do the lazy tlb mm shootdown IPIs on the final mmput when all kernel
references disappear.

powerpc/64s uses a broadcast TLBIE for the exit TLB flush if remote CPUs
need to be invalidated (unless TLBIE is disabled), so this doesn't
necessarily save IPIs but it does avoid a broadcast TLBIE which is quite
expensive.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Squash in preempt_disable/enable() fix from Nick]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230524060821.148015-5-npiggin@gmail.com
2023-08-02 22:22:19 +10:00
Nicholas Piggin
177255afb4 powerpc: Add mm_cpumask warning when context switching
When context switching away from an mm, add a CONFIG_DEBUG_VM warning
check to ensure this CPU is still set in the mask. This could catch
bugs where the mask is improperly trimmed while the CPU is still using
the mm.
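
A minimal sketch of the kind of check, placed in the switch-away path (not
the exact hunk):

  #include <linux/mmdebug.h>
  #include <linux/sched/mm.h>
  #include <linux/smp.h>

  static inline void check_prev_mm_cpumask(struct mm_struct *prev)
  {
          /* DEBUG_VM only: this CPU must still be in the mask of the mm we
           * are switching away from */
          VM_WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), mm_cpumask(prev)));
  }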

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230524060821.148015-4-npiggin@gmail.com
2023-08-02 22:22:19 +10:00
Nicholas Piggin
f74b2a6c01 powerpc/64s: Use dec_mm_active_cpus helper
Avoid open-coded atomic_dec on mm->context.active_cpus and use the
function made for it. Add CONFIG_DEBUG_VM underflow checking on the
counter.
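
Roughly, with the field names given above (the exact context is
powerpc-specific):

  static inline void dec_mm_active_cpus(struct mm_struct *mm)
  {
          /* DEBUG_VM only: catch the counter underflowing */
          VM_WARN_ON_ONCE(atomic_read(&mm->context.active_cpus) <= 0);
          atomic_dec(&mm->context.active_cpus);
  }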

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230524060821.148015-3-npiggin@gmail.com
2023-08-02 22:22:18 +10:00
Nicholas Piggin
c3c2e93753 powerpc: Account mm_cpumask and active_cpus in init_mm
init_mm's mm_cpumask and context.active_cpus are not maintained at boot
and hotplug time. This seems to be harmless because init_mm does not have
a userspace and so never gets user TLBs flushed, but it looks odd and it
prevents some sanity checks from being added.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230524060821.148015-2-npiggin@gmail.com
2023-08-02 22:22:18 +10:00
Michael Ellerman
ab48181791 powerpc/64: Enable accelerated crypto algorithms in defconfig
Enable all the accelerated crypto algorithms as modules in the 64-bit
defconfig, to get more test coverage.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230717115223.286158-1-mpe@ellerman.id.au
2023-08-02 22:22:18 +10:00
Omar Sandoval
026fa6c52d powerpc/crypto: don't build aes-gcm-p10 by default
None of the other accelerated crypto modules are built by default.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/40d9c7ebe82c9a9d4ace542ac433753d2f22c6a0.1689007370.git.osandov@osandov.com
2023-08-02 22:22:18 +10:00
Omar Sandoval
9d6e1c21e1 powerpc/crypto: fix missing skcipher dependency for aes-gcm-p10
My stripped down configuration fails to build with:

  ERROR: modpost: "skcipher_walk_aead_encrypt" [arch/powerpc/crypto/aes-gcm-p10-crypto.ko] undefined!
  ERROR: modpost: "skcipher_walk_done" [arch/powerpc/crypto/aes-gcm-p10-crypto.ko] undefined!
  ERROR: modpost: "skcipher_walk_aead_decrypt" [arch/powerpc/crypto/aes-gcm-p10-crypto.ko] undefined!

Fix it by selecting CRYPTO_SKCIPHER.
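
That is, in arch/powerpc/crypto/Kconfig, something along these lines (the
option name is assumed here):

  config CRYPTO_AES_GCM_P10
          tristate "..."                  # existing entry, prompt elided
          select CRYPTO_SKCIPHER          # provides the skcipher_walk_* symbols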

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/c55ad70799e027a3d2756b85ccadc0af52ae8915.1689007370.git.osandov@osandov.com
2023-08-02 22:22:18 +10:00
Christophe Leroy
3a24ea0df8 powerpc/kuap: Use ASM feature fixups instead of static branches
To avoid a useless nop on top of every uaccess enable/disable and to
make life easier for objtool, replace static branches with ASM feature
fixups that nop out the KUAP enabling instructions in the unlikely
case that KUAP is disabled at boot time.

Leave it as is on book3s/64 for now; it will be handled later when
objtool is activated on PPC64.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/671948788024fd890ec4ed175bc332dab8664ea5.1689091022.git.christophe.leroy@csgroup.eu
2023-08-02 22:22:18 +10:00
Christophe Leroy
eb52f66f0a powerpc/kuap: KUAP enabling/disabling functions must be __always_inline
Objtool reports the following warnings:

  arch/powerpc/kernel/signal_32.o: warning: objtool:
    __prevent_user_access.constprop.0+0x4 (.text+0x4):
    redundant UACCESS disable

  arch/powerpc/kernel/signal_32.o: warning: objtool: user_access_begin+0x2c
    (.text+0x4c): return with UACCESS enabled

  arch/powerpc/kernel/signal_32.o: warning: objtool: handle_rt_signal32+0x188
    (.text+0x360): call to __prevent_user_access.constprop.0() with UACCESS enabled

  arch/powerpc/kernel/signal_32.o: warning: objtool: handle_signal32+0x150
    (.text+0x4d4): call to __prevent_user_access.constprop.0() with UACCESS enabled

This is due to some KUAP enabling/disabling functions being emitted
out-of-line although they are marked inline. Use __always_inline instead.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/ca5e50ddbec3867db5146ebddbc9a1dc0e443bc8.1689091022.git.christophe.leroy@csgroup.eu
2023-08-02 22:22:18 +10:00
Christophe Leroy
5222a1d514 powerpc/kuap: Simplify KUAP lock/unlock on BOOK3S/32
On book3s/32 KUAP is performed at segment level. At the moment,
when enabling userspace access, only the current segment is modified.
Then if a write is performed on another user segment, a fault is
taken and all other user segments get enabled for userspace
access. This then requires special attention when disabling
userspace access.

Having a userspace write access crossing a segment boundary is
unlikely. Having a userspace write access crossing a segment boundary
back and forth is even more unlikely. So, instead of enabling
userspace access on all segments when a write fault occurs, just
change which segment has userspace access enabled in order to
eliminate the case when more than one segment has userspace access
enabled. That simplifies userspace access deactivation.

There is, however, a corner case which is even more unlikely but has
to be handled anyway: an unaligned access crossing a segment boundary.
That would require userspace access to be enabled on both segments at
once. To avoid complicating the likely case for such an unlikely
occurrence, handle that situation like an alignment exception and
emulate the store.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/8de8580513c1a6e880bad1ba9a69d3efad3d4fa5.1689091022.git.christophe.leroy@csgroup.eu
2023-08-02 22:22:18 +10:00
Christophe Leroy
26e0412082 powerpc/kuap: Use MMU_FTR_KUAP on all and refactor disabling kuap
All but book3s/64 use a static branch key for disabling KUAP;
book3s/64 uses an MMU feature.

Refactor all targets to use MMU_FTR_KUAP like book3s/64.

For PPC32 that implies updating the MMU feature fixups once KUAP
has been initialised.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/6b3d7c977bad73378ea368bc6818e9c94ea95ab0.1689091022.git.christophe.leroy@csgroup.eu
2023-08-02 22:22:18 +10:00
Christophe Leroy
4589a2b789 powerpc/kuap: MMU_FTR_BOOK3S_KUAP becomes MMU_FTR_KUAP
In order to reuse MMU_FTR_BOOK3S_KUAP for targets other than BOOK3S,
rename it to MMU_FTR_KUAP.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/c8b6f7b8cd0eeaace96879ed0e0a157faa619451.1689091022.git.christophe.leroy@csgroup.eu
2023-08-02 22:22:18 +10:00
Christophe Leroy
6b289911c8 powerpc/features: Add capability to update mmu features later
On powerpc32, feature fixups are performed very early, which is too
early to read the command line and take the 'nosmap' parameter into account.

On the other hand, no userspace access is performed that early, so the
KUAP feature fixup can be performed later.

Add a function to update MMU features. The function is passed a
mask with the features that can be updated.
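
Its shape is roughly the following; the name and signature are assumed here
for illustration:

  /* re-run the MMU feature section fixups, but only for the bits in @mask,
   * once the command line has been parsed */
  void update_mmu_feature_fixups(unsigned long mask);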

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/31b27ee2c9d338f4f82cd8cd69d6bff979495290.1689091022.git.christophe.leroy@csgroup.eu
2023-08-02 22:22:17 +10:00
Christophe Leroy
38bb171b95 powerpc/kuap: Fold kuep_is_disabled() into its only user
kuep_is_disabled() was introduced by commit 91bb30822a ("powerpc/32s:
Refactor update of user segment registers") but then all users but one
were removed by commit 526d4a4c77 ("powerpc/32s: Do kuep_lock() and
kuep_unlock() in assembly").

Fold kuep_is_disabled() into init_new_context() which is its only user.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/b2247147c0a8c830ac82966451647850df4a64da.1689091022.git.christophe.leroy@csgroup.eu
2023-08-02 22:22:17 +10:00
Christophe Leroy
1bec4adcd5 powerpc/kuap: Avoid useless jump_label on empty function
Disassembly of interrupt_enter_prepare() shows a pointless nop
before the mftb:

  c000abf0 <interrupt_enter_prepare>:
  c000abf0:       81 23 00 84     lwz     r9,132(r3)
  c000abf4:       71 29 40 00     andi.   r9,r9,16384
  c000abf8:       41 82 00 28     beq-    c000ac20 <interrupt_enter_prepare+0x30>
  c000abfc: ===>  60 00 00 00     nop	<====
  c000ac00:       7d 0c 42 e6     mftb    r8
  c000ac04:       80 e2 00 08     lwz     r7,8(r2)
  c000ac08:       81 22 00 28     lwz     r9,40(r2)
  c000ac0c:       91 02 00 24     stw     r8,36(r2)
  c000ac10:       7d 29 38 50     subf    r9,r9,r7
  c000ac14:       7d 29 42 14     add     r9,r9,r8
  c000ac18:       91 22 00 08     stw     r9,8(r2)
  c000ac1c:       4e 80 00 20     blr
  c000ac20:       60 00 00 00     nop
  c000ac24:       7d 5a c2 a6     mfmd_ap r10
  c000ac28:       3d 20 de 00     lis     r9,-8704
  c000ac2c:       91 43 00 b0     stw     r10,176(r3)
  c000ac30:       7d 3a c3 a6     mtspr   794,r9
  c000ac34:       4e 80 00 20     blr

That comes from the call to kuap_lock(), although __kuap_lock() is an
empty function on the 8xx.

To avoid that, only perform the kuap_is_disabled() check when __kuap_lock()
actually has something to do.

Do the same with __kuap_save_and_lock() and
__kuap_get_and_assert_locked().
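
A hypothetical sketch of the idea; the guard below is made up, the real
patch keys off whether the platform provides a non-empty __kuap_lock():

  #ifdef PLATFORM_HAS_KUAP_LOCK           /* pseudo-guard, for illustration */
  static __always_inline void kuap_lock(void)
  {
          if (!kuap_is_disabled())
                  __kuap_lock();          /* only test when there is work to do */
  }
  #else
  static __always_inline void kuap_lock(void) { }   /* compiles to nothing */
  #endif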

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/a854d25bea375d4ba6ca9c2617f9edbba397100a.1689091022.git.christophe.leroy@csgroup.eu
2023-08-02 22:22:17 +10:00
Christophe Leroy
880df2d46a powerpc/kuap: Avoid unnecessary reads of MD_AP
A disassembly of interrupt_exit_kernel_prepare() shows a useless read
of the MD_AP register. This is shown by r9 being immediately overwritten
without the value read ever being used.

  c000e0e0:       60 00 00 00     nop
  c000e0e4: ===>  7d 3a c2 a6     mfmd_ap r9	<====
  c000e0e8:       7d 20 00 a6     mfmsr   r9
  c000e0ec:       7c 51 13 a6     mtspr   81,r2
  c000e0f0:       81 3f 00 84     lwz     r9,132(r31)
  c000e0f4:       71 29 80 00     andi.   r9,r9,32768

kuap_get_and_assert_locked() is paired with kuap_kernel_restore(),
and both are only used in interrupt_exit_kernel_prepare(). The value
returned by kuap_get_and_assert_locked() is only used by
kuap_kernel_restore().

On 8xx, kuap_kernel_restore() doesn't use the value read by
kuap_get_and_assert_locked() so modify kuap_get_and_assert_locked()
to not perform the read of MD_AP and return 0 instead.

The same applies on BOOKE.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/bcbc84c2dd90bb1021da792b1968cdc22112dad8.1689091022.git.christophe.leroy@csgroup.eu
2023-08-02 22:22:17 +10:00
Michael Ellerman
7f48405c3c cpu/SMT: Allow enabling partial SMT states via sysfs
Add support to the /sys/devices/system/cpu/smt/control interface for
enabling a specified number of SMT threads per core, including partial
SMT states where not all threads are brought online.

The current interface accepts "on" and "off", to enable either 1 or all
SMT threads per core.

This commit allows writing an integer between 1 and the number of SMT
threads supported by the machine. Writing 1 is a synonym for "off"; writing
2 or more enables SMT with the specified number of threads.

When reading the file, if all threads are online "on" is returned, to
avoid changing behaviour for existing users. If some other number of
threads is online then the integer value is returned.

Architectures like x86, which only support 1 thread or all threads, should
not define CONFIG_SMT_NUM_THREADS_DYNAMIC. Architectures supporting partial
SMT states, like PowerPC, should define it.
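
Usage of the interface then looks like this (values are illustrative):

  # cat /sys/devices/system/cpu/smt/control
  on
  # echo 4 > /sys/devices/system/cpu/smt/control    # 4 threads per core
  # echo 1 > /sys/devices/system/cpu/smt/control    # synonym for "off"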

[ ldufour: Slightly reword the commit's description ]
[ ldufour: Remove switch() in __store_smt_control() ]
[ ldufour: Fix build issue in control_show() ]

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Zhang Rui <rui.zhang@intel.com>
Link: https://lore.kernel.org/r/20230705145143.40545-8-ldufour@linux.ibm.com
2023-07-28 09:53:37 +02:00
Michael Ellerman
38253464bc cpu/SMT: Create topology_smt_thread_allowed()
Some architectures allow partial SMT states, i.e. when not all SMT threads
are brought online.

To support that, add an architecture helper which checks whether a given
CPU is allowed to be brought online depending on how many SMT threads are
currently enabled. Since this is only applicable to architectures supporting
partial SMT, only these architectures should select the new configuration
variable CONFIG_SMT_NUM_THREADS_DYNAMIC. For the other architectures, which
do not support partial SMT states, there is no need to define
topology_smt_thread_allowed(); the generic code assumes that either all
threads or only the primary ones are allowed.

Call the helper from cpu_smt_enable(), and cpu_smt_allowed() when SMT is
enabled, to check if the particular thread should be onlined. Notably,
also call it from cpu_smt_disable() if CPU_SMT_ENABLED, to allow
offlining some threads to move from a higher to lower number of threads
online.

[ ldufour: Slightly reword the commit's description ]
[ ldufour: Introduce CONFIG_SMT_NUM_THREADS_DYNAMIC ]

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Zhang Rui <rui.zhang@intel.com>
Link: https://lore.kernel.org/r/20230705145143.40545-7-ldufour@linux.ibm.com
2023-07-28 09:53:37 +02:00
Laurent Dufour
91b4a7dbfe cpu/SMT: Remove topology_smt_supported()
Since the maximum number of threads is now passed to cpu_smt_set_num_threads(),
checking that value is enough to know whether SMT is supported.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Zhang Rui <rui.zhang@intel.com>
Link: https://lore.kernel.org/r/20230705145143.40545-6-ldufour@linux.ibm.com
2023-07-28 09:53:37 +02:00