45831 Commits

Linus Torvalds
685d982112 Core x86 changes for v6.9:
- The biggest change is the rework of the percpu code,
   to support the 'Named Address Spaces' GCC feature,
   by Uros Bizjak:
 
    - This allows C code to access GS and FS segment relative
      memory via variables declared with such attributes,
      which allows the compiler to better optimize those accesses
      than the previous inline assembly code.
 
    - The series also includes a number of micro-optimizations
      for various percpu access methods, plus a number of
      cleanups of %gs accesses in assembly code.
 
    - These changes have been exposed to linux-next testing for
      the last ~5 months, with no known regressions in this area.
 
 - Fix/clean up __switch_to()'s broken but accidentally
   working handling of FPU switching - which also generates
   better code.
 
 - Propagate more RIP-relative addressing in assembly code,
   to generate slightly better code.
 
 - Rework the CPU mitigations Kconfig space to be less idiosyncratic,
   to make it easier for distros to follow & maintain these options.
 
 - Rework the x86 idle code to cure RCU violations and
   to clean up the logic.
 
 - Clean up the vDSO Makefile logic.
 
 - Misc cleanups and fixes.
 
 [ Please note that there's a higher number of merge commits in
   this branch (three) than is usual in x86 topic trees. This happened
   due to the long testing lifecycle of the percpu changes that
   involved 3 merge windows, which generated a longer history
   and various interactions with other core x86 changes that we
   felt were better carried in a single branch. ]
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'x86-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core x86 updates from Ingo Molnar:

 - The biggest change is the rework of the percpu code, to support the
   'Named Address Spaces' GCC feature, by Uros Bizjak:

      - This allows C code to access GS and FS segment relative memory
        via variables declared with such attributes, which lets the
        compiler optimize those accesses better than the previous
        inline assembly code (a minimal sketch follows this list).

      - The series also includes a number of micro-optimizations for
        various percpu access methods, plus a number of cleanups of %gs
        accesses in assembly code.

      - These changes have been exposed to linux-next testing for the
        last ~5 months, with no known regressions in this area.

 - Fix/clean up __switch_to()'s broken but accidentally working handling
   of FPU switching - which also generates better code

 - Propagate more RIP-relative addressing in assembly code, to generate
   slightly better code

 - Rework the CPU mitigations Kconfig space to be less idiosyncratic, to
   make it easier for distros to follow & maintain these options

 - Rework the x86 idle code to cure RCU violations and to clean up the
   logic

 - Clean up the vDSO Makefile logic

 - Misc cleanups and fixes
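
 [ Illustration for the first item above: with GCC's named address
   spaces (GCC >= 6 on x86), a __seg_gs-qualified pointer turns a
   %gs-relative access into an ordinary memory operand the compiler can
   optimize freely. A minimal, self-contained user-space sketch with a
   hypothetical offset -- not kernel code: ]

    #include <stdint.h>

    static inline uint64_t read_gs_u64(uintptr_t offset)
    {
        /* Pointer into the GS-relative named address space. */
        const uint64_t __seg_gs *p = (const uint64_t __seg_gs *)offset;

        /*
         * Compiles to a single "movq %gs:(%reg), %reg". Because this
         * is a plain memory operand to the compiler, it can be CSEd,
         * folded into other instructions and scheduled -- none of
         * which is possible with an asm() based access.
         */
        return *p;
    }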

* tag 'x86-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (52 commits)
  x86/idle: Select idle routine only once
  x86/idle: Let prefer_mwait_c1_over_halt() return bool
  x86/idle: Cleanup idle_setup()
  x86/idle: Clean up idle selection
  x86/idle: Sanitize X86_BUG_AMD_E400 handling
  sched/idle: Conditionally handle tick broadcast in default_idle_call()
  x86: Increase brk randomness entropy for 64-bit systems
  x86/vdso: Move vDSO to mmap region
  x86/vdso/kbuild: Group non-standard build attributes and primary object file rules together
  x86/vdso: Fix rethunk patching for vdso-image-{32,64}.o
  x86/retpoline: Ensure default return thunk isn't used at runtime
  x86/vdso: Use CONFIG_COMPAT_32 to specify vdso32
  x86/vdso: Use $(addprefix ) instead of $(foreach )
  x86/vdso: Simplify obj-y addition
  x86/vdso: Consolidate targets and clean-files
  x86/bugs: Rename CONFIG_RETHUNK              => CONFIG_MITIGATION_RETHUNK
  x86/bugs: Rename CONFIG_CPU_SRSO             => CONFIG_MITIGATION_SRSO
  x86/bugs: Rename CONFIG_CPU_IBRS_ENTRY       => CONFIG_MITIGATION_IBRS_ENTRY
  x86/bugs: Rename CONFIG_CPU_UNRET_ENTRY      => CONFIG_MITIGATION_UNRET_ENTRY
  x86/bugs: Rename CONFIG_SLS                  => CONFIG_MITIGATION_SLS
  ...
2024-03-11 19:53:15 -07:00
Linus Torvalds
fcc196579a Misc cleanups, including a large series from Thomas Gleixner to
cure Sparse warnings.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'x86-cleanups-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 cleanups from Ingo Molnar:
 "Misc cleanups, including a large series from Thomas Gleixner to cure
  sparse warnings"

* tag 'x86-cleanups-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/nmi: Drop unused declaration of proc_nmi_enabled()
  x86/callthunks: Use EXPORT_PER_CPU_SYMBOL_GPL() for per CPU variables
  x86/cpu: Provide a declaration for itlb_multihit_kvm_mitigation
  x86/cpu: Use EXPORT_PER_CPU_SYMBOL_GPL() for x86_spec_ctrl_current
  x86/uaccess: Add missing __force to casts in __access_ok() and valid_user_address()
  x86/percpu: Cure per CPU madness on UP
  smp: Consolidate smp_prepare_boot_cpu()
  x86/msr: Add missing __percpu annotations
  x86/msr: Prepare for including <linux/percpu.h> into <asm/msr.h>
  perf/x86/amd/uncore: Fix __percpu annotation
  x86/nmi: Remove an unnecessary IS_ENABLED(CONFIG_SMP)
  x86/apm_32: Remove dead function apm_get_battery_status()
  x86/insn-eval: Fix function param name in get_eff_addr_sib()
2024-03-11 19:37:56 -07:00
Linus Torvalds
d69ad12c78 x86/build changes for v6.9:
- Reduce <asm/bootparam.h> dependencies
 - Simplify <asm/efi.h>
 - Unify *_setup_data definitions into <asm/setup_data.h>
 - Reduce the size of <asm/bootparam.h>
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'x86-build-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 build updates from Ingo Molnar:

 - Reduce <asm/bootparam.h> dependencies

 - Simplify <asm/efi.h>

 - Unify *_setup_data definitions into <asm/setup_data.h>

 - Reduce the size of <asm/bootparam.h>

* tag 'x86-build-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86: Do not include <asm/bootparam.h> in several files
  x86/efi: Implement arch_ima_efi_boot_mode() in source file
  x86/setup: Move internal setup_data structures into setup_data.h
  x86/setup: Move UAPI setup structures into setup_data.h
2024-03-11 19:23:16 -07:00
Linus Torvalds
73f0d1d7b4 Two changes to simplify the x86 decoder logic a bit.
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'x86-asm-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 asm updates from Ingo Molnar:
 "Two changes to simplify the x86 decoder logic a bit"

* tag 'x86-asm-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/insn: Directly assign x86_64 state in insn_init()
  x86/insn: Remove superfluous checks from instruction decoding routines
2024-03-11 19:13:06 -07:00
Linus Torvalds
a5b1a017cb Locking changes for v6.9:
- Micro-optimize local_xchg() and the rtmutex code on x86
 
 - Fix percpu-rwsem contention tracepoints
 
 - Simplify debugging Kconfig dependencies
 
 - Update/clarify the documentation of atomic primitives
 
 - Misc cleanups
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'locking-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:

 - Micro-optimize local_xchg() and the rtmutex code on x86 (a sketch of
   the local_xchg() idea follows this list)

 - Fix percpu-rwsem contention tracepoints

 - Simplify debugging Kconfig dependencies

 - Update/clarify the documentation of atomic primitives

 - Misc cleanups
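
 [ A sketch of the local_xchg() micro-optimization named in the first
   item: a CPU-local exchange only has to be atomic against interrupts
   on the same CPU, not against other CPUs, so it can be built from
   CMPXCHG without the LOCK prefix instead of XCHG, which carries an
   implicit LOCK on x86. The shape below uses the kernel's existing
   local_try_cmpxchg(); the actual patch may differ in detail: ]

    #include <asm/local.h>

    static inline long local_xchg_sketch(local_t *l, long new)
    {
        long old = local_read(l);

        /* Non-LOCKed CMPXCHG retry loop. */
        do { } while (!local_try_cmpxchg(l, &old, new));

        return old;
    }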

* tag 'locking-core-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/rtmutex: Use try_cmpxchg_relaxed() in mark_rt_mutex_waiters()
  locking/x86: Implement local_xchg() using CMPXCHG without the LOCK prefix
  locking/percpu-rwsem: Trigger contention tracepoints only if contended
  locking/rwsem: Make DEBUG_RWSEMS and PREEMPT_RT mutually exclusive
  locking/rwsem: Clarify that RWSEM_READER_OWNED is just a hint
  locking/mutex: Simplify <linux/mutex.h>
  locking/qspinlock: Fix 'wait_early' set but not used warning
  locking/atomic: scripts: Clarify ordering of conditional atomics
2024-03-11 18:33:03 -07:00
Linus Torvalds
b0402403e5 - Add a FRU (Field Replaceable Unit) memory poison manager which
collects and manages previously encountered hw errors in order to
    save them to persistent storage across reboots. Previously recorded
    errors are "replayed" upon reboot in order to poison memory which has
    caused said errors in the past.
 
    The main use case is stacked, on-chip memory which cannot simply be
    replaced, so poisoning faulty areas of it and thus making them
    inaccessible is the only strategy to prolong its lifetime.
 
  - Add an AMD address translation library glue which converts the
    reported addresses of hw errors into system physical addresses in
    order to be used by other subsystems such as memory failure. Add
    support for MI300 accelerators to that library.
 
  - igen6: Add support for Alder Lake-N SoC
 
  - i10nm: Add Grand Ridge support
 
  - The usual fixlets and cleanups

Merge tag 'edac_updates_for_v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras

Pull EDAC updates from Borislav Petkov:

 - Add a FRU (Field Replaceable Unit) memory poison manager which
   collects and manages previously encountered hw errors in order to
   save them to persistent storage across reboots. Previously recorded
   errors are "replayed" upon reboot in order to poison memory which has
   caused said errors in the past.

   The main use case is stacked, on-chip memory which cannot simply be
   replaced, so poisoning faulty areas of it and thus making them
   inaccessible is the only strategy to prolong its lifetime (a rough
   sketch of the replay step follows the list below).

 - Add an AMD address translation library glue which converts the
   reported addresses of hw errors into system physical addresses in
   order to be used by other subsystems such as memory failure. Add
   support for MI300 accelerators to that library.

 - igen6: Add support for Alder Lake-N SoC

 - i10nm: Add Grand Ridge support

 - The usual fixlets and cleanups
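
 [ A rough sketch of the "replay" step from the first item. The record
   layout and function are hypothetical; memory_failure() is the
   kernel's existing entry point for poisoning a page by PFN: ]

    #include <linux/mm.h>

    /* Hypothetical persisted record: the system physical address
     * (SPA) of a previously encountered hardware error. */
    struct fru_poison_record {
        u64 spa;
    };

    static void fmpm_replay_sketch(const struct fru_poison_record *rec,
                                   int nr)
    {
        for (int i = 0; i < nr; i++) {
            unsigned long pfn = rec[i].spa >> PAGE_SHIFT;

            /* Re-poison the page so it cannot be handed out again;
             * the flags value here is illustrative. */
            if (memory_failure(pfn, 0))
                pr_warn("FMPM: failed to re-poison PFN 0x%lx\n", pfn);
        }
    }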

* tag 'edac_updates_for_v6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
  EDAC/versal: Convert to platform remove callback returning void
  RAS/AMD/FMPM: Fix off by one when unwinding on error
  RAS/AMD/FMPM: Add debugfs interface to print record entries
  RAS/AMD/FMPM: Save SPA values
  RAS: Export helper to get ras_debugfs_dir
  RAS/AMD/ATL: Fix bit overflow in denorm_addr_df4_np2()
  RAS: Introduce a FRU memory poison manager
  RAS/AMD/ATL: Add MI300 row retirement support
  Documentation: Move RAS section to admin-guide
  EDAC/versal: Make the bit position of injected errors configurable
  EDAC/i10nm: Add Intel Grand Ridge micro-server support
  EDAC/igen6: Add one more Intel Alder Lake-N SoC support
  RAS/AMD/ATL: Add MI300 DRAM to normalized address translation support
  RAS/AMD/ATL: Fix array overflow in get_logical_coh_st_fabric_id_mi300()
  RAS/AMD/ATL: Add MI300 support
  Documentation: RAS: Add index and address translation section
  EDAC/amd64: Use new AMD Address Translation Library
  RAS: Introduce AMD Address Translation Library
  EDAC/synopsys: Convert to devm_platform_ioremap_resource()
2024-03-11 18:14:06 -07:00
Linus Torvalds
1f75619a72 - Fix a wrong check in the function reporting whether a CPU executes (or
not) an NMI handler
 
 - Ratelimit unknown NMI messages in order to not potentially slow down
   the machine
 
 - Other fixlets

Merge tag 'x86_misc_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull misc x86 fixes from Borislav Petkov:

 - Fix a wrong check in the function reporting whether a CPU executes
   (or not) an NMI handler

 - Ratelimit unknown NMI messages in order to not potentially slow down
   the machine (a sketch follows this list)

 - Other fixlets
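
 [ A sketch of the rate-limiting idea from the second item: log unknown
   NMIs through a ratelimited printk so an NMI storm cannot flood the
   console. pr_warn_ratelimited() is the stock helper (by default at
   most 10 messages per 5 seconds); whether the patch uses it or a
   dedicated ratelimit state is not shown here: ]

    #include <linux/printk.h>

    static void unknown_nmi_error_sketch(unsigned char reason, int cpu)
    {
        pr_warn_ratelimited("Uhhuh. NMI received for unknown reason %02x on CPU %d.\n",
                            reason, cpu);
    }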

* tag 'x86_misc_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/nmi: Fix the inverse "in NMI handler" check
  Documentation/maintainer-tip: Add C++ tail comments exception
  Documentation/maintainer-tip: Add Closes tag
  x86/nmi: Rate limit unknown NMI messages
  Documentation/kernel-parameters: Add spec_rstack_overflow to mitigations=off
2024-03-11 18:02:44 -07:00
Linus Torvalds
38b334fc76 - Add the x86 part of the SEV-SNP host support. This will allow the
kernel to be used as a KVM hypervisor capable of running SNP (Secure
   Nested Paging) guests. Roughly speaking, SEV-SNP is the ultimate goal
   of the AMD confidential computing side, providing the most
   comprehensive confidential computing environment to date.
 
   This is the x86 part and there is a KVM part which did not get ready
   in time for the merge window, so the latter will be forthcoming in the next
   cycle.
 
 - Rework the early code's position-dependent SEV variable references in
   order to allow building the kernel with clang and -fPIE/-fPIC and
   -mcmodel=kernel
 
 - The usual set of fixes, cleanups and improvements all over the place

Merge tag 'x86_sev_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 SEV updates from Borislav Petkov:

 - Add the x86 part of the SEV-SNP host support.

   This will allow the kernel to be used as a KVM hypervisor capable of
   running SNP (Secure Nested Paging) guests. Roughly speaking, SEV-SNP
   is the ultimate goal of the AMD confidential computing side,
   providing the most comprehensive confidential computing environment
   to date.

   This is the x86 part and there is a KVM part which did not get ready
   in time for the merge window, so the latter will be forthcoming in the
   next cycle.

 - Rework the early code's position-dependent SEV variable references in
   order to allow building the kernel with clang and -fPIE/-fPIC and
   -mcmodel=kernel (a sketch of the pattern follows this list)

 - The usual set of fixes, cleanups and improvements all over the place
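
 [ A sketch of the position-independence pattern behind the second
   item: very early startup code runs before relocations are applied,
   so taking the address of a global must be forced to be RIP-relative.
   The macro below is illustrative; the helper actually used in the
   tree may differ: ]

    /* Compute &sym with a RIP-relative LEA so the result is correct
     * even before relocation. */
    #define RIP_REL_PTR(sym) ({                                 \
        void *__p;                                              \
        asm ("lea %c1(%%rip), %0"                               \
             : "=r" (__p) : "i" (&(sym)));                      \
        __p;                                                    \
    })

    /* Access a global safely from pre-relocation code. */
    #define RIP_REL_REF(var) (*(typeof(&(var)))RIP_REL_PTR(var))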

* tag 'x86_sev_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits)
  x86/sev: Disable KMSAN for memory encryption TUs
  x86/sev: Dump SEV_STATUS
  crypto: ccp - Have it depend on AMD_IOMMU
  iommu/amd: Fix failure return from snp_lookup_rmpentry()
  x86/sev: Fix position dependent variable references in startup code
  crypto: ccp: Make snp_range_list static
  x86/Kconfig: Remove CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
  Documentation: virt: Fix up pre-formatted text block for SEV ioctls
  crypto: ccp: Add the SNP_SET_CONFIG command
  crypto: ccp: Add the SNP_COMMIT command
  crypto: ccp: Add the SNP_PLATFORM_STATUS command
  x86/cpufeatures: Enable/unmask SEV-SNP CPU feature
  KVM: SEV: Make AVIC backing, VMSA and VMCB memory allocation SNP safe
  crypto: ccp: Add panic notifier for SEV/SNP firmware shutdown on kdump
  iommu/amd: Clean up RMP entries for IOMMU pages during SNP shutdown
  crypto: ccp: Handle legacy SEV commands when SNP is enabled
  crypto: ccp: Handle non-volatile INIT_EX data when SNP is enabled
  crypto: ccp: Handle the legacy TMR allocation when SNP is enabled
  x86/sev: Introduce an SNP leaked pages list
  crypto: ccp: Provide an API to issue SEV and SNP commands
  ...
2024-03-11 17:44:11 -07:00
Linus Torvalds
2edfd1046f - Rework different aspects of the resctrl code like adding arch-specific
accessors and splitting the locking, in order to accommodate ARM's MPAM
   implementation of hw resource control and be able to use the same
   filesystem control interface as on x86. Work by James Morse
 
 - Improve the memory bandwidth throttling heuristic to handle workloads
   with irregular load levels, which would otherwise end up penalized
   unnecessarily
 
 - Use CPUID to detect the memory bandwidth enforcement limit on AMD
 
 - The usual set of fixes

Merge tag 'x86_cache_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull resource control updates from Borislav Petkov:

 - Rework different aspects of the resctrl code like adding
   arch-specific accessors and splitting the locking, in order to
   accommodate ARM's MPAM implementation of hw resource control and be
   able to use the same filesystem control interface as on x86 (see the
   schematic after this list). Work
   by James Morse

 - Improve the memory bandwidth throttling heuristic to handle workloads
   with irregular load levels, which would otherwise end up penalized
   unnecessarily

 - Use CPUID to detect the memory bandwidth enforcement limit on AMD

 - The usual set of fixes
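
 [ A schematic of the arch/fs split from the first item, modeled on the
   resctrl_arch_*() naming visible in the shortlog below; the real
   interface is considerably richer, and rdt_alloc_capable stands in
   for the existing x86-side state: ]

    /* Arch-neutral resctrl filesystem code queries capabilities only
     * through accessors ... */
    bool resctrl_arch_alloc_capable(void);

    /* ... which arch/x86 backs with its RDT state. An arm64/MPAM
     * build would supply its own definition instead, and the fs
     * layer cannot tell the difference. */
    bool resctrl_arch_alloc_capable(void)
    {
        return rdt_alloc_capable;
    }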

* tag 'x86_cache_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (30 commits)
  x86/resctrl: Remove lockdep annotation that triggers false positive
  x86/resctrl: Separate arch and fs resctrl locks
  x86/resctrl: Move domain helper migration into resctrl_offline_cpu()
  x86/resctrl: Add CPU offline callback for resctrl work
  x86/resctrl: Allow overflow/limbo handlers to be scheduled on any-but CPU
  x86/resctrl: Add CPU online callback for resctrl work
  x86/resctrl: Add helpers for system wide mon/alloc capable
  x86/resctrl: Make rdt_enable_key the arch's decision to switch
  x86/resctrl: Move alloc/mon static keys into helpers
  x86/resctrl: Make resctrl_mounted checks explicit
  x86/resctrl: Allow arch to allocate memory needed in resctrl_arch_rmid_read()
  x86/resctrl: Allow resctrl_arch_rmid_read() to sleep
  x86/resctrl: Queue mon_event_read() instead of sending an IPI
  x86/resctrl: Add cpumask_any_housekeeping() for limbo/overflow
  x86/resctrl: Move CLOSID/RMID matching and setting to use helpers
  x86/resctrl: Allocate the cleanest CLOSID by searching closid_num_dirty_rmid
  x86/resctrl: Use __set_bit()/__clear_bit() instead of open coding
  x86/resctrl: Track the number of dirty RMID a CLOSID has
  x86/resctrl: Allow RMID allocation to be scoped by CLOSID
  x86/resctrl: Access per-rmid structures by index
  ...
2024-03-11 17:29:55 -07:00
Linus Torvalds
bfdb395a7c - Relax the PAT MSR programming which was unnecessarily using the MTRR
programming protocol of disabling the cache around the changes. The
    reason behind this is that the current algorithm triggers a #VE
    exception for TDX guests and unnecessarily complicates things

Merge tag 'x86_mtrr_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 MTRR update from Borislav Petkov:

 - Relax the PAT MSR programming which was unnecessarily using the MTRR
   programming protocol of disabling the cache around the changes. The
   reason behind this is that the current algorithm triggers a #VE
   exception for TDX guests and unnecessarily complicates things (a
   before/after sketch follows)
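
 [ A before/after sketch of the relaxed protocol; wrmsrl() and
   flush_tlb_local() are the stock x86 helpers, and the exact
   conditions under which the short sequence is sufficient are spelled
   out in the commit itself: ]

    #include <asm/msr.h>
    #include <asm/tlbflush.h>

    static void pat_program_sketch(u64 pat)
    {
        /*
         * Old (MTRR-style) protocol, schematically: disable caches
         * via CR0.CD, WBINVD, flush TLB, write the MSR, flush again,
         * re-enable caches. The cache-disable step is what triggers
         * a #VE in TDX guests.
         *
         * Relaxed protocol: write the MSR, then flush the TLB so no
         * stale translations keep the old memory types alive.
         */
        wrmsrl(MSR_IA32_CR_PAT, pat);
        flush_tlb_local();
    }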

* tag 'x86_mtrr_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/pat: Simplify the PAT programming protocol
2024-03-11 17:27:12 -07:00
Linus Torvalds
742582acec - Have AMD Zen common init code run on all families from Zen1 onwards
in order to save some future enablement effort

Merge tag 'x86_cpu_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 cpu update from Borislav Petkov:

 - Have AMD Zen common init code run on all families from Zen1 onwards
   in order to save some future enablement effort

* tag 'x86_cpu_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/CPU/AMD: Do the common init on future Zens too
2024-03-11 17:25:45 -07:00
Linus Torvalds
d8941ce52b - Constify yet another static struct bus_type instance now that the
driver core can handle that

Merge tag 'ras_core_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull RAS fixlet from Borislav Petkov:

 - Constify yet another static struct bus_type instance now that the
   driver core can handle that

* tag 'ras_core_for_v6.9_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mce: Make mce_subsys const
2024-03-11 17:22:57 -07:00
Linus Torvalds
86833aec44 A single update for the x86 entry code:
The current CR3 handling for kernel page table isolation in the paranoid
   return paths which are relevant for #NMI, #MCE, #VC, #DB and #DF is
   unconditionally writing CR3 with the value retrieved on exception entry.
 
   In the vast majority of cases when returning to the kernel this is a
   pointless exercise because CR3 was not modified on exception entry. The
   only situation where this is necessary is when the exception interrupts an
   entry from user before switching to kernel CR3 or interrupts an exit to
   user after switching back to user CR3.
 
   As CR3 writes can be expensive on some systems this becomes measurable
   overhead with high frequency #NMIs such as perf.
 
   Avoid this overhead by checking the CR3 value, which was saved on entry,
   and write it back to CR3 only when it is a user CR3.

Merge tag 'x86-entry-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 entry update from Thomas Gleixner:
 "A single update for the x86 entry code:

  The current CR3 handling for kernel page table isolation in the
  paranoid return paths which are relevant for #NMI, #MCE, #VC, #DB and
  #DF is unconditionally writing CR3 with the value retrieved on
  exception entry.

  In the vast majority of cases when returning to the kernel this is a
  pointless exercise because CR3 was not modified on exception entry.
  The only situation where this is necessary is when the exception
   interrupts an entry from user before switching to kernel CR3 or
  interrupts an exit to user after switching back to user CR3.

  As CR3 writes can be expensive on some systems this becomes measurable
  overhead with high frequency #NMIs such as perf.

  Avoid this overhead by checking the CR3 value, which was saved on
  entry, and write it back to CR3 only when it is a user CR3"
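
 [ The check, sketched in C (the real code lives in the asm paranoid
   exit path; under page table isolation a user CR3 is distinguished
   from a kernel CR3 by a single PGD bit, PTI_USER_PGTABLE_MASK): ]

    static inline void paranoid_restore_cr3_sketch(unsigned long saved_cr3)
    {
        /* Entry only switched CR3 if it interrupted user CR3. */
        if (saved_cr3 & PTI_USER_PGTABLE_MASK)
            native_write_cr3(saved_cr3);
        /* else: CR3 already holds the kernel value -- skip the
         * expensive, redundant write. */
    }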

* tag 'x86-entry-2024-03-11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/entry: Avoid redundant CR3 write on paranoid returns
2024-03-11 16:15:43 -07:00
Linus Torvalds
720c857907 Support for x86 Fast Return and Event Delivery (FRED):
FRED is a replacement for IDT event delivery on x86 and addresses most of
 the technical nightmares which IDT exposes:
 
  1) Exception cause registers like CR2 need to be manually preserved in
     nested exception scenarios.
 
  2) Hardware interrupt stack switching is suboptimal for nested exceptions
     as the interrupt stack mechanism rewinds the stack on each entry which
     requires a massive effort in the low level entry of #NMI code to handle
     this.
 
  3) No hardware distinction between entry from kernel or from user which
     makes establishing kernel context more complex than it needs to be
     especially for unconditionally nestable exceptions like NMI.
 
  4) NMI nesting caused by IRET unconditionally reenabling NMIs, which is a
     problem when the perf NMI takes a fault when collecting a stack trace.
 
  5) Partial restore of ESP when returning to a 16-bit segment
 
  6) Limitation of the vector space which can cause vector exhaustion on
     large systems.
 
  7) Inability to differentiate NMI sources
 
 FRED addresses these shortcomings by:
 
  1) An extended exception stack frame which the CPU uses to save exception
     cause registers. This ensures that the meta information for each
     exception is preserved on stack and avoids the extra complexity of
     preserving it in software.
 
  2) Hardware interrupt stack switching is non-rewinding if a nested
     exception uses the current interrupt stack.
 
  3) The entry points for kernel and user context are separate and GS BASE
     handling which is required to establish kernel context for per CPU
     variable access is done in hardware.
 
  4) NMIs are now nesting protected. They are only reenabled on the return
     from NMI.
 
  5) FRED guarantees full restore of ESP
 
  6) FRED does not put a limitation on the vector space by design because it
     uses central entry points for kernel and user space and the CPU stores
     the entry type (exception, trap, interrupt, syscall) on the entry stack
     along with the vector number. The entry code has to demultiplex this
     information, but this removes the vector space restriction.
 
     The first hardware implementations will still have the current
     restricted vector space because lifting this limitation requires
     further changes to the local APIC.
 
  7) FRED stores the vector number and meta information on stack which
     allows having more than one NMI vector in future hardware when the
     required local APIC changes are in place.
 
 The series implements the initial FRED support by:
 
  - Reworking the existing entry and IDT handling infrastructure to
    accommodate the alternative entry mechanism.
 
  - Expanding the stack frame to accommodate the extra 16 bytes FRED
    requires to store context and meta information
 
  - Providing FRED specific C entry points for events which have information
    pushed to the extended stack frame, e.g. #PF and #DB.
 
  - Providing FRED specific C entry points for #NMI and #MCE
 
  - Implementing the FRED specific ASM entry points and the C code to
    demultiplex the events
 
  - Providing detection and initialization mechanisms and the necessary
    tweaks in context switching, GS BASE handling etc.
 
 The FRED integration aims for maximum code reuse vs. the existing IDT
 implementation to the extent possible and the deviations in hot paths like
 context switching are handled with alternatives to minimize the
 impact. The low level entry and exit paths are separate due to the extended
 stack frame and the hardware based GS BASE switching and therefore have no
 impact on IDT based systems.
 
 It has been extensively tested on existing systems and on the FRED
 simulation and as of now there are no known outstanding problems.

Merge tag 'x86-fred-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 FRED support from Thomas Gleixner:
 "Support for x86 Fast Return and Event Delivery (FRED).

  FRED is a replacement for IDT event delivery on x86 and addresses most
  of the technical nightmares which IDT exposes:

   1) Exception cause registers like CR2 need to be manually preserved
      in nested exception scenarios.

   2) Hardware interrupt stack switching is suboptimal for nested
      exceptions as the interrupt stack mechanism rewinds the stack on
      each entry which requires a massive effort in the low level entry
      of #NMI code to handle this.

   3) No hardware distinction between entry from kernel or from user
      which makes establishing kernel context more complex than it needs
      to be especially for unconditionally nestable exceptions like NMI.

   4) NMI nesting caused by IRET unconditionally reenabling NMIs, which
      is a problem when the perf NMI takes a fault when collecting a
      stack trace.

   5) Partial restore of ESP when returning to a 16-bit segment

   6) Limitation of the vector space which can cause vector exhaustion
      on large systems.

   7) Inability to differentiate NMI sources

  FRED addresses these shortcomings by:

   1) An extended exception stack frame which the CPU uses to save
      exception cause registers. This ensures that the meta information
      for each exception is preserved on stack and avoids the extra
      complexity of preserving it in software.

   2) Hardware interrupt stack switching is non-rewinding if a nested
       exception uses the current interrupt stack.

   3) The entry points for kernel and user context are separate and GS
      BASE handling which is required to establish kernel context for
      per CPU variable access is done in hardware.

   4) NMIs are now nesting protected. They are only reenabled on the
      return from NMI.

   5) FRED guarantees full restore of ESP

   6) FRED does not put a limitation on the vector space by design
      because it uses central entry points for kernel and user space
      and the CPU stores the entry type (exception, trap, interrupt,
      syscall) on the entry stack along with the vector number. The
      entry code has to demultiplex this information, but this removes
      the vector space restriction.

      The first hardware implementations will still have the current
      restricted vector space because lifting this limitation requires
      further changes to the local APIC.

   7) FRED stores the vector number and meta information on stack which
      allows having more than one NMI vector in future hardware when the
      required local APIC changes are in place.

  The series implements the initial FRED support by:

   - Reworking the existing entry and IDT handling infrastructure to
     accommodate the alternative entry mechanism.

   - Expanding the stack frame to accommodate the extra 16 bytes FRED
     requires to store context and meta information

   - Providing FRED specific C entry points for events which have
     information pushed to the extended stack frame, e.g. #PF and #DB.

   - Providing FRED specific C entry points for #NMI and #MCE

   - Implementing the FRED specific ASM entry points and the C code to
     demultiplex the events

   - Providing detection and initialization mechanisms and the necessary
     tweaks in context switching, GS BASE handling etc.

  The FRED integration aims for maximum code reuse vs the existing IDT
  implementation to the extent possible and the deviations in hot paths
  like context switching are handled with alternatives to minimize the
  impact. The low level entry and exit paths are separate due to the
  extended stack frame and the hardware based GS BASE switching and
  therefore have no impact on IDT based systems.

  It has been extensively tested on existing systems and on the FRED
  simulation and as of now there are no outstanding problems"
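
 [ To visualize points 1 and 6 above: an illustrative view of the
   extended stack frame. Field names are hypothetical and the
   authoritative layout is in the FRED spec and headers; the point is
   that the classic frame grows by 16 bytes of event meta information,
   which is why exception cause registers no longer need manual
   preservation: ]

    #include <linux/types.h>

    struct fred_frame_sketch {
        /* classic IDT-style hardware frame */
        u64 rip;
        u64 cs;         /* FRED also packs status bits in here */
        u64 rflags;
        u64 rsp;
        u64 ss;         /* ... and event type/vector bits here */
        /* FRED extension: 16 bytes of meta information */
        u64 event_data; /* cause, e.g. CR2 for #PF, DR6 for #DB */
        u64 reserved;
    };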

* tag 'x86-fred-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (38 commits)
  x86/fred: Fix init_task thread stack pointer initialization
  MAINTAINERS: Add a maintainer entry for FRED
  x86/fred: Fix a build warning with allmodconfig due to 'inline' failing to inline properly
  x86/fred: Invoke FRED initialization code to enable FRED
  x86/fred: Add FRED initialization functions
  x86/syscall: Split IDT syscall setup code into idt_syscall_init()
  KVM: VMX: Call fred_entry_from_kvm() for IRQ/NMI handling
  x86/entry: Add fred_entry_from_kvm() for VMX to handle IRQ/NMI
  x86/entry/calling: Allow PUSH_AND_CLEAR_REGS being used beyond actual entry code
  x86/fred: Fixup fault on ERETU by jumping to fred_entrypoint_user
  x86/fred: Let ret_from_fork_asm() jmp to asm_fred_exit_user when FRED is enabled
  x86/traps: Add sysvec_install() to install a system interrupt handler
  x86/fred: FRED entry/exit and dispatch code
  x86/fred: Add a machine check entry stub for FRED
  x86/fred: Add a NMI entry stub for FRED
  x86/fred: Add a debug fault entry stub for FRED
  x86/idtentry: Incorporate definitions/declarations of the FRED entries
  x86/fred: Make exc_page_fault() work for FRED
  x86/fred: Allow single-step trap and NMI when starting a new task
  x86/fred: No ESPFIX needed when FRED is enabled
  ...
2024-03-11 16:00:17 -07:00
Linus Torvalds
ca7e917769 Rework of APIC enumeration and topology evaluation:
The current implementation has a couple of shortcomings:
 
   - It fails to handle hybrid systems correctly.
 
   - The APIC registration code which handles CPU number assignments is in
     the middle of the APIC code and detached from the topology evaluation.
 
   - The various mechanisms which enumerate APICs (ACPI, MPPARSE and guest
     specific ones) tweak global variables as they see fit or in case of
     XENPV just hack around the generic mechanisms completely.
 
   - The CPUID topology evaluation code is sprinkled all over the vendor
     code and reevaluates global variables on every hotplug operation.
 
   - There is no way to analyze topology on the boot CPU before bringing up
     the APs. This causes problems for infrastructure like PERF which needs
     to size certain aspects upfront or could be simplified if that were
     possible.
 
   - The APIC admission and CPU number association logic is incomprehensible
     and overly complex and needs to be kept around after boot instead of
     completing this right after the APIC enumeration.
 
 This update addresses these shortcomings with the following changes:
 
   - Rework the CPUID evaluation code so it is common for all vendors and
     provides information about the APIC ID segments in a uniform way
     independent of the number of segments (Thread, Core, Module, ..., Die,
     Package) so that this information can be computed instead of rewriting
     global variables of dubious value over and over.
 
   - A few cleanups and simplifications of the APIC, IO/APIC and related
     interfaces to prepare for the topology evaluation changes.
 
   - Separation of the parser stages so the early evaluation which tries to
     find the APIC address can be separately overridden from the late
     evaluation which enumerates and registers the local APIC as further
     preparation for sanitizing the topology evaluation.
 
   - A new registration and admission logic which
 
      - encapsulates the inner workings so that parsers and guest logic
        can no longer fiddle with it
 
      - uses the APIC ID segments to build topology bitmaps at registration
        time
 
      - provides a sane admission logic
 
      - allows detecting the crash kernel case, where CPU0 does not run on
        the real BSP, automatically. This is required to prevent sending
        INIT/SIPI sequences to the real BSP which would reset the whole
        machine. This was so far handled by a tedious command line
        parameter, which does not even work in nested crash scenarios.
 
      - Associates CPU numbers after the enumeration has completed and prevents
        the late registration of APICs, which was somehow tolerated before.
 
   - Converting all parsers and guest enumeration mechanisms over to the
     new interfaces.
 
     This allows to get rid of all global variable tweaking from the parsers
     and enumeration mechanisms and sanitizes the XEN[PV] handling so it can
     use CPUID evaluation for the first time.
 
   - Mopping up existing sins by taking the information from the APIC ID
     segment bitmaps.
 
     This evaluates hybrid systems correctly on the boot CPU and allows for
     cleanups and fixes in the related drivers, e.g. PERF.
 
 The series has been extensively tested and the minimal late fallout due to
 a broken ACPI/MADT table has been addressed by tightening the admission
 logic further.

Merge tag 'x86-apic-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 APIC updates from Thomas Gleixner:
 "Rework of APIC enumeration and topology evaluation.

  The current implementation has a couple of shortcomings:

   - It fails to handle hybrid systems correctly.

   - The APIC registration code which handles CPU number assignments is
     in the middle of the APIC code and detached from the topology
     evaluation.

   - The various mechanisms which enumerate APICs (ACPI, MPPARSE and
     guest specific ones) tweak global variables as they see fit or in
     case of XENPV just hack around the generic mechanisms completely.

   - The CPUID topology evaluation code is sprinkled all over the vendor
     code and reevaluates global variables on every hotplug operation.

   - There is no way to analyze topology on the boot CPU before bringing
     up the APs. This causes problems for infrastructure like PERF which
     needs to size certain aspects upfront or could be simplified if
     that were possible.

   - The APIC admission and CPU number association logic is
     incomprehensible and overly complex and needs to be kept around
     after boot instead of completing this right after the APIC
     enumeration.

  This update addresses these shortcomings with the following changes:

   - Rework the CPUID evaluation code so it is common for all vendors
     and provides information about the APIC ID segments in a uniform
     way independent of the number of segments (Thread, Core, Module,
     ..., Die, Package) so that this information can be computed instead
     of rewriting global variables of dubious value over and over (a
     schematic of the segment idea follows this list).

   - A few cleanups and simplifications of the APIC, IO/APIC and related
     interfaces to prepare for the topology evaluation changes.

   - Separation of the parser stages so the early evaluation which tries
     to find the APIC address can be separately overridden from the late
     evaluation which enumerates and registers the local APIC as further
     preparation for sanitizing the topology evaluation.

   - A new registration and admission logic which

       - encapsulates the inner workings so that parsers and guest logic
         can no longer fiddle with it

       - uses the APIC ID segments to build topology bitmaps at
         registration time

       - provides a sane admission logic

       - allows detecting the crash kernel case, where CPU0 does not run
         on the real BSP, automatically. This is required to prevent
         sending INIT/SIPI sequences to the real BSP which would reset
         the whole machine. This was so far handled by a tedious command
         line parameter, which does not even work in nested crash
         scenarios.

       - Associates CPU numbers after the enumeration has completed and
         prevents the late registration of APICs, which was somehow
         tolerated before.

   - Converting all parsers and guest enumeration mechanisms over to the
     new interfaces.

     This allows to get rid of all global variable tweaking from the
     parsers and enumeration mechanisms and sanitizes the XEN[PV]
     handling so it can use CPUID evaluation for the first time.

   - Mopping up existing sins by taking the information from the APIC ID
     segment bitmaps.

     This evaluates hybrid systems correctly on the boot CPU and allows
     for cleanups and fixes in the related drivers, e.g. PERF.

  The series has been extensively tested and the minimal late fallout
  due to a broken ACPI/MADT table has been addressed by tightening the
  admission logic further"
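
 [ A schematic of the "APIC ID segment" idea referenced above: the APIC
   ID is a set of contiguous bit fields, one per topology level, so
   per-level IDs can be computed with shifts and masks instead of
   tweaking global variables. The field widths below are made up for
   illustration: ]

    #include <linux/types.h>

    enum topo_level { TOPO_SMT, TOPO_CORE, TOPO_PKG, TOPO_NR };

    struct topo_segment { unsigned int shift, width; };

    /* Hypothetical layout: 1 SMT bit, 4 core bits, the rest package. */
    static const struct topo_segment segs[TOPO_NR] = {
        [TOPO_SMT]  = { .shift = 0, .width = 1  },
        [TOPO_CORE] = { .shift = 1, .width = 4  },
        [TOPO_PKG]  = { .shift = 5, .width = 27 },
    };

    static unsigned int topo_id(u32 apicid, enum topo_level l)
    {
        return (apicid >> segs[l].shift) &
               ((1u << segs[l].width) - 1);
    }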

* tag 'x86-apic-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (76 commits)
  x86/topology: Ignore non-present APIC IDs in a present package
  x86/apic: Build the x86 topology enumeration functions on UP APIC builds too
  smp: Provide 'setup_max_cpus' definition on UP too
  smp: Avoid 'setup_max_cpus' namespace collision/shadowing
  x86/bugs: Use fixed addressing for VERW operand
  x86/cpu/topology: Get rid of cpuinfo::x86_max_cores
  x86/cpu/topology: Provide __num_[cores|threads]_per_package
  x86/cpu/topology: Rename topology_max_die_per_package()
  x86/cpu/topology: Rename smp_num_siblings
  x86/cpu/topology: Retrieve cores per package from topology bitmaps
  x86/cpu/topology: Use topology logical mapping mechanism
  x86/cpu/topology: Provide logical pkg/die mapping
  x86/cpu/topology: Simplify cpu_mark_primary_thread()
  x86/cpu/topology: Mop up primary thread mask handling
  x86/cpu/topology: Use topology bitmaps for sizing
  x86/cpu/topology: Let XEN/PV use topology from CPUID/MADT
  x86/xen/smp_pv: Count number of vCPUs early
  x86/cpu/topology: Assign hotpluggable CPUIDs during init
  x86/cpu/topology: Reject unknown APIC IDs on ACPI hotplug
  x86/topology: Add a mechanism to track topology via APIC IDs
  ...
2024-03-11 15:45:55 -07:00
Linus Torvalds
80a76c60e5 Updates for timekeeping and PTP core:
The cross-timestamp mechanism which allows correlating hardware
   clocks uses clocksource pointers for describing the correlation.
 
   That's suboptimal as drivers need to obtain the pointer, which requires
   needless exports and exposing internals.
 
   This can be completely avoided by assigning clocksource IDs and using
   them for describing the correlated clock source.
 
   This update adds clocksource IDs to all clocksources in the tree which
   can be exposed to this mechanism and removes the pointer and now needless
   exports.
 
   This is separate from the timer core changes as it was provided to the
   PTP folks to build further changes on top.
 
   A related improvement for the core and the correlation handling has not
   made it this time, but is expected to get ready for the next round.

Merge tag 'timers-ptp-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull clocksource updates from Thomas Gleixner:
 "Updates for timekeeping and PTP core.

   The cross-timestamp mechanism which allows correlating hardware
  clocks uses clocksource pointers for describing the correlation.

  That's suboptimal as drivers need to obtain the pointer, which
   requires needless exports and exposing internals. This can all be
   completely avoided by assigning clocksource IDs and using them for
   describing the correlated clock source (sketched below).

  So this adds clocksource IDs to all clocksources in the tree which can
  be exposed to this mechanism and removes the pointer and now needless
  exports.

  A related improvement for the core and the correlation handling has
  not made it this time, but is expected to get ready for the next
  round"

* tag 'timers-ptp-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  kvmclock: Unexport kvmclock clocksource
  treewide: Remove system_counterval_t.cs, which is never read
  timekeeping: Evaluate system_counterval_t.cs_id instead of .cs
  ptp/kvm, arm_arch_timer: Set system_counterval_t.cs_id to constant
  x86/kvm, ptp/kvm: Add clocksource ID, set system_counterval_t.cs_id
  x86/tsc: Add clocksource ID, set system_counterval_t.cs_id
  timekeeping: Add clocksource ID to struct system_counterval_t
  x86/tsc: Correct kernel-doc notation
2024-03-11 14:25:18 -07:00
Linus Torvalds
4527e83780 Updates for the MSI interrupt subsystem and RISC-V initial MSI support:
- Core and platform-MSI
 
     The core changes have been adopted from previous work which converted
     ARM[64] to the new per device MSI domain model, which was merged to
     support multiple MSI domain per device. The ARM[64] changes are being
     worked on too, but were not ready yet. The core and platform-MSI
     changes have been split out to not hold up RISC-V and to avoid
     RISC-V building on interfaces that are scheduled for removal.
 
     The core support provides new interfaces to handle wire to MSI bridges
     in a straightforward way and introduces new platform-MSI interfaces
     which are built on top of the per device MSI domain model.
 
     Once ARM[64] is converted over, the old platform-MSI interfaces and the
     related ugliness in the MSI core code will be removed.
 
   - Drivers:
 
     - Add a new driver for the Andes hart-level interrupt controller
 
     - Rework the SiFive PLIC driver to prepare for MSI support
 
     - Expand the RISC-V INTC driver to support the new RISC-V AIA
       controller which provides the basis for MSI on RISC-V
 
     - A few fixups for the fallout of the core changes.
 
     The actual MSI parts for RISC-V were finalized late and have been
     postponed for the next merge window.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAmXt7MsTHHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYofrMD/9Dag12ttmbE2uqzTzlTxc7RHC2MX5n
 VJLt84FNNwGPA4r7WLOOqHrfuvfoGjuWT9pYMrVaXCglRG1CMvL10kHMB2f28UWv
 Qpc5PzbJwpD6tqyfRSFHMoJp63DAI8IpS7J3I8bqnRD8+0PwYn3jMA1+iMZkH0B7
 8uO3mxlFhQ7BFvIAeMEAhR0szuAfvXqEtpi1iTgQTrQ4Je4Rf1pmLjEe2rkwDvF4
 p3SAmPIh4+F3IjO7vNsVkQ2yOarTP2cpSns6JmO8mrobLIVX7ZCQ6uVaVCfBhxfx
 WttuJO6Bmh/I15yDe/waH6q9ym+0VBwYRWi5lonMpViGdq4/D2WVnY1mNeLRIfjl
 X65aMWE1+bhiqyIIUfc24hacf0UgBIlMEW4kJ31VmQzb+OyLDXw+UvzWg1dO6XdA
 3L6j1nRgHk0ea5yFyH6SfH/mrfeyqHuwHqo17KFyHxD3jM2H1RRMplpbwXiOIepp
 KJJ/O06eMEzHqzn4B8GCT2EvX6L2ehgoWbLeEDNLQh/3LwA9OdcBzPr6gsweEl0U
 Q7szJgUWZHeMr39F2rnt0GmvkEuu6muEp/nQzfnohjoYZ0PhpMLSq++4Gi+Ko3fz
 2IyecJ+tlbSfyM5//8AdNnOSpsTG3f8u6B/WwhGp5lIDwMnMzCssgfQmRnc3Uyv5
 kU3pdMjURJaTUA==
 =7aXj
 -----END PGP SIGNATURE-----

Merge tag 'irq-msi-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull MSI updates from Thomas Gleixner:
 "Updates for the MSI interrupt subsystem and initial RISC-V MSI
  support.

  The core changes have been adopted from previous work which converted
  ARM[64] to the new per device MSI domain model, which was merged to
  support multiple MSI domains per device. The ARM[64] changes are being
  worked on too, but have not been ready yet. The core and platform-MSI
  changes have been split out so as not to hold up RISC-V and to avoid
  RISC-V building on interfaces that are scheduled for removal.

  The core support provides new interfaces to handle wire-to-MSI bridges
  in a straightforward way and introduces new platform-MSI interfaces
  which are built on top of the per device MSI domain model (a usage
  sketch follows this message).

  Once ARM[64] is converted over, the old platform-MSI interfaces and the
  related ugliness in the MSI core code will be removed.

  The actual MSI parts for RISC-V were finalized late and have been
  postponed for the next merge window.

  Drivers:

   - Add a new driver for the Andes hart-level interrupt controller

   - Rework the SiFive PLIC driver to prepare for MSI support

   - Expand the RISC-V INTC driver to support the new RISC-V AIA
     controller which provides the basis for MSI on RISC-V

   - A few fixups for the fallout of the core changes"
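
A hedged usage sketch for the "wired" MSI helpers named in the shortlog
below; the function names come from this series, but the signatures are
approximated:

  /* Map one wired input of a wire-to-MSI bridge to a Linux interrupt. */
  static int bridge_map_wire(struct irq_domain *msi_domain, unsigned int hwirq)
  {
          int virq = msi_device_domain_alloc_wired(msi_domain, hwirq,
                                                   IRQ_TYPE_LEVEL_HIGH);

          return virq;    /* negative errno on failure, Linux irq otherwise */
  }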

* tag 'irq-msi-2024-03-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (29 commits)
  irqchip/riscv-intc: Fix low-level interrupt handler setup for AIA
  x86/apic/msi: Use DOMAIN_BUS_GENERIC_MSI for HPET/IO-APIC domain search
  genirq/matrix: Dynamic bitmap allocation
  irqchip/riscv-intc: Add support for RISC-V AIA
  irqchip/sifive-plic: Improve locking safety by using irqsave/irqrestore
  irqchip/sifive-plic: Parse number of interrupts and contexts early in plic_probe()
  irqchip/sifive-plic: Cleanup PLIC contexts upon irqdomain creation failure
  irqchip/sifive-plic: Use riscv_get_intc_hwnode() to get parent fwnode
  irqchip/sifive-plic: Use devm_xyz() for managed allocation
  irqchip/sifive-plic: Use dev_xyz() in-place of pr_xyz()
  irqchip/sifive-plic: Convert PLIC driver into a platform driver
  irqchip/riscv-intc: Introduce Andes hart-level interrupt controller
  irqchip/riscv-intc: Allow large non-standard interrupt number
  genirq/irqdomain: Don't call ops->select for DOMAIN_BUS_ANY tokens
  irqchip/imx-intmux: Handle pure domain searches correctly
  genirq/msi: Provide MSI_FLAG_PARENT_PM_DEV
  genirq/irqdomain: Reroute device MSI create_mapping
  genirq/msi: Provide allocation/free functions for "wired" MSI interrupts
  genirq/msi: Optionally use dev->fwnode for device domain
  genirq/msi: Provide DOMAIN_BUS_WIRED_TO_MSI
  ...
2024-03-11 14:03:03 -07:00
Linus Torvalds
137e0ec05a KVM GUEST_MEMFD fixes for 6.8:
- Make KVM_MEM_GUEST_MEMFD mutually exclusive with KVM_MEM_READONLY to
   avoid creating an inconsistent ABI (KVM_MEM_GUEST_MEMFD is not writable
   from userspace, so there would be no way to write to a read-only
   guest_memfd).
 
 - Update documentation for KVM_SW_PROTECTED_VM to make it abundantly
   clear that such VMs are purely for development and testing.
 
 - Limit KVM_SW_PROTECTED_VM guests to the TDP MMU, as the long term plan
   is to support confidential VMs with deterministic private memory (SNP
   and TDX) only in the TDP MMU.
 
 - Fix a bug in a GUEST_MEMFD dirty logging test that caused false passes.
 
 x86 fixes:
 
 - Fix missing marking of a guest page as dirty when emulating an atomic access.
 
 - Check for mmu_notifier invalidation events before faulting in the pfn,
   and before acquiring mmu_lock, to avoid unnecessary work and lock
   contention with preemptible kernels (including CONFIG_PREEMPT_DYNAMIC
   in non-preemptible mode).
 
 - Disable AMD DebugSwap by default; it breaks VMSA signing and will be
   re-enabled with a better VM creation API in 6.10.
 
 - Do the cache flush of converted pages in svm_register_enc_region() before
   dropping kvm->lock, to avoid a race with unregistering of the same region
   and the consequent use-after-free issue.
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmXskdYUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroN1TAf/SUGf4QuYG7nnfgWDR+goFO6Gx7NE
 pJr3kAwv6d2f+qTlURfGjnX929pgZDLgoTkXTNeZquN6LjgownxMjBIpymVobvAD
 AKvqJS/ECpryuehXbeqlxJxJn+TrxJ5r4QeNILMHc3AOZoiUqM6xl3zFfXWDNWVo
 IazwT8P3d8wxiHAxv1eG6OVWHxbcg31068FVKRX3f/bWPbVwROJrPkCopmz2BJvU
 6KYdYcn2rkpDTEM3ouDC/6gxJ9vpSY3+nW7Q7dNtGtOH2+BddfSA6I0rphCQWCNs
 uXOxd5bDrC+KmkiULTPostuvwBgIm1k9wC2kW9A4P2VEf6Ay+ZHEdAOBJQ==
 =+MT/
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
 "KVM GUEST_MEMFD fixes for 6.8:

   - Make KVM_MEM_GUEST_MEMFD mutually exclusive with KVM_MEM_READONLY
     to avoid creating an inconsistent ABI (KVM_MEM_GUEST_MEMFD is not
     writable from userspace, so there would be no way to write to a
     read-only guest_memfd; a userspace sketch follows this message).

   - Update documentation for KVM_SW_PROTECTED_VM to make it abundantly
     clear that such VMs are purely for development and testing.

   - Limit KVM_SW_PROTECTED_VM guests to the TDP MMU, as the long term
     plan is to support confidential VMs with deterministic private
     memory (SNP and TDX) only in the TDP MMU.

   - Fix a bug in a GUEST_MEMFD dirty logging test that caused false
     passes.

  x86 fixes:

   - Fix missing marking of a guest page as dirty when emulating an
     atomic access.

   - Check for mmu_notifier invalidation events before faulting in the
     pfn, and before acquiring mmu_lock, to avoid unnecessary work and
     lock contention with preemptible kernels (including
     CONFIG_PREEMPT_DYNAMIC in non-preemptible mode).

   - Disable AMD DebugSwap by default; it breaks VMSA signing and will
     be re-enabled with a better VM creation API in 6.10.

   - Do the cache flush of converted pages in svm_register_enc_region()
     before dropping kvm->lock, to avoid a race with unregistering of
     the same region and the consequent use-after-free issue"
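
A hedged userspace sketch of the now-rejected flag combination, per the 6.8
guest_memfd ABI (the fd, address and size variables are illustrative):

  struct kvm_userspace_memory_region2 region = {
          .slot            = 0,
          .flags           = KVM_MEM_GUEST_MEMFD | KVM_MEM_READONLY,
          .guest_phys_addr = 0,
          .memory_size     = size,
          .userspace_addr  = (__u64)(unsigned long)addr,
          .guest_memfd     = gmem_fd,
  };
  /* ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region) fails with EINVAL. */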

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  SEV: disable SEV-ES DebugSwap by default
  KVM: x86/mmu: Retry fault before acquiring mmu_lock if mapping is changing
  KVM: SVM: Flush pages under kvm->lock to fix UAF in svm_register_enc_region()
  KVM: selftests: Add a testcase to verify GUEST_MEMFD and READONLY are exclusive
  KVM: selftests: Create GUEST_MEMFD for relevant invalid flags testcases
  KVM: x86/mmu: Restrict KVM_SW_PROTECTED_VM to the TDP MMU
  KVM: x86: Update KVM_SW_PROTECTED_VM docs to make it clear they're a WIP
  KVM: Make KVM_MEM_GUEST_MEMFD mutually exclusive with KVM_MEM_READONLY
  KVM: x86: Mark target gfn of emulated atomic instruction as dirty
2024-03-10 09:27:39 -07:00
Paolo Bonzini
5abf6dceb0 SEV: disable SEV-ES DebugSwap by default
The DebugSwap feature of SEV-ES provides a way for confidential guests to use
data breakpoints.  However, because the status of the DebugSwap feature is
recorded in the VMSA, enabling it by default invalidates the attestation
signatures.  In 6.10 we will introduce a new API to create SEV VMs that
will allow enabling DebugSwap based on what the user tells KVM to do.
At the same time, we will change the legacy KVM_SEV_ES_INIT API to never
enable DebugSwap.

For compatibility with kernels that pre-date the introduction of DebugSwap,
as well as with those where KVM_SEV_ES_INIT will never enable it, do not enable
the feature by default.  If anybody wants to use it, for now they can set the
sev_es_debug_swap_enabled module parameter, but this will result in a
warning.
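
A minimal sketch of the new default, assuming the kvm_amd module parameter
named above (the warning emitted on opt-in is not shown):

  /* Default off: enabling it invalidates attestation signatures. */
  static bool sev_es_debug_swap_enabled = false;
  module_param(sev_es_debug_swap_enabled, bool, 0444);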

Fixes: d1f85fbe836e ("KVM: SEV: Enable data breakpoints in SEV-ES")
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-03-09 11:42:25 -05:00
Paolo Bonzini
39fee313fd Merge tag 'kvm-x86-guest_memfd_fixes-6.8' of https://github.com/kvm-x86/linux into HEAD
KVM GUEST_MEMFD fixes for 6.8:

 - Make KVM_MEM_GUEST_MEMFD mutually exclusive with KVM_MEM_READONLY to
   avoid creating ABI that KVM can't sanely support.

 - Update documentation for KVM_SW_PROTECTED_VM to make it abundantly
   clear that such VMs are purely a development and testing vehicle, and
   come with zero guarantees.

 - Limit KVM_SW_PROTECTED_VM guests to the TDP MMU, as the long term plan
   is to support confidential VMs with deterministic private memory (SNP
   and TDX) only in the TDP MMU.

 - Fix a bug in a GUEST_MEMFD negative test that resulted in false passes
   when verifying that KVM_MEM_GUEST_MEMFD memslots can't be dirty logged.
2024-03-09 11:42:17 -05:00
Paolo Bonzini
1b6c146df5 Merge tag 'kvm-x86-fixes-6.8-2' of https://github.com/kvm-x86/linux into HEAD
KVM x86 fixes for 6.8, round 2:

 - When emulating an atomic access, mark the gfn as dirty in the memslot
   to fix a bug where KVM could fail to mark the slot as dirty during live
   migration, ultimately resulting in guest data corruption due to a dirty
   page not being re-copied from the source to the target.

 - Check for mmu_notifier invalidation events before faulting in the pfn,
   and before acquiring mmu_lock, to avoid unnecessary work and lock
   contention.  Contending mmu_lock is especially problematic on preemptible
   kernels, as KVM may yield mmu_lock in response to the contention, which
   severely degrades overall performance due to vCPUs making it difficult
   for the task that triggered invalidation to make forward progress.

   Note, due to another kernel bug, this fix isn't limited to preemptible
   kernels, as any kernel built with CONFIG_PREEMPT_DYNAMIC=y will yield
   contended rwlocks and spinlocks.

   https://lore.kernel.org/all/20240110214723.695930-1-seanjc@google.com
2024-03-09 11:42:06 -05:00
Changbin Du
c0935fca6b x86/sev: Disable KMSAN for memory encryption TUs
Instrumenting sev.c and mem_encrypt_identity.c with KMSAN will result in
a triple-faulting kernel. Some of the code is invoked too early during
boot, before KMSAN is ready.

Disable KMSAN instrumentation for the two translation units.
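
A sketch of the kbuild opt-out, assuming the standard per-object KMSAN
switch in the respective Makefiles:

  KMSAN_SANITIZE_sev.o                  := n
  KMSAN_SANITIZE_mem_encrypt_identity.o := n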

  [ bp: Massage commit message. ]

Signed-off-by: Changbin Du <changbin.du@huawei.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20240308044401.1120395-1-changbin.du@huawei.com
2024-03-08 08:59:22 +01:00
Xin Li (Intel)
c416b5bac6 x86/fred: Fix init_task thread stack pointer initialization
As TOP_OF_KERNEL_STACK_PADDING was defined as 0 on x86_64, it went
unnoticed that the initialization of the .sp field in INIT_THREAD and some
calculations in the low level startup code do not take the padding into
account.

FRED-enabled kernels require 16 bytes of padding, which means that the init
task initialization and the low level startup code use the wrong stack
offset.

Subtract TOP_OF_KERNEL_STACK_PADDING in all affected places to adjust for
this.
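
A hedged sketch of the adjusted init_task stack initialization; the exact
in-tree expression may differ:

  /* Leave room for the FRED stack frame above the initial stack pointer. */
  .sp = (unsigned long)&init_stack + sizeof(init_stack) -
        TOP_OF_KERNEL_STACK_PADDING,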

Fixes: 65c9cc9e2c14 ("x86/fred: Reserve space for the FRED stack frame")
Fixes: 3adee777ad0d ("x86/smpboot: Remove initial_stack on 64-bit")
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Closes: https://lore.kernel.org/oe-lkp/202402262159.183c2a37-lkp@intel.com
Link: https://lore.kernel.org/r/20240304083333.449322-1-xin@zytor.com
2024-03-07 11:55:36 +01:00
Thomas Gleixner
f0551af021 x86/topology: Ignore non-present APIC IDs in a present package
Borislav reported that one of his systems has a broken MADT table which
advertises eight present APICs and 24 non-present APICs in the same
package.

The non-present ones are considered hot-pluggable by the topology
evaluation code, which is obviously bogus as there is no way to hot-plug
within the same package.

As the topology evaluation code accounts for hot-pluggable CPUs in a
package, the maximum number of cores per package is computed wrong, which
in turn causes the uncore performance counter driver to access non-existing
MSRs. It will probably confuse other entities which rely on the maximum
number of cores and threads per package too.

Cure this by ignoring hot-pluggable APIC IDs within a present package.

In theory it would be reasonable to just do this unconditionally, but then
there is this thing called reality^Wvirtualization which ruins
everything. Virtualization is the only existing user of "physical" hotplug
and the virtualization tools allow the above scenario. Whether that is
actually in use or not is unknown.

As it can be argued that the virtualization case is not affected by the
issues which exposed the reported problem, allow the bogosity, for now, if
the kernel determines that it is running in a VM.
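
A hedged sketch of the cure described above; the package-lookup helper is
hypothetical:

  /* Ignore hot-pluggable APIC IDs inside an already present package,
   * except when virtualized, where "physical" hotplug is allowed. */
  if (!present && topo_package_is_present(pkg_id) &&
      !boot_cpu_has(X86_FEATURE_HYPERVISOR))
          return;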

Fixes: 89b0f15f408f ("x86/cpu/topology: Get rid of cpuinfo::x86_max_cores")
Reported-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/87a5nbvccx.ffs@tglx
2024-03-06 14:35:30 +01:00
Thomas Weißschuh
774a86f1c8 x86/nmi: Drop unused declaration of proc_nmi_enabled()
The declaration is unused as the definition got deleted.

Fixes: 5f2b0ba4d94b ("x86, nmi_watchdog: Remove the old nmi_watchdog")
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20240306-const-sysctl-prep-x86-v1-1-f9d1fa38dd2b@weissschuh.net
2024-03-06 10:13:33 +01:00
Linus Torvalds
1c46d04a0d hyperv-fixes for v6.8
-----BEGIN PGP SIGNATURE-----
 
 iQFHBAABCgAxFiEEIbPD0id6easf0xsudhRwX5BBoF4FAmXlZ0gTHHdlaS5saXVA
 a2VybmVsLm9yZwAKCRB2FHBfkEGgXnnUB/oCfw6GxsYL4eAiKcayrU0E7aDZbZzG
 wf/1m3fSiocERGEQqyU7s3ULoba/ejX09nTwV+ZwECbqat64ceUQb5ousf/3kn7i
 vg3kbPKmF79c2DNMnT5+K7gvmhyewm+5r8eCBsOegEqnXv0F3tGjq729Qe+5/SBB
 roP5XHjERpY5yHVsDNsTeQ1Qg+H/Mg/2eLAogSFtY0FXKfNrXXmMAuKwe7UJdWmd
 KIeSA4F18wsohtb4Aq8XLDG8UwmCUaBjzGsBOgjlVLtP2QxyfxswWludVK/fwyVl
 T/xcMW2ZZcK7mWRebqr9iritxbOls8ltbsY3fLENREJShs+JgLs19w8x
 =vyy7
 -----END PGP SIGNATURE-----

Merge tag 'hyperv-fixes-signed-20240303' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux

Pull hyperv fixes from Wei Liu:

 - Multiple fixes, cleanups and documentation updates for Hyper-V core
   code and drivers

* tag 'hyperv-fixes-signed-20240303' of git://git.kernel.org/pub/scm/linux/kernel/git/hyperv/linux:
  Drivers: hv: vmbus: make hv_bus const
  x86/hyperv: Allow 15-bit APIC IDs for VTL platforms
  x86/hyperv: Make encrypted/decrypted changes safe for load_unaligned_zeropad()
  x86/mm: Regularize set_memory_p() parameters and make non-static
  x86/hyperv: Use slow_virt_to_phys() in page transition hypervisor callback
  Documentation: hyperv: Add overview of PCI pass-thru device support
  Drivers: hv: vmbus: Update indentation in create_gpadl_header()
  Drivers: hv: vmbus: Remove duplication and cleanup code in create_gpadl_header()
  fbdev/hyperv_fb: Fix logic error for Gen2 VMs in hvfb_getmem()
  Drivers: hv: vmbus: Calculate ring buffer size for more efficient use of memory
  hv_utils: Allow implicit ICTIMESYNCFLAG_SYNC
2024-03-05 12:38:50 -08:00
Thomas Gleixner
35ce64922c x86/idle: Select idle routine only once
The idle routine selection is done on every CPU bringup operation and
has a guard in place which takes effect after the first invocation, making
every subsequent selection a pointless exercise.

Invoke it once on the boot CPU and mark the related functions __init.
The guard check has to stay as xen_set_default_idle() runs early.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/87edcu6vaq.ffs@tglx
2024-03-04 17:39:24 +01:00
Thomas Gleixner
5f75916ec6 x86/idle: Let prefer_mwait_c1_over_halt() return bool
The return value is truly boolean. Make it so.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20240229142248.518723854@linutronix.de
2024-03-04 17:39:24 +01:00
Thomas Gleixner
f3d7eab7be x86/idle: Cleanup idle_setup()
Updating the static call for x86_idle() from idle_setup() is
counter-intuitive.

Let select_idle_routine() handle it like the other idle choices, which
allows the idle selection to be simplified later on.

While at it, rewrite the comments and return a proper error code instead of -1.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20240229142248.455616019@linutronix.de
2024-03-04 17:39:24 +01:00
Thomas Gleixner
0ab562875c x86/idle: Clean up idle selection
Clean up the code to make it readable. No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20240229142248.392017685@linutronix.de
2024-03-04 17:39:24 +01:00
Thomas Gleixner
cb81deefb5 x86/idle: Sanitize X86_BUG_AMD_E400 handling
amd_e400_idle(), the idle routine for AMD CPUs which are affected by
erratum 400, violates the RCU constraints by invoking tick_broadcast_enter()
and tick_broadcast_exit() after the core code has marked RCU non-idle. The
functions can end up in lockdep or tracing, which rightfully triggers an
RCU warning.

The core code now provides a static branch for conditional invocation of the
broadcast functions.

Remove amd_e400_idle(), enforce default_idle() and enable the static branch
on affected CPUs to cure this.
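
A hedged sketch of the cure; the static key name is hypothetical, while the
x86_idle static call is the one mentioned in the related idle cleanups:

  if (boot_cpu_has_bug(X86_BUG_AMD_E400)) {
          /* Idle now wraps default_idle() in broadcast enter/exit. */
          static_branch_enable(&arch_needs_tick_broadcast);
          static_call_update(x86_idle, default_idle);
  }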

  [ bp: Fold in a fix for a IS_ENABLED() check fail missing a "CONFIG_"
    prefix which tglx spotted. ]

Reported-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/877cim6sis.ffs@tglx
2024-03-04 17:38:06 +01:00
Thomas Gleixner
cad860b595 x86/callthunks: Use EXPORT_PER_CPU_SYMBOL_GPL() for per CPU variables
Sparse complains rightfully about the usage of EXPORT_SYMBOL_GPL() for per
CPU variables:

  callthunks.c:346:20: sparse: warning: incorrect type in initializer (different address spaces)
  callthunks.c:346:20: sparse:    expected void const [noderef] __percpu *__vpp_verify
  callthunks.c:346:20: sparse:    got unsigned long long *

Use EXPORT_PER_CPU_SYMBOL_GPL() instead.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.841915535@linutronix.de
2024-03-04 12:09:13 +01:00
Thomas Gleixner
65efc4dc12 x86/cpu: Provide a declaration for itlb_multihit_kvm_mitigation
Sparse complains rightfully about the missing declaration which has been
placed sloppily into the usage site:

  bugs.c:2223:6: sparse: warning: symbol 'itlb_multihit_kvm_mitigation' was not declared. Should it be static?

Add it to <asm/spec-ctrl.h> where it belongs and remove the one in the KVM code.
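
The declaration, as now placed in <asm/spec-ctrl.h> (sketch):

  extern bool itlb_multihit_kvm_mitigation;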

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.787173239@linutronix.de
2024-03-04 12:09:13 +01:00
Thomas Gleixner
ca3ec9e554 x86/cpu: Use EXPORT_PER_CPU_SYMBOL_GPL() for x86_spec_ctrl_current
Sparse rightfully complains:

  bugs.c:71:9: sparse: warning: incorrect type in initializer (different address spaces)
  bugs.c:71:9: sparse:    expected void const [noderef] __percpu *__vpp_verify
  bugs.c:71:9: sparse:    got unsigned long long *

The reason is that x86_spec_ctrl_current which is a per CPU variable is
exported with EXPORT_SYMBOL_GPL().

Use EXPORT_PER_CPU_SYMBOL_GPL() instead.
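
A minimal illustration of the pattern, using the per CPU definition from
bugs.c:

  DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
  EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current); /* was: EXPORT_SYMBOL_GPL() */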

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.732288812@linutronix.de
2024-03-04 12:09:13 +01:00
Thomas Gleixner
ae6b0195f5 x86/uaccess: Add missing __force to casts in __access_ok() and valid_user_address()
Sparse complains about losing the __user address space due to the cast to
long:

  uaccess_64.h:88:24: sparse: warning: cast removes address space '__user' of expression

Annotate it with __force to tell sparse that this is intentional.
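
A hedged sketch of the annotated check, close to but not necessarily
identical with the in-tree macro:

  /* __force: the removal of the __user address space is intentional. */
  #define valid_user_address(x)   ((__force long)(x) >= 0)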

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.677606054@linutronix.de
2024-03-04 12:09:12 +01:00
Thomas Gleixner
71eb4893cf x86/percpu: Cure per CPU madness on UP
On UP builds Sparse complains rightfully about accesses to cpu_info with
per CPU accessors:

  cacheinfo.c:282:30: sparse: warning: incorrect type in initializer (different address spaces)
  cacheinfo.c:282:30: sparse:    expected void const [noderef] __percpu *__vpp_verify
  cacheinfo.c:282:30: sparse:    got unsigned int *

The reason is that on UP builds cpu_info, which is a per CPU variable on
SMP, is mapped to boot_cpu_data, which is a regular variable. There is a
hideous accessor cpu_data() which tries to hide this, but it's not
sufficient, as some places require raw accessors, and it generates worse
code than the regular per CPU accessors.

Waste sizeof(struct cpuinfo_x86) of memory on UP and provide the per CPU
cpu_info unconditionally. This requires updating the CPU info on the boot
CPU as SMP does. (Ab)use the weakly defined smp_prepare_boot_cpu() function
and implement exactly that.

This allows regular per CPU accessors to be used unconditionally and paves
the way to removing the cpu_data() hackery.
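
A hedged sketch of the idea behind the (ab)use described above:

  /* UP: seed the per CPU cpu_info for CPU 0 exactly like SMP does at boot. */
  void __init smp_prepare_boot_cpu(void)
  {
          per_cpu(cpu_info, 0) = boot_cpu_data;
  }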

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.622511517@linutronix.de
2024-03-04 12:09:07 +01:00
Thomas Gleixner
712610725c smp: Consolidate smp_prepare_boot_cpu()
There is no point in having seven architectures implementing the same empty
stub.

Provide a weak function in the init code and remove the stubs.

This also allows the function to be utilized on UP, which is required to
sanitize the per CPU handling on x86 UP.
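
The weak default in the init code, as a sketch:

  /* Architectures that need boot CPU preparation override this. */
  void __init __weak smp_prepare_boot_cpu(void)
  {
  }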

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.567671691@linutronix.de
2024-03-04 12:01:54 +01:00
Thomas Gleixner
5323922f50 x86/msr: Add missing __percpu annotations
Sparse rightfully complains about using a plain pointer for per CPU
accessors:

  msr-smp.c:15:23: sparse: warning: incorrect type in initializer (different address spaces)
  msr-smp.c:15:23: sparse:    expected void const [noderef] __percpu *__vpp_verify
  msr-smp.c:15:23: sparse:    got struct msr *

Add __percpu annotations to the related datastructure and function
arguments to cure this. This also cures the related sparse warnings at the
callsites in drivers/edac/amd64_edac.c.
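
A hedged illustration of the annotation; the struct layout is abbreviated:

  struct msr_info_example {
          u32 msr_no;
          struct msr __percpu *msrs;      /* was: struct msr *msrs */
  };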

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.513181735@linutronix.de
2024-03-04 12:01:54 +01:00
Thomas Gleixner
154fcf3a78 x86/msr: Prepare for including <linux/percpu.h> into <asm/msr.h>
To clean up the per CPU insanity of UP, which causes sparse to be rightfully
unhappy and prevents the usage of the generic per CPU accessors on cpu_info,
it is necessary to include <linux/percpu.h> into <asm/msr.h>.

Including <linux/percpu.h> into <asm/msr.h> is impossible because it ends
up in header dependency hell. The problem is that <asm/processor.h>
includes <asm/msr.h>. The inclusion of <linux/percpu.h> results in a
compile fail where the compiler can no longer handle an include in
<asm/cpufeature.h> which references boot_cpu_data, which is
defined in <asm/processor.h>.

The only reason why <asm/msr.h> is included in <asm/processor.h> is the
set/get_debugctlmsr() inlines. They are defined there because <asm/processor.h>
is such a nice dumping ground for everything. In fact they obviously belong
in <asm/debugreg.h>.

Move them to <asm/debugreg.h> and fix up the resulting damage, which is just
exposing the reliance on random include chains.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.454678686@linutronix.de
2024-03-04 12:01:39 +01:00
Thomas Gleixner
9eae297d5d perf/x86/amd/uncore: Fix __percpu annotation
The __percpu annotation in struct amd_uncore is confusing Sparse:

  uncore.c:649:10: sparse: warning: incorrect type in initializer (different address spaces)
  uncore.c:649:10: sparse:    expected void const [noderef] __percpu *__vpp_verify
  uncore.c:649:10: sparse:    got union amd_uncore_info *

The reason is that the __percpu annotation sits between the '*'
dereferencing operator and the member name.

Move it before the dereferencing operator to cure this.
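
A minimal before/after illustration (the union body is a stand-in):

  union amd_uncore_info { void *priv; };

  struct amd_uncore_example {
          /* was: union amd_uncore_info * __percpu info; */
          union amd_uncore_info __percpu *info;
  };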

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20240304005104.394845326@linutronix.de
2024-03-04 11:58:36 +01:00
Ingo Molnar
3c94ba5267 Linux 6.8-rc7
-----BEGIN PGP SIGNATURE-----
 
 iQFSBAABCAA8FiEEq68RxlopcLEwq+PEeb4+QwBBGIYFAmXk5XweHHRvcnZhbGRz
 QGxpbnV4LWZvdW5kYXRpb24ub3JnAAoJEHm+PkMAQRiGV7UH/3I5Dt0YoqFYPnTx
 yE06EJVFupqd7nDTDtduynRuMWscOmxZyYdGz8erz1fdzcFDJvlvhYwGviIRleCb
 SH89noxq7vNRsQv1QzQAe8PA3AfgGlMDtDlC/lfQyk56oCtMw3QcfwA7j/+mwCrK
 rIBi1gMlkbEzE1Tj9qAnpmJv4uyyKEvKdMwWtNy6wQWsBN2PZJXyp9LXaaqRGoPF
 B40lJHAXL7R1OpHrhIvUyC6N7BssP0ychVqNO+r4F3kPBOiqfdR1a5LoVHjsGNU3
 qC7lBaUGxBetxHRzYSq0coHDkVSlQ3DlmoMMFWt3jK/Cu1KrFoT2GKtsHHrmOc4V
 5TPEl/o=
 =0vJs
 -----END PGP SIGNATURE-----

Merge tag 'v6.8-rc7' into x86/cleanups, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2024-03-04 11:54:09 +01:00
Linus Torvalds
73d35f8335 - Do not reserve SETUP_RNG_SEED setup data in the e820 map as it should
be used by kexec only
 
 - Make sure MKTME feature detection happens at an earlier time in the
   boot process so that the physical address size supported by the CPU is
   properly corrected and MTRR masks are programmed properly, leading to
   TDX systems booting without disable_mtrr_cleanup on the cmdline
 
 - Make sure the different address sizes supported by the CPU are read
   out as early as possible
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmXkVjkACgkQEsHwGGHe
 VUqSfg//XGV970NhxR3kLW7I+MvQsQDwWA61u+XpyCFdTZDOWKo17ZndR2/exHXI
 fjnpnS8SP3Dgib0wrWfqBAqmcDkN5laBIpvceH+l57NXi0Jep1leSunRBFS22jxT
 1/FqKFlOka9SG4aMJRD4xhG2Y9TO2uLqjPam7gZkdLbI7TIfIACEm4JPqWu4wLQo
 JKeoYmlSkSC5kbHMOXpIKBSzUCm4cKSU39uZRxIhpZf50nXiJf25IAlDFlVZ5uBw
 p5S2Rm4jF4eekRMArG76c61AMsdIg9kcp2OZ4Kop/t1Ouo96pap+h1G6c6Xi73zi
 WX99roTuS5FoBCguDmkxi2Hx8HU8PQGbzIAuwoTtPc6u9XHDdmXQZEvhZxIn/by8
 fjhZd4DCfmqBoasR+AaX8YzK0j5ypp8BKcFCKmvWzqVR+aMB16lxGj62xyUsvbry
 gvd6GezMn8WODXjUOa27gmh+YhOJboX2hozf6FhqItBGEnvGpsWDdMfViO5/AZnk
 T0KZxwH5OpD/CrPG1TYSWynz5vLSHSIaj2Y7iXFHfYu9s6yW3E/jZsWt33Qaag96
 sBt8/YlFR82gU8mbpmg0epJ7s6OLtGmfuoujFMfl0fK+OoLLtzOA+VZPX6Ud0Vrg
 9NMY6Q8szKqDDH68DZuj7OSbf6i9NlZI/AiHpGzH3bT77iHknnI=
 =fmsi
 -----END PGP SIGNATURE-----

Merge tag 'x86_urgent_for_v6.8_rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 fixes from Borislav Petkov:

 - Do not reserve SETUP_RNG_SEED setup data in the e820 map as it should
   be used by kexec only

 - Make sure MKTME feature detection happens at an earlier time in the
   boot process so that the physical address size supported by the CPU
   is properly corrected and MTRR masks are programmed properly, leading
   to TDX systems booting without disable_mtrr_cleanup on the cmdline

 - Make sure the different address sizes supported by the CPU are read
   out as early as possible

* tag 'x86_urgent_for_v6.8_rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/e820: Don't reserve SETUP_RNG_SEED in e820
  x86/cpu/intel: Detect TME keyid bits before setting MTRR mask registers
  x86/cpu: Allow reducing x86_phys_bits during early_identify_cpu()
2024-03-03 09:43:03 -08:00
Jiri Bohac
7fd817c906 x86/e820: Don't reserve SETUP_RNG_SEED in e820
SETUP_RNG_SEED in setup_data is supplied by kexec and should
not be reserved in the e820 map.

Doing so reserves 16 bytes of RAM when booting with kexec.
(16 bytes because data->len is zeroed by parse_setup_data so only
sizeof(setup_data) is reserved.)

When kexec is used repeatedly, each boot adds two entries in the
kexec-provided e820 map as the 16-byte range splits a larger
range of usable memory. Eventually all of the 128 available entries
get used up. The next split will result in losing usable memory
as the new entries cannot be added to the e820 map.
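
A hedged sketch of the skip; the function and type names follow the x86
e820 code, but the exact placement is illustrative:

  /* kexec-only data: do not carve it out of usable RAM. */
  if (data->type != SETUP_RNG_SEED)
          e820__range_update(pa_data, sizeof(*data) + data->len,
                             E820_TYPE_RAM, E820_TYPE_RESERVED_KERN);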

Fixes: 68b8e9713c8e ("x86/setup: Use rng seeds from setup_data")
Signed-off-by: Jiri Bohac <jbohac@suse.cz>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: <stable@kernel.org>
Link: https://lore.kernel.org/r/ZbmOjKnARGiaYBd5@dwarf.suse.cz
2024-03-01 10:27:20 -08:00
Uros Bizjak
e807c2a370 locking/x86: Implement local_xchg() using CMPXCHG without the LOCK prefix
Implement local_xchg() using the CMPXCHG instruction without the LOCK prefix.
XCHG is expensive due to the implied LOCK prefix.  The processor
cannot prefetch cachelines if XCHG is used.
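
A sketch of the implementation, close to the posted patch:

  static inline long local_xchg(local_t *l, long n)
  {
          long c = local_read(l);

          /* Unlocked CMPXCHG loop: no LOCK prefix, cachelines stay prefetchable. */
          do { } while (!local_try_cmpxchg(l, &c, n));

          return c;
  }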

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20240124105816.612670-1-ubizjak@gmail.com
2024-03-01 12:54:25 +01:00
Saurabh Sengar
0d63e4c0eb x86/hyperv: Allow 15-bit APIC IDs for VTL platforms
The current method for signaling the compatibility of a Hyper-V host
with MSIs featuring 15-bit APIC IDs relies on a synthetic cpuid leaf.
However, for higher VTLs, this leaf is not reported, due to the absence
of an IO-APIC.

As an alternative, assume that when running at a high VTL, the host
supports 15-bit APIC IDs. This assumption is safe, as Hyper-V does not
employ any architectural MSIs at higher VTLs.

This unblocks startup of VTL2 environments with more than 256 CPUs.

Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com>
Reviewed-by: Michael Kelley <mhklinux@outlook.com>
Link: https://lore.kernel.org/r/1705341460-18394-1-git-send-email-ssengar@linux.microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Message-ID: <1705341460-18394-1-git-send-email-ssengar@linux.microsoft.com>
2024-03-01 08:36:22 +00:00
Michael Kelley
0f34d11234 x86/hyperv: Make encrypted/decrypted changes safe for load_unaligned_zeropad()
In a CoCo VM, when transitioning memory from encrypted to decrypted, or
vice versa, the caller of set_memory_encrypted() or set_memory_decrypted()
is responsible for ensuring the memory isn't in use and isn't referenced
while the transition is in progress.  The transition has multiple steps,
and the memory is in an inconsistent state until all steps are complete.
A reference while the state is inconsistent could result in an exception
that can't be cleanly fixed up.

However, the kernel load_unaligned_zeropad() mechanism could cause a stray
reference that can't be prevented by the caller of set_memory_encrypted()
or set_memory_decrypted(), so there's specific code to handle this case.
But a CoCo VM running on Hyper-V may be configured to run with a paravisor,
with the #VC or #VE exception routed to the paravisor. There's no
architectural way to forward the exceptions back to the guest kernel, and
in such a case, the load_unaligned_zeropad() specific code doesn't work.

To avoid this problem, mark pages as "not present" while a transition
is in progress. If load_unaligned_zeropad() causes a stray reference, a
normal page fault is generated instead of #VC or #VE, and the
page-fault-based fixup handlers for load_unaligned_zeropad() resolve the
reference. When the encrypted/decrypted transition is complete, mark the
pages as "present" again.

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Link: https://lore.kernel.org/r/20240116022008.1023398-4-mhklinux@outlook.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Message-ID: <20240116022008.1023398-4-mhklinux@outlook.com>
2024-03-01 08:31:42 +00:00
Michael Kelley
030ad7af94 x86/mm: Regularize set_memory_p() parameters and make non-static
set_memory_p() is currently static.  It has parameters that don't
match set_memory_p() under arch/powerpc and that aren't congruent
with the other set_memory_* functions. There's no good reason for
the difference.

Fix this by making the parameters consistent, and update the one
existing call site.  Make the function non-static and add it to
include/asm/set_memory.h so that it is completely parallel to
set_memory_np() and is usable in other modules.

No functional change.
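
The regularized prototype, parallel to set_memory_np(), as a sketch:

  int set_memory_p(unsigned long addr, int numpages);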

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: https://lore.kernel.org/r/20240116022008.1023398-3-mhklinux@outlook.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Message-ID: <20240116022008.1023398-3-mhklinux@outlook.com>
2024-03-01 08:31:41 +00:00
Michael Kelley
9fef276f9f x86/hyperv: Use slow_virt_to_phys() in page transition hypervisor callback
In preparation for temporarily marking pages not present during a
transition between encrypted and decrypted, use slow_virt_to_phys()
in the hypervisor callback. As long as the PFN is correct,
slow_virt_to_phys() works even if the leaf PTE is not present.
The existing functions that depend on vmalloc_to_page() all
require that the leaf PTE be marked present, so they don't work.

Update the comments for slow_virt_to_phys() to note this broader usage
and the requirement to work even if the PTE is not marked present.

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Link: https://lore.kernel.org/r/20240116022008.1023398-2-mhklinux@outlook.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Message-ID: <20240116022008.1023398-2-mhklinux@outlook.com>
2024-03-01 08:31:41 +00:00
Borislav Petkov (AMD)
d7b69b590b x86/sev: Dump SEV_STATUS
It is useful, and will be even more so in the future, to dump the SEV
features enabled according to SEV_STATUS. Do so:

  [    0.542753] Memory Encryption Features active: AMD SEV SEV-ES SEV-SNP
  [    0.544425] SEV: Status: SEV SEV-ES SEV-SNP DebugSwap

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Nikunj A Dadhania <nikunj@amd.com>
Link: https://lore.kernel.org/r/20240219094216.GAZdMieDHKiI8aaP3n@fat_crate.local
2024-02-28 13:39:37 +01:00
Xin Li (Intel)
47403a4b49 x86/nmi: Remove an unnecessary IS_ENABLED(CONFIG_SMP)
IS_ENABLED(CONFIG_SMP) is unnecessary here: smp_processor_id() should
always return zero on UP, and arch_cpu_is_offline() reduces to
!(cpu == 0), so this is a statically false condition on UP.
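
A hedged sketch of the simplified check, per the names in this message:

  /* Statically false on UP, no IS_ENABLED(CONFIG_SMP) needed. */
  if (arch_cpu_is_offline(smp_processor_id()))
          return;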

Suggested-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20240201094604.3918141-1-xin@zytor.com
2024-02-27 22:00:50 +01:00