Merge tag 'amd-drm-next-6.7-2023-10-13' of https://gitlab.freedesktop.org/agd5f/linux into drm-next

amd-drm-next-6.7-2023-10-13:

amdgpu:
- DC replay fixes
- Misc code cleanups and spelling fixes
- Documentation updates
- RAS EEPROM Updates
- FRU EEPROM Updates
- IP discovery updates
- SR-IOV fixes
- RAS updates
- DC PQ fixes
- SMU 13.0.6 updates
- GC 11.5 Support
- NBIO 7.11 Support
- GMC 11 Updates
- Reset fixes
- SMU 11.5 Updates
- SMU 13.0 OD support
- Use flexible arrays for bo list handling
- W=1 Fixes
- SubVP fixes
- DPIA fixes
- DCN 3.5 Support
- Devcoredump fixes
- VPE 6.1 support
- VCN 4.0 Updates
- S/G display fixes
- DML fixes
- DML2 Support
- MST fixes
- VRR fixes
- Enable seamless boot in more cases
- Enable content type property for HDMI
- OLED fixes
- Rework and clean up GPUVM TLB flushing
- DC ODM fixes
- DP 2.x fixes
- AGP aperture fixes
- SDMA firmware loading cleanups
- Cyan Skillfish GPU clock counter fix
- GC 11 GART fix
- Cache GPU fault info for userspace queries
- DC cursor check fixes
- eDP fixes
- DC FP handling fixes
- Variable sized array fixes
- SMU 13.0.x fixes
- IB start and size alignment fixes for VCN
- SMU 14 Support
- Suspend and resume sequence rework
- vkms fix

amdkfd:
- GC 11 fixes
- GC 10 fixes
- Doorbell fixes
- CWSR fixes
- SVM fixes
- Clean up GC info enumeration
- Rework memory limit handling
- Coherent memory handling fixes
- Use partial migrations in GPU faults
- TLB flush fixes
- DMA unmap fixes
- GC 9.4.3 fixes
- SQ interrupt fix
- GTT mapping fix
- GC 11.5 Support

radeon:
- Misc code cleanups
- W=1 Fixes
- Fix possible buffer overflow
- Fix possible NULL pointer dereference

UAPI:
- Add EXT_COHERENT memory allocation flags. These allow for system-scope atomics.
  Proposed userspace: https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/pull/88
- Add support for the new VPE engine. This is a memory-to-memory copy engine with advanced scaling, CSC, and color management features.
  Proposed Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/25713
- Add INFO IOCTL interface to query GPU faults (a usage sketch follows this list).
  Proposed Mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23238
  Proposed libdrm MR: https://gitlab.freedesktop.org/mesa/drm/-/merge_requests/298
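
A minimal usage sketch for the GPU fault query (illustration only, not the
authoritative interface): it assumes the AMDGPU_INFO_GPUVM_FAULT query id and
the struct drm_amdgpu_info_gpuvm_fault layout from the proposed kernel/libdrm
changes above, accessed through libdrm's amdgpu_query_info(). Build against
libdrm_amdgpu (e.g. pkg-config --cflags --libs libdrm_amdgpu).

  /* Sketch: read the most recently recorded GPU VM fault via the amdgpu
   * INFO ioctl.  AMDGPU_INFO_GPUVM_FAULT and the fault struct layout are
   * taken from the proposed MRs and may differ in the final headers.
   */
  #include <stdio.h>
  #include <stdint.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <amdgpu.h>
  #include <amdgpu_drm.h>

  int main(void)
  {
      amdgpu_device_handle dev;
      uint32_t major, minor;
      struct drm_amdgpu_info_gpuvm_fault fault = {0};
      int fd = open("/dev/dri/renderD128", O_RDWR);

      if (fd < 0 || amdgpu_device_initialize(fd, &major, &minor, &dev))
          return 1;

      /* An all-zero result means no fault has been recorded. */
      if (!amdgpu_query_info(dev, AMDGPU_INFO_GPUVM_FAULT,
                             sizeof(fault), &fault))
          printf("last VM fault: addr 0x%llx status 0x%x vmhub 0x%x\n",
                 (unsigned long long)fault.addr,
                 fault.status, fault.vmhub);

      amdgpu_device_deinitialize(dev);
      close(fd);
      return 0;
  }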

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231013175758.1735031-1-alexander.deucher@amd.com
Commit 27442758e9 (Dave Airlie, 2023-10-18 16:08:07 +10:00)
569 changed files with 266191 additions and 5415 deletions


@ -26,12 +26,30 @@ serial_number
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
:doc: serial_number
fru_id
-------------
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
:doc: fru_id
manufacturer
-------------
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_fru_eeprom.c
:doc: manufacturer
unique_id
---------
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: unique_id
board_info
----------
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
:doc: board_info
Accelerated Processing Units (APU) Info
---------------------------------------


@ -64,6 +64,36 @@ gpu_metrics
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: gpu_metrics
fan_curve
---------
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: fan_curve
acoustic_limit_rpm_threshold
----------------------------
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: acoustic_limit_rpm_threshold
acoustic_target_rpm_threshold
-----------------------------
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: acoustic_target_rpm_threshold
fan_target_temperature
----------------------
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: fan_target_temperature
fan_minimum_pwm
---------------
.. kernel-doc:: drivers/gpu/drm/amd/pm/amdgpu_pm.c
:doc: fan_minimum_pwm
GFXOFF
======


@ -98,7 +98,7 @@ amdgpu-y += \
vega20_reg_init.o nbio_v7_4.o nbio_v2_3.o nv.o arct_reg_init.o mxgpu_nv.o \
nbio_v7_2.o hdp_v4_0.o hdp_v5_0.o aldebaran_reg_init.o aldebaran.o soc21.o \
sienna_cichlid.o smu_v13_0_10.o nbio_v4_3.o hdp_v6_0.o nbio_v7_7.o hdp_v5_2.o lsdma_v6_0.o \
nbio_v7_9.o aqua_vanjaram.o
nbio_v7_9.o aqua_vanjaram.o nbio_v7_11.o
# add DF block
amdgpu-y += \
@ -113,11 +113,12 @@ amdgpu-y += \
gfxhub_v1_0.o mmhub_v1_0.o gmc_v9_0.o gfxhub_v1_1.o mmhub_v9_4.o \
gfxhub_v2_0.o mmhub_v2_0.o gmc_v10_0.o gfxhub_v2_1.o mmhub_v2_3.o \
mmhub_v1_7.o gfxhub_v3_0.o mmhub_v3_0.o mmhub_v3_0_2.o gmc_v11_0.o \
mmhub_v3_0_1.o gfxhub_v3_0_3.o gfxhub_v1_2.o mmhub_v1_8.o
mmhub_v3_0_1.o gfxhub_v3_0_3.o gfxhub_v1_2.o mmhub_v1_8.o mmhub_v3_3.o \
gfxhub_v11_5_0.o
# add UMC block
amdgpu-y += \
umc_v6_0.o umc_v6_1.o umc_v6_7.o umc_v8_7.o umc_v8_10.o
umc_v6_0.o umc_v6_1.o umc_v6_7.o umc_v8_7.o umc_v8_10.o umc_v12_0.o
# add IH block
amdgpu-y += \
@ -205,14 +206,27 @@ amdgpu-y += \
vcn_v3_0.o \
vcn_v4_0.o \
vcn_v4_0_3.o \
vcn_v4_0_5.o \
amdgpu_jpeg.o \
jpeg_v1_0.o \
jpeg_v2_0.o \
jpeg_v2_5.o \
jpeg_v3_0.o \
jpeg_v4_0.o \
jpeg_v4_0_3.o
jpeg_v4_0_3.o \
jpeg_v4_0_5.o
# add VPE block
amdgpu-y += \
amdgpu_vpe.o \
vpe_v6_1.o
# add UMSCH block
amdgpu-y += \
amdgpu_umsch_mm.o \
umsch_mm_v4_0.o
#
# add ATHUB block
amdgpu-y += \
athub_v1_0.o \


@ -35,7 +35,7 @@ static bool aldebaran_is_mode2_default(struct amdgpu_reset_control *reset_ctl)
{
struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
if ((adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
if ((amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 2) &&
adev->gmc.xgmi.connected_to_cpu))
return true;
@ -48,27 +48,24 @@ aldebaran_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
{
struct amdgpu_reset_handler *handler;
struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
int i;
if (reset_context->method == AMD_RESET_METHOD_NONE) {
if (aldebaran_is_mode2_default(reset_ctl))
reset_context->method = AMD_RESET_METHOD_MODE2;
else
reset_context->method = amdgpu_asic_reset_method(adev);
}
if (reset_context->method != AMD_RESET_METHOD_NONE) {
dev_dbg(adev->dev, "Getting reset handler for method %d\n",
reset_context->method);
list_for_each_entry(handler, &reset_ctl->reset_handlers,
handler_list) {
for_each_handler(i, handler, reset_ctl) {
if (handler->reset_method == reset_context->method)
return handler;
}
}
if (aldebaran_is_mode2_default(reset_ctl)) {
list_for_each_entry(handler, &reset_ctl->reset_handlers,
handler_list) {
if (handler->reset_method == AMD_RESET_METHOD_MODE2) {
reset_context->method = AMD_RESET_METHOD_MODE2;
return handler;
}
}
}
dev_dbg(adev->dev, "Reset handler not found!\n");
return NULL;
@ -124,9 +121,9 @@ static void aldebaran_async_reset(struct work_struct *work)
struct amdgpu_reset_control *reset_ctl =
container_of(work, struct amdgpu_reset_control, reset_work);
struct amdgpu_device *adev = (struct amdgpu_device *)reset_ctl->handle;
int i;
list_for_each_entry(handler, &reset_ctl->reset_handlers,
handler_list) {
for_each_handler(i, handler, reset_ctl) {
if (handler->reset_method == reset_ctl->active_reset) {
dev_dbg(adev->dev, "Resetting device\n");
handler->do_reset(adev);
@ -157,7 +154,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_control *reset_ctl,
if (reset_device_list == NULL)
return -EINVAL;
if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
if (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 2) &&
reset_context->hive == NULL) {
/* Wrong context, return error */
return -EINVAL;
@ -338,7 +335,7 @@ aldebaran_mode2_restore_hwcontext(struct amdgpu_reset_control *reset_ctl,
if (reset_device_list == NULL)
return -EINVAL;
if (reset_context->reset_req_dev->ip_versions[MP1_HWIP][0] ==
if (amdgpu_ip_version(reset_context->reset_req_dev, MP1_HWIP, 0) ==
IP_VERSION(13, 0, 2) &&
reset_context->hive == NULL) {
/* Wrong context, return error */
@ -395,6 +392,11 @@ static struct amdgpu_reset_handler aldebaran_mode2_handler = {
.do_reset = aldebaran_mode2_reset,
};
static struct amdgpu_reset_handler
*aldebaran_rst_handlers[AMDGPU_RESET_MAX_HANDLERS] = {
&aldebaran_mode2_handler,
};
int aldebaran_reset_init(struct amdgpu_device *adev)
{
struct amdgpu_reset_control *reset_ctl;
@ -408,10 +410,9 @@ int aldebaran_reset_init(struct amdgpu_device *adev)
reset_ctl->active_reset = AMD_RESET_METHOD_NONE;
reset_ctl->get_reset_handler = aldebaran_get_reset_handler;
INIT_LIST_HEAD(&reset_ctl->reset_handlers);
INIT_WORK(&reset_ctl->reset_work, reset_ctl->async_reset);
/* Only mode2 is handled through reset control now */
amdgpu_reset_add_handler(reset_ctl, &aldebaran_mode2_handler);
reset_ctl->reset_handlers = &aldebaran_rst_handlers;
adev->reset_cntl = reset_ctl;


@ -79,6 +79,8 @@
#include "amdgpu_vce.h"
#include "amdgpu_vcn.h"
#include "amdgpu_jpeg.h"
#include "amdgpu_vpe.h"
#include "amdgpu_umsch_mm.h"
#include "amdgpu_gmc.h"
#include "amdgpu_gfx.h"
#include "amdgpu_sdma.h"
@ -242,6 +244,8 @@ extern int amdgpu_num_kcq;
#define AMDGPU_VCNFW_LOG_SIZE (32 * 1024)
extern int amdgpu_vcnfw_log;
extern int amdgpu_sg_display;
extern int amdgpu_umsch_mm;
extern int amdgpu_seamless;
extern int amdgpu_user_partt_mode;
@ -623,6 +627,9 @@ typedef void (*amdgpu_wreg_ext_t)(struct amdgpu_device*, uint64_t, uint32_t);
typedef uint64_t (*amdgpu_rreg64_t)(struct amdgpu_device*, uint32_t);
typedef void (*amdgpu_wreg64_t)(struct amdgpu_device*, uint32_t, uint64_t);
typedef uint64_t (*amdgpu_rreg64_ext_t)(struct amdgpu_device*, uint64_t);
typedef void (*amdgpu_wreg64_ext_t)(struct amdgpu_device*, uint64_t, uint64_t);
typedef uint32_t (*amdgpu_block_rreg_t)(struct amdgpu_device*, uint32_t, uint32_t);
typedef void (*amdgpu_block_wreg_t)(struct amdgpu_device*, uint32_t, uint32_t, uint32_t);
@ -654,6 +661,7 @@ enum amd_hw_ip_block_type {
JPEG_HWIP = VCN_HWIP,
VCN1_HWIP,
VCE_HWIP,
VPE_HWIP,
DF_HWIP,
DCE_HWIP,
OSSSYS_HWIP,
@ -673,10 +681,15 @@ enum amd_hw_ip_block_type {
#define HWIP_MAX_INSTANCE 44
#define HW_ID_MAX 300
#define IP_VERSION(mj, mn, rv) (((mj) << 16) | ((mn) << 8) | (rv))
#define IP_VERSION_MAJ(ver) ((ver) >> 16)
#define IP_VERSION_MIN(ver) (((ver) >> 8) & 0xFF)
#define IP_VERSION_REV(ver) ((ver) & 0xFF)
#define IP_VERSION_FULL(mj, mn, rv, var, srev) \
(((mj) << 24) | ((mn) << 16) | ((rv) << 8) | ((var) << 4) | (srev))
#define IP_VERSION(mj, mn, rv) IP_VERSION_FULL(mj, mn, rv, 0, 0)
#define IP_VERSION_MAJ(ver) ((ver) >> 24)
#define IP_VERSION_MIN(ver) (((ver) >> 16) & 0xFF)
#define IP_VERSION_REV(ver) (((ver) >> 8) & 0xFF)
#define IP_VERSION_VARIANT(ver) (((ver) >> 4) & 0xF)
#define IP_VERSION_SUBREV(ver) ((ver) & 0xF)
#define IP_VERSION_MAJ_MIN_REV(ver) ((ver) >> 8)
struct amdgpu_ip_map_info {
/* Map of logical to actual dev instances/mask */
@ -757,8 +770,8 @@ struct amdgpu_mqd {
#define AMDGPU_RESET_MAGIC_NUM 64
#define AMDGPU_MAX_DF_PERFMONS 4
#define AMDGPU_PRODUCT_NAME_LEN 64
struct amdgpu_reset_domain;
struct amdgpu_fru_info;
/*
* Non-zero (true) if the GPU has VRAM. Zero (false) otherwise.
@ -826,6 +839,8 @@ struct amdgpu_device {
amdgpu_wreg_ext_t pcie_wreg_ext;
amdgpu_rreg64_t pcie_rreg64;
amdgpu_wreg64_t pcie_wreg64;
amdgpu_rreg64_ext_t pcie_rreg64_ext;
amdgpu_wreg64_ext_t pcie_wreg64_ext;
/* protects concurrent UVD register access */
spinlock_t uvd_ctx_idx_lock;
amdgpu_rreg_t uvd_ctx_rreg;
@ -946,6 +961,13 @@ struct amdgpu_device {
/* jpeg */
struct amdgpu_jpeg jpeg;
/* vpe */
struct amdgpu_vpe vpe;
/* umsch */
struct amdgpu_umsch_mm umsch_mm;
bool enable_umsch_mm;
/* firmwares */
struct amdgpu_firmware firmware;
@ -1033,11 +1055,7 @@ struct amdgpu_device {
bool ucode_sysfs_en;
/* Chip product information */
char product_number[20];
char product_name[AMDGPU_PRODUCT_NAME_LEN];
char serial[20];
struct amdgpu_fru_info *fru_info;
atomic_t throttling_logging_enabled;
struct ratelimit_state throttling_logging_rs;
uint32_t ras_hw_enabled;
@ -1067,11 +1085,6 @@ struct amdgpu_device {
uint32_t *reset_dump_reg_list;
uint32_t *reset_dump_reg_value;
int num_regs;
#ifdef CONFIG_DEV_COREDUMP
struct amdgpu_task_info reset_task_info;
bool reset_vram_lost;
struct timespec64 reset_time;
#endif
bool scpm_enabled;
uint32_t scpm_status;
@ -1082,8 +1095,31 @@ struct amdgpu_device {
bool dc_enabled;
/* Mask of active clusters */
uint32_t aid_mask;
/* Debug */
bool debug_vm;
bool debug_largebar;
bool debug_disable_soft_recovery;
};
static inline uint32_t amdgpu_ip_version(const struct amdgpu_device *adev,
uint8_t ip, uint8_t inst)
{
/* This considers only major/minor/rev and ignores
* subrevision/variant fields.
*/
return adev->ip_versions[ip][inst] & ~0xFFU;
}
#ifdef CONFIG_DEV_COREDUMP
struct amdgpu_coredump_info {
struct amdgpu_device *adev;
struct amdgpu_task_info reset_task_info;
struct timespec64 reset_time;
bool reset_vram_lost;
};
#endif
static inline struct amdgpu_device *drm_to_adev(struct drm_device *ddev)
{
return container_of(ddev, struct amdgpu_device, ddev);
@ -1134,10 +1170,14 @@ u32 amdgpu_device_indirect_rreg(struct amdgpu_device *adev,
u32 reg_addr);
u64 amdgpu_device_indirect_rreg64(struct amdgpu_device *adev,
u32 reg_addr);
u64 amdgpu_device_indirect_rreg64_ext(struct amdgpu_device *adev,
u64 reg_addr);
void amdgpu_device_indirect_wreg(struct amdgpu_device *adev,
u32 reg_addr, u32 reg_data);
void amdgpu_device_indirect_wreg64(struct amdgpu_device *adev,
u32 reg_addr, u64 reg_data);
void amdgpu_device_indirect_wreg64_ext(struct amdgpu_device *adev,
u64 reg_addr, u64 reg_data);
u32 amdgpu_device_get_rev_id(struct amdgpu_device *adev);
bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type);
bool amdgpu_device_has_dc_support(struct amdgpu_device *adev);
@ -1180,6 +1220,8 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
#define WREG32_PCIE_EXT(reg, v) adev->pcie_wreg_ext(adev, (reg), (v))
#define RREG64_PCIE(reg) adev->pcie_rreg64(adev, (reg))
#define WREG64_PCIE(reg, v) adev->pcie_wreg64(adev, (reg), (v))
#define RREG64_PCIE_EXT(reg) adev->pcie_rreg64_ext(adev, (reg))
#define WREG64_PCIE_EXT(reg, v) adev->pcie_wreg64_ext(adev, (reg), (v))
#define RREG32_SMC(reg) adev->smc_rreg(adev, (reg))
#define WREG32_SMC(reg, v) adev->smc_wreg(adev, (reg), (v))
#define RREG32_UVD_CTX(reg) adev->uvd_ctx_rreg(adev, (reg))
@ -1275,15 +1317,13 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
((adev)->asic_funcs->update_umd_stable_pstate ? (adev)->asic_funcs->update_umd_stable_pstate((adev), (enter)) : 0)
#define amdgpu_asic_query_video_codecs(adev, e, c) (adev)->asic_funcs->query_video_codecs((adev), (e), (c))
#define amdgpu_inc_vram_lost(adev) atomic_inc(&((adev)->vram_lost_counter));
#define amdgpu_inc_vram_lost(adev) atomic_inc(&((adev)->vram_lost_counter))
#define BIT_MASK_UPPER(i) ((i) >= BITS_PER_LONG ? 0 : ~0UL << (i))
#define for_each_inst(i, inst_mask) \
for (i = ffs(inst_mask); i-- != 0; \
i = ffs(inst_mask & BIT_MASK_UPPER(i + 1)))
#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
/* Common functions */
bool amdgpu_device_has_job_running(struct amdgpu_device *adev);
bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev);
@ -1293,6 +1333,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
int amdgpu_device_pci_reset(struct amdgpu_device *adev);
bool amdgpu_device_need_post(struct amdgpu_device *adev);
bool amdgpu_device_seamless_boot_supported(struct amdgpu_device *adev);
bool amdgpu_device_pcie_dynamic_switching_supported(void);
bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev);
bool amdgpu_device_aspm_support_quirk(void);
@ -1367,6 +1408,7 @@ void amdgpu_driver_postclose_kms(struct drm_device *dev,
void amdgpu_driver_release_kms(struct drm_device *dev);
int amdgpu_device_ip_suspend(struct amdgpu_device *adev);
int amdgpu_device_prepare(struct drm_device *dev);
int amdgpu_device_suspend(struct drm_device *dev, bool fbcon);
int amdgpu_device_resume(struct drm_device *dev, bool fbcon);
u32 amdgpu_get_vblank_counter_kms(struct drm_crtc *crtc);


@ -28,6 +28,7 @@
#include "amdgpu.h"
#include "amdgpu_gfx.h"
#include "amdgpu_dma_buf.h"
#include <drm/ttm/ttm_tt.h>
#include <linux/module.h>
#include <linux/dma-buf.h>
#include "amdgpu_xgmi.h"
@ -164,7 +165,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
*/
bitmap_complement(gpu_resources.cp_queue_bitmap,
adev->gfx.mec_bitmap[0].queue_bitmap,
KGD_MAX_QUEUES);
AMDGPU_MAX_QUEUES);
/* According to linux/bitmap.h we shouldn't use bitmap_clear if
* nbits is not compile time constant
@ -172,7 +173,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
last_valid_bit = 1 /* only first MEC can have compute queues */
* adev->gfx.mec.num_pipe_per_mec
* adev->gfx.mec.num_queue_per_pipe;
for (i = last_valid_bit; i < KGD_MAX_QUEUES; ++i)
for (i = last_valid_bit; i < AMDGPU_MAX_QUEUES; ++i)
clear_bit(i, gpu_resources.cp_queue_bitmap);
amdgpu_doorbell_get_kfd_info(adev,
@ -467,28 +468,6 @@ uint32_t amdgpu_amdkfd_get_max_engine_clock_in_mhz(struct amdgpu_device *adev)
return 100;
}
void amdgpu_amdkfd_get_cu_info(struct amdgpu_device *adev, struct kfd_cu_info *cu_info)
{
struct amdgpu_cu_info acu_info = adev->gfx.cu_info;
memset(cu_info, 0, sizeof(*cu_info));
if (sizeof(cu_info->cu_bitmap) != sizeof(acu_info.bitmap))
return;
cu_info->cu_active_number = acu_info.number;
cu_info->cu_ao_mask = acu_info.ao_cu_mask;
memcpy(&cu_info->cu_bitmap[0], &acu_info.bitmap[0],
sizeof(cu_info->cu_bitmap));
cu_info->num_shader_engines = adev->gfx.config.max_shader_engines;
cu_info->num_shader_arrays_per_engine = adev->gfx.config.max_sh_per_se;
cu_info->num_cu_per_sh = adev->gfx.config.max_cu_per_sh;
cu_info->simd_per_cu = acu_info.simd_per_cu;
cu_info->max_waves_per_simd = acu_info.max_waves_per_simd;
cu_info->wave_front_size = acu_info.wave_front_size;
cu_info->max_scratch_slots_per_cu = acu_info.max_scratch_slots_per_cu;
cu_info->lds_size = acu_info.lds_size;
}
int amdgpu_amdkfd_get_dmabuf_info(struct amdgpu_device *adev, int dma_buf_fd,
struct amdgpu_device **dmabuf_adev,
uint64_t *bo_size, void *metadata_buffer,
@ -704,12 +683,19 @@ err:
void amdgpu_amdkfd_set_compute_idle(struct amdgpu_device *adev, bool idle)
{
enum amd_powergating_state state = idle ? AMD_PG_STATE_GATE : AMD_PG_STATE_UNGATE;
/* Temporary workaround to fix issues observed in some
* compute applications when GFXOFF is enabled on GFX11.
*/
if (IP_VERSION_MAJ(adev->ip_versions[GC_HWIP][0]) == 11) {
if (IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 11) {
pr_debug("GFXOFF is %s\n", idle ? "enabled" : "disabled");
amdgpu_gfx_off_ctrl(adev, idle);
} else if ((IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 9) &&
(adev->flags & AMD_IS_APU)) {
/* Disable GFXOFF and PG. Temporary workaround
* to fix some compute applications issue on GFX9.
*/
adev->ip_blocks[AMD_IP_BLOCK_TYPE_GFX].version->funcs->set_powergating_state((void *)adev, state);
}
amdgpu_dpm_switch_power_profile(adev,
PP_SMC_POWER_PROFILE_COMPUTE,
@ -805,11 +791,23 @@ void amdgpu_amdkfd_unlock_kfd(struct amdgpu_device *adev)
u64 amdgpu_amdkfd_xcp_memory_size(struct amdgpu_device *adev, int xcp_id)
{
u64 tmp;
s8 mem_id = KFD_XCP_MEM_ID(adev, xcp_id);
u64 tmp;
if (adev->gmc.num_mem_partitions && xcp_id >= 0 && mem_id >= 0) {
tmp = adev->gmc.mem_partitions[mem_id].size;
if (adev->gmc.is_app_apu && adev->gmc.num_mem_partitions == 1) {
/* In NPS1 mode, we should restrict the vram reporting
* tied to the ttm_pages_limit which is 1/2 of the system
* memory. For other partition modes, the HBM is uniformly
* divided already per numa node reported. If user wants to
* go beyond the default ttm limit and maximize the ROCm
* allocations, they can go up to max ttm and sysmem limits.
*/
tmp = (ttm_tt_pages_limit() << PAGE_SHIFT) / num_online_nodes();
} else {
tmp = adev->gmc.mem_partitions[mem_id].size;
}
do_div(tmp, adev->xcp_mgr->num_xcp_per_mem_partition);
return ALIGN_DOWN(tmp, PAGE_SIZE);
} else {


@ -235,8 +235,6 @@ void amdgpu_amdkfd_get_local_mem_info(struct amdgpu_device *adev,
uint64_t amdgpu_amdkfd_get_gpu_clock_counter(struct amdgpu_device *adev);
uint32_t amdgpu_amdkfd_get_max_engine_clock_in_mhz(struct amdgpu_device *adev);
void amdgpu_amdkfd_get_cu_info(struct amdgpu_device *adev,
struct kfd_cu_info *cu_info);
int amdgpu_amdkfd_get_dmabuf_info(struct amdgpu_device *adev, int dma_buf_fd,
struct amdgpu_device **dmabuf_adev,
uint64_t *bo_size, void *metadata_buffer,
@ -303,6 +301,7 @@ int amdgpu_amdkfd_gpuvm_map_memory_to_gpu(struct amdgpu_device *adev,
struct kgd_mem *mem, void *drm_priv);
int amdgpu_amdkfd_gpuvm_unmap_memory_from_gpu(
struct amdgpu_device *adev, struct kgd_mem *mem, void *drm_priv);
void amdgpu_amdkfd_gpuvm_dmaunmap_mem(struct kgd_mem *mem, void *drm_priv);
int amdgpu_amdkfd_gpuvm_sync_memory(
struct amdgpu_device *adev, struct kgd_mem *mem, bool intr);
int amdgpu_amdkfd_gpuvm_map_gtt_bo_to_kernel(struct kgd_mem *mem,


@ -658,7 +658,7 @@ static int kgd_gfx_v11_validate_trap_override_request(struct amdgpu_device *adev
KFD_DBG_TRAP_MASK_DBG_ADDRESS_WATCH |
KFD_DBG_TRAP_MASK_DBG_MEMORY_VIOLATION;
if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(11, 0, 4))
if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 4))
*trap_mask_supported |= KFD_DBG_TRAP_MASK_TRAP_ON_WAVE_START |
KFD_DBG_TRAP_MASK_TRAP_ON_WAVE_END;


@ -677,7 +677,7 @@ void kgd_gfx_v9_set_wave_launch_stall(struct amdgpu_device *adev,
int i;
uint32_t data = RREG32(SOC15_REG_OFFSET(GC, 0, mmSPI_GDBG_WAVE_CNTL));
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 1))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 1))
data = REG_SET_FIELD(data, SPI_GDBG_WAVE_CNTL, STALL_VMID,
stall ? 1 << vmid : 0);
else
@ -1037,7 +1037,7 @@ void kgd_gfx_v9_get_cu_occupancy(struct amdgpu_device *adev, int pasid,
int pasid_tmp;
int max_queue_cnt;
int vmid_wave_cnt = 0;
DECLARE_BITMAP(cp_queue_bitmap, KGD_MAX_QUEUES);
DECLARE_BITMAP(cp_queue_bitmap, AMDGPU_MAX_QUEUES);
lock_spi_csq_mutexes(adev);
soc15_grbm_select(adev, 1, 0, 0, 0, inst);
@ -1047,7 +1047,7 @@ void kgd_gfx_v9_get_cu_occupancy(struct amdgpu_device *adev, int pasid,
* to get number of waves in flight
*/
bitmap_complement(cp_queue_bitmap, adev->gfx.mec_bitmap[0].queue_bitmap,
KGD_MAX_QUEUES);
AMDGPU_MAX_QUEUES);
max_queue_cnt = adev->gfx.mec.num_pipe_per_mec *
adev->gfx.mec.num_queue_per_pipe;
sh_cnt = adev->gfx.config.max_sh_per_se;


@ -44,6 +44,7 @@
* changes to accumulate
*/
#define AMDGPU_USERPTR_RESTORE_DELAY_MS 1
#define AMDGPU_RESERVE_MEM_LIMIT (3UL << 29)
/*
* Align VRAM availability to 2MB to avoid fragmentation caused by 4K allocations in the tail 2MB
@ -117,11 +118,16 @@ void amdgpu_amdkfd_gpuvm_init_mem_limits(void)
return;
si_meminfo(&si);
mem = si.freeram - si.freehigh;
mem = si.totalram - si.totalhigh;
mem *= si.mem_unit;
spin_lock_init(&kfd_mem_limit.mem_limit_lock);
kfd_mem_limit.max_system_mem_limit = mem - (mem >> 4);
kfd_mem_limit.max_system_mem_limit = mem - (mem >> 6);
if (kfd_mem_limit.max_system_mem_limit < 2 * AMDGPU_RESERVE_MEM_LIMIT)
kfd_mem_limit.max_system_mem_limit >>= 1;
else
kfd_mem_limit.max_system_mem_limit -= AMDGPU_RESERVE_MEM_LIMIT;
kfd_mem_limit.max_ttm_mem_limit = ttm_tt_pages_limit() << PAGE_SHIFT;
pr_debug("Kernel memory limit %lluM, TTM limit %lluM\n",
(kfd_mem_limit.max_system_mem_limit >> 20),
@ -733,7 +739,7 @@ kfd_mem_dmaunmap_sg_bo(struct kgd_mem *mem,
enum dma_data_direction dir;
if (unlikely(!ttm->sg)) {
pr_err("SG Table of BO is UNEXPECTEDLY NULL");
pr_debug("SG Table of BO is NULL");
return;
}
@ -828,6 +834,7 @@ static int kfd_mem_attach(struct amdgpu_device *adev, struct kgd_mem *mem,
uint64_t va = mem->va;
struct kfd_mem_attachment *attachment[2] = {NULL, NULL};
struct amdgpu_bo *bo[2] = {NULL, NULL};
struct amdgpu_bo_va *bo_va;
bool same_hive = false;
int i, ret;
@ -866,9 +873,10 @@ static int kfd_mem_attach(struct amdgpu_device *adev, struct kgd_mem *mem,
if ((adev == bo_adev && !(mem->alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_MMIO_REMAP)) ||
(amdgpu_ttm_tt_get_usermm(mem->bo->tbo.ttm) && reuse_dmamap(adev, bo_adev)) ||
same_hive) {
(mem->domain == AMDGPU_GEM_DOMAIN_GTT && reuse_dmamap(adev, bo_adev)) ||
same_hive) {
/* Mappings on the local GPU, or VRAM mappings in the
* local hive, or userptr mapping can reuse dma map
* local hive, or userptr, or GTT mapping can reuse dma map
* address space share the original BO
*/
attachment[i]->type = KFD_MEM_ATT_SHARED;
@ -914,7 +922,12 @@ static int kfd_mem_attach(struct amdgpu_device *adev, struct kgd_mem *mem,
pr_debug("Unable to reserve BO during memory attach");
goto unwind;
}
attachment[i]->bo_va = amdgpu_vm_bo_add(adev, vm, bo[i]);
bo_va = amdgpu_vm_bo_find(vm, bo[i]);
if (!bo_va)
bo_va = amdgpu_vm_bo_add(adev, vm, bo[i]);
else
++bo_va->ref_count;
attachment[i]->bo_va = bo_va;
amdgpu_bo_unreserve(bo[i]);
if (unlikely(!attachment[i]->bo_va)) {
ret = -ENOMEM;
@ -938,7 +951,8 @@ unwind:
continue;
if (attachment[i]->bo_va) {
amdgpu_bo_reserve(bo[i], true);
amdgpu_vm_bo_del(adev, attachment[i]->bo_va);
if (--attachment[i]->bo_va->ref_count == 0)
amdgpu_vm_bo_del(adev, attachment[i]->bo_va);
amdgpu_bo_unreserve(bo[i]);
list_del(&attachment[i]->list);
}
@ -955,7 +969,8 @@ static void kfd_mem_detach(struct kfd_mem_attachment *attachment)
pr_debug("\t remove VA 0x%llx in entry %p\n",
attachment->va, attachment);
amdgpu_vm_bo_del(attachment->adev, attachment->bo_va);
if (--attachment->bo_va->ref_count == 0)
amdgpu_vm_bo_del(attachment->adev, attachment->bo_va);
drm_gem_object_put(&bo->tbo.base);
list_del(&attachment->list);
kfree(attachment);
@ -1201,8 +1216,6 @@ static void unmap_bo_from_gpuvm(struct kgd_mem *mem,
amdgpu_vm_clear_freed(adev, vm, &bo_va->last_pt_update);
amdgpu_sync_fence(sync, bo_va->last_pt_update);
kfd_mem_dmaunmap_attachment(mem, entry);
}
static int update_gpuvm_pte(struct kgd_mem *mem,
@ -1257,6 +1270,7 @@ static int map_bo_to_gpuvm(struct kgd_mem *mem,
update_gpuvm_pte_failed:
unmap_bo_from_gpuvm(mem, entry, sync);
kfd_mem_dmaunmap_attachment(mem, entry);
return ret;
}
@ -1690,6 +1704,8 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
if (flags & KFD_IOC_ALLOC_MEM_FLAGS_COHERENT)
alloc_flags |= AMDGPU_GEM_CREATE_COHERENT;
if (flags & KFD_IOC_ALLOC_MEM_FLAGS_EXT_COHERENT)
alloc_flags |= AMDGPU_GEM_CREATE_EXT_COHERENT;
if (flags & KFD_IOC_ALLOC_MEM_FLAGS_UNCACHED)
alloc_flags |= AMDGPU_GEM_CREATE_UNCACHED;
@ -1860,8 +1876,10 @@ int amdgpu_amdkfd_gpuvm_free_memory_of_gpu(
mem->va + bo_size * (1 + mem->aql_queue));
/* Remove from VM internal data structures */
list_for_each_entry_safe(entry, tmp, &mem->attachments, list)
list_for_each_entry_safe(entry, tmp, &mem->attachments, list) {
kfd_mem_dmaunmap_attachment(mem, entry);
kfd_mem_detach(entry);
}
ret = unreserve_bo_and_vms(&ctx, false, false);
@ -2035,6 +2053,23 @@ out:
return ret;
}
void amdgpu_amdkfd_gpuvm_dmaunmap_mem(struct kgd_mem *mem, void *drm_priv)
{
struct kfd_mem_attachment *entry;
struct amdgpu_vm *vm;
vm = drm_priv_to_vm(drm_priv);
mutex_lock(&mem->lock);
list_for_each_entry(entry, &mem->attachments, list) {
if (entry->bo_va->base.vm == vm)
kfd_mem_dmaunmap_attachment(mem, entry);
}
mutex_unlock(&mem->lock);
}
int amdgpu_amdkfd_gpuvm_unmap_memory_from_gpu(
struct amdgpu_device *adev, struct kgd_mem *mem, void *drm_priv)
{


@ -1776,7 +1776,7 @@ static ssize_t amdgpu_atombios_get_vbios_version(struct device *dev,
struct amdgpu_device *adev = drm_to_adev(ddev);
struct atom_context *ctx = adev->mode_info.atom_context;
return sysfs_emit(buf, "%s\n", ctx->vbios_ver_str);
return sysfs_emit(buf, "%s\n", ctx->vbios_pn);
}
static DEVICE_ATTR(vbios_version, 0444, amdgpu_atombios_get_vbios_version,


@ -75,27 +75,17 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
struct amdgpu_bo_list_entry *array;
struct amdgpu_bo_list *list;
uint64_t total_size = 0;
size_t size;
unsigned i;
int r;
if (num_entries > (SIZE_MAX - sizeof(struct amdgpu_bo_list))
/ sizeof(struct amdgpu_bo_list_entry))
return -EINVAL;
size = sizeof(struct amdgpu_bo_list);
size += num_entries * sizeof(struct amdgpu_bo_list_entry);
list = kvmalloc(size, GFP_KERNEL);
list = kvzalloc(struct_size(list, entries, num_entries), GFP_KERNEL);
if (!list)
return -ENOMEM;
kref_init(&list->refcount);
list->gds_obj = NULL;
list->gws_obj = NULL;
list->oa_obj = NULL;
array = amdgpu_bo_list_array_entry(list, 0);
memset(array, 0, num_entries * sizeof(struct amdgpu_bo_list_entry));
list->num_entries = num_entries;
array = list->entries;
for (i = 0; i < num_entries; ++i) {
struct amdgpu_bo_list_entry *entry;
@ -140,7 +130,6 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev, struct drm_file *filp,
}
list->first_userptr = first_userptr;
list->num_entries = num_entries;
sort(array, last_entry, sizeof(struct amdgpu_bo_list_entry),
amdgpu_bo_list_entry_cmp, NULL);


@ -55,6 +55,8 @@ struct amdgpu_bo_list {
/* Protect access during command submission.
*/
struct mutex bo_list_mutex;
struct amdgpu_bo_list_entry entries[] __counted_by(num_entries);
};
int amdgpu_bo_list_get(struct amdgpu_fpriv *fpriv, int id,
@ -69,22 +71,14 @@ int amdgpu_bo_list_create(struct amdgpu_device *adev,
size_t num_entries,
struct amdgpu_bo_list **list);
static inline struct amdgpu_bo_list_entry *
amdgpu_bo_list_array_entry(struct amdgpu_bo_list *list, unsigned index)
{
struct amdgpu_bo_list_entry *array = (void *)&list[1];
return &array[index];
}
#define amdgpu_bo_list_for_each_entry(e, list) \
for (e = amdgpu_bo_list_array_entry(list, 0); \
e != amdgpu_bo_list_array_entry(list, (list)->num_entries); \
for (e = list->entries; \
e != &list->entries[list->num_entries]; \
++e)
#define amdgpu_bo_list_for_each_userptr_entry(e, list) \
for (e = amdgpu_bo_list_array_entry(list, (list)->first_userptr); \
e != amdgpu_bo_list_array_entry(list, (list)->num_entries); \
for (e = &list->entries[list->first_userptr]; \
e != &list->entries[list->num_entries]; \
++e)
#endif


@ -264,16 +264,9 @@ struct edid *amdgpu_connector_edid(struct drm_connector *connector)
static struct edid *
amdgpu_connector_get_hardcoded_edid(struct amdgpu_device *adev)
{
struct edid *edid;
if (adev->mode_info.bios_hardcoded_edid) {
edid = kmalloc(adev->mode_info.bios_hardcoded_edid_size, GFP_KERNEL);
if (edid) {
memcpy((unsigned char *)edid,
(unsigned char *)adev->mode_info.bios_hardcoded_edid,
adev->mode_info.bios_hardcoded_edid_size);
return edid;
}
return kmemdup((unsigned char *)adev->mode_info.bios_hardcoded_edid,
adev->mode_info.bios_hardcoded_edid_size, GFP_KERNEL);
}
return NULL;
}


@ -1151,7 +1151,7 @@ static int amdgpu_cs_vm_handling(struct amdgpu_cs_parser *p)
job->vm_pd_addr = amdgpu_gmc_pd_addr(vm->root.bo);
}
if (amdgpu_vm_debug) {
if (adev->debug_vm) {
/* Invalidate all BOs to test for userspace bugs */
amdgpu_bo_list_for_each_entry(e, p->bo_list) {
struct amdgpu_bo *bo = e->bo;
@ -1392,8 +1392,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
r = amdgpu_cs_parser_init(&parser, adev, filp, data);
if (r) {
if (printk_ratelimit())
DRM_ERROR("Failed to initialize parser %d!\n", r);
DRM_ERROR_RATELIMITED("Failed to initialize parser %d!\n", r);
return r;
}


@ -42,6 +42,7 @@ const unsigned int amdgpu_ctx_num_entities[AMDGPU_HW_IP_NUM] = {
[AMDGPU_HW_IP_VCN_DEC] = 1,
[AMDGPU_HW_IP_VCN_ENC] = 1,
[AMDGPU_HW_IP_VCN_JPEG] = 1,
[AMDGPU_HW_IP_VPE] = 1,
};
bool amdgpu_ctx_priority_is_valid(int32_t ctx_prio)


@ -162,6 +162,74 @@ static ssize_t amdgpu_device_get_pcie_replay_count(struct device *dev,
static DEVICE_ATTR(pcie_replay_count, 0444,
amdgpu_device_get_pcie_replay_count, NULL);
/**
* DOC: board_info
*
* The amdgpu driver provides a sysfs API for giving board related information.
* It provides the form factor information in the format
*
* type : form factor
*
* Possible form factor values
*
* - "cem" - PCIE CEM card
* - "oam" - Open Compute Accelerator Module
* - "unknown" - Not known
*
*/
static ssize_t amdgpu_device_get_board_info(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_to_adev(ddev);
enum amdgpu_pkg_type pkg_type = AMDGPU_PKG_TYPE_CEM;
const char *pkg;
if (adev->smuio.funcs && adev->smuio.funcs->get_pkg_type)
pkg_type = adev->smuio.funcs->get_pkg_type(adev);
switch (pkg_type) {
case AMDGPU_PKG_TYPE_CEM:
pkg = "cem";
break;
case AMDGPU_PKG_TYPE_OAM:
pkg = "oam";
break;
default:
pkg = "unknown";
break;
}
return sysfs_emit(buf, "%s : %s\n", "type", pkg);
}
static DEVICE_ATTR(board_info, 0444, amdgpu_device_get_board_info, NULL);
static struct attribute *amdgpu_board_attrs[] = {
&dev_attr_board_info.attr,
NULL,
};
static umode_t amdgpu_board_attrs_is_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
struct device *dev = kobj_to_dev(kobj);
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_to_adev(ddev);
if (adev->flags & AMD_IS_APU)
return 0;
return attr->mode;
}
static const struct attribute_group amdgpu_board_attrs_group = {
.attrs = amdgpu_board_attrs,
.is_visible = amdgpu_board_attrs_is_visible
};
static void amdgpu_device_get_pcie_info(struct amdgpu_device *adev);
@ -507,6 +575,7 @@ void amdgpu_device_wreg(struct amdgpu_device *adev,
* @adev: amdgpu_device pointer
* @reg: mmio/rlc register
* @v: value to write
* @xcc_id: xcc accelerated compute core id
*
* this function is invoked only for the debugfs register access
*/
@ -571,7 +640,7 @@ u32 amdgpu_device_indirect_rreg_ext(struct amdgpu_device *adev,
pcie_index = adev->nbio.funcs->get_pcie_index_offset(adev);
pcie_data = adev->nbio.funcs->get_pcie_data_offset(adev);
if (adev->nbio.funcs->get_pcie_index_hi_offset)
if ((reg_addr >> 32) && (adev->nbio.funcs->get_pcie_index_hi_offset))
pcie_index_hi = adev->nbio.funcs->get_pcie_index_hi_offset(adev);
else
pcie_index_hi = 0;
@ -638,6 +707,56 @@ u64 amdgpu_device_indirect_rreg64(struct amdgpu_device *adev,
return r;
}
u64 amdgpu_device_indirect_rreg64_ext(struct amdgpu_device *adev,
u64 reg_addr)
{
unsigned long flags, pcie_index, pcie_data;
unsigned long pcie_index_hi = 0;
void __iomem *pcie_index_offset;
void __iomem *pcie_index_hi_offset;
void __iomem *pcie_data_offset;
u64 r;
pcie_index = adev->nbio.funcs->get_pcie_index_offset(adev);
pcie_data = adev->nbio.funcs->get_pcie_data_offset(adev);
if ((reg_addr >> 32) && (adev->nbio.funcs->get_pcie_index_hi_offset))
pcie_index_hi = adev->nbio.funcs->get_pcie_index_hi_offset(adev);
spin_lock_irqsave(&adev->pcie_idx_lock, flags);
pcie_index_offset = (void __iomem *)adev->rmmio + pcie_index * 4;
pcie_data_offset = (void __iomem *)adev->rmmio + pcie_data * 4;
if (pcie_index_hi != 0)
pcie_index_hi_offset = (void __iomem *)adev->rmmio +
pcie_index_hi * 4;
/* read low 32 bits */
writel(reg_addr, pcie_index_offset);
readl(pcie_index_offset);
if (pcie_index_hi != 0) {
writel((reg_addr >> 32) & 0xff, pcie_index_hi_offset);
readl(pcie_index_hi_offset);
}
r = readl(pcie_data_offset);
/* read high 32 bits */
writel(reg_addr + 4, pcie_index_offset);
readl(pcie_index_offset);
if (pcie_index_hi != 0) {
writel((reg_addr >> 32) & 0xff, pcie_index_hi_offset);
readl(pcie_index_hi_offset);
}
r |= ((u64)readl(pcie_data_offset) << 32);
/* clear the high bits */
if (pcie_index_hi != 0) {
writel(0, pcie_index_hi_offset);
readl(pcie_index_hi_offset);
}
spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
return r;
}
/**
* amdgpu_device_indirect_wreg - write an indirect register address
*
@ -677,7 +796,7 @@ void amdgpu_device_indirect_wreg_ext(struct amdgpu_device *adev,
pcie_index = adev->nbio.funcs->get_pcie_index_offset(adev);
pcie_data = adev->nbio.funcs->get_pcie_data_offset(adev);
if (adev->nbio.funcs->get_pcie_index_hi_offset)
if ((reg_addr >> 32) && (adev->nbio.funcs->get_pcie_index_hi_offset))
pcie_index_hi = adev->nbio.funcs->get_pcie_index_hi_offset(adev);
else
pcie_index_hi = 0;
@ -742,6 +861,55 @@ void amdgpu_device_indirect_wreg64(struct amdgpu_device *adev,
spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
}
void amdgpu_device_indirect_wreg64_ext(struct amdgpu_device *adev,
u64 reg_addr, u64 reg_data)
{
unsigned long flags, pcie_index, pcie_data;
unsigned long pcie_index_hi = 0;
void __iomem *pcie_index_offset;
void __iomem *pcie_index_hi_offset;
void __iomem *pcie_data_offset;
pcie_index = adev->nbio.funcs->get_pcie_index_offset(adev);
pcie_data = adev->nbio.funcs->get_pcie_data_offset(adev);
if ((reg_addr >> 32) && (adev->nbio.funcs->get_pcie_index_hi_offset))
pcie_index_hi = adev->nbio.funcs->get_pcie_index_hi_offset(adev);
spin_lock_irqsave(&adev->pcie_idx_lock, flags);
pcie_index_offset = (void __iomem *)adev->rmmio + pcie_index * 4;
pcie_data_offset = (void __iomem *)adev->rmmio + pcie_data * 4;
if (pcie_index_hi != 0)
pcie_index_hi_offset = (void __iomem *)adev->rmmio +
pcie_index_hi * 4;
/* write low 32 bits */
writel(reg_addr, pcie_index_offset);
readl(pcie_index_offset);
if (pcie_index_hi != 0) {
writel((reg_addr >> 32) & 0xff, pcie_index_hi_offset);
readl(pcie_index_hi_offset);
}
writel((u32)(reg_data & 0xffffffffULL), pcie_data_offset);
readl(pcie_data_offset);
/* write high 32 bits */
writel(reg_addr + 4, pcie_index_offset);
readl(pcie_index_offset);
if (pcie_index_hi != 0) {
writel((reg_addr >> 32) & 0xff, pcie_index_hi_offset);
readl(pcie_index_hi_offset);
}
writel((u32)(reg_data >> 32), pcie_data_offset);
readl(pcie_data_offset);
/* clear the high bits */
if (pcie_index_hi != 0) {
writel(0, pcie_index_hi_offset);
readl(pcie_index_hi_offset);
}
spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
}
/**
* amdgpu_device_get_rev_id - query device rev_id
*
@ -819,6 +987,13 @@ static uint64_t amdgpu_invalid_rreg64(struct amdgpu_device *adev, uint32_t reg)
return 0;
}
static uint64_t amdgpu_invalid_rreg64_ext(struct amdgpu_device *adev, uint64_t reg)
{
DRM_ERROR("Invalid callback to read register 0x%llX\n", reg);
BUG();
return 0;
}
/**
* amdgpu_invalid_wreg64 - dummy reg write function
*
@ -836,6 +1011,13 @@ static void amdgpu_invalid_wreg64(struct amdgpu_device *adev, uint32_t reg, uint
BUG();
}
static void amdgpu_invalid_wreg64_ext(struct amdgpu_device *adev, uint64_t reg, uint64_t v)
{
DRM_ERROR("Invalid callback to write 64 bit register 0x%llX with 0x%08llX\n",
reg, v);
BUG();
}
/**
* amdgpu_block_invalid_rreg - dummy reg read function
*
@ -889,8 +1071,8 @@ static int amdgpu_device_asic_init(struct amdgpu_device *adev)
amdgpu_asic_pre_asic_init(adev);
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 3) ||
adev->ip_versions[GC_HWIP][0] >= IP_VERSION(11, 0, 0)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) ||
amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0)) {
amdgpu_psp_wait_for_bootloader(adev);
ret = amdgpu_atomfirmware_asic_init(adev, true);
return ret;
@ -1244,6 +1426,37 @@ bool amdgpu_device_need_post(struct amdgpu_device *adev)
return true;
}
/*
* Check whether seamless boot is supported.
*
* So far we only support seamless boot on DCE 3.0 or later.
* If users report that it works on older ASICS as well, we may
* loosen this.
*/
bool amdgpu_device_seamless_boot_supported(struct amdgpu_device *adev)
{
switch (amdgpu_seamless) {
case -1:
break;
case 1:
return true;
case 0:
return false;
default:
DRM_ERROR("Invalid value for amdgpu.seamless: %d\n",
amdgpu_seamless);
return false;
}
if (!(adev->flags & AMD_IS_APU))
return false;
if (adev->mman.keep_stolen_vga_memory)
return false;
return adev->ip_versions[DCE_HWIP][0] >= IP_VERSION(3, 0, 0);
}
/*
* Intel hosts such as Raptor Lake and Sapphire Rapids don't support dynamic
* speed switching. Until we have confirmation from Intel that a specific host
@ -1547,6 +1760,7 @@ static void amdgpu_switcheroo_set_state(struct pci_dev *pdev,
} else {
pr_info("switched off\n");
dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
amdgpu_device_prepare(dev);
amdgpu_device_suspend(dev, true);
amdgpu_device_cache_pci_state(pdev);
/* Shut down the device */
@ -2093,7 +2307,7 @@ static int amdgpu_device_ip_early_init(struct amdgpu_device *adev)
adev->flags |= AMD_IS_PX;
if (!(adev->flags & AMD_IS_APU)) {
parent = pci_upstream_bridge(adev->pdev);
parent = pcie_find_root_port(adev->pdev);
adev->has_pr3 = parent ? pci_pr3_present(parent) : false;
}
@ -2731,7 +2945,7 @@ static void amdgpu_device_smu_fini_early(struct amdgpu_device *adev)
{
int i, r;
if (adev->ip_versions[GC_HWIP][0] > IP_VERSION(9, 0, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) > IP_VERSION(9, 0, 0))
return;
for (i = 0; i < adev->num_ip_blocks; i++) {
@ -2984,8 +3198,10 @@ static int amdgpu_device_ip_suspend_phase2(struct amdgpu_device *adev)
/* SDMA 5.x+ is part of GFX power domain so it's covered by GFXOFF */
if (adev->in_s0ix &&
(adev->ip_versions[SDMA0_HWIP][0] >= IP_VERSION(5, 0, 0)) &&
(adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SDMA))
(amdgpu_ip_version(adev, SDMA0_HWIP, 0) >=
IP_VERSION(5, 0, 0)) &&
(adev->ip_blocks[i].version->type ==
AMD_IP_BLOCK_TYPE_SDMA))
continue;
/* Once swPSP provides the IMU, RLC FW binaries to TOS during cold-boot.
@ -3476,8 +3692,8 @@ static void amdgpu_device_set_mcbp(struct amdgpu_device *adev)
adev->gfx.mcbp = true;
else if (amdgpu_mcbp == 0)
adev->gfx.mcbp = false;
else if ((adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 0, 0)) &&
(adev->ip_versions[GC_HWIP][0] < IP_VERSION(10, 0, 0)) &&
else if ((amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(9, 0, 0)) &&
(amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(10, 0, 0)) &&
adev->gfx.num_gfx_rings)
adev->gfx.mcbp = true;
@ -3542,6 +3758,8 @@ int amdgpu_device_init(struct amdgpu_device *adev,
adev->pciep_wreg = &amdgpu_invalid_wreg;
adev->pcie_rreg64 = &amdgpu_invalid_rreg64;
adev->pcie_wreg64 = &amdgpu_invalid_wreg64;
adev->pcie_rreg64_ext = &amdgpu_invalid_rreg64_ext;
adev->pcie_wreg64_ext = &amdgpu_invalid_wreg64_ext;
adev->uvd_ctx_rreg = &amdgpu_invalid_rreg;
adev->uvd_ctx_wreg = &amdgpu_invalid_wreg;
adev->didt_rreg = &amdgpu_invalid_rreg;
@ -3597,6 +3815,8 @@ int amdgpu_device_init(struct amdgpu_device *adev,
INIT_LIST_HEAD(&adev->ras_list);
INIT_LIST_HEAD(&adev->pm.od_kobj_list);
INIT_DELAYED_WORK(&adev->delayed_init_work,
amdgpu_device_delayed_init_work_handler);
INIT_DELAYED_WORK(&adev->gfx.gfx_off_delay_work,
@ -3693,7 +3913,8 @@ int amdgpu_device_init(struct amdgpu_device *adev,
* internal path natively support atomics, set have_atomics_support to true.
*/
} else if ((adev->flags & AMD_IS_APU) &&
(adev->ip_versions[GC_HWIP][0] > IP_VERSION(9, 0, 0))) {
(amdgpu_ip_version(adev, GC_HWIP, 0) >
IP_VERSION(9, 0, 0))) {
adev->have_atomics_support = true;
} else {
adev->have_atomics_support =
@ -3833,22 +4054,6 @@ fence_driver_init:
/* Get a log2 for easy divisions. */
adev->mm_stats.log2_max_MBps = ilog2(max(1u, max_MBps));
r = amdgpu_atombios_sysfs_init(adev);
if (r)
drm_err(&adev->ddev,
"registering atombios sysfs failed (%d).\n", r);
r = amdgpu_pm_sysfs_init(adev);
if (r)
DRM_ERROR("registering pm sysfs failed (%d).\n", r);
r = amdgpu_ucode_sysfs_init(adev);
if (r) {
adev->ucode_sysfs_en = false;
DRM_ERROR("Creating firmware sysfs failed (%d).\n", r);
} else
adev->ucode_sysfs_en = true;
/*
* Register gpu instance before amdgpu_device_enable_mgpu_fan_boost.
* Otherwise the mgpu fan boost feature will be skipped due to the
@ -3877,10 +4082,36 @@ fence_driver_init:
flush_delayed_work(&adev->delayed_init_work);
}
/*
* Place those sysfs registering after `late_init`. As some of those
* operations performed in `late_init` might affect the sysfs
* interfaces creating.
*/
r = amdgpu_atombios_sysfs_init(adev);
if (r)
drm_err(&adev->ddev,
"registering atombios sysfs failed (%d).\n", r);
r = amdgpu_pm_sysfs_init(adev);
if (r)
DRM_ERROR("registering pm sysfs failed (%d).\n", r);
r = amdgpu_ucode_sysfs_init(adev);
if (r) {
adev->ucode_sysfs_en = false;
DRM_ERROR("Creating firmware sysfs failed (%d).\n", r);
} else
adev->ucode_sysfs_en = true;
r = sysfs_create_files(&adev->dev->kobj, amdgpu_dev_attributes);
if (r)
dev_err(adev->dev, "Could not create amdgpu device attr\n");
r = devm_device_add_group(adev->dev, &amdgpu_board_attrs_group);
if (r)
dev_err(adev->dev,
"Could not create amdgpu board attributes\n");
amdgpu_fru_sysfs_init(adev);
if (IS_ENABLED(CONFIG_PERF_EVENTS))
@ -4044,6 +4275,9 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
kfree(adev->bios);
adev->bios = NULL;
kfree(adev->fru_info);
adev->fru_info = NULL;
px = amdgpu_device_supports_px(adev_to_drm(adev));
if (px || (!pci_is_thunderbolt_attached(adev->pdev) &&
@ -4102,6 +4336,41 @@ static int amdgpu_device_evict_resources(struct amdgpu_device *adev)
/*
* Suspend & resume.
*/
/**
* amdgpu_device_prepare - prepare for device suspend
*
* @dev: drm dev pointer
*
* Prepare to put the hw in the suspend state (all asics).
* Returns 0 for success or an error on failure.
* Called at driver suspend.
*/
int amdgpu_device_prepare(struct drm_device *dev)
{
struct amdgpu_device *adev = drm_to_adev(dev);
int i, r;
if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
/* Evict the majority of BOs before starting suspend sequence */
r = amdgpu_device_evict_resources(adev);
if (r)
return r;
for (i = 0; i < adev->num_ip_blocks; i++) {
if (!adev->ip_blocks[i].status.valid)
continue;
if (!adev->ip_blocks[i].version->funcs->prepare_suspend)
continue;
r = adev->ip_blocks[i].version->funcs->prepare_suspend((void *)adev);
if (r)
return r;
}
return 0;
}
/**
* amdgpu_device_suspend - initiate device suspend
*
@ -4122,11 +4391,6 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
adev->in_suspend = true;
/* Evict the majority of BOs before grabbing the full access */
r = amdgpu_device_evict_resources(adev);
if (r)
return r;
if (amdgpu_sriov_vf(adev)) {
amdgpu_virt_fini_data_exchange(adev);
r = amdgpu_virt_request_full_gpu(adev, false);
@ -4795,12 +5059,17 @@ static int amdgpu_reset_reg_dumps(struct amdgpu_device *adev)
return 0;
}
#ifdef CONFIG_DEV_COREDUMP
#ifndef CONFIG_DEV_COREDUMP
static void amdgpu_coredump(struct amdgpu_device *adev, bool vram_lost,
struct amdgpu_reset_context *reset_context)
{
}
#else
static ssize_t amdgpu_devcoredump_read(char *buffer, loff_t offset,
size_t count, void *data, size_t datalen)
{
struct drm_printer p;
struct amdgpu_device *adev = data;
struct amdgpu_coredump_info *coredump = data;
struct drm_print_iterator iter;
int i;
@ -4814,21 +5083,21 @@ static ssize_t amdgpu_devcoredump_read(char *buffer, loff_t offset,
drm_printf(&p, "**** AMDGPU Device Coredump ****\n");
drm_printf(&p, "kernel: " UTS_RELEASE "\n");
drm_printf(&p, "module: " KBUILD_MODNAME "\n");
drm_printf(&p, "time: %lld.%09ld\n", adev->reset_time.tv_sec, adev->reset_time.tv_nsec);
if (adev->reset_task_info.pid)
drm_printf(&p, "time: %lld.%09ld\n", coredump->reset_time.tv_sec, coredump->reset_time.tv_nsec);
if (coredump->reset_task_info.pid)
drm_printf(&p, "process_name: %s PID: %d\n",
adev->reset_task_info.process_name,
adev->reset_task_info.pid);
coredump->reset_task_info.process_name,
coredump->reset_task_info.pid);
if (adev->reset_vram_lost)
if (coredump->reset_vram_lost)
drm_printf(&p, "VRAM is lost due to GPU reset!\n");
if (adev->num_regs) {
if (coredump->adev->num_regs) {
drm_printf(&p, "AMDGPU register dumps:\nOffset: Value:\n");
for (i = 0; i < adev->num_regs; i++)
for (i = 0; i < coredump->adev->num_regs; i++)
drm_printf(&p, "0x%08x: 0x%08x\n",
adev->reset_dump_reg_list[i],
adev->reset_dump_reg_value[i]);
coredump->adev->reset_dump_reg_list[i],
coredump->adev->reset_dump_reg_value[i]);
}
return count - iter.remain;
@ -4836,14 +5105,32 @@ static ssize_t amdgpu_devcoredump_read(char *buffer, loff_t offset,
static void amdgpu_devcoredump_free(void *data)
{
kfree(data);
}
static void amdgpu_reset_capture_coredumpm(struct amdgpu_device *adev)
static void amdgpu_coredump(struct amdgpu_device *adev, bool vram_lost,
struct amdgpu_reset_context *reset_context)
{
struct amdgpu_coredump_info *coredump;
struct drm_device *dev = adev_to_drm(adev);
ktime_get_ts64(&adev->reset_time);
dev_coredumpm(dev->dev, THIS_MODULE, adev, 0, GFP_NOWAIT,
coredump = kzalloc(sizeof(*coredump), GFP_NOWAIT);
if (!coredump) {
DRM_ERROR("%s: failed to allocate memory for coredump\n", __func__);
return;
}
coredump->reset_vram_lost = vram_lost;
if (reset_context->job && reset_context->job->vm)
coredump->reset_task_info = reset_context->job->vm->task_info;
coredump->adev = adev;
ktime_get_ts64(&coredump->reset_time);
dev_coredumpm(dev->dev, THIS_MODULE, coredump, 0, GFP_NOWAIT,
amdgpu_devcoredump_read, amdgpu_devcoredump_free);
}
#endif
@ -4895,7 +5182,7 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
if (r) {
dev_err(tmp_adev->dev, "ASIC reset failed with error, %d for drm dev, %s",
r, adev_to_drm(tmp_adev)->unique);
break;
goto out;
}
}
@ -4948,15 +5235,9 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
goto out;
vram_lost = amdgpu_device_check_vram_lost(tmp_adev);
#ifdef CONFIG_DEV_COREDUMP
tmp_adev->reset_vram_lost = vram_lost;
memset(&tmp_adev->reset_task_info, 0,
sizeof(tmp_adev->reset_task_info));
if (reset_context->job && reset_context->job->vm)
tmp_adev->reset_task_info =
reset_context->job->vm->task_info;
amdgpu_reset_capture_coredumpm(tmp_adev);
#endif
amdgpu_coredump(tmp_adev, vram_lost, reset_context);
if (vram_lost) {
DRM_INFO("VRAM is lost due to GPU reset!\n");
amdgpu_inc_vram_lost(tmp_adev);
@ -4966,6 +5247,11 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
if (r)
return r;
r = amdgpu_xcp_restore_partition_mode(
tmp_adev->xcp_mgr);
if (r)
goto out;
r = amdgpu_device_ip_resume_phase2(tmp_adev);
if (r)
goto out;
@ -5183,7 +5469,8 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
* Flush RAM to disk so that after reboot
* the user can read log and see why the system rebooted.
*/
if (need_emergency_restart && amdgpu_ras_get_context(adev)->reboot) {
if (need_emergency_restart && amdgpu_ras_get_context(adev) &&
amdgpu_ras_get_context(adev)->reboot) {
DRM_WARN("Emergency reboot.");
ksys_sync_helper();
@ -5321,8 +5608,9 @@ retry: /* Rest of adevs pre asic reset from XGMI hive. */
adev->asic_reset_res = r;
/* Aldebaran and gfx_11_0_3 support ras in SRIOV, so need resume ras during reset */
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 2) ||
adev->ip_versions[GC_HWIP][0] == IP_VERSION(11, 0, 3))
if (amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(9, 4, 2) ||
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 0, 3))
amdgpu_ras_resume(adev);
} else {
r = amdgpu_do_asic_reset(device_list_handle, reset_context);
@ -5347,7 +5635,8 @@ skip_hw_reset:
drm_sched_start(&ring->sched, true);
}
if (adev->enable_mes && adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3))
if (adev->enable_mes &&
amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(11, 0, 3))
amdgpu_mes_self_test(tmp_adev);
if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled)
@ -6024,7 +6313,7 @@ bool amdgpu_device_has_display_hardware(struct amdgpu_device *adev)
return true;
default:
/* IP discovery */
if (!adev->ip_versions[DCE_HWIP][0] ||
if (!amdgpu_ip_version(adev, DCE_HWIP, 0) ||
(adev->harvest_ip_mask & AMD_HARVEST_IP_DMU_MASK))
return false;
return true;


@ -39,6 +39,7 @@
#include "nbio_v7_0.h"
#include "nbio_v7_4.h"
#include "nbio_v7_9.h"
#include "nbio_v7_11.h"
#include "hdp_v4_0.h"
#include "vega10_ih.h"
#include "vega20_ih.h"
@ -80,6 +81,8 @@
#include "jpeg_v4_0.h"
#include "vcn_v4_0_3.h"
#include "jpeg_v4_0_3.h"
#include "vcn_v4_0_5.h"
#include "jpeg_v4_0_5.h"
#include "amdgpu_vkms.h"
#include "mes_v10_1.h"
#include "mes_v11_0.h"
@ -89,6 +92,8 @@
#include "smuio_v13_0_3.h"
#include "smuio_v13_0_6.h"
#include "amdgpu_vpe.h"
#define FIRMWARE_IP_DISCOVERY "amdgpu/ip_discovery.bin"
MODULE_FIRMWARE(FIRMWARE_IP_DISCOVERY);
@ -174,6 +179,7 @@ static const char *hw_id_names[HW_ID_MAX] = {
[XGMI_HWID] = "XGMI",
[XGBE_HWID] = "XGBE",
[MP0_HWID] = "MP0",
[VPE_HWID] = "VPE",
};
static int hw_id_map[MAX_HWIP] = {
@ -203,6 +209,7 @@ static int hw_id_map[MAX_HWIP] = {
[XGMI_HWIP] = XGMI_HWID,
[DCI_HWIP] = DCI_HWID,
[PCIE_HWIP] = PCIE_HWID,
[VPE_HWIP] = VPE_HWID,
};
static int amdgpu_discovery_read_binary_from_sysmem(struct amdgpu_device *adev, uint8_t *binary)
@ -304,8 +311,8 @@ static void amdgpu_discovery_harvest_config_quirk(struct amdgpu_device *adev)
* So far, apply this quirk only on those Navy Flounder boards which
* have a bad harvest table of VCN config.
*/
if ((adev->ip_versions[UVD_HWIP][1] == IP_VERSION(3, 0, 1)) &&
(adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 2))) {
if ((amdgpu_ip_version(adev, UVD_HWIP, 1) == IP_VERSION(3, 0, 1)) &&
(amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(10, 3, 2))) {
switch (adev->pdev->revision) {
case 0xC1:
case 0xC2:
@ -1184,6 +1191,7 @@ static void amdgpu_discovery_sysfs_fini(struct amdgpu_device *adev)
static int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
{
uint8_t num_base_address, subrev, variant;
struct binary_header *bhdr;
struct ip_discovery_header *ihdr;
struct die_header *dhdr;
@ -1192,7 +1200,6 @@ static int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
uint16_t ip_offset;
uint16_t num_dies;
uint16_t num_ips;
uint8_t num_base_address;
int hw_ip;
int i, j, k;
int r;
@ -1330,8 +1337,22 @@ static int amdgpu_discovery_reg_base_init(struct amdgpu_device *adev)
* example. On most chips there are multiple instances
* with the same HWID.
*/
adev->ip_versions[hw_ip][ip->instance_number] =
IP_VERSION(ip->major, ip->minor, ip->revision);
if (ihdr->version < 3) {
subrev = 0;
variant = 0;
} else {
subrev = ip->sub_revision;
variant = ip->variant;
}
adev->ip_versions[hw_ip]
[ip->instance_number] =
IP_VERSION_FULL(ip->major,
ip->minor,
ip->revision,
variant,
subrev);
}
}
@ -1356,8 +1377,8 @@ static void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
* so read harvest bit per IP data structure to set
* harvest configuration.
*/
if (adev->ip_versions[GC_HWIP][0] < IP_VERSION(10, 2, 0) &&
adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 4, 3)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(10, 2, 0) &&
amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 3)) {
if ((adev->pdev->device == 0x731E &&
(adev->pdev->revision == 0xC6 ||
adev->pdev->revision == 0xC7)) ||
@ -1432,12 +1453,12 @@ static int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v1.gc_num_sc_per_se) /
le32_to_cpu(gc_info->v1.gc_num_sa_per_se);
adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v1.gc_num_packer_per_sc);
if (gc_info->v1.header.version_minor >= 1) {
if (le16_to_cpu(gc_info->v1.header.version_minor) >= 1) {
adev->gfx.config.gc_num_tcp_per_sa = le32_to_cpu(gc_info->v1_1.gc_num_tcp_per_sa);
adev->gfx.config.gc_num_sdp_interface = le32_to_cpu(gc_info->v1_1.gc_num_sdp_interface);
adev->gfx.config.gc_num_tcps = le32_to_cpu(gc_info->v1_1.gc_num_tcps);
}
if (gc_info->v1.header.version_minor >= 2) {
if (le16_to_cpu(gc_info->v1.header.version_minor) >= 2) {
adev->gfx.config.gc_num_tcp_per_wpg = le32_to_cpu(gc_info->v1_2.gc_num_tcp_per_wpg);
adev->gfx.config.gc_tcp_l1_size = le32_to_cpu(gc_info->v1_2.gc_tcp_l1_size);
adev->gfx.config.gc_num_sqc_per_wgp = le32_to_cpu(gc_info->v1_2.gc_num_sqc_per_wgp);
@ -1466,7 +1487,7 @@ static int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v2.gc_num_sc_per_se) /
le32_to_cpu(gc_info->v2.gc_num_sh_per_se);
adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v2.gc_num_packer_per_sc);
if (gc_info->v2.header.version_minor == 1) {
if (le16_to_cpu(gc_info->v2.header.version_minor == 1)) {
adev->gfx.config.gc_num_tcp_per_sa = le32_to_cpu(gc_info->v2_1.gc_num_tcp_per_sh);
adev->gfx.config.gc_tcp_size_per_cu = le32_to_cpu(gc_info->v2_1.gc_tcp_size_per_cu);
adev->gfx.config.gc_num_sdp_interface = le32_to_cpu(gc_info->v2_1.gc_num_sdp_interface); /* per XCD */
@ -1600,7 +1621,7 @@ static int amdgpu_discovery_get_vcn_info(struct amdgpu_device *adev)
static int amdgpu_discovery_set_common_ip_blocks(struct amdgpu_device *adev)
{
/* what IP to use for this? */
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 1, 0):
case IP_VERSION(9, 2, 1):
@ -1632,12 +1653,13 @@ static int amdgpu_discovery_set_common_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(11, 0, 2):
case IP_VERSION(11, 0, 3):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
amdgpu_device_ip_block_add(adev, &soc21_common_ip_block);
break;
default:
dev_err(adev->dev,
"Failed to add common ip block(GC_HWIP:0x%x)\n",
adev->ip_versions[GC_HWIP][0]);
amdgpu_ip_version(adev, GC_HWIP, 0));
return -EINVAL;
}
return 0;
@ -1646,7 +1668,7 @@ static int amdgpu_discovery_set_common_ip_blocks(struct amdgpu_device *adev)
static int amdgpu_discovery_set_gmc_ip_blocks(struct amdgpu_device *adev)
{
/* use GC or MMHUB IP version */
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 1, 0):
case IP_VERSION(9, 2, 1):
@ -1678,12 +1700,12 @@ static int amdgpu_discovery_set_gmc_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(11, 0, 2):
case IP_VERSION(11, 0, 3):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
amdgpu_device_ip_block_add(adev, &gmc_v11_0_ip_block);
break;
default:
dev_err(adev->dev,
"Failed to add gmc ip block(GC_HWIP:0x%x)\n",
adev->ip_versions[GC_HWIP][0]);
dev_err(adev->dev, "Failed to add gmc ip block(GC_HWIP:0x%x)\n",
amdgpu_ip_version(adev, GC_HWIP, 0));
return -EINVAL;
}
return 0;
@ -1691,7 +1713,7 @@ static int amdgpu_discovery_set_gmc_ip_blocks(struct amdgpu_device *adev)
static int amdgpu_discovery_set_ih_ip_blocks(struct amdgpu_device *adev)
{
switch (adev->ip_versions[OSSSYS_HWIP][0]) {
switch (amdgpu_ip_version(adev, OSSSYS_HWIP, 0)) {
case IP_VERSION(4, 0, 0):
case IP_VERSION(4, 0, 1):
case IP_VERSION(4, 1, 0):
@ -1724,7 +1746,7 @@ static int amdgpu_discovery_set_ih_ip_blocks(struct amdgpu_device *adev)
default:
dev_err(adev->dev,
"Failed to add ih ip block(OSSSYS_HWIP:0x%x)\n",
adev->ip_versions[OSSSYS_HWIP][0]);
amdgpu_ip_version(adev, OSSSYS_HWIP, 0));
return -EINVAL;
}
return 0;
@ -1732,7 +1754,7 @@ static int amdgpu_discovery_set_ih_ip_blocks(struct amdgpu_device *adev)
static int amdgpu_discovery_set_psp_ip_blocks(struct amdgpu_device *adev)
{
switch (adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP0_HWIP, 0)) {
case IP_VERSION(9, 0, 0):
amdgpu_device_ip_block_add(adev, &psp_v3_1_ip_block);
break;
@ -1778,7 +1800,7 @@ static int amdgpu_discovery_set_psp_ip_blocks(struct amdgpu_device *adev)
default:
dev_err(adev->dev,
"Failed to add psp ip block(MP0_HWIP:0x%x)\n",
adev->ip_versions[MP0_HWIP][0]);
amdgpu_ip_version(adev, MP0_HWIP, 0));
return -EINVAL;
}
return 0;
@ -1786,7 +1808,7 @@ static int amdgpu_discovery_set_psp_ip_blocks(struct amdgpu_device *adev)
static int amdgpu_discovery_set_smu_ip_blocks(struct amdgpu_device *adev)
{
switch (adev->ip_versions[MP1_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
case IP_VERSION(9, 0, 0):
case IP_VERSION(10, 0, 0):
case IP_VERSION(10, 0, 1):
@ -1824,10 +1846,13 @@ static int amdgpu_discovery_set_smu_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(13, 0, 11):
amdgpu_device_ip_block_add(adev, &smu_v13_0_ip_block);
break;
case IP_VERSION(14, 0, 0):
amdgpu_device_ip_block_add(adev, &smu_v14_0_ip_block);
break;
default:
dev_err(adev->dev,
"Failed to add smu ip block(MP1_HWIP:0x%x)\n",
adev->ip_versions[MP1_HWIP][0]);
amdgpu_ip_version(adev, MP1_HWIP, 0));
return -EINVAL;
}
return 0;
@ -1852,8 +1877,8 @@ static int amdgpu_discovery_set_display_ip_blocks(struct amdgpu_device *adev)
return 0;
#if defined(CONFIG_DRM_AMD_DC)
if (adev->ip_versions[DCE_HWIP][0]) {
switch (adev->ip_versions[DCE_HWIP][0]) {
if (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
switch (amdgpu_ip_version(adev, DCE_HWIP, 0)) {
case IP_VERSION(1, 0, 0):
case IP_VERSION(1, 0, 1):
case IP_VERSION(2, 0, 2):
@ -1871,6 +1896,7 @@ static int amdgpu_discovery_set_display_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(3, 1, 6):
case IP_VERSION(3, 2, 0):
case IP_VERSION(3, 2, 1):
case IP_VERSION(3, 5, 0):
if (amdgpu_sriov_vf(adev))
amdgpu_discovery_set_sriov_display(adev);
else
@ -1879,11 +1905,11 @@ static int amdgpu_discovery_set_display_ip_blocks(struct amdgpu_device *adev)
default:
dev_err(adev->dev,
"Failed to add dm ip block(DCE_HWIP:0x%x)\n",
adev->ip_versions[DCE_HWIP][0]);
amdgpu_ip_version(adev, DCE_HWIP, 0));
return -EINVAL;
}
} else if (adev->ip_versions[DCI_HWIP][0]) {
switch (adev->ip_versions[DCI_HWIP][0]) {
} else if (amdgpu_ip_version(adev, DCI_HWIP, 0)) {
switch (amdgpu_ip_version(adev, DCI_HWIP, 0)) {
case IP_VERSION(12, 0, 0):
case IP_VERSION(12, 0, 1):
case IP_VERSION(12, 1, 0):
@ -1895,7 +1921,7 @@ static int amdgpu_discovery_set_display_ip_blocks(struct amdgpu_device *adev)
default:
dev_err(adev->dev,
"Failed to add dm ip block(DCI_HWIP:0x%x)\n",
adev->ip_versions[DCI_HWIP][0]);
amdgpu_ip_version(adev, DCI_HWIP, 0));
return -EINVAL;
}
}
@ -1905,7 +1931,7 @@ static int amdgpu_discovery_set_display_ip_blocks(struct amdgpu_device *adev)
static int amdgpu_discovery_set_gc_ip_blocks(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 1, 0):
case IP_VERSION(9, 2, 1):
@ -1941,12 +1967,12 @@ static int amdgpu_discovery_set_gc_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(11, 0, 2):
case IP_VERSION(11, 0, 3):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
amdgpu_device_ip_block_add(adev, &gfx_v11_0_ip_block);
break;
default:
dev_err(adev->dev,
"Failed to add gfx ip block(GC_HWIP:0x%x)\n",
adev->ip_versions[GC_HWIP][0]);
dev_err(adev->dev, "Failed to add gfx ip block(GC_HWIP:0x%x)\n",
amdgpu_ip_version(adev, GC_HWIP, 0));
return -EINVAL;
}
return 0;
@ -1954,7 +1980,7 @@ static int amdgpu_discovery_set_gc_ip_blocks(struct amdgpu_device *adev)
static int amdgpu_discovery_set_sdma_ip_blocks(struct amdgpu_device *adev)
{
switch (adev->ip_versions[SDMA0_HWIP][0]) {
switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) {
case IP_VERSION(4, 0, 0):
case IP_VERSION(4, 0, 1):
case IP_VERSION(4, 1, 0):
@ -1994,7 +2020,7 @@ static int amdgpu_discovery_set_sdma_ip_blocks(struct amdgpu_device *adev)
default:
dev_err(adev->dev,
"Failed to add sdma ip block(SDMA0_HWIP:0x%x)\n",
adev->ip_versions[SDMA0_HWIP][0]);
amdgpu_ip_version(adev, SDMA0_HWIP, 0));
return -EINVAL;
}
return 0;
@ -2002,8 +2028,8 @@ static int amdgpu_discovery_set_sdma_ip_blocks(struct amdgpu_device *adev)
static int amdgpu_discovery_set_mm_ip_blocks(struct amdgpu_device *adev)
{
if (adev->ip_versions[VCE_HWIP][0]) {
switch (adev->ip_versions[UVD_HWIP][0]) {
if (amdgpu_ip_version(adev, VCE_HWIP, 0)) {
switch (amdgpu_ip_version(adev, UVD_HWIP, 0)) {
case IP_VERSION(7, 0, 0):
case IP_VERSION(7, 2, 0):
/* UVD is not supported on vega20 SR-IOV */
@ -2013,10 +2039,10 @@ static int amdgpu_discovery_set_mm_ip_blocks(struct amdgpu_device *adev)
default:
dev_err(adev->dev,
"Failed to add uvd v7 ip block(UVD_HWIP:0x%x)\n",
adev->ip_versions[UVD_HWIP][0]);
amdgpu_ip_version(adev, UVD_HWIP, 0));
return -EINVAL;
}
switch (adev->ip_versions[VCE_HWIP][0]) {
switch (amdgpu_ip_version(adev, VCE_HWIP, 0)) {
case IP_VERSION(4, 0, 0):
case IP_VERSION(4, 1, 0):
/* VCE is not supported on vega20 SR-IOV */
@ -2026,11 +2052,11 @@ static int amdgpu_discovery_set_mm_ip_blocks(struct amdgpu_device *adev)
default:
dev_err(adev->dev,
"Failed to add VCE v4 ip block(VCE_HWIP:0x%x)\n",
adev->ip_versions[VCE_HWIP][0]);
amdgpu_ip_version(adev, VCE_HWIP, 0));
return -EINVAL;
}
} else {
switch (adev->ip_versions[UVD_HWIP][0]) {
switch (amdgpu_ip_version(adev, UVD_HWIP, 0)) {
case IP_VERSION(1, 0, 0):
case IP_VERSION(1, 0, 1):
amdgpu_device_ip_block_add(adev, &vcn_v1_0_ip_block);
@ -2074,10 +2100,14 @@ static int amdgpu_discovery_set_mm_ip_blocks(struct amdgpu_device *adev)
amdgpu_device_ip_block_add(adev, &vcn_v4_0_3_ip_block);
amdgpu_device_ip_block_add(adev, &jpeg_v4_0_3_ip_block);
break;
case IP_VERSION(4, 0, 5):
amdgpu_device_ip_block_add(adev, &vcn_v4_0_5_ip_block);
amdgpu_device_ip_block_add(adev, &jpeg_v4_0_5_ip_block);
break;
default:
dev_err(adev->dev,
"Failed to add vcn/jpeg ip block(UVD_HWIP:0x%x)\n",
adev->ip_versions[UVD_HWIP][0]);
amdgpu_ip_version(adev, UVD_HWIP, 0));
return -EINVAL;
}
}
@ -2086,7 +2116,7 @@ static int amdgpu_discovery_set_mm_ip_blocks(struct amdgpu_device *adev)
static int amdgpu_discovery_set_mes_ip_blocks(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
case IP_VERSION(10, 1, 1):
case IP_VERSION(10, 1, 2):
@ -2111,6 +2141,7 @@ static int amdgpu_discovery_set_mes_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(11, 0, 2):
case IP_VERSION(11, 0, 3):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
amdgpu_device_ip_block_add(adev, &mes_v11_0_ip_block);
adev->enable_mes = true;
adev->enable_mes_kiq = true;
@ -2123,7 +2154,7 @@ static int amdgpu_discovery_set_mes_ip_blocks(struct amdgpu_device *adev)
static void amdgpu_discovery_init_soc_config(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 4, 3):
aqua_vanjaram_init_soc_config(adev);
break;
@ -2132,6 +2163,35 @@ static void amdgpu_discovery_init_soc_config(struct amdgpu_device *adev)
}
}
static int amdgpu_discovery_set_vpe_ip_blocks(struct amdgpu_device *adev)
{
switch (amdgpu_ip_version(adev, VPE_HWIP, 0)) {
case IP_VERSION(6, 1, 0):
amdgpu_device_ip_block_add(adev, &vpe_v6_1_ip_block);
break;
default:
break;
}
return 0;
}
static int amdgpu_discovery_set_umsch_mm_ip_blocks(struct amdgpu_device *adev)
{
switch (amdgpu_ip_version(adev, VCN_HWIP, 0)) {
case IP_VERSION(4, 0, 5):
if (amdgpu_umsch_mm & 0x1) {
amdgpu_device_ip_block_add(adev, &umsch_mm_v4_0_ip_block);
adev->enable_umsch_mm = true;
}
break;
default:
break;
}
return 0;
}
int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
{
int r;
@ -2312,7 +2372,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
amdgpu_discovery_init_soc_config(adev);
amdgpu_discovery_sysfs_init(adev);
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 2, 1):
case IP_VERSION(9, 4, 0):
@ -2359,11 +2419,14 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(11, 0, 4):
adev->family = AMDGPU_FAMILY_GC_11_0_1;
break;
case IP_VERSION(11, 5, 0):
adev->family = AMDGPU_FAMILY_GC_11_5_0;
break;
default:
return -EINVAL;
}
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 1, 0):
case IP_VERSION(9, 2, 2):
case IP_VERSION(9, 3, 0):
@ -2375,17 +2438,18 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(10, 3, 7):
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
adev->flags |= AMD_IS_APU;
break;
default:
break;
}
if (adev->ip_versions[XGMI_HWIP][0] == IP_VERSION(4, 8, 0))
if (amdgpu_ip_version(adev, XGMI_HWIP, 0) == IP_VERSION(4, 8, 0))
adev->gmc.xgmi.supported = true;
/* set NBIO version */
switch (adev->ip_versions[NBIO_HWIP][0]) {
switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) {
case IP_VERSION(6, 1, 0):
case IP_VERSION(6, 2, 0):
adev->nbio.funcs = &nbio_v6_1_funcs;
@ -2407,6 +2471,10 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
adev->nbio.funcs = &nbio_v7_9_funcs;
adev->nbio.hdp_flush_reg = &nbio_v7_9_hdp_flush_reg;
break;
case IP_VERSION(7, 11, 0):
adev->nbio.funcs = &nbio_v7_11_funcs;
adev->nbio.hdp_flush_reg = &nbio_v7_11_hdp_flush_reg;
break;
case IP_VERSION(7, 2, 0):
case IP_VERSION(7, 2, 1):
case IP_VERSION(7, 3, 0):
@ -2443,7 +2511,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
break;
}
switch (adev->ip_versions[HDP_HWIP][0]) {
switch (amdgpu_ip_version(adev, HDP_HWIP, 0)) {
case IP_VERSION(4, 0, 0):
case IP_VERSION(4, 0, 1):
case IP_VERSION(4, 1, 0):
@ -2475,7 +2543,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
break;
}
switch (adev->ip_versions[DF_HWIP][0]) {
switch (amdgpu_ip_version(adev, DF_HWIP, 0)) {
case IP_VERSION(3, 6, 0):
case IP_VERSION(3, 6, 1):
case IP_VERSION(3, 6, 2):
@ -2495,7 +2563,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
break;
}
switch (adev->ip_versions[SMUIO_HWIP][0]) {
switch (amdgpu_ip_version(adev, SMUIO_HWIP, 0)) {
case IP_VERSION(9, 0, 0):
case IP_VERSION(9, 0, 1):
case IP_VERSION(10, 0, 0):
@ -2538,7 +2606,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
break;
}
switch (adev->ip_versions[LSDMA_HWIP][0]) {
switch (amdgpu_ip_version(adev, LSDMA_HWIP, 0)) {
case IP_VERSION(6, 0, 0):
case IP_VERSION(6, 0, 1):
case IP_VERSION(6, 0, 2):
@ -2611,6 +2679,14 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
if (r)
return r;
r = amdgpu_discovery_set_vpe_ip_blocks(adev);
if (r)
return r;
r = amdgpu_discovery_set_umsch_mm_ip_blocks(adev);
if (r)
return r;
return 0;
}


@ -24,7 +24,7 @@
#ifndef __AMDGPU_DISCOVERY__
#define __AMDGPU_DISCOVERY__
#define DISCOVERY_TMR_SIZE (8 << 10)
#define DISCOVERY_TMR_SIZE (10 << 10)
#define DISCOVERY_TMR_OFFSET (64 << 10)
void amdgpu_discovery_fini(struct amdgpu_device *adev);


@ -766,11 +766,13 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
return -EINVAL;
}
if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(11, 0, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0))
version = AMD_FMT_MOD_TILE_VER_GFX11;
else if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0))
else if (amdgpu_ip_version(adev, GC_HWIP, 0) >=
IP_VERSION(10, 3, 0))
version = AMD_FMT_MOD_TILE_VER_GFX10_RBPLUS;
else if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 0, 0))
else if (amdgpu_ip_version(adev, GC_HWIP, 0) >=
IP_VERSION(10, 0, 0))
version = AMD_FMT_MOD_TILE_VER_GFX10;
else
version = AMD_FMT_MOD_TILE_VER_GFX9;
@ -779,13 +781,15 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
case 0: /* Z microtiling */
return -EINVAL;
case 1: /* S microtiling */
if (adev->ip_versions[GC_HWIP][0] < IP_VERSION(11, 0, 0)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) <
IP_VERSION(11, 0, 0)) {
if (!has_xor)
version = AMD_FMT_MOD_TILE_VER_GFX9;
}
break;
case 2:
if (adev->ip_versions[GC_HWIP][0] < IP_VERSION(11, 0, 0)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) <
IP_VERSION(11, 0, 0)) {
if (!has_xor && afb->base.format->cpp[0] != 4)
version = AMD_FMT_MOD_TILE_VER_GFX9;
}
@ -838,10 +842,12 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
u64 render_dcc_offset;
/* Enable constant encode on RAVEN2 and later. */
bool dcc_constant_encode = (adev->asic_type > CHIP_RAVEN ||
(adev->asic_type == CHIP_RAVEN &&
adev->external_rev_id >= 0x81)) &&
adev->ip_versions[GC_HWIP][0] < IP_VERSION(11, 0, 0);
bool dcc_constant_encode =
(adev->asic_type > CHIP_RAVEN ||
(adev->asic_type == CHIP_RAVEN &&
adev->external_rev_id >= 0x81)) &&
amdgpu_ip_version(adev, GC_HWIP, 0) <
IP_VERSION(11, 0, 0);
int max_cblock_size = dcc_i64b ? AMD_FMT_MOD_DCC_BLOCK_64B :
dcc_i128b ? AMD_FMT_MOD_DCC_BLOCK_128B :
@ -878,7 +884,9 @@ static int convert_tiling_flags_to_modifier(struct amdgpu_framebuffer *afb)
if (adev->family >= AMDGPU_FAMILY_NV) {
int extra_pipe = 0;
if ((adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0)) &&
if ((amdgpu_ip_version(adev, GC_HWIP,
0) >=
IP_VERSION(10, 3, 0)) &&
pipes == packers && pipes > 1)
extra_pipe = 1;


@ -331,6 +331,7 @@ amdgpu_dma_buf_create_obj(struct drm_device *dev, struct dma_buf *dma_buf)
flags |= other->flags & (AMDGPU_GEM_CREATE_CPU_GTT_USWC |
AMDGPU_GEM_CREATE_COHERENT |
AMDGPU_GEM_CREATE_EXT_COHERENT |
AMDGPU_GEM_CREATE_UNCACHED);
}


@ -86,6 +86,7 @@ struct amdgpu_doorbell_index {
uint32_t vce_ring6_7;
} uvd_vce;
};
uint32_t vpe_ring;
uint32_t first_non_cp;
uint32_t last_non_cp;
uint32_t max_assignment;
@ -226,10 +227,12 @@ enum AMDGPU_NAVI10_DOORBELL_ASSIGNMENT {
AMDGPU_NAVI10_DOORBELL64_VCNc_d = 0x18E,
AMDGPU_NAVI10_DOORBELL64_VCNe_f = 0x18F,
AMDGPU_NAVI10_DOORBELL64_FIRST_NON_CP = AMDGPU_NAVI10_DOORBELL_sDMA_ENGINE0,
AMDGPU_NAVI10_DOORBELL64_LAST_NON_CP = AMDGPU_NAVI10_DOORBELL64_VCNe_f,
AMDGPU_NAVI10_DOORBELL64_VPE = 0x190,
AMDGPU_NAVI10_DOORBELL_MAX_ASSIGNMENT = 0x18F,
AMDGPU_NAVI10_DOORBELL64_FIRST_NON_CP = AMDGPU_NAVI10_DOORBELL_sDMA_ENGINE0,
AMDGPU_NAVI10_DOORBELL64_LAST_NON_CP = AMDGPU_NAVI10_DOORBELL64_VPE,
AMDGPU_NAVI10_DOORBELL_MAX_ASSIGNMENT = AMDGPU_NAVI10_DOORBELL64_VPE,
AMDGPU_NAVI10_DOORBELL_INVALID = 0xFFFF
};
@ -357,8 +360,9 @@ int amdgpu_doorbell_init(struct amdgpu_device *adev);
void amdgpu_doorbell_fini(struct amdgpu_device *adev);
int amdgpu_doorbell_create_kernel_doorbells(struct amdgpu_device *adev);
uint32_t amdgpu_doorbell_index_on_bar(struct amdgpu_device *adev,
struct amdgpu_bo *db_bo,
uint32_t doorbell_index);
struct amdgpu_bo *db_bo,
uint32_t doorbell_index,
uint32_t db_size);
#define RDOORBELL32(index) amdgpu_mm_rdoorbell(adev, (index))
#define WDOORBELL32(index, v) amdgpu_mm_wdoorbell(adev, (index), (v))


@ -113,20 +113,25 @@ void amdgpu_mm_wdoorbell64(struct amdgpu_device *adev, u32 index, u64 v)
*
* @adev: amdgpu_device pointer
* @db_bo: doorbell object's bo
* @db_index: doorbell relative index in this doorbell object
* @doorbell_index: doorbell relative index in this doorbell object
* @db_size: doorbell size in bytes
*
* returns doorbell's absolute index in BAR
*/
uint32_t amdgpu_doorbell_index_on_bar(struct amdgpu_device *adev,
struct amdgpu_bo *db_bo,
uint32_t doorbell_index)
struct amdgpu_bo *db_bo,
uint32_t doorbell_index,
uint32_t db_size)
{
int db_bo_offset;
db_bo_offset = amdgpu_bo_gpu_offset_no_check(db_bo);
/* doorbell index is 32 bit but doorbell's size is 64-bit, so *2 */
return db_bo_offset / sizeof(u32) + doorbell_index * 2;
/* The returned index is in dwords (32-bit units), but a doorbell can be
* 32-bit or 64-bit wide, so scale doorbell_index by db_size (in bytes) / 4.
*/
return db_bo_offset / sizeof(u32) + doorbell_index *
DIV_ROUND_UP(db_size, 4);
}
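/* Worked example (illustrative): with db_size = 8 (a 64-bit doorbell),
 * DIV_ROUND_UP(8, 4) = 2, so each doorbell advances the BAR index by two
 * dwords, matching the old hard-coded "* 2"; with db_size = 4 (a 32-bit
 * doorbell) the index advances by one dword per doorbell.
 */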
/**
@ -142,6 +147,10 @@ int amdgpu_doorbell_create_kernel_doorbells(struct amdgpu_device *adev)
int r;
int size;
/* SI HW does not have doorbells, skip allocation */
if (adev->doorbell.num_kernel_doorbells == 0)
return 0;
/* Reserve first num_kernel_doorbells (page-aligned) for kernel ops */
size = ALIGN(adev->doorbell.num_kernel_doorbells * sizeof(u32), PAGE_SIZE);


@ -113,11 +113,22 @@
* gl1c_cache_size, gl2c_cache_size, mall_size, enabled_rb_pipes_mask_hi
* 3.53.0 - Support for GFX11 CP GFX shadowing
* 3.54.0 - Add AMDGPU_CTX_QUERY2_FLAGS_RESET_IN_PROGRESS support
* 3.55.0 - Add AMDGPU_INFO_GPUVM_FAULT query
* 3.56.0 - Update IB start address and size alignment for decode and encode
*/
#define KMS_DRIVER_MAJOR 3
#define KMS_DRIVER_MINOR 54
#define KMS_DRIVER_MINOR 56
#define KMS_DRIVER_PATCHLEVEL 0
/*
* amdgpu.debug module options. All are disabled by default
*/
enum AMDGPU_DEBUG_MASK {
AMDGPU_DEBUG_VM = BIT(0),
AMDGPU_DEBUG_LARGEBAR = BIT(1),
AMDGPU_DEBUG_DISABLE_GPU_SOFT_RECOVERY = BIT(2),
};
unsigned int amdgpu_vram_limit = UINT_MAX;
int amdgpu_vis_vram_limit;
int amdgpu_gart_size = -1; /* auto */
@ -140,7 +151,6 @@ int amdgpu_vm_size = -1;
int amdgpu_vm_fragment_size = -1;
int amdgpu_vm_block_size = -1;
int amdgpu_vm_fault_stop;
int amdgpu_vm_debug;
int amdgpu_vm_update_mode = -1;
int amdgpu_exp_hw_support;
int amdgpu_dc = -1;
@ -194,6 +204,9 @@ int amdgpu_use_xgmi_p2p = 1;
int amdgpu_vcnfw_log;
int amdgpu_sg_display = -1; /* auto */
int amdgpu_user_partt_mode = AMDGPU_AUTO_COMPUTE_PARTITION_MODE;
int amdgpu_umsch_mm;
int amdgpu_seamless = -1; /* auto */
uint amdgpu_debug_mask;
static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work);
@ -405,13 +418,6 @@ module_param_named(vm_block_size, amdgpu_vm_block_size, int, 0444);
MODULE_PARM_DESC(vm_fault_stop, "Stop on VM fault (0 = never (default), 1 = print first, 2 = always)");
module_param_named(vm_fault_stop, amdgpu_vm_fault_stop, int, 0444);
/**
* DOC: vm_debug (int)
* Debug VM handling (0 = disabled, 1 = enabled). The default is 0 (Disabled).
*/
MODULE_PARM_DESC(vm_debug, "Debug VM handling (0 = disabled (default), 1 = enabled)");
module_param_named(vm_debug, amdgpu_vm_debug, int, 0644);
/**
* DOC: vm_update_mode (int)
* Override VM update mode. VM updated by using CPU (0 = never, 1 = Graphics only, 2 = Compute only, 3 = Both). The default
@ -743,18 +749,6 @@ module_param(send_sigterm, int, 0444);
MODULE_PARM_DESC(send_sigterm,
"Send sigterm to HSA process on unhandled exception (0 = disable, 1 = enable)");
/**
* DOC: debug_largebar (int)
* Set debug_largebar as 1 to enable simulating large-bar capability on non-large bar
* system. This limits the VRAM size reported to ROCm applications to the visible
* size, usually 256MB.
* Default value is 0, disabled.
*/
int debug_largebar;
module_param(debug_largebar, int, 0444);
MODULE_PARM_DESC(debug_largebar,
"Debug large-bar flag used to simulate large-bar capability on non-large bar machine (0 = disable, 1 = enable)");
/**
* DOC: halt_if_hws_hang (int)
* Halt if HWS hang is detected. Default value, 0, disables the halt on hang.
@ -907,6 +901,15 @@ module_param_named(vcnfw_log, amdgpu_vcnfw_log, int, 0444);
MODULE_PARM_DESC(sg_display, "S/G Display (-1 = auto (default), 0 = disable)");
module_param_named(sg_display, amdgpu_sg_display, int, 0444);
/**
* DOC: umsch_mm (int)
* Enable Multi Media User Mode Scheduler. This is a HW scheduling engine for VCN and VPE.
* (0 = disabled (default), 1 = enabled)
*/
MODULE_PARM_DESC(umsch_mm,
"Enable Multi Media User Mode Scheduler (0 = disabled (default), 1 = enabled)");
module_param_named(umsch_mm, amdgpu_umsch_mm, int, 0444);
/**
* DOC: smu_pptable_id (int)
* Used to override pptable id. id = 0 use VBIOS pptable.
@ -938,6 +941,26 @@ module_param_named(user_partt_mode, amdgpu_user_partt_mode, uint, 0444);
module_param(enforce_isolation, bool, 0444);
MODULE_PARM_DESC(enforce_isolation, "enforce process isolation between graphics and compute. enforce_isolation = on");
/**
* DOC: seamless (int)
* Seamless boot will keep the image on the screen during the boot process.
*/
MODULE_PARM_DESC(seamless, "Seamless boot (-1 = auto (default), 0 = disable, 1 = enable)");
module_param_named(seamless, amdgpu_seamless, int, 0444);
/**
* DOC: debug_mask (uint)
* Debug options for amdgpu, work as a binary mask with the following options:
*
* - 0x1: Debug VM handling
* - 0x2: Enable simulating large-bar capability on non-large bar system. This
* limits the VRAM size reported to ROCm applications to the visible
* size, usually 256MB.
* - 0x4: Disable GPU soft recovery, always do a full reset
*/
MODULE_PARM_DESC(debug_mask, "debug options for amdgpu, disabled by default");
module_param_named(debug_mask, amdgpu_debug_mask, uint, 0444);
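/* Example (not part of the patch): booting with amdgpu.debug_mask=0x3, or
 * loading the module with debug_mask=0x3, enables both VM debugging (0x1)
 * and the simulated large-BAR mode (0x2).
 */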
/* These devices are not supported by amdgpu.
* They are supported by the mach64, r128, radeon drivers
*/
@ -2042,6 +2065,24 @@ static void amdgpu_get_secondary_funcs(struct amdgpu_device *adev)
}
}
static void amdgpu_init_debug_options(struct amdgpu_device *adev)
{
if (amdgpu_debug_mask & AMDGPU_DEBUG_VM) {
pr_info("debug: VM handling debug enabled\n");
adev->debug_vm = true;
}
if (amdgpu_debug_mask & AMDGPU_DEBUG_LARGEBAR) {
pr_info("debug: enabled simulating large-bar capability on non-large bar system\n");
adev->debug_largebar = true;
}
if (amdgpu_debug_mask & AMDGPU_DEBUG_DISABLE_GPU_SOFT_RECOVERY) {
pr_info("debug: soft reset for GPU recovery disabled\n");
adev->debug_disable_soft_recovery = true;
}
}
static int amdgpu_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
@ -2220,6 +2261,8 @@ retry_init:
amdgpu_get_secondary_funcs(adev);
}
amdgpu_init_debug_options(adev);
return 0;
err_pci:
@ -2241,7 +2284,7 @@ amdgpu_pci_remove(struct pci_dev *pdev)
pm_runtime_forbid(dev->dev);
}
if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2) &&
if (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 2) &&
!amdgpu_sriov_vf(adev)) {
bool need_to_reset_gpu = false;
@ -2384,8 +2427,9 @@ static int amdgpu_pmops_prepare(struct device *dev)
/* Return a positive number here so
* DPM_FLAG_SMART_SUSPEND works properly
*/
if (amdgpu_device_supports_boco(drm_dev))
return pm_runtime_suspended(dev);
if (amdgpu_device_supports_boco(drm_dev) &&
pm_runtime_suspended(dev))
return 1;
/* if we will not support s3 or s2i for the device
* then skip suspend
@ -2394,7 +2438,7 @@ static int amdgpu_pmops_prepare(struct device *dev)
!amdgpu_acpi_is_s3_active(adev))
return 1;
return 0;
return amdgpu_device_prepare(drm_dev);
}
static void amdgpu_pmops_complete(struct device *dev)
@ -2594,6 +2638,9 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
if (amdgpu_device_supports_boco(drm_dev))
adev->mp1_state = PP_MP1_STATE_UNLOAD;
ret = amdgpu_device_prepare(drm_dev);
if (ret)
return ret;
ret = amdgpu_device_suspend(drm_dev, false);
if (ret) {
adev->in_runpm = false;


@ -51,6 +51,7 @@ static const char *amdgpu_ip_name[AMDGPU_HW_IP_NUM] = {
[AMDGPU_HW_IP_VCN_DEC] = "dec",
[AMDGPU_HW_IP_VCN_ENC] = "enc",
[AMDGPU_HW_IP_VCN_JPEG] = "jpeg",
[AMDGPU_HW_IP_VPE] = "vpe",
};
void amdgpu_show_fdinfo(struct drm_printer *p, struct drm_file *file)


@ -570,7 +570,8 @@ static bool amdgpu_fence_need_ring_interrupt_restore(struct amdgpu_ring *ring)
switch (ring->funcs->type) {
case AMDGPU_RING_TYPE_SDMA:
/* SDMA 5.x+ is part of GFX power domain so it's covered by GFXOFF */
if (adev->ip_versions[SDMA0_HWIP][0] >= IP_VERSION(5, 0, 0))
if (amdgpu_ip_version(adev, SDMA0_HWIP, 0) >=
IP_VERSION(5, 0, 0))
is_gfx_power_domain = true;
break;
case AMDGPU_RING_TYPE_GFX:


@ -42,8 +42,9 @@ static bool is_fru_eeprom_supported(struct amdgpu_device *adev, u32 *fru_addr)
/* The i2c access is blocked on VF
* TODO: Need other way to get the info
* Also, FRU not valid for APU devices.
*/
if (amdgpu_sriov_vf(adev))
if (amdgpu_sriov_vf(adev) || (adev->flags & AMD_IS_APU))
return false;
/* The default I2C EEPROM address of the FRU.
@ -57,27 +58,26 @@ static bool is_fru_eeprom_supported(struct amdgpu_device *adev, u32 *fru_addr)
* for ease/speed/readability. For now, 2 string comparisons are
* reasonable and not too expensive
*/
switch (adev->asic_type) {
case CHIP_VEGA20:
/* D161 and D163 are the VG20 server SKUs */
if (strnstr(atom_ctx->vbios_pn, "D161",
sizeof(atom_ctx->vbios_pn)) ||
strnstr(atom_ctx->vbios_pn, "D163",
sizeof(atom_ctx->vbios_pn))) {
if (fru_addr)
*fru_addr = FRU_EEPROM_MADDR_6;
return true;
} else {
switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
case IP_VERSION(11, 0, 2):
switch (adev->asic_type) {
case CHIP_VEGA20:
/* D161 and D163 are the VG20 server SKUs */
if (strnstr(atom_ctx->vbios_pn, "D161",
sizeof(atom_ctx->vbios_pn)) ||
strnstr(atom_ctx->vbios_pn, "D163",
sizeof(atom_ctx->vbios_pn))) {
if (fru_addr)
*fru_addr = FRU_EEPROM_MADDR_6;
return true;
} else {
return false;
}
case CHIP_ARCTURUS:
default:
return false;
}
case CHIP_ALDEBARAN:
/* All Aldebaran SKUs have an FRU */
if (!strnstr(atom_ctx->vbios_pn, "D673",
sizeof(atom_ctx->vbios_pn)))
if (fru_addr)
*fru_addr = FRU_EEPROM_MADDR_6;
return true;
case CHIP_SIENNA_CICHLID:
case IP_VERSION(11, 0, 7):
if (strnstr(atom_ctx->vbios_pn, "D603",
sizeof(atom_ctx->vbios_pn))) {
if (strnstr(atom_ctx->vbios_pn, "D603GLXE",
@ -92,6 +92,17 @@ static bool is_fru_eeprom_supported(struct amdgpu_device *adev, u32 *fru_addr)
} else {
return false;
}
case IP_VERSION(13, 0, 2):
/* All Aldebaran SKUs have an FRU */
if (!strnstr(atom_ctx->vbios_pn, "D673",
sizeof(atom_ctx->vbios_pn)))
if (fru_addr)
*fru_addr = FRU_EEPROM_MADDR_6;
return true;
case IP_VERSION(13, 0, 6):
if (fru_addr)
*fru_addr = FRU_EEPROM_MADDR_8;
return true;
default:
return false;
}
@ -99,6 +110,7 @@ static bool is_fru_eeprom_supported(struct amdgpu_device *adev, u32 *fru_addr)
int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
{
struct amdgpu_fru_info *fru_info;
unsigned char buf[8], *pia;
u32 addr, fru_addr;
int size, len;
@ -107,6 +119,19 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
if (!is_fru_eeprom_supported(adev, &fru_addr))
return 0;
if (!adev->fru_info) {
adev->fru_info = kzalloc(sizeof(*adev->fru_info), GFP_KERNEL);
if (!adev->fru_info)
return -ENOMEM;
}
fru_info = adev->fru_info;
/* For Arcturus-and-later, default value of serial_number is unique_id
* so convert it to a 16-digit HEX string for convenience and
* backwards-compatibility.
*/
sprintf(fru_info->serial, "%llx", adev->unique_id);
/* If algo exists, it means that the i2c_adapter's initialized */
if (!adev->pm.fru_eeprom_i2c_bus || !adev->pm.fru_eeprom_i2c_bus->algo) {
DRM_WARN("Cannot access FRU, EEPROM accessor not initialized");
@ -170,32 +195,39 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
csum += pia[size - 1];
if (csum) {
DRM_ERROR("Bad Product Info Area checksum: 0x%02x", csum);
kfree(pia);
return -EIO;
}
/* Now extract useful information from the PIA.
*
* Skip the Manufacturer Name at [3] and go directly to
* the Product Name field.
* Read the Manufacturer Name field, whose type/length byte is at [3].
*/
addr = 3 + 1 + (pia[3] & 0x3F);
addr = 3;
if (addr + 1 >= len)
goto Out;
memcpy(adev->product_name, pia + addr + 1,
min_t(size_t,
sizeof(adev->product_name),
memcpy(fru_info->manufacturer_name, pia + addr + 1,
min_t(size_t, sizeof(fru_info->manufacturer_name),
pia[addr] & 0x3F));
adev->product_name[sizeof(adev->product_name) - 1] = '\0';
fru_info->manufacturer_name[sizeof(fru_info->manufacturer_name) - 1] =
'\0';
/* Read Product Name field. */
addr += 1 + (pia[addr] & 0x3F);
if (addr + 1 >= len)
goto Out;
memcpy(fru_info->product_name, pia + addr + 1,
min_t(size_t, sizeof(fru_info->product_name), pia[addr] & 0x3F));
fru_info->product_name[sizeof(fru_info->product_name) - 1] = '\0';
/* Go to the Product Part/Model Number field. */
addr += 1 + (pia[addr] & 0x3F);
if (addr + 1 >= len)
goto Out;
memcpy(adev->product_number, pia + addr + 1,
min_t(size_t,
sizeof(adev->product_number),
memcpy(fru_info->product_number, pia + addr + 1,
min_t(size_t, sizeof(fru_info->product_number),
pia[addr] & 0x3F));
adev->product_number[sizeof(adev->product_number) - 1] = '\0';
fru_info->product_number[sizeof(fru_info->product_number) - 1] = '\0';
/* Go to the Product Version field. */
addr += 1 + (pia[addr] & 0x3F);
@ -204,10 +236,21 @@ int amdgpu_fru_get_product_info(struct amdgpu_device *adev)
addr += 1 + (pia[addr] & 0x3F);
if (addr + 1 >= len)
goto Out;
memcpy(adev->serial, pia + addr + 1, min_t(size_t,
sizeof(adev->serial),
pia[addr] & 0x3F));
adev->serial[sizeof(adev->serial) - 1] = '\0';
memcpy(fru_info->serial, pia + addr + 1,
min_t(size_t, sizeof(fru_info->serial), pia[addr] & 0x3F));
fru_info->serial[sizeof(fru_info->serial) - 1] = '\0';
/* Asset Tag field */
addr += 1 + (pia[addr] & 0x3F);
/* FRU File Id field. This could be 'null'. */
addr += 1 + (pia[addr] & 0x3F);
if ((addr + 1 >= len) || !(pia[addr] & 0x3F))
goto Out;
memcpy(fru_info->fru_id, pia + addr + 1,
min_t(size_t, sizeof(fru_info->fru_id), pia[addr] & 0x3F));
fru_info->fru_id[sizeof(fru_info->fru_id) - 1] = '\0';
Out:
kfree(pia);
return 0;
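The field walk above follows the usual IPMI FRU convention that every Product Info Area field starts with a type/length byte whose low six bits give the payload length; a small illustrative helper (not part of the patch) capturing the recurring "addr += 1 + (pia[addr] & 0x3F)" step might look like:

static const unsigned char *fru_next_field(const unsigned char *field)
{
	/* skip the type/length byte plus the payload it describes */
	return field + 1 + (field[0] & 0x3F);
}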
@ -230,7 +273,7 @@ static ssize_t amdgpu_fru_product_name_show(struct device *dev,
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_to_adev(ddev);
return sysfs_emit(buf, "%s\n", adev->product_name);
return sysfs_emit(buf, "%s\n", adev->fru_info->product_name);
}
static DEVICE_ATTR(product_name, 0444, amdgpu_fru_product_name_show, NULL);
@ -252,7 +295,7 @@ static ssize_t amdgpu_fru_product_number_show(struct device *dev,
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_to_adev(ddev);
return sysfs_emit(buf, "%s\n", adev->product_number);
return sysfs_emit(buf, "%s\n", adev->fru_info->product_number);
}
static DEVICE_ATTR(product_number, 0444, amdgpu_fru_product_number_show, NULL);
@ -274,21 +317,65 @@ static ssize_t amdgpu_fru_serial_number_show(struct device *dev,
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_to_adev(ddev);
return sysfs_emit(buf, "%s\n", adev->serial);
return sysfs_emit(buf, "%s\n", adev->fru_info->serial);
}
static DEVICE_ATTR(serial_number, 0444, amdgpu_fru_serial_number_show, NULL);
/**
* DOC: fru_id
*
* The amdgpu driver provides a sysfs API for reporting FRU File Id
* for the device.
* The file fru_id is used for this and returns the File Id value
* as returned from the FRU.
* NOTE: This is only available for certain server cards
*/
static ssize_t amdgpu_fru_id_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_to_adev(ddev);
return sysfs_emit(buf, "%s\n", adev->fru_info->fru_id);
}
static DEVICE_ATTR(fru_id, 0444, amdgpu_fru_id_show, NULL);
/**
* DOC: manufacturer
*
* The amdgpu driver provides a sysfs API for reporting manufacturer name from
* FRU information.
* The file manufacturer returns the value as returned from the FRU.
* NOTE: This is only available for certain server cards
*/
static ssize_t amdgpu_fru_manufacturer_name_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_to_adev(ddev);
return sysfs_emit(buf, "%s\n", adev->fru_info->manufacturer_name);
}
static DEVICE_ATTR(manufacturer, 0444, amdgpu_fru_manufacturer_name_show, NULL);
static const struct attribute *amdgpu_fru_attributes[] = {
&dev_attr_product_name.attr,
&dev_attr_product_number.attr,
&dev_attr_serial_number.attr,
&dev_attr_fru_id.attr,
&dev_attr_manufacturer.attr,
NULL
};
int amdgpu_fru_sysfs_init(struct amdgpu_device *adev)
{
if (!is_fru_eeprom_supported(adev, NULL))
if (!is_fru_eeprom_supported(adev, NULL) || !adev->fru_info)
return 0;
return sysfs_create_files(&adev->dev->kobj, amdgpu_fru_attributes);
@ -296,7 +383,7 @@ int amdgpu_fru_sysfs_init(struct amdgpu_device *adev)
void amdgpu_fru_sysfs_fini(struct amdgpu_device *adev)
{
if (!is_fru_eeprom_supported(adev, NULL))
if (!is_fru_eeprom_supported(adev, NULL) || !adev->fru_info)
return;
sysfs_remove_files(&adev->dev->kobj, amdgpu_fru_attributes);


@ -24,6 +24,17 @@
#ifndef __AMDGPU_FRU_EEPROM_H__
#define __AMDGPU_FRU_EEPROM_H__
#define AMDGPU_PRODUCT_NAME_LEN 64
/* FRU product information */
struct amdgpu_fru_info {
char product_number[20];
char product_name[AMDGPU_PRODUCT_NAME_LEN];
char serial[20];
char manufacturer_name[32];
char fru_id[32];
};
int amdgpu_fru_get_product_info(struct amdgpu_device *adev);
int amdgpu_fru_sysfs_init(struct amdgpu_device *adev);
void amdgpu_fru_sysfs_fini(struct amdgpu_device *adev);


@ -791,7 +791,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
default:
break;
}
if (!r && !(args->flags & AMDGPU_VM_DELAY_UPDATE) && !amdgpu_vm_debug)
if (!r && !(args->flags & AMDGPU_VM_DELAY_UPDATE) && !adev->debug_vm)
amdgpu_gem_va_update_vm(adev, &fpriv->vm, bo_va,
args->operation);


@ -158,7 +158,7 @@ static bool amdgpu_gfx_is_compute_multipipe_capable(struct amdgpu_device *adev)
return amdgpu_compute_multipipe == 1;
}
if (adev->ip_versions[GC_HWIP][0] > IP_VERSION(9, 0, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) > IP_VERSION(9, 0, 0))
return true;
/* FIXME: spreading the queues across pipes causes perf regressions
@ -385,7 +385,7 @@ int amdgpu_gfx_mqd_sw_init(struct amdgpu_device *adev,
u32 domain = AMDGPU_GEM_DOMAIN_GTT;
/* Only enable on gfx10 and 11 for now to avoid changing behavior on older chips */
if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 0, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(10, 0, 0))
domain |= AMDGPU_GEM_DOMAIN_VRAM;
/* create MQD for KIQ */


@ -43,10 +43,10 @@
#define AMDGPU_GFX_LBPW_DISABLED_MODE 0x00000008L
#define AMDGPU_MAX_GC_INSTANCES 8
#define KGD_MAX_QUEUES 128
#define AMDGPU_MAX_QUEUES 128
#define AMDGPU_MAX_GFX_QUEUES KGD_MAX_QUEUES
#define AMDGPU_MAX_COMPUTE_QUEUES KGD_MAX_QUEUES
#define AMDGPU_MAX_GFX_QUEUES AMDGPU_MAX_QUEUES
#define AMDGPU_MAX_COMPUTE_QUEUES AMDGPU_MAX_QUEUES
enum amdgpu_gfx_pipe_priority {
AMDGPU_GFX_PIPE_PRIO_NORMAL = AMDGPU_RING_PRIO_1,
@ -69,11 +69,6 @@ enum amdgpu_gfx_partition {
#define NUM_XCC(x) hweight16(x)
enum amdgpu_pkg_type {
AMDGPU_PKG_TYPE_APU = 2,
AMDGPU_PKG_TYPE_UNKNOWN,
};
enum amdgpu_gfx_ras_mem_id_type {
AMDGPU_GFX_CP_MEM = 0,
AMDGPU_GFX_GCEA_MEM,


@ -32,6 +32,7 @@
#include "amdgpu.h"
#include "amdgpu_gmc.h"
#include "amdgpu_ras.h"
#include "amdgpu_reset.h"
#include "amdgpu_xgmi.h"
#include <drm/drm_drv.h>
@ -263,12 +264,14 @@ void amdgpu_gmc_sysvm_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc
*
* @adev: amdgpu device structure holding all necessary information
* @mc: memory controller structure holding memory information
* @gart_placement: GART placement policy with respect to VRAM
*
* The function will try to place GART before or after VRAM.
* If the GART size is bigger than the space left, the GART size is adjusted,
* so this function never fails.
*/
void amdgpu_gmc_gart_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc)
void amdgpu_gmc_gart_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc,
enum amdgpu_gart_placement gart_placement)
{
const uint64_t four_gb = 0x100000000ULL;
u64 size_af, size_bf;
@ -286,11 +289,22 @@ void amdgpu_gmc_gart_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc)
mc->gart_size = max(size_bf, size_af);
}
if ((size_bf >= mc->gart_size && size_bf < size_af) ||
(size_af < mc->gart_size))
mc->gart_start = 0;
else
switch (gart_placement) {
case AMDGPU_GART_PLACEMENT_HIGH:
mc->gart_start = max_mc_address - mc->gart_size + 1;
break;
case AMDGPU_GART_PLACEMENT_LOW:
mc->gart_start = 0;
break;
case AMDGPU_GART_PLACEMENT_BEST_FIT:
default:
if ((size_bf >= mc->gart_size && size_bf < size_af) ||
(size_af < mc->gart_size))
mc->gart_start = 0;
else
mc->gart_start = max_mc_address - mc->gart_size + 1;
break;
}
mc->gart_start &= ~(four_gb - 1);
mc->gart_end = mc->gart_start + mc->gart_size - 1;
@ -315,14 +329,6 @@ void amdgpu_gmc_agp_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc)
const uint64_t sixteen_gb_mask = ~(sixteen_gb - 1);
u64 size_af, size_bf;
if (amdgpu_sriov_vf(adev)) {
mc->agp_start = 0xffffffffffff;
mc->agp_end = 0x0;
mc->agp_size = 0;
return;
}
if (mc->fb_start > mc->gart_start) {
size_bf = (mc->fb_start & sixteen_gb_mask) -
ALIGN(mc->gart_end + 1, sixteen_gb);
@ -346,6 +352,25 @@ void amdgpu_gmc_agp_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc)
mc->agp_size >> 20, mc->agp_start, mc->agp_end);
}
/**
* amdgpu_gmc_set_agp_default - Set the default AGP aperture value.
* @adev: amdgpu device structure holding all necessary information
* @mc: memory controller structure holding memory information
*
* To disable the AGP aperture, you need to set the start to a larger
* value than the end. This function sets the default value which
* can then be overridden using amdgpu_gmc_agp_location() if you want
* to enable the AGP aperture on a specific chip.
*
*/
void amdgpu_gmc_set_agp_default(struct amdgpu_device *adev,
struct amdgpu_gmc *mc)
{
mc->agp_start = 0xffffffffffff;
mc->agp_end = 0;
mc->agp_size = 0;
}
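A hypothetical call order for a per-ASIC mc_init using the new helper (a sketch, not taken from the patch; the opt-in condition is made up):

amdgpu_gmc_set_agp_default(adev, mc);        /* AGP aperture disabled by default */
amdgpu_gmc_gart_location(adev, mc, AMDGPU_GART_PLACEMENT_BEST_FIT);
if (chip_wants_agp)                          /* hypothetical per-chip condition */
	amdgpu_gmc_agp_location(adev, mc);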
/**
* amdgpu_gmc_fault_key - get hash key from vm fault address and pasid
*
@ -452,7 +477,10 @@ void amdgpu_gmc_filter_faults_remove(struct amdgpu_device *adev, uint64_t addr,
uint32_t hash;
uint64_t tmp;
ih = adev->irq.retry_cam_enabled ? &adev->irq.ih_soft : &adev->irq.ih1;
if (adev->irq.retry_cam_enabled)
return;
ih = &adev->irq.ih1;
/* Get the WPTR of the last entry in IH ring */
last_wptr = amdgpu_ih_get_wptr(adev, ih);
/* Order wptr with ring data. */
@ -549,13 +577,17 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
/* reserve engine 5 for firmware */
if (adev->enable_mes)
vm_inv_engs[i] &= ~(1 << 5);
/* reserve mmhub engine 3 for firmware */
if (adev->enable_umsch_mm)
vm_inv_engs[i] &= ~(1 << 3);
}
for (i = 0; i < adev->num_rings; ++i) {
ring = adev->rings[i];
vmhub = ring->vm_hub;
if (ring == &adev->mes.ring)
if (ring == &adev->mes.ring ||
ring == &adev->umsch_mm.ring)
continue;
inv_eng = ffs(vm_inv_engs[vmhub]);
@ -575,6 +607,142 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
return 0;
}
void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
uint32_t vmhub, uint32_t flush_type)
{
struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
struct dma_fence *fence;
struct amdgpu_job *job;
int r;
if (!hub->sdma_invalidation_workaround || vmid ||
!adev->mman.buffer_funcs_enabled ||
!adev->ib_pool_ready || amdgpu_in_reset(adev) ||
!ring->sched.ready) {
/*
* A GPU reset should flush all TLBs anyway, so no need to do
* this while one is ongoing.
*/
if (!down_read_trylock(&adev->reset_domain->sem))
return;
if (adev->gmc.flush_tlb_needs_extra_type_2)
adev->gmc.gmc_funcs->flush_gpu_tlb(adev, vmid,
vmhub, 2);
if (adev->gmc.flush_tlb_needs_extra_type_0 && flush_type == 2)
adev->gmc.gmc_funcs->flush_gpu_tlb(adev, vmid,
vmhub, 0);
adev->gmc.gmc_funcs->flush_gpu_tlb(adev, vmid, vmhub,
flush_type);
up_read(&adev->reset_domain->sem);
return;
}
/* The SDMA on Navi 1x has a bug which can theoretically result in memory
* corruption if an invalidation happens at the same time as a VA
* translation. Avoid this by doing the invalidation from the SDMA
* itself at least for GART.
*/
mutex_lock(&adev->mman.gtt_window_lock);
r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.high_pr,
AMDGPU_FENCE_OWNER_UNDEFINED,
16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
&job);
if (r)
goto error_alloc;
job->vm_pd_addr = amdgpu_gmc_pd_addr(adev->gart.bo);
job->vm_needs_flush = true;
job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop;
amdgpu_ring_pad_ib(ring, &job->ibs[0]);
fence = amdgpu_job_submit(job);
mutex_unlock(&adev->mman.gtt_window_lock);
dma_fence_wait(fence, false);
dma_fence_put(fence);
return;
error_alloc:
mutex_unlock(&adev->mman.gtt_window_lock);
dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
}
int amdgpu_gmc_flush_gpu_tlb_pasid(struct amdgpu_device *adev, uint16_t pasid,
uint32_t flush_type, bool all_hub,
uint32_t inst)
{
u32 usec_timeout = amdgpu_sriov_vf(adev) ? SRIOV_USEC_TIMEOUT :
adev->usec_timeout;
struct amdgpu_ring *ring = &adev->gfx.kiq[inst].ring;
struct amdgpu_kiq *kiq = &adev->gfx.kiq[inst];
unsigned int ndw;
signed long r;
uint32_t seq;
if (!adev->gmc.flush_pasid_uses_kiq || !ring->sched.ready ||
!down_read_trylock(&adev->reset_domain->sem)) {
if (adev->gmc.flush_tlb_needs_extra_type_2)
adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,
2, all_hub,
inst);
if (adev->gmc.flush_tlb_needs_extra_type_0 && flush_type == 2)
adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,
0, all_hub,
inst);
adev->gmc.gmc_funcs->flush_gpu_tlb_pasid(adev, pasid,
flush_type, all_hub,
inst);
return 0;
}
/* 2 dwords flush + 8 dwords fence */
ndw = kiq->pmf->invalidate_tlbs_size + 8;
if (adev->gmc.flush_tlb_needs_extra_type_2)
ndw += kiq->pmf->invalidate_tlbs_size;
if (adev->gmc.flush_tlb_needs_extra_type_0)
ndw += kiq->pmf->invalidate_tlbs_size;
spin_lock(&adev->gfx.kiq[inst].ring_lock);
amdgpu_ring_alloc(ring, ndw);
if (adev->gmc.flush_tlb_needs_extra_type_2)
kiq->pmf->kiq_invalidate_tlbs(ring, pasid, 2, all_hub);
if (flush_type == 2 && adev->gmc.flush_tlb_needs_extra_type_0)
kiq->pmf->kiq_invalidate_tlbs(ring, pasid, 0, all_hub);
kiq->pmf->kiq_invalidate_tlbs(ring, pasid, flush_type, all_hub);
r = amdgpu_fence_emit_polling(ring, &seq, MAX_KIQ_REG_WAIT);
if (r) {
amdgpu_ring_undo(ring);
spin_unlock(&adev->gfx.kiq[inst].ring_lock);
goto error_unlock_reset;
}
amdgpu_ring_commit(ring);
spin_unlock(&adev->gfx.kiq[inst].ring_lock);
r = amdgpu_fence_wait_polling(ring, seq, usec_timeout);
if (r < 1) {
dev_err(adev->dev, "wait for kiq fence error: %ld.\n", r);
r = -ETIME;
goto error_unlock_reset;
}
r = 0;
error_unlock_reset:
up_read(&adev->reset_domain->sem);
return r;
}
/**
* amdgpu_gmc_tmz_set -- check and set if a device supports TMZ
* @adev: amdgpu_device pointer
@ -584,7 +752,7 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
*/
void amdgpu_gmc_tmz_set(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
/* RAVEN */
case IP_VERSION(9, 2, 2):
case IP_VERSION(9, 1, 0):
@ -648,7 +816,7 @@ void amdgpu_gmc_tmz_set(struct amdgpu_device *adev)
void amdgpu_gmc_noretry_set(struct amdgpu_device *adev)
{
struct amdgpu_gmc *gmc = &adev->gmc;
uint32_t gc_ver = adev->ip_versions[GC_HWIP][0];
uint32_t gc_ver = amdgpu_ip_version(adev, GC_HWIP, 0);
bool noretry_default = (gc_ver == IP_VERSION(9, 0, 1) ||
gc_ver == IP_VERSION(9, 3, 0) ||
gc_ver == IP_VERSION(9, 4, 0) ||
@ -721,12 +889,6 @@ void amdgpu_gmc_get_vbios_allocations(struct amdgpu_device *adev)
case CHIP_RENOIR:
adev->mman.keep_stolen_vga_memory = true;
break;
case CHIP_YELLOW_CARP:
if (amdgpu_discovery == 0) {
adev->mman.stolen_reserved_offset = 0x1ffb0000;
adev->mman.stolen_reserved_size = 64 * PAGE_SIZE;
}
break;
default:
adev->mman.keep_stolen_vga_memory = false;
break;


@ -117,6 +117,8 @@ struct amdgpu_vmhub {
uint32_t vm_contexts_disable;
bool sdma_invalidation_workaround;
const struct amdgpu_vmhub_funcs *vmhub_funcs;
};
@ -128,9 +130,9 @@ struct amdgpu_gmc_funcs {
void (*flush_gpu_tlb)(struct amdgpu_device *adev, uint32_t vmid,
uint32_t vmhub, uint32_t flush_type);
/* flush the vm tlb via pasid */
int (*flush_gpu_tlb_pasid)(struct amdgpu_device *adev, uint16_t pasid,
uint32_t flush_type, bool all_hub,
uint32_t inst);
void (*flush_gpu_tlb_pasid)(struct amdgpu_device *adev, uint16_t pasid,
uint32_t flush_type, bool all_hub,
uint32_t inst);
/* flush the vm tlb via ring */
uint64_t (*emit_flush_gpu_tlb)(struct amdgpu_ring *ring, unsigned vmid,
uint64_t pd_addr);
@ -197,6 +199,12 @@ struct amdgpu_mem_partition_info {
#define INVALID_PFN -1
enum amdgpu_gart_placement {
AMDGPU_GART_PLACEMENT_BEST_FIT = 0,
AMDGPU_GART_PLACEMENT_HIGH,
AMDGPU_GART_PLACEMENT_LOW,
};
struct amdgpu_gmc {
/* FB's physical address in MMIO space (for CPU to
* map FB). This is different compared to the agp/
@ -333,12 +341,12 @@ struct amdgpu_gmc {
u64 MC_VM_MX_L1_TLB_CNTL;
u64 noretry_flags;
bool flush_tlb_needs_extra_type_0;
bool flush_tlb_needs_extra_type_2;
bool flush_pasid_uses_kiq;
};
#define amdgpu_gmc_flush_gpu_tlb(adev, vmid, vmhub, type) ((adev)->gmc.gmc_funcs->flush_gpu_tlb((adev), (vmid), (vmhub), (type)))
#define amdgpu_gmc_flush_gpu_tlb_pasid(adev, pasid, type, allhub, inst) \
((adev)->gmc.gmc_funcs->flush_gpu_tlb_pasid \
((adev), (pasid), (type), (allhub), (inst)))
#define amdgpu_gmc_emit_flush_gpu_tlb(r, vmid, addr) (r)->adev->gmc.gmc_funcs->emit_flush_gpu_tlb((r), (vmid), (addr))
#define amdgpu_gmc_emit_pasid_mapping(r, vmid, pasid) (r)->adev->gmc.gmc_funcs->emit_pasid_mapping((r), (vmid), (pasid))
#define amdgpu_gmc_map_mtype(adev, flags) (adev)->gmc.gmc_funcs->map_mtype((adev),(flags))
@ -389,9 +397,12 @@ void amdgpu_gmc_sysvm_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc
void amdgpu_gmc_vram_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc,
u64 base);
void amdgpu_gmc_gart_location(struct amdgpu_device *adev,
struct amdgpu_gmc *mc);
struct amdgpu_gmc *mc,
enum amdgpu_gart_placement gart_placement);
void amdgpu_gmc_agp_location(struct amdgpu_device *adev,
struct amdgpu_gmc *mc);
void amdgpu_gmc_set_agp_default(struct amdgpu_device *adev,
struct amdgpu_gmc *mc);
bool amdgpu_gmc_filter_faults(struct amdgpu_device *adev,
struct amdgpu_ih_ring *ih, uint64_t addr,
uint16_t pasid, uint64_t timestamp);
@ -401,6 +412,11 @@ int amdgpu_gmc_ras_sw_init(struct amdgpu_device *adev);
int amdgpu_gmc_ras_late_init(struct amdgpu_device *adev);
void amdgpu_gmc_ras_fini(struct amdgpu_device *adev);
int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev);
void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
uint32_t vmhub, uint32_t flush_type);
int amdgpu_gmc_flush_gpu_tlb_pasid(struct amdgpu_device *adev, uint16_t pasid,
uint32_t flush_type, bool all_hub,
uint32_t inst);
extern void amdgpu_gmc_tmz_set(struct amdgpu_device *adev);
extern void amdgpu_gmc_noretry_set(struct amdgpu_device *adev);


@ -188,7 +188,6 @@ static bool amdgpu_vmid_compatible(struct amdgpu_vmid *id,
/**
* amdgpu_vmid_grab_idle - grab idle VMID
*
* @vm: vm to allocate id for
* @ring: ring we want to submit job to
* @idle: resulting idle VMID
* @fence: fence to wait for if no id could be grabbed
@ -196,8 +195,7 @@ static bool amdgpu_vmid_compatible(struct amdgpu_vmid *id,
* Try to find an idle VMID, if none is idle add a fence to wait to the sync
* object. Returns -ENOMEM when we are out of memory.
*/
static int amdgpu_vmid_grab_idle(struct amdgpu_vm *vm,
struct amdgpu_ring *ring,
static int amdgpu_vmid_grab_idle(struct amdgpu_ring *ring,
struct amdgpu_vmid **idle,
struct dma_fence **fence)
{
@ -405,7 +403,7 @@ int amdgpu_vmid_grab(struct amdgpu_vm *vm, struct amdgpu_ring *ring,
int r = 0;
mutex_lock(&id_mgr->lock);
r = amdgpu_vmid_grab_idle(vm, ring, &idle, fence);
r = amdgpu_vmid_grab_idle(ring, &idle, fence);
if (r || !idle)
goto error;


@ -28,7 +28,7 @@
#define AMDGPU_IH_MAX_NUM_IVS 32
#define IH_RING_SIZE (256 * 1024)
#define IH_SW_RING_SIZE (8 * 1024) /* enough for 256 CAM entries */
#define IH_SW_RING_SIZE (16 * 1024) /* enough for 512 CAM entries */
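/* 16 KiB / 512 entries works out to 32 bytes per entry, consistent with
 * the previous 8 KiB / 256 sizing.
 */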
struct amdgpu_device;
struct amdgpu_iv_entry;


@ -270,29 +270,29 @@ static void amdgpu_restore_msix(struct amdgpu_device *adev)
*/
int amdgpu_irq_init(struct amdgpu_device *adev)
{
int r = 0;
unsigned int irq;
unsigned int irq, flags;
int r;
spin_lock_init(&adev->irq.lock);
/* Enable MSI if not disabled by module parameter */
adev->irq.msi_enabled = false;
if (!amdgpu_msi_ok(adev))
flags = PCI_IRQ_LEGACY;
else
flags = PCI_IRQ_ALL_TYPES;
/* we only need one vector */
r = pci_alloc_irq_vectors(adev->pdev, 1, 1, flags);
if (r < 0) {
dev_err(adev->dev, "Failed to alloc msi vectors\n");
return r;
}
if (amdgpu_msi_ok(adev)) {
int nvec = pci_msix_vec_count(adev->pdev);
unsigned int flags;
if (nvec <= 0)
flags = PCI_IRQ_MSI;
else
flags = PCI_IRQ_MSI | PCI_IRQ_MSIX;
/* we only need one vector */
nvec = pci_alloc_irq_vectors(adev->pdev, 1, 1, flags);
if (nvec > 0) {
adev->irq.msi_enabled = true;
dev_dbg(adev->dev, "using MSI/MSI-X.\n");
}
adev->irq.msi_enabled = true;
dev_dbg(adev->dev, "using MSI/MSI-X.\n");
}
INIT_WORK(&adev->irq.ih1_work, amdgpu_irq_handle_ih1);
@ -302,22 +302,29 @@ int amdgpu_irq_init(struct amdgpu_device *adev)
/* Use vector 0 for MSI-X. */
r = pci_irq_vector(adev->pdev, 0);
if (r < 0)
return r;
goto free_vectors;
irq = r;
/* PCI devices require shared interrupts. */
r = request_irq(irq, amdgpu_irq_handler, IRQF_SHARED, adev_to_drm(adev)->driver->name,
adev_to_drm(adev));
if (r)
return r;
goto free_vectors;
adev->irq.installed = true;
adev->irq.irq = irq;
adev_to_drm(adev)->max_vblank_count = 0x00ffffff;
DRM_DEBUG("amdgpu: irq initialized.\n");
return 0;
}
free_vectors:
if (adev->irq.msi_enabled)
pci_free_irq_vectors(adev->pdev);
adev->irq.msi_enabled = false;
return r;
}
void amdgpu_irq_fini_hw(struct amdgpu_device *adev)
{


@ -200,6 +200,44 @@ out:
return r;
}
static enum amd_ip_block_type
amdgpu_ip_get_block_type(struct amdgpu_device *adev, uint32_t ip)
{
enum amd_ip_block_type type;
switch (ip) {
case AMDGPU_HW_IP_GFX:
type = AMD_IP_BLOCK_TYPE_GFX;
break;
case AMDGPU_HW_IP_COMPUTE:
type = AMD_IP_BLOCK_TYPE_GFX;
break;
case AMDGPU_HW_IP_DMA:
type = AMD_IP_BLOCK_TYPE_SDMA;
break;
case AMDGPU_HW_IP_UVD:
case AMDGPU_HW_IP_UVD_ENC:
type = AMD_IP_BLOCK_TYPE_UVD;
break;
case AMDGPU_HW_IP_VCE:
type = AMD_IP_BLOCK_TYPE_VCE;
break;
case AMDGPU_HW_IP_VCN_DEC:
case AMDGPU_HW_IP_VCN_ENC:
type = AMD_IP_BLOCK_TYPE_VCN;
break;
case AMDGPU_HW_IP_VCN_JPEG:
type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ?
AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN;
break;
default:
type = AMD_IP_BLOCK_TYPE_NUM;
break;
}
return type;
}
static int amdgpu_firmware_info(struct drm_amdgpu_info_firmware *fw_info,
struct drm_amdgpu_query_fw *query_fw,
struct amdgpu_device *adev)
@ -352,6 +390,10 @@ static int amdgpu_firmware_info(struct drm_amdgpu_info_firmware *fw_info,
fw_info->ver = adev->gfx.imu_fw_version;
fw_info->feature = 0;
break;
case AMDGPU_INFO_FW_VPE:
fw_info->ver = adev->vpe.fw_version;
fw_info->feature = adev->vpe.feature_version;
break;
default:
return -EINVAL;
}
@ -405,7 +447,7 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev,
if (adev->uvd.inst[i].ring.sched.ready)
++num_rings;
}
ib_start_alignment = 64;
ib_start_alignment = 256;
ib_size_alignment = 64;
break;
case AMDGPU_HW_IP_VCE:
@ -413,8 +455,8 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev,
for (i = 0; i < adev->vce.num_rings; i++)
if (adev->vce.ring[i].sched.ready)
++num_rings;
ib_start_alignment = 4;
ib_size_alignment = 1;
ib_start_alignment = 256;
ib_size_alignment = 4;
break;
case AMDGPU_HW_IP_UVD_ENC:
type = AMD_IP_BLOCK_TYPE_UVD;
@ -426,8 +468,8 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev,
if (adev->uvd.inst[i].ring_enc[j].sched.ready)
++num_rings;
}
ib_start_alignment = 64;
ib_size_alignment = 64;
ib_start_alignment = 256;
ib_size_alignment = 4;
break;
case AMDGPU_HW_IP_VCN_DEC:
type = AMD_IP_BLOCK_TYPE_VCN;
@ -438,8 +480,8 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev,
if (adev->vcn.inst[i].ring_dec.sched.ready)
++num_rings;
}
ib_start_alignment = 16;
ib_size_alignment = 16;
ib_start_alignment = 256;
ib_size_alignment = 64;
break;
case AMDGPU_HW_IP_VCN_ENC:
type = AMD_IP_BLOCK_TYPE_VCN;
@ -451,8 +493,8 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev,
if (adev->vcn.inst[i].ring_enc[j].sched.ready)
++num_rings;
}
ib_start_alignment = 64;
ib_size_alignment = 1;
ib_start_alignment = 256;
ib_size_alignment = 4;
break;
case AMDGPU_HW_IP_VCN_JPEG:
type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ?
@ -466,8 +508,15 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev,
if (adev->jpeg.inst[i].ring_dec[j].sched.ready)
++num_rings;
}
ib_start_alignment = 16;
ib_size_alignment = 16;
ib_start_alignment = 256;
ib_size_alignment = 64;
break;
case AMDGPU_HW_IP_VPE:
type = AMD_IP_BLOCK_TYPE_VPE;
if (adev->vpe.ring.sched.ready)
++num_rings;
ib_start_alignment = 256;
ib_size_alignment = 4;
break;
default:
return -EINVAL;
@ -490,18 +539,26 @@ static int amdgpu_hw_ip_info(struct amdgpu_device *adev,
if (adev->asic_type >= CHIP_VEGA10) {
switch (type) {
case AMD_IP_BLOCK_TYPE_GFX:
result->ip_discovery_version = adev->ip_versions[GC_HWIP][0];
result->ip_discovery_version =
IP_VERSION_MAJ_MIN_REV(amdgpu_ip_version(adev, GC_HWIP, 0));
break;
case AMD_IP_BLOCK_TYPE_SDMA:
result->ip_discovery_version = adev->ip_versions[SDMA0_HWIP][0];
result->ip_discovery_version =
IP_VERSION_MAJ_MIN_REV(amdgpu_ip_version(adev, SDMA0_HWIP, 0));
break;
case AMD_IP_BLOCK_TYPE_UVD:
case AMD_IP_BLOCK_TYPE_VCN:
case AMD_IP_BLOCK_TYPE_JPEG:
result->ip_discovery_version = adev->ip_versions[UVD_HWIP][0];
result->ip_discovery_version =
IP_VERSION_MAJ_MIN_REV(amdgpu_ip_version(adev, UVD_HWIP, 0));
break;
case AMD_IP_BLOCK_TYPE_VCE:
result->ip_discovery_version = adev->ip_versions[VCE_HWIP][0];
result->ip_discovery_version =
IP_VERSION_MAJ_MIN_REV(amdgpu_ip_version(adev, VCE_HWIP, 0));
break;
case AMD_IP_BLOCK_TYPE_VPE:
result->ip_discovery_version =
IP_VERSION_MAJ_MIN_REV(amdgpu_ip_version(adev, VPE_HWIP, 0));
break;
default:
result->ip_discovery_version = 0;
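
Throughout this merge, direct reads of adev->ip_versions[...][0] are converted to the amdgpu_ip_version() helper, and the value reported through the INFO ioctl is reduced back to major/minor/revision via IP_VERSION_MAJ_MIN_REV(), so the UAPI field keeps its historical packing. A minimal userspace-side decode, assuming the (major << 16) | (minor << 8) | revision layout that packing implies; the helper name below is purely illustrative:

#include <stdint.h>

/* Sketch only: split drm_amdgpu_info_hw_ip.ip_discovery_version into its
 * assumed major/minor/revision fields. */
static void example_decode_ip_discovery_version(uint32_t ver,
                                                unsigned int *maj,
                                                unsigned int *min,
                                                unsigned int *rev)
{
        *maj = (ver >> 16) & 0xff;
        *min = (ver >> 8) & 0xff;
        *rev = ver & 0xff;
}
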
@ -538,11 +595,16 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
struct drm_amdgpu_info *info = data;
struct amdgpu_mode_info *minfo = &adev->mode_info;
void __user *out = (void __user *)(uintptr_t)info->return_pointer;
struct amdgpu_fpriv *fpriv;
struct amdgpu_ip_block *ip_block;
enum amd_ip_block_type type;
struct amdgpu_xcp *xcp;
u32 count, inst_mask;
uint32_t size = info->return_size;
struct drm_crtc *crtc;
uint32_t ui32 = 0;
uint64_t ui64 = 0;
int i, found;
int i, found, ret;
int ui32_size = sizeof(ui32);
if (!info->return_size || !info->return_pointer)
@ -570,7 +632,6 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
return copy_to_user(out, &ui32, min(size, 4u)) ? -EFAULT : 0;
case AMDGPU_INFO_HW_IP_INFO: {
struct drm_amdgpu_info_hw_ip ip = {};
int ret;
ret = amdgpu_hw_ip_info(adev, info, &ip);
if (ret)
@ -580,45 +641,65 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
return ret ? -EFAULT : 0;
}
case AMDGPU_INFO_HW_IP_COUNT: {
enum amd_ip_block_type type;
uint32_t count = 0;
fpriv = (struct amdgpu_fpriv *)filp->driver_priv;
type = amdgpu_ip_get_block_type(adev, info->query_hw_ip.type);
ip_block = amdgpu_device_ip_get_ip_block(adev, type);
switch (info->query_hw_ip.type) {
case AMDGPU_HW_IP_GFX:
type = AMD_IP_BLOCK_TYPE_GFX;
break;
case AMDGPU_HW_IP_COMPUTE:
type = AMD_IP_BLOCK_TYPE_GFX;
break;
case AMDGPU_HW_IP_DMA:
type = AMD_IP_BLOCK_TYPE_SDMA;
break;
case AMDGPU_HW_IP_UVD:
type = AMD_IP_BLOCK_TYPE_UVD;
break;
case AMDGPU_HW_IP_VCE:
type = AMD_IP_BLOCK_TYPE_VCE;
break;
case AMDGPU_HW_IP_UVD_ENC:
type = AMD_IP_BLOCK_TYPE_UVD;
break;
case AMDGPU_HW_IP_VCN_DEC:
case AMDGPU_HW_IP_VCN_ENC:
type = AMD_IP_BLOCK_TYPE_VCN;
break;
case AMDGPU_HW_IP_VCN_JPEG:
type = (amdgpu_device_ip_get_ip_block(adev, AMD_IP_BLOCK_TYPE_JPEG)) ?
AMD_IP_BLOCK_TYPE_JPEG : AMD_IP_BLOCK_TYPE_VCN;
break;
default:
if (!ip_block || !ip_block->status.valid)
return -EINVAL;
if (adev->xcp_mgr && adev->xcp_mgr->num_xcps > 0 &&
fpriv->xcp_id >= 0 && fpriv->xcp_id < adev->xcp_mgr->num_xcps) {
xcp = &adev->xcp_mgr->xcp[fpriv->xcp_id];
switch (type) {
case AMD_IP_BLOCK_TYPE_GFX:
ret = amdgpu_xcp_get_inst_details(xcp, AMDGPU_XCP_GFX, &inst_mask);
count = hweight32(inst_mask);
break;
case AMD_IP_BLOCK_TYPE_SDMA:
ret = amdgpu_xcp_get_inst_details(xcp, AMDGPU_XCP_SDMA, &inst_mask);
count = hweight32(inst_mask);
break;
case AMD_IP_BLOCK_TYPE_JPEG:
ret = amdgpu_xcp_get_inst_details(xcp, AMDGPU_XCP_VCN, &inst_mask);
count = hweight32(inst_mask) * adev->jpeg.num_jpeg_rings;
break;
case AMD_IP_BLOCK_TYPE_VCN:
ret = amdgpu_xcp_get_inst_details(xcp, AMDGPU_XCP_VCN, &inst_mask);
count = hweight32(inst_mask);
break;
default:
return -EINVAL;
}
if (ret)
return ret;
return copy_to_user(out, &count, min(size, 4u)) ? -EFAULT : 0;
}
for (i = 0; i < adev->num_ip_blocks; i++)
if (adev->ip_blocks[i].version->type == type &&
adev->ip_blocks[i].status.valid &&
count < AMDGPU_HW_IP_INSTANCE_MAX_COUNT)
count++;
switch (type) {
case AMD_IP_BLOCK_TYPE_GFX:
case AMD_IP_BLOCK_TYPE_VCE:
count = 1;
break;
case AMD_IP_BLOCK_TYPE_SDMA:
count = adev->sdma.num_instances;
break;
case AMD_IP_BLOCK_TYPE_JPEG:
count = adev->jpeg.num_jpeg_inst * adev->jpeg.num_jpeg_rings;
break;
case AMD_IP_BLOCK_TYPE_VCN:
count = adev->vcn.num_vcn_inst;
break;
case AMD_IP_BLOCK_TYPE_UVD:
count = adev->uvd.num_uvd_inst;
break;
/* For all other IP block types not listed in the switch statement
* the ip status is valid here and the instance count is one.
*/
default:
count = 1;
break;
}
return copy_to_user(out, &count, min(size, 4u)) ? -EFAULT : 0;
}
@ -627,7 +708,6 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
return copy_to_user(out, &ui64, min(size, 8u)) ? -EFAULT : 0;
case AMDGPU_INFO_FW_VERSION: {
struct drm_amdgpu_info_firmware fw_info;
int ret;
/* We only support one instance of each IP block right now. */
if (info->query_fw.ip_instance != 0)
@ -772,7 +852,6 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
struct drm_amdgpu_info_device *dev_info;
uint64_t vm_size;
uint32_t pcie_gen_mask;
int ret;
dev_info = kzalloc(sizeof(*dev_info), GFP_KERNEL);
if (!dev_info)
@ -1173,6 +1252,26 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
return copy_to_user(out, max_ibs,
min((size_t)size, sizeof(max_ibs))) ? -EFAULT : 0;
}
case AMDGPU_INFO_GPUVM_FAULT: {
struct amdgpu_fpriv *fpriv = filp->driver_priv;
struct amdgpu_vm *vm = &fpriv->vm;
struct drm_amdgpu_info_gpuvm_fault gpuvm_fault;
unsigned long flags;
if (!vm)
return -EINVAL;
memset(&gpuvm_fault, 0, sizeof(gpuvm_fault));
xa_lock_irqsave(&adev->vm_manager.pasids, flags);
gpuvm_fault.addr = vm->fault_info.addr;
gpuvm_fault.status = vm->fault_info.status;
gpuvm_fault.vmhub = vm->fault_info.vmhub;
xa_unlock_irqrestore(&adev->vm_manager.pasids, flags);
return copy_to_user(out, &gpuvm_fault,
min((size_t)size, sizeof(gpuvm_fault))) ? -EFAULT : 0;
}
default:
DRM_DEBUG_KMS("Invalid request %d\n", info->query);
return -EINVAL;
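
The AMDGPU_INFO_GPUVM_FAULT sub-query above exposes the per-VM fault information (address, status, vmhub) that the fault handlers now cache. A hedged sketch of reading it over the raw INFO ioctl; it assumes an open render-node fd and an amdgpu_drm.h built from this UAPI, and the proposed Mesa/libdrm wrappers may expose this differently:

#include <string.h>
#include <stdint.h>
#include <xf86drm.h>
#include <amdgpu_drm.h>   /* from a libdrm carrying this UAPI update */

/* Returns 0 on success; *fault then holds the last recorded GPUVM fault,
 * or all zeroes if no fault has been recorded for this file's VM. */
static int example_query_gpuvm_fault(int fd,
                                     struct drm_amdgpu_info_gpuvm_fault *fault)
{
        struct drm_amdgpu_info request;

        memset(&request, 0, sizeof(request));
        memset(fault, 0, sizeof(*fault));
        request.return_pointer = (uintptr_t)fault;
        request.return_size = sizeof(*fault);
        request.query = AMDGPU_INFO_GPUVM_FAULT;

        /* the kernel copies min(return_size, sizeof(*fault)) bytes back */
        return drmIoctl(fd, DRM_IOCTL_AMDGPU_INFO, &request);
}
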
@ -1729,6 +1828,14 @@ static int amdgpu_debugfs_firmware_info_show(struct seq_file *m, void *unused)
seq_printf(m, "MES feature version: %u, firmware version: 0x%08x\n",
fw_info.feature, fw_info.ver);
/* VPE */
query_fw.fw_type = AMDGPU_INFO_FW_VPE;
ret = amdgpu_firmware_info(&fw_info, &query_fw, adev);
if (ret)
return ret;
seq_printf(m, "VPE feature version: %u, firmware version: 0x%08x\n",
fw_info.feature, fw_info.ver);
seq_printf(m, "VBIOS version: %s\n", ctx->vbios_pn);
return 0;


@ -142,3 +142,189 @@ int amdgpu_mca_mpio_ras_sw_init(struct amdgpu_device *adev)
return 0;
}
void amdgpu_mca_smu_init_funcs(struct amdgpu_device *adev, const struct amdgpu_mca_smu_funcs *mca_funcs)
{
struct amdgpu_mca *mca = &adev->mca;
mca->mca_funcs = mca_funcs;
}
int amdgpu_mca_smu_set_debug_mode(struct amdgpu_device *adev, bool enable)
{
const struct amdgpu_mca_smu_funcs *mca_funcs = adev->mca.mca_funcs;
if (mca_funcs && mca_funcs->mca_set_debug_mode)
return mca_funcs->mca_set_debug_mode(adev, enable);
return -EOPNOTSUPP;
}
int amdgpu_mca_smu_get_valid_mca_count(struct amdgpu_device *adev, enum amdgpu_mca_error_type type, uint32_t *count)
{
const struct amdgpu_mca_smu_funcs *mca_funcs = adev->mca.mca_funcs;
if (!count)
return -EINVAL;
if (mca_funcs && mca_funcs->mca_get_valid_mca_count)
return mca_funcs->mca_get_valid_mca_count(adev, type, count);
return -EOPNOTSUPP;
}
int amdgpu_mca_smu_get_error_count(struct amdgpu_device *adev, enum amdgpu_ras_block blk,
enum amdgpu_mca_error_type type, uint32_t *count)
{
const struct amdgpu_mca_smu_funcs *mca_funcs = adev->mca.mca_funcs;
if (!count)
return -EINVAL;
if (mca_funcs && mca_funcs->mca_get_error_count)
return mca_funcs->mca_get_error_count(adev, blk, type, count);
return -EOPNOTSUPP;
}
int amdgpu_mca_smu_get_mca_entry(struct amdgpu_device *adev, enum amdgpu_mca_error_type type,
int idx, struct mca_bank_entry *entry)
{
const struct amdgpu_mca_smu_funcs *mca_funcs = adev->mca.mca_funcs;
int count;
if (!mca_funcs || !mca_funcs->mca_get_mca_entry)
return -EOPNOTSUPP;
switch (type) {
case AMDGPU_MCA_ERROR_TYPE_UE:
count = mca_funcs->max_ue_count;
break;
case AMDGPU_MCA_ERROR_TYPE_CE:
count = mca_funcs->max_ce_count;
break;
default:
return -EINVAL;
}
if (idx >= count)
return -EINVAL;
return mca_funcs->mca_get_mca_entry(adev, type, idx, entry);
}
#if defined(CONFIG_DEBUG_FS)
static int amdgpu_mca_smu_debug_mode_set(void *data, u64 val)
{
struct amdgpu_device *adev = (struct amdgpu_device *)data;
int ret;
ret = amdgpu_mca_smu_set_debug_mode(adev, val ? true : false);
if (ret)
return ret;
dev_info(adev->dev, "amdgpu set smu mca debug mode %s success\n", val ? "on" : "off");
return 0;
}
static void mca_dump_entry(struct seq_file *m, struct mca_bank_entry *entry)
{
int i, idx = entry->idx;
seq_printf(m, "mca entry[%d].type: %s\n", idx, entry->type == AMDGPU_MCA_ERROR_TYPE_UE ? "UE" : "CE");
seq_printf(m, "mca entry[%d].ip: %d\n", idx, entry->ip);
seq_printf(m, "mca entry[%d].info: socketid:%d aid:%d hwid:0x%03x mcatype:0x%04x\n",
idx, entry->info.socket_id, entry->info.aid, entry->info.hwid, entry->info.mcatype);
for (i = 0; i < ARRAY_SIZE(entry->regs); i++)
seq_printf(m, "mca entry[%d].regs[%d]: 0x%016llx\n", idx, i, entry->regs[i]);
}
static int mca_dump_show(struct seq_file *m, enum amdgpu_mca_error_type type)
{
struct amdgpu_device *adev = (struct amdgpu_device *)m->private;
struct mca_bank_entry *entry;
uint32_t count = 0;
int i, ret;
ret = amdgpu_mca_smu_get_valid_mca_count(adev, type, &count);
if (ret)
return ret;
seq_printf(m, "amdgpu smu %s valid mca count: %d\n",
type == AMDGPU_MCA_ERROR_TYPE_UE ? "UE" : "CE", count);
if (!count)
return 0;
entry = kmalloc(sizeof(*entry), GFP_KERNEL);
if (!entry)
return -ENOMEM;
for (i = 0; i < count; i++) {
memset(entry, 0, sizeof(*entry));
ret = amdgpu_mca_smu_get_mca_entry(adev, type, i, entry);
if (ret)
goto err_free_entry;
mca_dump_entry(m, entry);
}
err_free_entry:
kfree(entry);
return ret;
}
static int mca_dump_ce_show(struct seq_file *m, void *unused)
{
return mca_dump_show(m, AMDGPU_MCA_ERROR_TYPE_CE);
}
static int mca_dump_ce_open(struct inode *inode, struct file *file)
{
return single_open(file, mca_dump_ce_show, inode->i_private);
}
static const struct file_operations mca_ce_dump_debug_fops = {
.owner = THIS_MODULE,
.open = mca_dump_ce_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int mca_dump_ue_show(struct seq_file *m, void *unused)
{
return mca_dump_show(m, AMDGPU_MCA_ERROR_TYPE_UE);
}
static int mca_dump_ue_open(struct inode *inode, struct file *file)
{
return single_open(file, mca_dump_ue_show, inode->i_private);
}
static const struct file_operations mca_ue_dump_debug_fops = {
.owner = THIS_MODULE,
.open = mca_dump_ue_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
DEFINE_DEBUGFS_ATTRIBUTE(mca_debug_mode_fops, NULL, amdgpu_mca_smu_debug_mode_set, "%llu\n");
#endif
void amdgpu_mca_smu_debugfs_init(struct amdgpu_device *adev, struct dentry *root)
{
#if defined(CONFIG_DEBUG_FS)
if (!root || adev->ip_versions[MP1_HWIP][0] != IP_VERSION(13, 0, 6))
return;
debugfs_create_file("mca_debug_mode", 0200, root, adev, &mca_debug_mode_fops);
debugfs_create_file("mca_ue_dump", 0400, root, adev, &mca_ue_dump_debug_fops);
debugfs_create_file("mca_ce_dump", 0400, root, adev, &mca_ce_dump_debug_fops);
#endif
}
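
amdgpu_mca_smu_init_funcs() is how an SMU backend publishes its MCA bank accessors; the wrappers above and the new mca_debug_mode/mca_ue_dump/mca_ce_dump debugfs files all dispatch through adev->mca.mca_funcs. A hedged sketch of such a registration, with every example_* name and count purely illustrative and the amdgpu/amdgpu_mca headers from this series assumed:

static int example_mca_set_debug_mode(struct amdgpu_device *adev, bool enable)
{
        /* forward the request to the backend's SMU message interface */
        return 0;
}

static int example_mca_get_valid_mca_count(struct amdgpu_device *adev,
                                           enum amdgpu_mca_error_type type,
                                           uint32_t *count)
{
        *count = 0;     /* read the valid bank count from the SMU here */
        return 0;
}

static int example_mca_get_mca_entry(struct amdgpu_device *adev,
                                     enum amdgpu_mca_error_type type,
                                     int idx, struct mca_bank_entry *entry)
{
        return -EOPNOTSUPP;     /* fill *entry from the SMU bank dump here */
}

static const struct amdgpu_mca_smu_funcs example_mca_smu_funcs = {
        .max_ue_count = 12,     /* placeholder bank limits */
        .max_ce_count = 12,
        .mca_set_debug_mode = example_mca_set_debug_mode,
        .mca_get_valid_mca_count = example_mca_get_valid_mca_count,
        .mca_get_mca_entry = example_mca_get_mca_entry,
};

static void example_register_mca(struct amdgpu_device *adev)
{
        /* typically called from the SMU IP block's init path */
        amdgpu_mca_smu_init_funcs(adev, &example_mca_smu_funcs);
}
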


@ -21,6 +21,26 @@
#ifndef __AMDGPU_MCA_H__
#define __AMDGPU_MCA_H__
#include "amdgpu_ras.h"
#define MCA_MAX_REGS_COUNT (16)
enum amdgpu_mca_ip {
AMDGPU_MCA_IP_UNKNOW = -1,
AMDGPU_MCA_IP_PSP = 0,
AMDGPU_MCA_IP_SDMA,
AMDGPU_MCA_IP_GC,
AMDGPU_MCA_IP_SMU,
AMDGPU_MCA_IP_MP5,
AMDGPU_MCA_IP_UMC,
AMDGPU_MCA_IP_COUNT,
};
enum amdgpu_mca_error_type {
AMDGPU_MCA_ERROR_TYPE_UE = 0,
AMDGPU_MCA_ERROR_TYPE_CE,
};
struct amdgpu_mca_ras_block {
struct amdgpu_ras_block_object ras_block;
};
@ -34,6 +54,36 @@ struct amdgpu_mca {
struct amdgpu_mca_ras mp0;
struct amdgpu_mca_ras mp1;
struct amdgpu_mca_ras mpio;
const struct amdgpu_mca_smu_funcs *mca_funcs;
};
struct mca_bank_info {
int socket_id;
int aid;
int hwid;
int mcatype;
};
struct mca_bank_entry {
int idx;
enum amdgpu_mca_error_type type;
enum amdgpu_mca_ip ip;
struct mca_bank_info info;
uint64_t regs[MCA_MAX_REGS_COUNT];
};
struct amdgpu_mca_smu_funcs {
int max_ue_count;
int max_ce_count;
int (*mca_set_debug_mode)(struct amdgpu_device *adev, bool enable);
int (*mca_get_error_count)(struct amdgpu_device *adev, enum amdgpu_ras_block blk,
enum amdgpu_mca_error_type type, uint32_t *count);
int (*mca_get_valid_mca_count)(struct amdgpu_device *adev, enum amdgpu_mca_error_type type,
uint32_t *count);
int (*mca_get_mca_entry)(struct amdgpu_device *adev, enum amdgpu_mca_error_type type,
int idx, struct mca_bank_entry *entry);
int (*mca_get_ras_mca_idx_array)(struct amdgpu_device *adev, enum amdgpu_ras_block blk,
enum amdgpu_mca_error_type type, int *idx_array, int *idx_array_size);
};
void amdgpu_mca_query_correctable_error_count(struct amdgpu_device *adev,
@ -53,4 +103,15 @@ void amdgpu_mca_query_ras_error_count(struct amdgpu_device *adev,
int amdgpu_mca_mp0_ras_sw_init(struct amdgpu_device *adev);
int amdgpu_mca_mp1_ras_sw_init(struct amdgpu_device *adev);
int amdgpu_mca_mpio_ras_sw_init(struct amdgpu_device *adev);
void amdgpu_mca_smu_init_funcs(struct amdgpu_device *adev, const struct amdgpu_mca_smu_funcs *mca_funcs);
int amdgpu_mca_smu_set_debug_mode(struct amdgpu_device *adev, bool enable);
int amdgpu_mca_smu_get_valid_mca_count(struct amdgpu_device *adev, enum amdgpu_mca_error_type type, uint32_t *count);
int amdgpu_mca_smu_get_error_count(struct amdgpu_device *adev, enum amdgpu_ras_block blk,
enum amdgpu_mca_error_type type, uint32_t *count);
int amdgpu_mca_smu_get_mca_entry(struct amdgpu_device *adev, enum amdgpu_mca_error_type type,
int idx, struct mca_bank_entry *entry);
void amdgpu_mca_smu_debugfs_init(struct amdgpu_device *adev, struct dentry *root);
#endif


@ -132,7 +132,8 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
adev->mes.gfx_hqd_mask[i] = i ? 0 : 0xfffffffe;
for (i = 0; i < AMDGPU_MES_MAX_SDMA_PIPES; i++) {
if (adev->ip_versions[SDMA0_HWIP][0] < IP_VERSION(6, 0, 0))
if (amdgpu_ip_version(adev, SDMA0_HWIP, 0) <
IP_VERSION(6, 0, 0))
adev->mes.sdma_hqd_mask[i] = i ? 0 : 0x3fc;
/* zero sdma_hqd_mask for non-existent engine */
else if (adev->sdma.num_instances == 1)
@ -1335,8 +1336,10 @@ int amdgpu_mes_self_test(struct amdgpu_device *adev)
for (i = 0; i < ARRAY_SIZE(queue_types); i++) {
/* On GFX v10.3, fw hasn't supported to map sdma queue. */
if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0) &&
adev->ip_versions[GC_HWIP][0] < IP_VERSION(11, 0, 0) &&
if (amdgpu_ip_version(adev, GC_HWIP, 0) >=
IP_VERSION(10, 3, 0) &&
amdgpu_ip_version(adev, GC_HWIP, 0) <
IP_VERSION(11, 0, 0) &&
queue_types[i][0] == AMDGPU_RING_TYPE_SDMA)
continue;
@ -1397,7 +1400,7 @@ int amdgpu_mes_init_microcode(struct amdgpu_device *adev, int pipe)
amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix,
sizeof(ucode_prefix));
if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(11, 0, 0)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0)) {
snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes%s.bin",
ucode_prefix,
pipe == AMDGPU_MES_SCHED_PIPE ? "_2" : "1");


@ -69,6 +69,8 @@ struct amdgpu_nbio_funcs {
u32 (*get_memsize)(struct amdgpu_device *adev);
void (*sdma_doorbell_range)(struct amdgpu_device *adev, int instance,
bool use_doorbell, int doorbell_index, int doorbell_size);
void (*vpe_doorbell_range)(struct amdgpu_device *adev, int instance,
bool use_doorbell, int doorbell_index, int doorbell_size);
void (*vcn_doorbell_range)(struct amdgpu_device *adev, bool use_doorbell,
int doorbell_index, int instance);
void (*gc_doorbell_init)(struct amdgpu_device *adev);


@ -459,7 +459,7 @@ void amdgpu_bo_free_kernel(struct amdgpu_bo **bo, u64 *gpu_addr,
*cpu_addr = NULL;
}
/* Validate bo size is bit bigger then the request domain */
/* Validate bo size is bit bigger than the request domain */
static bool amdgpu_bo_validate_size(struct amdgpu_device *adev,
unsigned long size, u32 domain)
{
@ -469,29 +469,24 @@ static bool amdgpu_bo_validate_size(struct amdgpu_device *adev,
* If GTT is part of requested domains the check must succeed to
* allow fall back to GTT.
*/
if (domain & AMDGPU_GEM_DOMAIN_GTT) {
if (domain & AMDGPU_GEM_DOMAIN_GTT)
man = ttm_manager_type(&adev->mman.bdev, TTM_PL_TT);
if (man && size < man->size)
return true;
else if (!man)
WARN_ON_ONCE("GTT domain requested but GTT mem manager uninitialized");
goto fail;
} else if (domain & AMDGPU_GEM_DOMAIN_VRAM) {
else if (domain & AMDGPU_GEM_DOMAIN_VRAM)
man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
else
return true;
if (man && size < man->size)
return true;
goto fail;
if (!man) {
if (domain & AMDGPU_GEM_DOMAIN_GTT)
WARN_ON_ONCE("GTT domain requested but GTT mem manager uninitialized");
return false;
}
/* TODO add more domains checks, such as AMDGPU_GEM_DOMAIN_CPU, _DOMAIN_DOORBELL */
return true;
if (size < man->size)
return true;
fail:
if (man)
DRM_DEBUG("BO size %lu > total memory in domain: %llu\n", size,
man->size);
DRM_DEBUG("BO size %lu > total memory in domain: %llu\n", size, man->size);
return false;
}
@ -1067,6 +1062,9 @@ static const char * const amdgpu_vram_names[] = {
*/
int amdgpu_bo_init(struct amdgpu_device *adev)
{
/* set the default AGP aperture state */
amdgpu_gmc_set_agp_default(adev, &adev->gmc);
/* On A+A platform, VRAM can be mapped as WB */
if (!adev->gmc.xgmi.connected_to_cpu && !adev->gmc.is_app_apu) {
/* reserve PAT memory space to WC for VRAM */


@ -252,7 +252,7 @@ static inline bool amdgpu_bo_in_cpu_visible_vram(struct amdgpu_bo *bo)
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
struct amdgpu_res_cursor cursor;
if (bo->tbo.resource->mem_type != TTM_PL_VRAM)
if (!bo->tbo.resource || bo->tbo.resource->mem_type != TTM_PL_VRAM)
return false;
amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);


@ -100,7 +100,7 @@ static void psp_check_pmfw_centralized_cstate_management(struct psp_context *psp
return;
}
switch (adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP0_HWIP, 0)) {
case IP_VERSION(11, 0, 0):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 0, 5):
@ -128,7 +128,7 @@ static int psp_init_sriov_microcode(struct psp_context *psp)
amdgpu_ucode_ip_version_decode(adev, MP0_HWIP, ucode_prefix, sizeof(ucode_prefix));
switch (adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP0_HWIP, 0)) {
case IP_VERSION(9, 0, 0):
case IP_VERSION(11, 0, 7):
case IP_VERSION(11, 0, 9):
@ -162,7 +162,7 @@ static int psp_early_init(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
struct psp_context *psp = &adev->psp;
switch (adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP0_HWIP, 0)) {
case IP_VERSION(9, 0, 0):
psp_v3_1_set_psp_funcs(psp);
psp->autoload_supported = false;
@ -334,7 +334,7 @@ static bool psp_get_runtime_db_entry(struct amdgpu_device *adev,
bool ret = false;
int i;
if (adev->ip_versions[MP0_HWIP][0] == IP_VERSION(13, 0, 6))
if (amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(13, 0, 6))
return false;
db_header_pos = adev->gmc.mc_vram_size - PSP_RUNTIME_DB_OFFSET;
@ -413,7 +413,7 @@ static int psp_sw_init(void *handle)
adev->psp.xgmi_context.supports_extended_data =
!adev->gmc.xgmi.connected_to_cpu &&
adev->ip_versions[MP0_HWIP][0] == IP_VERSION(13, 0, 2);
amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(13, 0, 2);
memset(&scpm_entry, 0, sizeof(scpm_entry));
if ((psp_get_runtime_db_entry(adev,
@ -773,7 +773,7 @@ static int psp_load_toc(struct psp_context *psp,
static bool psp_boottime_tmr(struct psp_context *psp)
{
switch (psp->adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(psp->adev, MP0_HWIP, 0)) {
case IP_VERSION(13, 0, 6):
return true;
default:
@ -828,7 +828,7 @@ static int psp_tmr_init(struct psp_context *psp)
static bool psp_skip_tmr(struct psp_context *psp)
{
switch (psp->adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(psp->adev, MP0_HWIP, 0)) {
case IP_VERSION(11, 0, 9):
case IP_VERSION(11, 0, 7):
case IP_VERSION(13, 0, 2):
@ -1215,8 +1215,8 @@ int psp_xgmi_terminate(struct psp_context *psp)
struct amdgpu_device *adev = psp->adev;
/* XGMI TA unload currently is not supported on Arcturus/Aldebaran A+A */
if (adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 4) ||
(adev->ip_versions[MP0_HWIP][0] == IP_VERSION(13, 0, 2) &&
if (amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(11, 0, 4) ||
(amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(13, 0, 2) &&
adev->gmc.xgmi.connected_to_cpu))
return 0;
@ -1313,9 +1313,11 @@ int psp_xgmi_get_node_id(struct psp_context *psp, uint64_t *node_id)
static bool psp_xgmi_peer_link_info_supported(struct psp_context *psp)
{
return (psp->adev->ip_versions[MP0_HWIP][0] == IP_VERSION(13, 0, 2) &&
return (amdgpu_ip_version(psp->adev, MP0_HWIP, 0) ==
IP_VERSION(13, 0, 2) &&
psp->xgmi_context.context.bin_desc.fw_version >= 0x2000000b) ||
psp->adev->ip_versions[MP0_HWIP][0] >= IP_VERSION(13, 0, 6);
amdgpu_ip_version(psp->adev, MP0_HWIP, 0) >=
IP_VERSION(13, 0, 6);
}
/*
@ -1424,8 +1426,10 @@ int psp_xgmi_get_topology_info(struct psp_context *psp,
if (psp_xgmi_peer_link_info_supported(psp)) {
struct ta_xgmi_cmd_get_peer_link_info_output *link_info_output;
bool requires_reflection =
(psp->xgmi_context.supports_extended_data && get_extended_data) ||
psp->adev->ip_versions[MP0_HWIP][0] == IP_VERSION(13, 0, 6);
(psp->xgmi_context.supports_extended_data &&
get_extended_data) ||
amdgpu_ip_version(psp->adev, MP0_HWIP, 0) ==
IP_VERSION(13, 0, 6);
xgmi_cmd->cmd_id = TA_COMMAND_XGMI__GET_PEER_LINKS;
@ -2390,6 +2394,27 @@ static int psp_get_fw_type(struct amdgpu_firmware_info *ucode,
case AMDGPU_UCODE_ID_CP_RS64_MEC_P3_STACK:
*type = GFX_FW_TYPE_RS64_MEC_P3_STACK;
break;
case AMDGPU_UCODE_ID_VPE_CTX:
*type = GFX_FW_TYPE_VPEC_FW1;
break;
case AMDGPU_UCODE_ID_VPE_CTL:
*type = GFX_FW_TYPE_VPEC_FW2;
break;
case AMDGPU_UCODE_ID_VPE:
*type = GFX_FW_TYPE_VPE;
break;
case AMDGPU_UCODE_ID_UMSCH_MM_UCODE:
*type = GFX_FW_TYPE_UMSCH_UCODE;
break;
case AMDGPU_UCODE_ID_UMSCH_MM_DATA:
*type = GFX_FW_TYPE_UMSCH_DATA;
break;
case AMDGPU_UCODE_ID_UMSCH_MM_CMD_BUFFER:
*type = GFX_FW_TYPE_UMSCH_CMD_BUFFER;
break;
case AMDGPU_UCODE_ID_P2S_TABLE:
*type = GFX_FW_TYPE_P2S_TABLE;
break;
case AMDGPU_UCODE_ID_MAXIMUM:
default:
return -EINVAL;
@ -2481,6 +2506,31 @@ int psp_execute_ip_fw_load(struct psp_context *psp,
return ret;
}
static int psp_load_p2s_table(struct psp_context *psp)
{
int ret;
struct amdgpu_device *adev = psp->adev;
struct amdgpu_firmware_info *ucode =
&adev->firmware.ucode[AMDGPU_UCODE_ID_P2S_TABLE];
if (adev->in_runpm && (adev->pm.rpm_mode == AMDGPU_RUNPM_BACO))
return 0;
if (amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(13, 0, 6)) {
uint32_t supp_vers = adev->flags & AMD_IS_APU ? 0x0036013D :
0x0036003C;
if (psp->sos.fw_version < supp_vers)
return 0;
}
if (!ucode->fw || amdgpu_sriov_vf(psp->adev))
return 0;
ret = psp_execute_ip_fw_load(psp, ucode);
return ret;
}
static int psp_load_smu_fw(struct psp_context *psp)
{
int ret;
@ -2499,10 +2549,9 @@ static int psp_load_smu_fw(struct psp_context *psp)
if (!ucode->fw || amdgpu_sriov_vf(psp->adev))
return 0;
if ((amdgpu_in_reset(adev) &&
ras && adev->ras_enabled &&
(adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 4) ||
adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 2)))) {
if ((amdgpu_in_reset(adev) && ras && adev->ras_enabled &&
(amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(11, 0, 4) ||
amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(11, 0, 2)))) {
ret = amdgpu_dpm_set_mp1_state(adev, PP_MP1_STATE_UNLOAD);
if (ret)
DRM_WARN("Failed to set MP1 state prepare for reload\n");
@ -2522,6 +2571,9 @@ static bool fw_load_skip_check(struct psp_context *psp,
if (!ucode->fw || !ucode->ucode_size)
return true;
if (ucode->ucode_id == AMDGPU_UCODE_ID_P2S_TABLE)
return true;
if (ucode->ucode_id == AMDGPU_UCODE_ID_SMC &&
(psp_smu_reload_quirk(psp) ||
psp->autoload_supported ||
@ -2570,6 +2622,9 @@ static int psp_load_non_psp_fw(struct psp_context *psp)
return ret;
}
/* Load P2S table first if it's available */
psp_load_p2s_table(psp);
for (i = 0; i < adev->firmware.max_ucodes; i++) {
ucode = &adev->firmware.ucode[i];
@ -2585,9 +2640,12 @@ static int psp_load_non_psp_fw(struct psp_context *psp)
continue;
if (psp->autoload_supported &&
(adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 7) ||
adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 11) ||
adev->ip_versions[MP0_HWIP][0] == IP_VERSION(11, 0, 12)) &&
(amdgpu_ip_version(adev, MP0_HWIP, 0) ==
IP_VERSION(11, 0, 7) ||
amdgpu_ip_version(adev, MP0_HWIP, 0) ==
IP_VERSION(11, 0, 11) ||
amdgpu_ip_version(adev, MP0_HWIP, 0) ==
IP_VERSION(11, 0, 12)) &&
(ucode->ucode_id == AMDGPU_UCODE_ID_SDMA1 ||
ucode->ucode_id == AMDGPU_UCODE_ID_SDMA2 ||
ucode->ucode_id == AMDGPU_UCODE_ID_SDMA3))
@ -3128,7 +3186,7 @@ static int psp_init_sos_base_fw(struct amdgpu_device *adev)
le32_to_cpu(sos_hdr->header.ucode_array_offset_bytes);
if (adev->gmc.xgmi.connected_to_cpu ||
(adev->ip_versions[MP0_HWIP][0] != IP_VERSION(13, 0, 2))) {
(amdgpu_ip_version(adev, MP0_HWIP, 0) != IP_VERSION(13, 0, 2))) {
adev->psp.sos.fw_version = le32_to_cpu(sos_hdr->header.ucode_version);
adev->psp.sos.feature_version = le32_to_cpu(sos_hdr->sos.fw_version);

View File

@ -152,8 +152,9 @@ static bool amdgpu_ras_get_error_query_ready(struct amdgpu_device *adev)
static int amdgpu_reserve_page_direct(struct amdgpu_device *adev, uint64_t address)
{
struct ras_err_data err_data = {0, 0, 0, NULL};
struct ras_err_data err_data;
struct eeprom_table_record err_rec;
int ret;
if ((address >= adev->gmc.mc_vram_size) ||
(address >= RAS_UMC_INJECT_ADDR_LIMIT)) {
@ -170,6 +171,10 @@ static int amdgpu_reserve_page_direct(struct amdgpu_device *adev, uint64_t addre
return 0;
}
ret = amdgpu_ras_error_data_init(&err_data);
if (ret)
return ret;
memset(&err_rec, 0x0, sizeof(struct eeprom_table_record));
err_data.err_addr = &err_rec;
amdgpu_umc_fill_error_record(&err_data, address, address, 0, 0);
@ -180,6 +185,8 @@ static int amdgpu_reserve_page_direct(struct amdgpu_device *adev, uint64_t addre
amdgpu_ras_save_bad_pages(adev, NULL);
}
amdgpu_ras_error_data_fini(&err_data);
dev_warn(adev->dev, "WARNING: THIS IS ONLY FOR TEST PURPOSES AND WILL CORRUPT RAS EEPROM\n");
dev_warn(adev->dev, "Clear EEPROM:\n");
dev_warn(adev->dev, " echo 1 > /sys/kernel/debug/dri/0/ras/ras_eeprom_reset\n");
@ -201,8 +208,8 @@ static ssize_t amdgpu_ras_debugfs_read(struct file *f, char __user *buf,
return -EINVAL;
/* Hardware counter will be reset automatically after the query on Vega20 and Arcturus */
if (obj->adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 2) &&
obj->adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4)) {
if (amdgpu_ip_version(obj->adev, MP0_HWIP, 0) != IP_VERSION(11, 0, 2) &&
amdgpu_ip_version(obj->adev, MP0_HWIP, 0) != IP_VERSION(11, 0, 4)) {
if (amdgpu_ras_reset_error_status(obj->adev, info.head.block))
dev_warn(obj->adev->dev, "Failed to reset error counter and error status");
}
@ -611,8 +618,8 @@ static ssize_t amdgpu_ras_sysfs_read(struct device *dev,
if (amdgpu_ras_query_error_status(obj->adev, &info))
return -EINVAL;
if (obj->adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 2) &&
obj->adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4)) {
if (amdgpu_ip_version(obj->adev, MP0_HWIP, 0) != IP_VERSION(11, 0, 2) &&
amdgpu_ip_version(obj->adev, MP0_HWIP, 0) != IP_VERSION(11, 0, 4)) {
if (amdgpu_ras_reset_error_status(obj->adev, info.head.block))
dev_warn(obj->adev->dev, "Failed to reset error counter and error status");
}
@ -769,9 +776,10 @@ int amdgpu_ras_feature_enable(struct amdgpu_device *adev,
if (!con)
return -EINVAL;
/* Do not enable ras feature if it is not allowed */
if (enable &&
head->block != AMDGPU_RAS_BLOCK__GFX &&
/* For non-gfx ip, do not enable ras feature if it is not allowed */
/* For gfx ip, regardless of feature support status, */
/* Force issue enable or disable ras feature commands */
if (head->block != AMDGPU_RAS_BLOCK__GFX &&
!amdgpu_ras_is_feature_allowed(adev, head))
return 0;
@ -801,6 +809,7 @@ int amdgpu_ras_feature_enable(struct amdgpu_device *adev,
enable ? "enable":"disable",
get_ras_block_str(head),
amdgpu_ras_is_poison_mode_supported(adev), ret);
kfree(info);
return ret;
}
@ -1013,17 +1022,118 @@ static void amdgpu_ras_get_ecc_info(struct amdgpu_device *adev, struct ras_err_d
}
}
static void amdgpu_ras_error_print_error_data(struct amdgpu_device *adev,
struct ras_query_if *query_if,
struct ras_err_data *err_data,
bool is_ue)
{
struct ras_manager *ras_mgr = amdgpu_ras_find_obj(adev, &query_if->head);
const char *blk_name = get_ras_block_str(&query_if->head);
struct amdgpu_smuio_mcm_config_info *mcm_info;
struct ras_err_node *err_node;
struct ras_err_info *err_info;
if (is_ue)
dev_info(adev->dev, "%ld uncorrectable hardware errors detected in %s block\n",
ras_mgr->err_data.ue_count, blk_name);
else
dev_info(adev->dev, "%ld correctable hardware errors detected in %s block\n",
ras_mgr->err_data.ce_count, blk_name);
for_each_ras_error(err_node, err_data) {
err_info = &err_node->err_info;
mcm_info = &err_info->mcm_info;
if (is_ue && err_info->ue_count) {
dev_info(adev->dev, "socket: %d, die: %d "
"%lld uncorrectable hardware errors detected in %s block\n",
mcm_info->socket_id,
mcm_info->die_id,
err_info->ue_count,
blk_name);
} else if (!is_ue && err_info->ce_count) {
dev_info(adev->dev, "socket: %d, die: %d "
"%lld correctable hardware errors detected in %s block\n",
mcm_info->socket_id,
mcm_info->die_id,
err_info->ce_count,
blk_name);
}
}
}
static void amdgpu_ras_error_generate_report(struct amdgpu_device *adev,
struct ras_query_if *query_if,
struct ras_err_data *err_data)
{
struct ras_manager *ras_mgr = amdgpu_ras_find_obj(adev, &query_if->head);
const char *blk_name = get_ras_block_str(&query_if->head);
if (err_data->ce_count) {
if (!list_empty(&err_data->err_node_list)) {
amdgpu_ras_error_print_error_data(adev, query_if,
err_data, false);
} else if (!adev->aid_mask &&
adev->smuio.funcs &&
adev->smuio.funcs->get_socket_id &&
adev->smuio.funcs->get_die_id) {
dev_info(adev->dev, "socket: %d, die: %d "
"%ld correctable hardware errors "
"detected in %s block, no user "
"action is needed.\n",
adev->smuio.funcs->get_socket_id(adev),
adev->smuio.funcs->get_die_id(adev),
ras_mgr->err_data.ce_count,
blk_name);
} else {
dev_info(adev->dev, "%ld correctable hardware errors "
"detected in %s block, no user "
"action is needed.\n",
ras_mgr->err_data.ce_count,
blk_name);
}
}
if (err_data->ue_count) {
if (!list_empty(&err_data->err_node_list)) {
amdgpu_ras_error_print_error_data(adev, query_if,
err_data, true);
} else if (!adev->aid_mask &&
adev->smuio.funcs &&
adev->smuio.funcs->get_socket_id &&
adev->smuio.funcs->get_die_id) {
dev_info(adev->dev, "socket: %d, die: %d "
"%ld uncorrectable hardware errors "
"detected in %s block\n",
adev->smuio.funcs->get_socket_id(adev),
adev->smuio.funcs->get_die_id(adev),
ras_mgr->err_data.ue_count,
blk_name);
} else {
dev_info(adev->dev, "%ld uncorrectable hardware errors "
"detected in %s block\n",
ras_mgr->err_data.ue_count,
blk_name);
}
}
}
/* query/inject/cure begin */
int amdgpu_ras_query_error_status(struct amdgpu_device *adev,
struct ras_query_if *info)
{
struct amdgpu_ras_block_object *block_obj = NULL;
struct ras_manager *obj = amdgpu_ras_find_obj(adev, &info->head);
struct ras_err_data err_data = {0, 0, 0, NULL};
struct ras_err_data err_data;
int ret;
if (!obj)
return -EINVAL;
ret = amdgpu_ras_error_data_init(&err_data);
if (ret)
return ret;
if (info->head.block == AMDGPU_RAS_BLOCK__UMC) {
amdgpu_ras_get_ecc_info(adev, &err_data);
} else {
@ -1031,7 +1141,8 @@ int amdgpu_ras_query_error_status(struct amdgpu_device *adev,
if (!block_obj || !block_obj->hw_ops) {
dev_dbg_once(adev->dev, "%s doesn't config RAS function\n",
get_ras_block_str(&info->head));
return -EINVAL;
ret = -EINVAL;
goto out_fini_err_data;
}
if (block_obj->hw_ops->query_ras_error_count)
@ -1051,48 +1162,12 @@ int amdgpu_ras_query_error_status(struct amdgpu_device *adev,
info->ue_count = obj->err_data.ue_count;
info->ce_count = obj->err_data.ce_count;
if (err_data.ce_count) {
if (!adev->aid_mask &&
adev->smuio.funcs &&
adev->smuio.funcs->get_socket_id &&
adev->smuio.funcs->get_die_id) {
dev_info(adev->dev, "socket: %d, die: %d "
"%ld correctable hardware errors "
"detected in %s block, no user "
"action is needed.\n",
adev->smuio.funcs->get_socket_id(adev),
adev->smuio.funcs->get_die_id(adev),
obj->err_data.ce_count,
get_ras_block_str(&info->head));
} else {
dev_info(adev->dev, "%ld correctable hardware errors "
"detected in %s block, no user "
"action is needed.\n",
obj->err_data.ce_count,
get_ras_block_str(&info->head));
}
}
if (err_data.ue_count) {
if (!adev->aid_mask &&
adev->smuio.funcs &&
adev->smuio.funcs->get_socket_id &&
adev->smuio.funcs->get_die_id) {
dev_info(adev->dev, "socket: %d, die: %d "
"%ld uncorrectable hardware errors "
"detected in %s block\n",
adev->smuio.funcs->get_socket_id(adev),
adev->smuio.funcs->get_die_id(adev),
obj->err_data.ue_count,
get_ras_block_str(&info->head));
} else {
dev_info(adev->dev, "%ld uncorrectable hardware errors "
"detected in %s block\n",
obj->err_data.ue_count,
get_ras_block_str(&info->head));
}
}
amdgpu_ras_error_generate_report(adev, info, &err_data);
return 0;
out_fini_err_data:
amdgpu_ras_error_data_fini(&err_data);
return ret;
}
int amdgpu_ras_reset_error_status(struct amdgpu_device *adev,
@ -1100,15 +1175,15 @@ int amdgpu_ras_reset_error_status(struct amdgpu_device *adev,
{
struct amdgpu_ras_block_object *block_obj = amdgpu_ras_get_ras_block(adev, block, 0);
if (!amdgpu_ras_is_supported(adev, block))
return -EINVAL;
if (!block_obj || !block_obj->hw_ops) {
if (!block_obj || !block_obj->hw_ops) {
dev_dbg_once(adev->dev, "%s doesn't config RAS function\n",
ras_block_str(block));
return -EINVAL;
return 0;
}
if (!amdgpu_ras_is_supported(adev, block))
return 0;
if (block_obj->hw_ops->reset_ras_error_count)
block_obj->hw_ops->reset_ras_error_count(adev);
@ -1207,8 +1282,8 @@ static int amdgpu_ras_query_error_count_helper(struct amdgpu_device *adev,
/* some hardware/IP supports read to clear
* no need to explicitly reset the err status after the query call */
if (adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 2) &&
adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4)) {
if (amdgpu_ip_version(adev, MP0_HWIP, 0) != IP_VERSION(11, 0, 2) &&
amdgpu_ip_version(adev, MP0_HWIP, 0) != IP_VERSION(11, 0, 4)) {
if (amdgpu_ras_reset_error_status(adev, query_info->head.block))
dev_warn(adev->dev,
"Failed to reset error counter and error status\n");
@ -1368,6 +1443,22 @@ static ssize_t amdgpu_ras_sysfs_features_read(struct device *dev,
return sysfs_emit(buf, "feature mask: 0x%x\n", con->features);
}
static ssize_t amdgpu_ras_sysfs_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct amdgpu_ras *con =
container_of(attr, struct amdgpu_ras, version_attr);
return sysfs_emit(buf, "table version: 0x%x\n", con->eeprom_control.tbl_hdr.version);
}
static ssize_t amdgpu_ras_sysfs_schema_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct amdgpu_ras *con =
container_of(attr, struct amdgpu_ras, schema_attr);
return sysfs_emit(buf, "schema: 0x%x\n", con->schema);
}
static void amdgpu_ras_sysfs_remove_bad_page_node(struct amdgpu_device *adev)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
@ -1377,11 +1468,13 @@ static void amdgpu_ras_sysfs_remove_bad_page_node(struct amdgpu_device *adev)
RAS_FS_NAME);
}
static int amdgpu_ras_sysfs_remove_feature_node(struct amdgpu_device *adev)
static int amdgpu_ras_sysfs_remove_dev_attr_node(struct amdgpu_device *adev)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
struct attribute *attrs[] = {
&con->features_attr.attr,
&con->version_attr.attr,
&con->schema_attr.attr,
NULL
};
struct attribute_group group = {
@ -1457,7 +1550,7 @@ static int amdgpu_ras_sysfs_remove_all(struct amdgpu_device *adev)
if (amdgpu_bad_page_threshold != 0)
amdgpu_ras_sysfs_remove_bad_page_node(adev);
amdgpu_ras_sysfs_remove_feature_node(adev);
amdgpu_ras_sysfs_remove_dev_attr_node(adev);
return 0;
}
@ -1569,6 +1662,8 @@ void amdgpu_ras_debugfs_create_all(struct amdgpu_device *adev)
amdgpu_ras_debugfs_create(adev, &fs_info, dir);
}
}
amdgpu_mca_smu_debugfs_init(adev, dir);
}
/* debugfs end */
@ -1578,6 +1673,10 @@ static BIN_ATTR(gpu_vram_bad_pages, S_IRUGO,
amdgpu_ras_sysfs_badpages_read, NULL, 0);
static DEVICE_ATTR(features, S_IRUGO,
amdgpu_ras_sysfs_features_read, NULL);
static DEVICE_ATTR(version, 0444,
amdgpu_ras_sysfs_version_show, NULL);
static DEVICE_ATTR(schema, 0444,
amdgpu_ras_sysfs_schema_show, NULL);
static int amdgpu_ras_fs_init(struct amdgpu_device *adev)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
@ -1586,6 +1685,8 @@ static int amdgpu_ras_fs_init(struct amdgpu_device *adev)
};
struct attribute *attrs[] = {
&con->features_attr.attr,
&con->version_attr.attr,
&con->schema_attr.attr,
NULL
};
struct bin_attribute *bin_attrs[] = {
@ -1594,11 +1695,20 @@ static int amdgpu_ras_fs_init(struct amdgpu_device *adev)
};
int r;
group.attrs = attrs;
/* add features entry */
con->features_attr = dev_attr_features;
group.attrs = attrs;
sysfs_attr_init(attrs[0]);
/* add version entry */
con->version_attr = dev_attr_version;
sysfs_attr_init(attrs[1]);
/* add schema entry */
con->schema_attr = dev_attr_schema;
sysfs_attr_init(attrs[2]);
if (amdgpu_bad_page_threshold != 0) {
/* add bad_page_features entry */
bin_attr_gpu_vram_bad_pages.private = NULL;
@ -1707,12 +1817,16 @@ static void amdgpu_ras_interrupt_umc_handler(struct ras_manager *obj,
struct amdgpu_iv_entry *entry)
{
struct ras_ih_data *data = &obj->ih_data;
struct ras_err_data err_data = {0, 0, 0, NULL};
struct ras_err_data err_data;
int ret;
if (!data->cb)
return;
ret = amdgpu_ras_error_data_init(&err_data);
if (ret)
return;
/* Let IP handle its data, maybe we need get the output
* from the callback to update the error type/count, etc
*/
@ -1729,6 +1843,8 @@ static void amdgpu_ras_interrupt_umc_handler(struct ras_manager *obj,
obj->err_data.ue_count += err_data.ue_count;
obj->err_data.ce_count += err_data.ce_count;
}
amdgpu_ras_error_data_fini(&err_data);
}
static void amdgpu_ras_interrupt_handler(struct ras_manager *obj)
@ -1904,14 +2020,18 @@ static void amdgpu_ras_log_on_err_counter(struct amdgpu_device *adev)
* should be removed until smu fix handle ecc_info table.
*/
if ((info.head.block == AMDGPU_RAS_BLOCK__UMC) &&
(adev->ip_versions[MP1_HWIP][0] == IP_VERSION(13, 0, 2)))
(amdgpu_ip_version(adev, MP1_HWIP, 0) ==
IP_VERSION(13, 0, 2)))
continue;
amdgpu_ras_query_error_status(adev, &info);
if (adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 2) &&
adev->ip_versions[MP0_HWIP][0] != IP_VERSION(11, 0, 4) &&
adev->ip_versions[MP0_HWIP][0] != IP_VERSION(13, 0, 0)) {
if (amdgpu_ip_version(adev, MP0_HWIP, 0) !=
IP_VERSION(11, 0, 2) &&
amdgpu_ip_version(adev, MP0_HWIP, 0) !=
IP_VERSION(11, 0, 4) &&
amdgpu_ip_version(adev, MP0_HWIP, 0) !=
IP_VERSION(13, 0, 0)) {
if (amdgpu_ras_reset_error_status(adev, info.head.block))
dev_warn(adev->dev, "Failed to reset error counter and error status");
}
@ -2399,7 +2519,7 @@ static int amdgpu_ras_recovery_fini(struct amdgpu_device *adev)
static bool amdgpu_ras_asic_supported(struct amdgpu_device *adev)
{
if (amdgpu_sriov_vf(adev)) {
switch (adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP0_HWIP, 0)) {
case IP_VERSION(13, 0, 2):
case IP_VERSION(13, 0, 6):
return true;
@ -2409,7 +2529,7 @@ static bool amdgpu_ras_asic_supported(struct amdgpu_device *adev)
}
if (adev->asic_type == CHIP_IP_DISCOVERY) {
switch (adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP0_HWIP, 0)) {
case IP_VERSION(13, 0, 0):
case IP_VERSION(13, 0, 6):
case IP_VERSION(13, 0, 10):
@ -2483,8 +2603,10 @@ static void amdgpu_ras_check_supported(struct amdgpu_device *adev)
/* VCN/JPEG RAS can be supported on both bare metal and
* SRIOV environment
*/
if (adev->ip_versions[VCN_HWIP][0] == IP_VERSION(2, 6, 0) ||
adev->ip_versions[VCN_HWIP][0] == IP_VERSION(4, 0, 0))
if (amdgpu_ip_version(adev, VCN_HWIP, 0) ==
IP_VERSION(2, 6, 0) ||
amdgpu_ip_version(adev, VCN_HWIP, 0) ==
IP_VERSION(4, 0, 0))
adev->ras_hw_enabled |= (1 << AMDGPU_RAS_BLOCK__VCN |
1 << AMDGPU_RAS_BLOCK__JPEG);
else
@ -2518,7 +2640,7 @@ static void amdgpu_ras_check_supported(struct amdgpu_device *adev)
* Disable ras feature for aqua vanjaram
* by default on apu platform.
*/
if (adev->ip_versions[MP0_HWIP][0] == IP_VERSION(13, 0, 6) &&
if (amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(13, 0, 6) &&
adev->gmc.is_app_apu)
adev->ras_enabled = amdgpu_ras_enable != 1 ? 0 :
adev->ras_hw_enabled & amdgpu_ras_mask;
@ -2584,6 +2706,14 @@ static void amdgpu_ras_query_poison_mode(struct amdgpu_device *adev)
}
}
static int amdgpu_get_ras_schema(struct amdgpu_device *adev)
{
return (amdgpu_ras_is_poison_mode_supported(adev) ? AMDGPU_RAS_ERROR__POISON : 0) |
AMDGPU_RAS_ERROR__SINGLE_CORRECTABLE |
AMDGPU_RAS_ERROR__MULTI_UNCORRECTABLE |
AMDGPU_RAS_ERROR__PARITY;
}
int amdgpu_ras_init(struct amdgpu_device *adev)
{
struct amdgpu_ras *con = amdgpu_ras_get_context(adev);
@ -2626,6 +2756,7 @@ int amdgpu_ras_init(struct amdgpu_device *adev)
con->update_channel_flag = false;
con->features = 0;
con->schema = 0;
INIT_LIST_HEAD(&con->head);
/* Might need get this flag from vbios. */
con->flags = RAS_DEFAULT_FLAGS;
@ -2633,7 +2764,7 @@ int amdgpu_ras_init(struct amdgpu_device *adev)
/* initialize nbio ras function ahead of any other
* ras functions so hardware fatal error interrupt
* can be enabled as early as possible */
switch (adev->ip_versions[NBIO_HWIP][0]) {
switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) {
case IP_VERSION(7, 4, 0):
case IP_VERSION(7, 4, 1):
case IP_VERSION(7, 4, 4):
@ -2681,6 +2812,9 @@ int amdgpu_ras_init(struct amdgpu_device *adev)
amdgpu_ras_query_poison_mode(adev);
/* Get RAS schema for particular SOC */
con->schema = amdgpu_get_ras_schema(adev);
if (amdgpu_ras_fs_init(adev)) {
r = -EINVAL;
goto release_con;
@ -3328,3 +3462,128 @@ void amdgpu_ras_inst_reset_ras_error_count(struct amdgpu_device *adev,
WREG32(err_status_hi_offset, 0);
}
}
int amdgpu_ras_error_data_init(struct ras_err_data *err_data)
{
memset(err_data, 0, sizeof(*err_data));
INIT_LIST_HEAD(&err_data->err_node_list);
return 0;
}
static void amdgpu_ras_error_node_release(struct ras_err_node *err_node)
{
if (!err_node)
return;
list_del(&err_node->node);
kvfree(err_node);
}
void amdgpu_ras_error_data_fini(struct ras_err_data *err_data)
{
struct ras_err_node *err_node, *tmp;
list_for_each_entry_safe(err_node, tmp, &err_data->err_node_list, node)
amdgpu_ras_error_node_release(err_node);
}
static struct ras_err_node *amdgpu_ras_error_find_node_by_id(struct ras_err_data *err_data,
struct amdgpu_smuio_mcm_config_info *mcm_info)
{
struct ras_err_node *err_node;
struct amdgpu_smuio_mcm_config_info *ref_id;
if (!err_data || !mcm_info)
return NULL;
for_each_ras_error(err_node, err_data) {
ref_id = &err_node->err_info.mcm_info;
if ((mcm_info->socket_id >= 0 && mcm_info->socket_id != ref_id->socket_id) ||
(mcm_info->die_id >= 0 && mcm_info->die_id != ref_id->die_id))
continue;
return err_node;
}
return NULL;
}
static struct ras_err_node *amdgpu_ras_error_node_new(void)
{
struct ras_err_node *err_node;
err_node = kvzalloc(sizeof(*err_node), GFP_KERNEL);
if (!err_node)
return NULL;
INIT_LIST_HEAD(&err_node->node);
return err_node;
}
static struct ras_err_info *amdgpu_ras_error_get_info(struct ras_err_data *err_data,
struct amdgpu_smuio_mcm_config_info *mcm_info)
{
struct ras_err_node *err_node;
err_node = amdgpu_ras_error_find_node_by_id(err_data, mcm_info);
if (err_node)
return &err_node->err_info;
err_node = amdgpu_ras_error_node_new();
if (!err_node)
return NULL;
memcpy(&err_node->err_info.mcm_info, mcm_info, sizeof(*mcm_info));
err_data->err_list_count++;
list_add_tail(&err_node->node, &err_data->err_node_list);
return &err_node->err_info;
}
int amdgpu_ras_error_statistic_ue_count(struct ras_err_data *err_data,
struct amdgpu_smuio_mcm_config_info *mcm_info, u64 count)
{
struct ras_err_info *err_info;
if (!err_data || !mcm_info)
return -EINVAL;
if (!count)
return 0;
err_info = amdgpu_ras_error_get_info(err_data, mcm_info);
if (!err_info)
return -EINVAL;
err_info->ue_count += count;
err_data->ue_count += count;
return 0;
}
int amdgpu_ras_error_statistic_ce_count(struct ras_err_data *err_data,
struct amdgpu_smuio_mcm_config_info *mcm_info, u64 count)
{
struct ras_err_info *err_info;
if (!err_data || !mcm_info)
return -EINVAL;
if (!count)
return 0;
err_info = amdgpu_ras_error_get_info(err_data, mcm_info);
if (!err_info)
return -EINVAL;
err_info->ce_count += count;
err_data->ce_count += count;
return 0;
}
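
amdgpu_ras_error_statistic_ce_count() and amdgpu_ras_error_statistic_ue_count() aggregate counts per (socket, die) pair into the err_node list that amdgpu_ras_error_print_error_data() later walks. A hedged sketch of a block-level query callback feeding them; it assumes the ASIC provides smuio get_socket_id(), and the example_read_*_count() helpers stand in for the block-specific register walks:

/* placeholders for the block's register walks */
static u64 example_read_ce_count(struct amdgpu_device *adev) { return 0; }
static u64 example_read_ue_count(struct amdgpu_device *adev) { return 0; }

static void example_query_ras_error_count(struct amdgpu_device *adev,
                                          void *ras_error_status)
{
        struct ras_err_data *err_data = ras_error_status;
        struct amdgpu_smuio_mcm_config_info mcm_info = {
                .socket_id = adev->smuio.funcs->get_socket_id(adev),
                .die_id = 0,
        };

        /* both helpers ignore zero counts and allocate one err_node per
         * unique (socket, die) pair */
        amdgpu_ras_error_statistic_ce_count(err_data, &mcm_info,
                                            example_read_ce_count(adev));
        amdgpu_ras_error_statistic_ue_count(err_data, &mcm_info,
                                            example_read_ue_count(adev));
}
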


@ -28,6 +28,7 @@
#include <linux/list.h>
#include "ta_ras_if.h"
#include "amdgpu_ras_eeprom.h"
#include "amdgpu_smuio.h"
struct amdgpu_iv_entry;
@ -389,9 +390,12 @@ struct amdgpu_ras {
/* ras infrastructure */
/* for ras itself. */
uint32_t features;
uint32_t schema;
struct list_head head;
/* sysfs */
struct device_attribute features_attr;
struct device_attribute version_attr;
struct device_attribute schema_attr;
struct bin_attribute badpages_attr;
struct dentry *de_ras_eeprom_table;
/* block array */
@ -436,17 +440,33 @@ struct amdgpu_ras {
};
struct ras_fs_data {
char sysfs_name[32];
char sysfs_name[48];
char debugfs_name[32];
};
struct ras_err_info {
struct amdgpu_smuio_mcm_config_info mcm_info;
u64 ce_count;
u64 ue_count;
};
struct ras_err_node {
struct list_head node;
struct ras_err_info err_info;
};
struct ras_err_data {
unsigned long ue_count;
unsigned long ce_count;
unsigned long err_addr_cnt;
struct eeprom_table_record *err_addr;
u32 err_list_count;
struct list_head err_node_list;
};
#define for_each_ras_error(err_node, err_data) \
list_for_each_entry(err_node, &(err_data)->err_node_list, node)
struct ras_err_handler_data {
/* point to bad page records array */
struct eeprom_table_record *bps;
@ -493,7 +513,10 @@ struct ras_manager {
/* IH data */
struct ras_ih_data ih_data;
struct ras_err_data err_data;
struct {
unsigned long ue_count;
unsigned long ce_count;
} err_data;
};
struct ras_badpage {
@ -767,4 +790,12 @@ void amdgpu_ras_inst_reset_ras_error_count(struct amdgpu_device *adev,
const struct amdgpu_ras_err_status_reg_entry *reg_list,
uint32_t reg_list_size,
uint32_t instance);
int amdgpu_ras_error_data_init(struct ras_err_data *err_data);
void amdgpu_ras_error_data_fini(struct ras_err_data *err_data);
int amdgpu_ras_error_statistic_ce_count(struct ras_err_data *err_data,
struct amdgpu_smuio_mcm_config_info *mcm_info, u64 count);
int amdgpu_ras_error_statistic_ue_count(struct ras_err_data *err_data,
struct amdgpu_smuio_mcm_config_info *mcm_info, u64 count);
#endif


@ -149,11 +149,11 @@
RAS_TABLE_HEADER_SIZE - \
RAS_TABLE_V2_1_INFO_SIZE) / RAS_TABLE_RECORD_SIZE)
#define to_amdgpu_device(x) (container_of(x, struct amdgpu_ras, eeprom_control))->adev
#define to_amdgpu_device(x) ((container_of(x, struct amdgpu_ras, eeprom_control))->adev)
static bool __is_ras_eeprom_supported(struct amdgpu_device *adev)
{
switch (adev->ip_versions[MP1_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
case IP_VERSION(11, 0, 2): /* VEGA20 and ARCTURUS */
case IP_VERSION(11, 0, 7): /* Sienna cichlid */
case IP_VERSION(13, 0, 0):
@ -191,7 +191,7 @@ static bool __get_eeprom_i2c_addr(struct amdgpu_device *adev,
return true;
}
switch (adev->ip_versions[MP1_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
case IP_VERSION(11, 0, 2):
/* VEGA20 and ARCTURUS */
if (adev->asic_type == CHIP_VEGA20)
@ -616,7 +616,8 @@ amdgpu_ras_eeprom_append_table(struct amdgpu_ras_eeprom_control *control,
__encode_table_record_to_buf(control, &record[i], pp);
/* update bad channel bitmap */
if (!(control->bad_channel_bitmap & (1 << record[i].mem_channel))) {
if ((record[i].mem_channel < BITS_PER_TYPE(control->bad_channel_bitmap)) &&
!(control->bad_channel_bitmap & (1 << record[i].mem_channel))) {
control->bad_channel_bitmap |= 1 << record[i].mem_channel;
con->update_channel_flag = true;
}
@ -969,7 +970,8 @@ int amdgpu_ras_eeprom_read(struct amdgpu_ras_eeprom_control *control,
__decode_table_record_from_buf(control, &record[i], pp);
/* update bad channel bitmap */
if (!(control->bad_channel_bitmap & (1 << record[i].mem_channel))) {
if ((record[i].mem_channel < BITS_PER_TYPE(control->bad_channel_bitmap)) &&
!(control->bad_channel_bitmap & (1 << record[i].mem_channel))) {
control->bad_channel_bitmap |= 1 << record[i].mem_channel;
con->update_channel_flag = true;
}


@ -112,7 +112,6 @@ fallback:
cur->remaining = size;
cur->node = NULL;
WARN_ON(res && start + size > res->size);
return;
}
/**


@ -26,19 +26,11 @@
#include "sienna_cichlid.h"
#include "smu_v13_0_10.h"
int amdgpu_reset_add_handler(struct amdgpu_reset_control *reset_ctl,
struct amdgpu_reset_handler *handler)
{
/* TODO: Check if handler exists? */
list_add_tail(&handler->handler_list, &reset_ctl->reset_handlers);
return 0;
}
int amdgpu_reset_init(struct amdgpu_device *adev)
{
int ret = 0;
switch (adev->ip_versions[MP1_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
case IP_VERSION(13, 0, 2):
case IP_VERSION(13, 0, 6):
ret = aldebaran_reset_init(adev);
@ -60,7 +52,7 @@ int amdgpu_reset_fini(struct amdgpu_device *adev)
{
int ret = 0;
switch (adev->ip_versions[MP1_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
case IP_VERSION(13, 0, 2):
case IP_VERSION(13, 0, 6):
ret = aldebaran_reset_fini(adev);


@ -26,6 +26,8 @@
#include "amdgpu.h"
#define AMDGPU_RESET_MAX_HANDLERS 5
enum AMDGPU_RESET_FLAGS {
AMDGPU_NEED_FULL_RESET = 0,
@ -44,7 +46,6 @@ struct amdgpu_reset_context {
struct amdgpu_reset_handler {
enum amd_reset_method reset_method;
struct list_head handler_list;
int (*prepare_env)(struct amdgpu_reset_control *reset_ctl,
struct amdgpu_reset_context *context);
int (*prepare_hwcontext)(struct amdgpu_reset_control *reset_ctl,
@ -63,7 +64,8 @@ struct amdgpu_reset_control {
void *handle;
struct work_struct reset_work;
struct mutex reset_lock;
struct list_head reset_handlers;
struct amdgpu_reset_handler *(
*reset_handlers)[AMDGPU_RESET_MAX_HANDLERS];
atomic_t in_reset;
enum amd_reset_method active_reset;
struct amdgpu_reset_handler *(*get_reset_handler)(
@ -97,8 +99,10 @@ int amdgpu_reset_prepare_hwcontext(struct amdgpu_device *adev,
int amdgpu_reset_perform_reset(struct amdgpu_device *adev,
struct amdgpu_reset_context *reset_context);
int amdgpu_reset_add_handler(struct amdgpu_reset_control *reset_ctl,
struct amdgpu_reset_handler *handler);
int amdgpu_reset_prepare_env(struct amdgpu_device *adev,
struct amdgpu_reset_context *reset_context);
int amdgpu_reset_restore_env(struct amdgpu_device *adev,
struct amdgpu_reset_context *reset_context);
struct amdgpu_reset_domain *amdgpu_reset_create_reset_domain(enum amdgpu_reset_domain_type type,
char *wq_name);
@ -126,4 +130,8 @@ void amdgpu_device_lock_reset_domain(struct amdgpu_reset_domain *reset_domain);
void amdgpu_device_unlock_reset_domain(struct amdgpu_reset_domain *reset_domain);
#define for_each_handler(i, handler, reset_ctl) \
for (i = 0; (i < AMDGPU_RESET_MAX_HANDLERS) && \
(handler = (*reset_ctl->reset_handlers)[i]); \
++i)
#endif
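
The reset handler list becomes a fixed-size table of AMDGPU_RESET_MAX_HANDLERS pointers that each ASIC reset controller populates directly, with for_each_handler() as the iterator (it stops at the first NULL slot). A short sketch of a lookup over that table, assuming reset_ctl->reset_handlers has been set by the ASIC-specific init; the function name is illustrative:

static struct amdgpu_reset_handler *
example_get_reset_handler(struct amdgpu_reset_control *reset_ctl,
                          enum amd_reset_method method)
{
        struct amdgpu_reset_handler *handler;
        int i;

        for_each_handler(i, handler, reset_ctl) {
                if (handler->reset_method == method)
                        return handler;
        }

        return NULL;
}
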


@ -434,8 +434,12 @@ bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
struct dma_fence *fence)
{
unsigned long flags;
ktime_t deadline;
ktime_t deadline = ktime_add_us(ktime_get(), 10000);
if (unlikely(ring->adev->debug_disable_soft_recovery))
return false;
deadline = ktime_add_us(ktime_get(), 10000);
if (amdgpu_sriov_vf(ring->adev) || !ring->funcs->soft_recovery || !fence)
return false;


@ -44,6 +44,7 @@ struct amdgpu_vm;
#define AMDGPU_MAX_COMPUTE_RINGS 8
#define AMDGPU_MAX_VCE_RINGS 3
#define AMDGPU_MAX_UVD_ENC_RINGS 2
#define AMDGPU_MAX_VPE_RINGS 2
enum amdgpu_ring_priority_level {
AMDGPU_RING_PRIO_0,
@ -77,8 +78,10 @@ enum amdgpu_ring_type {
AMDGPU_RING_TYPE_VCN_DEC = AMDGPU_HW_IP_VCN_DEC,
AMDGPU_RING_TYPE_VCN_ENC = AMDGPU_HW_IP_VCN_ENC,
AMDGPU_RING_TYPE_VCN_JPEG = AMDGPU_HW_IP_VCN_JPEG,
AMDGPU_RING_TYPE_VPE = AMDGPU_HW_IP_VPE,
AMDGPU_RING_TYPE_KIQ,
AMDGPU_RING_TYPE_MES
AMDGPU_RING_TYPE_MES,
AMDGPU_RING_TYPE_UMSCH_MM,
};
enum amdgpu_ib_pool_type {


@ -208,7 +208,7 @@ int amdgpu_sdma_init_microcode(struct amdgpu_device *adev,
const struct sdma_firmware_header_v2_0 *sdma_hdr;
uint16_t version_major;
char ucode_prefix[30];
char fw_name[40];
char fw_name[52];
amdgpu_ucode_ip_version_decode(adev, SDMA0_HWIP, ucode_prefix, sizeof(ucode_prefix));
if (instance == 0)
@ -251,8 +251,11 @@ int amdgpu_sdma_init_microcode(struct amdgpu_device *adev,
else {
/* Use a single copy per SDMA firmware type. PSP uses the same instance for all
* groups of SDMAs */
if (adev->ip_versions[SDMA0_HWIP][0] == IP_VERSION(4, 4, 2) &&
adev->firmware.load_type == AMDGPU_FW_LOAD_PSP &&
if (amdgpu_ip_version(adev, SDMA0_HWIP,
0) ==
IP_VERSION(4, 4, 2) &&
adev->firmware.load_type ==
AMDGPU_FW_LOAD_PSP &&
adev->sdma.num_inst_per_aid == i) {
break;
}


@ -23,6 +23,18 @@
#ifndef __AMDGPU_SMUIO_H__
#define __AMDGPU_SMUIO_H__
enum amdgpu_pkg_type {
AMDGPU_PKG_TYPE_APU = 2,
AMDGPU_PKG_TYPE_CEM = 3,
AMDGPU_PKG_TYPE_OAM = 4,
AMDGPU_PKG_TYPE_UNKNOWN,
};
struct amdgpu_smuio_mcm_config_info {
int socket_id;
int die_id;
};
struct amdgpu_smuio_funcs {
u32 (*get_rom_index_offset)(struct amdgpu_device *adev);
u32 (*get_rom_data_offset)(struct amdgpu_device *adev);

View File

@ -1727,7 +1727,8 @@ static int amdgpu_ttm_reserve_tmr(struct amdgpu_device *adev)
reserve_size =
amdgpu_atomfirmware_get_fw_reserved_fb_size(adev);
if (!adev->bios && adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 3))
if (!adev->bios &&
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3))
reserve_size = max(reserve_size, (uint32_t)280 << 20);
else if (!reserve_size)
reserve_size = DISCOVERY_TMR_OFFSET;


@ -642,6 +642,8 @@ const char *amdgpu_ucode_name(enum AMDGPU_UCODE_ID ucode_id)
return "SMC";
case AMDGPU_UCODE_ID_PPTABLE:
return "PPTABLE";
case AMDGPU_UCODE_ID_P2S_TABLE:
return "P2STABLE";
case AMDGPU_UCODE_ID_UVD:
return "UVD";
case AMDGPU_UCODE_ID_UVD1:
@ -664,20 +666,42 @@ const char *amdgpu_ucode_name(enum AMDGPU_UCODE_ID ucode_id)
return "DMCUB";
case AMDGPU_UCODE_ID_CAP:
return "CAP";
case AMDGPU_UCODE_ID_VPE_CTX:
return "VPE_CTX";
case AMDGPU_UCODE_ID_VPE_CTL:
return "VPE_CTL";
case AMDGPU_UCODE_ID_VPE:
return "VPE";
case AMDGPU_UCODE_ID_UMSCH_MM_UCODE:
return "UMSCH_MM_UCODE";
case AMDGPU_UCODE_ID_UMSCH_MM_DATA:
return "UMSCH_MM_DATA";
case AMDGPU_UCODE_ID_UMSCH_MM_CMD_BUFFER:
return "UMSCH_MM_CMD_BUFFER";
default:
return "UNKNOWN UCODE";
}
}
static inline int amdgpu_ucode_is_valid(uint32_t fw_version)
{
if (!fw_version)
return -EINVAL;
return 0;
}
#define FW_VERSION_ATTR(name, mode, field) \
static ssize_t show_##name(struct device *dev, \
struct device_attribute *attr, \
char *buf) \
struct device_attribute *attr, char *buf) \
{ \
struct drm_device *ddev = dev_get_drvdata(dev); \
struct amdgpu_device *adev = drm_to_adev(ddev); \
\
return sysfs_emit(buf, "0x%08x\n", adev->field); \
if (!buf) \
return amdgpu_ucode_is_valid(adev->field); \
\
return sysfs_emit(buf, "0x%08x\n", adev->field); \
} \
static DEVICE_ATTR(name, mode, show_##name, NULL)
@ -722,9 +746,24 @@ static struct attribute *fw_attrs[] = {
NULL
};
#define to_dev_attr(x) container_of(x, struct device_attribute, attr)
static umode_t amdgpu_ucode_sys_visible(struct kobject *kobj,
struct attribute *attr, int idx)
{
struct device_attribute *dev_attr = to_dev_attr(attr);
struct device *dev = kobj_to_dev(kobj);
if (dev_attr->show(dev, dev_attr, NULL) == -EINVAL)
return 0;
return attr->mode;
}
static const struct attribute_group fw_attr_group = {
.name = "fw_version",
.attrs = fw_attrs
.attrs = fw_attrs,
.is_visible = amdgpu_ucode_sys_visible
};
int amdgpu_ucode_sysfs_init(struct amdgpu_device *adev)
@ -749,6 +788,8 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
const struct mes_firmware_header_v1_0 *mes_hdr = NULL;
const struct sdma_firmware_header_v2_0 *sdma_hdr = NULL;
const struct imu_firmware_header_v1_0 *imu_hdr = NULL;
const struct vpe_firmware_header_v1_0 *vpe_hdr = NULL;
const struct umsch_mm_firmware_header_v1_0 *umsch_mm_hdr = NULL;
u8 *ucode_addr;
if (!ucode->fw)
@ -768,6 +809,8 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
mes_hdr = (const struct mes_firmware_header_v1_0 *)ucode->fw->data;
sdma_hdr = (const struct sdma_firmware_header_v2_0 *)ucode->fw->data;
imu_hdr = (const struct imu_firmware_header_v1_0 *)ucode->fw->data;
vpe_hdr = (const struct vpe_firmware_header_v1_0 *)ucode->fw->data;
umsch_mm_hdr = (const struct umsch_mm_firmware_header_v1_0 *)ucode->fw->data;
if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
switch (ucode->ucode_id) {
@ -884,6 +927,10 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
ucode->ucode_size = ucode->fw->size;
ucode_addr = (u8 *)ucode->fw->data;
break;
case AMDGPU_UCODE_ID_P2S_TABLE:
ucode->ucode_size = ucode->fw->size;
ucode_addr = (u8 *)ucode->fw->data;
break;
case AMDGPU_UCODE_ID_IMU_I:
ucode->ucode_size = le32_to_cpu(imu_hdr->imu_iram_ucode_size_bytes);
ucode_addr = (u8 *)ucode->fw->data +
@ -950,6 +997,26 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
ucode_addr = (u8 *)ucode->fw->data +
le32_to_cpu(cpv2_hdr->data_offset_bytes);
break;
case AMDGPU_UCODE_ID_VPE_CTX:
ucode->ucode_size = le32_to_cpu(vpe_hdr->ctx_ucode_size_bytes);
ucode_addr = (u8 *)ucode->fw->data +
le32_to_cpu(vpe_hdr->header.ucode_array_offset_bytes);
break;
case AMDGPU_UCODE_ID_VPE_CTL:
ucode->ucode_size = le32_to_cpu(vpe_hdr->ctl_ucode_size_bytes);
ucode_addr = (u8 *)ucode->fw->data +
le32_to_cpu(vpe_hdr->ctl_ucode_offset);
break;
case AMDGPU_UCODE_ID_UMSCH_MM_UCODE:
ucode->ucode_size = le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_size_bytes);
ucode_addr = (u8 *)ucode->fw->data +
le32_to_cpu(umsch_mm_hdr->header.ucode_array_offset_bytes);
break;
case AMDGPU_UCODE_ID_UMSCH_MM_DATA:
ucode->ucode_size = le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_data_size_bytes);
ucode_addr = (u8 *)ucode->fw->data +
le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_data_offset_bytes);
break;
default:
ucode->ucode_size = le32_to_cpu(header->ucode_size_bytes);
ucode_addr = (u8 *)ucode->fw->data +
@ -1061,7 +1128,7 @@ int amdgpu_ucode_init_bo(struct amdgpu_device *adev)
static const char *amdgpu_ucode_legacy_naming(struct amdgpu_device *adev, int block_type)
{
if (block_type == MP0_HWIP) {
switch (adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP0_HWIP, 0)) {
case IP_VERSION(9, 0, 0):
switch (adev->asic_type) {
case CHIP_VEGA10:
@ -1112,7 +1179,7 @@ static const char *amdgpu_ucode_legacy_naming(struct amdgpu_device *adev, int bl
return "yellow_carp";
}
} else if (block_type == MP1_HWIP) {
switch (adev->ip_versions[MP1_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
case IP_VERSION(9, 0, 0):
case IP_VERSION(10, 0, 0):
case IP_VERSION(10, 0, 1):
@ -1138,7 +1205,7 @@ static const char *amdgpu_ucode_legacy_naming(struct amdgpu_device *adev, int bl
return "aldebaran_smc";
}
} else if (block_type == SDMA0_HWIP) {
switch (adev->ip_versions[SDMA0_HWIP][0]) {
switch (amdgpu_ip_version(adev, SDMA0_HWIP, 0)) {
case IP_VERSION(4, 0, 0):
return "vega10_sdma";
case IP_VERSION(4, 0, 1):
@ -1182,7 +1249,7 @@ static const char *amdgpu_ucode_legacy_naming(struct amdgpu_device *adev, int bl
return "vangogh_sdma";
}
} else if (block_type == UVD_HWIP) {
switch (adev->ip_versions[UVD_HWIP][0]) {
switch (amdgpu_ip_version(adev, UVD_HWIP, 0)) {
case IP_VERSION(1, 0, 0):
case IP_VERSION(1, 0, 1):
if (adev->apu_flags & AMD_APU_IS_RAVEN2)
@ -1207,7 +1274,8 @@ static const char *amdgpu_ucode_legacy_naming(struct amdgpu_device *adev, int bl
case IP_VERSION(3, 0, 0):
case IP_VERSION(3, 0, 64):
case IP_VERSION(3, 0, 192):
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 3, 0))
return "sienna_cichlid_vcn";
return "navy_flounder_vcn";
case IP_VERSION(3, 0, 2):
@ -1220,7 +1288,7 @@ static const char *amdgpu_ucode_legacy_naming(struct amdgpu_device *adev, int bl
return "yellow_carp_vcn";
}
} else if (block_type == GC_HWIP) {
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
return "vega10";
case IP_VERSION(9, 2, 1):
@ -1273,7 +1341,7 @@ void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type,
int maj, min, rev;
char *ip_name;
const char *legacy;
uint32_t version = adev->ip_versions[block_type][0];
uint32_t version = amdgpu_ip_version(adev, block_type, 0);
legacy = amdgpu_ucode_legacy_naming(adev, block_type);
if (legacy) {
@ -1297,6 +1365,9 @@ void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type,
case UVD_HWIP:
ip_name = "vcn";
break;
case VPE_HWIP:
ip_name = "vpe";
break;
default:
BUG();
}


@ -315,6 +315,36 @@ struct sdma_firmware_header_v2_0 {
uint32_t ctl_jt_size; /* control thread size of jt */
};
/* version_major=1, version_minor=0 */
struct vpe_firmware_header_v1_0 {
struct common_firmware_header header;
uint32_t ucode_feature_version;
uint32_t ctx_ucode_size_bytes; /* context thread ucode size */
uint32_t ctx_jt_offset; /* context thread jt location */
uint32_t ctx_jt_size; /* context thread size of jt */
uint32_t ctl_ucode_offset;
uint32_t ctl_ucode_size_bytes; /* control thread ucode size */
uint32_t ctl_jt_offset; /* control thread jt location */
uint32_t ctl_jt_size; /* control thread size of jt */
};
/* version_major=1, version_minor=0 */
struct umsch_mm_firmware_header_v1_0 {
struct common_firmware_header header;
uint32_t umsch_mm_ucode_version;
uint32_t umsch_mm_ucode_size_bytes;
uint32_t umsch_mm_ucode_offset_bytes;
uint32_t umsch_mm_ucode_data_version;
uint32_t umsch_mm_ucode_data_size_bytes;
uint32_t umsch_mm_ucode_data_offset_bytes;
uint32_t umsch_mm_irq_start_addr_lo;
uint32_t umsch_mm_irq_start_addr_hi;
uint32_t umsch_mm_uc_start_addr_lo;
uint32_t umsch_mm_uc_start_addr_hi;
uint32_t umsch_mm_data_start_addr_lo;
uint32_t umsch_mm_data_start_addr_hi;
};
/* gpu info payload */
struct gpu_info_firmware_v1_0 {
uint32_t gc_num_se;
@ -474,6 +504,13 @@ enum AMDGPU_UCODE_ID {
AMDGPU_UCODE_ID_VCN0_RAM,
AMDGPU_UCODE_ID_VCN1_RAM,
AMDGPU_UCODE_ID_DMCUB,
AMDGPU_UCODE_ID_VPE_CTX,
AMDGPU_UCODE_ID_VPE_CTL,
AMDGPU_UCODE_ID_VPE,
AMDGPU_UCODE_ID_UMSCH_MM_UCODE,
AMDGPU_UCODE_ID_UMSCH_MM_DATA,
AMDGPU_UCODE_ID_UMSCH_MM_CMD_BUFFER,
AMDGPU_UCODE_ID_P2S_TABLE,
AMDGPU_UCODE_ID_MAXIMUM,
};


@ -28,7 +28,7 @@ static int amdgpu_umc_convert_error_address(struct amdgpu_device *adev,
struct ras_err_data *err_data, uint64_t err_addr,
uint32_t ch_inst, uint32_t umc_inst)
{
switch (adev->ip_versions[UMC_HWIP][0]) {
switch (amdgpu_ip_version(adev, UMC_HWIP, 0)) {
case IP_VERSION(6, 7, 0):
umc_v6_7_convert_error_address(adev,
err_data, err_addr, ch_inst, umc_inst);
@ -45,8 +45,12 @@ static int amdgpu_umc_convert_error_address(struct amdgpu_device *adev,
int amdgpu_umc_page_retirement_mca(struct amdgpu_device *adev,
uint64_t err_addr, uint32_t ch_inst, uint32_t umc_inst)
{
struct ras_err_data err_data = {0, 0, 0, NULL};
int ret = AMDGPU_RAS_FAIL;
struct ras_err_data err_data;
int ret;
ret = amdgpu_ras_error_data_init(&err_data);
if (ret)
return ret;
err_data.err_addr =
kcalloc(adev->umc.max_ras_err_cnt_per_query,
@ -54,7 +58,8 @@ int amdgpu_umc_page_retirement_mca(struct amdgpu_device *adev,
if (!err_data.err_addr) {
dev_warn(adev->dev,
"Failed to alloc memory for umc error record in MCA notifier!\n");
return AMDGPU_RAS_FAIL;
ret = AMDGPU_RAS_FAIL;
goto out_fini_err_data;
}
/*
@ -63,7 +68,7 @@ int amdgpu_umc_page_retirement_mca(struct amdgpu_device *adev,
ret = amdgpu_umc_convert_error_address(adev, &err_data, err_addr,
ch_inst, umc_inst);
if (ret)
goto out;
goto out_free_err_addr;
if (amdgpu_bad_page_threshold != 0) {
amdgpu_ras_add_bad_pages(adev, err_data.err_addr,
@ -71,8 +76,12 @@ int amdgpu_umc_page_retirement_mca(struct amdgpu_device *adev,
amdgpu_ras_save_bad_pages(adev, NULL);
}
out:
out_free_err_addr:
kfree(err_data.err_addr);
out_fini_err_data:
amdgpu_ras_error_data_fini(&err_data);
return ret;
}
@ -182,18 +191,24 @@ int amdgpu_umc_poison_handler(struct amdgpu_device *adev, bool reset)
}
if (!amdgpu_sriov_vf(adev)) {
struct ras_err_data err_data = {0, 0, 0, NULL};
struct ras_err_data err_data;
struct ras_common_if head = {
.block = AMDGPU_RAS_BLOCK__UMC,
};
struct ras_manager *obj = amdgpu_ras_find_obj(adev, &head);
ret = amdgpu_ras_error_data_init(&err_data);
if (ret)
return ret;
ret = amdgpu_umc_do_page_retirement(adev, &err_data, NULL, reset);
if (ret == AMDGPU_RAS_SUCCESS && obj) {
obj->err_data.ue_count += err_data.ue_count;
obj->err_data.ce_count += err_data.ce_count;
}
amdgpu_ras_error_data_fini(&err_data);
} else {
if (adev->virt.ops && adev->virt.ops->ras_poison_handler)
adev->virt.ops->ras_poison_handler(adev);


@ -32,6 +32,11 @@
* is the index of 8KB block
*/
#define ADDR_OF_8KB_BLOCK(addr) (((addr) & ~0xffULL) << 5)
/*
* (addr / 256) * 32768, the higher 26 bits in ErrorAddr
* is the index of 32KB block
*/
#define ADDR_OF_32KB_BLOCK(addr) (((addr) & ~0xffULL) << 7)
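The new ADDR_OF_32KB_BLOCK() follows the same pattern as ADDR_OF_8KB_BLOCK(): clearing the low 8 bits and shifting left by 7 is (addr / 256) * 32768, i.e. it turns the block index carried in the upper bits of ErrorAddr into a 32KB-aligned byte offset. A small self-contained check of that arithmetic (the sample value is arbitrary):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define ADDR_OF_8KB_BLOCK(addr)		(((addr) & ~0xffULL) << 5)
#define ADDR_OF_32KB_BLOCK(addr)	(((addr) & ~0xffULL) << 7)

int main(void)
{
	uint64_t err_addr = 0x1234ULL;	/* arbitrary ErrorAddr sample */

	/* 0x1234 / 256 = 0x12, so expect 0x12 * 8K = 0x24000 and 0x12 * 32K = 0x90000 */
	printf("8KB  block offset: 0x%" PRIx64 "\n", ADDR_OF_8KB_BLOCK(err_addr));
	printf("32KB block offset: 0x%" PRIx64 "\n", ADDR_OF_32KB_BLOCK(err_addr));
	return 0;
}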
/* channel index is the index of 256B block */
#define ADDR_OF_256B_BLOCK(channel_index) ((channel_index) << 8)
/* offset in 256B block */


@ -0,0 +1,862 @@
// SPDX-License-Identifier: MIT
/*
* Copyright 2023 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/firmware.h>
#include <drm/drm_exec.h>
#include "amdgpu.h"
#include "amdgpu_umsch_mm.h"
#include "umsch_mm_v4_0.h"
struct umsch_mm_test_ctx_data {
uint8_t process_csa[PAGE_SIZE];
uint8_t vpe_ctx_csa[PAGE_SIZE];
uint8_t vcn_ctx_csa[PAGE_SIZE];
};
struct umsch_mm_test_mqd_data {
uint8_t vpe_mqd[PAGE_SIZE];
uint8_t vcn_mqd[PAGE_SIZE];
};
struct umsch_mm_test_ring_data {
uint8_t vpe_ring[PAGE_SIZE];
uint8_t vpe_ib[PAGE_SIZE];
uint8_t vcn_ring[PAGE_SIZE];
uint8_t vcn_ib[PAGE_SIZE];
};
struct umsch_mm_test_queue_info {
uint64_t mqd_addr;
uint64_t csa_addr;
uint32_t doorbell_offset_0;
uint32_t doorbell_offset_1;
enum UMSCH_SWIP_ENGINE_TYPE engine;
};
struct umsch_mm_test {
struct amdgpu_bo *ctx_data_obj;
uint64_t ctx_data_gpu_addr;
uint32_t *ctx_data_cpu_addr;
struct amdgpu_bo *mqd_data_obj;
uint64_t mqd_data_gpu_addr;
uint32_t *mqd_data_cpu_addr;
struct amdgpu_bo *ring_data_obj;
uint64_t ring_data_gpu_addr;
uint32_t *ring_data_cpu_addr;
struct amdgpu_vm *vm;
struct amdgpu_bo_va *bo_va;
uint32_t pasid;
uint32_t vm_cntx_cntl;
uint32_t num_queues;
};
static int map_ring_data(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_bo *bo, struct amdgpu_bo_va **bo_va,
uint64_t addr, uint32_t size)
{
struct amdgpu_sync sync;
struct drm_exec exec;
int r;
amdgpu_sync_create(&sync);
drm_exec_init(&exec, 0);
drm_exec_until_all_locked(&exec) {
r = drm_exec_lock_obj(&exec, &bo->tbo.base);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto error_fini_exec;
r = amdgpu_vm_lock_pd(vm, &exec, 0);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto error_fini_exec;
}
*bo_va = amdgpu_vm_bo_add(adev, vm, bo);
if (!*bo_va) {
r = -ENOMEM;
goto error_fini_exec;
}
r = amdgpu_vm_bo_map(adev, *bo_va, addr, 0, size,
AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE |
AMDGPU_PTE_EXECUTABLE);
if (r)
goto error_del_bo_va;
r = amdgpu_vm_bo_update(adev, *bo_va, false);
if (r)
goto error_del_bo_va;
amdgpu_sync_fence(&sync, (*bo_va)->last_pt_update);
r = amdgpu_vm_update_pdes(adev, vm, false);
if (r)
goto error_del_bo_va;
amdgpu_sync_fence(&sync, vm->last_update);
amdgpu_sync_wait(&sync, false);
drm_exec_fini(&exec);
amdgpu_sync_free(&sync);
return 0;
error_del_bo_va:
amdgpu_vm_bo_del(adev, *bo_va);
amdgpu_sync_free(&sync);
error_fini_exec:
drm_exec_fini(&exec);
amdgpu_sync_free(&sync);
return r;
}
static int unmap_ring_data(struct amdgpu_device *adev, struct amdgpu_vm *vm,
struct amdgpu_bo *bo, struct amdgpu_bo_va *bo_va,
uint64_t addr)
{
struct drm_exec exec;
long r;
drm_exec_init(&exec, 0);
drm_exec_until_all_locked(&exec) {
r = drm_exec_lock_obj(&exec, &bo->tbo.base);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto out_unlock;
r = amdgpu_vm_lock_pd(vm, &exec, 0);
drm_exec_retry_on_contention(&exec);
if (unlikely(r))
goto out_unlock;
}
r = amdgpu_vm_bo_unmap(adev, bo_va, addr);
if (r)
goto out_unlock;
amdgpu_vm_bo_del(adev, bo_va);
out_unlock:
drm_exec_fini(&exec);
return r;
}
static void setup_vpe_queue(struct amdgpu_device *adev,
struct umsch_mm_test *test,
struct umsch_mm_test_queue_info *qinfo)
{
struct MQD_INFO *mqd = (struct MQD_INFO *)test->mqd_data_cpu_addr;
uint64_t ring_gpu_addr = test->ring_data_gpu_addr;
mqd->rb_base_lo = (ring_gpu_addr >> 8);
mqd->rb_base_hi = (ring_gpu_addr >> 40);
mqd->rb_size = PAGE_SIZE / 4;
mqd->wptr_val = 0;
mqd->rptr_val = 0;
mqd->unmapped = 1;
qinfo->mqd_addr = test->mqd_data_gpu_addr;
qinfo->csa_addr = test->ctx_data_gpu_addr +
offsetof(struct umsch_mm_test_ctx_data, vpe_ctx_csa);
qinfo->doorbell_offset_0 = (adev->doorbell_index.vpe_ring + 1) << 1;
qinfo->doorbell_offset_1 = 0;
}
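setup_vpe_queue() programs the MQD ring base as two 32-bit halves: rb_base_lo drops the low 8 bits (the base is 256-byte aligned) and rb_base_hi carries the bits above 39. A hedged sketch of that split and of the reassembly the consumer side would presumably perform; the address below is made up and the bit layout is an inference from the two shifts, not stated in this diff:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t ring_gpu_addr = 0x123456789A00ULL;	/* made-up, 256-byte aligned */
	uint32_t rb_base_lo = (uint32_t)(ring_gpu_addr >> 8);
	uint32_t rb_base_hi = (uint32_t)(ring_gpu_addr >> 40);

	/* Reassembly, assuming lo holds bits 8..39 and hi holds bits 40 and up */
	uint64_t reassembled = ((uint64_t)rb_base_hi << 40) | ((uint64_t)rb_base_lo << 8);

	printf("lo=0x%08x hi=0x%08x reassembled=0x%" PRIx64 "\n",
	       rb_base_lo, rb_base_hi, reassembled);
	return reassembled == ring_gpu_addr ? 0 : 1;
}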
static void setup_vcn_queue(struct amdgpu_device *adev,
struct umsch_mm_test *test,
struct umsch_mm_test_queue_info *qinfo)
{
}
static int add_test_queue(struct amdgpu_device *adev,
struct umsch_mm_test *test,
struct umsch_mm_test_queue_info *qinfo)
{
struct umsch_mm_add_queue_input queue_input = {};
int r;
queue_input.process_id = test->pasid;
queue_input.page_table_base_addr = amdgpu_gmc_pd_addr(test->vm->root.bo);
queue_input.process_va_start = 0;
queue_input.process_va_end = (adev->vm_manager.max_pfn - 1) << AMDGPU_GPU_PAGE_SHIFT;
queue_input.process_quantum = 100000; /* 10ms */
queue_input.process_csa_addr = test->ctx_data_gpu_addr +
offsetof(struct umsch_mm_test_ctx_data, process_csa);
queue_input.context_quantum = 10000; /* 1ms */
queue_input.context_csa_addr = qinfo->csa_addr;
queue_input.inprocess_context_priority = CONTEXT_PRIORITY_LEVEL_NORMAL;
queue_input.context_global_priority_level = CONTEXT_PRIORITY_LEVEL_NORMAL;
queue_input.doorbell_offset_0 = qinfo->doorbell_offset_0;
queue_input.doorbell_offset_1 = qinfo->doorbell_offset_1;
queue_input.engine_type = qinfo->engine;
queue_input.mqd_addr = qinfo->mqd_addr;
queue_input.vm_context_cntl = test->vm_cntx_cntl;
amdgpu_umsch_mm_lock(&adev->umsch_mm);
r = adev->umsch_mm.funcs->add_queue(&adev->umsch_mm, &queue_input);
amdgpu_umsch_mm_unlock(&adev->umsch_mm);
if (r)
return r;
return 0;
}
static int remove_test_queue(struct amdgpu_device *adev,
struct umsch_mm_test *test,
struct umsch_mm_test_queue_info *qinfo)
{
struct umsch_mm_remove_queue_input queue_input = {};
int r;
queue_input.doorbell_offset_0 = qinfo->doorbell_offset_0;
queue_input.doorbell_offset_1 = qinfo->doorbell_offset_1;
queue_input.context_csa_addr = qinfo->csa_addr;
amdgpu_umsch_mm_lock(&adev->umsch_mm);
r = adev->umsch_mm.funcs->remove_queue(&adev->umsch_mm, &queue_input);
amdgpu_umsch_mm_unlock(&adev->umsch_mm);
if (r)
return r;
return 0;
}
static int submit_vpe_queue(struct amdgpu_device *adev, struct umsch_mm_test *test)
{
struct MQD_INFO *mqd = (struct MQD_INFO *)test->mqd_data_cpu_addr;
uint32_t *ring = test->ring_data_cpu_addr +
offsetof(struct umsch_mm_test_ring_data, vpe_ring) / 4;
uint32_t *ib = test->ring_data_cpu_addr +
offsetof(struct umsch_mm_test_ring_data, vpe_ib) / 4;
uint64_t ib_gpu_addr = test->ring_data_gpu_addr +
offsetof(struct umsch_mm_test_ring_data, vpe_ib);
uint32_t *fence = ib + 2048 / 4;
uint64_t fence_gpu_addr = ib_gpu_addr + 2048;
const uint32_t test_pattern = 0xdeadbeef;
int i;
ib[0] = VPE_CMD_HEADER(VPE_CMD_OPCODE_FENCE, 0);
ib[1] = lower_32_bits(fence_gpu_addr);
ib[2] = upper_32_bits(fence_gpu_addr);
ib[3] = test_pattern;
ring[0] = VPE_CMD_HEADER(VPE_CMD_OPCODE_INDIRECT, 0);
ring[1] = (ib_gpu_addr & 0xffffffe0);
ring[2] = upper_32_bits(ib_gpu_addr);
ring[3] = 4;
ring[4] = 0;
ring[5] = 0;
mqd->wptr_val = (6 << 2);
// WDOORBELL32(adev->umsch_mm.agdb_index[CONTEXT_PRIORITY_LEVEL_NORMAL], mqd->wptr_val);
for (i = 0; i < adev->usec_timeout; i++) {
if (*fence == test_pattern)
return 0;
udelay(1);
}
dev_err(adev->dev, "vpe queue submission timeout\n");
return -ETIMEDOUT;
}
static int submit_vcn_queue(struct amdgpu_device *adev, struct umsch_mm_test *test)
{
return 0;
}
static int setup_umsch_mm_test(struct amdgpu_device *adev,
struct umsch_mm_test *test)
{
struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_MMHUB0(0)];
int r;
test->vm_cntx_cntl = hub->vm_cntx_cntl;
test->vm = kzalloc(sizeof(*test->vm), GFP_KERNEL);
if (!test->vm) {
r = -ENOMEM;
return r;
}
r = amdgpu_vm_init(adev, test->vm, -1);
if (r)
goto error_free_vm;
r = amdgpu_pasid_alloc(16);
if (r < 0)
goto error_fini_vm;
test->pasid = r;
r = amdgpu_bo_create_kernel(adev, sizeof(struct umsch_mm_test_ctx_data),
PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
&test->ctx_data_obj,
&test->ctx_data_gpu_addr,
(void **)&test->ctx_data_cpu_addr);
if (r)
goto error_free_pasid;
memset(test->ctx_data_cpu_addr, 0, sizeof(struct umsch_mm_test_ctx_data));
r = amdgpu_bo_create_kernel(adev, PAGE_SIZE,
PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
&test->mqd_data_obj,
&test->mqd_data_gpu_addr,
(void **)&test->mqd_data_cpu_addr);
if (r)
goto error_free_ctx_data_obj;
memset(test->mqd_data_cpu_addr, 0, PAGE_SIZE);
r = amdgpu_bo_create_kernel(adev, sizeof(struct umsch_mm_test_ring_data),
PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
&test->ring_data_obj,
NULL,
(void **)&test->ring_data_cpu_addr);
if (r)
goto error_free_mqd_data_obj;
memset(test->ring_data_cpu_addr, 0, sizeof(struct umsch_mm_test_ring_data));
test->ring_data_gpu_addr = AMDGPU_VA_RESERVED_SIZE;
r = map_ring_data(adev, test->vm, test->ring_data_obj, &test->bo_va,
test->ring_data_gpu_addr, sizeof(struct umsch_mm_test_ring_data));
if (r)
goto error_free_ring_data_obj;
return 0;
error_free_ring_data_obj:
amdgpu_bo_free_kernel(&test->ring_data_obj, NULL,
(void **)&test->ring_data_cpu_addr);
error_free_mqd_data_obj:
amdgpu_bo_free_kernel(&test->mqd_data_obj, &test->mqd_data_gpu_addr,
(void **)&test->mqd_data_cpu_addr);
error_free_ctx_data_obj:
amdgpu_bo_free_kernel(&test->ctx_data_obj, &test->ctx_data_gpu_addr,
(void **)&test->ctx_data_cpu_addr);
error_free_pasid:
amdgpu_pasid_free(test->pasid);
error_fini_vm:
amdgpu_vm_fini(adev, test->vm);
error_free_vm:
kfree(test->vm);
return r;
}
static void cleanup_umsch_mm_test(struct amdgpu_device *adev,
struct umsch_mm_test *test)
{
unmap_ring_data(adev, test->vm, test->ring_data_obj,
test->bo_va, test->ring_data_gpu_addr);
amdgpu_bo_free_kernel(&test->mqd_data_obj, &test->mqd_data_gpu_addr,
(void **)&test->mqd_data_cpu_addr);
amdgpu_bo_free_kernel(&test->ring_data_obj, NULL,
(void **)&test->ring_data_cpu_addr);
amdgpu_bo_free_kernel(&test->ctx_data_obj, &test->ctx_data_gpu_addr,
(void **)&test->ctx_data_cpu_addr);
amdgpu_pasid_free(test->pasid);
amdgpu_vm_fini(adev, test->vm);
kfree(test->vm);
}
static int setup_test_queues(struct amdgpu_device *adev,
struct umsch_mm_test *test,
struct umsch_mm_test_queue_info *qinfo)
{
int i, r;
for (i = 0; i < test->num_queues; i++) {
if (qinfo[i].engine == UMSCH_SWIP_ENGINE_TYPE_VPE)
setup_vpe_queue(adev, test, &qinfo[i]);
else
setup_vcn_queue(adev, test, &qinfo[i]);
r = add_test_queue(adev, test, &qinfo[i]);
if (r)
return r;
}
return 0;
}
static int submit_test_queues(struct amdgpu_device *adev,
struct umsch_mm_test *test,
struct umsch_mm_test_queue_info *qinfo)
{
int i, r;
for (i = 0; i < test->num_queues; i++) {
if (qinfo[i].engine == UMSCH_SWIP_ENGINE_TYPE_VPE)
r = submit_vpe_queue(adev, test);
else
r = submit_vcn_queue(adev, test);
if (r)
return r;
}
return 0;
}
static void cleanup_test_queues(struct amdgpu_device *adev,
struct umsch_mm_test *test,
struct umsch_mm_test_queue_info *qinfo)
{
int i;
for (i = 0; i < test->num_queues; i++)
remove_test_queue(adev, test, &qinfo[i]);
}
static int umsch_mm_test(struct amdgpu_device *adev)
{
struct umsch_mm_test_queue_info qinfo[] = {
{ .engine = UMSCH_SWIP_ENGINE_TYPE_VPE },
};
struct umsch_mm_test test = { .num_queues = ARRAY_SIZE(qinfo) };
int r;
r = setup_umsch_mm_test(adev, &test);
if (r)
return r;
r = setup_test_queues(adev, &test, qinfo);
if (r)
goto cleanup;
r = submit_test_queues(adev, &test, qinfo);
if (r)
goto cleanup;
cleanup_test_queues(adev, &test, qinfo);
cleanup_umsch_mm_test(adev, &test);
return 0;
cleanup:
cleanup_test_queues(adev, &test, qinfo);
cleanup_umsch_mm_test(adev, &test);
return r;
}
int amdgpu_umsch_mm_submit_pkt(struct amdgpu_umsch_mm *umsch, void *pkt, int ndws)
{
struct amdgpu_ring *ring = &umsch->ring;
if (amdgpu_ring_alloc(ring, ndws))
return -ENOMEM;
amdgpu_ring_write_multiple(ring, pkt, ndws);
amdgpu_ring_commit(ring);
return 0;
}
int amdgpu_umsch_mm_query_fence(struct amdgpu_umsch_mm *umsch)
{
struct amdgpu_ring *ring = &umsch->ring;
struct amdgpu_device *adev = ring->adev;
int r;
r = amdgpu_fence_wait_polling(ring, ring->fence_drv.sync_seq, adev->usec_timeout);
if (r < 1) {
dev_err(adev->dev, "ring umsch timeout, emitted fence %u\n",
ring->fence_drv.sync_seq);
return -ETIMEDOUT;
}
return 0;
}
static void umsch_mm_ring_set_wptr(struct amdgpu_ring *ring)
{
struct amdgpu_umsch_mm *umsch = (struct amdgpu_umsch_mm *)ring;
struct amdgpu_device *adev = ring->adev;
if (ring->use_doorbell)
WDOORBELL32(ring->doorbell_index, ring->wptr << 2);
else
WREG32(umsch->rb_wptr, ring->wptr << 2);
}
static u64 umsch_mm_ring_get_rptr(struct amdgpu_ring *ring)
{
struct amdgpu_umsch_mm *umsch = (struct amdgpu_umsch_mm *)ring;
struct amdgpu_device *adev = ring->adev;
return RREG32(umsch->rb_rptr);
}
static u64 umsch_mm_ring_get_wptr(struct amdgpu_ring *ring)
{
struct amdgpu_umsch_mm *umsch = (struct amdgpu_umsch_mm *)ring;
struct amdgpu_device *adev = ring->adev;
return RREG32(umsch->rb_wptr);
}
static const struct amdgpu_ring_funcs umsch_v4_0_ring_funcs = {
.type = AMDGPU_RING_TYPE_UMSCH_MM,
.align_mask = 0,
.nop = 0,
.support_64bit_ptrs = false,
.get_rptr = umsch_mm_ring_get_rptr,
.get_wptr = umsch_mm_ring_get_wptr,
.set_wptr = umsch_mm_ring_set_wptr,
.insert_nop = amdgpu_ring_insert_nop,
};
int amdgpu_umsch_mm_ring_init(struct amdgpu_umsch_mm *umsch)
{
struct amdgpu_device *adev = container_of(umsch, struct amdgpu_device, umsch_mm);
struct amdgpu_ring *ring = &umsch->ring;
ring->vm_hub = AMDGPU_MMHUB0(0);
ring->use_doorbell = true;
ring->no_scheduler = true;
ring->doorbell_index = (AMDGPU_NAVI10_DOORBELL64_VCN0_1 << 1) + 6;
snprintf(ring->name, sizeof(ring->name), "umsch");
return amdgpu_ring_init(adev, ring, 1024, NULL, 0, AMDGPU_RING_PRIO_DEFAULT, NULL);
}
int amdgpu_umsch_mm_init_microcode(struct amdgpu_umsch_mm *umsch)
{
const struct umsch_mm_firmware_header_v1_0 *umsch_mm_hdr;
struct amdgpu_device *adev = umsch->ring.adev;
const char *fw_name = NULL;
int r;
switch (amdgpu_ip_version(adev, VCN_HWIP, 0)) {
case IP_VERSION(4, 0, 5):
fw_name = "amdgpu/umsch_mm_4_0_0.bin";
break;
default:
break;
}
r = amdgpu_ucode_request(adev, &adev->umsch_mm.fw, fw_name);
if (r) {
release_firmware(adev->umsch_mm.fw);
adev->umsch_mm.fw = NULL;
return r;
}
umsch_mm_hdr = (const struct umsch_mm_firmware_header_v1_0 *)adev->umsch_mm.fw->data;
adev->umsch_mm.ucode_size = le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_size_bytes);
adev->umsch_mm.data_size = le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_data_size_bytes);
adev->umsch_mm.irq_start_addr =
le32_to_cpu(umsch_mm_hdr->umsch_mm_irq_start_addr_lo) |
((uint64_t)(le32_to_cpu(umsch_mm_hdr->umsch_mm_irq_start_addr_hi)) << 32);
adev->umsch_mm.uc_start_addr =
le32_to_cpu(umsch_mm_hdr->umsch_mm_uc_start_addr_lo) |
((uint64_t)(le32_to_cpu(umsch_mm_hdr->umsch_mm_uc_start_addr_hi)) << 32);
adev->umsch_mm.data_start_addr =
le32_to_cpu(umsch_mm_hdr->umsch_mm_data_start_addr_lo) |
((uint64_t)(le32_to_cpu(umsch_mm_hdr->umsch_mm_data_start_addr_hi)) << 32);
if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
struct amdgpu_firmware_info *info;
info = &adev->firmware.ucode[AMDGPU_UCODE_ID_UMSCH_MM_UCODE];
info->ucode_id = AMDGPU_UCODE_ID_UMSCH_MM_UCODE;
info->fw = adev->umsch_mm.fw;
adev->firmware.fw_size +=
ALIGN(le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_size_bytes), PAGE_SIZE);
info = &adev->firmware.ucode[AMDGPU_UCODE_ID_UMSCH_MM_DATA];
info->ucode_id = AMDGPU_UCODE_ID_UMSCH_MM_DATA;
info->fw = adev->umsch_mm.fw;
adev->firmware.fw_size +=
ALIGN(le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_data_size_bytes), PAGE_SIZE);
}
return 0;
}
int amdgpu_umsch_mm_allocate_ucode_buffer(struct amdgpu_umsch_mm *umsch)
{
const struct umsch_mm_firmware_header_v1_0 *umsch_mm_hdr;
struct amdgpu_device *adev = umsch->ring.adev;
const __le32 *fw_data;
uint32_t fw_size;
int r;
umsch_mm_hdr = (const struct umsch_mm_firmware_header_v1_0 *)
adev->umsch_mm.fw->data;
fw_data = (const __le32 *)(adev->umsch_mm.fw->data +
le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_offset_bytes));
fw_size = le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_size_bytes);
r = amdgpu_bo_create_reserved(adev, fw_size,
4 * 1024, AMDGPU_GEM_DOMAIN_VRAM,
&adev->umsch_mm.ucode_fw_obj,
&adev->umsch_mm.ucode_fw_gpu_addr,
(void **)&adev->umsch_mm.ucode_fw_ptr);
if (r) {
dev_err(adev->dev, "(%d) failed to create umsch_mm fw ucode bo\n", r);
return r;
}
memcpy(adev->umsch_mm.ucode_fw_ptr, fw_data, fw_size);
amdgpu_bo_kunmap(adev->umsch_mm.ucode_fw_obj);
amdgpu_bo_unreserve(adev->umsch_mm.ucode_fw_obj);
return 0;
}
int amdgpu_umsch_mm_allocate_ucode_data_buffer(struct amdgpu_umsch_mm *umsch)
{
const struct umsch_mm_firmware_header_v1_0 *umsch_mm_hdr;
struct amdgpu_device *adev = umsch->ring.adev;
const __le32 *fw_data;
uint32_t fw_size;
int r;
umsch_mm_hdr = (const struct umsch_mm_firmware_header_v1_0 *)
adev->umsch_mm.fw->data;
fw_data = (const __le32 *)(adev->umsch_mm.fw->data +
le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_data_offset_bytes));
fw_size = le32_to_cpu(umsch_mm_hdr->umsch_mm_ucode_data_size_bytes);
r = amdgpu_bo_create_reserved(adev, fw_size,
64 * 1024, AMDGPU_GEM_DOMAIN_VRAM,
&adev->umsch_mm.data_fw_obj,
&adev->umsch_mm.data_fw_gpu_addr,
(void **)&adev->umsch_mm.data_fw_ptr);
if (r) {
dev_err(adev->dev, "(%d) failed to create umsch_mm fw data bo\n", r);
return r;
}
memcpy(adev->umsch_mm.data_fw_ptr, fw_data, fw_size);
amdgpu_bo_kunmap(adev->umsch_mm.data_fw_obj);
amdgpu_bo_unreserve(adev->umsch_mm.data_fw_obj);
return 0;
}
int amdgpu_umsch_mm_psp_execute_cmd_buf(struct amdgpu_umsch_mm *umsch)
{
struct amdgpu_device *adev = umsch->ring.adev;
struct amdgpu_firmware_info ucode = {
.ucode_id = AMDGPU_UCODE_ID_UMSCH_MM_CMD_BUFFER,
.mc_addr = adev->umsch_mm.cmd_buf_gpu_addr,
.ucode_size = ((uintptr_t)adev->umsch_mm.cmd_buf_curr_ptr -
(uintptr_t)adev->umsch_mm.cmd_buf_ptr),
};
return psp_execute_ip_fw_load(&adev->psp, &ucode);
}
static void umsch_mm_agdb_index_init(struct amdgpu_device *adev)
{
uint32_t umsch_mm_agdb_start;
int i;
umsch_mm_agdb_start = adev->doorbell_index.max_assignment + 1;
umsch_mm_agdb_start = roundup(umsch_mm_agdb_start, 1024);
umsch_mm_agdb_start += (AMDGPU_NAVI10_DOORBELL64_VCN0_1 << 1);
for (i = 0; i < CONTEXT_PRIORITY_NUM_LEVELS; i++)
adev->umsch_mm.agdb_index[i] = umsch_mm_agdb_start + i;
}
static int umsch_mm_init(struct amdgpu_device *adev)
{
int r;
adev->umsch_mm.vmid_mask_mm_vpe = 0xf00;
adev->umsch_mm.engine_mask = (1 << UMSCH_SWIP_ENGINE_TYPE_VPE);
adev->umsch_mm.vpe_hqd_mask = 0xfe;
r = amdgpu_device_wb_get(adev, &adev->umsch_mm.wb_index);
if (r) {
dev_err(adev->dev, "failed to alloc wb for umsch: %d\n", r);
return r;
}
adev->umsch_mm.sch_ctx_gpu_addr = adev->wb.gpu_addr +
(adev->umsch_mm.wb_index * 4);
r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
AMDGPU_GEM_DOMAIN_GTT,
&adev->umsch_mm.cmd_buf_obj,
&adev->umsch_mm.cmd_buf_gpu_addr,
(void **)&adev->umsch_mm.cmd_buf_ptr);
if (r) {
dev_err(adev->dev, "failed to allocate cmdbuf bo %d\n", r);
amdgpu_device_wb_free(adev, adev->umsch_mm.wb_index);
return r;
}
mutex_init(&adev->umsch_mm.mutex_hidden);
umsch_mm_agdb_index_init(adev);
return 0;
}
static int umsch_mm_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
switch (amdgpu_ip_version(adev, VCN_HWIP, 0)) {
case IP_VERSION(4, 0, 5):
umsch_mm_v4_0_set_funcs(&adev->umsch_mm);
break;
default:
return -EINVAL;
}
adev->umsch_mm.ring.funcs = &umsch_v4_0_ring_funcs;
umsch_mm_set_regs(&adev->umsch_mm);
return 0;
}
static int umsch_mm_late_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
return umsch_mm_test(adev);
}
static int umsch_mm_sw_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int r;
r = umsch_mm_init(adev);
if (r)
return r;
r = umsch_mm_ring_init(&adev->umsch_mm);
if (r)
return r;
r = umsch_mm_init_microcode(&adev->umsch_mm);
if (r)
return r;
return 0;
}
static int umsch_mm_sw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
release_firmware(adev->umsch_mm.fw);
adev->umsch_mm.fw = NULL;
amdgpu_ring_fini(&adev->umsch_mm.ring);
mutex_destroy(&adev->umsch_mm.mutex_hidden);
amdgpu_bo_free_kernel(&adev->umsch_mm.cmd_buf_obj,
&adev->umsch_mm.cmd_buf_gpu_addr,
(void **)&adev->umsch_mm.cmd_buf_ptr);
amdgpu_device_wb_free(adev, adev->umsch_mm.wb_index);
return 0;
}
static int umsch_mm_hw_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int r;
r = umsch_mm_load_microcode(&adev->umsch_mm);
if (r)
return r;
umsch_mm_ring_start(&adev->umsch_mm);
r = umsch_mm_set_hw_resources(&adev->umsch_mm);
if (r)
return r;
return 0;
}
static int umsch_mm_hw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
umsch_mm_ring_stop(&adev->umsch_mm);
amdgpu_bo_free_kernel(&adev->umsch_mm.data_fw_obj,
&adev->umsch_mm.data_fw_gpu_addr,
(void **)&adev->umsch_mm.data_fw_ptr);
amdgpu_bo_free_kernel(&adev->umsch_mm.ucode_fw_obj,
&adev->umsch_mm.ucode_fw_gpu_addr,
(void **)&adev->umsch_mm.ucode_fw_ptr);
return 0;
}
static const struct amd_ip_funcs umsch_mm_v4_0_ip_funcs = {
.name = "umsch_mm_v4_0",
.early_init = umsch_mm_early_init,
.late_init = umsch_mm_late_init,
.sw_init = umsch_mm_sw_init,
.sw_fini = umsch_mm_sw_fini,
.hw_init = umsch_mm_hw_init,
.hw_fini = umsch_mm_hw_fini,
};
const struct amdgpu_ip_block_version umsch_mm_v4_0_ip_block = {
.type = AMD_IP_BLOCK_TYPE_UMSCH_MM,
.major = 4,
.minor = 0,
.rev = 0,
.funcs = &umsch_mm_v4_0_ip_funcs,
};


@ -0,0 +1,228 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright 2023 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef __AMDGPU_UMSCH_MM_H__
#define __AMDGPU_UMSCH_MM_H__
enum UMSCH_SWIP_ENGINE_TYPE {
UMSCH_SWIP_ENGINE_TYPE_VCN0 = 0,
UMSCH_SWIP_ENGINE_TYPE_VCN1 = 1,
UMSCH_SWIP_ENGINE_TYPE_VCN = 2,
UMSCH_SWIP_ENGINE_TYPE_VPE = 3,
UMSCH_SWIP_ENGINE_TYPE_MAX
};
enum UMSCH_SWIP_AFFINITY_TYPE {
UMSCH_SWIP_AFFINITY_TYPE_ANY = 0,
UMSCH_SWIP_AFFINITY_TYPE_VCN0 = 1,
UMSCH_SWIP_AFFINITY_TYPE_VCN1 = 2,
UMSCH_SWIP_AFFINITY_TYPE_MAX
};
enum UMSCH_CONTEXT_PRIORITY_LEVEL {
CONTEXT_PRIORITY_LEVEL_IDLE = 0,
CONTEXT_PRIORITY_LEVEL_NORMAL = 1,
CONTEXT_PRIORITY_LEVEL_FOCUS = 2,
CONTEXT_PRIORITY_LEVEL_REALTIME = 3,
CONTEXT_PRIORITY_NUM_LEVELS
};
struct umsch_mm_set_resource_input {
uint32_t vmid_mask_mm_vcn;
uint32_t vmid_mask_mm_vpe;
uint32_t logging_vmid;
uint32_t engine_mask;
union {
struct {
uint32_t disable_reset : 1;
uint32_t disable_umsch_mm_log : 1;
uint32_t reserved : 30;
};
uint32_t uint32_all;
};
};
struct umsch_mm_add_queue_input {
uint32_t process_id;
uint64_t page_table_base_addr;
uint64_t process_va_start;
uint64_t process_va_end;
uint64_t process_quantum;
uint64_t process_csa_addr;
uint64_t context_quantum;
uint64_t context_csa_addr;
uint32_t inprocess_context_priority;
enum UMSCH_CONTEXT_PRIORITY_LEVEL context_global_priority_level;
uint32_t doorbell_offset_0;
uint32_t doorbell_offset_1;
enum UMSCH_SWIP_ENGINE_TYPE engine_type;
uint32_t affinity;
enum UMSCH_SWIP_AFFINITY_TYPE affinity_type;
uint64_t mqd_addr;
uint64_t h_context;
uint64_t h_queue;
uint32_t vm_context_cntl;
struct {
uint32_t is_context_suspended : 1;
uint32_t reserved : 31;
};
};
struct umsch_mm_remove_queue_input {
uint32_t doorbell_offset_0;
uint32_t doorbell_offset_1;
uint64_t context_csa_addr;
};
struct MQD_INFO {
uint32_t rb_base_hi;
uint32_t rb_base_lo;
uint32_t rb_size;
uint32_t wptr_val;
uint32_t rptr_val;
uint32_t unmapped;
};
struct amdgpu_umsch_mm;
struct umsch_mm_funcs {
int (*set_hw_resources)(struct amdgpu_umsch_mm *umsch);
int (*add_queue)(struct amdgpu_umsch_mm *umsch,
struct umsch_mm_add_queue_input *input);
int (*remove_queue)(struct amdgpu_umsch_mm *umsch,
struct umsch_mm_remove_queue_input *input);
int (*set_regs)(struct amdgpu_umsch_mm *umsch);
int (*init_microcode)(struct amdgpu_umsch_mm *umsch);
int (*load_microcode)(struct amdgpu_umsch_mm *umsch);
int (*ring_init)(struct amdgpu_umsch_mm *umsch);
int (*ring_start)(struct amdgpu_umsch_mm *umsch);
int (*ring_stop)(struct amdgpu_umsch_mm *umsch);
int (*ring_fini)(struct amdgpu_umsch_mm *umsch);
};
struct amdgpu_umsch_mm {
struct amdgpu_ring ring;
uint32_t rb_wptr;
uint32_t rb_rptr;
const struct umsch_mm_funcs *funcs;
const struct firmware *fw;
uint32_t fw_version;
uint32_t feature_version;
struct amdgpu_bo *ucode_fw_obj;
uint64_t ucode_fw_gpu_addr;
uint32_t *ucode_fw_ptr;
uint64_t irq_start_addr;
uint64_t uc_start_addr;
uint32_t ucode_size;
struct amdgpu_bo *data_fw_obj;
uint64_t data_fw_gpu_addr;
uint32_t *data_fw_ptr;
uint64_t data_start_addr;
uint32_t data_size;
struct amdgpu_bo *cmd_buf_obj;
uint64_t cmd_buf_gpu_addr;
uint32_t *cmd_buf_ptr;
uint32_t *cmd_buf_curr_ptr;
uint32_t wb_index;
uint64_t sch_ctx_gpu_addr;
uint32_t *sch_ctx_cpu_addr;
uint32_t vmid_mask_mm_vcn;
uint32_t vmid_mask_mm_vpe;
uint32_t engine_mask;
uint32_t vcn0_hqd_mask;
uint32_t vcn1_hqd_mask;
uint32_t vcn_hqd_mask[2];
uint32_t vpe_hqd_mask;
uint32_t agdb_index[CONTEXT_PRIORITY_NUM_LEVELS];
struct mutex mutex_hidden;
};
int amdgpu_umsch_mm_submit_pkt(struct amdgpu_umsch_mm *umsch, void *pkt, int ndws);
int amdgpu_umsch_mm_query_fence(struct amdgpu_umsch_mm *umsch);
int amdgpu_umsch_mm_init_microcode(struct amdgpu_umsch_mm *umsch);
int amdgpu_umsch_mm_allocate_ucode_buffer(struct amdgpu_umsch_mm *umsch);
int amdgpu_umsch_mm_allocate_ucode_data_buffer(struct amdgpu_umsch_mm *umsch);
int amdgpu_umsch_mm_psp_execute_cmd_buf(struct amdgpu_umsch_mm *umsch);
int amdgpu_umsch_mm_ring_init(struct amdgpu_umsch_mm *umsch);
#define WREG32_SOC15_UMSCH(reg, value) \
do { \
uint32_t reg_offset = adev->reg_offset[VCN_HWIP][0][reg##_BASE_IDX] + reg; \
if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { \
*adev->umsch_mm.cmd_buf_curr_ptr++ = (reg_offset << 2); \
*adev->umsch_mm.cmd_buf_curr_ptr++ = value; \
} else { \
WREG32(reg_offset, value); \
} \
} while (0)
#define umsch_mm_set_hw_resources(umsch) \
((umsch)->funcs->set_hw_resources ? (umsch)->funcs->set_hw_resources((umsch)) : 0)
#define umsch_mm_add_queue(umsch, input) \
((umsch)->funcs->add_queue ? (umsch)->funcs->add_queue((umsch), (input)) : 0)
#define umsch_mm_remove_queue(umsch, input) \
((umsch)->funcs->remove_queue ? (umsch)->funcs->remove_queue((umsch), (input)) : 0)
#define umsch_mm_set_regs(umsch) \
((umsch)->funcs->set_regs ? (umsch)->funcs->set_regs((umsch)) : 0)
#define umsch_mm_init_microcode(umsch) \
((umsch)->funcs->init_microcode ? (umsch)->funcs->init_microcode((umsch)) : 0)
#define umsch_mm_load_microcode(umsch) \
((umsch)->funcs->load_microcode ? (umsch)->funcs->load_microcode((umsch)) : 0)
#define umsch_mm_ring_init(umsch) \
((umsch)->funcs->ring_init ? (umsch)->funcs->ring_init((umsch)) : 0)
#define umsch_mm_ring_start(umsch) \
((umsch)->funcs->ring_start ? (umsch)->funcs->ring_start((umsch)) : 0)
#define umsch_mm_ring_stop(umsch) \
((umsch)->funcs->ring_stop ? (umsch)->funcs->ring_stop((umsch)) : 0)
#define umsch_mm_ring_fini(umsch) \
((umsch)->funcs->ring_fini ? (umsch)->funcs->ring_fini((umsch)) : 0)
static inline void amdgpu_umsch_mm_lock(struct amdgpu_umsch_mm *umsch)
{
mutex_lock(&umsch->mutex_hidden);
}
static inline void amdgpu_umsch_mm_unlock(struct amdgpu_umsch_mm *umsch)
{
mutex_unlock(&umsch->mutex_hidden);
}
extern const struct amdgpu_ip_block_version umsch_mm_v4_0_ip_block;
#endif

View File

@ -418,12 +418,11 @@ int amdgpu_uvd_entity_init(struct amdgpu_device *adev)
return 0;
}
int amdgpu_uvd_suspend(struct amdgpu_device *adev)
int amdgpu_uvd_prepare_suspend(struct amdgpu_device *adev)
{
unsigned int size;
void *ptr;
int i, j, idx;
bool in_ras_intr = amdgpu_ras_intr_triggered();
cancel_delayed_work_sync(&adev->uvd.idle_work);
@ -452,7 +451,7 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)
if (drm_dev_enter(adev_to_drm(adev), &idx)) {
/* re-write 0 since err_event_athub will corrupt VCPU buffer */
if (in_ras_intr)
if (amdgpu_ras_intr_triggered())
memset(adev->uvd.inst[j].saved_bo, 0, size);
else
memcpy_fromio(adev->uvd.inst[j].saved_bo, ptr, size);
@ -461,7 +460,12 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)
}
}
if (in_ras_intr)
return 0;
}
int amdgpu_uvd_suspend(struct amdgpu_device *adev)
{
if (amdgpu_ras_intr_triggered())
DRM_WARN("UVD VCPU state may lost due to RAS ERREVENT_ATHUB_INTERRUPT\n");
return 0;


@ -74,6 +74,7 @@ struct amdgpu_uvd {
int amdgpu_uvd_sw_init(struct amdgpu_device *adev);
int amdgpu_uvd_sw_fini(struct amdgpu_device *adev);
int amdgpu_uvd_entity_init(struct amdgpu_device *adev);
int amdgpu_uvd_prepare_suspend(struct amdgpu_device *adev);
int amdgpu_uvd_suspend(struct amdgpu_device *adev);
int amdgpu_uvd_resume(struct amdgpu_device *adev);
int amdgpu_uvd_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,


@ -58,6 +58,7 @@
#define FIRMWARE_VCN4_0_2 "amdgpu/vcn_4_0_2.bin"
#define FIRMWARE_VCN4_0_3 "amdgpu/vcn_4_0_3.bin"
#define FIRMWARE_VCN4_0_4 "amdgpu/vcn_4_0_4.bin"
#define FIRMWARE_VCN4_0_5 "amdgpu/vcn_4_0_5.bin"
MODULE_FIRMWARE(FIRMWARE_RAVEN);
MODULE_FIRMWARE(FIRMWARE_PICASSO);
@ -80,6 +81,7 @@ MODULE_FIRMWARE(FIRMWARE_VCN4_0_0);
MODULE_FIRMWARE(FIRMWARE_VCN4_0_2);
MODULE_FIRMWARE(FIRMWARE_VCN4_0_3);
MODULE_FIRMWARE(FIRMWARE_VCN4_0_4);
MODULE_FIRMWARE(FIRMWARE_VCN4_0_5);
static void amdgpu_vcn_idle_work_handler(struct work_struct *work);
@ -124,7 +126,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
* Hence, check for these versions here - notice this is
* restricted to Vangogh (Deck's APU).
*/
if (adev->ip_versions[UVD_HWIP][0] == IP_VERSION(3, 0, 2)) {
if (amdgpu_ip_version(adev, UVD_HWIP, 0) == IP_VERSION(3, 0, 2)) {
const char *bios_ver = dmi_get_system_info(DMI_BIOS_VERSION);
if (bios_ver && (!strncmp("F7A0113", bios_ver, 7) ||
@ -169,7 +171,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev)
if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
bo_size += AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8);
if (adev->ip_versions[UVD_HWIP][0] >= IP_VERSION(4, 0, 0)) {
if (amdgpu_ip_version(adev, UVD_HWIP, 0) >= IP_VERSION(4, 0, 0)) {
fw_shared_size = AMDGPU_GPU_PAGE_ALIGN(sizeof(struct amdgpu_vcn4_fw_shared));
log_offset = offsetof(struct amdgpu_vcn4_fw_shared, fw_log);
} else {
@ -265,7 +267,7 @@ static bool amdgpu_vcn_using_unified_queue(struct amdgpu_ring *ring)
struct amdgpu_device *adev = ring->adev;
bool ret = false;
if (adev->ip_versions[UVD_HWIP][0] >= IP_VERSION(4, 0, 0))
if (amdgpu_ip_version(adev, UVD_HWIP, 0) >= IP_VERSION(4, 0, 0))
ret = true;
return ret;
@ -292,8 +294,15 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
void *ptr;
int i, idx;
bool in_ras_intr = amdgpu_ras_intr_triggered();
cancel_delayed_work_sync(&adev->vcn.idle_work);
/* err_event_athub will corrupt VCPU buffer, so we need to
* restore fw data and clear buffer in amdgpu_vcn_resume() */
if (in_ras_intr)
return 0;
for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
if (adev->vcn.harvest_config & (1 << i))
continue;
@ -996,7 +1005,7 @@ int amdgpu_vcn_unified_ring_test_ib(struct amdgpu_ring *ring, long timeout)
struct amdgpu_device *adev = ring->adev;
long r;
if (adev->ip_versions[UVD_HWIP][0] != IP_VERSION(4, 0, 3)) {
if (amdgpu_ip_version(adev, UVD_HWIP, 0) != IP_VERSION(4, 0, 3)) {
r = amdgpu_vcn_enc_ring_test_ib(ring, timeout);
if (r)
goto error;
@ -1046,7 +1055,8 @@ void amdgpu_vcn_setup_ucode(struct amdgpu_device *adev)
adev->firmware.fw_size +=
ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
if (adev->ip_versions[UVD_HWIP][0] == IP_VERSION(4, 0, 3))
if (amdgpu_ip_version(adev, UVD_HWIP, 0) ==
IP_VERSION(4, 0, 3))
break;
}
dev_info(adev->dev, "Will use PSP to load VCN firmware\n");
@ -1087,7 +1097,7 @@ static ssize_t amdgpu_debugfs_vcn_fwlog_read(struct file *f, char __user *buf,
if (write_pos > read_pos) {
available = write_pos - read_pos;
read_num[0] = min(size, (size_t)available);
read_num[0] = min_t(size_t, size, available);
} else {
read_num[0] = AMDGPU_VCNFW_LOG_SIZE - read_pos;
available = read_num[0] + write_pos - plog->header_size;


@ -33,7 +33,7 @@
#define AMDGPU_VCN_MAX_ENC_RINGS 3
#define AMDGPU_MAX_VCN_INSTANCES 4
#define AMDGPU_MAX_VCN_ENC_RINGS AMDGPU_VCN_MAX_ENC_RINGS * AMDGPU_MAX_VCN_INSTANCES
#define AMDGPU_MAX_VCN_ENC_RINGS (AMDGPU_VCN_MAX_ENC_RINGS * AMDGPU_MAX_VCN_INSTANCES)
#define AMDGPU_VCN_HARVEST_VCN0 (1 << 0)
#define AMDGPU_VCN_HARVEST_VCN1 (1 << 1)


@ -837,7 +837,7 @@ enum amdgpu_sriov_vf_mode amdgpu_virt_get_sriov_vf_mode(struct amdgpu_device *ad
void amdgpu_virt_post_reset(struct amdgpu_device *adev)
{
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(11, 0, 3)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 0, 3)) {
/* force set to GFXOFF state after reset,
* to avoid some invalid operation before GC enable
*/
@ -847,7 +847,7 @@ void amdgpu_virt_post_reset(struct amdgpu_device *adev)
bool amdgpu_virt_fw_load_skip_check(struct amdgpu_device *adev, uint32_t ucode_id)
{
switch (adev->ip_versions[MP0_HWIP][0]) {
switch (amdgpu_ip_version(adev, MP0_HWIP, 0)) {
case IP_VERSION(13, 0, 0):
/* no vf autoload, white list */
if (ucode_id == AMDGPU_UCODE_ID_VCN1 ||


@ -239,6 +239,8 @@ static int amdgpu_vkms_conn_get_modes(struct drm_connector *connector)
for (i = 0; i < ARRAY_SIZE(common_modes); i++) {
mode = drm_cvt_mode(dev, common_modes[i].w, common_modes[i].h, 60, false, false, false);
if (!mode)
continue;
drm_mode_probed_add(connector, mode);
}


@ -885,12 +885,12 @@ int amdgpu_vm_update_range(struct amdgpu_device *adev, struct amdgpu_vm *vm,
* heavy-weight flush TLB unconditionally.
*/
flush_tlb |= adev->gmc.xgmi.num_physical_nodes &&
adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 0);
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 0);
/*
* On GFX8 and older any 8 PTE block with a valid bit set enters the TLB
*/
flush_tlb |= adev->ip_versions[GC_HWIP][0] < IP_VERSION(9, 0, 0);
flush_tlb |= amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(9, 0, 0);
memset(&params, 0, sizeof(params));
params.adev = adev;
@ -1403,7 +1403,7 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
spin_unlock(&vm->status_lock);
/* Try to reserve the BO to avoid clearing its ptes */
if (!amdgpu_vm_debug && dma_resv_trylock(resv))
if (!adev->debug_vm && dma_resv_trylock(resv))
clear = false;
/* Somebody else is using the BO right now */
else
@ -2059,7 +2059,7 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
if (amdgpu_vm_block_size != -1)
tmp >>= amdgpu_vm_block_size - 9;
tmp = DIV_ROUND_UP(fls64(tmp) - 1, 9) - 1;
adev->vm_manager.num_level = min(max_level, (unsigned)tmp);
adev->vm_manager.num_level = min_t(unsigned int, max_level, tmp);
switch (adev->vm_manager.num_level) {
case 3:
adev->vm_manager.root_level = AMDGPU_VM_PDB2;
@ -2730,3 +2730,53 @@ void amdgpu_debugfs_vm_bo_info(struct amdgpu_vm *vm, struct seq_file *m)
total_done_objs);
}
#endif
/**
* amdgpu_vm_update_fault_cache - update cached fault info.
* @adev: amdgpu device pointer
* @pasid: PASID of the VM
* @addr: Address of the fault
* @status: GPUVM fault status register
* @vmhub: which vmhub got the fault
*
* Cache the fault info for later use by userspace in debugging.
*/
void amdgpu_vm_update_fault_cache(struct amdgpu_device *adev,
unsigned int pasid,
uint64_t addr,
uint32_t status,
unsigned int vmhub)
{
struct amdgpu_vm *vm;
unsigned long flags;
xa_lock_irqsave(&adev->vm_manager.pasids, flags);
vm = xa_load(&adev->vm_manager.pasids, pasid);
/* Don't update the fault cache if status is 0. In the multiple
* fault case, subsequent faults will return a 0 status which is
* useless for userspace and replaces the useful fault status, so
* only update if status is non-0.
*/
if (vm && status) {
vm->fault_info.addr = addr;
vm->fault_info.status = status;
if (AMDGPU_IS_GFXHUB(vmhub)) {
vm->fault_info.vmhub = AMDGPU_VMHUB_TYPE_GFX;
vm->fault_info.vmhub |=
(vmhub - AMDGPU_GFXHUB_START) << AMDGPU_VMHUB_IDX_SHIFT;
} else if (AMDGPU_IS_MMHUB0(vmhub)) {
vm->fault_info.vmhub = AMDGPU_VMHUB_TYPE_MM0;
vm->fault_info.vmhub |=
(vmhub - AMDGPU_MMHUB0_START) << AMDGPU_VMHUB_IDX_SHIFT;
} else if (AMDGPU_IS_MMHUB1(vmhub)) {
vm->fault_info.vmhub = AMDGPU_VMHUB_TYPE_MM1;
vm->fault_info.vmhub |=
(vmhub - AMDGPU_MMHUB1_START) << AMDGPU_VMHUB_IDX_SHIFT;
} else {
WARN_ONCE(1, "Invalid vmhub %u\n", vmhub);
}
}
xa_unlock_irqrestore(&adev->vm_manager.pasids, flags);
}
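The fault cache packs both the hub type and the hub instance into fault_info.vmhub: the low bits carry an AMDGPU_VMHUB_TYPE_* value and the instance (the vmhub index minus the start of its range) is shifted up by AMDGPU_VMHUB_IDX_SHIFT. A hedged decode sketch follows; the concrete field widths and shift (8-bit type, shift of 8) and the EX_* names are assumptions for illustration, not taken from this diff:

#include <stdint.h>
#include <stdio.h>

/* Assumed layout (not shown in this hunk): type in bits 0..7, index from bit 8 */
#define EX_VMHUB_TYPE_MASK	0xffu
#define EX_VMHUB_IDX_SHIFT	8

#define EX_VMHUB_TYPE_GFX	0u
#define EX_VMHUB_TYPE_MM0	1u
#define EX_VMHUB_TYPE_MM1	2u

static void decode_vmhub(uint32_t packed)
{
	uint32_t type = packed & EX_VMHUB_TYPE_MASK;
	uint32_t idx = packed >> EX_VMHUB_IDX_SHIFT;

	printf("hub type %u, instance %u\n", type, idx);
}

int main(void)
{
	/* e.g. a fault reported on MMHUB0(1): type MM0, instance 1 */
	uint32_t packed = EX_VMHUB_TYPE_MM0 | (1u << EX_VMHUB_IDX_SHIFT);

	decode_vmhub(packed);
	return 0;
}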


@ -124,9 +124,16 @@ struct amdgpu_mem_stats;
* layout: max 8 GFXHUB + 4 MMHUB0 + 1 MMHUB1
*/
#define AMDGPU_MAX_VMHUBS 13
#define AMDGPU_GFXHUB(x) (x)
#define AMDGPU_MMHUB0(x) (8 + x)
#define AMDGPU_MMHUB1(x) (8 + 4 + x)
#define AMDGPU_GFXHUB_START 0
#define AMDGPU_MMHUB0_START 8
#define AMDGPU_MMHUB1_START 12
#define AMDGPU_GFXHUB(x) (AMDGPU_GFXHUB_START + (x))
#define AMDGPU_MMHUB0(x) (AMDGPU_MMHUB0_START + (x))
#define AMDGPU_MMHUB1(x) (AMDGPU_MMHUB1_START + (x))
#define AMDGPU_IS_GFXHUB(x) ((x) >= AMDGPU_GFXHUB_START && (x) < AMDGPU_MMHUB0_START)
#define AMDGPU_IS_MMHUB0(x) ((x) >= AMDGPU_MMHUB0_START && (x) < AMDGPU_MMHUB1_START)
#define AMDGPU_IS_MMHUB1(x) ((x) >= AMDGPU_MMHUB1_START && (x) < AMDGPU_MAX_VMHUBS)
/* Reserve 2MB at top/bottom of address space for kernel use */
#define AMDGPU_VA_RESERVED_SIZE (2ULL << 20)
@ -252,6 +259,15 @@ struct amdgpu_vm_update_funcs {
struct dma_fence **fence);
};
struct amdgpu_vm_fault_info {
/* fault address */
uint64_t addr;
/* fault status register */
uint32_t status;
/* which vmhub? gfxhub, mmhub, etc. */
unsigned int vmhub;
};
struct amdgpu_vm {
/* tree of virtual addresses mapped */
struct rb_root_cached va;
@ -343,6 +359,9 @@ struct amdgpu_vm {
/* Memory partition number, -1 means any partition */
int8_t mem_id;
/* cached fault info */
struct amdgpu_vm_fault_info fault_info;
};
struct amdgpu_vm_manager {
@ -554,4 +573,10 @@ static inline void amdgpu_vm_eviction_unlock(struct amdgpu_vm *vm)
mutex_unlock(&vm->eviction_lock);
}
void amdgpu_vm_update_fault_cache(struct amdgpu_device *adev,
unsigned int pasid,
uint64_t addr,
uint32_t status,
unsigned int vmhub);
#endif


@ -843,14 +843,8 @@ static void amdgpu_vm_pte_update_flags(struct amdgpu_vm_update_params *params,
*/
if ((flags & AMDGPU_PTE_SYSTEM) && (adev->flags & AMD_IS_APU) &&
adev->gmc.gmc_funcs->override_vm_pte_flags &&
num_possible_nodes() > 1) {
if (!params->pages_addr)
amdgpu_gmc_override_vm_pte_flags(adev, params->vm,
addr, &flags);
else
dev_dbg(adev->dev,
"override_vm_pte_flags skipped: non-contiguous\n");
}
num_possible_nodes() > 1 && !params->pages_addr)
amdgpu_gmc_override_vm_pte_flags(adev, params->vm, addr, &flags);
params->vm->update_funcs->update(params, pt, pe, addr, count, incr,
flags);


@ -0,0 +1,656 @@
/*
* Copyright 2022 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/firmware.h>
#include <drm/drm_drv.h>
#include "amdgpu.h"
#include "amdgpu_ucode.h"
#include "amdgpu_vpe.h"
#include "soc15_common.h"
#include "vpe_v6_1.h"
#define AMDGPU_CSA_VPE_SIZE 64
/* VPE CSA resides in the 4th page of CSA */
#define AMDGPU_CSA_VPE_OFFSET (4096 * 3)
static void vpe_set_ring_funcs(struct amdgpu_device *adev);
int amdgpu_vpe_psp_update_sram(struct amdgpu_device *adev)
{
struct amdgpu_firmware_info ucode = {
.ucode_id = AMDGPU_UCODE_ID_VPE,
.mc_addr = adev->vpe.cmdbuf_gpu_addr,
.ucode_size = 8,
};
return psp_execute_ip_fw_load(&adev->psp, &ucode);
}
int amdgpu_vpe_init_microcode(struct amdgpu_vpe *vpe)
{
struct amdgpu_device *adev = vpe->ring.adev;
const struct vpe_firmware_header_v1_0 *vpe_hdr;
char fw_prefix[32], fw_name[64];
int ret;
amdgpu_ucode_ip_version_decode(adev, VPE_HWIP, fw_prefix, sizeof(fw_prefix));
snprintf(fw_name, sizeof(fw_name), "amdgpu/%s.bin", fw_prefix);
ret = amdgpu_ucode_request(adev, &adev->vpe.fw, fw_name);
if (ret)
goto out;
vpe_hdr = (const struct vpe_firmware_header_v1_0 *)adev->vpe.fw->data;
adev->vpe.fw_version = le32_to_cpu(vpe_hdr->header.ucode_version);
adev->vpe.feature_version = le32_to_cpu(vpe_hdr->ucode_feature_version);
if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
struct amdgpu_firmware_info *info;
info = &adev->firmware.ucode[AMDGPU_UCODE_ID_VPE_CTX];
info->ucode_id = AMDGPU_UCODE_ID_VPE_CTX;
info->fw = adev->vpe.fw;
adev->firmware.fw_size +=
ALIGN(le32_to_cpu(vpe_hdr->ctx_ucode_size_bytes), PAGE_SIZE);
info = &adev->firmware.ucode[AMDGPU_UCODE_ID_VPE_CTL];
info->ucode_id = AMDGPU_UCODE_ID_VPE_CTL;
info->fw = adev->vpe.fw;
adev->firmware.fw_size +=
ALIGN(le32_to_cpu(vpe_hdr->ctl_ucode_size_bytes), PAGE_SIZE);
}
return 0;
out:
dev_err(adev->dev, "fail to initialize vpe microcode\n");
release_firmware(adev->vpe.fw);
adev->vpe.fw = NULL;
return ret;
}
int amdgpu_vpe_ring_init(struct amdgpu_vpe *vpe)
{
struct amdgpu_device *adev = container_of(vpe, struct amdgpu_device, vpe);
struct amdgpu_ring *ring = &vpe->ring;
int ret;
ring->ring_obj = NULL;
ring->use_doorbell = true;
ring->vm_hub = AMDGPU_MMHUB0(0);
ring->doorbell_index = (adev->doorbell_index.vpe_ring << 1);
snprintf(ring->name, 4, "vpe");
ret = amdgpu_ring_init(adev, ring, 1024, &vpe->trap_irq, 0,
AMDGPU_RING_PRIO_DEFAULT, NULL);
if (ret)
return ret;
return 0;
}
int amdgpu_vpe_ring_fini(struct amdgpu_vpe *vpe)
{
amdgpu_ring_fini(&vpe->ring);
return 0;
}
static int vpe_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
struct amdgpu_vpe *vpe = &adev->vpe;
switch (amdgpu_ip_version(adev, VPE_HWIP, 0)) {
case IP_VERSION(6, 1, 0):
vpe_v6_1_set_funcs(vpe);
break;
default:
return -EINVAL;
}
vpe_set_ring_funcs(adev);
vpe_set_regs(vpe);
return 0;
}
static int vpe_common_init(struct amdgpu_vpe *vpe)
{
struct amdgpu_device *adev = container_of(vpe, struct amdgpu_device, vpe);
int r;
r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
AMDGPU_GEM_DOMAIN_GTT,
&adev->vpe.cmdbuf_obj,
&adev->vpe.cmdbuf_gpu_addr,
(void **)&adev->vpe.cmdbuf_cpu_addr);
if (r) {
dev_err(adev->dev, "VPE: failed to allocate cmdbuf bo %d\n", r);
return r;
}
return 0;
}
static int vpe_sw_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
struct amdgpu_vpe *vpe = &adev->vpe;
int ret;
ret = vpe_common_init(vpe);
if (ret)
goto out;
ret = vpe_irq_init(vpe);
if (ret)
goto out;
ret = vpe_ring_init(vpe);
if (ret)
goto out;
ret = vpe_init_microcode(vpe);
if (ret)
goto out;
out:
return ret;
}
static int vpe_sw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
struct amdgpu_vpe *vpe = &adev->vpe;
release_firmware(vpe->fw);
vpe->fw = NULL;
vpe_ring_fini(vpe);
amdgpu_bo_free_kernel(&adev->vpe.cmdbuf_obj,
&adev->vpe.cmdbuf_gpu_addr,
(void **)&adev->vpe.cmdbuf_cpu_addr);
return 0;
}
static int vpe_hw_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
struct amdgpu_vpe *vpe = &adev->vpe;
int ret;
ret = vpe_load_microcode(vpe);
if (ret)
return ret;
ret = vpe_ring_start(vpe);
if (ret)
return ret;
return 0;
}
static int vpe_hw_fini(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
struct amdgpu_vpe *vpe = &adev->vpe;
vpe_ring_stop(vpe);
return 0;
}
static int vpe_suspend(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
return vpe_hw_fini(adev);
}
static int vpe_resume(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
return vpe_hw_init(adev);
}
static void vpe_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count)
{
int i;
for (i = 0; i < count; i++)
if (i == 0)
amdgpu_ring_write(ring, ring->funcs->nop |
VPE_CMD_NOP_HEADER_COUNT(count - 1));
else
amdgpu_ring_write(ring, ring->funcs->nop);
}
static uint64_t vpe_get_csa_mc_addr(struct amdgpu_ring *ring, uint32_t vmid)
{
struct amdgpu_device *adev = ring->adev;
uint32_t index = 0;
uint64_t csa_mc_addr;
if (amdgpu_sriov_vf(adev) || vmid == 0 || !adev->gfx.mcbp)
return 0;
csa_mc_addr = amdgpu_csa_vaddr(adev) + AMDGPU_CSA_VPE_OFFSET +
index * AMDGPU_CSA_VPE_SIZE;
return csa_mc_addr;
}
static void vpe_ring_emit_ib(struct amdgpu_ring *ring,
struct amdgpu_job *job,
struct amdgpu_ib *ib,
uint32_t flags)
{
uint32_t vmid = AMDGPU_JOB_GET_VMID(job);
uint64_t csa_mc_addr = vpe_get_csa_mc_addr(ring, vmid);
amdgpu_ring_write(ring, VPE_CMD_HEADER(VPE_CMD_OPCODE_INDIRECT, 0) |
VPE_CMD_INDIRECT_HEADER_VMID(vmid & 0xf));
/* base must be 32 byte aligned */
amdgpu_ring_write(ring, ib->gpu_addr & 0xffffffe0);
amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
amdgpu_ring_write(ring, ib->length_dw);
amdgpu_ring_write(ring, lower_32_bits(csa_mc_addr));
amdgpu_ring_write(ring, upper_32_bits(csa_mc_addr));
}
static void vpe_ring_emit_fence(struct amdgpu_ring *ring, uint64_t addr,
uint64_t seq, unsigned int flags)
{
int i = 0;
do {
/* write the fence */
amdgpu_ring_write(ring, VPE_CMD_HEADER(VPE_CMD_OPCODE_FENCE, 0));
/* zero in first two bits */
WARN_ON_ONCE(addr & 0x3);
amdgpu_ring_write(ring, lower_32_bits(addr));
amdgpu_ring_write(ring, upper_32_bits(addr));
amdgpu_ring_write(ring, i == 0 ? lower_32_bits(seq) : upper_32_bits(seq));
addr += 4;
} while ((flags & AMDGPU_FENCE_FLAG_64BIT) && (i++ < 1));
if (flags & AMDGPU_FENCE_FLAG_INT) {
/* generate an interrupt */
amdgpu_ring_write(ring, VPE_CMD_HEADER(VPE_CMD_OPCODE_TRAP, 0));
amdgpu_ring_write(ring, 0);
}
}
static void vpe_ring_emit_pipeline_sync(struct amdgpu_ring *ring)
{
uint32_t seq = ring->fence_drv.sync_seq;
uint64_t addr = ring->fence_drv.gpu_addr;
/* wait for idle */
amdgpu_ring_write(ring, VPE_CMD_HEADER(VPE_CMD_OPCODE_POLL_REGMEM,
VPE_POLL_REGMEM_SUBOP_REGMEM) |
VPE_CMD_POLL_REGMEM_HEADER_FUNC(3) | /* equal */
VPE_CMD_POLL_REGMEM_HEADER_MEM(1));
amdgpu_ring_write(ring, addr & 0xfffffffc);
amdgpu_ring_write(ring, upper_32_bits(addr));
amdgpu_ring_write(ring, seq); /* reference */
amdgpu_ring_write(ring, 0xffffffff); /* mask */
amdgpu_ring_write(ring, VPE_CMD_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
VPE_CMD_POLL_REGMEM_DW5_INTERVAL(4));
}
static void vpe_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg, uint32_t val)
{
amdgpu_ring_write(ring, VPE_CMD_HEADER(VPE_CMD_OPCODE_REG_WRITE, 0));
amdgpu_ring_write(ring, reg << 2);
amdgpu_ring_write(ring, val);
}
static void vpe_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg,
uint32_t val, uint32_t mask)
{
amdgpu_ring_write(ring, VPE_CMD_HEADER(VPE_CMD_OPCODE_POLL_REGMEM,
VPE_POLL_REGMEM_SUBOP_REGMEM) |
VPE_CMD_POLL_REGMEM_HEADER_FUNC(3) | /* equal */
VPE_CMD_POLL_REGMEM_HEADER_MEM(0));
amdgpu_ring_write(ring, reg << 2);
amdgpu_ring_write(ring, 0);
amdgpu_ring_write(ring, val); /* reference */
amdgpu_ring_write(ring, mask); /* mask */
amdgpu_ring_write(ring, VPE_CMD_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
VPE_CMD_POLL_REGMEM_DW5_INTERVAL(10));
}
static void vpe_ring_emit_vm_flush(struct amdgpu_ring *ring, unsigned int vmid,
uint64_t pd_addr)
{
amdgpu_gmc_emit_flush_gpu_tlb(ring, vmid, pd_addr);
}
static unsigned int vpe_ring_init_cond_exec(struct amdgpu_ring *ring)
{
unsigned int ret;
amdgpu_ring_write(ring, VPE_CMD_HEADER(VPE_CMD_OPCODE_COND_EXE, 0));
amdgpu_ring_write(ring, lower_32_bits(ring->cond_exe_gpu_addr));
amdgpu_ring_write(ring, upper_32_bits(ring->cond_exe_gpu_addr));
amdgpu_ring_write(ring, 1);
ret = ring->wptr & ring->buf_mask; /* this is the offset we need to patch later */
amdgpu_ring_write(ring, 0x55aa55aa);/* insert dummy here and patch it later */
return ret;
}
static void vpe_ring_patch_cond_exec(struct amdgpu_ring *ring, unsigned int offset)
{
unsigned int cur;
WARN_ON_ONCE(offset > ring->buf_mask);
WARN_ON_ONCE(ring->ring[offset] != 0x55aa55aa);
cur = (ring->wptr - 1) & ring->buf_mask;
if (cur > offset)
ring->ring[offset] = cur - offset;
else
ring->ring[offset] = (ring->buf_mask + 1) - offset + cur;
}
static int vpe_ring_preempt_ib(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vpe *vpe = &adev->vpe;
uint32_t preempt_reg = vpe->regs.queue0_preempt;
int i, r = 0;
/* assert preemption condition */
amdgpu_ring_set_preempt_cond_exec(ring, false);
/* emit the trailing fence */
ring->trail_seq += 1;
amdgpu_ring_alloc(ring, 10);
vpe_ring_emit_fence(ring, ring->trail_fence_gpu_addr, ring->trail_seq, 0);
amdgpu_ring_commit(ring);
/* assert IB preemption */
WREG32(vpe_get_reg_offset(vpe, ring->me, preempt_reg), 1);
/* poll the trailing fence */
for (i = 0; i < adev->usec_timeout; i++) {
if (ring->trail_seq ==
le32_to_cpu(*(ring->trail_fence_cpu_addr)))
break;
udelay(1);
}
if (i >= adev->usec_timeout) {
r = -EINVAL;
dev_err(adev->dev, "ring %d failed to be preempted\n", ring->idx);
}
/* deassert IB preemption */
WREG32(vpe_get_reg_offset(vpe, ring->me, preempt_reg), 0);
/* deassert the preemption condition */
amdgpu_ring_set_preempt_cond_exec(ring, true);
return r;
}
static int vpe_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
return 0;
}
static int vpe_set_powergating_state(void *handle,
enum amd_powergating_state state)
{
return 0;
}
static uint64_t vpe_ring_get_rptr(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vpe *vpe = &adev->vpe;
uint64_t rptr;
if (ring->use_doorbell) {
rptr = atomic64_read((atomic64_t *)ring->rptr_cpu_addr);
dev_dbg(adev->dev, "rptr/doorbell before shift == 0x%016llx\n", rptr);
} else {
rptr = RREG32(vpe_get_reg_offset(vpe, ring->me, vpe->regs.queue0_rb_rptr_hi));
rptr = rptr << 32;
rptr |= RREG32(vpe_get_reg_offset(vpe, ring->me, vpe->regs.queue0_rb_rptr_lo));
dev_dbg(adev->dev, "rptr before shift [%i] == 0x%016llx\n", ring->me, rptr);
}
return (rptr >> 2);
}
static uint64_t vpe_ring_get_wptr(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vpe *vpe = &adev->vpe;
uint64_t wptr;
if (ring->use_doorbell) {
wptr = atomic64_read((atomic64_t *)ring->wptr_cpu_addr);
dev_dbg(adev->dev, "wptr/doorbell before shift == 0x%016llx\n", wptr);
} else {
wptr = RREG32(vpe_get_reg_offset(vpe, ring->me, vpe->regs.queue0_rb_wptr_hi));
wptr = wptr << 32;
wptr |= RREG32(vpe_get_reg_offset(vpe, ring->me, vpe->regs.queue0_rb_wptr_lo));
dev_dbg(adev->dev, "wptr before shift [%i] == 0x%016llx\n", ring->me, wptr);
}
return (wptr >> 2);
}
static void vpe_ring_set_wptr(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vpe *vpe = &adev->vpe;
if (ring->use_doorbell) {
dev_dbg(adev->dev, "Using doorbell, \
wptr_offs == 0x%08x, \
lower_32_bits(ring->wptr) << 2 == 0x%08x, \
upper_32_bits(ring->wptr) << 2 == 0x%08x\n",
ring->wptr_offs,
lower_32_bits(ring->wptr << 2),
upper_32_bits(ring->wptr << 2));
atomic64_set((atomic64_t *)ring->wptr_cpu_addr, ring->wptr << 2);
WDOORBELL64(ring->doorbell_index, ring->wptr << 2);
} else {
dev_dbg(adev->dev, "Not using doorbell, \
regVPEC_QUEUE0_RB_WPTR == 0x%08x, \
regVPEC_QUEUE0_RB_WPTR_HI == 0x%08x\n",
lower_32_bits(ring->wptr << 2),
upper_32_bits(ring->wptr << 2));
WREG32(vpe_get_reg_offset(vpe, ring->me, vpe->regs.queue0_rb_wptr_lo),
lower_32_bits(ring->wptr << 2));
WREG32(vpe_get_reg_offset(vpe, ring->me, vpe->regs.queue0_rb_wptr_hi),
upper_32_bits(ring->wptr << 2));
}
}
static int vpe_ring_test_ring(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
const uint32_t test_pattern = 0xdeadbeef;
uint32_t index, i;
uint64_t wb_addr;
int ret;
ret = amdgpu_device_wb_get(adev, &index);
if (ret) {
dev_err(adev->dev, "(%d) failed to allocate wb slot\n", ret);
return ret;
}
adev->wb.wb[index] = 0;
wb_addr = adev->wb.gpu_addr + (index * 4);
ret = amdgpu_ring_alloc(ring, 4);
if (ret) {
dev_err(adev->dev, "amdgpu: dma failed to lock ring %d (%d).\n", ring->idx, ret);
goto out;
}
amdgpu_ring_write(ring, VPE_CMD_HEADER(VPE_CMD_OPCODE_FENCE, 0));
amdgpu_ring_write(ring, lower_32_bits(wb_addr));
amdgpu_ring_write(ring, upper_32_bits(wb_addr));
amdgpu_ring_write(ring, test_pattern);
amdgpu_ring_commit(ring);
for (i = 0; i < adev->usec_timeout; i++) {
if (le32_to_cpu(adev->wb.wb[index]) == test_pattern)
goto out;
udelay(1);
}
ret = -ETIMEDOUT;
out:
amdgpu_device_wb_free(adev, index);
return ret;
}
static int vpe_ring_test_ib(struct amdgpu_ring *ring, long timeout)
{
struct amdgpu_device *adev = ring->adev;
const uint32_t test_pattern = 0xdeadbeef;
struct amdgpu_ib ib = {};
struct dma_fence *f = NULL;
uint32_t index;
uint64_t wb_addr;
int ret;
ret = amdgpu_device_wb_get(adev, &index);
if (ret) {
dev_err(adev->dev, "(%d) failed to allocate wb slot\n", ret);
return ret;
}
adev->wb.wb[index] = 0;
wb_addr = adev->wb.gpu_addr + (index * 4);
ret = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib);
if (ret)
goto err0;
ib.ptr[0] = VPE_CMD_HEADER(VPE_CMD_OPCODE_FENCE, 0);
ib.ptr[1] = lower_32_bits(wb_addr);
ib.ptr[2] = upper_32_bits(wb_addr);
ib.ptr[3] = test_pattern;
ib.ptr[4] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0);
ib.ptr[5] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0);
ib.ptr[6] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0);
ib.ptr[7] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0);
ib.length_dw = 8;
ret = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f);
if (ret)
goto err1;
ret = dma_fence_wait_timeout(f, false, timeout);
if (ret <= 0) {
ret = ret ? : -ETIMEDOUT;
goto err1;
}
ret = (le32_to_cpu(adev->wb.wb[index]) == test_pattern) ? 0 : -EINVAL;
err1:
amdgpu_ib_free(adev, &ib, NULL);
dma_fence_put(f);
err0:
amdgpu_device_wb_free(adev, index);
return ret;
}
static const struct amdgpu_ring_funcs vpe_ring_funcs = {
.type = AMDGPU_RING_TYPE_VPE,
.align_mask = 0xf,
.nop = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0),
.support_64bit_ptrs = true,
.get_rptr = vpe_ring_get_rptr,
.get_wptr = vpe_ring_get_wptr,
.set_wptr = vpe_ring_set_wptr,
.emit_frame_size =
5 + /* vpe_ring_init_cond_exec */
6 + /* vpe_ring_emit_pipeline_sync */
10 + 10 + 10 + /* vpe_ring_emit_fence */
/* vpe_ring_emit_vm_flush */
SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 6,
.emit_ib_size = 7 + 6,
.emit_ib = vpe_ring_emit_ib,
.emit_pipeline_sync = vpe_ring_emit_pipeline_sync,
.emit_fence = vpe_ring_emit_fence,
.emit_vm_flush = vpe_ring_emit_vm_flush,
.emit_wreg = vpe_ring_emit_wreg,
.emit_reg_wait = vpe_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
.insert_nop = vpe_ring_insert_nop,
.pad_ib = amdgpu_ring_generic_pad_ib,
.test_ring = vpe_ring_test_ring,
.test_ib = vpe_ring_test_ib,
.init_cond_exec = vpe_ring_init_cond_exec,
.patch_cond_exec = vpe_ring_patch_cond_exec,
.preempt_ib = vpe_ring_preempt_ib,
};
static void vpe_set_ring_funcs(struct amdgpu_device *adev)
{
adev->vpe.ring.funcs = &vpe_ring_funcs;
}
const struct amd_ip_funcs vpe_ip_funcs = {
.name = "vpe_v6_1",
.early_init = vpe_early_init,
.late_init = NULL,
.sw_init = vpe_sw_init,
.sw_fini = vpe_sw_fini,
.hw_init = vpe_hw_init,
.hw_fini = vpe_hw_fini,
.suspend = vpe_suspend,
.resume = vpe_resume,
.soft_reset = NULL,
.set_clockgating_state = vpe_set_clockgating_state,
.set_powergating_state = vpe_set_powergating_state,
};
const struct amdgpu_ip_block_version vpe_v6_1_ip_block = {
.type = AMD_IP_BLOCK_TYPE_VPE,
.major = 6,
.minor = 1,
.rev = 0,
.funcs = &vpe_ip_funcs,
};
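
For reference, vpe_ring_init_cond_exec() above reserves a placeholder dword (0x55aa55aa) and vpe_ring_patch_cond_exec() later overwrites it with the number of dwords the CP should skip when the conditional-execution test fails. A minimal standalone sketch of that wrap-around arithmetic follows; the function name is illustrative and not part of the driver.

/* Illustrative sketch only: the same skip-distance computation used by
 * vpe_ring_patch_cond_exec(), accounting for ring-buffer wrap-around.
 */
static unsigned int cond_exec_skip_dwords(unsigned int wptr,
					  unsigned int offset,
					  unsigned int buf_mask)
{
	unsigned int cur = (wptr - 1) & buf_mask;

	/* distance from the placeholder dword to the last written dword */
	return (cur > offset) ? cur - offset
			      : (buf_mask + 1) - offset + cur;
}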

@ -0,0 +1,91 @@
/*
* Copyright 2022 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef __AMDGPU_VPE_H__
#define __AMDGPU_VPE_H__
#include "amdgpu_ring.h"
#include "amdgpu_irq.h"
#include "vpe_6_1_fw_if.h"
struct amdgpu_vpe;
struct vpe_funcs {
uint32_t (*get_reg_offset)(struct amdgpu_vpe *vpe, uint32_t inst, uint32_t offset);
int (*set_regs)(struct amdgpu_vpe *vpe);
int (*irq_init)(struct amdgpu_vpe *vpe);
int (*init_microcode)(struct amdgpu_vpe *vpe);
int (*load_microcode)(struct amdgpu_vpe *vpe);
int (*ring_init)(struct amdgpu_vpe *vpe);
int (*ring_start)(struct amdgpu_vpe *vpe);
int (*ring_stop)(struct amdgpu_vpe *vpe);
int (*ring_fini)(struct amdgpu_vpe *vpe);
};
struct vpe_regs {
uint32_t queue0_rb_rptr_lo;
uint32_t queue0_rb_rptr_hi;
uint32_t queue0_rb_wptr_lo;
uint32_t queue0_rb_wptr_hi;
uint32_t queue0_preempt;
};
struct amdgpu_vpe {
struct amdgpu_ring ring;
struct amdgpu_irq_src trap_irq;
const struct vpe_funcs *funcs;
struct vpe_regs regs;
const struct firmware *fw;
uint32_t fw_version;
uint32_t feature_version;
struct amdgpu_bo *cmdbuf_obj;
uint64_t cmdbuf_gpu_addr;
uint32_t *cmdbuf_cpu_addr;
};
int amdgpu_vpe_psp_update_sram(struct amdgpu_device *adev);
int amdgpu_vpe_init_microcode(struct amdgpu_vpe *vpe);
int amdgpu_vpe_ring_init(struct amdgpu_vpe *vpe);
int amdgpu_vpe_ring_fini(struct amdgpu_vpe *vpe);
#define vpe_ring_init(vpe) ((vpe)->funcs->ring_init ? (vpe)->funcs->ring_init((vpe)) : 0)
#define vpe_ring_start(vpe) ((vpe)->funcs->ring_start ? (vpe)->funcs->ring_start((vpe)) : 0)
#define vpe_ring_stop(vpe) ((vpe)->funcs->ring_stop ? (vpe)->funcs->ring_stop((vpe)) : 0)
#define vpe_ring_fini(vpe) ((vpe)->funcs->ring_fini ? (vpe)->funcs->ring_fini((vpe)) : 0)
#define vpe_get_reg_offset(vpe, inst, offset) \
((vpe)->funcs->get_reg_offset ? (vpe)->funcs->get_reg_offset((vpe), (inst), (offset)) : 0)
#define vpe_set_regs(vpe) \
((vpe)->funcs->set_regs ? (vpe)->funcs->set_regs((vpe)) : 0)
#define vpe_irq_init(vpe) \
((vpe)->funcs->irq_init ? (vpe)->funcs->irq_init((vpe)) : 0)
#define vpe_init_microcode(vpe) \
((vpe)->funcs->init_microcode ? (vpe)->funcs->init_microcode((vpe)) : 0)
#define vpe_load_microcode(vpe) \
((vpe)->funcs->load_microcode ? (vpe)->funcs->load_microcode((vpe)) : 0)
extern const struct amdgpu_ip_block_version vpe_v6_1_ip_block;
#endif
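
The wrapper macros above treat every vpe_funcs hook as optional and fall back to returning 0 (success) when an IP version does not implement it. A small hedged sketch of the same pattern outside the driver, with made-up names and types:

/* Illustrative only: optional-callback pattern mirrored from the macros above */
struct engine_funcs {
	int (*start)(void *ctx);	/* optional, may be NULL */
};

struct engine {
	const struct engine_funcs *funcs;
	void *ctx;
};

/* a missing hook silently reports success, only real failures propagate */
#define engine_start(e) \
	((e)->funcs->start ? (e)->funcs->start((e)->ctx) : 0)

Callers can then invoke engine_start() unconditionally in their init path without per-generation NULL checks, which is the design choice the VPE macros make for ring_init/ring_start/ring_stop/ring_fini.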

@ -163,16 +163,11 @@ int amdgpu_xcp_init(struct amdgpu_xcp_mgr *xcp_mgr, int num_xcps, int mode)
return 0;
}
int amdgpu_xcp_switch_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr, int mode)
static int __amdgpu_xcp_switch_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr,
int mode)
{
int ret, curr_mode, num_xcps = 0;
if (!xcp_mgr || mode == AMDGPU_XCP_MODE_NONE)
return -EINVAL;
if (xcp_mgr->mode == mode)
return 0;
if (!xcp_mgr->funcs || !xcp_mgr->funcs->switch_partition_mode)
return 0;
@ -201,6 +196,25 @@ out:
return ret;
}
int amdgpu_xcp_switch_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr, int mode)
{
if (!xcp_mgr || mode == AMDGPU_XCP_MODE_NONE)
return -EINVAL;
if (xcp_mgr->mode == mode)
return 0;
return __amdgpu_xcp_switch_partition_mode(xcp_mgr, mode);
}
int amdgpu_xcp_restore_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr)
{
if (!xcp_mgr || xcp_mgr->mode == AMDGPU_XCP_MODE_NONE)
return 0;
return __amdgpu_xcp_switch_partition_mode(xcp_mgr, xcp_mgr->mode);
}
int amdgpu_xcp_query_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr, u32 flags)
{
int mode;

@ -129,6 +129,7 @@ int amdgpu_xcp_mgr_init(struct amdgpu_device *adev, int init_mode,
int amdgpu_xcp_init(struct amdgpu_xcp_mgr *xcp_mgr, int num_xcps, int mode);
int amdgpu_xcp_query_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr, u32 flags);
int amdgpu_xcp_switch_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr, int mode);
int amdgpu_xcp_restore_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr);
int amdgpu_xcp_get_partition(struct amdgpu_xcp_mgr *xcp_mgr,
enum AMDGPU_XCP_IP_BLOCK ip, int instance);

@ -325,6 +325,17 @@ static ssize_t amdgpu_xgmi_show_device_id(struct device *dev,
}
static ssize_t amdgpu_xgmi_show_physical_id(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = drm_to_adev(ddev);
return sysfs_emit(buf, "%u\n", adev->gmc.xgmi.physical_node_id);
}
static ssize_t amdgpu_xgmi_show_num_hops(struct device *dev,
struct device_attribute *attr,
char *buf)
@ -390,6 +401,7 @@ static ssize_t amdgpu_xgmi_show_error(struct device *dev,
static DEVICE_ATTR(xgmi_device_id, S_IRUGO, amdgpu_xgmi_show_device_id, NULL);
static DEVICE_ATTR(xgmi_physical_id, 0444, amdgpu_xgmi_show_physical_id, NULL);
static DEVICE_ATTR(xgmi_error, S_IRUGO, amdgpu_xgmi_show_error, NULL);
static DEVICE_ATTR(xgmi_num_hops, S_IRUGO, amdgpu_xgmi_show_num_hops, NULL);
static DEVICE_ATTR(xgmi_num_links, S_IRUGO, amdgpu_xgmi_show_num_links, NULL);
@ -407,6 +419,12 @@ static int amdgpu_xgmi_sysfs_add_dev_info(struct amdgpu_device *adev,
return ret;
}
ret = device_create_file(adev->dev, &dev_attr_xgmi_physical_id);
if (ret) {
dev_err(adev->dev, "XGMI: Failed to create device file xgmi_physical_id\n");
return ret;
}
/* Create xgmi error file */
ret = device_create_file(adev->dev, &dev_attr_xgmi_error);
if (ret)
@ -448,6 +466,7 @@ remove_link:
remove_file:
device_remove_file(adev->dev, &dev_attr_xgmi_device_id);
device_remove_file(adev->dev, &dev_attr_xgmi_physical_id);
device_remove_file(adev->dev, &dev_attr_xgmi_error);
device_remove_file(adev->dev, &dev_attr_xgmi_num_hops);
device_remove_file(adev->dev, &dev_attr_xgmi_num_links);
@ -463,6 +482,7 @@ static void amdgpu_xgmi_sysfs_rem_dev_info(struct amdgpu_device *adev,
memset(node, 0, sizeof(node));
device_remove_file(adev->dev, &dev_attr_xgmi_device_id);
device_remove_file(adev->dev, &dev_attr_xgmi_physical_id);
device_remove_file(adev->dev, &dev_attr_xgmi_error);
device_remove_file(adev->dev, &dev_attr_xgmi_num_hops);
device_remove_file(adev->dev, &dev_attr_xgmi_num_links);
@ -948,7 +968,8 @@ static int amdgpu_xgmi_query_pcs_error_status(struct amdgpu_device *adev,
uint32_t field_array_size = 0;
if (is_xgmi_pcs) {
if (adev->ip_versions[XGMI_HWIP][0] == IP_VERSION(6, 1, 0)) {
if (amdgpu_ip_version(adev, XGMI_HWIP, 0) ==
IP_VERSION(6, 1, 0)) {
pcs_ras_fields = &xgmi3x16_pcs_ras_fields[0];
field_array_size = ARRAY_SIZE(xgmi3x16_pcs_ras_fields);
} else {
@ -1071,7 +1092,7 @@ static int amdgpu_ras_error_inject_xgmi(struct amdgpu_device *adev,
if (amdgpu_dpm_set_df_cstate(adev, DF_CSTATE_DISALLOW))
dev_warn(adev->dev, "Failed to disallow df cstate");
if (amdgpu_dpm_allow_xgmi_power_down(adev, false))
if (amdgpu_dpm_set_xgmi_plpd_mode(adev, XGMI_PLPD_DISALLOW))
dev_warn(adev->dev, "Failed to disallow XGMI power down");
ret = psp_ras_trigger_error(&adev->psp, block_info, instance_mask);
@ -1079,7 +1100,7 @@ static int amdgpu_ras_error_inject_xgmi(struct amdgpu_device *adev,
if (amdgpu_ras_intr_triggered())
return ret;
if (amdgpu_dpm_allow_xgmi_power_down(adev, true))
if (amdgpu_dpm_set_xgmi_plpd_mode(adev, XGMI_PLPD_DEFAULT))
dev_warn(adev->dev, "Failed to allow XGMI power down");
if (amdgpu_dpm_set_df_cstate(adev, DF_CSTATE_ALLOW))

@ -500,7 +500,7 @@ static int aqua_vanjaram_switch_partition_mode(struct amdgpu_xcp_mgr *xcp_mgr,
return -EINVAL;
}
if (adev->kfd.init_complete)
if (adev->kfd.init_complete && !amdgpu_in_reset(adev))
flags |= AMDGPU_XCP_OPS_KFD;
if (flags & AMDGPU_XCP_OPS_KFD) {

@ -68,7 +68,7 @@ int athub_v1_0_set_clockgating(struct amdgpu_device *adev,
if (amdgpu_sriov_vf(adev))
return 0;
switch (adev->ip_versions[ATHUB_HWIP][0]) {
switch (amdgpu_ip_version(adev, ATHUB_HWIP, 0)) {
case IP_VERSION(9, 0, 0):
case IP_VERSION(9, 1, 0):
case IP_VERSION(9, 2, 0):

@ -77,7 +77,7 @@ int athub_v2_0_set_clockgating(struct amdgpu_device *adev,
if (amdgpu_sriov_vf(adev))
return 0;
switch (adev->ip_versions[ATHUB_HWIP][0]) {
switch (amdgpu_ip_version(adev, ATHUB_HWIP, 0)) {
case IP_VERSION(1, 3, 1):
case IP_VERSION(2, 0, 0):
case IP_VERSION(2, 0, 2):

@ -70,7 +70,7 @@ int athub_v2_1_set_clockgating(struct amdgpu_device *adev,
if (amdgpu_sriov_vf(adev))
return 0;
switch (adev->ip_versions[ATHUB_HWIP][0]) {
switch (amdgpu_ip_version(adev, ATHUB_HWIP, 0)) {
case IP_VERSION(2, 1, 0):
case IP_VERSION(2, 1, 1):
case IP_VERSION(2, 1, 2):

@ -36,7 +36,7 @@ static uint32_t athub_v3_0_get_cg_cntl(struct amdgpu_device *adev)
{
uint32_t data;
switch (adev->ip_versions[ATHUB_HWIP][0]) {
switch (amdgpu_ip_version(adev, ATHUB_HWIP, 0)) {
case IP_VERSION(3, 0, 1):
data = RREG32_SOC15(ATHUB, 0, regATHUB_MISC_CNTL_V3_0_1);
break;
@ -49,7 +49,7 @@ static uint32_t athub_v3_0_get_cg_cntl(struct amdgpu_device *adev)
static void athub_v3_0_set_cg_cntl(struct amdgpu_device *adev, uint32_t data)
{
switch (adev->ip_versions[ATHUB_HWIP][0]) {
switch (amdgpu_ip_version(adev, ATHUB_HWIP, 0)) {
case IP_VERSION(3, 0, 1):
WREG32_SOC15(ATHUB, 0, regATHUB_MISC_CNTL_V3_0_1, data);
break;
@ -99,10 +99,11 @@ int athub_v3_0_set_clockgating(struct amdgpu_device *adev,
if (amdgpu_sriov_vf(adev))
return 0;
switch (adev->ip_versions[ATHUB_HWIP][0]) {
switch (amdgpu_ip_version(adev, ATHUB_HWIP, 0)) {
case IP_VERSION(3, 0, 0):
case IP_VERSION(3, 0, 1):
case IP_VERSION(3, 0, 2):
case IP_VERSION(3, 3, 0):
athub_v3_0_update_medium_grain_clock_gating(adev,
state == AMD_CG_STATE_GATE);
athub_v3_0_update_medium_grain_light_sleep(adev,

@ -1444,10 +1444,27 @@ static void atom_get_vbios_pn(struct atom_context *ctx)
static void atom_get_vbios_version(struct atom_context *ctx)
{
unsigned short start = 3, end;
unsigned char *vbios_ver;
unsigned char *p_rom;
p_rom = ctx->bios;
/* Search from strings offset if it's present */
start = *(unsigned short *)(p_rom +
OFFSET_TO_GET_ATOMBIOS_STRING_START);
/* Search till atom rom header start point */
end = *(unsigned short *)(p_rom + OFFSET_TO_ATOM_ROM_HEADER_POINTER);
/* Use hardcoded offsets, if the offsets are not populated */
if (end <= start) {
start = 3;
end = 1024;
}
/* find anchor ATOMBIOSBK-AMD */
vbios_ver = atom_find_str_in_rom(ctx, BIOS_VERSION_PREFIX, 3, 1024, 64);
vbios_ver =
atom_find_str_in_rom(ctx, BIOS_VERSION_PREFIX, start, end, 64);
if (vbios_ver != NULL) {
/* skip ATOMBIOSBK-AMD VER */
vbios_ver += 18;

@ -228,7 +228,6 @@ error:
register_acpi_backlight:
/* Try registering an ACPI video backlight device instead. */
acpi_video_register_backlight();
return;
}
void

@ -925,9 +925,14 @@ static void cik_enable_sdma_mgls(struct amdgpu_device *adev,
static int cik_sdma_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int r;
adev->sdma.num_instances = SDMA_MAX_INSTANCE;
r = cik_sdma_init_microcode(adev);
if (r)
return r;
cik_sdma_set_ring_funcs(adev);
cik_sdma_set_irq_funcs(adev);
cik_sdma_set_buffer_funcs(adev);
@ -942,12 +947,6 @@ static int cik_sdma_sw_init(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int r, i;
r = cik_sdma_init_microcode(adev);
if (r) {
DRM_ERROR("Failed to load sdma firmware!\n");
return r;
}
/* SDMA trap event */
r = amdgpu_irq_add_id(adev, AMDGPU_IRQ_CLIENTID_LEGACY, 224,
&adev->sdma.trap_irq);

@ -1036,7 +1036,7 @@ static void dce_v10_0_program_watermarks(struct amdgpu_device *adev,
(u32)mode->clock);
line_time = (u32) div_u64((u64)mode->crtc_htotal * 1000000,
(u32)mode->clock);
line_time = min(line_time, (u32)65535);
line_time = min_t(u32, line_time, 65535);
/* watermark for high clocks */
if (adev->pm.dpm_enabled) {
@ -1066,7 +1066,7 @@ static void dce_v10_0_program_watermarks(struct amdgpu_device *adev,
wm_high.num_heads = num_heads;
/* set for high clocks */
latency_watermark_a = min(dce_v10_0_latency_watermark(&wm_high), (u32)65535);
latency_watermark_a = min_t(u32, dce_v10_0_latency_watermark(&wm_high), 65535);
/* possibly force display priority to high */
/* should really do this at mode validation time... */
@ -1105,7 +1105,7 @@ static void dce_v10_0_program_watermarks(struct amdgpu_device *adev,
wm_low.num_heads = num_heads;
/* set for low clocks */
latency_watermark_b = min(dce_v10_0_latency_watermark(&wm_low), (u32)65535);
latency_watermark_b = min_t(u32, dce_v10_0_latency_watermark(&wm_low), 65535);
/* possibly force display priority to high */
/* should really do this at mode validation time... */

@ -1068,7 +1068,7 @@ static void dce_v11_0_program_watermarks(struct amdgpu_device *adev,
(u32)mode->clock);
line_time = (u32) div_u64((u64)mode->crtc_htotal * 1000000,
(u32)mode->clock);
line_time = min(line_time, (u32)65535);
line_time = min_t(u32, line_time, 65535);
/* watermark for high clocks */
if (adev->pm.dpm_enabled) {
@ -1098,7 +1098,7 @@ static void dce_v11_0_program_watermarks(struct amdgpu_device *adev,
wm_high.num_heads = num_heads;
/* set for high clocks */
latency_watermark_a = min(dce_v11_0_latency_watermark(&wm_high), (u32)65535);
latency_watermark_a = min_t(u32, dce_v11_0_latency_watermark(&wm_high), 65535);
/* possibly force display priority to high */
/* should really do this at mode validation time... */
@ -1137,7 +1137,7 @@ static void dce_v11_0_program_watermarks(struct amdgpu_device *adev,
wm_low.num_heads = num_heads;
/* set for low clocks */
latency_watermark_b = min(dce_v11_0_latency_watermark(&wm_low), (u32)65535);
latency_watermark_b = min_t(u32, dce_v11_0_latency_watermark(&wm_low), 65535);
/* possibly force display priority to high */
/* should really do this at mode validation time... */

@ -845,7 +845,7 @@ static void dce_v6_0_program_watermarks(struct amdgpu_device *adev,
(u32)mode->clock);
line_time = (u32) div_u64((u64)mode->crtc_htotal * 1000000,
(u32)mode->clock);
line_time = min(line_time, (u32)65535);
line_time = min_t(u32, line_time, 65535);
priority_a_cnt = 0;
priority_b_cnt = 0;
@ -906,9 +906,9 @@ static void dce_v6_0_program_watermarks(struct amdgpu_device *adev,
wm_low.num_heads = num_heads;
/* set for high clocks */
latency_watermark_a = min(dce_v6_0_latency_watermark(&wm_high), (u32)65535);
latency_watermark_a = min_t(u32, dce_v6_0_latency_watermark(&wm_high), 65535);
/* set for low clocks */
latency_watermark_b = min(dce_v6_0_latency_watermark(&wm_low), (u32)65535);
latency_watermark_b = min_t(u32, dce_v6_0_latency_watermark(&wm_low), 65535);
/* possibly force display priority to high */
/* should really do this at mode validation time... */

@ -975,7 +975,7 @@ static void dce_v8_0_program_watermarks(struct amdgpu_device *adev,
(u32)mode->clock);
line_time = (u32) div_u64((u64)mode->crtc_htotal * 1000000,
(u32)mode->clock);
line_time = min(line_time, (u32)65535);
line_time = min_t(u32, line_time, 65535);
/* watermark for high clocks */
if (adev->pm.dpm_enabled) {
@ -1005,7 +1005,7 @@ static void dce_v8_0_program_watermarks(struct amdgpu_device *adev,
wm_high.num_heads = num_heads;
/* set for high clocks */
latency_watermark_a = min(dce_v8_0_latency_watermark(&wm_high), (u32)65535);
latency_watermark_a = min_t(u32, dce_v8_0_latency_watermark(&wm_high), 65535);
/* possibly force display priority to high */
/* should really do this at mode validation time... */
@ -1044,7 +1044,7 @@ static void dce_v8_0_program_watermarks(struct amdgpu_device *adev,
wm_low.num_heads = num_heads;
/* set for low clocks */
latency_watermark_b = min(dce_v8_0_latency_watermark(&wm_low), (u32)65535);
latency_watermark_b = min_t(u32, dce_v8_0_latency_watermark(&wm_low), 65535);
/* possibly force display priority to high */
/* should really do this at mode validation time... */

@ -102,6 +102,11 @@
#define mmGCR_GENERAL_CNTL_Sienna_Cichlid 0x1580
#define mmGCR_GENERAL_CNTL_Sienna_Cichlid_BASE_IDX 0
#define mmGOLDEN_TSC_COUNT_UPPER_Cyan_Skillfish 0x0105
#define mmGOLDEN_TSC_COUNT_UPPER_Cyan_Skillfish_BASE_IDX 1
#define mmGOLDEN_TSC_COUNT_LOWER_Cyan_Skillfish 0x0106
#define mmGOLDEN_TSC_COUNT_LOWER_Cyan_Skillfish_BASE_IDX 1
#define mmGOLDEN_TSC_COUNT_UPPER_Vangogh 0x0025
#define mmGOLDEN_TSC_COUNT_UPPER_Vangogh_BASE_IDX 1
#define mmGOLDEN_TSC_COUNT_LOWER_Vangogh 0x0026
@ -3627,7 +3632,7 @@ static void gfx_v10_0_set_kiq_pm4_funcs(struct amdgpu_device *adev)
static void gfx_v10_0_init_spm_golden_registers(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
soc15_program_register_sequence(adev,
golden_settings_gc_rlc_spm_10_0_nv10,
@ -3650,7 +3655,7 @@ static void gfx_v10_0_init_spm_golden_registers(struct amdgpu_device *adev)
static void gfx_v10_0_init_golden_registers(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
soc15_program_register_sequence(adev,
golden_settings_gc_10_1,
@ -3891,7 +3896,7 @@ static void gfx_v10_0_check_fw_write_wait(struct amdgpu_device *adev)
{
adev->gfx.cp_fw_write_wait = false;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
case IP_VERSION(10, 1, 2):
case IP_VERSION(10, 1, 1):
@ -3942,7 +3947,7 @@ static bool gfx_v10_0_navi10_gfxoff_should_enable(struct amdgpu_device *adev)
static void gfx_v10_0_check_gfxoff_flag(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
if (!gfx_v10_0_navi10_gfxoff_should_enable(adev))
adev->pm.pp_feature &= ~PP_GFXOFF_MASK;
@ -3964,8 +3969,8 @@ static int gfx_v10_0_init_microcode(struct amdgpu_device *adev)
DRM_DEBUG("\n");
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 1) &&
(!(adev->pdev->device == 0x7340 && adev->pdev->revision != 0x00)))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(10, 1, 1) &&
(!(adev->pdev->device == 0x7340 && adev->pdev->revision != 0x00)))
wks = "_wks";
amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
@ -4144,7 +4149,7 @@ static void gfx_v10_0_init_rlcg_reg_access_ctrl(struct amdgpu_device *adev)
reg_access_ctrl->scratch_reg3 = SOC15_REG_OFFSET(GC, 0, mmSCRATCH_REG3);
reg_access_ctrl->grbm_cntl = SOC15_REG_OFFSET(GC, 0, mmGRBM_GFX_CNTL);
reg_access_ctrl->grbm_idx = SOC15_REG_OFFSET(GC, 0, mmGRBM_GFX_INDEX);
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
reg_access_ctrl->spare_int =
SOC15_REG_OFFSET(GC, 0, mmRLC_SPARE_INT_0_Sienna_Cichlid);
@ -4358,7 +4363,7 @@ static void gfx_v10_0_gpu_early_init(struct amdgpu_device *adev)
{
u32 gb_addr_config;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
case IP_VERSION(10, 1, 1):
case IP_VERSION(10, 1, 2):
@ -4491,7 +4496,7 @@ static int gfx_v10_0_sw_init(void *handle)
struct amdgpu_kiq *kiq;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
case IP_VERSION(10, 1, 1):
case IP_VERSION(10, 1, 2):
@ -4749,9 +4754,12 @@ static void gfx_v10_0_setup_rb(struct amdgpu_device *adev)
for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
bitmap = i * adev->gfx.config.max_sh_per_se + j;
if (((adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 0)) ||
(adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 3)) ||
(adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 6))) &&
if (((amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 3, 0)) ||
(amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 3, 3)) ||
(amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 3, 6))) &&
((gfx_v10_3_get_disabled_sa(adev) >> bitmap) & 1))
continue;
gfx_v10_0_select_se_sh(adev, i, j, 0xffffffff, 0);
@ -4779,7 +4787,7 @@ static u32 gfx_v10_0_init_pa_sc_tile_steering_override(struct amdgpu_device *ade
/* for ASICs that integrate GFX v10.3
* pa_sc_tile_steering_override should be set to 0
*/
if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(10, 3, 0))
return 0;
/* init num_sc */
@ -4960,7 +4968,7 @@ static void gfx_v10_0_get_tcc_info(struct amdgpu_device *adev)
/* TCCs are global (not instanced). */
uint32_t tcc_disable;
if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(10, 3, 0)) {
tcc_disable = RREG32_SOC15(GC, 0, mmCGTS_TCC_DISABLE_gc_10_3) |
RREG32_SOC15(GC, 0, mmCGTS_USER_TCC_DISABLE_gc_10_3);
} else {
@ -5037,7 +5045,7 @@ static int gfx_v10_0_init_csb(struct amdgpu_device *adev)
adev->gfx.rlc.funcs->get_csb_buffer(adev, adev->gfx.rlc.cs_ptr);
/* csib */
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 2)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(10, 1, 2)) {
WREG32_SOC15_RLC(GC, 0, mmRLC_CSIB_ADDR_HI,
adev->gfx.rlc.clear_state_gpu_addr >> 32);
WREG32_SOC15_RLC(GC, 0, mmRLC_CSIB_ADDR_LO,
@ -5666,7 +5674,7 @@ static int gfx_v10_0_cp_gfx_enable(struct amdgpu_device *adev, bool enable)
tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_HALT, enable ? 0 : 1);
tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, CE_HALT, enable ? 0 : 1);
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 2))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(10, 1, 2))
WREG32_SOC15_RLC(GC, 0, mmCP_ME_CNTL, tmp);
else
WREG32_SOC15(GC, 0, mmCP_ME_CNTL, tmp);
@ -6057,7 +6065,7 @@ static void gfx_v10_0_cp_gfx_set_doorbell(struct amdgpu_device *adev,
}
WREG32_SOC15(GC, 0, mmCP_RB_DOORBELL_CONTROL, tmp);
}
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
case IP_VERSION(10, 3, 2):
case IP_VERSION(10, 3, 1):
@ -6190,7 +6198,7 @@ static int gfx_v10_0_cp_gfx_resume(struct amdgpu_device *adev)
static void gfx_v10_0_cp_compute_enable(struct amdgpu_device *adev, bool enable)
{
if (enable) {
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
case IP_VERSION(10, 3, 2):
case IP_VERSION(10, 3, 1):
@ -6206,7 +6214,7 @@ static void gfx_v10_0_cp_compute_enable(struct amdgpu_device *adev, bool enable)
break;
}
} else {
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
case IP_VERSION(10, 3, 2):
case IP_VERSION(10, 3, 1):
@ -6306,7 +6314,7 @@ static void gfx_v10_0_kiq_setting(struct amdgpu_ring *ring)
struct amdgpu_device *adev = ring->adev;
/* tell RLC which is KIQ queue */
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
case IP_VERSION(10, 3, 2):
case IP_VERSION(10, 3, 1):
@ -6917,7 +6925,7 @@ static bool gfx_v10_0_check_grbm_cam_remapping(struct amdgpu_device *adev)
* check if mmVGT_ESGS_RING_SIZE_UMD
* has been remapped to mmVGT_ESGS_RING_SIZE
*/
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
case IP_VERSION(10, 3, 2):
case IP_VERSION(10, 3, 4):
@ -6966,7 +6974,7 @@ static void gfx_v10_0_setup_grbm_cam_remapping(struct amdgpu_device *adev)
*/
WREG32_SOC15(GC, 0, mmGRBM_CAM_INDEX, 0);
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
case IP_VERSION(10, 3, 2):
case IP_VERSION(10, 3, 1):
@ -7139,19 +7147,19 @@ static int gfx_v10_0_hw_init(void *handle)
* init golden registers and rlc resume may override some registers,
* reconfig them here
*/
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 10) ||
adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 1) ||
adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 2))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(10, 1, 10) ||
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(10, 1, 1) ||
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(10, 1, 2))
gfx_v10_0_tcp_harvest(adev);
r = gfx_v10_0_cp_resume(adev);
if (r)
return r;
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(10, 3, 0))
gfx_v10_3_program_pbb_mode(adev);
if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(10, 3, 0))
gfx_v10_3_set_power_brake_sequence(adev);
return r;
@ -7255,7 +7263,7 @@ static int gfx_v10_0_soft_reset(void *handle)
/* GRBM_STATUS2 */
tmp = RREG32_SOC15(GC, 0, mmGRBM_STATUS2);
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
case IP_VERSION(10, 3, 2):
case IP_VERSION(10, 3, 1):
@ -7312,7 +7320,23 @@ static uint64_t gfx_v10_0_get_gpu_clock_counter(struct amdgpu_device *adev)
{
uint64_t clock, clock_lo, clock_hi, hi_check;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 3):
case IP_VERSION(10, 1, 4):
preempt_disable();
clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Cyan_Skillfish);
clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Cyan_Skillfish);
hi_check = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Cyan_Skillfish);
/* The SMUIO TSC clock frequency is 100MHz, so the 32-bit counter carries
* over roughly every 42 seconds.
*/
if (hi_check != clock_hi) {
clock_lo = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_LOWER_Cyan_Skillfish);
clock_hi = hi_check;
}
preempt_enable();
clock = clock_lo | (clock_hi << 32ULL);
break;
case IP_VERSION(10, 3, 1):
case IP_VERSION(10, 3, 3):
case IP_VERSION(10, 3, 7):
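
The Cyan Skillfish path added above uses the usual high/low/high read sequence so that a 32-bit carry between the two register reads cannot produce a torn 64-bit value. A generic sketch of that technique; read_hi()/read_lo() are hypothetical accessors, not amdgpu functions.

#include <stdint.h>

/* Illustrative sketch: tear-free read of a 64-bit counter exposed as two
 * 32-bit registers, mirroring the hi/lo/hi pattern from the hunk above.
 */
static uint64_t read_counter64(uint32_t (*read_hi)(void),
			       uint32_t (*read_lo)(void))
{
	uint32_t hi = read_hi();
	uint32_t lo = read_lo();
	uint32_t hi_check = read_hi();

	/* if the high word changed, a carry landed between the reads;
	 * re-reading the low word pairs it with the newer high word
	 */
	if (hi_check != hi) {
		lo = read_lo();
		hi = hi_check;
	}
	return ((uint64_t)hi << 32) | lo;
}
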
@ -7399,7 +7423,7 @@ static int gfx_v10_0_early_init(void *handle)
adev->gfx.funcs = &gfx_v10_0_gfx_funcs;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
case IP_VERSION(10, 1, 1):
case IP_VERSION(10, 1, 2):
@ -7470,7 +7494,7 @@ static void gfx_v10_0_set_safe_mode(struct amdgpu_device *adev, int xcc_id)
data = RLC_SAFE_MODE__CMD_MASK;
data |= (1 << RLC_SAFE_MODE__MESSAGE__SHIFT);
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
case IP_VERSION(10, 3, 2):
case IP_VERSION(10, 3, 1):
@ -7508,7 +7532,7 @@ static void gfx_v10_0_unset_safe_mode(struct amdgpu_device *adev, int xcc_id)
uint32_t data;
data = RLC_SAFE_MODE__CMD_MASK;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 0):
case IP_VERSION(10, 3, 2):
case IP_VERSION(10, 3, 1):
@ -7819,7 +7843,7 @@ static void gfx_v10_0_apply_medium_grain_clock_gating_workaround(struct amdgpu_d
mmCGTS_SA1_QUAD1_SM_CTRL_REG
};
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 2)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(10, 1, 2)) {
for (i = 0; i < ARRAY_SIZE(tcp_ctrl_regs_nv12); i++) {
reg_idx = adev->reg_offset[GC_HWIP][0][mmCGTS_SA0_WGP00_CU0_TCP_CTRL_REG_BASE_IDX] +
tcp_ctrl_regs_nv12[i];
@ -7864,9 +7888,12 @@ static int gfx_v10_0_update_gfx_clock_gating(struct amdgpu_device *adev,
/* === CGCG + CGLS === */
gfx_v10_0_update_coarse_grain_clock_gating(adev, enable);
if ((adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 10)) ||
(adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 1)) ||
(adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 1, 2)))
if ((amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 1, 10)) ||
(amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 1, 1)) ||
(amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 1, 2)))
gfx_v10_0_apply_medium_grain_clock_gating_workaround(adev);
} else {
/* CGCG/CGLS should be disabled before MGCG/MGLS
@ -7897,22 +7924,15 @@ static int gfx_v10_0_update_gfx_clock_gating(struct amdgpu_device *adev,
static void gfx_v10_0_update_spm_vmid_internal(struct amdgpu_device *adev,
unsigned int vmid)
{
u32 reg, data;
u32 data;
/* not for *_SOC15 */
reg = SOC15_REG_OFFSET(GC, 0, mmRLC_SPM_MC_CNTL);
if (amdgpu_sriov_is_pp_one_vf(adev))
data = RREG32_NO_KIQ(reg);
else
data = RREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL);
data = RREG32_SOC15_NO_KIQ(GC, 0, mmRLC_SPM_MC_CNTL);
data &= ~RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK;
data |= (vmid & RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK) << RLC_SPM_MC_CNTL__RLC_SPM_VMID__SHIFT;
if (amdgpu_sriov_is_pp_one_vf(adev))
WREG32_SOC15_NO_KIQ(GC, 0, mmRLC_SPM_MC_CNTL, data);
else
WREG32_SOC15(GC, 0, mmRLC_SPM_MC_CNTL, data);
WREG32_SOC15_NO_KIQ(GC, 0, mmRLC_SPM_MC_CNTL, data);
}
static void gfx_v10_0_update_spm_vmid(struct amdgpu_device *adev, unsigned int vmid)
@ -7973,7 +7993,7 @@ static void gfx_v10_cntl_power_gating(struct amdgpu_device *adev, bool enable)
* Power/performance team will optimize it and might give a new value later.
*/
if (enable && (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG)) {
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 1):
case IP_VERSION(10, 3, 3):
case IP_VERSION(10, 3, 6):
@ -8034,7 +8054,7 @@ static int gfx_v10_0_set_powergating_state(void *handle,
if (amdgpu_sriov_vf(adev))
return 0;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
case IP_VERSION(10, 1, 1):
case IP_VERSION(10, 1, 2):
@ -8071,7 +8091,7 @@ static int gfx_v10_0_set_clockgating_state(void *handle,
if (amdgpu_sriov_vf(adev))
return 0;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
case IP_VERSION(10, 1, 1):
case IP_VERSION(10, 1, 2):
@ -9318,7 +9338,7 @@ static void gfx_v10_0_set_irq_funcs(struct amdgpu_device *adev)
static void gfx_v10_0_set_rlc_funcs(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 1, 10):
case IP_VERSION(10, 1, 1):
case IP_VERSION(10, 1, 3):
@ -9435,10 +9455,14 @@ static int gfx_v10_0_get_cu_info(struct amdgpu_device *adev,
for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
bitmap = i * adev->gfx.config.max_sh_per_se + j;
if (((adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 0)) ||
(adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 3)) ||
(adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 6)) ||
(adev->ip_versions[GC_HWIP][0] == IP_VERSION(10, 3, 7))) &&
if (((amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 3, 0)) ||
(amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 3, 3)) ||
(amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 3, 6)) ||
(amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(10, 3, 7))) &&
((gfx_v10_3_get_disabled_sa(adev) >> bitmap) & 1))
continue;
mask = 1;

@ -60,6 +60,8 @@
#define regCGTT_WD_CLK_CTRL_BASE_IDX 1
#define regRLC_RLCS_BOOTLOAD_STATUS_gc_11_0_1 0x4e7e
#define regRLC_RLCS_BOOTLOAD_STATUS_gc_11_0_1_BASE_IDX 1
#define regPC_CONFIG_CNTL_1 0x194d
#define regPC_CONFIG_CNTL_1_BASE_IDX 1
MODULE_FIRMWARE("amdgpu/gc_11_0_0_pfp.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_me.bin");
@ -82,6 +84,10 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_4_pfp.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_4_me.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_4_mec.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_4_rlc.bin");
MODULE_FIRMWARE("amdgpu/gc_11_5_0_pfp.bin");
MODULE_FIRMWARE("amdgpu/gc_11_5_0_me.bin");
MODULE_FIRMWARE("amdgpu/gc_11_5_0_mec.bin");
MODULE_FIRMWARE("amdgpu/gc_11_5_0_rlc.bin");
static const struct soc15_reg_golden golden_settings_gc_11_0_1[] =
{
@ -96,6 +102,23 @@ static const struct soc15_reg_golden golden_settings_gc_11_0_1[] =
SOC15_REG_GOLDEN_VALUE(GC, 0, regTCP_CNTL2, 0xfcffffff, 0x0000000a)
};
static const struct soc15_reg_golden golden_settings_gc_11_5_0[] = {
SOC15_REG_GOLDEN_VALUE(GC, 0, regDB_DEBUG5, 0xffffffff, 0x00000800),
SOC15_REG_GOLDEN_VALUE(GC, 0, regGB_ADDR_CONFIG, 0x0c1807ff, 0x00000242),
SOC15_REG_GOLDEN_VALUE(GC, 0, regGCR_GENERAL_CNTL, 0x1ff1ffff, 0x00000500),
SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2A_ADDR_MATCH_MASK, 0xffffffff, 0xfffffff3),
SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_ADDR_MATCH_MASK, 0xffffffff, 0xfffffff3),
SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_CTRL, 0xffffffff, 0xf37fff3f),
SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_CTRL3, 0xfffffffb, 0x00f40188),
SOC15_REG_GOLDEN_VALUE(GC, 0, regGL2C_CTRL4, 0xf0ffffff, 0x8000b007),
SOC15_REG_GOLDEN_VALUE(GC, 0, regPA_CL_ENHANCE, 0xf1ffffff, 0x00880007),
SOC15_REG_GOLDEN_VALUE(GC, 0, regPC_CONFIG_CNTL_1, 0xffffffff, 0x00010000),
SOC15_REG_GOLDEN_VALUE(GC, 0, regTA_CNTL_AUX, 0xf7f7ffff, 0x01030000),
SOC15_REG_GOLDEN_VALUE(GC, 0, regTA_CNTL2, 0x007f0000, 0x00000000),
SOC15_REG_GOLDEN_VALUE(GC, 0, regTCP_CNTL2, 0xffcfffff, 0x0000200a),
SOC15_REG_GOLDEN_VALUE(GC, 0, regUTCL1_CTRL_2, 0xffffffff, 0x0000048f)
};
#define DEFAULT_SH_MEM_CONFIG \
((SH_MEM_ADDRESS_MODE_64 << SH_MEM_CONFIG__ADDRESS_MODE__SHIFT) | \
(SH_MEM_ALIGNMENT_MODE_UNALIGNED << SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) | \
@ -265,13 +288,18 @@ static void gfx_v11_0_set_kiq_pm4_funcs(struct amdgpu_device *adev)
static void gfx_v11_0_init_golden_registers(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 4):
soc15_program_register_sequence(adev,
golden_settings_gc_11_0_1,
(const u32)ARRAY_SIZE(golden_settings_gc_11_0_1));
break;
case IP_VERSION(11, 5, 0):
soc15_program_register_sequence(adev,
golden_settings_gc_11_5_0,
(const u32)ARRAY_SIZE(golden_settings_gc_11_5_0));
break;
default:
break;
}
@ -465,7 +493,7 @@ out:
static void gfx_v11_0_check_fw_cp_gfx_shadow(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(11, 0, 0):
case IP_VERSION(11, 0, 2):
case IP_VERSION(11, 0, 3):
@ -856,8 +884,7 @@ static const struct amdgpu_gfx_funcs gfx_v11_0_gfx_funcs = {
static int gfx_v11_0_gpu_early_init(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(11, 0, 0):
case IP_VERSION(11, 0, 2):
adev->gfx.config.max_hw_contexts = 8;
@ -876,6 +903,7 @@ static int gfx_v11_0_gpu_early_init(struct amdgpu_device *adev)
break;
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
adev->gfx.config.max_hw_contexts = 8;
adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
@ -1301,9 +1329,7 @@ static int gfx_v11_0_sw_init(void *handle)
struct amdgpu_kiq *kiq;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
adev->gfxhub.funcs->init(adev);
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(11, 0, 0):
case IP_VERSION(11, 0, 2):
case IP_VERSION(11, 0, 3):
@ -1316,6 +1342,7 @@ static int gfx_v11_0_sw_init(void *handle)
break;
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
adev->gfx.me.num_me = 1;
adev->gfx.me.num_pipe_per_me = 1;
adev->gfx.me.num_queue_per_pipe = 1;
@ -1334,8 +1361,8 @@ static int gfx_v11_0_sw_init(void *handle)
}
/* Enable CG flag in one VF mode for enabling RLC safe mode enter/exit */
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(11, 0, 3) &&
amdgpu_sriov_is_pp_one_vf(adev))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 0, 3) &&
amdgpu_sriov_is_pp_one_vf(adev))
adev->cg_flags = AMD_CG_SUPPORT_GFX_CGCG;
/* EOP Event */
@ -2562,8 +2589,11 @@ static int gfx_v11_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev)
for (i = 0; i < adev->usec_timeout; i++) {
cp_status = RREG32_SOC15(GC, 0, regCP_STAT);
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(11, 0, 1) ||
adev->ip_versions[GC_HWIP][0] == IP_VERSION(11, 0, 4))
if (amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(11, 0, 1) ||
amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(11, 0, 4) ||
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 5, 0))
bootload_status = RREG32_SOC15(GC, 0,
regRLC_RLCS_BOOTLOAD_STATUS_gc_11_0_1);
else
@ -4953,23 +4983,16 @@ static int gfx_v11_0_update_gfx_clock_gating(struct amdgpu_device *adev,
static void gfx_v11_0_update_spm_vmid(struct amdgpu_device *adev, unsigned vmid)
{
u32 reg, data;
u32 data;
amdgpu_gfx_off_ctrl(adev, false);
reg = SOC15_REG_OFFSET(GC, 0, regRLC_SPM_MC_CNTL);
if (amdgpu_sriov_is_pp_one_vf(adev))
data = RREG32_NO_KIQ(reg);
else
data = RREG32(reg);
data = RREG32_SOC15_NO_KIQ(GC, 0, regRLC_SPM_MC_CNTL);
data &= ~RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK;
data |= (vmid & RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK) << RLC_SPM_MC_CNTL__RLC_SPM_VMID__SHIFT;
if (amdgpu_sriov_is_pp_one_vf(adev))
WREG32_SOC15_NO_KIQ(GC, 0, regRLC_SPM_MC_CNTL, data);
else
WREG32_SOC15(GC, 0, regRLC_SPM_MC_CNTL, data);
WREG32_SOC15_NO_KIQ(GC, 0, regRLC_SPM_MC_CNTL, data);
amdgpu_gfx_off_ctrl(adev, true);
}
@ -5001,9 +5024,10 @@ static void gfx_v11_cntl_power_gating(struct amdgpu_device *adev, bool enable)
// Program RLC_PG_DELAY3 for CGPG hysteresis
if (enable && (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG)) {
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
WREG32_SOC15(GC, 0, regRLC_PG_DELAY_3, RLC_PG_DELAY_3_DEFAULT_GC_11_0_1);
break;
default:
@ -5030,7 +5054,7 @@ static int gfx_v11_0_set_powergating_state(void *handle,
if (amdgpu_sriov_vf(adev))
return 0;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(11, 0, 0):
case IP_VERSION(11, 0, 2):
case IP_VERSION(11, 0, 3):
@ -5038,6 +5062,7 @@ static int gfx_v11_0_set_powergating_state(void *handle,
break;
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
if (!enable)
amdgpu_gfx_off_ctrl(adev, false);
@ -5062,12 +5087,13 @@ static int gfx_v11_0_set_clockgating_state(void *handle,
if (amdgpu_sriov_vf(adev))
return 0;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(11, 0, 0):
case IP_VERSION(11, 0, 1):
case IP_VERSION(11, 0, 2):
case IP_VERSION(11, 0, 3):
case IP_VERSION(11, 0, 4):
case IP_VERSION(11, 5, 0):
gfx_v11_0_update_gfx_clock_gating(adev,
state == AMD_CG_STATE_GATE);
break;

@ -895,7 +895,7 @@ static void gfx_v9_0_set_kiq_pm4_funcs(struct amdgpu_device *adev)
static void gfx_v9_0_init_golden_registers(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
soc15_program_register_sequence(adev,
golden_settings_gc_9_0,
@ -951,8 +951,8 @@ static void gfx_v9_0_init_golden_registers(struct amdgpu_device *adev)
break;
}
if ((adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 4, 1)) &&
(adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 4, 2)))
if ((amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 1)) &&
(amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 2)))
soc15_program_register_sequence(adev, golden_settings_gc_9_x_common,
(const u32)ARRAY_SIZE(golden_settings_gc_9_x_common));
}
@ -1095,14 +1095,14 @@ static void gfx_v9_0_check_fw_write_wait(struct amdgpu_device *adev)
adev->gfx.me_fw_write_wait = false;
adev->gfx.mec_fw_write_wait = false;
if ((adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 4, 1)) &&
if ((amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 1)) &&
((adev->gfx.mec_fw_version < 0x000001a5) ||
(adev->gfx.mec_feature_version < 46) ||
(adev->gfx.pfp_fw_version < 0x000000b7) ||
(adev->gfx.pfp_feature_version < 46)))
(adev->gfx.mec_feature_version < 46) ||
(adev->gfx.pfp_fw_version < 0x000000b7) ||
(adev->gfx.pfp_feature_version < 46)))
DRM_WARN_ONCE("CP firmware version too old, please update!");
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
if ((adev->gfx.me_fw_version >= 0x0000009c) &&
(adev->gfx.me_feature_version >= 42) &&
@ -1202,7 +1202,7 @@ static bool is_raven_kicker(struct amdgpu_device *adev)
static bool check_if_enlarge_doorbell_range(struct amdgpu_device *adev)
{
if ((adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 3, 0)) &&
if ((amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 3, 0)) &&
(adev->gfx.me_fw_version >= 0x000000a5) &&
(adev->gfx.me_feature_version >= 52))
return true;
@ -1215,7 +1215,7 @@ static void gfx_v9_0_check_if_need_gfxoff(struct amdgpu_device *adev)
if (gfx_v9_0_should_disable_gfxoff(adev->pdev))
adev->pm.pp_feature &= ~PP_GFXOFF_MASK;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 2, 1):
case IP_VERSION(9, 4, 0):
@ -1326,9 +1326,9 @@ out:
static bool gfx_v9_0_load_mec2_fw_bin_support(struct amdgpu_device *adev)
{
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 2) ||
adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 1) ||
adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 3, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 2) ||
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 1) ||
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 3, 0))
return false;
return true;
@ -1485,7 +1485,7 @@ static void gfx_v9_0_init_always_on_cu_mask(struct amdgpu_device *adev)
if (adev->flags & AMD_IS_APU)
always_on_cu_num = 4;
else if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 2, 1))
else if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 2, 1))
always_on_cu_num = 8;
else
always_on_cu_num = 12;
@ -1836,7 +1836,7 @@ static int gfx_v9_0_gpu_early_init(struct amdgpu_device *adev)
u32 gb_addr_config;
int err;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
adev->gfx.config.max_hw_contexts = 8;
adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
@ -2002,7 +2002,7 @@ static int gfx_v9_0_sw_init(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
unsigned int hw_prio;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 2, 1):
case IP_VERSION(9, 4, 0):
@ -2363,7 +2363,7 @@ static void gfx_v9_0_init_sq_config(struct amdgpu_device *adev)
{
uint32_t tmp;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 4, 1):
tmp = RREG32_SOC15(GC, 0, mmSQ_CONFIG);
tmp = REG_SET_FIELD(tmp, SQ_CONFIG, DISABLE_BARRIER_WAITCNT,
@ -2700,7 +2700,7 @@ static void gfx_v9_0_init_gfx_power_gating(struct amdgpu_device *adev)
/* program GRBM_REG_SAVE_GFX_IDLE_THRESHOLD to 0x55f0 */
data |= (0x55f0 << RLC_AUTO_PG_CTRL__GRBM_REG_SAVE_GFX_IDLE_THRESHOLD__SHIFT);
WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_AUTO_PG_CTRL), data);
if (adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 3, 0))
if (amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 3, 0))
pwr_10_0_gfxip_control_over_cgpg(adev, true);
}
}
@ -2812,7 +2812,8 @@ static void gfx_v9_0_init_pg(struct amdgpu_device *adev)
* And it's needed by gfxoff feature.
*/
if (adev->gfx.rlc.is_rlc_v2_1) {
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 2, 1) ||
if (amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(9, 2, 1) ||
(adev->apu_flags & AMD_APU_IS_RAVEN2))
gfx_v9_1_init_rlc_save_restore_list(adev);
gfx_v9_0_enable_save_restore_machine(adev);
@ -2925,7 +2926,7 @@ static int gfx_v9_0_rlc_resume(struct amdgpu_device *adev)
return r;
}
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 2, 2):
case IP_VERSION(9, 1, 0):
gfx_v9_0_init_lbpw(adev);
@ -3713,8 +3714,8 @@ static void gfx_v9_0_init_tcp_config(struct amdgpu_device *adev)
{
u32 tmp;
if (adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 4, 1) &&
adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 4, 2))
if (amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 1) &&
amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 2))
return;
tmp = RREG32_SOC15(GC, 0, mmTCP_ADDR_CONFIG);
@ -3754,7 +3755,7 @@ static int gfx_v9_0_hw_init(void *handle)
if (r)
return r;
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 2))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 2))
gfx_v9_4_2_set_power_brake_sequence(adev);
return r;
@ -3802,7 +3803,7 @@ static int gfx_v9_0_hw_fini(void *handle)
/* Skip stopping RLC with A+A reset or when RLC controls GFX clock */
if ((adev->gmc.xgmi.connected_to_cpu && amdgpu_in_reset(adev)) ||
(adev->ip_versions[GC_HWIP][0] >= IP_VERSION(9, 4, 2))) {
(amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(9, 4, 2))) {
dev_dbg(adev->dev, "Skipping RLC halt\n");
return 0;
}
@ -3986,7 +3987,7 @@ static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev)
{
uint64_t clock, clock_lo, clock_hi, hi_check;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 3, 0):
preempt_disable();
clock_hi = RREG32_SOC15_NO_KIQ(SMUIO, 0, mmGOLDEN_TSC_COUNT_UPPER_Renoir);
@ -4005,7 +4006,9 @@ static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev)
default:
amdgpu_gfx_off_ctrl(adev, false);
mutex_lock(&adev->gfx.gpu_clock_mutex);
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 0, 1) && amdgpu_sriov_runtime(adev)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(9, 0, 1) &&
amdgpu_sriov_runtime(adev)) {
clock = gfx_v9_0_kiq_read_clock(adev);
} else {
WREG32_SOC15(GC, 0, mmRLC_CAPTURE_GPU_CLOCK_COUNT, 1);
@ -4357,7 +4360,7 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev)
if (!ring->sched.ready)
return 0;
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 1)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 1)) {
vgpr_init_shader_ptr = vgpr_init_compute_shader_arcturus;
vgpr_init_shader_size = sizeof(vgpr_init_compute_shader_arcturus);
vgpr_init_regs_ptr = vgpr_init_regs_arcturus;
@ -4509,8 +4512,8 @@ static int gfx_v9_0_early_init(void *handle)
adev->gfx.funcs = &gfx_v9_0_gfx_funcs;
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 1) ||
adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 2))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 1) ||
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 2))
adev->gfx.num_gfx_rings = 0;
else
adev->gfx.num_gfx_rings = GFX9_NUM_GFX_RINGS;
@ -4548,7 +4551,7 @@ static int gfx_v9_0_ecc_late_init(void *handle)
}
/* requires IBs so do in late init after IB pool is initialized */
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 2))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 2))
r = gfx_v9_4_2_do_edc_gpr_workarounds(adev);
else
r = gfx_v9_0_do_edc_gpr_workarounds(adev);
@ -4580,7 +4583,7 @@ static int gfx_v9_0_late_init(void *handle)
if (r)
return r;
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 2))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 2))
gfx_v9_4_2_debug_trap_config_init(adev,
adev->vm_manager.first_kfd_vmid, AMDGPU_NUM_VMID);
else
@ -4676,7 +4679,7 @@ static void gfx_v9_0_update_medium_grain_clock_gating(struct amdgpu_device *adev
/* 1 - RLC_CGTT_MGCG_OVERRIDE */
def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
if (adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 2, 1))
if (amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 2, 1))
data &= ~RLC_CGTT_MGCG_OVERRIDE__CPF_CGTT_SCLK_OVERRIDE_MASK;
data &= ~(RLC_CGTT_MGCG_OVERRIDE__GRBM_CGTT_SCLK_OVERRIDE_MASK |
@ -4710,7 +4713,7 @@ static void gfx_v9_0_update_medium_grain_clock_gating(struct amdgpu_device *adev
/* 1 - MGCG_OVERRIDE */
def = data = RREG32_SOC15(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE);
if (adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 2, 1))
if (amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 2, 1))
data |= RLC_CGTT_MGCG_OVERRIDE__CPF_CGTT_SCLK_OVERRIDE_MASK;
data |= (RLC_CGTT_MGCG_OVERRIDE__RLC_CGTT_SCLK_OVERRIDE_MASK |
@ -4816,7 +4819,7 @@ static void gfx_v9_0_update_coarse_grain_clock_gating(struct amdgpu_device *adev
/* enable cgcg FSM(0x0000363F) */
def = RREG32_SOC15(GC, 0, mmRLC_CGCG_CGLS_CTRL);
if (adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 1))
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 1))
data = (0x2000 << RLC_CGCG_CGLS_CTRL__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
else
@ -4951,7 +4954,7 @@ static int gfx_v9_0_set_powergating_state(void *handle,
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
bool enable = (state == AMD_PG_STATE_GATE);
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 2, 2):
case IP_VERSION(9, 1, 0):
case IP_VERSION(9, 3, 0):
@@ -4998,7 +5001,7 @@ static int gfx_v9_0_set_clockgating_state(void *handle,
if (amdgpu_sriov_vf(adev))
return 0;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 2, 1):
case IP_VERSION(9, 4, 0):
@@ -5048,7 +5051,7 @@ static void gfx_v9_0_get_clockgating_state(void *handle, u64 *flags)
if (data & CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK)
*flags |= AMD_CG_SUPPORT_GFX_CP_LS | AMD_CG_SUPPORT_GFX_MGLS;
if (adev->ip_versions[GC_HWIP][0] != IP_VERSION(9, 4, 1)) {
if (amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 1)) {
/* AMD_CG_SUPPORT_GFX_3D_CGCG */
data = RREG32_KIQ(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D));
if (data & RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK)
@@ -6454,7 +6457,7 @@ static int gfx_v9_0_ras_error_inject(struct amdgpu_device *adev,
return ret;
}
static const char *vml2_mems[] = {
static const char * const vml2_mems[] = {
"UTC_VML2_BANK_CACHE_0_BIGK_MEM0",
"UTC_VML2_BANK_CACHE_0_BIGK_MEM1",
"UTC_VML2_BANK_CACHE_0_4K_MEM0",
@@ -6473,7 +6476,7 @@ static const char *vml2_mems[] = {
"UTC_VML2_BANK_CACHE_3_4K_MEM1",
};
static const char *vml2_walker_mems[] = {
static const char * const vml2_walker_mems[] = {
"UTC_VML2_CACHE_PDE0_MEM0",
"UTC_VML2_CACHE_PDE0_MEM1",
"UTC_VML2_CACHE_PDE1_MEM0",
@@ -6483,7 +6486,7 @@ static const char *vml2_walker_mems[] = {
"UTC_VML2_RDIF_LOG_FIFO",
};
static const char *atc_l2_cache_2m_mems[] = {
static const char * const atc_l2_cache_2m_mems[] = {
"UTC_ATCL2_CACHE_2M_BANK0_WAY0_MEM",
"UTC_ATCL2_CACHE_2M_BANK0_WAY1_MEM",
"UTC_ATCL2_CACHE_2M_BANK1_WAY0_MEM",
@@ -7087,7 +7090,7 @@ static void gfx_v9_0_set_irq_funcs(struct amdgpu_device *adev)
static void gfx_v9_0_set_rlc_funcs(struct amdgpu_device *adev)
{
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 2, 1):
case IP_VERSION(9, 4, 0):
@@ -7106,7 +7109,7 @@ static void gfx_v9_0_set_rlc_funcs(struct amdgpu_device *adev)
static void gfx_v9_0_set_gds_init(struct amdgpu_device *adev)
{
/* init asic gds info */
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 2, 1):
case IP_VERSION(9, 4, 0):
@@ -7128,7 +7131,7 @@ static void gfx_v9_0_set_gds_init(struct amdgpu_device *adev)
break;
}
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 0, 1):
case IP_VERSION(9, 4, 0):
adev->gds.gds_compute_max_wave_id = 0x7ff;
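The gfx_v9_0.c hunks above all make the same mechanical change: open-coded adev->ip_versions[GC_HWIP][0] lookups become calls to the amdgpu_ip_version(adev, GC_HWIP, 0) accessor, while the values they are compared against remain IP_VERSION(major, minor, revision) constants. The standalone sketch below only illustrates the version encoding those comparisons rely on; the macro mirrors the in-tree IP_VERSION() packing (major in bits 23:16, minor in bits 15:8, revision in bits 7:0), and everything else in it is illustrative, not driver code.

#include <stdint.h>
#include <stdio.h>

/* Mirrors the driver's IP_VERSION(maj, min, rev) packing. */
#define IP_VERSION(maj, min, rev) (((maj) << 16) | ((min) << 8) | (rev))

int main(void)
{
	uint32_t gc_ver = IP_VERSION(9, 4, 2); /* sample GC version */

	printf("packed GC version: 0x%06x\n", (unsigned)gc_ver); /* 0x090402 */

	/* The driver branches on the packed value, as in the hunks above. */
	if (gc_ver == IP_VERSION(9, 4, 2))
		printf("taking the GC 9.4.2 path\n");

	return 0;
}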


@@ -746,8 +746,6 @@ void gfx_v9_4_2_init_golden_registers(struct amdgpu_device *adev,
die_id);
break;
}
return;
}
void gfx_v9_4_2_debug_trap_config_init(struct amdgpu_device *adev,
@@ -1548,8 +1546,8 @@ static void gfx_v9_4_2_log_utc_edc_count(struct amdgpu_device *adev,
uint32_t ded_cnt)
{
uint32_t bank, way, mem;
static const char *vml2_way_str[] = { "BIGK", "4K" };
static const char *utcl2_rounter_str[] = { "VMC", "APT" };
static const char * const vml2_way_str[] = { "BIGK", "4K" };
static const char * const utcl2_rounter_str[] = { "VMC", "APT" };
mem = instance % blk->num_mem_blocks;
way = (instance / blk->num_mem_blocks) % blk->num_ways;


@@ -682,7 +682,7 @@ static int gfx_v9_4_3_gpu_early_init(struct amdgpu_device *adev)
adev->gfx.funcs = &gfx_v9_4_3_gfx_funcs;
adev->gfx.ras = &gfx_v9_4_3_ras;
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 4, 3):
adev->gfx.config.max_hw_contexts = 8;
adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
@@ -2430,7 +2430,7 @@ static int gfx_v9_4_3_set_clockgating_state(void *handle,
return 0;
num_xcc = NUM_XCC(adev->gfx.xcc_mask);
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 4, 3):
for (i = 0; i < num_xcc; i++)
gfx_v9_4_3_xcc_update_gfx_clock_gating(
@@ -3653,19 +3653,19 @@ static const struct amdgpu_gfx_ras_reg_entry gfx_v9_4_3_ce_reg_list[] = {
AMDGPU_GFX_GC_CANE_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSPI_CE_ERR_STATUS_LO, regSPI_CE_ERR_STATUS_HI),
1, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SPI"},
AMDGPU_GFX_SPI_MEM, 8},
AMDGPU_GFX_SPI_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSP0_CE_ERR_STATUS_LO, regSP0_CE_ERR_STATUS_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SP0"},
AMDGPU_GFX_SP_MEM, 1},
AMDGPU_GFX_SP_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSP1_CE_ERR_STATUS_LO, regSP1_CE_ERR_STATUS_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SP1"},
AMDGPU_GFX_SP_MEM, 1},
AMDGPU_GFX_SP_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSQ_CE_ERR_STATUS_LO, regSQ_CE_ERR_STATUS_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SQ"},
AMDGPU_GFX_SQ_MEM, 8},
AMDGPU_GFX_SQ_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSQC_CE_EDC_LO, regSQC_CE_EDC_HI),
5, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SQC"},
AMDGPU_GFX_SQC_MEM, 8},
AMDGPU_GFX_SQC_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTCX_CE_ERR_STATUS_LO, regTCX_CE_ERR_STATUS_HI),
2, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TCX"},
AMDGPU_GFX_TCX_MEM, 1},
@@ -3674,22 +3674,22 @@ static const struct amdgpu_gfx_ras_reg_entry gfx_v9_4_3_ce_reg_list[] = {
AMDGPU_GFX_TCC_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTA_CE_EDC_LO, regTA_CE_EDC_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TA"},
AMDGPU_GFX_TA_MEM, 8},
AMDGPU_GFX_TA_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTCI_CE_EDC_LO_REG, regTCI_CE_EDC_HI_REG),
31, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TCI"},
27, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TCI"},
AMDGPU_GFX_TCI_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTCP_CE_EDC_LO_REG, regTCP_CE_EDC_HI_REG),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TCP"},
AMDGPU_GFX_TCP_MEM, 8},
AMDGPU_GFX_TCP_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTD_CE_EDC_LO, regTD_CE_EDC_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TD"},
AMDGPU_GFX_TD_MEM, 8},
AMDGPU_GFX_TD_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regGCEA_CE_ERR_STATUS_LO, regGCEA_CE_ERR_STATUS_HI),
16, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "GCEA"},
AMDGPU_GFX_GCEA_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regLDS_CE_ERR_STATUS_LO, regLDS_CE_ERR_STATUS_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "LDS"},
AMDGPU_GFX_LDS_MEM, 1},
AMDGPU_GFX_LDS_MEM, 4},
};
static const struct amdgpu_gfx_ras_reg_entry gfx_v9_4_3_ue_reg_list[] = {
@@ -3713,19 +3713,19 @@ static const struct amdgpu_gfx_ras_reg_entry gfx_v9_4_3_ue_reg_list[] = {
AMDGPU_GFX_GC_CANE_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSPI_UE_ERR_STATUS_LO, regSPI_UE_ERR_STATUS_HI),
1, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SPI"},
AMDGPU_GFX_SPI_MEM, 8},
AMDGPU_GFX_SPI_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSP0_UE_ERR_STATUS_LO, regSP0_UE_ERR_STATUS_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SP0"},
AMDGPU_GFX_SP_MEM, 1},
AMDGPU_GFX_SP_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSP1_UE_ERR_STATUS_LO, regSP1_UE_ERR_STATUS_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SP1"},
AMDGPU_GFX_SP_MEM, 1},
AMDGPU_GFX_SP_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSQ_UE_ERR_STATUS_LO, regSQ_UE_ERR_STATUS_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SQ"},
AMDGPU_GFX_SQ_MEM, 8},
AMDGPU_GFX_SQ_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regSQC_UE_EDC_LO, regSQC_UE_EDC_HI),
5, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "SQC"},
AMDGPU_GFX_SQC_MEM, 8},
AMDGPU_GFX_SQC_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTCX_UE_ERR_STATUS_LO, regTCX_UE_ERR_STATUS_HI),
2, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TCX"},
AMDGPU_GFX_TCX_MEM, 1},
@@ -3734,16 +3734,16 @@ static const struct amdgpu_gfx_ras_reg_entry gfx_v9_4_3_ue_reg_list[] = {
AMDGPU_GFX_TCC_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTA_UE_EDC_LO, regTA_UE_EDC_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TA"},
AMDGPU_GFX_TA_MEM, 8},
AMDGPU_GFX_TA_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTCI_UE_EDC_LO_REG, regTCI_UE_EDC_HI_REG),
31, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TCI"},
27, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TCI"},
AMDGPU_GFX_TCI_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTCP_UE_EDC_LO_REG, regTCP_UE_EDC_HI_REG),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TCP"},
AMDGPU_GFX_TCP_MEM, 8},
AMDGPU_GFX_TCP_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTD_UE_EDC_LO, regTD_UE_EDC_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TD"},
AMDGPU_GFX_TD_MEM, 8},
AMDGPU_GFX_TD_MEM, 4},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regTCA_UE_ERR_STATUS_LO, regTCA_UE_ERR_STATUS_HI),
2, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "TCA"},
AMDGPU_GFX_TCA_MEM, 1},
@@ -3752,7 +3752,7 @@ static const struct amdgpu_gfx_ras_reg_entry gfx_v9_4_3_ue_reg_list[] = {
AMDGPU_GFX_GCEA_MEM, 1},
{{AMDGPU_RAS_REG_ENTRY(GC, 0, regLDS_UE_ERR_STATUS_LO, regLDS_UE_ERR_STATUS_HI),
10, (AMDGPU_RAS_ERR_INFO_VALID | AMDGPU_RAS_ERR_STATUS_VALID), "LDS"},
AMDGPU_GFX_LDS_MEM, 1},
AMDGPU_GFX_LDS_MEM, 4},
};
static const struct soc15_reg_entry gfx_v9_4_3_ea_err_status_regs = {
@@ -3766,6 +3766,12 @@ static void gfx_v9_4_3_inst_query_ras_err_count(struct amdgpu_device *adev,
unsigned long ce_count = 0, ue_count = 0;
uint32_t i, j, k;
/* NOTE: convert xcc_id to physical XCD ID (XCD0 or XCD1) */
struct amdgpu_smuio_mcm_config_info mcm_info = {
.socket_id = adev->smuio.funcs->get_socket_id(adev),
.die_id = xcc_id & 0x01 ? 1 : 0,
};
mutex_lock(&adev->grbm_idx_mutex);
for (i = 0; i < ARRAY_SIZE(gfx_v9_4_3_ce_reg_list); i++) {
@@ -3804,8 +3810,8 @@ static void gfx_v9_4_3_inst_query_ras_err_count(struct amdgpu_device *adev,
/* the caller should make sure initialize value of
* err_data->ue_count and err_data->ce_count
*/
err_data->ce_count += ce_count;
err_data->ue_count += ue_count;
amdgpu_ras_error_statistic_ue_count(err_data, &mcm_info, ue_count);
amdgpu_ras_error_statistic_ce_count(err_data, &mcm_info, ce_count);
}
static void gfx_v9_4_3_inst_reset_ras_err_count(struct amdgpu_device *adev,
@@ -4231,7 +4237,7 @@ static void gfx_v9_4_3_set_rlc_funcs(struct amdgpu_device *adev)
static void gfx_v9_4_3_set_gds_init(struct amdgpu_device *adev)
{
/* init asic gds info */
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 4, 3):
/* 9.4.3 removed all the GDS internal memory,
* only support GWS opcode in kernel, like barrier
@@ -4243,7 +4249,7 @@ static void gfx_v9_4_3_set_gds_init(struct amdgpu_device *adev)
break;
}
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(9, 4, 3):
/* deprecated for 9.4.3, no usage at all */
adev->gds.gds_compute_max_wave_id = 0;


@@ -0,0 +1,516 @@
/*
* Copyright 2023 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include "amdgpu.h"
#include "gfxhub_v11_5_0.h"
#include "gc/gc_11_5_0_offset.h"
#include "gc/gc_11_5_0_sh_mask.h"
#include "navi10_enum.h"
#include "soc15_common.h"
#define regGCVM_L2_CNTL3_DEFAULT 0x80100007
#define regGCVM_L2_CNTL4_DEFAULT 0x000000c1
#define regGCVM_L2_CNTL5_DEFAULT 0x00003fe0
static const char *gfxhub_client_ids[] = {
"CB/DB",
"Reserved",
"GE1",
"GE2",
"CPF",
"CPC",
"CPG",
"RLC",
"TCP",
"SQC (inst)",
"SQC (data)",
"SQG",
"Reserved",
"SDMA0",
"SDMA1",
"GCR",
"SDMA2",
"SDMA3",
};
static uint32_t gfxhub_v11_5_0_get_invalidate_req(unsigned int vmid,
uint32_t flush_type)
{
u32 req = 0;
/* invalidate using legacy mode on vmid*/
req = REG_SET_FIELD(req, GCVM_INVALIDATE_ENG0_REQ,
PER_VMID_INVALIDATE_REQ, 1 << vmid);
req = REG_SET_FIELD(req, GCVM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, flush_type);
req = REG_SET_FIELD(req, GCVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
req = REG_SET_FIELD(req, GCVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
req = REG_SET_FIELD(req, GCVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
req = REG_SET_FIELD(req, GCVM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
req = REG_SET_FIELD(req, GCVM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
req = REG_SET_FIELD(req, GCVM_INVALIDATE_ENG0_REQ,
CLEAR_PROTECTION_FAULT_STATUS_ADDR, 0);
return req;
}
static void
gfxhub_v11_5_0_print_l2_protection_fault_status(struct amdgpu_device *adev,
uint32_t status)
{
u32 cid = REG_GET_FIELD(status,
GCVM_L2_PROTECTION_FAULT_STATUS, CID);
dev_err(adev->dev,
"GCVM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
status);
dev_err(adev->dev, "\t Faulty UTCL2 client ID: %s (0x%x)\n",
cid >= ARRAY_SIZE(gfxhub_client_ids) ? "unknown" : gfxhub_client_ids[cid],
cid);
dev_err(adev->dev, "\t MORE_FAULTS: 0x%lx\n",
REG_GET_FIELD(status,
GCVM_L2_PROTECTION_FAULT_STATUS, MORE_FAULTS));
dev_err(adev->dev, "\t WALKER_ERROR: 0x%lx\n",
REG_GET_FIELD(status,
GCVM_L2_PROTECTION_FAULT_STATUS, WALKER_ERROR));
dev_err(adev->dev, "\t PERMISSION_FAULTS: 0x%lx\n",
REG_GET_FIELD(status,
GCVM_L2_PROTECTION_FAULT_STATUS, PERMISSION_FAULTS));
dev_err(adev->dev, "\t MAPPING_ERROR: 0x%lx\n",
REG_GET_FIELD(status,
GCVM_L2_PROTECTION_FAULT_STATUS, MAPPING_ERROR));
dev_err(adev->dev, "\t RW: 0x%lx\n",
REG_GET_FIELD(status,
GCVM_L2_PROTECTION_FAULT_STATUS, RW));
}
static u64 gfxhub_v11_5_0_get_fb_location(struct amdgpu_device *adev)
{
u64 base = RREG32_SOC15(GC, 0, regGCMC_VM_FB_LOCATION_BASE);
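/* FB_BASE is programmed in 16 MiB units; mask the field and shift left by 24 to convert to a byte address. */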
base &= GCMC_VM_FB_LOCATION_BASE__FB_BASE_MASK;
base <<= 24;
return base;
}
static u64 gfxhub_v11_5_0_get_mc_fb_offset(struct amdgpu_device *adev)
{
return (u64)RREG32_SOC15(GC, 0, regGCMC_VM_FB_OFFSET) << 24;
}
static void gfxhub_v11_5_0_setup_vm_pt_regs(struct amdgpu_device *adev, uint32_t vmid,
uint64_t page_table_base)
{
struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
WREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32,
hub->ctx_addr_distance * vmid,
lower_32_bits(page_table_base));
WREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32,
hub->ctx_addr_distance * vmid,
upper_32_bits(page_table_base));
}
static void gfxhub_v11_5_0_init_gart_aperture_regs(struct amdgpu_device *adev)
{
uint64_t pt_base = amdgpu_gmc_pd_addr(adev->gart.bo);
gfxhub_v11_5_0_setup_vm_pt_regs(adev, 0, pt_base);
WREG32_SOC15(GC, 0, regGCVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32,
(u32)(adev->gmc.gart_start >> 12));
WREG32_SOC15(GC, 0, regGCVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32,
(u32)(adev->gmc.gart_start >> 44));
WREG32_SOC15(GC, 0, regGCVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32,
(u32)(adev->gmc.gart_end >> 12));
WREG32_SOC15(GC, 0, regGCVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32,
(u32)(adev->gmc.gart_end >> 44));
}
static void gfxhub_v11_5_0_init_system_aperture_regs(struct amdgpu_device *adev)
{
uint64_t value;
WREG32_SOC15(GC, 0, regGCMC_VM_AGP_BASE, 0);
WREG32_SOC15(GC, 0, regGCMC_VM_AGP_BOT, adev->gmc.agp_start >> 24);
WREG32_SOC15(GC, 0, regGCMC_VM_AGP_TOP, adev->gmc.agp_end >> 24);
/* Program the system aperture low logical page number. */
WREG32_SOC15(GC, 0, regGCMC_VM_SYSTEM_APERTURE_LOW_ADDR,
min(adev->gmc.fb_start, adev->gmc.agp_start) >> 18);
WREG32_SOC15(GC, 0, regGCMC_VM_SYSTEM_APERTURE_HIGH_ADDR,
max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);
/* Set default page address. */
value = amdgpu_gmc_vram_mc2pa(adev, adev->mem_scratch.gpu_addr);
WREG32_SOC15(GC, 0, regGCMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB,
(u32)(value >> 12));
WREG32_SOC15(GC, 0, regGCMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB,
(u32)(value >> 44));
/* Program "protection fault". */
WREG32_SOC15(GC, 0, regGCVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32,
(u32)(adev->dummy_page_addr >> 12));
WREG32_SOC15(GC, 0, regGCVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32,
(u32)((u64)adev->dummy_page_addr >> 44));
WREG32_FIELD15_PREREG(GC, 0, GCVM_L2_PROTECTION_FAULT_CNTL2,
ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY, 1);
}
static void gfxhub_v11_5_0_init_tlb_regs(struct amdgpu_device *adev)
{
uint32_t tmp;
/* Setup TLB control */
tmp = RREG32_SOC15(GC, 0, regGCMC_VM_MX_L1_TLB_CNTL);
tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, SYSTEM_ACCESS_MODE, 3);
tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL,
ENABLE_ADVANCED_DRIVER_MODEL, 1);
tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL,
SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL,
MTYPE, MTYPE_UC); /* UC, uncached */
WREG32_SOC15(GC, 0, regGCMC_VM_MX_L1_TLB_CNTL, tmp);
}
static void gfxhub_v11_5_0_init_cache_regs(struct amdgpu_device *adev)
{
uint32_t tmp;
/* These registers are not accessible to VF-SRIOV.
* The PF will program them instead.
*/
if (amdgpu_sriov_vf(adev))
return;
/* Setup L2 cache */
tmp = RREG32_SOC15(GC, 0, regGCVM_L2_CNTL);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL, ENABLE_L2_CACHE, 1);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL, ENABLE_L2_FRAGMENT_PROCESSING, 0);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL,
ENABLE_DEFAULT_PAGE_OUT_TO_SYSTEM_MEMORY, 1);
/* XXX for emulation, Refer to closed source code.*/
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL,
L2_PDE0_CACHE_TAG_GENERATION_MODE, 0);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 0);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL, CONTEXT1_IDENTITY_ACCESS_MODE, 1);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL, IDENTITY_MODE_FRAGMENT_SIZE, 0);
WREG32_SOC15(GC, 0, regGCVM_L2_CNTL, tmp);
tmp = RREG32_SOC15(GC, 0, regGCVM_L2_CNTL2);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
WREG32_SOC15(GC, 0, regGCVM_L2_CNTL2, tmp);
tmp = regGCVM_L2_CNTL3_DEFAULT;
if (adev->gmc.translate_further) {
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3, BANK_SELECT, 12);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3,
L2_CACHE_BIGK_FRAGMENT_SIZE, 9);
} else {
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3, BANK_SELECT, 9);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL3,
L2_CACHE_BIGK_FRAGMENT_SIZE, 6);
}
WREG32_SOC15(GC, 0, regGCVM_L2_CNTL3, tmp);
tmp = regGCVM_L2_CNTL4_DEFAULT;
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL4, VMC_TAP_PDE_REQUEST_PHYSICAL, 0);
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL4, VMC_TAP_PTE_REQUEST_PHYSICAL, 0);
WREG32_SOC15(GC, 0, regGCVM_L2_CNTL4, tmp);
tmp = regGCVM_L2_CNTL5_DEFAULT;
tmp = REG_SET_FIELD(tmp, GCVM_L2_CNTL5, L2_CACHE_SMALLK_FRAGMENT_SIZE, 0);
WREG32_SOC15(GC, 0, regGCVM_L2_CNTL5, tmp);
}
static void gfxhub_v11_5_0_enable_system_domain(struct amdgpu_device *adev)
{
uint32_t tmp;
tmp = RREG32_SOC15(GC, 0, regGCVM_CONTEXT0_CNTL);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT0_CNTL,
RETRY_PERMISSION_OR_INVALID_PAGE_FAULT, 0);
WREG32_SOC15(GC, 0, regGCVM_CONTEXT0_CNTL, tmp);
}
static void gfxhub_v11_5_0_disable_identity_aperture(struct amdgpu_device *adev)
{
/* These registers are not accessible to VF-SRIOV.
* The PF will program them instead.
*/
if (amdgpu_sriov_vf(adev))
return;
WREG32_SOC15(GC, 0, regGCVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32,
0xFFFFFFFF);
WREG32_SOC15(GC, 0, regGCVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32,
0x0000000F);
WREG32_SOC15(GC, 0, regGCVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32,
0);
WREG32_SOC15(GC, 0, regGCVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32,
0);
WREG32_SOC15(GC, 0, regGCVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32, 0);
WREG32_SOC15(GC, 0, regGCVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32, 0);
}
static void gfxhub_v11_5_0_setup_vmid_config(struct amdgpu_device *adev)
{
struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
int i;
uint32_t tmp;
for (i = 0; i <= 14; i++) {
tmp = RREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_CNTL, i * hub->ctx_distance);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH,
adev->vm_manager.num_level);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL,
RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL,
DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL,
PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL,
VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL,
READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL,
WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL,
EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL,
PAGE_TABLE_BLOCK_SIZE,
adev->vm_manager.block_size - 9);
/* Send no-retry XNACK on fault to suppress VM fault storm. */
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL,
RETRY_PERMISSION_OR_INVALID_PAGE_FAULT,
!amdgpu_noretry);
WREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_CNTL,
i * hub->ctx_distance, tmp);
WREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32,
i * hub->ctx_addr_distance, 0);
WREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32,
i * hub->ctx_addr_distance, 0);
WREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32,
i * hub->ctx_addr_distance,
lower_32_bits(adev->vm_manager.max_pfn - 1));
WREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32,
i * hub->ctx_addr_distance,
upper_32_bits(adev->vm_manager.max_pfn - 1));
}
hub->vm_cntx_cntl = tmp;
}
static void gfxhub_v11_5_0_program_invalidation(struct amdgpu_device *adev)
{
struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
unsigned i;
for (i = 0 ; i < 18; ++i) {
WREG32_SOC15_OFFSET(GC, 0, regGCVM_INVALIDATE_ENG0_ADDR_RANGE_LO32,
i * hub->eng_addr_distance, 0xffffffff);
WREG32_SOC15_OFFSET(GC, 0, regGCVM_INVALIDATE_ENG0_ADDR_RANGE_HI32,
i * hub->eng_addr_distance, 0x1f);
}
}
static int gfxhub_v11_5_0_gart_enable(struct amdgpu_device *adev)
{
if (amdgpu_sriov_vf(adev)) {
/*
* GCMC_VM_FB_LOCATION_BASE/TOP is NULL for VF, because they are
* VF copy registers, so the VBIOS post doesn't program them; for
* SRIOV the driver needs to program them
*/
WREG32_SOC15(GC, 0, regGCMC_VM_FB_LOCATION_BASE,
adev->gmc.vram_start >> 24);
WREG32_SOC15(GC, 0, regGCMC_VM_FB_LOCATION_TOP,
adev->gmc.vram_end >> 24);
}
/* GART Enable. */
gfxhub_v11_5_0_init_gart_aperture_regs(adev);
gfxhub_v11_5_0_init_system_aperture_regs(adev);
gfxhub_v11_5_0_init_tlb_regs(adev);
gfxhub_v11_5_0_init_cache_regs(adev);
gfxhub_v11_5_0_enable_system_domain(adev);
gfxhub_v11_5_0_disable_identity_aperture(adev);
gfxhub_v11_5_0_setup_vmid_config(adev);
gfxhub_v11_5_0_program_invalidation(adev);
return 0;
}
static void gfxhub_v11_5_0_gart_disable(struct amdgpu_device *adev)
{
struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
u32 tmp;
u32 i;
/* Disable all tables */
for (i = 0; i < 16; i++)
WREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT0_CNTL,
i * hub->ctx_distance, 0);
/* Setup TLB control */
tmp = RREG32_SOC15(GC, 0, regGCMC_VM_MX_L1_TLB_CNTL);
tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
tmp = REG_SET_FIELD(tmp, GCMC_VM_MX_L1_TLB_CNTL,
ENABLE_ADVANCED_DRIVER_MODEL, 0);
WREG32_SOC15(GC, 0, regGCMC_VM_MX_L1_TLB_CNTL, tmp);
/* Setup L2 cache */
WREG32_FIELD15_PREREG(GC, 0, GCVM_L2_CNTL, ENABLE_L2_CACHE, 0);
WREG32_SOC15(GC, 0, regGCVM_L2_CNTL3, 0);
}
/**
* gfxhub_v11_5_0_set_fault_enable_default - update GART/VM fault handling
*
* @adev: amdgpu_device pointer
* @value: true redirects VM faults to the default page
*/
static void gfxhub_v11_5_0_set_fault_enable_default(struct amdgpu_device *adev,
bool value)
{
u32 tmp;
/* NO halt CP when page fault */
tmp = RREG32_SOC15(GC, 0, regCP_DEBUG);
tmp = REG_SET_FIELD(tmp, CP_DEBUG, CPG_UTCL1_ERROR_HALT_DISABLE, 1);
WREG32_SOC15(GC, 0, regCP_DEBUG, tmp);
/* These registers are not accessible to VF-SRIOV.
* The PF will program them instead.
*/
if (amdgpu_sriov_vf(adev))
return;
tmp = RREG32_SOC15(GC, 0, regGCVM_L2_PROTECTION_FAULT_CNTL);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
if (!value) {
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
CRASH_ON_NO_RETRY_FAULT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_L2_PROTECTION_FAULT_CNTL,
CRASH_ON_RETRY_FAULT, 1);
}
WREG32_SOC15(GC, 0, regGCVM_L2_PROTECTION_FAULT_CNTL, tmp);
}
static const struct amdgpu_vmhub_funcs gfxhub_v11_5_0_vmhub_funcs = {
.print_l2_protection_fault_status = gfxhub_v11_5_0_print_l2_protection_fault_status,
.get_invalidate_req = gfxhub_v11_5_0_get_invalidate_req,
};
static void gfxhub_v11_5_0_init(struct amdgpu_device *adev)
{
struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
hub->ctx0_ptb_addr_lo32 =
SOC15_REG_OFFSET(GC, 0,
regGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
hub->ctx0_ptb_addr_hi32 =
SOC15_REG_OFFSET(GC, 0,
regGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
hub->vm_inv_eng0_sem =
SOC15_REG_OFFSET(GC, 0, regGCVM_INVALIDATE_ENG0_SEM);
hub->vm_inv_eng0_req =
SOC15_REG_OFFSET(GC, 0, regGCVM_INVALIDATE_ENG0_REQ);
hub->vm_inv_eng0_ack =
SOC15_REG_OFFSET(GC, 0, regGCVM_INVALIDATE_ENG0_ACK);
hub->vm_context0_cntl =
SOC15_REG_OFFSET(GC, 0, regGCVM_CONTEXT0_CNTL);
hub->vm_l2_pro_fault_status =
SOC15_REG_OFFSET(GC, 0, regGCVM_L2_PROTECTION_FAULT_STATUS);
hub->vm_l2_pro_fault_cntl =
SOC15_REG_OFFSET(GC, 0, regGCVM_L2_PROTECTION_FAULT_CNTL);
hub->ctx_distance = regGCVM_CONTEXT1_CNTL - regGCVM_CONTEXT0_CNTL;
hub->ctx_addr_distance = regGCVM_CONTEXT1_PAGE_TABLE_BASE_ADDR_LO32 -
regGCVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32;
hub->eng_distance = regGCVM_INVALIDATE_ENG1_REQ -
regGCVM_INVALIDATE_ENG0_REQ;
hub->eng_addr_distance = regGCVM_INVALIDATE_ENG1_ADDR_RANGE_LO32 -
regGCVM_INVALIDATE_ENG0_ADDR_RANGE_LO32;
hub->vm_cntx_cntl_vm_fault = GCVM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
GCVM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
GCVM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
GCVM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
GCVM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
GCVM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
GCVM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK;
hub->vmhub_funcs = &gfxhub_v11_5_0_vmhub_funcs;
}
const struct amdgpu_gfxhub_funcs gfxhub_v11_5_0_funcs = {
.get_fb_location = gfxhub_v11_5_0_get_fb_location,
.get_mc_fb_offset = gfxhub_v11_5_0_get_mc_fb_offset,
.setup_vm_pt_regs = gfxhub_v11_5_0_setup_vm_pt_regs,
.gart_enable = gfxhub_v11_5_0_gart_enable,
.gart_disable = gfxhub_v11_5_0_gart_disable,
.set_fault_enable_default = gfxhub_v11_5_0_set_fault_enable_default,
.init = gfxhub_v11_5_0_init,
};
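The gfxhub_v11_5_0_funcs table above plugs the new GC 11.5 hub code into the driver's usual per-IP ops-table pattern: common code calls through adev->gfxhub.funcs rather than referencing a specific gfxhub version, and the concrete table is selected once from the discovered GC IP version (that selection happens in GMC/IP-discovery code this truncated diff does not show). The following standalone toy model of that dispatch pattern uses invented names throughout and is only a sketch of the idea, not driver code.

#include <stdint.h>
#include <stdio.h>

#define IP_VERSION(maj, min, rev) (((maj) << 16) | ((min) << 8) | (rev))

/* Toy stand-in for an amdgpu-style per-IP function table. */
struct toy_gfxhub_funcs {
	const char *name;
	int (*gart_enable)(void);
};

static int toy_v3_0_gart_enable(void)
{
	printf("gfxhub v3.0 gart_enable\n");
	return 0;
}

static int toy_v11_5_gart_enable(void)
{
	printf("gfxhub v11.5 gart_enable\n");
	return 0;
}

static const struct toy_gfxhub_funcs toy_v3_0_funcs = { "v3_0", toy_v3_0_gart_enable };
static const struct toy_gfxhub_funcs toy_v11_5_funcs = { "v11_5", toy_v11_5_gart_enable };

/* Choose the table once, keyed on the discovered GC version. */
static const struct toy_gfxhub_funcs *toy_pick_gfxhub_funcs(uint32_t gc_ver)
{
	switch (gc_ver) {
	case IP_VERSION(11, 5, 0):
		return &toy_v11_5_funcs;
	default:
		return &toy_v3_0_funcs;
	}
}

int main(void)
{
	const struct toy_gfxhub_funcs *funcs = toy_pick_gfxhub_funcs(IP_VERSION(11, 5, 0));

	/* Callers only ever go through the table. */
	printf("selected gfxhub %s\n", funcs->name);
	return funcs->gart_enable();
}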


@@ -19,12 +19,11 @@
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: AMD
*
*/
#ifndef __LINK_FPGA_H__
#define __LINK_FPGA_H__
#include "link.h"
void dp_fpga_hpo_enable_link_and_stream(struct dc_state *state,
struct pipe_ctx *pipe_ctx);
#endif /* __LINK_FPGA_H__ */
#ifndef __GFXHUB_V11_5_0_H__
#define __GFXHUB_V11_5_0_H__
extern const struct amdgpu_gfxhub_funcs gfxhub_v11_5_0_funcs;
#endif


@@ -260,7 +260,7 @@ static void gfxhub_v1_0_setup_vmid_config(struct amdgpu_device *adev)
block_size -= 9;
for (i = 0; i <= 14; i++) {
tmp = RREG32_SOC15_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL, i);
tmp = RREG32_SOC15_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL, i * hub->ctx_distance);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH,
num_level);


@@ -329,7 +329,8 @@ static void gfxhub_v1_2_xcc_setup_vmid_config(struct amdgpu_device *adev,
for_each_inst(j, xcc_mask) {
hub = &adev->vmhub[AMDGPU_GFXHUB(j)];
for (i = 0; i <= 14; i++) {
tmp = RREG32_SOC15_OFFSET(GC, GET_INST(GC, j), regVM_CONTEXT1_CNTL, i);
tmp = RREG32_SOC15_OFFSET(GC, GET_INST(GC, j), regVM_CONTEXT1_CNTL,
i * hub->ctx_distance);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH,
num_level);
@@ -356,11 +357,14 @@ static void gfxhub_v1_2_xcc_setup_vmid_config(struct amdgpu_device *adev,
* the SQ per-process.
* Retry faults need to be enabled for that to work.
*/
tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
RETRY_PERMISSION_OR_INVALID_PAGE_FAULT,
!adev->gmc.noretry ||
adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 2) ||
adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 3));
tmp = REG_SET_FIELD(
tmp, VM_CONTEXT1_CNTL,
RETRY_PERMISSION_OR_INVALID_PAGE_FAULT,
!adev->gmc.noretry ||
amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(9, 4, 2) ||
amdgpu_ip_version(adev, GC_HWIP, 0) ==
IP_VERSION(9, 4, 3));
WREG32_SOC15_OFFSET(GC, GET_INST(GC, j), regVM_CONTEXT1_CNTL,
i * hub->ctx_distance, tmp);
WREG32_SOC15_OFFSET(GC, GET_INST(GC, j),


@@ -287,7 +287,7 @@ static void gfxhub_v2_0_setup_vmid_config(struct amdgpu_device *adev)
uint32_t tmp;
for (i = 0; i <= 14; i++) {
tmp = RREG32_SOC15_OFFSET(GC, 0, mmGCVM_CONTEXT1_CNTL, i);
tmp = RREG32_SOC15_OFFSET(GC, 0, mmGCVM_CONTEXT1_CNTL, i * hub->ctx_distance);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH,
adev->vm_manager.num_level);
@@ -471,6 +471,9 @@ static void gfxhub_v2_0_init(struct amdgpu_device *adev)
GCVM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
GCVM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK;
/* TODO: This is only needed on some Navi 1x revisions */
hub->sdma_invalidation_workaround = true;
hub->vmhub_funcs = &gfxhub_v2_0_vmhub_funcs;
}


@@ -296,7 +296,7 @@ static void gfxhub_v2_1_setup_vmid_config(struct amdgpu_device *adev)
uint32_t tmp;
for (i = 0; i <= 14; i++) {
tmp = RREG32_SOC15_OFFSET(GC, 0, mmGCVM_CONTEXT1_CNTL, i);
tmp = RREG32_SOC15_OFFSET(GC, 0, mmGCVM_CONTEXT1_CNTL, i * hub->ctx_distance);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH,
adev->vm_manager.num_level);
@@ -510,7 +510,7 @@ static int gfxhub_v2_1_get_xgmi_info(struct amdgpu_device *adev)
u32 max_num_physical_nodes = 0;
u32 max_physical_node_id = 0;
switch (adev->ip_versions[XGMI_HWIP][0]) {
switch (amdgpu_ip_version(adev, XGMI_HWIP, 0)) {
case IP_VERSION(4, 8, 0):
max_num_physical_nodes = 4;
max_physical_node_id = 3;
@@ -548,7 +548,7 @@ static void gfxhub_v2_1_utcl2_harvest(struct amdgpu_device *adev)
adev->gfx.config.max_sh_per_se *
adev->gfx.config.max_shader_engines);
switch (adev->ip_versions[GC_HWIP][0]) {
switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
case IP_VERSION(10, 3, 1):
case IP_VERSION(10, 3, 3):
/* Get SA disabled bitmap from eFuse setting */


@@ -164,8 +164,7 @@ static void gfxhub_v3_0_init_system_aperture_regs(struct amdgpu_device *adev)
max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);
/* Set default page address. */
value = adev->mem_scratch.gpu_addr - adev->gmc.vram_start
+ adev->vm_manager.vram_base_offset;
value = amdgpu_gmc_vram_mc2pa(adev, adev->mem_scratch.gpu_addr);
WREG32_SOC15(GC, 0, regGCMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB,
(u32)(value >> 12));
WREG32_SOC15(GC, 0, regGCMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB,
@@ -295,7 +294,7 @@ static void gfxhub_v3_0_setup_vmid_config(struct amdgpu_device *adev)
uint32_t tmp;
for (i = 0; i <= 14; i++) {
tmp = RREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_CNTL, i);
tmp = RREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_CNTL, i * hub->ctx_distance);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH,
adev->vm_manager.num_level);


@@ -169,8 +169,7 @@ static void gfxhub_v3_0_3_init_system_aperture_regs(struct amdgpu_device *adev)
max(adev->gmc.fb_end, adev->gmc.agp_end) >> 18);
/* Set default page address. */
value = adev->mem_scratch.gpu_addr - adev->gmc.vram_start
+ adev->vm_manager.vram_base_offset;
value = amdgpu_gmc_vram_mc2pa(adev, adev->mem_scratch.gpu_addr);
WREG32_SOC15(GC, 0, regGCMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB,
(u32)(value >> 12));
WREG32_SOC15(GC, 0, regGCMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB,
@@ -300,7 +299,7 @@ static void gfxhub_v3_0_3_setup_vmid_config(struct amdgpu_device *adev)
uint32_t tmp;
for (i = 0; i <= 14; i++) {
tmp = RREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_CNTL, i);
tmp = RREG32_SOC15_OFFSET(GC, 0, regGCVM_CONTEXT1_CNTL, i * hub->ctx_distance);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
tmp = REG_SET_FIELD(tmp, GCVM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH,
adev->vm_manager.num_level);

Some files were not shown because too many files have changed in this diff.