amd-drm-next-6.8-2023-12-15:

amdgpu:
 - Suspend fixes
 - Misc code cleanups
 - JPEG fix
 - Add AMD specific color management (protected by AMD_PRIVATE_COLOR)
 - UHBR13.5 cable fixes
 - Misc display fixes
 - Display WB fixes
 - PSR fixes
 - XGMI fix
 - ACPI WBRF support for handling potential RF interference from GPU clocks
 - Enable tunneling on high priority compute queues
 - drm_edid.h include cleanup
 - VPE DPM support
 - SMU 13 fixes
 - Fix possible double frees in error paths
 - Misc fixes
 
 amdkfd:
 - Support import and export of dma-bufs using GEM handles
 - MES shader debugger fixes
 - SVM fixes
 
 radeon:
 - drm_edid.h include cleanup
 - Misc code cleanups
 - Fix possible memory leak in error path
 
 drm:
 - Increase max objects to accomodate new color props
 - Make replace_property_blob_from_id a DRM helper
 - Track color management changes per plane
 
 platform-x86:
 - Merge immutable branch from Hans for platform dependencies for WBRF to coordinate
   merge of WBRF feature across wifi, platform, and GPU
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQQgO5Idg2tXNTSZAr293/aFa7yZ2AUCZXygTgAKCRC93/aFa7yZ
 2EW1AQCILfGTtDWXzgLSpUBtt9jOooHqaSrah19Cfw0HlA3QIQD+OCohXH1LLZo1
 tYHyfsLv0LsNawI198qABzB1PwptSAI=
 =M1AO
 -----END PGP SIGNATURE-----

Merge tag 'amd-drm-next-6.8-2023-12-15' of https://gitlab.freedesktop.org/agd5f/linux into drm-next

amd-drm-next-6.8-2023-12-15:

amdgpu:
- Suspend fixes
- Misc code cleanups
- JPEG fix
- Add AMD specific color management (protected by AMD_PRIVATE_COLOR)
- UHBR13.5 cable fixes
- Misc display fixes
- Display WB fixes
- PSR fixes
- XGMI fix
- ACPI WBRF support for handling potential RF interference from GPU clocks
- Enable tunneling on high priority compute queues
- drm_edid.h include cleanup
- VPE DPM support
- SMU 13 fixes
- Fix possible double frees in error paths
- Misc fixes

amdkfd:
- Support import and export of dma-bufs using GEM handles
- MES shader debugger fixes
- SVM fixes

radeon:
- drm_edid.h include cleanup
- Misc code cleanups
- Fix possible memory leak in error path

drm:
- Increase max objects to accomodate new color props
- Make replace_property_blob_from_id a DRM helper
- Track color management changes per plane

platform-x86:
- Merge immutable branch from Hans for platform dependencies for WBRF to coordinate
  merge of WBRF feature across wifi, platform, and GPU

Signed-off-by: Dave Airlie <airlied@redhat.com>

# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQQgO5Idg2tXNTSZAr293/aFa7yZ2AUCZXygTgAKCRC93/aFa7yZ
# 2EW1AQCILfGTtDWXzgLSpUBtt9jOooHqaSrah19Cfw0HlA3QIQD+OCohXH1LLZo1
# tYHyfsLv0LsNawI198qABzB1PwptSAI=
# =M1AO
# -----END PGP SIGNATURE-----
# gpg: Signature made Sat 16 Dec 2023 04:51:58 AEST
# gpg:                using EDDSA key 203B921D836B5735349902BDBDDFF6856BBC99D8
# gpg: Can't check signature: No public key
From: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231215193519.5040-1-alexander.deucher@amd.com
This commit is contained in:
Dave Airlie 2023-12-20 05:59:40 +10:00
commit d2be61f843
126 changed files with 3189 additions and 438 deletions

View File

@ -115,6 +115,7 @@ available subsections can be seen below.
hte/index hte/index
wmi wmi
dpll dpll
wbrf
.. only:: subproject and html .. only:: subproject and html

View File

@ -0,0 +1,78 @@
.. SPDX-License-Identifier: GPL-2.0-or-later
=================================
WBRF - Wifi Band RFI Mitigations
=================================
Due to electrical and mechanical constraints in certain platform designs
there may be likely interference of relatively high-powered harmonics of
the GPU memory clocks with local radio module frequency bands used by
certain Wifi bands.
To mitigate possible RFI interference producers can advertise the
frequencies in use and consumers can use this information to avoid using
these frequencies for sensitive features.
When a platform is known to have this issue with any contained devices,
the platform designer will advertise the availability of this feature via
ACPI devices with a device specific method (_DSM).
* Producers with this _DSM will be able to advertise the frequencies in use.
* Consumers with this _DSM will be able to register for notifications of
frequencies in use.
Some general terms
==================
Producer: such component who can produce high-powered radio frequency
Consumer: such component who can adjust its in-use frequency in
response to the radio frequencies of other components to mitigate the
possible RFI.
To make the mechanism function, those producers should notify active use
of their particular frequencies so that other consumers can make relative
internal adjustments as necessary to avoid this resonance.
ACPI interface
==============
Although initially used by for wifi + dGPU use cases, the ACPI interface
can be scaled to any type of device that a platform designer discovers
can cause interference.
The GUID used for the _DSM is 7B7656CF-DC3D-4C1C-83E9-66E721DE3070.
3 functions are available in this _DSM:
* 0: discover # of functions available
* 1: record RF bands in use
* 2: retrieve RF bands in use
Driver programming interface
============================
.. kernel-doc:: drivers/platform/x86/amd/wbrf.c
Sample Usage
=============
The expected flow for the producers:
1. During probe, call `acpi_amd_wbrf_supported_producer` to check if WBRF
can be enabled for the device.
2. On using some frequency band, call `acpi_amd_wbrf_add_remove` with 'add'
param to get other consumers properly notified.
3. Or on stopping using some frequency band, call
`acpi_amd_wbrf_add_remove` with 'remove' param to get other consumers notified.
The expected flow for the consumers:
1. During probe, call `acpi_amd_wbrf_supported_consumer` to check if WBRF
can be enabled for the device.
2. Call `amd_wbrf_register_notifier` to register for notification
of frequency band change(add or remove) from other producers.
3. Call the `amd_wbrf_retrieve_freq_band` initally to retrieve
current active frequency bands considering some producers may broadcast
such information before the consumer is up.
4. On receiving a notification for frequency band change, run
`amd_wbrf_retrieve_freq_band` again to retrieve the latest
active frequency bands.
5. During driver cleanup, call `amd_wbrf_unregister_notifier` to
unregister the notifier.

View File

@ -252,6 +252,8 @@ extern int amdgpu_seamless;
extern int amdgpu_user_partt_mode; extern int amdgpu_user_partt_mode;
extern int amdgpu_agp; extern int amdgpu_agp;
extern int amdgpu_wbrf;
#define AMDGPU_VM_MAX_NUM_CTX 4096 #define AMDGPU_VM_MAX_NUM_CTX 4096
#define AMDGPU_SG_THRESHOLD (256*1024*1024) #define AMDGPU_SG_THRESHOLD (256*1024*1024)
#define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000 #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000
@ -789,6 +791,7 @@ struct amdgpu_mqd_prop {
uint64_t eop_gpu_addr; uint64_t eop_gpu_addr;
uint32_t hqd_pipe_priority; uint32_t hqd_pipe_priority;
uint32_t hqd_queue_priority; uint32_t hqd_queue_priority;
bool allow_tunneling;
bool hqd_active; bool hqd_active;
}; };

View File

@ -142,6 +142,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
{ {
int i; int i;
int last_valid_bit; int last_valid_bit;
int ret;
amdgpu_amdkfd_gpuvm_init_mem_limits(); amdgpu_amdkfd_gpuvm_init_mem_limits();
@ -160,6 +161,12 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
.enable_mes = adev->enable_mes, .enable_mes = adev->enable_mes,
}; };
ret = drm_client_init(&adev->ddev, &adev->kfd.client, "kfd", NULL);
if (ret) {
dev_err(adev->dev, "Failed to init DRM client: %d\n", ret);
return;
}
/* this is going to have a few of the MSBs set that we need to /* this is going to have a few of the MSBs set that we need to
* clear * clear
*/ */
@ -198,6 +205,10 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
adev->kfd.init_complete = kgd2kfd_device_init(adev->kfd.dev, adev->kfd.init_complete = kgd2kfd_device_init(adev->kfd.dev,
&gpu_resources); &gpu_resources);
if (adev->kfd.init_complete)
drm_client_register(&adev->kfd.client);
else
drm_client_release(&adev->kfd.client);
amdgpu_amdkfd_total_mem_size += adev->gmc.real_vram_size; amdgpu_amdkfd_total_mem_size += adev->gmc.real_vram_size;

View File

@ -33,6 +33,7 @@
#include <linux/mmu_notifier.h> #include <linux/mmu_notifier.h>
#include <linux/memremap.h> #include <linux/memremap.h>
#include <kgd_kfd_interface.h> #include <kgd_kfd_interface.h>
#include <drm/drm_client.h>
#include "amdgpu_sync.h" #include "amdgpu_sync.h"
#include "amdgpu_vm.h" #include "amdgpu_vm.h"
#include "amdgpu_xcp.h" #include "amdgpu_xcp.h"
@ -83,6 +84,7 @@ struct kgd_mem {
struct amdgpu_sync sync; struct amdgpu_sync sync;
uint32_t gem_handle;
bool aql_queue; bool aql_queue;
bool is_imported; bool is_imported;
}; };
@ -105,6 +107,9 @@ struct amdgpu_kfd_dev {
/* HMM page migration MEMORY_DEVICE_PRIVATE mapping */ /* HMM page migration MEMORY_DEVICE_PRIVATE mapping */
struct dev_pagemap pgmap; struct dev_pagemap pgmap;
/* Client for KFD BO GEM handle allocations */
struct drm_client_dev client;
}; };
enum kgd_engine_type { enum kgd_engine_type {
@ -309,11 +314,10 @@ int amdgpu_amdkfd_gpuvm_restore_process_bos(void *process_info,
struct dma_fence **ef); struct dma_fence **ef);
int amdgpu_amdkfd_gpuvm_get_vm_fault_info(struct amdgpu_device *adev, int amdgpu_amdkfd_gpuvm_get_vm_fault_info(struct amdgpu_device *adev,
struct kfd_vm_fault_info *info); struct kfd_vm_fault_info *info);
int amdgpu_amdkfd_gpuvm_import_dmabuf(struct amdgpu_device *adev, int amdgpu_amdkfd_gpuvm_import_dmabuf_fd(struct amdgpu_device *adev, int fd,
struct dma_buf *dmabuf, uint64_t va, void *drm_priv,
uint64_t va, void *drm_priv, struct kgd_mem **mem, uint64_t *size,
struct kgd_mem **mem, uint64_t *size, uint64_t *mmap_offset);
uint64_t *mmap_offset);
int amdgpu_amdkfd_gpuvm_export_dmabuf(struct kgd_mem *mem, int amdgpu_amdkfd_gpuvm_export_dmabuf(struct kgd_mem *mem,
struct dma_buf **dmabuf); struct dma_buf **dmabuf);
void amdgpu_amdkfd_debug_mem_fence(struct amdgpu_device *adev); void amdgpu_amdkfd_debug_mem_fence(struct amdgpu_device *adev);

View File

@ -25,6 +25,7 @@
#include <linux/pagemap.h> #include <linux/pagemap.h>
#include <linux/sched/mm.h> #include <linux/sched/mm.h>
#include <linux/sched/task.h> #include <linux/sched/task.h>
#include <linux/fdtable.h>
#include <drm/ttm/ttm_tt.h> #include <drm/ttm/ttm_tt.h>
#include <drm/drm_exec.h> #include <drm/drm_exec.h>
@ -806,13 +807,22 @@ kfd_mem_dmaunmap_attachment(struct kgd_mem *mem,
static int kfd_mem_export_dmabuf(struct kgd_mem *mem) static int kfd_mem_export_dmabuf(struct kgd_mem *mem)
{ {
if (!mem->dmabuf) { if (!mem->dmabuf) {
struct dma_buf *ret = amdgpu_gem_prime_export( struct amdgpu_device *bo_adev;
&mem->bo->tbo.base, struct dma_buf *dmabuf;
int r, fd;
bo_adev = amdgpu_ttm_adev(mem->bo->tbo.bdev);
r = drm_gem_prime_handle_to_fd(&bo_adev->ddev, bo_adev->kfd.client.file,
mem->gem_handle,
mem->alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_WRITABLE ? mem->alloc_flags & KFD_IOC_ALLOC_MEM_FLAGS_WRITABLE ?
DRM_RDWR : 0); DRM_RDWR : 0, &fd);
if (IS_ERR(ret)) if (r)
return PTR_ERR(ret); return r;
mem->dmabuf = ret; dmabuf = dma_buf_get(fd);
close_fd(fd);
if (WARN_ON_ONCE(IS_ERR(dmabuf)))
return PTR_ERR(dmabuf);
mem->dmabuf = dmabuf;
} }
return 0; return 0;
@ -1778,6 +1788,9 @@ int amdgpu_amdkfd_gpuvm_alloc_memory_of_gpu(
pr_debug("Failed to allow vma node access. ret %d\n", ret); pr_debug("Failed to allow vma node access. ret %d\n", ret);
goto err_node_allow; goto err_node_allow;
} }
ret = drm_gem_handle_create(adev->kfd.client.file, gobj, &(*mem)->gem_handle);
if (ret)
goto err_gem_handle_create;
bo = gem_to_amdgpu_bo(gobj); bo = gem_to_amdgpu_bo(gobj);
if (bo_type == ttm_bo_type_sg) { if (bo_type == ttm_bo_type_sg) {
bo->tbo.sg = sg; bo->tbo.sg = sg;
@ -1829,6 +1842,8 @@ allocate_init_user_pages_failed:
err_pin_bo: err_pin_bo:
err_validate_bo: err_validate_bo:
remove_kgd_mem_from_kfd_bo_list(*mem, avm->process_info); remove_kgd_mem_from_kfd_bo_list(*mem, avm->process_info);
drm_gem_handle_delete(adev->kfd.client.file, (*mem)->gem_handle);
err_gem_handle_create:
drm_vma_node_revoke(&gobj->vma_node, drm_priv); drm_vma_node_revoke(&gobj->vma_node, drm_priv);
err_node_allow: err_node_allow:
/* Don't unreserve system mem limit twice */ /* Don't unreserve system mem limit twice */
@ -1941,8 +1956,11 @@ int amdgpu_amdkfd_gpuvm_free_memory_of_gpu(
/* Free the BO*/ /* Free the BO*/
drm_vma_node_revoke(&mem->bo->tbo.base.vma_node, drm_priv); drm_vma_node_revoke(&mem->bo->tbo.base.vma_node, drm_priv);
if (mem->dmabuf) drm_gem_handle_delete(adev->kfd.client.file, mem->gem_handle);
if (mem->dmabuf) {
dma_buf_put(mem->dmabuf); dma_buf_put(mem->dmabuf);
mem->dmabuf = NULL;
}
mutex_destroy(&mem->lock); mutex_destroy(&mem->lock);
/* If this releases the last reference, it will end up calling /* If this releases the last reference, it will end up calling
@ -2294,34 +2312,26 @@ int amdgpu_amdkfd_gpuvm_get_vm_fault_info(struct amdgpu_device *adev,
return 0; return 0;
} }
int amdgpu_amdkfd_gpuvm_import_dmabuf(struct amdgpu_device *adev, static int import_obj_create(struct amdgpu_device *adev,
struct dma_buf *dma_buf, struct dma_buf *dma_buf,
uint64_t va, void *drm_priv, struct drm_gem_object *obj,
struct kgd_mem **mem, uint64_t *size, uint64_t va, void *drm_priv,
uint64_t *mmap_offset) struct kgd_mem **mem, uint64_t *size,
uint64_t *mmap_offset)
{ {
struct amdgpu_vm *avm = drm_priv_to_vm(drm_priv); struct amdgpu_vm *avm = drm_priv_to_vm(drm_priv);
struct drm_gem_object *obj;
struct amdgpu_bo *bo; struct amdgpu_bo *bo;
int ret; int ret;
obj = amdgpu_gem_prime_import(adev_to_drm(adev), dma_buf);
if (IS_ERR(obj))
return PTR_ERR(obj);
bo = gem_to_amdgpu_bo(obj); bo = gem_to_amdgpu_bo(obj);
if (!(bo->preferred_domains & (AMDGPU_GEM_DOMAIN_VRAM | if (!(bo->preferred_domains & (AMDGPU_GEM_DOMAIN_VRAM |
AMDGPU_GEM_DOMAIN_GTT))) { AMDGPU_GEM_DOMAIN_GTT)))
/* Only VRAM and GTT BOs are supported */ /* Only VRAM and GTT BOs are supported */
ret = -EINVAL; return -EINVAL;
goto err_put_obj;
}
*mem = kzalloc(sizeof(struct kgd_mem), GFP_KERNEL); *mem = kzalloc(sizeof(struct kgd_mem), GFP_KERNEL);
if (!*mem) { if (!*mem)
ret = -ENOMEM; return -ENOMEM;
goto err_put_obj;
}
ret = drm_vma_node_allow(&obj->vma_node, drm_priv); ret = drm_vma_node_allow(&obj->vma_node, drm_priv);
if (ret) if (ret)
@ -2371,8 +2381,41 @@ err_remove_mem:
drm_vma_node_revoke(&obj->vma_node, drm_priv); drm_vma_node_revoke(&obj->vma_node, drm_priv);
err_free_mem: err_free_mem:
kfree(*mem); kfree(*mem);
return ret;
}
int amdgpu_amdkfd_gpuvm_import_dmabuf_fd(struct amdgpu_device *adev, int fd,
uint64_t va, void *drm_priv,
struct kgd_mem **mem, uint64_t *size,
uint64_t *mmap_offset)
{
struct drm_gem_object *obj;
uint32_t handle;
int ret;
ret = drm_gem_prime_fd_to_handle(&adev->ddev, adev->kfd.client.file, fd,
&handle);
if (ret)
return ret;
obj = drm_gem_object_lookup(adev->kfd.client.file, handle);
if (!obj) {
ret = -EINVAL;
goto err_release_handle;
}
ret = import_obj_create(adev, obj->dma_buf, obj, va, drm_priv, mem, size,
mmap_offset);
if (ret)
goto err_put_obj;
(*mem)->gem_handle = handle;
return 0;
err_put_obj: err_put_obj:
drm_gem_object_put(obj); drm_gem_object_put(obj);
err_release_handle:
drm_gem_handle_delete(adev->kfd.client.file, handle);
return ret; return ret;
} }

View File

@ -755,7 +755,7 @@ static ssize_t amdgpu_debugfs_regs_smc_read(struct file *f, char __user *buf,
int r; int r;
if (!adev->smc_rreg) if (!adev->smc_rreg)
return -EPERM; return -EOPNOTSUPP;
if (size & 0x3 || *pos & 0x3) if (size & 0x3 || *pos & 0x3)
return -EINVAL; return -EINVAL;
@ -814,7 +814,7 @@ static ssize_t amdgpu_debugfs_regs_smc_write(struct file *f, const char __user *
int r; int r;
if (!adev->smc_wreg) if (!adev->smc_wreg)
return -EPERM; return -EOPNOTSUPP;
if (size & 0x3 || *pos & 0x3) if (size & 0x3 || *pos & 0x3)
return -EINVAL; return -EINVAL;

View File

@ -1599,7 +1599,7 @@ bool amdgpu_device_seamless_boot_supported(struct amdgpu_device *adev)
if (adev->mman.keep_stolen_vga_memory) if (adev->mman.keep_stolen_vga_memory)
return false; return false;
return adev->ip_versions[DCE_HWIP][0] >= IP_VERSION(3, 0, 0); return amdgpu_ip_version(adev, DCE_HWIP, 0) >= IP_VERSION(3, 0, 0);
} }
/* /*
@ -4589,8 +4589,6 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
amdgpu_ras_suspend(adev); amdgpu_ras_suspend(adev);
amdgpu_ttm_set_buffer_funcs_status(adev, false);
amdgpu_device_ip_suspend_phase1(adev); amdgpu_device_ip_suspend_phase1(adev);
if (!adev->in_s0ix) if (!adev->in_s0ix)

View File

@ -115,9 +115,10 @@
* 3.54.0 - Add AMDGPU_CTX_QUERY2_FLAGS_RESET_IN_PROGRESS support * 3.54.0 - Add AMDGPU_CTX_QUERY2_FLAGS_RESET_IN_PROGRESS support
* - 3.55.0 - Add AMDGPU_INFO_GPUVM_FAULT query * - 3.55.0 - Add AMDGPU_INFO_GPUVM_FAULT query
* - 3.56.0 - Update IB start address and size alignment for decode and encode * - 3.56.0 - Update IB start address and size alignment for decode and encode
* - 3.57.0 - Compute tunneling on GFX10+
*/ */
#define KMS_DRIVER_MAJOR 3 #define KMS_DRIVER_MAJOR 3
#define KMS_DRIVER_MINOR 56 #define KMS_DRIVER_MINOR 57
#define KMS_DRIVER_PATCHLEVEL 0 #define KMS_DRIVER_PATCHLEVEL 0
/* /*
@ -208,6 +209,7 @@ int amdgpu_umsch_mm;
int amdgpu_seamless = -1; /* auto */ int amdgpu_seamless = -1; /* auto */
uint amdgpu_debug_mask; uint amdgpu_debug_mask;
int amdgpu_agp = -1; /* auto */ int amdgpu_agp = -1; /* auto */
int amdgpu_wbrf = -1;
static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work); static void amdgpu_drv_delayed_reset_work_handler(struct work_struct *work);
@ -971,6 +973,22 @@ module_param_named(debug_mask, amdgpu_debug_mask, uint, 0444);
MODULE_PARM_DESC(agp, "AGP (-1 = auto (default), 0 = disable, 1 = enable)"); MODULE_PARM_DESC(agp, "AGP (-1 = auto (default), 0 = disable, 1 = enable)");
module_param_named(agp, amdgpu_agp, int, 0444); module_param_named(agp, amdgpu_agp, int, 0444);
/**
* DOC: wbrf (int)
* Enable Wifi RFI interference mitigation feature.
* Due to electrical and mechanical constraints there may be likely interference of
* relatively high-powered harmonics of the (G-)DDR memory clocks with local radio
* module frequency bands used by Wifi 6/6e/7. To mitigate the possible RFI interference,
* with this feature enabled, PMFW will use either shadowed P-State or P-State based
* on active list of frequencies in-use (to be avoided) as part of initial setting or
* P-state transition. However, there may be potential performance impact with this
* feature enabled.
* (0 = disabled, 1 = enabled, -1 = auto (default setting, will be enabled if supported))
*/
MODULE_PARM_DESC(wbrf,
"Enable Wifi RFI interference mitigation (0 = disabled, 1 = enabled, -1 = auto(default)");
module_param_named(wbrf, amdgpu_wbrf, int, 0444);
/* These devices are not supported by amdgpu. /* These devices are not supported by amdgpu.
* They are supported by the mach64, r128, radeon drivers * They are supported by the mach64, r128, radeon drivers
*/ */

View File

@ -190,8 +190,8 @@ int amdgpu_hmm_range_get_pages(struct mmu_interval_notifier *notifier,
pr_debug("hmm range: start = 0x%lx, end = 0x%lx", pr_debug("hmm range: start = 0x%lx, end = 0x%lx",
hmm_range->start, hmm_range->end); hmm_range->start, hmm_range->end);
/* Assuming 128MB takes maximum 1 second to fault page address */ /* Assuming 64MB takes maximum 1 second to fault page address */
timeout = max((hmm_range->end - hmm_range->start) >> 27, 1UL); timeout = max((hmm_range->end - hmm_range->start) >> 26, 1UL);
timeout *= HMM_RANGE_DEFAULT_TIMEOUT; timeout *= HMM_RANGE_DEFAULT_TIMEOUT;
timeout = jiffies + msecs_to_jiffies(timeout); timeout = jiffies + msecs_to_jiffies(timeout);
@ -199,6 +199,7 @@ retry:
hmm_range->notifier_seq = mmu_interval_read_begin(notifier); hmm_range->notifier_seq = mmu_interval_read_begin(notifier);
r = hmm_range_fault(hmm_range); r = hmm_range_fault(hmm_range);
if (unlikely(r)) { if (unlikely(r)) {
schedule();
/* /*
* FIXME: This timeout should encompass the retry from * FIXME: This timeout should encompass the retry from
* mmu_interval_read_retry() as well. * mmu_interval_read_retry() as well.
@ -212,7 +213,6 @@ retry:
break; break;
hmm_range->hmm_pfns += MAX_WALK_BYTE >> PAGE_SHIFT; hmm_range->hmm_pfns += MAX_WALK_BYTE >> PAGE_SHIFT;
hmm_range->start = hmm_range->end; hmm_range->start = hmm_range->end;
schedule();
} while (hmm_range->end < end); } while (hmm_range->end < end);
hmm_range->start = start; hmm_range->start = start;

View File

@ -46,6 +46,8 @@
#define MCA_REG__STATUS__ERRORCODEEXT(x) MCA_REG_FIELD(x, 21, 16) #define MCA_REG__STATUS__ERRORCODEEXT(x) MCA_REG_FIELD(x, 21, 16)
#define MCA_REG__STATUS__ERRORCODE(x) MCA_REG_FIELD(x, 15, 0) #define MCA_REG__STATUS__ERRORCODE(x) MCA_REG_FIELD(x, 15, 0)
#define MCA_REG__MISC0__ERRCNT(x) MCA_REG_FIELD(x, 43, 32)
#define MCA_REG__SYND__ERRORINFORMATION(x) MCA_REG_FIELD(x, 17, 0) #define MCA_REG__SYND__ERRORINFORMATION(x) MCA_REG_FIELD(x, 17, 0)
enum amdgpu_mca_ip { enum amdgpu_mca_ip {

View File

@ -916,6 +916,11 @@ int amdgpu_mes_set_shader_debugger(struct amdgpu_device *adev,
op_input.op = MES_MISC_OP_SET_SHADER_DEBUGGER; op_input.op = MES_MISC_OP_SET_SHADER_DEBUGGER;
op_input.set_shader_debugger.process_context_addr = process_context_addr; op_input.set_shader_debugger.process_context_addr = process_context_addr;
op_input.set_shader_debugger.flags.u32all = flags; op_input.set_shader_debugger.flags.u32all = flags;
/* use amdgpu mes_flush_shader_debugger instead */
if (op_input.set_shader_debugger.flags.process_ctx_flush)
return -EINVAL;
op_input.set_shader_debugger.spi_gdbg_per_vmid_cntl = spi_gdbg_per_vmid_cntl; op_input.set_shader_debugger.spi_gdbg_per_vmid_cntl = spi_gdbg_per_vmid_cntl;
memcpy(op_input.set_shader_debugger.tcp_watch_cntl, tcp_watch_cntl, memcpy(op_input.set_shader_debugger.tcp_watch_cntl, tcp_watch_cntl,
sizeof(op_input.set_shader_debugger.tcp_watch_cntl)); sizeof(op_input.set_shader_debugger.tcp_watch_cntl));
@ -935,6 +940,32 @@ int amdgpu_mes_set_shader_debugger(struct amdgpu_device *adev,
return r; return r;
} }
int amdgpu_mes_flush_shader_debugger(struct amdgpu_device *adev,
uint64_t process_context_addr)
{
struct mes_misc_op_input op_input = {0};
int r;
if (!adev->mes.funcs->misc_op) {
DRM_ERROR("mes flush shader debugger is not supported!\n");
return -EINVAL;
}
op_input.op = MES_MISC_OP_SET_SHADER_DEBUGGER;
op_input.set_shader_debugger.process_context_addr = process_context_addr;
op_input.set_shader_debugger.flags.process_ctx_flush = true;
amdgpu_mes_lock(&adev->mes);
r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
if (r)
DRM_ERROR("failed to set_shader_debugger\n");
amdgpu_mes_unlock(&adev->mes);
return r;
}
static void static void
amdgpu_mes_ring_to_queue_props(struct amdgpu_device *adev, amdgpu_mes_ring_to_queue_props(struct amdgpu_device *adev,
struct amdgpu_ring *ring, struct amdgpu_ring *ring,

View File

@ -296,9 +296,10 @@ struct mes_misc_op_input {
uint64_t process_context_addr; uint64_t process_context_addr;
union { union {
struct { struct {
uint64_t single_memop : 1; uint32_t single_memop : 1;
uint64_t single_alu_op : 1; uint32_t single_alu_op : 1;
uint64_t reserved: 30; uint32_t reserved: 29;
uint32_t process_ctx_flush: 1;
}; };
uint32_t u32all; uint32_t u32all;
} flags; } flags;
@ -374,7 +375,8 @@ int amdgpu_mes_set_shader_debugger(struct amdgpu_device *adev,
const uint32_t *tcp_watch_cntl, const uint32_t *tcp_watch_cntl,
uint32_t flags, uint32_t flags,
bool trap_en); bool trap_en);
int amdgpu_mes_flush_shader_debugger(struct amdgpu_device *adev,
uint64_t process_context_addr);
int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id, int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
int queue_type, int idx, int queue_type, int idx,
struct amdgpu_mes_ctx_data *ctx_data, struct amdgpu_mes_ctx_data *ctx_data,

View File

@ -32,7 +32,6 @@
#include <drm/display/drm_dp_helper.h> #include <drm/display/drm_dp_helper.h>
#include <drm/drm_crtc.h> #include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/drm_encoder.h> #include <drm/drm_encoder.h>
#include <drm/drm_fixed.h> #include <drm/drm_fixed.h>
#include <drm/drm_framebuffer.h> #include <drm/drm_framebuffer.h>
@ -51,6 +50,7 @@ struct amdgpu_device;
struct amdgpu_encoder; struct amdgpu_encoder;
struct amdgpu_router; struct amdgpu_router;
struct amdgpu_hpd; struct amdgpu_hpd;
struct edid;
#define to_amdgpu_crtc(x) container_of(x, struct amdgpu_crtc, base) #define to_amdgpu_crtc(x) container_of(x, struct amdgpu_crtc, base)
#define to_amdgpu_connector(x) container_of(x, struct amdgpu_connector, base) #define to_amdgpu_connector(x) container_of(x, struct amdgpu_connector, base)
@ -343,6 +343,97 @@ struct amdgpu_mode_info {
int disp_priority; int disp_priority;
const struct amdgpu_display_funcs *funcs; const struct amdgpu_display_funcs *funcs;
const enum drm_plane_type *plane_type; const enum drm_plane_type *plane_type;
/* Driver-private color mgmt props */
/* @plane_degamma_lut_property: Plane property to set a degamma LUT to
* convert encoded values to light linear values before sampling or
* blending.
*/
struct drm_property *plane_degamma_lut_property;
/* @plane_degamma_lut_size_property: Plane property to define the max
* size of degamma LUT as supported by the driver (read-only).
*/
struct drm_property *plane_degamma_lut_size_property;
/**
* @plane_degamma_tf_property: Plane pre-defined transfer function to
* to go from scanout/encoded values to linear values.
*/
struct drm_property *plane_degamma_tf_property;
/**
* @plane_hdr_mult_property:
*/
struct drm_property *plane_hdr_mult_property;
struct drm_property *plane_ctm_property;
/**
* @shaper_lut_property: Plane property to set pre-blending shaper LUT
* that converts color content before 3D LUT. If
* plane_shaper_tf_property != Identity TF, AMD color module will
* combine the user LUT values with pre-defined TF into the LUT
* parameters to be programmed.
*/
struct drm_property *plane_shaper_lut_property;
/**
* @shaper_lut_size_property: Plane property for the size of
* pre-blending shaper LUT as supported by the driver (read-only).
*/
struct drm_property *plane_shaper_lut_size_property;
/**
* @plane_shaper_tf_property: Plane property to set a predefined
* transfer function for pre-blending shaper (before applying 3D LUT)
* with or without LUT. There is no shaper ROM, but we can use AMD
* color modules to program LUT parameters from predefined TF (or
* from a combination of pre-defined TF and the custom 1D LUT).
*/
struct drm_property *plane_shaper_tf_property;
/**
* @plane_lut3d_property: Plane property for color transformation using
* a 3D LUT (pre-blending), a three-dimensional array where each
* element is an RGB triplet. Each dimension has the size of
* lut3d_size. The array contains samples from the approximated
* function. On AMD, values between samples are estimated by
* tetrahedral interpolation. The array is accessed with three indices,
* one for each input dimension (color channel), blue being the
* outermost dimension, red the innermost.
*/
struct drm_property *plane_lut3d_property;
/**
* @plane_degamma_lut_size_property: Plane property to define the max
* size of 3D LUT as supported by the driver (read-only). The max size
* is the max size of one dimension and, therefore, the max number of
* entries for 3D LUT array is the 3D LUT size cubed;
*/
struct drm_property *plane_lut3d_size_property;
/**
* @plane_blend_lut_property: Plane property for output gamma before
* blending. Userspace set a blend LUT to convert colors after 3D LUT
* conversion. It works as a post-3DLUT 1D LUT. With shaper LUT, they
* are sandwiching 3D LUT with two 1D LUT. If plane_blend_tf_property
* != Identity TF, AMD color module will combine the user LUT values
* with pre-defined TF into the LUT parameters to be programmed.
*/
struct drm_property *plane_blend_lut_property;
/**
* @plane_blend_lut_size_property: Plane property to define the max
* size of blend LUT as supported by the driver (read-only).
*/
struct drm_property *plane_blend_lut_size_property;
/**
* @plane_blend_tf_property: Plane property to set a predefined
* transfer function for pre-blending blend/out_gamma (after applying
* 3D LUT) with or without LUT. There is no blend ROM, but we can use
* AMD color modules to program LUT parameters from predefined TF (or
* from a combination of pre-defined TF and the custom 1D LUT).
*/
struct drm_property *plane_blend_tf_property;
/* @regamma_tf_property: Transfer function for CRTC regamma
* (post-blending). Possible values are defined by `enum
* amdgpu_transfer_function`. There is no regamma ROM, but we can use
* AMD color modules to program LUT parameters from predefined TF (or
* from a combination of pre-defined TF and the custom 1D LUT).
*/
struct drm_property *regamma_tf_property;
}; };
#define AMDGPU_MAX_BL_LEVEL 0xFF #define AMDGPU_MAX_BL_LEVEL 0xFF

View File

@ -1245,19 +1245,15 @@ int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
* amdgpu_bo_move_notify - notification about a memory move * amdgpu_bo_move_notify - notification about a memory move
* @bo: pointer to a buffer object * @bo: pointer to a buffer object
* @evict: if this move is evicting the buffer from the graphics address space * @evict: if this move is evicting the buffer from the graphics address space
* @new_mem: new information of the bufer object
* *
* Marks the corresponding &amdgpu_bo buffer object as invalid, also performs * Marks the corresponding &amdgpu_bo buffer object as invalid, also performs
* bookkeeping. * bookkeeping.
* TTM driver callback which is called when ttm moves a buffer. * TTM driver callback which is called when ttm moves a buffer.
*/ */
void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, bool evict)
bool evict,
struct ttm_resource *new_mem)
{ {
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev); struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
struct amdgpu_bo *abo; struct amdgpu_bo *abo;
struct ttm_resource *old_mem = bo->resource;
if (!amdgpu_bo_is_amdgpu_bo(bo)) if (!amdgpu_bo_is_amdgpu_bo(bo))
return; return;
@ -1274,13 +1270,6 @@ void amdgpu_bo_move_notify(struct ttm_buffer_object *bo,
/* remember the eviction */ /* remember the eviction */
if (evict) if (evict)
atomic64_inc(&adev->num_evictions); atomic64_inc(&adev->num_evictions);
/* update statistics */
if (!new_mem)
return;
/* move_notify is called before move happens */
trace_amdgpu_bo_move(abo, new_mem->mem_type, old_mem->mem_type);
} }
void amdgpu_bo_get_memory(struct amdgpu_bo *bo, void amdgpu_bo_get_memory(struct amdgpu_bo *bo,
@ -1343,6 +1332,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
abo = ttm_to_amdgpu_bo(bo); abo = ttm_to_amdgpu_bo(bo);
WARN_ON(abo->vm_bo);
if (abo->kfd_bo) if (abo->kfd_bo)
amdgpu_amdkfd_release_notify(abo); amdgpu_amdkfd_release_notify(abo);

View File

@ -344,9 +344,7 @@ int amdgpu_bo_set_metadata (struct amdgpu_bo *bo, void *metadata,
int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer, int amdgpu_bo_get_metadata(struct amdgpu_bo *bo, void *buffer,
size_t buffer_size, uint32_t *metadata_size, size_t buffer_size, uint32_t *metadata_size,
uint64_t *flags); uint64_t *flags);
void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, void amdgpu_bo_move_notify(struct ttm_buffer_object *bo, bool evict);
bool evict,
struct ttm_resource *new_mem);
void amdgpu_bo_release_notify(struct ttm_buffer_object *bo); void amdgpu_bo_release_notify(struct ttm_buffer_object *bo);
vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo); vm_fault_t amdgpu_bo_fault_reserve_notify(struct ttm_buffer_object *bo);
void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence, void amdgpu_bo_fence(struct amdgpu_bo *bo, struct dma_fence *fence,

View File

@ -1433,8 +1433,8 @@ int psp_xgmi_get_topology_info(struct psp_context *psp,
get_extended_data) || get_extended_data) ||
amdgpu_ip_version(psp->adev, MP0_HWIP, 0) == amdgpu_ip_version(psp->adev, MP0_HWIP, 0) ==
IP_VERSION(13, 0, 6); IP_VERSION(13, 0, 6);
bool ta_port_num_support = psp->xgmi_context.xgmi_ta_caps & bool ta_port_num_support = amdgpu_sriov_vf(psp->adev) ? 0 :
EXTEND_PEER_LINK_INFO_CMD_FLAG; psp->xgmi_context.xgmi_ta_caps & EXTEND_PEER_LINK_INFO_CMD_FLAG;
/* popluate the shared output buffer rather than the cmd input buffer /* popluate the shared output buffer rather than the cmd input buffer
* with node_ids as the input for GET_PEER_LINKS command execution. * with node_ids as the input for GET_PEER_LINKS command execution.

View File

@ -642,6 +642,10 @@ static void amdgpu_ring_to_mqd_prop(struct amdgpu_ring *ring,
struct amdgpu_mqd_prop *prop) struct amdgpu_mqd_prop *prop)
{ {
struct amdgpu_device *adev = ring->adev; struct amdgpu_device *adev = ring->adev;
bool is_high_prio_compute = ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE &&
amdgpu_gfx_is_high_priority_compute_queue(adev, ring);
bool is_high_prio_gfx = ring->funcs->type == AMDGPU_RING_TYPE_GFX &&
amdgpu_gfx_is_high_priority_graphics_queue(adev, ring);
memset(prop, 0, sizeof(*prop)); memset(prop, 0, sizeof(*prop));
@ -659,10 +663,8 @@ static void amdgpu_ring_to_mqd_prop(struct amdgpu_ring *ring,
*/ */
prop->hqd_active = ring->funcs->type == AMDGPU_RING_TYPE_KIQ; prop->hqd_active = ring->funcs->type == AMDGPU_RING_TYPE_KIQ;
if ((ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE && prop->allow_tunneling = is_high_prio_compute;
amdgpu_gfx_is_high_priority_compute_queue(adev, ring)) || if (is_high_prio_compute || is_high_prio_gfx) {
(ring->funcs->type == AMDGPU_RING_TYPE_GFX &&
amdgpu_gfx_is_high_priority_graphics_queue(adev, ring))) {
prop->hqd_pipe_priority = AMDGPU_GFX_PIPE_PRIO_HIGH; prop->hqd_pipe_priority = AMDGPU_GFX_PIPE_PRIO_HIGH;
prop->hqd_queue_priority = AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM; prop->hqd_queue_priority = AMDGPU_GFX_QUEUE_PRIORITY_MAXIMUM;
} }

View File

@ -545,10 +545,11 @@ static int amdgpu_bo_move(struct ttm_buffer_object *bo, bool evict,
return r; return r;
} }
trace_amdgpu_bo_move(abo, new_mem->mem_type, old_mem->mem_type);
out: out:
/* update statistics */ /* update statistics */
atomic64_add(bo->base.size, &adev->num_bytes_moved); atomic64_add(bo->base.size, &adev->num_bytes_moved);
amdgpu_bo_move_notify(bo, evict, new_mem); amdgpu_bo_move_notify(bo, evict);
return 0; return 0;
} }
@ -1553,7 +1554,7 @@ static int amdgpu_ttm_access_memory(struct ttm_buffer_object *bo,
static void static void
amdgpu_bo_delete_mem_notify(struct ttm_buffer_object *bo) amdgpu_bo_delete_mem_notify(struct ttm_buffer_object *bo)
{ {
amdgpu_bo_move_notify(bo, false, NULL); amdgpu_bo_move_notify(bo, false);
} }
static struct ttm_device_funcs amdgpu_bo_driver = { static struct ttm_device_funcs amdgpu_bo_driver = {

View File

@ -1099,7 +1099,8 @@ bool amdgpu_sriov_xnack_support(struct amdgpu_device *adev)
{ {
bool xnack_mode = true; bool xnack_mode = true;
if (amdgpu_sriov_vf(adev) && adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 4, 2)) if (amdgpu_sriov_vf(adev) &&
amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 2))
xnack_mode = false; xnack_mode = false;
return xnack_mode; return xnack_mode;

View File

@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0+ // SPDX-License-Identifier: GPL-2.0+
#include <drm/drm_atomic_helper.h> #include <drm/drm_atomic_helper.h>
#include <drm/drm_edid.h>
#include <drm/drm_simple_kms_helper.h> #include <drm/drm_simple_kms_helper.h>
#include <drm/drm_vblank.h> #include <drm/drm_vblank.h>

View File

@ -642,13 +642,14 @@ static void amdgpu_vm_pt_free(struct amdgpu_vm_bo_base *entry)
if (!entry->bo) if (!entry->bo)
return; return;
entry->bo->vm_bo = NULL;
shadow = amdgpu_bo_shadowed(entry->bo); shadow = amdgpu_bo_shadowed(entry->bo);
if (shadow) { if (shadow) {
ttm_bo_set_bulk_move(&shadow->tbo, NULL); ttm_bo_set_bulk_move(&shadow->tbo, NULL);
amdgpu_bo_unref(&shadow); amdgpu_bo_unref(&shadow);
} }
ttm_bo_set_bulk_move(&entry->bo->tbo, NULL); ttm_bo_set_bulk_move(&entry->bo->tbo, NULL);
entry->bo->vm_bo = NULL;
spin_lock(&entry->vm->status_lock); spin_lock(&entry->vm->status_lock);
list_del(&entry->vm_status); list_del(&entry->vm_status);

View File

@ -26,6 +26,7 @@
#include "amdgpu.h" #include "amdgpu.h"
#include "amdgpu_ucode.h" #include "amdgpu_ucode.h"
#include "amdgpu_vpe.h" #include "amdgpu_vpe.h"
#include "amdgpu_smu.h"
#include "soc15_common.h" #include "soc15_common.h"
#include "vpe_v6_1.h" #include "vpe_v6_1.h"
@ -33,8 +34,186 @@
/* VPE CSA resides in the 4th page of CSA */ /* VPE CSA resides in the 4th page of CSA */
#define AMDGPU_CSA_VPE_OFFSET (4096 * 3) #define AMDGPU_CSA_VPE_OFFSET (4096 * 3)
/* 1 second timeout */
#define VPE_IDLE_TIMEOUT msecs_to_jiffies(1000)
#define VPE_MAX_DPM_LEVEL 4
#define FIXED1_8_BITS_PER_FRACTIONAL_PART 8
#define GET_PRATIO_INTEGER_PART(x) ((x) >> FIXED1_8_BITS_PER_FRACTIONAL_PART)
static void vpe_set_ring_funcs(struct amdgpu_device *adev); static void vpe_set_ring_funcs(struct amdgpu_device *adev);
static inline uint16_t div16_u16_rem(uint16_t dividend, uint16_t divisor, uint16_t *remainder)
{
*remainder = dividend % divisor;
return dividend / divisor;
}
static inline uint16_t complete_integer_division_u16(
uint16_t dividend,
uint16_t divisor,
uint16_t *remainder)
{
return div16_u16_rem(dividend, divisor, (uint16_t *)remainder);
}
static uint16_t vpe_u1_8_from_fraction(uint16_t numerator, uint16_t denominator)
{
bool arg1_negative = numerator < 0;
bool arg2_negative = denominator < 0;
uint16_t arg1_value = (uint16_t)(arg1_negative ? -numerator : numerator);
uint16_t arg2_value = (uint16_t)(arg2_negative ? -denominator : denominator);
uint16_t remainder;
/* determine integer part */
uint16_t res_value = complete_integer_division_u16(
arg1_value, arg2_value, &remainder);
if (res_value > 127 /* CHAR_MAX */)
return 0;
/* determine fractional part */
{
unsigned int i = FIXED1_8_BITS_PER_FRACTIONAL_PART;
do {
remainder <<= 1;
res_value <<= 1;
if (remainder >= arg2_value) {
res_value |= 1;
remainder -= arg2_value;
}
} while (--i != 0);
}
/* round up LSB */
{
uint16_t summand = (remainder << 1) >= arg2_value;
if ((res_value + summand) > 32767 /* SHRT_MAX */)
return 0;
res_value += summand;
}
if (arg1_negative ^ arg2_negative)
res_value = -res_value;
return res_value;
}
static uint16_t vpe_internal_get_pratio(uint16_t from_frequency, uint16_t to_frequency)
{
uint16_t pratio = vpe_u1_8_from_fraction(from_frequency, to_frequency);
if (GET_PRATIO_INTEGER_PART(pratio) > 1)
pratio = 0;
return pratio;
}
/*
* VPE has 4 DPM levels from level 0 (lowerest) to 3 (highest),
* VPE FW will dynamically decide which level should be used according to current loading.
*
* Get VPE and SOC clocks from PM, and select the appropriate four clock values,
* calculate the ratios of adjusting from one clock to another.
* The VPE FW can then request the appropriate frequency from the PMFW.
*/
int amdgpu_vpe_configure_dpm(struct amdgpu_vpe *vpe)
{
struct amdgpu_device *adev = vpe->ring.adev;
uint32_t dpm_ctl;
if (adev->pm.dpm_enabled) {
struct dpm_clocks clock_table = { 0 };
struct dpm_clock *VPEClks;
struct dpm_clock *SOCClks;
uint32_t idx;
uint32_t pratio_vmax_vnorm = 0, pratio_vnorm_vmid = 0, pratio_vmid_vmin = 0;
uint16_t pratio_vmin_freq = 0, pratio_vmid_freq = 0, pratio_vnorm_freq = 0, pratio_vmax_freq = 0;
dpm_ctl = RREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_enable));
dpm_ctl |= 1; /* DPM enablement */
WREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_enable), dpm_ctl);
/* Get VPECLK and SOCCLK */
if (amdgpu_dpm_get_dpm_clock_table(adev, &clock_table)) {
dev_dbg(adev->dev, "%s: get clock failed!\n", __func__);
goto disable_dpm;
}
SOCClks = clock_table.SocClocks;
VPEClks = clock_table.VPEClocks;
/* vpe dpm only cares 4 levels. */
for (idx = 0; idx < VPE_MAX_DPM_LEVEL; idx++) {
uint32_t soc_dpm_level;
uint32_t min_freq;
if (idx == 0)
soc_dpm_level = 0;
else
soc_dpm_level = (idx * 2) + 1;
/* clamp the max level */
if (soc_dpm_level > PP_SMU_NUM_VPECLK_DPM_LEVELS - 1)
soc_dpm_level = PP_SMU_NUM_VPECLK_DPM_LEVELS - 1;
min_freq = (SOCClks[soc_dpm_level].Freq < VPEClks[soc_dpm_level].Freq) ?
SOCClks[soc_dpm_level].Freq : VPEClks[soc_dpm_level].Freq;
switch (idx) {
case 0:
pratio_vmin_freq = min_freq;
break;
case 1:
pratio_vmid_freq = min_freq;
break;
case 2:
pratio_vnorm_freq = min_freq;
break;
case 3:
pratio_vmax_freq = min_freq;
break;
default:
break;
}
}
if (pratio_vmin_freq && pratio_vmid_freq && pratio_vnorm_freq && pratio_vmax_freq) {
uint32_t pratio_ctl;
pratio_vmax_vnorm = (uint32_t)vpe_internal_get_pratio(pratio_vmax_freq, pratio_vnorm_freq);
pratio_vnorm_vmid = (uint32_t)vpe_internal_get_pratio(pratio_vnorm_freq, pratio_vmid_freq);
pratio_vmid_vmin = (uint32_t)vpe_internal_get_pratio(pratio_vmid_freq, pratio_vmin_freq);
pratio_ctl = pratio_vmax_vnorm | (pratio_vnorm_vmid << 9) | (pratio_vmid_vmin << 18);
WREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_pratio), pratio_ctl); /* PRatio */
WREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_request_interval), 24000); /* 1ms, unit=1/24MHz */
WREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_decision_threshold), 1200000); /* 50ms */
WREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_busy_clamp_threshold), 1200000);/* 50ms */
WREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_idle_clamp_threshold), 1200000);/* 50ms */
dev_dbg(adev->dev, "%s: configure vpe dpm pratio done!\n", __func__);
} else {
dev_dbg(adev->dev, "%s: invalid pratio parameters!\n", __func__);
goto disable_dpm;
}
}
return 0;
disable_dpm:
dpm_ctl = RREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_enable));
dpm_ctl &= 0xfffffffe; /* Disable DPM */
WREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_enable), dpm_ctl);
dev_dbg(adev->dev, "%s: disable vpe dpm\n", __func__);
return 0;
}
int amdgpu_vpe_psp_update_sram(struct amdgpu_device *adev) int amdgpu_vpe_psp_update_sram(struct amdgpu_device *adev)
{ {
struct amdgpu_firmware_info ucode = { struct amdgpu_firmware_info ucode = {
@ -134,6 +313,19 @@ static int vpe_early_init(void *handle)
return 0; return 0;
} }
static void vpe_idle_work_handler(struct work_struct *work)
{
struct amdgpu_device *adev =
container_of(work, struct amdgpu_device, vpe.idle_work.work);
unsigned int fences = 0;
fences += amdgpu_fence_count_emitted(&adev->vpe.ring);
if (fences == 0)
amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VPE, AMD_PG_STATE_GATE);
else
schedule_delayed_work(&adev->vpe.idle_work, VPE_IDLE_TIMEOUT);
}
static int vpe_common_init(struct amdgpu_vpe *vpe) static int vpe_common_init(struct amdgpu_vpe *vpe)
{ {
@ -150,6 +342,9 @@ static int vpe_common_init(struct amdgpu_vpe *vpe)
return r; return r;
} }
vpe->context_started = false;
INIT_DELAYED_WORK(&adev->vpe.idle_work, vpe_idle_work_handler);
return 0; return 0;
} }
@ -219,6 +414,9 @@ static int vpe_hw_fini(void *handle)
vpe_ring_stop(vpe); vpe_ring_stop(vpe);
/* Power off VPE */
amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VPE, AMD_PG_STATE_GATE);
return 0; return 0;
} }
@ -226,6 +424,8 @@ static int vpe_suspend(void *handle)
{ {
struct amdgpu_device *adev = (struct amdgpu_device *)handle; struct amdgpu_device *adev = (struct amdgpu_device *)handle;
cancel_delayed_work_sync(&adev->vpe.idle_work);
return vpe_hw_fini(adev); return vpe_hw_fini(adev);
} }
@ -430,6 +630,21 @@ static int vpe_set_clockgating_state(void *handle,
static int vpe_set_powergating_state(void *handle, static int vpe_set_powergating_state(void *handle,
enum amd_powergating_state state) enum amd_powergating_state state)
{ {
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
struct amdgpu_vpe *vpe = &adev->vpe;
if (!adev->pm.dpm_enabled)
dev_err(adev->dev, "Without PM, cannot support powergating\n");
dev_dbg(adev->dev, "%s: %s!\n", __func__, (state == AMD_PG_STATE_GATE) ? "GATE":"UNGATE");
if (state == AMD_PG_STATE_GATE) {
amdgpu_dpm_enable_vpe(adev, false);
vpe->context_started = false;
} else {
amdgpu_dpm_enable_vpe(adev, true);
}
return 0; return 0;
} }
@ -595,6 +810,38 @@ err0:
return ret; return ret;
} }
static void vpe_ring_begin_use(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vpe *vpe = &adev->vpe;
cancel_delayed_work_sync(&adev->vpe.idle_work);
/* Power on VPE and notify VPE of new context */
if (!vpe->context_started) {
uint32_t context_notify;
/* Power on VPE */
amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VPE, AMD_PG_STATE_UNGATE);
/* Indicates that a job from a new context has been submitted. */
context_notify = RREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.context_indicator));
if ((context_notify & 0x1) == 0)
context_notify |= 0x1;
else
context_notify &= ~(0x1);
WREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.context_indicator), context_notify);
vpe->context_started = true;
}
}
static void vpe_ring_end_use(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
schedule_delayed_work(&adev->vpe.idle_work, VPE_IDLE_TIMEOUT);
}
static const struct amdgpu_ring_funcs vpe_ring_funcs = { static const struct amdgpu_ring_funcs vpe_ring_funcs = {
.type = AMDGPU_RING_TYPE_VPE, .type = AMDGPU_RING_TYPE_VPE,
.align_mask = 0xf, .align_mask = 0xf,
@ -625,6 +872,8 @@ static const struct amdgpu_ring_funcs vpe_ring_funcs = {
.init_cond_exec = vpe_ring_init_cond_exec, .init_cond_exec = vpe_ring_init_cond_exec,
.patch_cond_exec = vpe_ring_patch_cond_exec, .patch_cond_exec = vpe_ring_patch_cond_exec,
.preempt_ib = vpe_ring_preempt_ib, .preempt_ib = vpe_ring_preempt_ib,
.begin_use = vpe_ring_begin_use,
.end_use = vpe_ring_end_use,
}; };
static void vpe_set_ring_funcs(struct amdgpu_device *adev) static void vpe_set_ring_funcs(struct amdgpu_device *adev)

View File

@ -47,6 +47,15 @@ struct vpe_regs {
uint32_t queue0_rb_wptr_lo; uint32_t queue0_rb_wptr_lo;
uint32_t queue0_rb_wptr_hi; uint32_t queue0_rb_wptr_hi;
uint32_t queue0_preempt; uint32_t queue0_preempt;
uint32_t dpm_enable;
uint32_t dpm_pratio;
uint32_t dpm_request_interval;
uint32_t dpm_decision_threshold;
uint32_t dpm_busy_clamp_threshold;
uint32_t dpm_idle_clamp_threshold;
uint32_t dpm_request_lv;
uint32_t context_indicator;
}; };
struct amdgpu_vpe { struct amdgpu_vpe {
@ -63,12 +72,15 @@ struct amdgpu_vpe {
struct amdgpu_bo *cmdbuf_obj; struct amdgpu_bo *cmdbuf_obj;
uint64_t cmdbuf_gpu_addr; uint64_t cmdbuf_gpu_addr;
uint32_t *cmdbuf_cpu_addr; uint32_t *cmdbuf_cpu_addr;
struct delayed_work idle_work;
bool context_started;
}; };
int amdgpu_vpe_psp_update_sram(struct amdgpu_device *adev); int amdgpu_vpe_psp_update_sram(struct amdgpu_device *adev);
int amdgpu_vpe_init_microcode(struct amdgpu_vpe *vpe); int amdgpu_vpe_init_microcode(struct amdgpu_vpe *vpe);
int amdgpu_vpe_ring_init(struct amdgpu_vpe *vpe); int amdgpu_vpe_ring_init(struct amdgpu_vpe *vpe);
int amdgpu_vpe_ring_fini(struct amdgpu_vpe *vpe); int amdgpu_vpe_ring_fini(struct amdgpu_vpe *vpe);
int amdgpu_vpe_configure_dpm(struct amdgpu_vpe *vpe);
#define vpe_ring_init(vpe) ((vpe)->funcs->ring_init ? (vpe)->funcs->ring_init((vpe)) : 0) #define vpe_ring_init(vpe) ((vpe)->funcs->ring_init ? (vpe)->funcs->ring_init((vpe)) : 0)
#define vpe_ring_start(vpe) ((vpe)->funcs->ring_start ? (vpe)->funcs->ring_start((vpe)) : 0) #define vpe_ring_start(vpe) ((vpe)->funcs->ring_start ? (vpe)->funcs->ring_start((vpe)) : 0)

View File

@ -823,6 +823,28 @@ static int amdgpu_xgmi_initialize_hive_get_data_partition(struct amdgpu_hive_inf
return 0; return 0;
} }
static void amdgpu_xgmi_fill_topology_info(struct amdgpu_device *adev,
struct amdgpu_device *peer_adev)
{
struct psp_xgmi_topology_info *top_info = &adev->psp.xgmi_context.top_info;
struct psp_xgmi_topology_info *peer_info = &peer_adev->psp.xgmi_context.top_info;
for (int i = 0; i < peer_info->num_nodes; i++) {
if (peer_info->nodes[i].node_id == adev->gmc.xgmi.node_id) {
for (int j = 0; j < top_info->num_nodes; j++) {
if (top_info->nodes[j].node_id == peer_adev->gmc.xgmi.node_id) {
peer_info->nodes[i].num_hops = top_info->nodes[j].num_hops;
peer_info->nodes[i].is_sharing_enabled =
top_info->nodes[j].is_sharing_enabled;
peer_info->nodes[i].num_links =
top_info->nodes[j].num_links;
return;
}
}
}
}
}
int amdgpu_xgmi_add_device(struct amdgpu_device *adev) int amdgpu_xgmi_add_device(struct amdgpu_device *adev)
{ {
struct psp_xgmi_topology_info *top_info; struct psp_xgmi_topology_info *top_info;
@ -897,18 +919,38 @@ int amdgpu_xgmi_add_device(struct amdgpu_device *adev)
goto exit_unlock; goto exit_unlock;
} }
/* get latest topology info for each device from psp */ if (amdgpu_sriov_vf(adev) &&
list_for_each_entry(tmp_adev, &hive->device_list, gmc.xgmi.head) { adev->psp.xgmi_context.xgmi_ta_caps & EXTEND_PEER_LINK_INFO_CMD_FLAG) {
ret = psp_xgmi_get_topology_info(&tmp_adev->psp, count, /* only get topology for VF being init if it can support full duplex */
&tmp_adev->psp.xgmi_context.top_info, false); ret = psp_xgmi_get_topology_info(&adev->psp, count,
&adev->psp.xgmi_context.top_info, false);
if (ret) { if (ret) {
dev_err(tmp_adev->dev, dev_err(adev->dev,
"XGMI: Get topology failure on device %llx, hive %llx, ret %d", "XGMI: Get topology failure on device %llx, hive %llx, ret %d",
tmp_adev->gmc.xgmi.node_id, adev->gmc.xgmi.node_id,
tmp_adev->gmc.xgmi.hive_id, ret); adev->gmc.xgmi.hive_id, ret);
/* To do : continue with some node failed or disable the whole hive */ /* To do: continue with some node failed or disable the whole hive*/
goto exit_unlock; goto exit_unlock;
} }
/* fill the topology info for peers instead of getting from PSP */
list_for_each_entry(tmp_adev, &hive->device_list, gmc.xgmi.head) {
amdgpu_xgmi_fill_topology_info(adev, tmp_adev);
}
} else {
/* get latest topology info for each device from psp */
list_for_each_entry(tmp_adev, &hive->device_list, gmc.xgmi.head) {
ret = psp_xgmi_get_topology_info(&tmp_adev->psp, count,
&tmp_adev->psp.xgmi_context.top_info, false);
if (ret) {
dev_err(tmp_adev->dev,
"XGMI: Get topology failure on device %llx, hive %llx, ret %d",
tmp_adev->gmc.xgmi.node_id,
tmp_adev->gmc.xgmi.hive_id, ret);
/* To do : continue with some node failed or disable the whole hive */
goto exit_unlock;
}
}
} }
/* get topology again for hives that support extended data */ /* get topology again for hives that support extended data */

View File

@ -28,6 +28,7 @@
#include <acpi/video.h> #include <acpi/video.h>
#include <drm/drm_edid.h>
#include <drm/amdgpu_drm.h> #include <drm/amdgpu_drm.h>
#include "amdgpu.h" #include "amdgpu.h"
#include "amdgpu_connectors.h" #include "amdgpu_connectors.h"

View File

@ -21,6 +21,7 @@
* *
*/ */
#include <drm/drm_edid.h>
#include <drm/drm_fourcc.h> #include <drm/drm_fourcc.h>
#include <drm/drm_modeset_helper.h> #include <drm/drm_modeset_helper.h>
#include <drm/drm_modeset_helper_vtables.h> #include <drm/drm_modeset_helper_vtables.h>

View File

@ -21,6 +21,7 @@
* *
*/ */
#include <drm/drm_edid.h>
#include <drm/drm_fourcc.h> #include <drm/drm_fourcc.h>
#include <drm/drm_modeset_helper.h> #include <drm/drm_modeset_helper.h>
#include <drm/drm_modeset_helper_vtables.h> #include <drm/drm_modeset_helper_vtables.h>

View File

@ -23,6 +23,7 @@
#include <linux/pci.h> #include <linux/pci.h>
#include <drm/drm_edid.h>
#include <drm/drm_fourcc.h> #include <drm/drm_fourcc.h>
#include <drm/drm_modeset_helper.h> #include <drm/drm_modeset_helper.h>
#include <drm/drm_modeset_helper_vtables.h> #include <drm/drm_modeset_helper_vtables.h>

View File

@ -21,6 +21,7 @@
* *
*/ */
#include <drm/drm_edid.h>
#include <drm/drm_fourcc.h> #include <drm/drm_fourcc.h>
#include <drm/drm_modeset_helper.h> #include <drm/drm_modeset_helper.h>
#include <drm/drm_modeset_helper_vtables.h> #include <drm/drm_modeset_helper_vtables.h>

View File

@ -6593,7 +6593,8 @@ static int gfx_v10_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1); tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1);
#endif #endif
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0); tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0); tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH,
prop->allow_tunneling);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1); tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1); tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
mqd->cp_hqd_pq_control = tmp; mqd->cp_hqd_pq_control = tmp;

View File

@ -3847,7 +3847,8 @@ static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE, tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
(order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1)); (order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1));
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0); tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0); tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH,
prop->allow_tunneling);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1); tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1); tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
mqd->cp_hqd_pq_control = tmp; mqd->cp_hqd_pq_control = tmp;

View File

@ -883,7 +883,7 @@ static void gmc_v9_0_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
* GRBM interface. * GRBM interface.
*/ */
if ((vmhub == AMDGPU_GFXHUB(0)) && if ((vmhub == AMDGPU_GFXHUB(0)) &&
(adev->ip_versions[GC_HWIP][0] < IP_VERSION(9, 4, 2))) (amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(9, 4, 2)))
RREG32_NO_KIQ(req); RREG32_NO_KIQ(req);
for (j = 0; j < adev->usec_timeout; j++) { for (j = 0; j < adev->usec_timeout; j++) {

View File

@ -155,13 +155,6 @@ static int jpeg_v4_0_5_hw_init(void *handle)
struct amdgpu_ring *ring = adev->jpeg.inst->ring_dec; struct amdgpu_ring *ring = adev->jpeg.inst->ring_dec;
int r; int r;
adev->nbio.funcs->vcn_doorbell_range(adev, ring->use_doorbell,
(adev->doorbell_index.vcn.vcn_ring0_1 << 1), 0);
WREG32_SOC15(VCN, 0, regVCN_JPEG_DB_CTRL,
ring->doorbell_index << VCN_JPEG_DB_CTRL__OFFSET__SHIFT |
VCN_JPEG_DB_CTRL__EN_MASK);
r = amdgpu_ring_test_helper(ring); r = amdgpu_ring_test_helper(ring);
if (r) if (r)
return r; return r;
@ -336,6 +329,14 @@ static int jpeg_v4_0_5_start(struct amdgpu_device *adev)
if (adev->pm.dpm_enabled) if (adev->pm.dpm_enabled)
amdgpu_dpm_enable_jpeg(adev, true); amdgpu_dpm_enable_jpeg(adev, true);
/* doorbell programming is done for every playback */
adev->nbio.funcs->vcn_doorbell_range(adev, ring->use_doorbell,
(adev->doorbell_index.vcn.vcn_ring0_1 << 1), 0);
WREG32_SOC15(VCN, 0, regVCN_JPEG_DB_CTRL,
ring->doorbell_index << VCN_JPEG_DB_CTRL__OFFSET__SHIFT |
VCN_JPEG_DB_CTRL__EN_MASK);
/* disable power gating */ /* disable power gating */
r = jpeg_v4_0_5_disable_static_power_gating(adev); r = jpeg_v4_0_5_disable_static_power_gating(adev);
if (r) if (r)

View File

@ -813,12 +813,12 @@ static int sdma_v2_4_early_init(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle; struct amdgpu_device *adev = (struct amdgpu_device *)handle;
int r; int r;
adev->sdma.num_instances = SDMA_MAX_INSTANCE;
r = sdma_v2_4_init_microcode(adev); r = sdma_v2_4_init_microcode(adev);
if (r) if (r)
return r; return r;
adev->sdma.num_instances = SDMA_MAX_INSTANCE;
sdma_v2_4_set_ring_funcs(adev); sdma_v2_4_set_ring_funcs(adev);
sdma_v2_4_set_buffer_funcs(adev); sdma_v2_4_set_buffer_funcs(adev);
sdma_v2_4_set_vm_pte_funcs(adev); sdma_v2_4_set_vm_pte_funcs(adev);

View File

@ -1643,6 +1643,32 @@ static void sdma_v5_2_get_clockgating_state(void *handle, u64 *flags)
*flags |= AMD_CG_SUPPORT_SDMA_LS; *flags |= AMD_CG_SUPPORT_SDMA_LS;
} }
static void sdma_v5_2_ring_begin_use(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
/* SDMA 5.2.3 (RMB) FW doesn't seem to properly
* disallow GFXOFF in some cases leading to
* hangs in SDMA. Disallow GFXOFF while SDMA is active.
* We can probably just limit this to 5.2.3,
* but it shouldn't hurt for other parts since
* this GFXOFF will be disallowed anyway when SDMA is
* active, this just makes it explicit.
*/
amdgpu_gfx_off_ctrl(adev, false);
}
static void sdma_v5_2_ring_end_use(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
/* SDMA 5.2.3 (RMB) FW doesn't seem to properly
* disallow GFXOFF in some cases leading to
* hangs in SDMA. Allow GFXOFF when SDMA is complete.
*/
amdgpu_gfx_off_ctrl(adev, true);
}
const struct amd_ip_funcs sdma_v5_2_ip_funcs = { const struct amd_ip_funcs sdma_v5_2_ip_funcs = {
.name = "sdma_v5_2", .name = "sdma_v5_2",
.early_init = sdma_v5_2_early_init, .early_init = sdma_v5_2_early_init,
@ -1690,6 +1716,8 @@ static const struct amdgpu_ring_funcs sdma_v5_2_ring_funcs = {
.test_ib = sdma_v5_2_ring_test_ib, .test_ib = sdma_v5_2_ring_test_ib,
.insert_nop = sdma_v5_2_ring_insert_nop, .insert_nop = sdma_v5_2_ring_insert_nop,
.pad_ib = sdma_v5_2_ring_pad_ib, .pad_ib = sdma_v5_2_ring_pad_ib,
.begin_use = sdma_v5_2_ring_begin_use,
.end_use = sdma_v5_2_ring_end_use,
.emit_wreg = sdma_v5_2_ring_emit_wreg, .emit_wreg = sdma_v5_2_ring_emit_wreg,
.emit_reg_wait = sdma_v5_2_ring_emit_reg_wait, .emit_reg_wait = sdma_v5_2_ring_emit_reg_wait,
.emit_reg_write_reg_wait = sdma_v5_2_ring_emit_reg_write_reg_wait, .emit_reg_write_reg_wait = sdma_v5_2_ring_emit_reg_write_reg_wait,


@ -96,6 +96,10 @@ static int vpe_v6_1_load_microcode(struct amdgpu_vpe *vpe)
adev->vpe.cmdbuf_cpu_addr[1] = f32_cntl; adev->vpe.cmdbuf_cpu_addr[1] = f32_cntl;
amdgpu_vpe_psp_update_sram(adev); amdgpu_vpe_psp_update_sram(adev);
/* Config DPM */
amdgpu_vpe_configure_dpm(vpe);
return 0; return 0;
} }
@ -128,6 +132,8 @@ static int vpe_v6_1_load_microcode(struct amdgpu_vpe *vpe)
} }
vpe_v6_1_halt(vpe, false); vpe_v6_1_halt(vpe, false);
/* Config DPM */
amdgpu_vpe_configure_dpm(vpe);
return 0; return 0;
} }
@ -264,6 +270,15 @@ static int vpe_v6_1_set_regs(struct amdgpu_vpe *vpe)
vpe->regs.queue0_rb_wptr_hi = regVPEC_QUEUE0_RB_WPTR_HI; vpe->regs.queue0_rb_wptr_hi = regVPEC_QUEUE0_RB_WPTR_HI;
vpe->regs.queue0_preempt = regVPEC_QUEUE0_PREEMPT; vpe->regs.queue0_preempt = regVPEC_QUEUE0_PREEMPT;
vpe->regs.dpm_enable = regVPEC_PUB_DUMMY2;
vpe->regs.dpm_pratio = regVPEC_QUEUE6_DUMMY4;
vpe->regs.dpm_request_interval = regVPEC_QUEUE5_DUMMY3;
vpe->regs.dpm_decision_threshold = regVPEC_QUEUE5_DUMMY4;
vpe->regs.dpm_busy_clamp_threshold = regVPEC_QUEUE7_DUMMY2;
vpe->regs.dpm_idle_clamp_threshold = regVPEC_QUEUE7_DUMMY3;
vpe->regs.dpm_request_lv = regVPEC_QUEUE7_DUMMY1;
vpe->regs.context_indicator = regVPEC_QUEUE6_DUMMY3;
return 0; return 0;
} }


@ -1564,16 +1564,11 @@ static int kfd_ioctl_import_dmabuf(struct file *filep,
{ {
struct kfd_ioctl_import_dmabuf_args *args = data; struct kfd_ioctl_import_dmabuf_args *args = data;
struct kfd_process_device *pdd; struct kfd_process_device *pdd;
struct dma_buf *dmabuf;
int idr_handle; int idr_handle;
uint64_t size; uint64_t size;
void *mem; void *mem;
int r; int r;
dmabuf = dma_buf_get(args->dmabuf_fd);
if (IS_ERR(dmabuf))
return PTR_ERR(dmabuf);
mutex_lock(&p->mutex); mutex_lock(&p->mutex);
pdd = kfd_process_device_data_by_id(p, args->gpu_id); pdd = kfd_process_device_data_by_id(p, args->gpu_id);
if (!pdd) { if (!pdd) {
@ -1587,10 +1582,10 @@ static int kfd_ioctl_import_dmabuf(struct file *filep,
goto err_unlock; goto err_unlock;
} }
r = amdgpu_amdkfd_gpuvm_import_dmabuf(pdd->dev->adev, dmabuf, r = amdgpu_amdkfd_gpuvm_import_dmabuf_fd(pdd->dev->adev, args->dmabuf_fd,
args->va_addr, pdd->drm_priv, args->va_addr, pdd->drm_priv,
(struct kgd_mem **)&mem, &size, (struct kgd_mem **)&mem, &size,
NULL); NULL);
if (r) if (r)
goto err_unlock; goto err_unlock;
@ -1601,7 +1596,6 @@ static int kfd_ioctl_import_dmabuf(struct file *filep,
} }
mutex_unlock(&p->mutex); mutex_unlock(&p->mutex);
dma_buf_put(dmabuf);
args->handle = MAKE_HANDLE(args->gpu_id, idr_handle); args->handle = MAKE_HANDLE(args->gpu_id, idr_handle);
@ -1612,7 +1606,6 @@ err_free:
pdd->drm_priv, NULL); pdd->drm_priv, NULL);
err_unlock: err_unlock:
mutex_unlock(&p->mutex); mutex_unlock(&p->mutex);
dma_buf_put(dmabuf);
return r; return r;
} }
@ -1855,8 +1848,8 @@ static uint32_t get_process_num_bos(struct kfd_process *p)
return num_of_bos; return num_of_bos;
} }
static int criu_get_prime_handle(struct kgd_mem *mem, int flags, static int criu_get_prime_handle(struct kgd_mem *mem,
u32 *shared_fd) int flags, u32 *shared_fd)
{ {
struct dma_buf *dmabuf; struct dma_buf *dmabuf;
int ret; int ret;


@ -87,6 +87,8 @@ void kfd_process_dequeue_from_device(struct kfd_process_device *pdd)
return; return;
dev->dqm->ops.process_termination(dev->dqm, &pdd->qpd); dev->dqm->ops.process_termination(dev->dqm, &pdd->qpd);
if (dev->kfd->shared_resources.enable_mes)
amdgpu_mes_flush_shader_debugger(dev->adev, pdd->proc_ctx_gpu_addr);
pdd->already_dequeued = true; pdd->already_dequeued = true;
} }


@ -1607,18 +1607,24 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
if (test_bit(gpuidx, prange->bitmap_access)) if (test_bit(gpuidx, prange->bitmap_access))
bitmap_set(ctx->bitmap, gpuidx, 1); bitmap_set(ctx->bitmap, gpuidx, 1);
} }
/*
* If prange is already mapped or with always mapped flag,
* update mapping on GPUs with ACCESS attribute
*/
if (bitmap_empty(ctx->bitmap, MAX_GPU_INSTANCE)) {
if (prange->mapped_to_gpu ||
prange->flags & KFD_IOCTL_SVM_FLAG_GPU_ALWAYS_MAPPED)
bitmap_copy(ctx->bitmap, prange->bitmap_access, MAX_GPU_INSTANCE);
}
} else { } else {
bitmap_or(ctx->bitmap, prange->bitmap_access, bitmap_or(ctx->bitmap, prange->bitmap_access,
prange->bitmap_aip, MAX_GPU_INSTANCE); prange->bitmap_aip, MAX_GPU_INSTANCE);
} }
if (bitmap_empty(ctx->bitmap, MAX_GPU_INSTANCE)) { if (bitmap_empty(ctx->bitmap, MAX_GPU_INSTANCE)) {
bitmap_copy(ctx->bitmap, prange->bitmap_access, MAX_GPU_INSTANCE); r = 0;
if (!prange->mapped_to_gpu || goto free_ctx;
bitmap_empty(ctx->bitmap, MAX_GPU_INSTANCE)) {
r = 0;
goto free_ctx;
}
} }
if (prange->actual_loc && !prange->ttm_res) { if (prange->actual_loc && !prange->ttm_res) {


@ -1712,7 +1712,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
init_data.clk_reg_offsets = adev->reg_offset[CLK_HWIP][0]; init_data.clk_reg_offsets = adev->reg_offset[CLK_HWIP][0];
/* Enable DWB for tested platforms only */ /* Enable DWB for tested platforms only */
if (adev->ip_versions[DCE_HWIP][0] >= IP_VERSION(3, 0, 0)) if (amdgpu_ip_version(adev, DCE_HWIP, 0) >= IP_VERSION(3, 0, 0))
init_data.num_virtual_links = 1; init_data.num_virtual_links = 1;
INIT_LIST_HEAD(&adev->dm.da_list); INIT_LIST_HEAD(&adev->dm.da_list);
@ -2687,6 +2687,7 @@ static int dm_suspend(void *handle)
hpd_rx_irq_work_suspend(dm); hpd_rx_irq_work_suspend(dm);
dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3); dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D3);
dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D3);
return 0; return 0;
} }
@ -2882,6 +2883,7 @@ static int dm_resume(void *handle)
if (r) if (r)
DRM_ERROR("DMUB interface failed to initialize: status=%d\n", r); DRM_ERROR("DMUB interface failed to initialize: status=%d\n", r);
dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D0);
dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0); dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
dc_resume(dm->dc); dc_resume(dm->dc);
@ -2932,6 +2934,7 @@ static int dm_resume(void *handle)
} }
/* power on hardware */ /* power on hardware */
dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D0);
dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0); dc_set_power_state(dm->dc, DC_ACPI_CM_POWER_STATE_D0);
/* program HPD filter */ /* program HPD filter */
@ -4067,6 +4070,11 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
return r; return r;
} }
#ifdef AMD_PRIVATE_COLOR
if (amdgpu_dm_create_color_properties(adev))
return -ENOMEM;
#endif
r = amdgpu_dm_audio_init(adev); r = amdgpu_dm_audio_init(adev);
if (r) { if (r) {
dc_release_state(state->context); dc_release_state(state->context);
@ -5164,7 +5172,9 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
* Always set input transfer function, since plane state is refreshed * Always set input transfer function, since plane state is refreshed
* every time. * every time.
*/ */
ret = amdgpu_dm_update_plane_color_mgmt(dm_crtc_state, dc_plane_state); ret = amdgpu_dm_update_plane_color_mgmt(dm_crtc_state,
plane_state,
dc_plane_state);
if (ret) if (ret)
return ret; return ret;
@ -8261,6 +8271,10 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
bundle->surface_updates[planes_count].gamma = dc_plane->gamma_correction; bundle->surface_updates[planes_count].gamma = dc_plane->gamma_correction;
bundle->surface_updates[planes_count].in_transfer_func = dc_plane->in_transfer_func; bundle->surface_updates[planes_count].in_transfer_func = dc_plane->in_transfer_func;
bundle->surface_updates[planes_count].gamut_remap_matrix = &dc_plane->gamut_remap_matrix; bundle->surface_updates[planes_count].gamut_remap_matrix = &dc_plane->gamut_remap_matrix;
bundle->surface_updates[planes_count].hdr_mult = dc_plane->hdr_mult;
bundle->surface_updates[planes_count].func_shaper = dc_plane->in_shaper_func;
bundle->surface_updates[planes_count].lut3d_func = dc_plane->lut3d_func;
bundle->surface_updates[planes_count].blend_tf = dc_plane->blend_tf;
} }
amdgpu_dm_plane_fill_dc_scaling_info(dm->adev, new_plane_state, amdgpu_dm_plane_fill_dc_scaling_info(dm->adev, new_plane_state,
@ -8472,6 +8486,10 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
&acrtc_state->stream->csc_color_matrix; &acrtc_state->stream->csc_color_matrix;
bundle->stream_update.out_transfer_func = bundle->stream_update.out_transfer_func =
acrtc_state->stream->out_transfer_func; acrtc_state->stream->out_transfer_func;
bundle->stream_update.lut3d_func =
(struct dc_3dlut *) acrtc_state->stream->lut3d_func;
bundle->stream_update.func_shaper =
(struct dc_transfer_func *) acrtc_state->stream->func_shaper;
} }
acrtc_state->stream->abm_level = acrtc_state->abm_level; acrtc_state->stream->abm_level = acrtc_state->abm_level;
@ -8874,12 +8892,14 @@ static void dm_set_writeback(struct amdgpu_display_manager *dm,
acrtc = to_amdgpu_crtc(wb_conn->encoder.crtc); acrtc = to_amdgpu_crtc(wb_conn->encoder.crtc);
if (!acrtc) { if (!acrtc) {
DRM_ERROR("no amdgpu_crtc found\n"); DRM_ERROR("no amdgpu_crtc found\n");
kfree(wb_info);
return; return;
} }
afb = to_amdgpu_framebuffer(new_con_state->writeback_job->fb); afb = to_amdgpu_framebuffer(new_con_state->writeback_job->fb);
if (!afb) { if (!afb) {
DRM_ERROR("No amdgpu_framebuffer found\n"); DRM_ERROR("No amdgpu_framebuffer found\n");
kfree(wb_info);
return; return;
} }
@ -8934,7 +8954,7 @@ static void dm_set_writeback(struct amdgpu_display_manager *dm,
} }
wb_info->mcif_buf_params.p_vmid = 1; wb_info->mcif_buf_params.p_vmid = 1;
if (adev->ip_versions[DCE_HWIP][0] >= IP_VERSION(3, 0, 0)) { if (amdgpu_ip_version(adev, DCE_HWIP, 0) >= IP_VERSION(3, 0, 0)) {
wb_info->mcif_warmup_params.start_address.quad_part = afb->address; wb_info->mcif_warmup_params.start_address.quad_part = afb->address;
wb_info->mcif_warmup_params.region_size = wb_info->mcif_warmup_params.region_size =
wb_info->mcif_buf_params.luma_pitch * wb_info->dwb_params.dest_height; wb_info->mcif_buf_params.luma_pitch * wb_info->dwb_params.dest_height;
@ -9853,6 +9873,7 @@ skip_modeset:
* when a modeset is needed, to ensure it gets reprogrammed. * when a modeset is needed, to ensure it gets reprogrammed.
*/ */
if (dm_new_crtc_state->base.color_mgmt_changed || if (dm_new_crtc_state->base.color_mgmt_changed ||
dm_old_crtc_state->regamma_tf != dm_new_crtc_state->regamma_tf ||
drm_atomic_crtc_needs_modeset(new_crtc_state)) { drm_atomic_crtc_needs_modeset(new_crtc_state)) {
ret = amdgpu_dm_update_crtc_color_mgmt(dm_new_crtc_state); ret = amdgpu_dm_update_crtc_color_mgmt(dm_new_crtc_state);
if (ret) if (ret)
@ -9886,7 +9907,8 @@ static bool should_reset_plane(struct drm_atomic_state *state,
* TODO: Remove this hack for all asics once it proves that the * TODO: Remove this hack for all asics once it proves that the
* fast updates works fine on DCN3.2+. * fast updates works fine on DCN3.2+.
*/ */
if (adev->ip_versions[DCE_HWIP][0] < IP_VERSION(3, 2, 0) && state->allow_modeset) if (amdgpu_ip_version(adev, DCE_HWIP, 0) < IP_VERSION(3, 2, 0) &&
state->allow_modeset)
return true; return true;
/* Exit early if we know that we're adding or removing the plane. */ /* Exit early if we know that we're adding or removing the plane. */
@ -9920,6 +9942,10 @@ static bool should_reset_plane(struct drm_atomic_state *state,
*/ */
for_each_oldnew_plane_in_state(state, other, old_other_state, new_other_state, i) { for_each_oldnew_plane_in_state(state, other, old_other_state, new_other_state, i) {
struct amdgpu_framebuffer *old_afb, *new_afb; struct amdgpu_framebuffer *old_afb, *new_afb;
struct dm_plane_state *dm_new_other_state, *dm_old_other_state;
dm_new_other_state = to_dm_plane_state(new_other_state);
dm_old_other_state = to_dm_plane_state(old_other_state);
if (other->type == DRM_PLANE_TYPE_CURSOR) if (other->type == DRM_PLANE_TYPE_CURSOR)
continue; continue;
@ -9956,6 +9982,18 @@ static bool should_reset_plane(struct drm_atomic_state *state,
old_other_state->color_encoding != new_other_state->color_encoding) old_other_state->color_encoding != new_other_state->color_encoding)
return true; return true;
/* HDR/Transfer Function changes. */
if (dm_old_other_state->degamma_tf != dm_new_other_state->degamma_tf ||
dm_old_other_state->degamma_lut != dm_new_other_state->degamma_lut ||
dm_old_other_state->hdr_mult != dm_new_other_state->hdr_mult ||
dm_old_other_state->ctm != dm_new_other_state->ctm ||
dm_old_other_state->shaper_lut != dm_new_other_state->shaper_lut ||
dm_old_other_state->shaper_tf != dm_new_other_state->shaper_tf ||
dm_old_other_state->lut3d != dm_new_other_state->lut3d ||
dm_old_other_state->blend_lut != dm_new_other_state->blend_lut ||
dm_old_other_state->blend_tf != dm_new_other_state->blend_tf)
return true;
/* Framebuffer checks fall at the end. */ /* Framebuffer checks fall at the end. */
if (!old_other_state->fb || !new_other_state->fb) if (!old_other_state->fb || !new_other_state->fb)
continue; continue;


@ -55,6 +55,9 @@
#define HDMI_AMD_VENDOR_SPECIFIC_DATA_BLOCK_IEEE_REGISTRATION_ID 0x00001A #define HDMI_AMD_VENDOR_SPECIFIC_DATA_BLOCK_IEEE_REGISTRATION_ID 0x00001A
#define AMD_VSDB_VERSION_3_FEATURECAP_REPLAYMODE 0x40 #define AMD_VSDB_VERSION_3_FEATURECAP_REPLAYMODE 0x40
#define HDMI_AMD_VENDOR_SPECIFIC_DATA_BLOCK_VERSION_3 0x3 #define HDMI_AMD_VENDOR_SPECIFIC_DATA_BLOCK_VERSION_3 0x3
#define AMDGPU_HDR_MULT_DEFAULT (0x100000000LL)
/* /*
#include "include/amdgpu_dal_power_if.h" #include "include/amdgpu_dal_power_if.h"
#include "amdgpu_dm_irq.h" #include "amdgpu_dm_irq.h"
@ -724,9 +727,98 @@ struct amdgpu_dm_wb_connector {
extern const struct amdgpu_ip_block_version dm_ip_block; extern const struct amdgpu_ip_block_version dm_ip_block;
/* enum amdgpu_transfer_function: pre-defined transfer function supported by AMD.
*
* It includes standardized transfer functions and pure power functions. The
* transfer function coefficients are available at modules/color/color_gamma.c
*/
enum amdgpu_transfer_function {
AMDGPU_TRANSFER_FUNCTION_DEFAULT,
AMDGPU_TRANSFER_FUNCTION_SRGB_EOTF,
AMDGPU_TRANSFER_FUNCTION_BT709_INV_OETF,
AMDGPU_TRANSFER_FUNCTION_PQ_EOTF,
AMDGPU_TRANSFER_FUNCTION_IDENTITY,
AMDGPU_TRANSFER_FUNCTION_GAMMA22_EOTF,
AMDGPU_TRANSFER_FUNCTION_GAMMA24_EOTF,
AMDGPU_TRANSFER_FUNCTION_GAMMA26_EOTF,
AMDGPU_TRANSFER_FUNCTION_SRGB_INV_EOTF,
AMDGPU_TRANSFER_FUNCTION_BT709_OETF,
AMDGPU_TRANSFER_FUNCTION_PQ_INV_EOTF,
AMDGPU_TRANSFER_FUNCTION_GAMMA22_INV_EOTF,
AMDGPU_TRANSFER_FUNCTION_GAMMA24_INV_EOTF,
AMDGPU_TRANSFER_FUNCTION_GAMMA26_INV_EOTF,
AMDGPU_TRANSFER_FUNCTION_COUNT
};
struct dm_plane_state { struct dm_plane_state {
struct drm_plane_state base; struct drm_plane_state base;
struct dc_plane_state *dc_state; struct dc_plane_state *dc_state;
/* Plane color mgmt */
/**
* @degamma_lut:
*
* 1D LUT for mapping framebuffer/plane pixel data before sampling or
* blending operations. It's usually applied to linearize input space.
* The blob (if not NULL) is an array of &struct drm_color_lut.
*/
struct drm_property_blob *degamma_lut;
/**
* @degamma_tf:
*
* Predefined transfer function to tell DC driver the input space to
* linearize.
*/
enum amdgpu_transfer_function degamma_tf;
/**
* @hdr_mult:
*
* Multiplier to 'gain' the plane. When PQ is decoded using the fixed
* func transfer function to the internal FP16 fb, 1.0 -> 80 nits (on
* AMD at least). When sRGB is decoded, 1.0 -> 1.0, obviously.
* Therefore, a 1.0 multiplier = 80 nits for SDR content. So if you
* want 203 nits for SDR content, pass in (203.0 / 80.0). Format is
* S31.32 sign-magnitude.
*
* The HDR multiplier can range widely beyond [0.0, 1.0]. This means that a
* PQ TF is needed for any subsequent linear-to-non-linear transforms.
*/
__u64 hdr_mult;
/**
* @ctm:
*
* Color transformation matrix. The blob (if not NULL) is a &struct
* drm_color_ctm_3x4.
*/
struct drm_property_blob *ctm;
/**
* @shaper_lut: shaper lookup table blob. The blob (if not NULL) is an
* array of &struct drm_color_lut.
*/
struct drm_property_blob *shaper_lut;
/**
* @shaper_tf:
*
* Predefined transfer function to delinearize color space.
*/
enum amdgpu_transfer_function shaper_tf;
/**
* @lut3d: 3D lookup table blob. The blob (if not NULL) is an array of
* &struct drm_color_lut.
*/
struct drm_property_blob *lut3d;
/**
* @blend_lut: blend lut lookup table blob. The blob (if not NULL) is an
* array of &struct drm_color_lut.
*/
struct drm_property_blob *blend_lut;
/**
* @blend_tf:
*
* Pre-defined transfer function for converting plane pixel data before
* applying blend LUT.
*/
enum amdgpu_transfer_function blend_tf;
}; };
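
Note: a worked example of the @hdr_mult encoding documented above (standalone sketch, not driver code; the helper name is made up). 1.0 maps SDR peak white to 80 nits, so targeting 203-nit SDR white means passing 203.0 / 80.0 = 2.5375 in S31.32 sign-magnitude:

#include <stdint.h>

/* Encode a double as S31.32 sign-magnitude (illustrative helper). */
static uint64_t s31_32_from_double(double v)
{
	uint64_t sign = v < 0.0 ? (1ULL << 63) : 0;
	double mag = v < 0.0 ? -v : v;

	return sign | (uint64_t)(mag * 4294967296.0); /* magnitude * 2^32 */
}

/*
 * s31_32_from_double(1.0)          == 0x100000000  (AMDGPU_HDR_MULT_DEFAULT)
 * s31_32_from_double(203.0 / 80.0) ~= 0x28999999a  (203-nit SDR white)
 */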
struct dm_crtc_state { struct dm_crtc_state {
@ -751,6 +843,14 @@ struct dm_crtc_state {
struct dc_info_packet vrr_infopacket; struct dc_info_packet vrr_infopacket;
int abm_level; int abm_level;
/**
* @regamma_tf:
*
* Pre-defined transfer function for converting internal FB -> wire
* encoding.
*/
enum amdgpu_transfer_function regamma_tf;
}; };
#define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base) #define to_dm_crtc_state(x) container_of(x, struct dm_crtc_state, base)
@ -812,14 +912,22 @@ void amdgpu_dm_update_freesync_caps(struct drm_connector *connector,
void amdgpu_dm_trigger_timing_sync(struct drm_device *dev); void amdgpu_dm_trigger_timing_sync(struct drm_device *dev);
/* 3D LUT max size is 17x17x17 (4913 entries) */
#define MAX_COLOR_3DLUT_SIZE 17
#define MAX_COLOR_3DLUT_BITDEPTH 12
int amdgpu_dm_verify_lut3d_size(struct amdgpu_device *adev,
struct drm_plane_state *plane_state);
/* 1D LUT size */
#define MAX_COLOR_LUT_ENTRIES 4096 #define MAX_COLOR_LUT_ENTRIES 4096
/* Legacy gamm LUT users such as X doesn't like large LUT sizes */ /* Legacy gamm LUT users such as X doesn't like large LUT sizes */
#define MAX_COLOR_LEGACY_LUT_ENTRIES 256 #define MAX_COLOR_LEGACY_LUT_ENTRIES 256
void amdgpu_dm_init_color_mod(void); void amdgpu_dm_init_color_mod(void);
int amdgpu_dm_create_color_properties(struct amdgpu_device *adev);
int amdgpu_dm_verify_lut_sizes(const struct drm_crtc_state *crtc_state); int amdgpu_dm_verify_lut_sizes(const struct drm_crtc_state *crtc_state);
int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc); int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc);
int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc, int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc,
struct drm_plane_state *plane_state,
struct dc_plane_state *dc_plane_state); struct dc_plane_state *dc_plane_state);
void amdgpu_dm_update_connector_after_detect( void amdgpu_dm_update_connector_after_detect(


@ -72,6 +72,7 @@
*/ */
#define MAX_DRM_LUT_VALUE 0xFFFF #define MAX_DRM_LUT_VALUE 0xFFFF
#define SDR_WHITE_LEVEL_INIT_VALUE 80
/** /**
* amdgpu_dm_init_color_mod - Initialize the color module. * amdgpu_dm_init_color_mod - Initialize the color module.
@ -84,6 +85,235 @@ void amdgpu_dm_init_color_mod(void)
setup_x_points_distribution(); setup_x_points_distribution();
} }
#ifdef AMD_PRIVATE_COLOR
/* Pre-defined Transfer Functions (TF)
*
* AMD driver supports pre-defined mathematical functions for transferring
* between encoded values and optical/linear space. Depending on HW color caps,
* ROMs and curves built by the AMD color module support these transforms.
*
* The driver-specific color implementation exposes properties for pre-blending
* degamma TF, shaper TF (before 3D LUT), and blend(dpp.ogam) TF and
* post-blending regamma (mpc.ogam) TF. However, only pre-blending degamma
* supports ROM curves. AMD color module uses pre-defined coefficients to build
* curves for the other blocks. What can be done by each color block is
* described by struct dpp_color_caps and struct mpc_color_caps.
*
* AMD driver-specific color API exposes the following pre-defined transfer
* functions:
*
* - Identity: linear/identity relationship between pixel value and
* luminance value;
* - Gamma 2.2, Gamma 2.4, Gamma 2.6: pure power functions;
* - sRGB: 2.4: The piece-wise transfer function from IEC 61966-2-1:1999;
* - BT.709: has a linear segment in the bottom part and then a power function
* with a 0.45 (~1/2.22) gamma for the rest of the range; standardized by
* ITU-R BT.709-6;
* - PQ (Perceptual Quantizer): used for HDR display, allows luminance range
* capability of 0 to 10,000 nits; standardized by SMPTE ST 2084.
*
* The AMD color model is designed with an assumption that SDR (sRGB, BT.709,
* Gamma 2.2, etc.) peak white maps (normalized to 1.0 FP) to 80 nits in the PQ
* system. This has the implication that PQ EOTF (non-linear to linear) maps to
* [0.0..125.0] where 125.0 = 10,000 nits / 80 nits.
*
* Non-linear and linear forms are described in the table below:
*
*
* Non-linear Linear
*
* sRGB UNORM or [0.0, 1.0] [0.0, 1.0]
*
* BT709 UNORM or [0.0, 1.0] [0.0, 1.0]
*
* Gamma 2.x UNORM or [0.0, 1.0] [0.0, 1.0]
*
* PQ UNORM or FP16 CCCS* [0.0, 125.0]
*
* Identity UNORM or FP16 CCCS* [0.0, 1.0] or CCCS**
*
* * CCCS: Windows canonical composition color space
* ** Respectively
*
* In the driver-specific API, color block names attached to TF properties
* suggest the intention regarding non-linear encoding pixel's luminance
* values. As some newer encodings don't use gamma curve, we make encoding and
* decoding explicit by defining an enum list of transfer functions supported
* in terms of EOTF and inverse EOTF, where:
*
* - EOTF (electro-optical transfer function): is the transfer function to go
* from the encoded value to an optical (linear) value. De-gamma functions
* traditionally do this.
* - Inverse EOTF (simply the inverse of the EOTF): is usually intended to go
* from an optical/linear space (which might have been used for blending)
* back to the encoded values. Gamma functions traditionally do this.
*/
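
Note: a quick check of the PQ range quoted above (plain arithmetic, not driver code):

/*
 * PQ EOTF peak luminance:           10,000 nits
 * SDR reference white (1.0 FP):         80 nits
 * Linear range after PQ decoding:   [0.0, 10000.0 / 80.0] = [0.0, 125.0]
 */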
static const char * const
amdgpu_transfer_function_names[] = {
[AMDGPU_TRANSFER_FUNCTION_DEFAULT] = "Default",
[AMDGPU_TRANSFER_FUNCTION_IDENTITY] = "Identity",
[AMDGPU_TRANSFER_FUNCTION_SRGB_EOTF] = "sRGB EOTF",
[AMDGPU_TRANSFER_FUNCTION_BT709_INV_OETF] = "BT.709 inv_OETF",
[AMDGPU_TRANSFER_FUNCTION_PQ_EOTF] = "PQ EOTF",
[AMDGPU_TRANSFER_FUNCTION_GAMMA22_EOTF] = "Gamma 2.2 EOTF",
[AMDGPU_TRANSFER_FUNCTION_GAMMA24_EOTF] = "Gamma 2.4 EOTF",
[AMDGPU_TRANSFER_FUNCTION_GAMMA26_EOTF] = "Gamma 2.6 EOTF",
[AMDGPU_TRANSFER_FUNCTION_SRGB_INV_EOTF] = "sRGB inv_EOTF",
[AMDGPU_TRANSFER_FUNCTION_BT709_OETF] = "BT.709 OETF",
[AMDGPU_TRANSFER_FUNCTION_PQ_INV_EOTF] = "PQ inv_EOTF",
[AMDGPU_TRANSFER_FUNCTION_GAMMA22_INV_EOTF] = "Gamma 2.2 inv_EOTF",
[AMDGPU_TRANSFER_FUNCTION_GAMMA24_INV_EOTF] = "Gamma 2.4 inv_EOTF",
[AMDGPU_TRANSFER_FUNCTION_GAMMA26_INV_EOTF] = "Gamma 2.6 inv_EOTF",
};
static const u32 amdgpu_eotf =
BIT(AMDGPU_TRANSFER_FUNCTION_SRGB_EOTF) |
BIT(AMDGPU_TRANSFER_FUNCTION_BT709_INV_OETF) |
BIT(AMDGPU_TRANSFER_FUNCTION_PQ_EOTF) |
BIT(AMDGPU_TRANSFER_FUNCTION_GAMMA22_EOTF) |
BIT(AMDGPU_TRANSFER_FUNCTION_GAMMA24_EOTF) |
BIT(AMDGPU_TRANSFER_FUNCTION_GAMMA26_EOTF);
static const u32 amdgpu_inv_eotf =
BIT(AMDGPU_TRANSFER_FUNCTION_SRGB_INV_EOTF) |
BIT(AMDGPU_TRANSFER_FUNCTION_BT709_OETF) |
BIT(AMDGPU_TRANSFER_FUNCTION_PQ_INV_EOTF) |
BIT(AMDGPU_TRANSFER_FUNCTION_GAMMA22_INV_EOTF) |
BIT(AMDGPU_TRANSFER_FUNCTION_GAMMA24_INV_EOTF) |
BIT(AMDGPU_TRANSFER_FUNCTION_GAMMA26_INV_EOTF);
static struct drm_property *
amdgpu_create_tf_property(struct drm_device *dev,
const char *name,
u32 supported_tf)
{
u32 transfer_functions = supported_tf |
BIT(AMDGPU_TRANSFER_FUNCTION_DEFAULT) |
BIT(AMDGPU_TRANSFER_FUNCTION_IDENTITY);
struct drm_prop_enum_list enum_list[AMDGPU_TRANSFER_FUNCTION_COUNT];
int i, len;
len = 0;
for (i = 0; i < AMDGPU_TRANSFER_FUNCTION_COUNT; i++) {
if ((transfer_functions & BIT(i)) == 0)
continue;
enum_list[len].type = i;
enum_list[len].name = amdgpu_transfer_function_names[i];
len++;
}
return drm_property_create_enum(dev, DRM_MODE_PROP_ENUM,
name, enum_list, len);
}
int
amdgpu_dm_create_color_properties(struct amdgpu_device *adev)
{
struct drm_property *prop;
prop = drm_property_create(adev_to_drm(adev),
DRM_MODE_PROP_BLOB,
"AMD_PLANE_DEGAMMA_LUT", 0);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_degamma_lut_property = prop;
prop = drm_property_create_range(adev_to_drm(adev),
DRM_MODE_PROP_IMMUTABLE,
"AMD_PLANE_DEGAMMA_LUT_SIZE",
0, UINT_MAX);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_degamma_lut_size_property = prop;
prop = amdgpu_create_tf_property(adev_to_drm(adev),
"AMD_PLANE_DEGAMMA_TF",
amdgpu_eotf);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_degamma_tf_property = prop;
prop = drm_property_create_range(adev_to_drm(adev),
0, "AMD_PLANE_HDR_MULT", 0, U64_MAX);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_hdr_mult_property = prop;
prop = drm_property_create(adev_to_drm(adev),
DRM_MODE_PROP_BLOB,
"AMD_PLANE_CTM", 0);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_ctm_property = prop;
prop = drm_property_create(adev_to_drm(adev),
DRM_MODE_PROP_BLOB,
"AMD_PLANE_SHAPER_LUT", 0);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_shaper_lut_property = prop;
prop = drm_property_create_range(adev_to_drm(adev),
DRM_MODE_PROP_IMMUTABLE,
"AMD_PLANE_SHAPER_LUT_SIZE", 0, UINT_MAX);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_shaper_lut_size_property = prop;
prop = amdgpu_create_tf_property(adev_to_drm(adev),
"AMD_PLANE_SHAPER_TF",
amdgpu_inv_eotf);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_shaper_tf_property = prop;
prop = drm_property_create(adev_to_drm(adev),
DRM_MODE_PROP_BLOB,
"AMD_PLANE_LUT3D", 0);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_lut3d_property = prop;
prop = drm_property_create_range(adev_to_drm(adev),
DRM_MODE_PROP_IMMUTABLE,
"AMD_PLANE_LUT3D_SIZE", 0, UINT_MAX);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_lut3d_size_property = prop;
prop = drm_property_create(adev_to_drm(adev),
DRM_MODE_PROP_BLOB,
"AMD_PLANE_BLEND_LUT", 0);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_blend_lut_property = prop;
prop = drm_property_create_range(adev_to_drm(adev),
DRM_MODE_PROP_IMMUTABLE,
"AMD_PLANE_BLEND_LUT_SIZE", 0, UINT_MAX);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_blend_lut_size_property = prop;
prop = amdgpu_create_tf_property(adev_to_drm(adev),
"AMD_PLANE_BLEND_TF",
amdgpu_eotf);
if (!prop)
return -ENOMEM;
adev->mode_info.plane_blend_tf_property = prop;
prop = amdgpu_create_tf_property(adev_to_drm(adev),
"AMD_CRTC_REGAMMA_TF",
amdgpu_inv_eotf);
if (!prop)
return -ENOMEM;
adev->mode_info.regamma_tf_property = prop;
return 0;
}
#endif
/** /**
* __extract_blob_lut - Extracts the DRM lut and lut size from a blob. * __extract_blob_lut - Extracts the DRM lut and lut size from a blob.
* @blob: DRM color mgmt property blob * @blob: DRM color mgmt property blob
@ -182,7 +412,6 @@ static void __drm_lut_to_dc_gamma(const struct drm_color_lut *lut,
static void __drm_ctm_to_dc_matrix(const struct drm_color_ctm *ctm, static void __drm_ctm_to_dc_matrix(const struct drm_color_ctm *ctm,
struct fixed31_32 *matrix) struct fixed31_32 *matrix)
{ {
int64_t val;
int i; int i;
/* /*
@ -201,12 +430,29 @@ static void __drm_ctm_to_dc_matrix(const struct drm_color_ctm *ctm,
} }
/* gamut_remap_matrix[i] = ctm[i - floor(i/4)] */ /* gamut_remap_matrix[i] = ctm[i - floor(i/4)] */
val = ctm->matrix[i - (i / 4)]; matrix[i] = dc_fixpt_from_s3132(ctm->matrix[i - (i / 4)]);
/* If negative, convert to 2's complement. */ }
if (val & (1ULL << 63)) }
val = -(val & ~(1ULL << 63));
matrix[i].value = val; /**
* __drm_ctm_3x4_to_dc_matrix - converts a DRM CTM 3x4 to a DC CSC float matrix
* @ctm: DRM color transformation matrix with 3x4 dimensions
* @matrix: DC CSC float matrix
*
* The matrix needs to be a 3x4 (12 entry) matrix.
*/
static void __drm_ctm_3x4_to_dc_matrix(const struct drm_color_ctm_3x4 *ctm,
struct fixed31_32 *matrix)
{
int i;
/* The format provided is S31.32, using signed-magnitude representation.
* Our fixed31_32 is also S31.32, but is using 2's complement. We have
* to convert from signed-magnitude to 2's complement.
*/
for (i = 0; i < 12; i++) {
/* gamut_remap_matrix[i] = ctm[i - floor(i/4)] */
matrix[i] = dc_fixpt_from_s3132(ctm->matrix[i]);
} }
} }
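
Note: the open-coded conversion removed above is what dc_fixpt_from_s3132() now wraps; a minimal sketch of that signed-magnitude to two's-complement step, mirroring the removed lines (not the actual DC helper):

static long long s3132_to_twos_complement(unsigned long long v)
{
	/* If the sign bit is set, negate the 63-bit magnitude. */
	if (v & (1ULL << 63))
		return -(long long)(v & ~(1ULL << 63));

	return (long long)v;
}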
@ -268,16 +514,18 @@ static int __set_output_tf(struct dc_transfer_func *func,
struct calculate_buffer cal_buffer = {0}; struct calculate_buffer cal_buffer = {0};
bool res; bool res;
ASSERT(lut && lut_size == MAX_COLOR_LUT_ENTRIES);
cal_buffer.buffer_index = -1; cal_buffer.buffer_index = -1;
gamma = dc_create_gamma(); if (lut_size) {
if (!gamma) ASSERT(lut && lut_size == MAX_COLOR_LUT_ENTRIES);
return -ENOMEM;
gamma->num_entries = lut_size; gamma = dc_create_gamma();
__drm_lut_to_dc_gamma(lut, gamma, false); if (!gamma)
return -ENOMEM;
gamma->num_entries = lut_size;
__drm_lut_to_dc_gamma(lut, gamma, false);
}
if (func->tf == TRANSFER_FUNCTION_LINEAR) { if (func->tf == TRANSFER_FUNCTION_LINEAR) {
/* /*
@ -285,27 +533,68 @@ static int __set_output_tf(struct dc_transfer_func *func,
* on top of a linear input. But degamma params can be used * on top of a linear input. But degamma params can be used
* instead to simulate this. * instead to simulate this.
*/ */
gamma->type = GAMMA_CUSTOM; if (gamma)
gamma->type = GAMMA_CUSTOM;
res = mod_color_calculate_degamma_params(NULL, func, res = mod_color_calculate_degamma_params(NULL, func,
gamma, true); gamma, gamma != NULL);
} else { } else {
/* /*
* Assume sRGB. The actual mapping will depend on whether the * Assume sRGB. The actual mapping will depend on whether the
* input was legacy or not. * input was legacy or not.
*/ */
gamma->type = GAMMA_CS_TFM_1D; if (gamma)
res = mod_color_calculate_regamma_params(func, gamma, false, gamma->type = GAMMA_CS_TFM_1D;
res = mod_color_calculate_regamma_params(func, gamma, gamma != NULL,
has_rom, NULL, &cal_buffer); has_rom, NULL, &cal_buffer);
} }
dc_gamma_release(&gamma); if (gamma)
dc_gamma_release(&gamma);
return res ? 0 : -ENOMEM; return res ? 0 : -ENOMEM;
} }
static int amdgpu_dm_set_atomic_regamma(struct dc_stream_state *stream,
const struct drm_color_lut *regamma_lut,
uint32_t regamma_size, bool has_rom,
enum dc_transfer_func_predefined tf)
{
struct dc_transfer_func *out_tf = stream->out_transfer_func;
int ret = 0;
if (regamma_size || tf != TRANSFER_FUNCTION_LINEAR) {
/*
* CRTC RGM goes into RGM LUT.
*
* Note: there is no implicit sRGB regamma here. We are using
* degamma calculation from color module to calculate the curve
* from a linear base if gamma TF is not set. However, if gamma
* TF (!= Linear) and LUT are set at the same time, we will use
* regamma calculation, and the color module will combine the
* pre-defined TF and the custom LUT values into the LUT that's
* actually programmed.
*/
out_tf->type = TF_TYPE_DISTRIBUTED_POINTS;
out_tf->tf = tf;
out_tf->sdr_ref_white_level = SDR_WHITE_LEVEL_INIT_VALUE;
ret = __set_output_tf(out_tf, regamma_lut, regamma_size, has_rom);
} else {
/*
* No CRTC RGM means we can just put the block into bypass
* since we don't have any plane level adjustments using it.
*/
out_tf->type = TF_TYPE_BYPASS;
out_tf->tf = TRANSFER_FUNCTION_LINEAR;
}
return ret;
}
/** /**
* __set_input_tf - calculates the input transfer function based on expected * __set_input_tf - calculates the input transfer function based on expected
* input space. * input space.
* @caps: dc color capabilities
* @func: transfer function * @func: transfer function
* @lut: lookup table that defines the color space * @lut: lookup table that defines the color space
* @lut_size: size of respective lut. * @lut_size: size of respective lut.
@ -313,27 +602,241 @@ static int __set_output_tf(struct dc_transfer_func *func,
* Returns: * Returns:
* 0 in case of success. -ENOMEM if fails. * 0 in case of success. -ENOMEM if fails.
*/ */
static int __set_input_tf(struct dc_transfer_func *func, static int __set_input_tf(struct dc_color_caps *caps, struct dc_transfer_func *func,
const struct drm_color_lut *lut, uint32_t lut_size) const struct drm_color_lut *lut, uint32_t lut_size)
{ {
struct dc_gamma *gamma = NULL; struct dc_gamma *gamma = NULL;
bool res; bool res;
gamma = dc_create_gamma(); if (lut_size) {
if (!gamma) gamma = dc_create_gamma();
return -ENOMEM; if (!gamma)
return -ENOMEM;
gamma->type = GAMMA_CUSTOM; gamma->type = GAMMA_CUSTOM;
gamma->num_entries = lut_size; gamma->num_entries = lut_size;
__drm_lut_to_dc_gamma(lut, gamma, false); __drm_lut_to_dc_gamma(lut, gamma, false);
}
res = mod_color_calculate_degamma_params(NULL, func, gamma, true); res = mod_color_calculate_degamma_params(caps, func, gamma, gamma != NULL);
dc_gamma_release(&gamma);
if (gamma)
dc_gamma_release(&gamma);
return res ? 0 : -ENOMEM; return res ? 0 : -ENOMEM;
} }
static enum dc_transfer_func_predefined
amdgpu_tf_to_dc_tf(enum amdgpu_transfer_function tf)
{
switch (tf)
{
default:
case AMDGPU_TRANSFER_FUNCTION_DEFAULT:
case AMDGPU_TRANSFER_FUNCTION_IDENTITY:
return TRANSFER_FUNCTION_LINEAR;
case AMDGPU_TRANSFER_FUNCTION_SRGB_EOTF:
case AMDGPU_TRANSFER_FUNCTION_SRGB_INV_EOTF:
return TRANSFER_FUNCTION_SRGB;
case AMDGPU_TRANSFER_FUNCTION_BT709_OETF:
case AMDGPU_TRANSFER_FUNCTION_BT709_INV_OETF:
return TRANSFER_FUNCTION_BT709;
case AMDGPU_TRANSFER_FUNCTION_PQ_EOTF:
case AMDGPU_TRANSFER_FUNCTION_PQ_INV_EOTF:
return TRANSFER_FUNCTION_PQ;
case AMDGPU_TRANSFER_FUNCTION_GAMMA22_EOTF:
case AMDGPU_TRANSFER_FUNCTION_GAMMA22_INV_EOTF:
return TRANSFER_FUNCTION_GAMMA22;
case AMDGPU_TRANSFER_FUNCTION_GAMMA24_EOTF:
case AMDGPU_TRANSFER_FUNCTION_GAMMA24_INV_EOTF:
return TRANSFER_FUNCTION_GAMMA24;
case AMDGPU_TRANSFER_FUNCTION_GAMMA26_EOTF:
case AMDGPU_TRANSFER_FUNCTION_GAMMA26_INV_EOTF:
return TRANSFER_FUNCTION_GAMMA26;
}
}
static void __to_dc_lut3d_color(struct dc_rgb *rgb,
const struct drm_color_lut lut,
int bit_precision)
{
rgb->red = drm_color_lut_extract(lut.red, bit_precision);
rgb->green = drm_color_lut_extract(lut.green, bit_precision);
rgb->blue = drm_color_lut_extract(lut.blue, bit_precision);
}
static void __drm_3dlut_to_dc_3dlut(const struct drm_color_lut *lut,
uint32_t lut3d_size,
struct tetrahedral_params *params,
bool use_tetrahedral_9,
int bit_depth)
{
struct dc_rgb *lut0;
struct dc_rgb *lut1;
struct dc_rgb *lut2;
struct dc_rgb *lut3;
int lut_i, i;
if (use_tetrahedral_9) {
lut0 = params->tetrahedral_9.lut0;
lut1 = params->tetrahedral_9.lut1;
lut2 = params->tetrahedral_9.lut2;
lut3 = params->tetrahedral_9.lut3;
} else {
lut0 = params->tetrahedral_17.lut0;
lut1 = params->tetrahedral_17.lut1;
lut2 = params->tetrahedral_17.lut2;
lut3 = params->tetrahedral_17.lut3;
}
for (lut_i = 0, i = 0; i < lut3d_size - 4; lut_i++, i += 4) {
/*
* The 3D LUT RGB values are distributed along four arrays lut0-3,
* where the first holds 1229 entries and the others 1228 each. The
* bit depth supported for the 3dlut channel is
* 12-bit, but DC also supports 10-bit.
*
* TODO: improve color pipeline API to enable the userspace set
* bit depth and 3D LUT size/stride, as specified by VA-API.
*/
__to_dc_lut3d_color(&lut0[lut_i], lut[i], bit_depth);
__to_dc_lut3d_color(&lut1[lut_i], lut[i + 1], bit_depth);
__to_dc_lut3d_color(&lut2[lut_i], lut[i + 2], bit_depth);
__to_dc_lut3d_color(&lut3[lut_i], lut[i + 3], bit_depth);
}
/* lut0 has 1229 points (lut_size/4 + 1) */
__to_dc_lut3d_color(&lut0[lut_i], lut[i], bit_depth);
}
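
Note: the sizes mentioned in the comment above work out as follows (plain arithmetic):

/*
 * 17 x 17 x 17 = 4913 points in total, interleaved across four arrays:
 *   lut0 takes indices 0, 4, 8, ...  ->  4913 / 4 + 1 = 1229 entries
 *   lut1, lut2, lut3 take the rest   ->  1228 entries each
 *   1229 + 3 * 1228 = 4913
 */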
/* amdgpu_dm_atomic_lut3d - set DRM 3D LUT to DC stream
* @drm_lut3d: user 3D LUT
* @drm_lut3d_size: size of 3D LUT
* @lut3d: DC 3D LUT
*
* Map user 3D LUT data to DC 3D LUT and all necessary bits to program it
* on DCN accordingly.
*/
static void amdgpu_dm_atomic_lut3d(const struct drm_color_lut *drm_lut3d,
uint32_t drm_lut3d_size,
struct dc_3dlut *lut)
{
if (!drm_lut3d_size) {
lut->state.bits.initialized = 0;
} else {
/* Stride and bit depth are not programmable by API yet.
* Therefore, only supports 17x17x17 3D LUT (12-bit).
*/
lut->lut_3d.use_tetrahedral_9 = false;
lut->lut_3d.use_12bits = true;
lut->state.bits.initialized = 1;
__drm_3dlut_to_dc_3dlut(drm_lut3d, drm_lut3d_size, &lut->lut_3d,
lut->lut_3d.use_tetrahedral_9,
MAX_COLOR_3DLUT_BITDEPTH);
}
}
static int amdgpu_dm_atomic_shaper_lut(const struct drm_color_lut *shaper_lut,
bool has_rom,
enum dc_transfer_func_predefined tf,
uint32_t shaper_size,
struct dc_transfer_func *func_shaper)
{
int ret = 0;
if (shaper_size || tf != TRANSFER_FUNCTION_LINEAR) {
/*
* If user shaper LUT is set, we assume a linear color space
* (linearized by degamma 1D LUT or not).
*/
func_shaper->type = TF_TYPE_DISTRIBUTED_POINTS;
func_shaper->tf = tf;
func_shaper->sdr_ref_white_level = SDR_WHITE_LEVEL_INIT_VALUE;
ret = __set_output_tf(func_shaper, shaper_lut, shaper_size, has_rom);
} else {
func_shaper->type = TF_TYPE_BYPASS;
func_shaper->tf = TRANSFER_FUNCTION_LINEAR;
}
return ret;
}
static int amdgpu_dm_atomic_blend_lut(const struct drm_color_lut *blend_lut,
bool has_rom,
enum dc_transfer_func_predefined tf,
uint32_t blend_size,
struct dc_transfer_func *func_blend)
{
int ret = 0;
if (blend_size || tf != TRANSFER_FUNCTION_LINEAR) {
/*
* DRM plane gamma LUT or TF means we are linearizing color
* space before blending (similar to degamma programming). As
* we don't have hardcoded curve support, we use the AMD color
* module to fill the parameters that will be translated to HW
* points.
*/
func_blend->type = TF_TYPE_DISTRIBUTED_POINTS;
func_blend->tf = tf;
func_blend->sdr_ref_white_level = SDR_WHITE_LEVEL_INIT_VALUE;
ret = __set_input_tf(NULL, func_blend, blend_lut, blend_size);
} else {
func_blend->type = TF_TYPE_BYPASS;
func_blend->tf = TRANSFER_FUNCTION_LINEAR;
}
return ret;
}
/**
* amdgpu_dm_verify_lut3d_size - verifies if 3D LUT is supported and if user
* shaper and 3D LUTs match the hw supported size
* @adev: amdgpu device
* @plane_state: the DRM plane state
*
* Verifies if pre-blending (DPP) 3D LUT is supported by the HW (DCN 2.0 or
* newer) and if the user shaper and 3D LUTs match the supported size.
*
* Returns:
* 0 on success. -EINVAL if LUT sizes are invalid.
*/
int amdgpu_dm_verify_lut3d_size(struct amdgpu_device *adev,
struct drm_plane_state *plane_state)
{
struct dm_plane_state *dm_plane_state = to_dm_plane_state(plane_state);
const struct drm_color_lut *shaper = NULL, *lut3d = NULL;
uint32_t exp_size, size, dim_size = MAX_COLOR_3DLUT_SIZE;
bool has_3dlut = adev->dm.dc->caps.color.dpp.hw_3d_lut;
/* shaper LUT is only available if 3D LUT color caps */
exp_size = has_3dlut ? MAX_COLOR_LUT_ENTRIES : 0;
shaper = __extract_blob_lut(dm_plane_state->shaper_lut, &size);
if (shaper && size != exp_size) {
drm_dbg(&adev->ddev,
"Invalid Shaper LUT size. Should be %u but got %u.\n",
exp_size, size);
return -EINVAL;
}
/* The number of 3D LUT entries is the dimension size cubed */
exp_size = has_3dlut ? dim_size * dim_size * dim_size : 0;
lut3d = __extract_blob_lut(dm_plane_state->lut3d, &size);
if (lut3d && size != exp_size) {
drm_dbg(&adev->ddev,
"Invalid 3D LUT size. Should be %u but got %u.\n",
exp_size, size);
return -EINVAL;
}
return 0;
}
/** /**
* amdgpu_dm_verify_lut_sizes - verifies if DRM luts match the hw supported sizes * amdgpu_dm_verify_lut_sizes - verifies if DRM luts match the hw supported sizes
* @crtc_state: the DRM CRTC state * @crtc_state: the DRM CRTC state
@ -401,9 +904,12 @@ int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc)
const struct drm_color_lut *degamma_lut, *regamma_lut; const struct drm_color_lut *degamma_lut, *regamma_lut;
uint32_t degamma_size, regamma_size; uint32_t degamma_size, regamma_size;
bool has_regamma, has_degamma; bool has_regamma, has_degamma;
enum dc_transfer_func_predefined tf = TRANSFER_FUNCTION_LINEAR;
bool is_legacy; bool is_legacy;
int r; int r;
tf = amdgpu_tf_to_dc_tf(crtc->regamma_tf);
r = amdgpu_dm_verify_lut_sizes(&crtc->base); r = amdgpu_dm_verify_lut_sizes(&crtc->base);
if (r) if (r)
return r; return r;
@ -439,27 +945,23 @@ int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc)
crtc->cm_is_degamma_srgb = true; crtc->cm_is_degamma_srgb = true;
stream->out_transfer_func->type = TF_TYPE_DISTRIBUTED_POINTS; stream->out_transfer_func->type = TF_TYPE_DISTRIBUTED_POINTS;
stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB; stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB;
/*
* Note: although we pass has_rom as parameter here, we never
* actually use ROM because the color module only takes the ROM
* path if transfer_func->type == PREDEFINED.
*
* See more in mod_color_calculate_regamma_params()
*/
r = __set_legacy_tf(stream->out_transfer_func, regamma_lut, r = __set_legacy_tf(stream->out_transfer_func, regamma_lut,
regamma_size, has_rom); regamma_size, has_rom);
if (r) if (r)
return r; return r;
} else if (has_regamma) { } else {
/* If atomic regamma, CRTC RGM goes into RGM LUT. */ regamma_size = has_regamma ? regamma_size : 0;
stream->out_transfer_func->type = TF_TYPE_DISTRIBUTED_POINTS; r = amdgpu_dm_set_atomic_regamma(stream, regamma_lut,
stream->out_transfer_func->tf = TRANSFER_FUNCTION_LINEAR; regamma_size, has_rom, tf);
r = __set_output_tf(stream->out_transfer_func, regamma_lut,
regamma_size, has_rom);
if (r) if (r)
return r; return r;
} else {
/*
* No CRTC RGM means we can just put the block into bypass
* since we don't have any plane level adjustments using it.
*/
stream->out_transfer_func->type = TF_TYPE_BYPASS;
stream->out_transfer_func->tf = TRANSFER_FUNCTION_LINEAR;
} }
/* /*
@ -495,20 +997,10 @@ int amdgpu_dm_update_crtc_color_mgmt(struct dm_crtc_state *crtc)
return 0; return 0;
} }
/** static int
* amdgpu_dm_update_plane_color_mgmt: Maps DRM color management to DC plane. map_crtc_degamma_to_dc_plane(struct dm_crtc_state *crtc,
* @crtc: amdgpu_dm crtc state struct dc_plane_state *dc_plane_state,
* @dc_plane_state: target DC surface struct dc_color_caps *caps)
*
* Update the underlying dc_stream_state's input transfer function (ITF) in
* preparation for hardware commit. The transfer function used depends on
* the preparation done on the stream for color management.
*
* Returns:
* 0 on success. -ENOMEM if mem allocation fails.
*/
int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc,
struct dc_plane_state *dc_plane_state)
{ {
const struct drm_color_lut *degamma_lut; const struct drm_color_lut *degamma_lut;
enum dc_transfer_func_predefined tf = TRANSFER_FUNCTION_SRGB; enum dc_transfer_func_predefined tf = TRANSFER_FUNCTION_SRGB;
@ -531,8 +1023,7 @@ int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc,
&degamma_size); &degamma_size);
ASSERT(degamma_size == MAX_COLOR_LUT_ENTRIES); ASSERT(degamma_size == MAX_COLOR_LUT_ENTRIES);
dc_plane_state->in_transfer_func->type = dc_plane_state->in_transfer_func->type = TF_TYPE_DISTRIBUTED_POINTS;
TF_TYPE_DISTRIBUTED_POINTS;
/* /*
* This case isn't fully correct, but also fairly * This case isn't fully correct, but also fairly
@ -564,11 +1055,11 @@ int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc,
dc_plane_state->in_transfer_func->tf = dc_plane_state->in_transfer_func->tf =
TRANSFER_FUNCTION_LINEAR; TRANSFER_FUNCTION_LINEAR;
r = __set_input_tf(dc_plane_state->in_transfer_func, r = __set_input_tf(caps, dc_plane_state->in_transfer_func,
degamma_lut, degamma_size); degamma_lut, degamma_size);
if (r) if (r)
return r; return r;
} else if (crtc->cm_is_degamma_srgb) { } else {
/* /*
* For legacy gamma support we need the regamma input * For legacy gamma support we need the regamma input
* in linear space. Assume that the input is sRGB. * in linear space. Assume that the input is sRGB.
@ -577,14 +1068,209 @@ int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc,
dc_plane_state->in_transfer_func->tf = tf; dc_plane_state->in_transfer_func->tf = tf;
if (tf != TRANSFER_FUNCTION_SRGB && if (tf != TRANSFER_FUNCTION_SRGB &&
!mod_color_calculate_degamma_params(NULL, !mod_color_calculate_degamma_params(caps,
dc_plane_state->in_transfer_func, NULL, false)) dc_plane_state->in_transfer_func,
NULL, false))
return -ENOMEM; return -ENOMEM;
} else {
/* ...Otherwise we can just bypass the DGM block. */
dc_plane_state->in_transfer_func->type = TF_TYPE_BYPASS;
dc_plane_state->in_transfer_func->tf = TRANSFER_FUNCTION_LINEAR;
} }
return 0; return 0;
} }
static int
__set_dm_plane_degamma(struct drm_plane_state *plane_state,
struct dc_plane_state *dc_plane_state,
struct dc_color_caps *color_caps)
{
struct dm_plane_state *dm_plane_state = to_dm_plane_state(plane_state);
const struct drm_color_lut *degamma_lut;
enum amdgpu_transfer_function tf = AMDGPU_TRANSFER_FUNCTION_DEFAULT;
uint32_t degamma_size;
bool has_degamma_lut;
int ret;
degamma_lut = __extract_blob_lut(dm_plane_state->degamma_lut,
&degamma_size);
has_degamma_lut = degamma_lut &&
!__is_lut_linear(degamma_lut, degamma_size);
tf = dm_plane_state->degamma_tf;
/* If we have neither a plane degamma LUT nor a TF to set on DC, there is
* nothing to do here; return.
*/
if (!has_degamma_lut && tf == AMDGPU_TRANSFER_FUNCTION_DEFAULT)
return -EINVAL;
dc_plane_state->in_transfer_func->tf = amdgpu_tf_to_dc_tf(tf);
if (has_degamma_lut) {
ASSERT(degamma_size == MAX_COLOR_LUT_ENTRIES);
dc_plane_state->in_transfer_func->type =
TF_TYPE_DISTRIBUTED_POINTS;
ret = __set_input_tf(color_caps, dc_plane_state->in_transfer_func,
degamma_lut, degamma_size);
if (ret)
return ret;
} else {
dc_plane_state->in_transfer_func->type =
TF_TYPE_PREDEFINED;
if (!mod_color_calculate_degamma_params(color_caps,
dc_plane_state->in_transfer_func, NULL, false))
return -ENOMEM;
}
return 0;
}
static int
amdgpu_dm_plane_set_color_properties(struct drm_plane_state *plane_state,
struct dc_plane_state *dc_plane_state)
{
struct dm_plane_state *dm_plane_state = to_dm_plane_state(plane_state);
enum amdgpu_transfer_function shaper_tf = AMDGPU_TRANSFER_FUNCTION_DEFAULT;
enum amdgpu_transfer_function blend_tf = AMDGPU_TRANSFER_FUNCTION_DEFAULT;
const struct drm_color_lut *shaper_lut, *lut3d, *blend_lut;
uint32_t shaper_size, lut3d_size, blend_size;
int ret;
dc_plane_state->hdr_mult = dc_fixpt_from_s3132(dm_plane_state->hdr_mult);
shaper_lut = __extract_blob_lut(dm_plane_state->shaper_lut, &shaper_size);
shaper_size = shaper_lut != NULL ? shaper_size : 0;
shaper_tf = dm_plane_state->shaper_tf;
lut3d = __extract_blob_lut(dm_plane_state->lut3d, &lut3d_size);
lut3d_size = lut3d != NULL ? lut3d_size : 0;
amdgpu_dm_atomic_lut3d(lut3d, lut3d_size, dc_plane_state->lut3d_func);
ret = amdgpu_dm_atomic_shaper_lut(shaper_lut, false,
amdgpu_tf_to_dc_tf(shaper_tf),
shaper_size,
dc_plane_state->in_shaper_func);
if (ret) {
drm_dbg_kms(plane_state->plane->dev,
"setting plane %d shaper LUT failed.\n",
plane_state->plane->index);
return ret;
}
blend_tf = dm_plane_state->blend_tf;
blend_lut = __extract_blob_lut(dm_plane_state->blend_lut, &blend_size);
blend_size = blend_lut != NULL ? blend_size : 0;
ret = amdgpu_dm_atomic_blend_lut(blend_lut, false,
amdgpu_tf_to_dc_tf(blend_tf),
blend_size, dc_plane_state->blend_tf);
if (ret) {
drm_dbg_kms(plane_state->plane->dev,
"setting plane %d gamma lut failed.\n",
plane_state->plane->index);
return ret;
}
return 0;
}
/**
* amdgpu_dm_update_plane_color_mgmt: Maps DRM color management to DC plane.
* @crtc: amdgpu_dm crtc state
* @plane_state: DRM plane state
* @dc_plane_state: target DC surface
*
* Update the underlying dc_stream_state's input transfer function (ITF) in
* preparation for hardware commit. The transfer function used depends on
* the preparation done on the stream for color management.
*
* Returns:
* 0 on success. -ENOMEM if mem allocation fails.
*/
int amdgpu_dm_update_plane_color_mgmt(struct dm_crtc_state *crtc,
struct drm_plane_state *plane_state,
struct dc_plane_state *dc_plane_state)
{
struct amdgpu_device *adev = drm_to_adev(crtc->base.state->dev);
struct dm_plane_state *dm_plane_state = to_dm_plane_state(plane_state);
struct drm_color_ctm_3x4 *ctm = NULL;
struct dc_color_caps *color_caps = NULL;
bool has_crtc_cm_degamma;
int ret;
ret = amdgpu_dm_verify_lut3d_size(adev, plane_state);
if (ret) {
drm_dbg_driver(&adev->ddev, "amdgpu_dm_verify_lut3d_size() failed\n");
return ret;
}
if (dc_plane_state->ctx && dc_plane_state->ctx->dc)
color_caps = &dc_plane_state->ctx->dc->caps.color;
/* Initially, we can just bypass the DGM block. */
dc_plane_state->in_transfer_func->type = TF_TYPE_BYPASS;
dc_plane_state->in_transfer_func->tf = TRANSFER_FUNCTION_LINEAR;
/* After, we start to update values according to color props */
has_crtc_cm_degamma = (crtc->cm_has_degamma || crtc->cm_is_degamma_srgb);
ret = __set_dm_plane_degamma(plane_state, dc_plane_state, color_caps);
if (ret == -ENOMEM)
return ret;
/* We only have one degamma block available (pre-blending) for the
* whole color correction pipeline, so that we can't actually perform
* plane and CRTC degamma at the same time. Explicitly reject atomic
* updates when userspace sets both plane and CRTC degamma properties.
*/
if (has_crtc_cm_degamma && ret != -EINVAL) {
drm_dbg_kms(crtc->base.crtc->dev,
"doesn't support plane and CRTC degamma at the same time\n");
return -EINVAL;
}
/* If we are here, it means we don't have plane degamma settings, check
* if we have CRTC degamma waiting for mapping to pre-blending degamma
* block
*/
if (has_crtc_cm_degamma) {
/*
* AMD HW doesn't have post-blending degamma caps. When DRM
* CRTC atomic degamma is set, we map it to the DPP degamma block
* (pre-blending) or, on legacy gamma, we use DPP degamma to
* linearize (implicit degamma) from sRGB/BT709 according to
* the input space.
*/
ret = map_crtc_degamma_to_dc_plane(crtc, dc_plane_state, color_caps);
if (ret)
return ret;
}
/* Setup CRTC CTM. */
if (dm_plane_state->ctm) {
ctm = (struct drm_color_ctm_3x4 *)dm_plane_state->ctm->data;
/*
* DCN2 and older don't support both pre-blending and
* post-blending gamut remap. For this HW family, if we have
* the plane and CRTC CTMs simultaneously, CRTC CTM takes
* priority, and we discard plane CTM, as implemented in
* dcn10_program_gamut_remap(). However, DCN3+ has DPP
* (pre-blending) and MPC (post-blending) `gamut remap` blocks;
* therefore, we can program plane and CRTC CTMs together by
* mapping CRTC CTM to MPC and keeping plane CTM setup at DPP,
* as it's done by dcn30_program_gamut_remap().
*/
__drm_ctm_3x4_to_dc_matrix(ctm, dc_plane_state->gamut_remap_matrix.matrix);
dc_plane_state->gamut_remap_matrix.enable_remap = true;
dc_plane_state->input_csc_color_matrix.enable_adjustment = false;
} else {
/* Bypass CTM. */
dc_plane_state->gamut_remap_matrix.enable_remap = false;
dc_plane_state->input_csc_color_matrix.enable_adjustment = false;
}
return amdgpu_dm_plane_set_color_properties(plane_state, dc_plane_state);
}


@ -260,6 +260,7 @@ static struct drm_crtc_state *amdgpu_dm_crtc_duplicate_state(struct drm_crtc *cr
state->freesync_config = cur->freesync_config; state->freesync_config = cur->freesync_config;
state->cm_has_degamma = cur->cm_has_degamma; state->cm_has_degamma = cur->cm_has_degamma;
state->cm_is_degamma_srgb = cur->cm_is_degamma_srgb; state->cm_is_degamma_srgb = cur->cm_is_degamma_srgb;
state->regamma_tf = cur->regamma_tf;
state->crc_skip_count = cur->crc_skip_count; state->crc_skip_count = cur->crc_skip_count;
state->mpo_requested = cur->mpo_requested; state->mpo_requested = cur->mpo_requested;
/* TODO Duplicate dc_stream after objects are stream object is flattened */ /* TODO Duplicate dc_stream after objects are stream object is flattened */
@ -296,6 +297,70 @@ static int amdgpu_dm_crtc_late_register(struct drm_crtc *crtc)
} }
#endif #endif
#ifdef AMD_PRIVATE_COLOR
/**
* dm_crtc_additional_color_mgmt - enable additional color properties
* @crtc: DRM CRTC
*
* This function lets the driver enable post-blending CRTC regamma transfer
* function property in addition to DRM CRTC gamma LUT. Default value means
* linear transfer function, which is the default CRTC gamma LUT behaviour
* without this property.
*/
static void
dm_crtc_additional_color_mgmt(struct drm_crtc *crtc)
{
struct amdgpu_device *adev = drm_to_adev(crtc->dev);
if (adev->dm.dc->caps.color.mpc.ogam_ram)
drm_object_attach_property(&crtc->base,
adev->mode_info.regamma_tf_property,
AMDGPU_TRANSFER_FUNCTION_DEFAULT);
}
static int
amdgpu_dm_atomic_crtc_set_property(struct drm_crtc *crtc,
struct drm_crtc_state *state,
struct drm_property *property,
uint64_t val)
{
struct amdgpu_device *adev = drm_to_adev(crtc->dev);
struct dm_crtc_state *acrtc_state = to_dm_crtc_state(state);
if (property == adev->mode_info.regamma_tf_property) {
if (acrtc_state->regamma_tf != val) {
acrtc_state->regamma_tf = val;
acrtc_state->base.color_mgmt_changed |= 1;
}
} else {
drm_dbg_atomic(crtc->dev,
"[CRTC:%d:%s] unknown property [PROP:%d:%s]]\n",
crtc->base.id, crtc->name,
property->base.id, property->name);
return -EINVAL;
}
return 0;
}
static int
amdgpu_dm_atomic_crtc_get_property(struct drm_crtc *crtc,
const struct drm_crtc_state *state,
struct drm_property *property,
uint64_t *val)
{
struct amdgpu_device *adev = drm_to_adev(crtc->dev);
struct dm_crtc_state *acrtc_state = to_dm_crtc_state(state);
if (property == adev->mode_info.regamma_tf_property)
*val = acrtc_state->regamma_tf;
else
return -EINVAL;
return 0;
}
#endif
/* Implemented only the options currently available for the driver */ /* Implemented only the options currently available for the driver */
static const struct drm_crtc_funcs amdgpu_dm_crtc_funcs = { static const struct drm_crtc_funcs amdgpu_dm_crtc_funcs = {
.reset = amdgpu_dm_crtc_reset_state, .reset = amdgpu_dm_crtc_reset_state,
@ -314,6 +379,10 @@ static const struct drm_crtc_funcs amdgpu_dm_crtc_funcs = {
#if defined(CONFIG_DEBUG_FS) #if defined(CONFIG_DEBUG_FS)
.late_register = amdgpu_dm_crtc_late_register, .late_register = amdgpu_dm_crtc_late_register,
#endif #endif
#ifdef AMD_PRIVATE_COLOR
.atomic_set_property = amdgpu_dm_atomic_crtc_set_property,
.atomic_get_property = amdgpu_dm_atomic_crtc_get_property,
#endif
}; };
static void amdgpu_dm_crtc_helper_disable(struct drm_crtc *crtc) static void amdgpu_dm_crtc_helper_disable(struct drm_crtc *crtc)
@ -489,6 +558,9 @@ int amdgpu_dm_crtc_init(struct amdgpu_display_manager *dm,
drm_mode_crtc_set_gamma_size(&acrtc->base, MAX_COLOR_LEGACY_LUT_ENTRIES); drm_mode_crtc_set_gamma_size(&acrtc->base, MAX_COLOR_LEGACY_LUT_ENTRIES);
#ifdef AMD_PRIVATE_COLOR
dm_crtc_additional_color_mgmt(&acrtc->base);
#endif
return 0; return 0;
fail: fail:


@ -28,6 +28,7 @@
#include <drm/drm_atomic.h> #include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h> #include <drm/drm_atomic_helper.h>
#include <drm/drm_fixed.h> #include <drm/drm_fixed.h>
#include <drm/drm_edid.h>
#include "dm_services.h" #include "dm_services.h"
#include "amdgpu.h" #include "amdgpu.h"
#include "amdgpu_dm.h" #include "amdgpu_dm.h"


@ -1337,8 +1337,14 @@ static void amdgpu_dm_plane_drm_plane_reset(struct drm_plane *plane)
amdgpu_state = kzalloc(sizeof(*amdgpu_state), GFP_KERNEL); amdgpu_state = kzalloc(sizeof(*amdgpu_state), GFP_KERNEL);
WARN_ON(amdgpu_state == NULL); WARN_ON(amdgpu_state == NULL);
if (amdgpu_state) if (!amdgpu_state)
__drm_atomic_helper_plane_reset(plane, &amdgpu_state->base); return;
__drm_atomic_helper_plane_reset(plane, &amdgpu_state->base);
amdgpu_state->degamma_tf = AMDGPU_TRANSFER_FUNCTION_DEFAULT;
amdgpu_state->hdr_mult = AMDGPU_HDR_MULT_DEFAULT;
amdgpu_state->shaper_tf = AMDGPU_TRANSFER_FUNCTION_DEFAULT;
amdgpu_state->blend_tf = AMDGPU_TRANSFER_FUNCTION_DEFAULT;
} }
static struct drm_plane_state *amdgpu_dm_plane_drm_plane_duplicate_state(struct drm_plane *plane) static struct drm_plane_state *amdgpu_dm_plane_drm_plane_duplicate_state(struct drm_plane *plane)
@ -1357,6 +1363,27 @@ static struct drm_plane_state *amdgpu_dm_plane_drm_plane_duplicate_state(struct
dc_plane_state_retain(dm_plane_state->dc_state); dc_plane_state_retain(dm_plane_state->dc_state);
} }
if (old_dm_plane_state->degamma_lut)
dm_plane_state->degamma_lut =
drm_property_blob_get(old_dm_plane_state->degamma_lut);
if (old_dm_plane_state->ctm)
dm_plane_state->ctm =
drm_property_blob_get(old_dm_plane_state->ctm);
if (old_dm_plane_state->shaper_lut)
dm_plane_state->shaper_lut =
drm_property_blob_get(old_dm_plane_state->shaper_lut);
if (old_dm_plane_state->lut3d)
dm_plane_state->lut3d =
drm_property_blob_get(old_dm_plane_state->lut3d);
if (old_dm_plane_state->blend_lut)
dm_plane_state->blend_lut =
drm_property_blob_get(old_dm_plane_state->blend_lut);
dm_plane_state->degamma_tf = old_dm_plane_state->degamma_tf;
dm_plane_state->hdr_mult = old_dm_plane_state->hdr_mult;
dm_plane_state->shaper_tf = old_dm_plane_state->shaper_tf;
dm_plane_state->blend_tf = old_dm_plane_state->blend_tf;
return &dm_plane_state->base; return &dm_plane_state->base;
} }
@ -1424,12 +1451,206 @@ static void amdgpu_dm_plane_drm_plane_destroy_state(struct drm_plane *plane,
{ {
struct dm_plane_state *dm_plane_state = to_dm_plane_state(state); struct dm_plane_state *dm_plane_state = to_dm_plane_state(state);
if (dm_plane_state->degamma_lut)
drm_property_blob_put(dm_plane_state->degamma_lut);
if (dm_plane_state->ctm)
drm_property_blob_put(dm_plane_state->ctm);
if (dm_plane_state->lut3d)
drm_property_blob_put(dm_plane_state->lut3d);
if (dm_plane_state->shaper_lut)
drm_property_blob_put(dm_plane_state->shaper_lut);
if (dm_plane_state->blend_lut)
drm_property_blob_put(dm_plane_state->blend_lut);
if (dm_plane_state->dc_state)
dc_plane_state_release(dm_plane_state->dc_state);
drm_atomic_helper_plane_destroy_state(plane, state);
}
#ifdef AMD_PRIVATE_COLOR
static void
dm_atomic_plane_attach_color_mgmt_properties(struct amdgpu_display_manager *dm,
struct drm_plane *plane)
{
struct amdgpu_mode_info mode_info = dm->adev->mode_info;
struct dpp_color_caps dpp_color_caps = dm->dc->caps.color.dpp;
/* Check HW color pipeline capabilities on DPP block (pre-blending)
* before exposing related properties.
*/
if (dpp_color_caps.dgam_ram || dpp_color_caps.gamma_corr) {
drm_object_attach_property(&plane->base,
mode_info.plane_degamma_lut_property,
0);
drm_object_attach_property(&plane->base,
mode_info.plane_degamma_lut_size_property,
MAX_COLOR_LUT_ENTRIES);
drm_object_attach_property(&plane->base,
dm->adev->mode_info.plane_degamma_tf_property,
AMDGPU_TRANSFER_FUNCTION_DEFAULT);
}
/* HDR MULT is always available */
drm_object_attach_property(&plane->base,
dm->adev->mode_info.plane_hdr_mult_property,
AMDGPU_HDR_MULT_DEFAULT);
/* Only enable plane CTM if both DPP and MPC gamut remap is available. */
if (dm->dc->caps.color.mpc.gamut_remap)
drm_object_attach_property(&plane->base,
dm->adev->mode_info.plane_ctm_property, 0);
if (dpp_color_caps.hw_3d_lut) {
drm_object_attach_property(&plane->base,
mode_info.plane_shaper_lut_property, 0);
drm_object_attach_property(&plane->base,
mode_info.plane_shaper_lut_size_property,
MAX_COLOR_LUT_ENTRIES);
drm_object_attach_property(&plane->base,
mode_info.plane_shaper_tf_property,
AMDGPU_TRANSFER_FUNCTION_DEFAULT);
drm_object_attach_property(&plane->base,
mode_info.plane_lut3d_property, 0);
drm_object_attach_property(&plane->base,
mode_info.plane_lut3d_size_property,
MAX_COLOR_3DLUT_SIZE);
}
if (dpp_color_caps.ogam_ram) {
drm_object_attach_property(&plane->base,
mode_info.plane_blend_lut_property, 0);
drm_object_attach_property(&plane->base,
mode_info.plane_blend_lut_size_property,
MAX_COLOR_LUT_ENTRIES);
drm_object_attach_property(&plane->base,
mode_info.plane_blend_tf_property,
AMDGPU_TRANSFER_FUNCTION_DEFAULT);
}
}
static int
dm_atomic_plane_set_property(struct drm_plane *plane,
struct drm_plane_state *state,
struct drm_property *property,
uint64_t val)
{
struct dm_plane_state *dm_plane_state = to_dm_plane_state(state);
struct amdgpu_device *adev = drm_to_adev(plane->dev);
bool replaced = false;
int ret;
if (property == adev->mode_info.plane_degamma_lut_property) {
ret = drm_property_replace_blob_from_id(plane->dev,
&dm_plane_state->degamma_lut,
val, -1,
sizeof(struct drm_color_lut),
&replaced);
dm_plane_state->base.color_mgmt_changed |= replaced;
return ret;
} else if (property == adev->mode_info.plane_degamma_tf_property) {
if (dm_plane_state->degamma_tf != val) {
dm_plane_state->degamma_tf = val;
dm_plane_state->base.color_mgmt_changed = 1;
}
} else if (property == adev->mode_info.plane_hdr_mult_property) {
if (dm_plane_state->hdr_mult != val) {
dm_plane_state->hdr_mult = val;
dm_plane_state->base.color_mgmt_changed = 1;
}
} else if (property == adev->mode_info.plane_ctm_property) {
ret = drm_property_replace_blob_from_id(plane->dev,
&dm_plane_state->ctm,
val,
sizeof(struct drm_color_ctm_3x4), -1,
&replaced);
dm_plane_state->base.color_mgmt_changed |= replaced;
return ret;
} else if (property == adev->mode_info.plane_shaper_lut_property) {
ret = drm_property_replace_blob_from_id(plane->dev,
&dm_plane_state->shaper_lut,
val, -1,
sizeof(struct drm_color_lut),
&replaced);
dm_plane_state->base.color_mgmt_changed |= replaced;
return ret;
} else if (property == adev->mode_info.plane_shaper_tf_property) {
if (dm_plane_state->shaper_tf != val) {
dm_plane_state->shaper_tf = val;
dm_plane_state->base.color_mgmt_changed = 1;
}
} else if (property == adev->mode_info.plane_lut3d_property) {
ret = drm_property_replace_blob_from_id(plane->dev,
&dm_plane_state->lut3d,
val, -1,
sizeof(struct drm_color_lut),
&replaced);
dm_plane_state->base.color_mgmt_changed |= replaced;
return ret;
} else if (property == adev->mode_info.plane_blend_lut_property) {
ret = drm_property_replace_blob_from_id(plane->dev,
&dm_plane_state->blend_lut,
val, -1,
sizeof(struct drm_color_lut),
&replaced);
dm_plane_state->base.color_mgmt_changed |= replaced;
return ret;
} else if (property == adev->mode_info.plane_blend_tf_property) {
if (dm_plane_state->blend_tf != val) {
dm_plane_state->blend_tf = val;
dm_plane_state->base.color_mgmt_changed = 1;
}
} else {
drm_dbg_atomic(plane->dev,
"[PLANE:%d:%s] unknown property [PROP:%d:%s]]\n",
plane->base.id, plane->name,
property->base.id, property->name);
return -EINVAL;
}
return 0;
}
static int
dm_atomic_plane_get_property(struct drm_plane *plane,
const struct drm_plane_state *state,
struct drm_property *property,
uint64_t *val)
{
struct dm_plane_state *dm_plane_state = to_dm_plane_state(state);
struct amdgpu_device *adev = drm_to_adev(plane->dev);
if (property == adev->mode_info.plane_degamma_lut_property) {
*val = (dm_plane_state->degamma_lut) ?
dm_plane_state->degamma_lut->base.id : 0;
} else if (property == adev->mode_info.plane_degamma_tf_property) {
*val = dm_plane_state->degamma_tf;
} else if (property == adev->mode_info.plane_hdr_mult_property) {
*val = dm_plane_state->hdr_mult;
} else if (property == adev->mode_info.plane_ctm_property) {
*val = (dm_plane_state->ctm) ?
dm_plane_state->ctm->base.id : 0;
} else if (property == adev->mode_info.plane_shaper_lut_property) {
*val = (dm_plane_state->shaper_lut) ?
dm_plane_state->shaper_lut->base.id : 0;
} else if (property == adev->mode_info.plane_shaper_tf_property) {
*val = dm_plane_state->shaper_tf;
} else if (property == adev->mode_info.plane_lut3d_property) {
*val = (dm_plane_state->lut3d) ?
dm_plane_state->lut3d->base.id : 0;
} else if (property == adev->mode_info.plane_blend_lut_property) {
*val = (dm_plane_state->blend_lut) ?
dm_plane_state->blend_lut->base.id : 0;
} else if (property == adev->mode_info.plane_blend_tf_property) {
*val = dm_plane_state->blend_tf;
} else {
return -EINVAL;
}
return 0;
}
#endif
static const struct drm_plane_funcs dm_plane_funcs = {
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
@@ -1438,6 +1659,10 @@ static const struct drm_plane_funcs dm_plane_funcs = {
.atomic_duplicate_state = amdgpu_dm_plane_drm_plane_duplicate_state,
.atomic_destroy_state = amdgpu_dm_plane_drm_plane_destroy_state,
.format_mod_supported = amdgpu_dm_plane_format_mod_supported,
#ifdef AMD_PRIVATE_COLOR
.atomic_set_property = dm_atomic_plane_set_property,
.atomic_get_property = dm_atomic_plane_get_property,
#endif
};
int amdgpu_dm_plane_init(struct amdgpu_display_manager *dm,
@@ -1517,6 +1742,9 @@ int amdgpu_dm_plane_init(struct amdgpu_display_manager *dm,
drm_plane_helper_add(plane, &dm_plane_helper_funcs);
#ifdef AMD_PRIVATE_COLOR
dm_atomic_plane_attach_color_mgmt_properties(dm, plane);
#endif
/* Create (reset) the plane state */
if (plane->funcs->reset)
plane->funcs->reset(plane);


@@ -32,6 +32,7 @@
#include "amdgpu_display.h"
#include "dc.h"
#include <drm/drm_edid.h>
#include <drm/drm_atomic_state_helper.h>
#include <drm/drm_modeset_helper_vtables.h>


@@ -1747,7 +1747,6 @@ static enum bp_result bios_parser_get_firmware_info(
result = get_firmware_info_v3_2(bp, info);
break;
case 4:
-case 5:
result = get_firmware_info_v3_4(bp, info);
break;
default:
@@ -2387,18 +2386,11 @@ static enum bp_result get_vram_info_v30(
return BP_RESULT_BADBIOSTABLE;
info->num_chans = info_v30->channel_num;
-/* As suggested by VBIOS we should always use
- * dram_channel_width_bytes = 2 when using VRAM
- * table version 3.0. This is because the channel_width
- * param in the VRAM info table is changed in 7000 series and
- * no longer represents the memory channel width.
- */
-info->dram_channel_width_bytes = 2;
info->dram_channel_width_bytes = (1 << info_v30->channel_width) / 8;
return result;
}
/*
* get_integrated_info_v11
*


@@ -368,7 +368,7 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p
}
break;
-#endif /* CONFIG_DRM_AMD_DC_FP - Family RV */
#endif /* CONFIG_DRM_AMD_DC_FP */
default:
ASSERT(0); /* Unknown Asic */
break;


@@ -361,26 +361,26 @@ void dcn35_smu_set_zstate_support(struct clk_mgr_internal *clk_mgr, enum dcn_zst
case DCN_ZSTATE_SUPPORT_ALLOW:
msg_id = VBIOSSMC_MSG_AllowZstatesEntry;
param = (1 << 10) | (1 << 9) | (1 << 8);
-smu_print("%s: SMC_MSG_AllowZstatesEntr msg = ALLOW, param = %d\n", __func__, param);
smu_print("%s: SMC_MSG_AllowZstatesEntry msg = ALLOW, param = %d\n", __func__, param);
break;
case DCN_ZSTATE_SUPPORT_DISALLOW:
msg_id = VBIOSSMC_MSG_AllowZstatesEntry;
param = 0;
-smu_print("%s: SMC_MSG_AllowZstatesEntr msg_id = DISALLOW, param = %d\n", __func__, param);
smu_print("%s: SMC_MSG_AllowZstatesEntry msg_id = DISALLOW, param = %d\n", __func__, param);
break;
case DCN_ZSTATE_SUPPORT_ALLOW_Z10_ONLY:
msg_id = VBIOSSMC_MSG_AllowZstatesEntry;
param = (1 << 10);
-smu_print("%s: SMC_MSG_AllowZstatesEntr msg = ALLOW_Z10_ONLY, param = %d\n", __func__, param);
smu_print("%s: SMC_MSG_AllowZstatesEntry msg = ALLOW_Z10_ONLY, param = %d\n", __func__, param);
break;
case DCN_ZSTATE_SUPPORT_ALLOW_Z8_Z10_ONLY:
msg_id = VBIOSSMC_MSG_AllowZstatesEntry;
param = (1 << 10) | (1 << 8);
-smu_print("%s: SMC_MSG_AllowZstatesEntr msg = ALLOW_Z8_Z10_ONLY, param = %d\n", __func__, param);
smu_print("%s: SMC_MSG_AllowZstatesEntry msg = ALLOW_Z8_Z10_ONLY, param = %d\n", __func__, param);
break;
case DCN_ZSTATE_SUPPORT_ALLOW_Z8_ONLY:


@@ -49,7 +49,7 @@ struct aux_payload;
struct set_config_cmd_payload;
struct dmub_notification;
-#define DC_VER "3.2.263"
#define DC_VER "3.2.264"
#define MAX_SURFACES 3
#define MAX_PLANES 6


@@ -1268,3 +1268,17 @@ void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
ASSERT(0);
}
void dc_dmub_srv_set_power_state(struct dc_dmub_srv *dc_dmub_srv, enum dc_acpi_cm_power_state powerState)
{
struct dmub_srv *dmub;
if (!dc_dmub_srv)
return;
dmub = dc_dmub_srv->dmub;
if (powerState == DC_ACPI_CM_POWER_STATE_D0)
dmub_srv_set_power_state(dmub, DMUB_POWER_STATE_D0);
else
dmub_srv_set_power_state(dmub, DMUB_POWER_STATE_D3);
}


@@ -102,4 +102,6 @@ void dc_dmub_srv_subvp_save_surf_addr(const struct dc_dmub_srv *dc_dmub_srv, con
bool dc_dmub_srv_is_hw_pwr_up(struct dc_dmub_srv *dc_dmub_srv, bool wait);
void dc_dmub_srv_notify_idle(const struct dc *dc, bool allow_idle);
void dc_dmub_srv_exit_low_power_state(const struct dc *dc);
void dc_dmub_srv_set_power_state(struct dc_dmub_srv *dc_dmub_srv, enum dc_acpi_cm_power_state powerState);
#endif /* _DMUB_DC_SRV_H_ */


@@ -64,11 +64,15 @@ static void dmub_abm_init_ex(struct abm *abm, uint32_t backlight)
static unsigned int dmub_abm_get_current_backlight_ex(struct abm *abm)
{
dc_allow_idle_optimizations(abm->ctx->dc, false);
return dmub_abm_get_current_backlight(abm);
}
static unsigned int dmub_abm_get_target_backlight_ex(struct abm *abm)
{
dc_allow_idle_optimizations(abm->ctx->dc, false);
return dmub_abm_get_target_backlight(abm);
}


@@ -118,7 +118,8 @@ static const struct hw_sequencer_funcs dcn35_funcs = {
.update_dsc_pg = dcn32_update_dsc_pg,
.calc_blocks_to_gate = dcn35_calc_blocks_to_gate,
.calc_blocks_to_ungate = dcn35_calc_blocks_to_ungate,
-.block_power_control = dcn35_block_power_control,
.hw_block_power_up = dcn35_hw_block_power_up,
.hw_block_power_down = dcn35_hw_block_power_down,
.root_clock_control = dcn35_root_clock_control,
.set_idle_state = dcn35_set_idle_state,
.get_idle_state = dcn35_get_idle_state


@@ -247,6 +247,7 @@ struct pp_smu_funcs_nv {
#define PP_SMU_NUM_MEMCLK_DPM_LEVELS 4
#define PP_SMU_NUM_DCLK_DPM_LEVELS 8
#define PP_SMU_NUM_VCLK_DPM_LEVELS 8
#define PP_SMU_NUM_VPECLK_DPM_LEVELS 8
struct dpm_clock {
uint32_t Freq; // In MHz
@@ -262,6 +263,7 @@ struct dpm_clocks {
struct dpm_clock MemClocks[PP_SMU_NUM_MEMCLK_DPM_LEVELS];
struct dpm_clock VClocks[PP_SMU_NUM_VCLK_DPM_LEVELS];
struct dpm_clock DClocks[PP_SMU_NUM_DCLK_DPM_LEVELS];
struct dpm_clock VPEClocks[PP_SMU_NUM_VPECLK_DPM_LEVELS];
};


@@ -813,6 +813,8 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman
(v->DRAMSpeedPerState[mode_lib->vba.VoltageLevel] <= MEM_STROBE_FREQ_MHZ ||
v->DCFCLKPerState[mode_lib->vba.VoltageLevel] <= DCFCLK_FREQ_EXTRA_PREFETCH_REQ_MHZ) ?
mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
mode_lib->vba.PrefetchModePerState[mode_lib->vba.VoltageLevel][mode_lib->vba.maxMpcComb] > 0 || mode_lib->vba.DRAMClockChangeRequirementFinal == false,
/* Output */
&v->DSTXAfterScaler[k],
&v->DSTYAfterScaler[k],
@@ -3317,6 +3319,7 @@ void dml32_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
v->SwathHeightCThisState[k], v->TWait,
(v->DRAMSpeedPerState[i] <= MEM_STROBE_FREQ_MHZ || v->DCFCLKState[i][j] <= DCFCLK_FREQ_EXTRA_PREFETCH_REQ_MHZ) ?
mode_lib->vba.ip.min_prefetch_in_strobe_us : 0,
mode_lib->vba.PrefetchModePerState[i][j] > 0 || mode_lib->vba.DRAMClockChangeRequirementFinal == false,
/* Output */
&v->dummy_vars.dml32_ModeSupportAndSystemConfigurationFull.DSTXAfterScaler[k],


@@ -3423,6 +3423,7 @@ bool dml32_CalculatePrefetchSchedule(
unsigned int SwathHeightC,
double TWait,
double TPreReq,
bool ExtendPrefetchIfPossible,
/* Output */
double *DSTXAfterScaler,
double *DSTYAfterScaler,
@@ -3892,12 +3893,32 @@ bool dml32_CalculatePrefetchSchedule(
/* Clamp to oto for bandwidth calculation */
LinesForPrefetchBandwidth = dst_y_prefetch_oto;
} else {
-*DestinationLinesForPrefetch = dst_y_prefetch_equ;
-TimeForFetchingMetaPTE = Tvm_equ;
-TimeForFetchingRowInVBlank = Tr0_equ;
-*PrefetchBandwidth = prefetch_bw_equ;
-/* Clamp to equ for bandwidth calculation */
-LinesForPrefetchBandwidth = dst_y_prefetch_equ;
/* For mode programming we want to extend the prefetch as much as possible
* (up to oto, or as long as we can for equ) if we're not already applying
* the 60us prefetch requirement. This is to avoid intermittent underflow
* issues during prefetch.
*
* The prefetch extension is applied under the following scenarios:
* 1. We're in prefetch mode > 0 (i.e. we don't support MCLK switch in blank)
* 2. We're using subvp or drr methods of p-state switch, in which case we
* we don't care if prefetch takes up more of the blanking time
*
* Mode programming typically chooses the smallest prefetch time possible
* (i.e. highest bandwidth during prefetch) presumably to create margin between
* p-states / c-states that happen in vblank and prefetch. Therefore we only
* apply this prefetch extension when p-state in vblank is not required (UCLK
* p-states take up the most vblank time).
*/
if (ExtendPrefetchIfPossible && TPreReq == 0 && VStartup < MaxVStartup) {
MyError = true;
} else {
*DestinationLinesForPrefetch = dst_y_prefetch_equ;
TimeForFetchingMetaPTE = Tvm_equ;
TimeForFetchingRowInVBlank = Tr0_equ;
*PrefetchBandwidth = prefetch_bw_equ;
/* Clamp to equ for bandwidth calculation */
LinesForPrefetchBandwidth = dst_y_prefetch_equ;
}
}
*DestinationLinesToRequestVMInVBlank = dml_ceil(4.0 * TimeForFetchingMetaPTE / LineTime, 1.0) / 4.0;


@@ -747,6 +747,7 @@ bool dml32_CalculatePrefetchSchedule(
unsigned int SwathHeightC,
double TWait,
double TPreReq,
bool ExtendPrefetchIfPossible,
/* Output */
double *DSTXAfterScaler,
double *DSTYAfterScaler,


@@ -124,7 +124,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_5_soc = {
.phyclk_mhz = 600.0,
.phyclk_d18_mhz = 667.0,
.dscclk_mhz = 186.0,
-.dtbclk_mhz = 625.0,
.dtbclk_mhz = 600.0,
},
{
.state = 1,
@@ -133,7 +133,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_5_soc = {
.phyclk_mhz = 810.0,
.phyclk_d18_mhz = 667.0,
.dscclk_mhz = 209.0,
-.dtbclk_mhz = 625.0,
.dtbclk_mhz = 600.0,
},
{
.state = 2,
@@ -142,7 +142,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_5_soc = {
.phyclk_mhz = 810.0,
.phyclk_d18_mhz = 667.0,
.dscclk_mhz = 209.0,
-.dtbclk_mhz = 625.0,
.dtbclk_mhz = 600.0,
},
{
.state = 3,
@@ -151,7 +151,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_5_soc = {
.phyclk_mhz = 810.0,
.phyclk_d18_mhz = 667.0,
.dscclk_mhz = 371.0,
-.dtbclk_mhz = 625.0,
.dtbclk_mhz = 600.0,
},
{
.state = 4,
@@ -160,7 +160,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_5_soc = {
.phyclk_mhz = 810.0,
.phyclk_d18_mhz = 667.0,
.dscclk_mhz = 417.0,
-.dtbclk_mhz = 625.0,
.dtbclk_mhz = 600.0,
},
},
.num_states = 5,
@@ -367,6 +367,8 @@ void dcn35_update_bw_bounding_box_fpu(struct dc *dc,
clock_limits[i].socclk_mhz;
dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].memclk_mhz =
clk_table->entries[i].memclk_mhz * clk_table->entries[i].wck_ratio;
dc->dml2_options.bbox_overrides.clks_table.clk_entries[i].dtbclk_mhz =
clock_limits[i].dtbclk_mhz;
dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_dcfclk_levels =
clk_table->num_entries;
dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_fclk_levels =
@@ -379,6 +381,8 @@ void dcn35_update_bw_bounding_box_fpu(struct dc *dc,
clk_table->num_entries;
dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_memclk_levels =
clk_table->num_entries;
dc->dml2_options.bbox_overrides.clks_table.num_entries_per_clk.num_dtbclk_levels =
clk_table->num_entries;
}
}


@@ -6329,7 +6329,7 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
mode_lib->ms.NoOfDPPThisState,
mode_lib->ms.dpte_group_bytes,
s->HostVMInefficiencyFactor,
-mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024,
mode_lib->ms.soc.hostvm_min_page_size_kbytes,
mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels);
s->NextMaxVStartup = s->MaxVStartupAllPlanes[j];
@@ -6542,7 +6542,7 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
mode_lib->ms.cache_display_cfg.plane.HostVMEnable,
mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels,
mode_lib->ms.cache_display_cfg.plane.GPUVMEnable,
-mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024,
mode_lib->ms.soc.hostvm_min_page_size_kbytes,
mode_lib->ms.PDEAndMetaPTEBytesPerFrame[j][k],
mode_lib->ms.MetaRowBytes[j][k],
mode_lib->ms.DPTEBytesPerRow[j][k],
@@ -7687,7 +7687,7 @@ dml_bool_t dml_core_mode_support(struct display_mode_lib_st *mode_lib)
CalculateVMRowAndSwath_params->HostVMMaxNonCachedPageTableLevels = mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels;
CalculateVMRowAndSwath_params->GPUVMMaxPageTableLevels = mode_lib->ms.cache_display_cfg.plane.GPUVMMaxPageTableLevels;
CalculateVMRowAndSwath_params->GPUVMMinPageSizeKBytes = mode_lib->ms.cache_display_cfg.plane.GPUVMMinPageSizeKBytes;
-CalculateVMRowAndSwath_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024;
CalculateVMRowAndSwath_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes;
CalculateVMRowAndSwath_params->PTEBufferModeOverrideEn = mode_lib->ms.cache_display_cfg.plane.PTEBufferModeOverrideEn;
CalculateVMRowAndSwath_params->PTEBufferModeOverrideVal = mode_lib->ms.cache_display_cfg.plane.PTEBufferMode;
CalculateVMRowAndSwath_params->PTEBufferSizeNotExceeded = mode_lib->ms.PTEBufferSizeNotExceededPerState;
@@ -7957,7 +7957,7 @@ dml_bool_t dml_core_mode_support(struct display_mode_lib_st *mode_lib)
UseMinimumDCFCLK_params->GPUVMMaxPageTableLevels = mode_lib->ms.cache_display_cfg.plane.GPUVMMaxPageTableLevels;
UseMinimumDCFCLK_params->HostVMEnable = mode_lib->ms.cache_display_cfg.plane.HostVMEnable;
UseMinimumDCFCLK_params->NumberOfActiveSurfaces = mode_lib->ms.num_active_planes;
-UseMinimumDCFCLK_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024;
UseMinimumDCFCLK_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes;
UseMinimumDCFCLK_params->HostVMMaxNonCachedPageTableLevels = mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels;
UseMinimumDCFCLK_params->DynamicMetadataVMEnabled = mode_lib->ms.ip.dynamic_metadata_vm_enabled;
UseMinimumDCFCLK_params->ImmediateFlipRequirement = s->ImmediateFlipRequiredFinal;
@@ -8699,7 +8699,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
CalculateVMRowAndSwath_params->HostVMMaxNonCachedPageTableLevels = mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels;
CalculateVMRowAndSwath_params->GPUVMMaxPageTableLevels = mode_lib->ms.cache_display_cfg.plane.GPUVMMaxPageTableLevels;
CalculateVMRowAndSwath_params->GPUVMMinPageSizeKBytes = mode_lib->ms.cache_display_cfg.plane.GPUVMMinPageSizeKBytes;
-CalculateVMRowAndSwath_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024;
CalculateVMRowAndSwath_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes;
CalculateVMRowAndSwath_params->PTEBufferModeOverrideEn = mode_lib->ms.cache_display_cfg.plane.PTEBufferModeOverrideEn;
CalculateVMRowAndSwath_params->PTEBufferModeOverrideVal = mode_lib->ms.cache_display_cfg.plane.PTEBufferMode;
CalculateVMRowAndSwath_params->PTEBufferSizeNotExceeded = s->dummy_boolean_array[0];
@@ -8805,7 +8805,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
mode_lib->ms.cache_display_cfg.hw.DPPPerSurface,
locals->dpte_group_bytes,
s->HostVMInefficiencyFactor,
-mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024,
mode_lib->ms.soc.hostvm_min_page_size_kbytes,
mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels);
locals->TCalc = 24.0 / locals->DCFCLKDeepSleep;
@@ -8995,7 +8995,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
CalculatePrefetchSchedule_params->GPUVMEnable = mode_lib->ms.cache_display_cfg.plane.GPUVMEnable;
CalculatePrefetchSchedule_params->HostVMEnable = mode_lib->ms.cache_display_cfg.plane.HostVMEnable;
CalculatePrefetchSchedule_params->HostVMMaxNonCachedPageTableLevels = mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels;
-CalculatePrefetchSchedule_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024;
CalculatePrefetchSchedule_params->HostVMMinPageSize = mode_lib->ms.soc.hostvm_min_page_size_kbytes;
CalculatePrefetchSchedule_params->DynamicMetadataEnable = mode_lib->ms.cache_display_cfg.plane.DynamicMetadataEnable[k];
CalculatePrefetchSchedule_params->DynamicMetadataVMEnabled = mode_lib->ms.ip.dynamic_metadata_vm_enabled;
CalculatePrefetchSchedule_params->DynamicMetadataLinesBeforeActiveRequired = mode_lib->ms.cache_display_cfg.plane.DynamicMetadataLinesBeforeActiveRequired[k];
@@ -9240,7 +9240,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
mode_lib->ms.cache_display_cfg.plane.HostVMEnable,
mode_lib->ms.cache_display_cfg.plane.HostVMMaxPageTableLevels,
mode_lib->ms.cache_display_cfg.plane.GPUVMEnable,
-mode_lib->ms.soc.hostvm_min_page_size_kbytes * 1024,
mode_lib->ms.soc.hostvm_min_page_size_kbytes,
locals->PDEAndMetaPTEBytesFrame[k],
locals->MetaRowByte[k],
locals->PixelPTEBytesPerRow[k],


@@ -425,8 +425,9 @@ void dml2_init_soc_states(struct dml2_context *dml2, const struct dc *in_dc,
}
for (i = 0; i < dml2->config.bbox_overrides.clks_table.num_entries_per_clk.num_dtbclk_levels; i++) {
-p->in_states->state_array[i].dtbclk_mhz =
-dml2->config.bbox_overrides.clks_table.clk_entries[i].dtbclk_mhz;
if (dml2->config.bbox_overrides.clks_table.clk_entries[i].dtbclk_mhz > 0)
p->in_states->state_array[i].dtbclk_mhz =
dml2->config.bbox_overrides.clks_table.clk_entries[i].dtbclk_mhz;
}
for (i = 0; i < dml2->config.bbox_overrides.clks_table.num_entries_per_clk.num_dispclk_levels; i++) {


@@ -160,13 +160,6 @@ bool is_dp2p0_output_encoder(const struct pipe_ctx *pipe_ctx)
if (pipe_ctx->stream == NULL)
return false;
-/* Count MST hubs once by treating only 1st remote sink in topology as an encoder */
-if (pipe_ctx->stream->link && pipe_ctx->stream->link->remote_sinks[0]) {
-return (pipe_ctx->stream_res.hpo_dp_stream_enc &&
-pipe_ctx->link_res.hpo_dp_link_enc &&
-dc_is_dp_signal(pipe_ctx->stream->signal) &&
-(pipe_ctx->stream->link->remote_sinks[0] == pipe_ctx->stream->sink));
-}
return (pipe_ctx->stream_res.hpo_dp_stream_enc &&
pipe_ctx->link_res.hpo_dp_link_enc &&


@@ -1877,6 +1877,8 @@ void dcn20_program_front_end_for_ctx(
int i;
struct dce_hwseq *hws = dc->hwseq;
DC_LOGGER_INIT(dc->ctx->logger);
unsigned int prev_hubp_count = 0;
unsigned int hubp_count = 0;
if (resource_is_pipe_topology_changed(dc->current_state, context))
resource_log_pipe_topology_update(dc, context);
@@ -1894,6 +1896,20 @@ void dcn20_program_front_end_for_ctx(
}
}
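/* Count the planes (hubps) in use in the current and the new state; when the
 * new state brings up the first plane(s), force p-state change control on and
 * wait 500us before front-end programming, presumably to avoid underflow while
 * the pipes come up. The override is released again in
 * dcn20_post_unlock_program_front_end() below.
 */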
for (i = 0; i < dc->res_pool->pipe_count; i++) {
if (dc->current_state->res_ctx.pipe_ctx[i].plane_state)
prev_hubp_count++;
if (context->res_ctx.pipe_ctx[i].plane_state)
hubp_count++;
}
if (prev_hubp_count == 0 && hubp_count > 0) {
if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
dc->res_pool->hubbub->funcs->force_pstate_change_control(
dc->res_pool->hubbub, true, false);
udelay(500);
}
/* Set pipe update flags and lock pipes */
for (i = 0; i < dc->res_pool->pipe_count; i++)
dcn20_detect_pipe_changes(&dc->current_state->res_ctx.pipe_ctx[i],
@@ -2039,6 +2055,10 @@ void dcn20_post_unlock_program_front_end(
}
}
if (dc->res_pool->hubbub->funcs->force_pstate_change_control)
dc->res_pool->hubbub->funcs->force_pstate_change_control(
dc->res_pool->hubbub, false, false);
for (i = 0; i < dc->res_pool->pipe_count; i++) {
struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];


@@ -1123,9 +1123,23 @@ void dcn35_calc_blocks_to_ungate(struct dc *dc, struct dc_state *context,
update_state->pg_res_update[PG_HPO] = true;
}
-void dcn35_block_power_control(struct dc *dc,
-struct pg_block_update *update_state, bool power_on)
/**
* power down sequence
* ONO Region 3, DCPG 25: hpo - SKIPPED
* ONO Region 4, DCPG 0: dchubp0, dpp0
* ONO Region 6, DCPG 1: dchubp1, dpp1
* ONO Region 8, DCPG 2: dchubp2, dpp2
* ONO Region 10, DCPG 3: dchubp3, dpp3
* ONO Region 1, DCPG 23: dchubbub dchvm dchubbubmem - SKIPPED. PMFW will pwr dwn at IPS2 entry
* ONO Region 5, DCPG 16: dsc0
* ONO Region 7, DCPG 17: dsc1
* ONO Region 9, DCPG 18: dsc2
* ONO Region 11, DCPG 19: dsc3
* ONO Region 2, DCPG 24: mpc opp optc dwb
* ONO Region 0, DCPG 22: dccg dio dcio - SKIPPED. will be pwr dwn after lono timer is armed
*/
void dcn35_hw_block_power_down(struct dc *dc,
struct pg_block_update *update_state)
{
int i = 0;
struct pg_cntl *pg_cntl = dc->res_pool->pg_cntl;
@@ -1134,50 +1148,81 @@ void dcn35_block_power_control(struct dc *dc,
return;
if (dc->debug.ignore_pg)
return;
if (update_state->pg_res_update[PG_HPO]) {
if (pg_cntl->funcs->hpo_pg_control)
-pg_cntl->funcs->hpo_pg_control(pg_cntl, power_on);
pg_cntl->funcs->hpo_pg_control(pg_cntl, false);
}
for (i = 0; i < dc->res_pool->pipe_count; i++) {
if (update_state->pg_pipe_res_update[PG_HUBP][i] &&
update_state->pg_pipe_res_update[PG_DPP][i]) {
if (pg_cntl->funcs->hubp_dpp_pg_control)
-pg_cntl->funcs->hubp_dpp_pg_control(pg_cntl, i, power_on);
pg_cntl->funcs->hubp_dpp_pg_control(pg_cntl, i, false);
}
}
for (i = 0; i < dc->res_pool->res_cap->num_dsc; i++)
if (update_state->pg_pipe_res_update[PG_DSC][i]) {
if (pg_cntl->funcs->dsc_pg_control)
-pg_cntl->funcs->dsc_pg_control(pg_cntl, i, power_on);
pg_cntl->funcs->dsc_pg_control(pg_cntl, i, false);
}
-if (update_state->pg_pipe_res_update[PG_MPCC][i]) {
-if (pg_cntl->funcs->mpcc_pg_control)
-pg_cntl->funcs->mpcc_pg_control(pg_cntl, i, power_on);
-}
-if (update_state->pg_pipe_res_update[PG_OPP][i]) {
-if (pg_cntl->funcs->opp_pg_control)
-pg_cntl->funcs->opp_pg_control(pg_cntl, i, power_on);
-}
-if (update_state->pg_pipe_res_update[PG_OPTC][i]) {
-if (pg_cntl->funcs->optc_pg_control)
-pg_cntl->funcs->optc_pg_control(pg_cntl, i, power_on);
-}
-}
-if (update_state->pg_res_update[PG_DWB]) {
-if (pg_cntl->funcs->dwb_pg_control)
-pg_cntl->funcs->dwb_pg_control(pg_cntl, power_on);
-}
/*this will need all the clients to unregister optc interruts let dmubfw handle this*/
if (pg_cntl->funcs->plane_otg_pg_control)
-pg_cntl->funcs->plane_otg_pg_control(pg_cntl, power_on);
pg_cntl->funcs->plane_otg_pg_control(pg_cntl, false);
-//domain22, 23, 25 currently always on.
}
/**
* power up sequence
* ONO Region 0, DCPG 22: dccg dio dcio - SKIPPED
* ONO Region 2, DCPG 24: mpc opp optc dwb
* ONO Region 5, DCPG 16: dsc0
* ONO Region 7, DCPG 17: dsc1
* ONO Region 9, DCPG 18: dsc2
* ONO Region 11, DCPG 19: dsc3
* ONO Region 1, DCPG 23: dchubbub dchvm dchubbubmem - SKIPPED. PMFW will power up at IPS2 exit
* ONO Region 4, DCPG 0: dchubp0, dpp0
* ONO Region 6, DCPG 1: dchubp1, dpp1
* ONO Region 8, DCPG 2: dchubp2, dpp2
* ONO Region 10, DCPG 3: dchubp3, dpp3
* ONO Region 3, DCPG 25: hpo - SKIPPED
*/
void dcn35_hw_block_power_up(struct dc *dc,
struct pg_block_update *update_state)
{
int i = 0;
struct pg_cntl *pg_cntl = dc->res_pool->pg_cntl;
if (!pg_cntl)
return;
if (dc->debug.ignore_pg)
return;
//domain22, 23, 25 currently always on.
/*this will need all the clients to unregister optc interruts let dmubfw handle this*/
if (pg_cntl->funcs->plane_otg_pg_control)
pg_cntl->funcs->plane_otg_pg_control(pg_cntl, true);
for (i = 0; i < dc->res_pool->res_cap->num_dsc; i++)
if (update_state->pg_pipe_res_update[PG_DSC][i]) {
if (pg_cntl->funcs->dsc_pg_control)
pg_cntl->funcs->dsc_pg_control(pg_cntl, i, true);
}
for (i = 0; i < dc->res_pool->pipe_count; i++) {
if (update_state->pg_pipe_res_update[PG_HUBP][i] &&
update_state->pg_pipe_res_update[PG_DPP][i]) {
if (pg_cntl->funcs->hubp_dpp_pg_control)
pg_cntl->funcs->hubp_dpp_pg_control(pg_cntl, i, true);
}
}
if (update_state->pg_res_update[PG_HPO]) {
if (pg_cntl->funcs->hpo_pg_control)
pg_cntl->funcs->hpo_pg_control(pg_cntl, true);
}
}
void dcn35_root_clock_control(struct dc *dc,
struct pg_block_update *update_state, bool power_on)
{
@@ -1186,14 +1231,16 @@ void dcn35_root_clock_control(struct dc *dc,
if (!pg_cntl)
return;
-for (i = 0; i < dc->res_pool->pipe_count; i++) {
-if (update_state->pg_pipe_res_update[PG_HUBP][i] &&
-update_state->pg_pipe_res_update[PG_DPP][i]) {
-if (dc->hwseq->funcs.dpp_root_clock_control)
-dc->hwseq->funcs.dpp_root_clock_control(dc->hwseq, i, power_on);
/*enable root clock first when power up*/
if (power_on)
for (i = 0; i < dc->res_pool->pipe_count; i++) {
if (update_state->pg_pipe_res_update[PG_HUBP][i] &&
update_state->pg_pipe_res_update[PG_DPP][i]) {
if (dc->hwseq->funcs.dpp_root_clock_control)
dc->hwseq->funcs.dpp_root_clock_control(dc->hwseq, i, power_on);
}
}
for (i = 0; i < dc->res_pool->res_cap->num_dsc; i++) {
if (update_state->pg_pipe_res_update[PG_DSC][i]) {
if (power_on) {
if (dc->res_pool->dccg->funcs->enable_dsc)
@@ -1204,6 +1251,15 @@ void dcn35_root_clock_control(struct dc *dc,
}
}
}
/*disable root clock first when power down*/
if (!power_on)
for (i = 0; i < dc->res_pool->pipe_count; i++) {
if (update_state->pg_pipe_res_update[PG_HUBP][i] &&
update_state->pg_pipe_res_update[PG_DPP][i]) {
if (dc->hwseq->funcs.dpp_root_clock_control)
dc->hwseq->funcs.dpp_root_clock_control(dc->hwseq, i, power_on);
}
}
}
void dcn35_prepare_bandwidth(
@@ -1217,9 +1273,9 @@ void dcn35_prepare_bandwidth(
if (dc->hwss.root_clock_control)
dc->hwss.root_clock_control(dc, &pg_update_state, true);
-if (dc->hwss.block_power_control)
-dc->hwss.block_power_control(dc, &pg_update_state, true);
/*power up required HW block*/
if (dc->hwss.hw_block_power_up)
dc->hwss.hw_block_power_up(dc, &pg_update_state);
}
dcn20_prepare_bandwidth(dc, context);
@@ -1235,9 +1291,9 @@ void dcn35_optimize_bandwidth(
if (dc->hwss.calc_blocks_to_gate) {
dc->hwss.calc_blocks_to_gate(dc, context, &pg_update_state);
-if (dc->hwss.block_power_control)
-dc->hwss.block_power_control(dc, &pg_update_state, false);
/*try to power down unused block*/
if (dc->hwss.hw_block_power_down)
dc->hwss.hw_block_power_down(dc, &pg_update_state);
if (dc->hwss.root_clock_control)
dc->hwss.root_clock_control(dc, &pg_update_state, false);


@@ -63,8 +63,10 @@ void dcn35_calc_blocks_to_gate(struct dc *dc, struct dc_state *context,
struct pg_block_update *update_state);
void dcn35_calc_blocks_to_ungate(struct dc *dc, struct dc_state *context,
struct pg_block_update *update_state);
-void dcn35_block_power_control(struct dc *dc,
-struct pg_block_update *update_state, bool power_on);
void dcn35_hw_block_power_up(struct dc *dc,
struct pg_block_update *update_state);
void dcn35_hw_block_power_down(struct dc *dc,
struct pg_block_update *update_state);
void dcn35_root_clock_control(struct dc *dc,
struct pg_block_update *update_state, bool power_on);


@@ -414,8 +414,10 @@ struct hw_sequencer_funcs {
struct pg_block_update *update_state);
void (*calc_blocks_to_ungate)(struct dc *dc, struct dc_state *context,
struct pg_block_update *update_state);
-void (*block_power_control)(struct dc *dc,
-struct pg_block_update *update_state, bool power_on);
void (*hw_block_power_up)(struct dc *dc,
struct pg_block_update *update_state);
void (*hw_block_power_down)(struct dc *dc,
struct pg_block_update *update_state);
void (*root_clock_control)(struct dc *dc,
struct pg_block_update *update_state, bool power_on);
void (*set_idle_state)(const struct dc *dc, bool allow_idle);


@@ -59,8 +59,8 @@ enum dentist_dispclk_change_mode {
struct dp_dto_params {
int otg_inst;
enum signal_type signal;
-long long pixclk_hz;
-long long refclk_hz;
uint64_t pixclk_hz;
uint64_t refclk_hz;
};
enum pixel_rate_div {


@@ -412,12 +412,18 @@ static enum dc_link_rate get_cable_max_link_rate(struct dc_link *link)
{
enum dc_link_rate cable_max_link_rate = LINK_RATE_UNKNOWN;
-if (link->dpcd_caps.cable_id.bits.UHBR10_20_CAPABILITY & DP_UHBR20)
if (link->dpcd_caps.cable_id.bits.UHBR10_20_CAPABILITY & DP_UHBR20) {
cable_max_link_rate = LINK_RATE_UHBR20;
-else if (link->dpcd_caps.cable_id.bits.UHBR13_5_CAPABILITY)
} else if (link->dpcd_caps.cable_id.bits.UHBR13_5_CAPABILITY) {
cable_max_link_rate = LINK_RATE_UHBR13_5;
-else if (link->dpcd_caps.cable_id.bits.UHBR10_20_CAPABILITY & DP_UHBR10)
-cable_max_link_rate = LINK_RATE_UHBR10;
} else if (link->dpcd_caps.cable_id.bits.UHBR10_20_CAPABILITY & DP_UHBR10) {
// allow DP40 cables to do UHBR13.5 for passive or unknown cable type
if (link->dpcd_caps.cable_id.bits.CABLE_TYPE < 2) {
cable_max_link_rate = LINK_RATE_UHBR13_5;
} else {
cable_max_link_rate = LINK_RATE_UHBR10;
}
}
return cable_max_link_rate;
}


@@ -717,6 +717,8 @@ static const struct dc_debug_options debug_defaults_drv = {
.disable_dcc = DCC_ENABLE,
.disable_dpp_power_gate = true,
.disable_hubp_power_gate = true,
.disable_optc_power_gate = true, /*should the same as above two*/
.disable_hpo_power_gate = true, /*dmubfw force domain25 on*/
.disable_clock_gate = false,
.disable_dsc_power_gate = true,
.vsr_support = true,


@@ -150,6 +150,13 @@ enum dmub_memory_access_type {
DMUB_MEMORY_ACCESS_DMA
};
/* enum dmub_power_state type - to track DC power state in dmub_srv */
enum dmub_srv_power_state_type {
DMUB_POWER_STATE_UNDEFINED = 0,
DMUB_POWER_STATE_D0 = 1,
DMUB_POWER_STATE_D3 = 8
};
/**
* struct dmub_region - dmub hw memory region
* @base: base address for region, must be 256 byte aligned
@@ -485,6 +492,8 @@ struct dmub_srv {
/* Feature capabilities reported by fw */
struct dmub_feature_caps feature_caps;
struct dmub_visual_confirm_color visual_confirm_color;
enum dmub_srv_power_state_type power_state;
};
/**
@@ -889,6 +898,18 @@ enum dmub_status dmub_srv_clear_inbox0_ack(struct dmub_srv *dmub);
*/
void dmub_srv_subvp_save_surf_addr(struct dmub_srv *dmub, const struct dc_plane_address *addr, uint8_t subvp_index);
/**
* dmub_srv_set_power_state() - Track DC power state in dmub_srv
* @dmub: The dmub service
* @power_state: DC power state setting
*
* Store DC power state in dmub_srv. If dmub_srv is in D3, then don't send messages to DMUB
*
* Return:
* void
*/
void dmub_srv_set_power_state(struct dmub_srv *dmub, enum dmub_srv_power_state_type dmub_srv_power_state);
#if defined(__cplusplus)
}
#endif


@@ -713,6 +713,7 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub,
dmub->hw_funcs.reset_release(dmub);
dmub->hw_init = true;
dmub->power_state = DMUB_POWER_STATE_D0;
return DMUB_STATUS_OK;
}
@@ -766,6 +767,9 @@ enum dmub_status dmub_srv_cmd_queue(struct dmub_srv *dmub,
if (!dmub->hw_init)
return DMUB_STATUS_INVALID;
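/* The check added below refuses to queue new commands while DC has marked the
 * DMCUB as being in D3 (see dmub_srv_set_power_state()), so callers get
 * DMUB_STATUS_INVALID rather than writes to hardware that may be powered down.
 */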
if (dmub->power_state != DMUB_POWER_STATE_D0)
return DMUB_STATUS_INVALID;
if (dmub->inbox1_rb.rptr > dmub->inbox1_rb.capacity ||
dmub->inbox1_rb.wrpt > dmub->inbox1_rb.capacity) {
return DMUB_STATUS_HW_FAILURE;
@@ -784,6 +788,9 @@ enum dmub_status dmub_srv_cmd_execute(struct dmub_srv *dmub)
if (!dmub->hw_init)
return DMUB_STATUS_INVALID;
if (dmub->power_state != DMUB_POWER_STATE_D0)
return DMUB_STATUS_INVALID;
/**
* Read back all the queued commands to ensure that they've
* been flushed to framebuffer memory. Otherwise DMCUB might
@@ -1100,3 +1107,11 @@ void dmub_srv_subvp_save_surf_addr(struct dmub_srv *dmub, const struct dc_plane_
subvp_index);
}
}
void dmub_srv_set_power_state(struct dmub_srv *dmub, enum dmub_srv_power_state_type dmub_srv_power_state)
{
if (!dmub || !dmub->hw_init)
return;
dmub->power_state = dmub_srv_power_state;
}


@@ -69,6 +69,18 @@ static const struct fixed31_32 dc_fixpt_epsilon = { 1LL };
static const struct fixed31_32 dc_fixpt_half = { 0x80000000LL };
static const struct fixed31_32 dc_fixpt_one = { 0x100000000LL };
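/* The helper added below converts a value from the sign-and-magnitude S31.32
 * fixed-point layout used by DRM color properties (e.g. struct
 * drm_color_ctm_3x4, as used for the plane CTM blob above) into the
 * two's-complement representation that struct fixed31_32 expects.
 */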
static inline struct fixed31_32 dc_fixpt_from_s3132(__u64 x)
{
struct fixed31_32 val;
/* If negative, convert to 2's complement. */
if (x & (1ULL << 63))
x = -(x & ~(1ULL << 63));
val.value = x;
return val;
}
/*
* @brief
* Initialization routines


@@ -839,6 +839,8 @@ bool is_psr_su_specific_panel(struct dc_link *link)
((dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x08) ||
(dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x07)))
isPSRSUSupported = false;
else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
isPSRSUSupported = false;
else if (dpcd_caps->psr_info.force_psrsu_cap == 0x1)
isPSRSUSupported = true;
}


@@ -572,7 +572,8 @@ struct SET_SHADER_DEBUGGER {
struct {
uint32_t single_memop : 1; /* SQ_DEBUG.single_memop */
uint32_t single_alu_op : 1; /* SQ_DEBUG.single_alu_op */
-uint32_t reserved : 30;
uint32_t reserved : 29;
uint32_t process_ctx_flush : 1;
};
uint32_t u32all;
} flags;


@@ -616,6 +616,16 @@ void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable)
enable ? "enable" : "disable", ret);
}
void amdgpu_dpm_enable_vpe(struct amdgpu_device *adev, bool enable)
{
int ret = 0;
ret = amdgpu_dpm_set_powergating_by_smu(adev, AMD_IP_BLOCK_TYPE_VPE, !enable);
if (ret)
DRM_ERROR("Dpm %s vpe failed, ret = %d.\n",
enable ? "enable" : "disable", ret);
}
int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version)
{
const struct amd_pm_funcs *pp_funcs = adev->powerplay.pp_funcs;


@@ -445,6 +445,7 @@ void amdgpu_dpm_compute_clocks(struct amdgpu_device *adev);
void amdgpu_dpm_enable_uvd(struct amdgpu_device *adev, bool enable);
void amdgpu_dpm_enable_vce(struct amdgpu_device *adev, bool enable);
void amdgpu_dpm_enable_jpeg(struct amdgpu_device *adev, bool enable);
void amdgpu_dpm_enable_vpe(struct amdgpu_device *adev, bool enable);
int amdgpu_pm_load_smu_firmware(struct amdgpu_device *adev, uint32_t *smu_version);
int amdgpu_dpm_handle_passthrough_sbr(struct amdgpu_device *adev, bool enable);
int amdgpu_dpm_send_hbm_bad_pages_num(struct amdgpu_device *adev, uint32_t size);


@@ -2735,10 +2735,8 @@ static int kv_parse_power_table(struct amdgpu_device *adev)
non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *)
&non_clock_info_array->nonClockInfo[non_clock_array_index];
ps = kzalloc(sizeof(struct kv_ps), GFP_KERNEL);
-if (ps == NULL) {
-kfree(adev->pm.dpm.ps);
if (ps == NULL)
return -ENOMEM;
-}
adev->pm.dpm.ps[i].ps_priv = ps;
k = 0;
idx = (u8 *)&power_state->v2.clockInfoIndex[0];


@@ -272,10 +272,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
le16_to_cpu(power_info->pplib4.usVddcDependencyOnSCLKOffset));
ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
dep_table);
-if (ret) {
-amdgpu_free_extended_power_table(adev);
if (ret)
return ret;
-}
}
if (power_info->pplib4.usVddciDependencyOnMCLKOffset) {
dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
@@ -283,10 +281,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
le16_to_cpu(power_info->pplib4.usVddciDependencyOnMCLKOffset));
ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
dep_table);
-if (ret) {
-amdgpu_free_extended_power_table(adev);
if (ret)
return ret;
-}
}
if (power_info->pplib4.usVddcDependencyOnMCLKOffset) {
dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
@@ -294,10 +290,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
le16_to_cpu(power_info->pplib4.usVddcDependencyOnMCLKOffset));
ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
dep_table);
-if (ret) {
-amdgpu_free_extended_power_table(adev);
if (ret)
return ret;
-}
}
if (power_info->pplib4.usMvddDependencyOnMCLKOffset) {
dep_table = (ATOM_PPLIB_Clock_Voltage_Dependency_Table *)
@@ -305,10 +299,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
le16_to_cpu(power_info->pplib4.usMvddDependencyOnMCLKOffset));
ret = amdgpu_parse_clk_voltage_dep_table(&adev->pm.dpm.dyn_state.mvdd_dependency_on_mclk,
dep_table);
-if (ret) {
-amdgpu_free_extended_power_table(adev);
if (ret)
return ret;
-}
}
if (power_info->pplib4.usMaxClockVoltageOnDCOffset) {
ATOM_PPLIB_Clock_Voltage_Limit_Table *clk_v =
@@ -339,10 +331,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
kcalloc(psl->ucNumEntries,
sizeof(struct amdgpu_phase_shedding_limits_entry),
GFP_KERNEL);
-if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries) {
-amdgpu_free_extended_power_table(adev);
if (!adev->pm.dpm.dyn_state.phase_shedding_limits_table.entries)
return -ENOMEM;
-}
entry = &psl->entries[0];
for (i = 0; i < psl->ucNumEntries; i++) {
@@ -383,10 +373,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
ATOM_PPLIB_CAC_Leakage_Record *entry;
u32 size = cac_table->ucNumEntries * sizeof(struct amdgpu_cac_leakage_table);
adev->pm.dpm.dyn_state.cac_leakage_table.entries = kzalloc(size, GFP_KERNEL);
-if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries) {
-amdgpu_free_extended_power_table(adev);
if (!adev->pm.dpm.dyn_state.cac_leakage_table.entries)
return -ENOMEM;
-}
entry = &cac_table->entries[0];
for (i = 0; i < cac_table->ucNumEntries; i++) {
if (adev->pm.dpm.platform_caps & ATOM_PP_PLATFORM_CAP_EVV) {
@@ -438,10 +426,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
sizeof(struct amdgpu_vce_clock_voltage_dependency_entry);
adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries =
kzalloc(size, GFP_KERNEL);
-if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries) {
-amdgpu_free_extended_power_table(adev);
if (!adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries)
return -ENOMEM;
-}
adev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.count =
limits->numEntries;
entry = &limits->entries[0];
@@ -493,10 +479,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
sizeof(struct amdgpu_uvd_clock_voltage_dependency_entry);
adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries =
kzalloc(size, GFP_KERNEL);
-if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries) {
-amdgpu_free_extended_power_table(adev);
if (!adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries)
return -ENOMEM;
-}
adev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.count =
limits->numEntries;
entry = &limits->entries[0];
@@ -525,10 +509,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
sizeof(struct amdgpu_clock_voltage_dependency_entry);
adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries =
kzalloc(size, GFP_KERNEL);
-if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries) {
-amdgpu_free_extended_power_table(adev);
if (!adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries)
return -ENOMEM;
}
adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count = adev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.count =
limits->numEntries; limits->numEntries;
entry = &limits->entries[0]; entry = &limits->entries[0];
@ -548,10 +530,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
le16_to_cpu(ext_hdr->usPPMTableOffset)); le16_to_cpu(ext_hdr->usPPMTableOffset));
adev->pm.dpm.dyn_state.ppm_table = adev->pm.dpm.dyn_state.ppm_table =
kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL); kzalloc(sizeof(struct amdgpu_ppm_table), GFP_KERNEL);
if (!adev->pm.dpm.dyn_state.ppm_table) { if (!adev->pm.dpm.dyn_state.ppm_table)
amdgpu_free_extended_power_table(adev);
return -ENOMEM; return -ENOMEM;
}
adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign; adev->pm.dpm.dyn_state.ppm_table->ppm_design = ppm->ucPpmDesign;
adev->pm.dpm.dyn_state.ppm_table->cpu_core_number = adev->pm.dpm.dyn_state.ppm_table->cpu_core_number =
le16_to_cpu(ppm->usCpuCoreNumber); le16_to_cpu(ppm->usCpuCoreNumber);
@ -583,10 +563,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
sizeof(struct amdgpu_clock_voltage_dependency_entry); sizeof(struct amdgpu_clock_voltage_dependency_entry);
adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries = adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries =
kzalloc(size, GFP_KERNEL); kzalloc(size, GFP_KERNEL);
if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries) { if (!adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries)
amdgpu_free_extended_power_table(adev);
return -ENOMEM; return -ENOMEM;
}
adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count = adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.count =
limits->numEntries; limits->numEntries;
entry = &limits->entries[0]; entry = &limits->entries[0];
@ -606,10 +584,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
ATOM_PowerTune_Table *pt; ATOM_PowerTune_Table *pt;
adev->pm.dpm.dyn_state.cac_tdp_table = adev->pm.dpm.dyn_state.cac_tdp_table =
kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL); kzalloc(sizeof(struct amdgpu_cac_tdp_table), GFP_KERNEL);
if (!adev->pm.dpm.dyn_state.cac_tdp_table) { if (!adev->pm.dpm.dyn_state.cac_tdp_table)
amdgpu_free_extended_power_table(adev);
return -ENOMEM; return -ENOMEM;
}
if (rev > 0) { if (rev > 0) {
ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *) ATOM_PPLIB_POWERTUNE_Table_V1 *ppt = (ATOM_PPLIB_POWERTUNE_Table_V1 *)
(mode_info->atom_context->bios + data_offset + (mode_info->atom_context->bios + data_offset +
@ -645,10 +621,8 @@ int amdgpu_parse_extended_power_table(struct amdgpu_device *adev)
ret = amdgpu_parse_clk_voltage_dep_table( ret = amdgpu_parse_clk_voltage_dep_table(
&adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk, &adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk,
dep_table); dep_table);
if (ret) { if (ret)
kfree(adev->pm.dpm.dyn_state.vddgfx_dependency_on_sclk.entries);
return ret; return ret;
}
} }
} }

View File

@ -7379,10 +7379,9 @@ static int si_dpm_init(struct amdgpu_device *adev)
kcalloc(4, kcalloc(4,
sizeof(struct amdgpu_clock_voltage_dependency_entry), sizeof(struct amdgpu_clock_voltage_dependency_entry),
GFP_KERNEL); GFP_KERNEL);
if (!adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries) { if (!adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries)
amdgpu_free_extended_power_table(adev);
return -ENOMEM; return -ENOMEM;
}
adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.count = 4; adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.count = 4;
adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[0].clk = 0; adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[0].clk = 0;
adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[0].v = 0; adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[0].v = 0;

View File

@ -1322,6 +1322,187 @@ static int smu_get_thermal_temperature_range(struct smu_context *smu)
return ret; return ret;
} }
/**
* smu_wbrf_handle_exclusion_ranges - consume the wbrf exclusion ranges
*
* @smu: smu_context pointer
*
* Retrieve the wbrf exclusion ranges and send them to PMFW for proper handling.
* Returns 0 on success, error on failure.
*/
static int smu_wbrf_handle_exclusion_ranges(struct smu_context *smu)
{
struct wbrf_ranges_in_out wbrf_exclusion = {0};
struct freq_band_range *wifi_bands = wbrf_exclusion.band_list;
struct amdgpu_device *adev = smu->adev;
uint32_t num_of_wbrf_ranges = MAX_NUM_OF_WBRF_RANGES;
uint64_t start, end;
int ret, i, j;
ret = amd_wbrf_retrieve_freq_band(adev->dev, &wbrf_exclusion);
if (ret) {
dev_err(adev->dev, "Failed to retrieve exclusion ranges!\n");
return ret;
}
/*
* The exclusion ranges array we got might be filled with holes and duplicate
* entries. For example:
* {(2400, 2500), (0, 0), (6882, 6962), (2400, 2500), (0, 0), (6117, 6189), (0, 0)...}
* We need to do some compaction to eliminate those holes and duplicate
* entries (a standalone sketch of this compaction follows this file's diff).
* Expected output: {(2400, 2500), (6117, 6189), (6882, 6962), (0, 0)...}
*/
for (i = 0; i < num_of_wbrf_ranges; i++) {
start = wifi_bands[i].start;
end = wifi_bands[i].end;
/* get the last valid entry to fill the intermediate hole */
if (!start && !end) {
for (j = num_of_wbrf_ranges - 1; j > i; j--)
if (wifi_bands[j].start && wifi_bands[j].end)
break;
/* no valid entry left */
if (j <= i)
break;
start = wifi_bands[i].start = wifi_bands[j].start;
end = wifi_bands[i].end = wifi_bands[j].end;
wifi_bands[j].start = 0;
wifi_bands[j].end = 0;
num_of_wbrf_ranges = j;
}
/* eliminate duplicate entries */
for (j = i + 1; j < num_of_wbrf_ranges; j++) {
if ((wifi_bands[j].start == start) && (wifi_bands[j].end == end)) {
wifi_bands[j].start = 0;
wifi_bands[j].end = 0;
}
}
}
/* Send the compacted wifi_bands to PMFW */
ret = smu_set_wbrf_exclusion_ranges(smu, wifi_bands);
/* Try to set the wifi_bands again */
if (unlikely(ret == -EBUSY)) {
mdelay(5);
ret = smu_set_wbrf_exclusion_ranges(smu, wifi_bands);
}
return ret;
}
/**
* smu_wbrf_event_handler - handle notify events
*
* @nb: notifier block
* @action: event type
* @_arg: event data
*
* Calls the relevant amdgpu function in response to a wbrf event
* notification from the kernel.
*/
static int smu_wbrf_event_handler(struct notifier_block *nb,
unsigned long action, void *_arg)
{
struct smu_context *smu = container_of(nb, struct smu_context, wbrf_notifier);
switch (action) {
case WBRF_CHANGED:
schedule_delayed_work(&smu->wbrf_delayed_work,
msecs_to_jiffies(SMU_WBRF_EVENT_HANDLING_PACE));
break;
default:
return NOTIFY_DONE;
}
return NOTIFY_OK;
}
/**
* smu_wbrf_delayed_work_handler - callback invoked when the delayed work timer expires
*
* @work: struct work_struct pointer
*
* The event flood is over and the driver will consume the latest exclusion ranges.
*/
static void smu_wbrf_delayed_work_handler(struct work_struct *work)
{
struct smu_context *smu = container_of(work, struct smu_context, wbrf_delayed_work.work);
smu_wbrf_handle_exclusion_ranges(smu);
}
/**
* smu_wbrf_support_check - check wbrf support
*
* @smu: smu_context pointer
*
* Checks via the ACPI interface whether wbrf is supported.
*/
static void smu_wbrf_support_check(struct smu_context *smu)
{
struct amdgpu_device *adev = smu->adev;
smu->wbrf_supported = smu_is_asic_wbrf_supported(smu) && amdgpu_wbrf &&
acpi_amd_wbrf_supported_consumer(adev->dev);
if (smu->wbrf_supported)
dev_info(adev->dev, "RF interference mitigation is supported\n");
}
/**
* smu_wbrf_init - init driver wbrf support
*
* @smu: smu_context pointer
*
* Verifies the AMD ACPI interfaces and registers with the wbrf
* notifier chain if the wbrf feature is supported.
* Returns 0 on success, error on failure.
*/
static int smu_wbrf_init(struct smu_context *smu)
{
int ret;
if (!smu->wbrf_supported)
return 0;
INIT_DELAYED_WORK(&smu->wbrf_delayed_work, smu_wbrf_delayed_work_handler);
smu->wbrf_notifier.notifier_call = smu_wbrf_event_handler;
ret = amd_wbrf_register_notifier(&smu->wbrf_notifier);
if (ret)
return ret;
/*
* Some wifiband exclusion ranges may already be present before
* our driver loads. Schedule the delayed work so that the driver
* becomes aware of those exclusion ranges.
*/
schedule_delayed_work(&smu->wbrf_delayed_work,
msecs_to_jiffies(SMU_WBRF_EVENT_HANDLING_PACE));
return 0;
}
/**
* smu_wbrf_fini - tear down driver wbrf support
*
* @smu: smu_context pointer
*
* Unregisters with the wbrf notifier chain.
*/
static void smu_wbrf_fini(struct smu_context *smu)
{
if (!smu->wbrf_supported)
return;
amd_wbrf_unregister_notifier(&smu->wbrf_notifier);
cancel_delayed_work_sync(&smu->wbrf_delayed_work);
}
static int smu_smc_hw_setup(struct smu_context *smu) static int smu_smc_hw_setup(struct smu_context *smu)
{ {
struct smu_feature *feature = &smu->smu_feature; struct smu_feature *feature = &smu->smu_feature;
@ -1414,6 +1595,15 @@ static int smu_smc_hw_setup(struct smu_context *smu)
if (ret) if (ret)
return ret; return ret;
/* Enable UclkShadow when wbrf is supported */
if (smu->wbrf_supported) {
ret = smu_enable_uclk_shadow(smu, true);
if (ret) {
dev_err(adev->dev, "Failed to enable UclkShadow feature to support wbrf!\n");
return ret;
}
}
/* /*
* With SCPM enabled, these actions (and relevant messages) are * With SCPM enabled, these actions (and relevant messages) are
* neither needed nor permitted. * neither needed nor permitted.
@ -1512,6 +1702,15 @@ static int smu_smc_hw_setup(struct smu_context *smu)
*/ */
ret = smu_set_min_dcef_deep_sleep(smu, ret = smu_set_min_dcef_deep_sleep(smu,
smu->smu_table.boot_values.dcefclk / 100); smu->smu_table.boot_values.dcefclk / 100);
if (ret) {
dev_err(adev->dev, "Error setting min deepsleep dcefclk\n");
return ret;
}
/* Init wbrf support. Properly set up the notifier */
ret = smu_wbrf_init(smu);
if (ret)
dev_err(adev->dev, "Error during wbrf init call\n");
return ret; return ret;
} }
@ -1567,6 +1766,13 @@ static int smu_hw_init(void *handle)
return ret; return ret;
} }
/*
* Check whether wbrf is supported. This needs to be done
* before SMU setup starts since part of SMU configuration
* relies on this.
*/
smu_wbrf_support_check(smu);
if (smu->is_apu) { if (smu->is_apu) {
ret = smu_set_gfx_imu_enable(smu); ret = smu_set_gfx_imu_enable(smu);
if (ret) if (ret)
@ -1733,6 +1939,8 @@ static int smu_smc_hw_cleanup(struct smu_context *smu)
struct amdgpu_device *adev = smu->adev; struct amdgpu_device *adev = smu->adev;
int ret = 0; int ret = 0;
smu_wbrf_fini(smu);
cancel_work_sync(&smu->throttling_logging_work); cancel_work_sync(&smu->throttling_logging_work);
cancel_work_sync(&smu->interrupt_work); cancel_work_sync(&smu->interrupt_work);
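
Standalone sketch (not part of the patch above): a minimal userspace C program that reproduces the hole/duplicate compaction performed by smu_wbrf_handle_exclusion_ranges(). struct band, MAX_RANGES and compact_bands() are illustrative stand-ins for the kernel's struct freq_band_range, MAX_NUM_OF_WBRF_RANGES and the in-function loop; the input is the example from the code comment.

#include <stdint.h>
#include <stdio.h>

#define MAX_RANGES 8	/* stand-in for MAX_NUM_OF_WBRF_RANGES */

struct band {		/* stand-in for struct freq_band_range */
	uint64_t start;
	uint64_t end;
};

/* Same loop structure as the kernel function: fill holes from the tail,
 * then zero out later duplicates of the entry settled at index i. */
static void compact_bands(struct band *b, int n)
{
	uint64_t start, end;
	int i, j;

	for (i = 0; i < n; i++) {
		start = b[i].start;
		end = b[i].end;

		/* hole: pull the last valid entry forward */
		if (!start && !end) {
			for (j = n - 1; j > i; j--)
				if (b[j].start && b[j].end)
					break;
			if (j <= i)
				break;	/* no valid entry left */
			start = b[i].start = b[j].start;
			end = b[i].end = b[j].end;
			b[j].start = 0;
			b[j].end = 0;
			n = j;
		}

		/* zero out later duplicates of (start, end) */
		for (j = i + 1; j < n; j++) {
			if (b[j].start == start && b[j].end == end) {
				b[j].start = 0;
				b[j].end = 0;
			}
		}
	}
}

int main(void)
{
	/* the example input from the kernel comment above */
	struct band b[MAX_RANGES] = {
		{ 2400, 2500 }, { 0, 0 }, { 6882, 6962 }, { 2400, 2500 },
		{ 0, 0 }, { 6117, 6189 }, { 0, 0 }, { 0, 0 },
	};
	int i;

	compact_bands(b, MAX_RANGES);

	/* prints: (2400, 2500) (6117, 6189) (6882, 6962) (0, 0) ... */
	for (i = 0; i < MAX_RANGES; i++)
		printf("(%llu, %llu) ", (unsigned long long)b[i].start,
		       (unsigned long long)b[i].end);
	printf("\n");
	return 0;
}
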

View File

@ -22,6 +22,9 @@
#ifndef __AMDGPU_SMU_H__ #ifndef __AMDGPU_SMU_H__
#define __AMDGPU_SMU_H__ #define __AMDGPU_SMU_H__
#include <linux/acpi_amd_wbrf.h>
#include <linux/units.h>
#include "amdgpu.h" #include "amdgpu.h"
#include "kgd_pp_interface.h" #include "kgd_pp_interface.h"
#include "dm_pp_interface.h" #include "dm_pp_interface.h"
@ -318,6 +321,7 @@ enum smu_table_id {
SMU_TABLE_PACE, SMU_TABLE_PACE,
SMU_TABLE_ECCINFO, SMU_TABLE_ECCINFO,
SMU_TABLE_COMBO_PPTABLE, SMU_TABLE_COMBO_PPTABLE,
SMU_TABLE_WIFIBAND,
SMU_TABLE_COUNT, SMU_TABLE_COUNT,
}; };
@ -471,6 +475,12 @@ struct stb_context {
#define WORKLOAD_POLICY_MAX 7 #define WORKLOAD_POLICY_MAX 7
/*
* Configure the wbrf event handling pace: at most one event is
* processed every SMU_WBRF_EVENT_HANDLING_PACE ms.
*/
#define SMU_WBRF_EVENT_HANDLING_PACE 10
struct smu_context { struct smu_context {
struct amdgpu_device *adev; struct amdgpu_device *adev;
struct amdgpu_irq_src irq_source; struct amdgpu_irq_src irq_source;
@ -570,6 +580,11 @@ struct smu_context {
struct delayed_work swctf_delayed_work; struct delayed_work swctf_delayed_work;
enum pp_xgmi_plpd_mode plpd_mode; enum pp_xgmi_plpd_mode plpd_mode;
/* data structures for wbrf feature support */
bool wbrf_supported;
struct notifier_block wbrf_notifier;
struct delayed_work wbrf_delayed_work;
}; };
struct i2c_adapter; struct i2c_adapter;
@ -1375,6 +1390,22 @@ struct pptable_funcs {
* @notify_rlc_state: Notify RLC power state to SMU. * @notify_rlc_state: Notify RLC power state to SMU.
*/ */
int (*notify_rlc_state)(struct smu_context *smu, bool en); int (*notify_rlc_state)(struct smu_context *smu, bool en);
/**
* @is_asic_wbrf_supported: check whether PMFW supports the wbrf feature
*/
bool (*is_asic_wbrf_supported)(struct smu_context *smu);
/**
* @enable_uclk_shadow: Enable the uclk shadow feature when wbrf is supported
*/
int (*enable_uclk_shadow)(struct smu_context *smu, bool enable);
/**
* @set_wbrf_exclusion_ranges: notify SMU of the occupied wifi bands
*/
int (*set_wbrf_exclusion_ranges)(struct smu_context *smu,
struct freq_band_range *exclusion_ranges);
}; };
typedef enum { typedef enum {
@ -1501,6 +1532,17 @@ enum smu_baco_seq {
__dst_size); \ __dst_size); \
}) })
typedef struct {
uint16_t LowFreq;
uint16_t HighFreq;
} WifiOneBand_t;
typedef struct {
uint32_t WifiBandEntryNum;
WifiOneBand_t WifiBandEntry[11];
uint32_t MmHubPadding[8];
} WifiBandEntryTable_t;
#if !defined(SWSMU_CODE_LAYER_L2) && !defined(SWSMU_CODE_LAYER_L3) && !defined(SWSMU_CODE_LAYER_L4) #if !defined(SWSMU_CODE_LAYER_L2) && !defined(SWSMU_CODE_LAYER_L3) && !defined(SWSMU_CODE_LAYER_L4)
int smu_get_power_limit(void *handle, int smu_get_power_limit(void *handle,
uint32_t *limit, uint32_t *limit,

View File

@ -1615,7 +1615,8 @@ typedef struct {
#define TABLE_I2C_COMMANDS 9 #define TABLE_I2C_COMMANDS 9
#define TABLE_DRIVER_INFO 10 #define TABLE_DRIVER_INFO 10
#define TABLE_ECCINFO 11 #define TABLE_ECCINFO 11
#define TABLE_COUNT 12 #define TABLE_WIFIBAND 12
#define TABLE_COUNT 13
//IH Interrupt ID //IH Interrupt ID
#define IH_INTERRUPT_ID_TO_DRIVER 0xFE #define IH_INTERRUPT_ID_TO_DRIVER 0xFE

View File

@ -1605,7 +1605,8 @@ typedef struct {
#define TABLE_I2C_COMMANDS 9 #define TABLE_I2C_COMMANDS 9
#define TABLE_DRIVER_INFO 10 #define TABLE_DRIVER_INFO 10
#define TABLE_ECCINFO 11 #define TABLE_ECCINFO 11
#define TABLE_COUNT 12 #define TABLE_WIFIBAND 12
#define TABLE_COUNT 13
//IH Interrupt ID //IH Interrupt ID
#define IH_INTERRUPT_ID_TO_DRIVER 0xFE #define IH_INTERRUPT_ID_TO_DRIVER 0xFE

View File

@ -24,11 +24,6 @@
#ifndef SMU14_DRIVER_IF_V14_0_0_H #ifndef SMU14_DRIVER_IF_V14_0_0_H
#define SMU14_DRIVER_IF_V14_0_0_H #define SMU14_DRIVER_IF_V14_0_0_H
// *** IMPORTANT ***
// SMU TEAM: Always increment the interface version if
// any structure is changed in this file
#define PMFW_DRIVER_IF_VERSION 7
typedef struct { typedef struct {
int32_t value; int32_t value;
uint32_t numFractionalBits; uint32_t numFractionalBits;

View File

@ -138,10 +138,9 @@
#define PPSMC_MSG_SetBadMemoryPagesRetiredFlagsPerChannel 0x4A #define PPSMC_MSG_SetBadMemoryPagesRetiredFlagsPerChannel 0x4A
#define PPSMC_MSG_SetPriorityDeltaGain 0x4B #define PPSMC_MSG_SetPriorityDeltaGain 0x4B
#define PPSMC_MSG_AllowIHHostInterrupt 0x4C #define PPSMC_MSG_AllowIHHostInterrupt 0x4C
#define PPSMC_MSG_DALNotPresent 0x4E #define PPSMC_MSG_DALNotPresent 0x4E
#define PPSMC_MSG_EnableUCLKShadow 0x51
#define PPSMC_Message_Count 0x4F #define PPSMC_Message_Count 0x52
//Debug Dump Message //Debug Dump Message
#define DEBUGSMC_MSG_TestMessage 0x1 #define DEBUGSMC_MSG_TestMessage 0x1

View File

@ -134,6 +134,7 @@
#define PPSMC_MSG_SetBadMemoryPagesRetiredFlagsPerChannel 0x4A #define PPSMC_MSG_SetBadMemoryPagesRetiredFlagsPerChannel 0x4A
#define PPSMC_MSG_SetPriorityDeltaGain 0x4B #define PPSMC_MSG_SetPriorityDeltaGain 0x4B
#define PPSMC_MSG_AllowIHHostInterrupt 0x4C #define PPSMC_MSG_AllowIHHostInterrupt 0x4C
#define PPSMC_Message_Count 0x4D #define PPSMC_MSG_EnableUCLKShadow 0x51
#define PPSMC_Message_Count 0x52
#endif #endif

View File

@ -260,7 +260,8 @@
__SMU_DUMMY_MAP(PowerDownUmsch), \ __SMU_DUMMY_MAP(PowerDownUmsch), \
__SMU_DUMMY_MAP(SetSoftMaxVpe), \ __SMU_DUMMY_MAP(SetSoftMaxVpe), \
__SMU_DUMMY_MAP(SetSoftMinVpe), \ __SMU_DUMMY_MAP(SetSoftMinVpe), \
__SMU_DUMMY_MAP(GetMetricsVersion), __SMU_DUMMY_MAP(GetMetricsVersion), \
__SMU_DUMMY_MAP(EnableUCLKShadow),
#undef __SMU_DUMMY_MAP #undef __SMU_DUMMY_MAP
#define __SMU_DUMMY_MAP(type) SMU_MSG_##type #define __SMU_DUMMY_MAP(type) SMU_MSG_##type

View File

@ -212,10 +212,6 @@ int smu_v13_0_get_max_sustainable_clocks_by_dc(struct smu_context *smu,
bool smu_v13_0_baco_is_support(struct smu_context *smu); bool smu_v13_0_baco_is_support(struct smu_context *smu);
enum smu_baco_state smu_v13_0_baco_get_state(struct smu_context *smu);
int smu_v13_0_baco_set_state(struct smu_context *smu, enum smu_baco_state state);
int smu_v13_0_baco_enter(struct smu_context *smu); int smu_v13_0_baco_enter(struct smu_context *smu);
int smu_v13_0_baco_exit(struct smu_context *smu); int smu_v13_0_baco_exit(struct smu_context *smu);
@ -298,5 +294,9 @@ int smu_v13_0_update_pcie_parameters(struct smu_context *smu,
int smu_v13_0_disable_pmfw_state(struct smu_context *smu); int smu_v13_0_disable_pmfw_state(struct smu_context *smu);
int smu_v13_0_enable_uclk_shadow(struct smu_context *smu, bool enable);
int smu_v13_0_set_wbrf_exclusion_ranges(struct smu_context *smu,
struct freq_band_range *exclusion_ranges);
#endif #endif
#endif #endif

View File

@ -26,8 +26,8 @@
#include "amdgpu_smu.h" #include "amdgpu_smu.h"
#define SMU14_DRIVER_IF_VERSION_INV 0xFFFFFFFF #define SMU14_DRIVER_IF_VERSION_INV 0xFFFFFFFF
#define SMU14_DRIVER_IF_VERSION_SMU_V14_0_0 0x7
#define SMU14_DRIVER_IF_VERSION_SMU_V14_0_2 0x1 #define SMU14_DRIVER_IF_VERSION_SMU_V14_0_2 0x1
#define SMU14_DRIVER_IF_VERSION_SMU_V14_0_0 0x6
#define FEATURE_MASK(feature) (1ULL << feature) #define FEATURE_MASK(feature) (1ULL << feature)

View File

@ -2407,8 +2407,6 @@ static const struct pptable_funcs arcturus_ppt_funcs = {
.set_azalia_d3_pme = smu_v11_0_set_azalia_d3_pme, .set_azalia_d3_pme = smu_v11_0_set_azalia_d3_pme,
.get_max_sustainable_clocks_by_dc = smu_v11_0_get_max_sustainable_clocks_by_dc, .get_max_sustainable_clocks_by_dc = smu_v11_0_get_max_sustainable_clocks_by_dc,
.baco_is_support = smu_v11_0_baco_is_support, .baco_is_support = smu_v11_0_baco_is_support,
.baco_get_state = smu_v11_0_baco_get_state,
.baco_set_state = smu_v11_0_baco_set_state,
.baco_enter = smu_v11_0_baco_enter, .baco_enter = smu_v11_0_baco_enter,
.baco_exit = smu_v11_0_baco_exit, .baco_exit = smu_v11_0_baco_exit,
.get_dpm_ultimate_freq = smu_v11_0_get_dpm_ultimate_freq, .get_dpm_ultimate_freq = smu_v11_0_get_dpm_ultimate_freq,

View File

@ -3537,8 +3537,6 @@ static const struct pptable_funcs navi10_ppt_funcs = {
.set_azalia_d3_pme = smu_v11_0_set_azalia_d3_pme, .set_azalia_d3_pme = smu_v11_0_set_azalia_d3_pme,
.get_max_sustainable_clocks_by_dc = smu_v11_0_get_max_sustainable_clocks_by_dc, .get_max_sustainable_clocks_by_dc = smu_v11_0_get_max_sustainable_clocks_by_dc,
.baco_is_support = smu_v11_0_baco_is_support, .baco_is_support = smu_v11_0_baco_is_support,
.baco_get_state = smu_v11_0_baco_get_state,
.baco_set_state = smu_v11_0_baco_set_state,
.baco_enter = navi10_baco_enter, .baco_enter = navi10_baco_enter,
.baco_exit = navi10_baco_exit, .baco_exit = navi10_baco_exit,
.get_dpm_ultimate_freq = smu_v11_0_get_dpm_ultimate_freq, .get_dpm_ultimate_freq = smu_v11_0_get_dpm_ultimate_freq,

View File

@ -4428,8 +4428,6 @@ static const struct pptable_funcs sienna_cichlid_ppt_funcs = {
.set_azalia_d3_pme = smu_v11_0_set_azalia_d3_pme, .set_azalia_d3_pme = smu_v11_0_set_azalia_d3_pme,
.get_max_sustainable_clocks_by_dc = smu_v11_0_get_max_sustainable_clocks_by_dc, .get_max_sustainable_clocks_by_dc = smu_v11_0_get_max_sustainable_clocks_by_dc,
.baco_is_support = smu_v11_0_baco_is_support, .baco_is_support = smu_v11_0_baco_is_support,
.baco_get_state = smu_v11_0_baco_get_state,
.baco_set_state = smu_v11_0_baco_set_state,
.baco_enter = sienna_cichlid_baco_enter, .baco_enter = sienna_cichlid_baco_enter,
.baco_exit = sienna_cichlid_baco_exit, .baco_exit = sienna_cichlid_baco_exit,
.mode1_reset_is_support = sienna_cichlid_is_mode1_reset_supported, .mode1_reset_is_support = sienna_cichlid_is_mode1_reset_supported,

View File

@ -2221,33 +2221,14 @@ static int smu_v13_0_baco_set_armd3_sequence(struct smu_context *smu,
return 0; return 0;
} }
bool smu_v13_0_baco_is_support(struct smu_context *smu) static enum smu_baco_state smu_v13_0_baco_get_state(struct smu_context *smu)
{
struct smu_baco_context *smu_baco = &smu->smu_baco;
if (amdgpu_sriov_vf(smu->adev) ||
!smu_baco->platform_support)
return false;
/* return true if ASIC is in BACO state already */
if (smu_v13_0_baco_get_state(smu) == SMU_BACO_STATE_ENTER)
return true;
if (smu_cmn_feature_is_supported(smu, SMU_FEATURE_BACO_BIT) &&
!smu_cmn_feature_is_enabled(smu, SMU_FEATURE_BACO_BIT))
return false;
return true;
}
enum smu_baco_state smu_v13_0_baco_get_state(struct smu_context *smu)
{ {
struct smu_baco_context *smu_baco = &smu->smu_baco; struct smu_baco_context *smu_baco = &smu->smu_baco;
return smu_baco->state; return smu_baco->state;
} }
int smu_v13_0_baco_set_state(struct smu_context *smu, static int smu_v13_0_baco_set_state(struct smu_context *smu,
enum smu_baco_state state) enum smu_baco_state state)
{ {
struct smu_baco_context *smu_baco = &smu->smu_baco; struct smu_baco_context *smu_baco = &smu->smu_baco;
@ -2281,6 +2262,24 @@ int smu_v13_0_baco_set_state(struct smu_context *smu,
return ret; return ret;
} }
bool smu_v13_0_baco_is_support(struct smu_context *smu)
{
struct smu_baco_context *smu_baco = &smu->smu_baco;
if (amdgpu_sriov_vf(smu->adev) || !smu_baco->platform_support)
return false;
/* return true if ASIC is in BACO state already */
if (smu_v13_0_baco_get_state(smu) == SMU_BACO_STATE_ENTER)
return true;
if (smu_cmn_feature_is_supported(smu, SMU_FEATURE_BACO_BIT) &&
!smu_cmn_feature_is_enabled(smu, SMU_FEATURE_BACO_BIT))
return false;
return true;
}
int smu_v13_0_baco_enter(struct smu_context *smu) int smu_v13_0_baco_enter(struct smu_context *smu)
{ {
struct smu_baco_context *smu_baco = &smu->smu_baco; struct smu_baco_context *smu_baco = &smu->smu_baco;
@ -2508,3 +2507,51 @@ int smu_v13_0_disable_pmfw_state(struct smu_context *smu)
return ret == 0 ? 0 : -EINVAL; return ret == 0 ? 0 : -EINVAL;
} }
int smu_v13_0_enable_uclk_shadow(struct smu_context *smu, bool enable)
{
return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_EnableUCLKShadow, enable, NULL);
}
int smu_v13_0_set_wbrf_exclusion_ranges(struct smu_context *smu,
struct freq_band_range *exclusion_ranges)
{
WifiBandEntryTable_t wifi_bands;
int valid_entries = 0;
int ret, i;
memset(&wifi_bands, 0, sizeof(wifi_bands));
for (i = 0; i < ARRAY_SIZE(wifi_bands.WifiBandEntry); i++) {
if (!exclusion_ranges[i].start && !exclusion_ranges[i].end)
break;
/* PMFW expects the inputs in MHz; a standalone sketch of this conversion follows this file's diff */
wifi_bands.WifiBandEntry[valid_entries].LowFreq =
DIV_ROUND_DOWN_ULL(exclusion_ranges[i].start, HZ_PER_MHZ);
wifi_bands.WifiBandEntry[valid_entries++].HighFreq =
DIV_ROUND_UP_ULL(exclusion_ranges[i].end, HZ_PER_MHZ);
}
wifi_bands.WifiBandEntryNum = valid_entries;
/*
* As confirmed with the PMFW team, WifiBandEntryNum = 0
* is a valid setting.
*
* Considering the scenarios below:
* - At first the wifi device adds an exclusion range e.g. (2400,2500) to
* BIOS and our driver gets notified. We will set WifiBandEntryNum = 1
* and pass the WifiBandEntry (2400, 2500) to PMFW.
*
* - Later the wifi device removes the wifiband list added above and
* our driver gets notified again. At this time, driver will set
* WifiBandEntryNum = 0 and pass an empty WifiBandEntry list to PMFW.
*
* - PMFW may still need to do some uclk shadow update (e.g. switching
* from shadow clock back to primary clock) on receiving this.
*/
ret = smu_cmn_update_table(smu, SMU_TABLE_WIFIBAND, 0, &wifi_bands, true);
if (ret)
dev_warn(smu->adev->dev, "Failed to set wifiband!");
return ret;
}
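
Standalone sketch (not part of the patch above): a minimal userspace C program mirroring the Hz-to-MHz packing done by smu_v13_0_set_wbrf_exclusion_ranges(). struct range, struct wifi_band_table and pack_wifi_bands() are illustrative stand-ins for the kernel's struct freq_band_range, WifiBandEntryTable_t and the conversion loop; as with DIV_ROUND_DOWN_ULL/DIV_ROUND_UP_ULL, the start is rounded down and the end rounded up so the excluded window is never under-covered.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define HZ_PER_MHZ	1000000ULL
#define MAX_ENTRIES	11	/* matches WifiBandEntry[11] above */

struct range {			/* stand-in for struct freq_band_range (Hz) */
	uint64_t start;
	uint64_t end;
};

struct wifi_band_table {	/* stand-in for WifiBandEntryTable_t */
	uint32_t num;
	struct {
		uint16_t low_mhz;
		uint16_t high_mhz;
	} entry[MAX_ENTRIES];
};

/* Convert a zero-terminated list of Hz ranges into MHz table entries,
 * rounding the window outwards. */
static void pack_wifi_bands(const struct range *excl, struct wifi_band_table *t)
{
	uint32_t valid = 0;
	int i;

	memset(t, 0, sizeof(*t));
	for (i = 0; i < MAX_ENTRIES; i++) {
		if (!excl[i].start && !excl[i].end)
			break;
		t->entry[valid].low_mhz = excl[i].start / HZ_PER_MHZ;
		t->entry[valid].high_mhz =
			(excl[i].end + HZ_PER_MHZ - 1) / HZ_PER_MHZ;
		valid++;
	}
	t->num = valid;		/* 0 entries is a valid setting */
}

int main(void)
{
	/* two wifi exclusion windows expressed in Hz */
	const struct range excl[MAX_ENTRIES] = {
		{ 2400000000ULL, 2500000000ULL },
		{ 6117000000ULL, 6189000000ULL },
	};
	struct wifi_band_table t;
	uint32_t i;

	pack_wifi_bands(excl, &t);

	/* prints: entry 0: 2400 - 2500 MHz, entry 1: 6117 - 6189 MHz */
	for (i = 0; i < t.num; i++)
		printf("entry %u: %u - %u MHz\n", i,
		       t.entry[i].low_mhz, t.entry[i].high_mhz);
	return 0;
}
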

View File

@ -169,6 +169,7 @@ static struct cmn2asic_msg_mapping smu_v13_0_0_message_map[SMU_MSG_MAX_COUNT] =
MSG_MAP(AllowIHHostInterrupt, PPSMC_MSG_AllowIHHostInterrupt, 0), MSG_MAP(AllowIHHostInterrupt, PPSMC_MSG_AllowIHHostInterrupt, 0),
MSG_MAP(ReenableAcDcInterrupt, PPSMC_MSG_ReenableAcDcInterrupt, 0), MSG_MAP(ReenableAcDcInterrupt, PPSMC_MSG_ReenableAcDcInterrupt, 0),
MSG_MAP(DALNotPresent, PPSMC_MSG_DALNotPresent, 0), MSG_MAP(DALNotPresent, PPSMC_MSG_DALNotPresent, 0),
MSG_MAP(EnableUCLKShadow, PPSMC_MSG_EnableUCLKShadow, 0),
}; };
static struct cmn2asic_mapping smu_v13_0_0_clk_map[SMU_CLK_COUNT] = { static struct cmn2asic_mapping smu_v13_0_0_clk_map[SMU_CLK_COUNT] = {
@ -253,6 +254,7 @@ static struct cmn2asic_mapping smu_v13_0_0_table_map[SMU_TABLE_COUNT] = {
TAB_MAP(I2C_COMMANDS), TAB_MAP(I2C_COMMANDS),
TAB_MAP(ECCINFO), TAB_MAP(ECCINFO),
TAB_MAP(OVERDRIVE), TAB_MAP(OVERDRIVE),
TAB_MAP(WIFIBAND),
}; };
static struct cmn2asic_mapping smu_v13_0_0_pwr_src_map[SMU_POWER_SOURCE_COUNT] = { static struct cmn2asic_mapping smu_v13_0_0_pwr_src_map[SMU_POWER_SOURCE_COUNT] = {
@ -498,6 +500,9 @@ static int smu_v13_0_0_tables_init(struct smu_context *smu)
PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM); PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
SMU_TABLE_INIT(tables, SMU_TABLE_ECCINFO, sizeof(EccInfoTable_t), SMU_TABLE_INIT(tables, SMU_TABLE_ECCINFO, sizeof(EccInfoTable_t),
PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM); PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
SMU_TABLE_INIT(tables, SMU_TABLE_WIFIBAND,
sizeof(WifiBandEntryTable_t), PAGE_SIZE,
AMDGPU_GEM_DOMAIN_VRAM);
smu_table->metrics_table = kzalloc(sizeof(SmuMetricsExternal_t), GFP_KERNEL); smu_table->metrics_table = kzalloc(sizeof(SmuMetricsExternal_t), GFP_KERNEL);
if (!smu_table->metrics_table) if (!smu_table->metrics_table)
@ -2540,16 +2545,19 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
workload_mask = 1 << workload_type; workload_mask = 1 << workload_type;
/* Add optimizations for SMU13.0.0. Reuse the power saving profile */ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE && if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE) {
(amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0)) && if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
((smu->adev->pm.fw_version == 0x004e6601) || ((smu->adev->pm.fw_version == 0x004e6601) ||
(smu->adev->pm.fw_version >= 0x004e7400))) { (smu->adev->pm.fw_version >= 0x004e7300))) ||
workload_type = smu_cmn_to_asic_specific_index(smu, (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) &&
CMN2ASIC_MAPPING_WORKLOAD, smu->adev->pm.fw_version >= 0x00504500)) {
PP_SMC_POWER_PROFILE_POWERSAVING); workload_type = smu_cmn_to_asic_specific_index(smu,
if (workload_type >= 0) CMN2ASIC_MAPPING_WORKLOAD,
workload_mask |= 1 << workload_type; PP_SMC_POWER_PROFILE_POWERSAVING);
if (workload_type >= 0)
workload_mask |= 1 << workload_type;
}
} }
return smu_cmn_send_smc_msg_with_param(smu, return smu_cmn_send_smc_msg_with_param(smu,
@ -2938,6 +2946,20 @@ static ssize_t smu_v13_0_0_get_ecc_info(struct smu_context *smu,
return ret; return ret;
} }
static bool smu_v13_0_0_wbrf_support_check(struct smu_context *smu)
{
struct amdgpu_device *adev = smu->adev;
switch (adev->ip_versions[MP1_HWIP][0]) {
case IP_VERSION(13, 0, 0):
return smu->smc_fw_version >= 0x004e6300;
case IP_VERSION(13, 0, 10):
return smu->smc_fw_version >= 0x00503300;
default:
return false;
}
}
static const struct pptable_funcs smu_v13_0_0_ppt_funcs = { static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
.get_allowed_feature_mask = smu_v13_0_0_get_allowed_feature_mask, .get_allowed_feature_mask = smu_v13_0_0_get_allowed_feature_mask,
.set_default_dpm_table = smu_v13_0_0_set_default_dpm_table, .set_default_dpm_table = smu_v13_0_0_set_default_dpm_table,
@ -3003,8 +3025,6 @@ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
.deep_sleep_control = smu_v13_0_deep_sleep_control, .deep_sleep_control = smu_v13_0_deep_sleep_control,
.gfx_ulv_control = smu_v13_0_gfx_ulv_control, .gfx_ulv_control = smu_v13_0_gfx_ulv_control,
.baco_is_support = smu_v13_0_baco_is_support, .baco_is_support = smu_v13_0_baco_is_support,
.baco_get_state = smu_v13_0_baco_get_state,
.baco_set_state = smu_v13_0_baco_set_state,
.baco_enter = smu_v13_0_baco_enter, .baco_enter = smu_v13_0_baco_enter,
.baco_exit = smu_v13_0_baco_exit, .baco_exit = smu_v13_0_baco_exit,
.mode1_reset_is_support = smu_v13_0_0_is_mode1_reset_supported, .mode1_reset_is_support = smu_v13_0_0_is_mode1_reset_supported,
@ -3018,6 +3038,9 @@ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
.gpo_control = smu_v13_0_gpo_control, .gpo_control = smu_v13_0_gpo_control,
.get_ecc_info = smu_v13_0_0_get_ecc_info, .get_ecc_info = smu_v13_0_0_get_ecc_info,
.notify_display_change = smu_v13_0_notify_display_change, .notify_display_change = smu_v13_0_notify_display_change,
.is_asic_wbrf_supported = smu_v13_0_0_wbrf_support_check,
.enable_uclk_shadow = smu_v13_0_enable_uclk_shadow,
.set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges,
}; };
void smu_v13_0_0_set_ppt_funcs(struct smu_context *smu) void smu_v13_0_0_set_ppt_funcs(struct smu_context *smu)

View File

@ -2537,13 +2537,15 @@ static int mca_pcs_xgmi_mca_get_err_count(const struct mca_ras_info *mca_ras, st
uint32_t *count) uint32_t *count)
{ {
u32 ext_error_code; u32 ext_error_code;
u32 err_cnt;
ext_error_code = MCA_REG__STATUS__ERRORCODEEXT(entry->regs[MCA_REG_IDX_STATUS]); ext_error_code = MCA_REG__STATUS__ERRORCODEEXT(entry->regs[MCA_REG_IDX_STATUS]);
err_cnt = MCA_REG__MISC0__ERRCNT(entry->regs[MCA_REG_IDX_MISC0]);
if (type == AMDGPU_MCA_ERROR_TYPE_UE && ext_error_code == 0) if (type == AMDGPU_MCA_ERROR_TYPE_UE && ext_error_code == 0)
*count = 1; *count = err_cnt;
else if (type == AMDGPU_MCA_ERROR_TYPE_CE && ext_error_code == 6) else if (type == AMDGPU_MCA_ERROR_TYPE_CE && ext_error_code == 6)
*count = 1; *count = err_cnt;
return 0; return 0;
} }

View File

@ -140,6 +140,7 @@ static struct cmn2asic_msg_mapping smu_v13_0_7_message_map[SMU_MSG_MAX_COUNT] =
MSG_MAP(AllowGpo, PPSMC_MSG_SetGpoAllow, 0), MSG_MAP(AllowGpo, PPSMC_MSG_SetGpoAllow, 0),
MSG_MAP(GetPptLimit, PPSMC_MSG_GetPptLimit, 0), MSG_MAP(GetPptLimit, PPSMC_MSG_GetPptLimit, 0),
MSG_MAP(NotifyPowerSource, PPSMC_MSG_NotifyPowerSource, 0), MSG_MAP(NotifyPowerSource, PPSMC_MSG_NotifyPowerSource, 0),
MSG_MAP(EnableUCLKShadow, PPSMC_MSG_EnableUCLKShadow, 0),
}; };
static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = { static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = {
@ -222,6 +223,7 @@ static struct cmn2asic_mapping smu_v13_0_7_table_map[SMU_TABLE_COUNT] = {
TAB_MAP(ACTIVITY_MONITOR_COEFF), TAB_MAP(ACTIVITY_MONITOR_COEFF),
[SMU_TABLE_COMBO_PPTABLE] = {1, TABLE_COMBO_PPTABLE}, [SMU_TABLE_COMBO_PPTABLE] = {1, TABLE_COMBO_PPTABLE},
TAB_MAP(OVERDRIVE), TAB_MAP(OVERDRIVE),
TAB_MAP(WIFIBAND),
}; };
static struct cmn2asic_mapping smu_v13_0_7_pwr_src_map[SMU_POWER_SOURCE_COUNT] = { static struct cmn2asic_mapping smu_v13_0_7_pwr_src_map[SMU_POWER_SOURCE_COUNT] = {
@ -512,6 +514,9 @@ static int smu_v13_0_7_tables_init(struct smu_context *smu)
AMDGPU_GEM_DOMAIN_VRAM); AMDGPU_GEM_DOMAIN_VRAM);
SMU_TABLE_INIT(tables, SMU_TABLE_COMBO_PPTABLE, MP0_MP1_DATA_REGION_SIZE_COMBOPPTABLE, SMU_TABLE_INIT(tables, SMU_TABLE_COMBO_PPTABLE, MP0_MP1_DATA_REGION_SIZE_COMBOPPTABLE,
PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM); PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM);
SMU_TABLE_INIT(tables, SMU_TABLE_WIFIBAND,
sizeof(WifiBandEntryTable_t), PAGE_SIZE,
AMDGPU_GEM_DOMAIN_VRAM);
smu_table->metrics_table = kzalloc(sizeof(SmuMetricsExternal_t), GFP_KERNEL); smu_table->metrics_table = kzalloc(sizeof(SmuMetricsExternal_t), GFP_KERNEL);
if (!smu_table->metrics_table) if (!smu_table->metrics_table)
@ -2535,6 +2540,11 @@ static int smu_v13_0_7_set_df_cstate(struct smu_context *smu,
NULL); NULL);
} }
static bool smu_v13_0_7_wbrf_support_check(struct smu_context *smu)
{
return smu->smc_fw_version > 0x00524600;
}
static const struct pptable_funcs smu_v13_0_7_ppt_funcs = { static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
.get_allowed_feature_mask = smu_v13_0_7_get_allowed_feature_mask, .get_allowed_feature_mask = smu_v13_0_7_get_allowed_feature_mask,
.set_default_dpm_table = smu_v13_0_7_set_default_dpm_table, .set_default_dpm_table = smu_v13_0_7_set_default_dpm_table,
@ -2594,8 +2604,6 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
.get_pp_feature_mask = smu_cmn_get_pp_feature_mask, .get_pp_feature_mask = smu_cmn_get_pp_feature_mask,
.set_pp_feature_mask = smu_cmn_set_pp_feature_mask, .set_pp_feature_mask = smu_cmn_set_pp_feature_mask,
.baco_is_support = smu_v13_0_baco_is_support, .baco_is_support = smu_v13_0_baco_is_support,
.baco_get_state = smu_v13_0_baco_get_state,
.baco_set_state = smu_v13_0_baco_set_state,
.baco_enter = smu_v13_0_baco_enter, .baco_enter = smu_v13_0_baco_enter,
.baco_exit = smu_v13_0_baco_exit, .baco_exit = smu_v13_0_baco_exit,
.mode1_reset_is_support = smu_v13_0_7_is_mode1_reset_supported, .mode1_reset_is_support = smu_v13_0_7_is_mode1_reset_supported,
@ -2603,6 +2611,9 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
.set_mp1_state = smu_v13_0_7_set_mp1_state, .set_mp1_state = smu_v13_0_7_set_mp1_state,
.set_df_cstate = smu_v13_0_7_set_df_cstate, .set_df_cstate = smu_v13_0_7_set_df_cstate,
.gpo_control = smu_v13_0_gpo_control, .gpo_control = smu_v13_0_gpo_control,
.is_asic_wbrf_supported = smu_v13_0_7_wbrf_support_check,
.enable_uclk_shadow = smu_v13_0_enable_uclk_shadow,
.set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges,
}; };
void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu) void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)

View File

@ -224,7 +224,7 @@ int smu_v14_0_check_fw_version(struct smu_context *smu)
if (smu->is_apu) if (smu->is_apu)
adev->pm.fw_version = smu_version; adev->pm.fw_version = smu_version;
switch (adev->ip_versions[MP1_HWIP][0]) { switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
case IP_VERSION(14, 0, 2): case IP_VERSION(14, 0, 2):
smu->smc_driver_if_version = SMU14_DRIVER_IF_VERSION_SMU_V14_0_2; smu->smc_driver_if_version = SMU14_DRIVER_IF_VERSION_SMU_V14_0_2;
break; break;
@ -235,7 +235,7 @@ int smu_v14_0_check_fw_version(struct smu_context *smu)
break; break;
default: default:
dev_err(adev->dev, "smu unsupported IP version: 0x%x.\n", dev_err(adev->dev, "smu unsupported IP version: 0x%x.\n",
adev->ip_versions[MP1_HWIP][0]); amdgpu_ip_version(adev, MP1_HWIP, 0));
smu->smc_driver_if_version = SMU14_DRIVER_IF_VERSION_INV; smu->smc_driver_if_version = SMU14_DRIVER_IF_VERSION_INV;
break; break;
} }
@ -733,7 +733,7 @@ int smu_v14_0_gfx_off_control(struct smu_context *smu, bool enable)
int ret = 0; int ret = 0;
struct amdgpu_device *adev = smu->adev; struct amdgpu_device *adev = smu->adev;
switch (adev->ip_versions[MP1_HWIP][0]) { switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) {
case IP_VERSION(14, 0, 2): case IP_VERSION(14, 0, 2):
case IP_VERSION(14, 0, 0): case IP_VERSION(14, 0, 0):
if (!(adev->pm.pp_feature & PP_GFXOFF_MASK)) if (!(adev->pm.pp_feature & PP_GFXOFF_MASK))

View File

@ -1085,6 +1085,25 @@ static int smu_v14_0_0_set_umsch_mm_enable(struct smu_context *smu,
0, NULL); 0, NULL);
} }
static int smu_14_0_0_get_dpm_table(struct smu_context *smu, struct dpm_clocks *clock_table)
{
DpmClocks_t *clk_table = smu->smu_table.clocks_table;
uint8_t idx;
/* Only the SOC and VPE clock information is copied, to provide the VPE DPM settings. */
for (idx = 0; idx < NUM_SOCCLK_DPM_LEVELS; idx++) {
clock_table->SocClocks[idx].Freq = (idx < clk_table->NumSocClkLevelsEnabled) ? clk_table->SocClocks[idx]:0;
clock_table->SocClocks[idx].Vol = 0;
}
for (idx = 0; idx < NUM_VPE_DPM_LEVELS; idx++) {
clock_table->VPEClocks[idx].Freq = (idx < clk_table->VpeClkLevelsEnabled) ? clk_table->VPEClocks[idx]:0;
clock_table->VPEClocks[idx].Vol = 0;
}
return 0;
}
static const struct pptable_funcs smu_v14_0_0_ppt_funcs = { static const struct pptable_funcs smu_v14_0_0_ppt_funcs = {
.check_fw_status = smu_v14_0_check_fw_status, .check_fw_status = smu_v14_0_check_fw_status,
.check_fw_version = smu_v14_0_check_fw_version, .check_fw_version = smu_v14_0_check_fw_version,
@ -1115,6 +1134,7 @@ static const struct pptable_funcs smu_v14_0_0_ppt_funcs = {
.set_gfx_power_up_by_imu = smu_v14_0_set_gfx_power_up_by_imu, .set_gfx_power_up_by_imu = smu_v14_0_set_gfx_power_up_by_imu,
.dpm_set_vpe_enable = smu_v14_0_0_set_vpe_enable, .dpm_set_vpe_enable = smu_v14_0_0_set_vpe_enable,
.dpm_set_umsch_mm_enable = smu_v14_0_0_set_umsch_mm_enable, .dpm_set_umsch_mm_enable = smu_v14_0_0_set_umsch_mm_enable,
.get_dpm_clock_table = smu_14_0_0_get_dpm_table,
}; };
static void smu_v14_0_0_set_smu_mailbox_registers(struct smu_context *smu) static void smu_v14_0_0_set_smu_mailbox_registers(struct smu_context *smu)

View File

@ -98,6 +98,9 @@
#define smu_set_config_table(smu, config_table) smu_ppt_funcs(set_config_table, -EOPNOTSUPP, smu, config_table) #define smu_set_config_table(smu, config_table) smu_ppt_funcs(set_config_table, -EOPNOTSUPP, smu, config_table)
#define smu_init_pptable_microcode(smu) smu_ppt_funcs(init_pptable_microcode, 0, smu) #define smu_init_pptable_microcode(smu) smu_ppt_funcs(init_pptable_microcode, 0, smu)
#define smu_notify_rlc_state(smu, en) smu_ppt_funcs(notify_rlc_state, 0, smu, en) #define smu_notify_rlc_state(smu, en) smu_ppt_funcs(notify_rlc_state, 0, smu, en)
#define smu_is_asic_wbrf_supported(smu) smu_ppt_funcs(is_asic_wbrf_supported, false, smu)
#define smu_enable_uclk_shadow(smu, enable) smu_ppt_funcs(enable_uclk_shadow, 0, smu, enable)
#define smu_set_wbrf_exclusion_ranges(smu, freq_band_range) smu_ppt_funcs(set_wbrf_exclusion_ranges, -EOPNOTSUPP, smu, freq_band_range)
#endif #endif
#endif #endif

Some files were not shown because too many files have changed in this diff.