commit 7f44571b53
Merge tag 'drm-intel-next-2022-02-23' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

Linux core:
-----------
- iosys-map: Add offset to iosys_map_memcpy_to() (Lucas)
- iosys-map: Add a few more helpers (Lucas)

i915 (display and core changes on drm-intel-next):
--------------------------------------------------
- Display's DBuf and watermark related fixes and improvements (Ville)
- More i915 header and other code clean-up (Jani)
- Display IPS fixes and improvements (Ville)
- OPRegion fixes and cleanups (Jani)
- Fix the plane end Y offset check for FBC (Ville)
- DP 128b/132b updates (Jani)
- Disable runtime pm wakeref tracking for the mock device selftest (Ville)
- Many display code clean-ups in the course of fixing up DP DFP 4:2:0 handling (Ville)
- Bigjoiner state tracking and more bigjoiner related work (Ville)
- Update DMC_DEBUG3 register for DG1 (Chuansheng)
- SAGV fixes (Ville)
- More GT register cleanup (Matt)
- Fix build issue when using clang (Tong)
- Display DG2 fixes (Matt)
- ADL-P PHY related fixes (Imre)
- PSR2 fixes (Jose)
- Add PCH support for Alder Lake N (Tejas)

drm-intel-gt-next (drm-intel-gt-next-2022-02-17):
-------------------------------------------------
UAPI Changes:
- Weak parallel submission support for execlists

  A minimal implementation of parallel submission for the execlists
  backend, which was previously only implemented for GuC. Supports one
  sibling non-virtual engine.

Core Changes:
- Two backmerges of drm/drm-next for header file renames/changes and
  i915_regs reorganization

Driver Changes:
- Add new DG2 subplatform: DG2-G12 (Matt R)
- Add new DG2 workarounds (Matt R, Ram, Bruce)
- Handle pre-programmed WOPCM registers for DG2+ (Daniele)
- Update GuC shim control programming on XeHP SDV+ (Daniele)
- Add RPL-S C0/D0 stepping information (Anusha)
- Improve GuC ADS initialization to work on ARM64 on dGFX (Lucas)
- Fix KMD and GuC race on accessing PMU busyness (Umesh)
- Use PM timestamp instead of RING TIMESTAMP for reference in PMU with GuC (Umesh)
- Report error on invalid reset notification from GuC (John)
- Avoid WARN splat by holding RPM wakelock during PXP unbind (Juston)
- Fixes to parallel submission implementation (Matt B.)
- Improve GuC loading status check/error reports (John)
- Tweak TTM LRU priority hint selection (Matt A.)
- Align the plane_vma to min_page_size of stolen mem (Ram)
- Introduce vma resources and implement async unbinding (Thomas)
- Use struct vma_resource instead of struct vma_snapshot (Thomas)
- Return some TTM accel move errors instead of trying memcpy move (Thomas)
- Fix a race between vma / object destruction and unbinding (Thomas)
- Remove short-term pins from execbuf (Maarten)
- Update to GuC version 69.0.3 (John, Michal Wa.)
- Improvements to GT reset paths in GuC backend (Matt B.)
- Use shrinker_release_pages instead of writeback in shmem object hooks (Matt A., Tvrtko)
- Use trylock instead of blocking lock when freeing GEM objects (Maarten)
- Allocate intel_engine_coredump_alloc with ALLOW_FAIL (Matt B.)
- Fixes to object unmapping and purging (Matt A)
- Check for wedged device in GuC backend (John)
- Avoid lockdep splat by locking dpt_obj around set_cache_level (Maarten)
- Allow dead vm to unbind vma's without lock (Maarten)
- s/engine->i915/i915/ for DG2 engine workarounds (Matt R)
- Use to_gt() helper for GGTT accesses (Michal Wi.)
- Selftest improvements (Matt B., Thomas, Ram)
- Coding style and compiler warning fixes (Matt B., Jasmine, Andi, Colin, Gustavo, Dan)

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/YhbDan8wNZBR6FzF@intel.com
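Context for the iosys-map items above: iosys_map_memcpy_to() gained a destination offset, which is why the two callers in the diff below now pass an extra 0. A minimal sketch of the migration, assuming the signature implied by those callers (the dst_offset parameter name and the blit_rows() helper are illustrative, not from this merge):

#include <linux/types.h>
#include <linux/iosys-map.h>

/*
 * Sketch only: with the new destination offset, a row-by-row blit no
 * longer needs iosys_map_incr() on the destination between copies.
 * Signature assumed from the updated callers in this merge.
 */
static void blit_rows(struct iosys_map *dst, const u8 *src,
		      size_t pitch, unsigned int rows)
{
	unsigned int y;

	for (y = 0; y < rows; y++) {
		/* old API: iosys_map_memcpy_to(dst, src, pitch) + incr */
		iosys_map_memcpy_to(dst, y * pitch, src, pitch);
		src += pitch;
	}
}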
@@ -539,6 +539,7 @@ GuC ABI

 .. kernel-doc:: drivers/gpu/drm/i915/gt/uc/abi/guc_communication_mmio_abi.h
 .. kernel-doc:: drivers/gpu/drm/i915/gt/uc/abi/guc_communication_ctb_abi.h
 .. kernel-doc:: drivers/gpu/drm/i915/gt/uc/abi/guc_actions_abi.h
+.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/abi/guc_klvs_abi.h

 HuC
 ---
@@ -144,6 +144,69 @@ u8 drm_dp_get_adjust_tx_ffe_preset(const u8 link_status[DP_LINK_STATUS_SIZE],
 }
 EXPORT_SYMBOL(drm_dp_get_adjust_tx_ffe_preset);

+/* DP 2.0 errata for 128b/132b */
+bool drm_dp_128b132b_lane_channel_eq_done(const u8 link_status[DP_LINK_STATUS_SIZE],
+					  int lane_count)
+{
+	u8 lane_align, lane_status;
+	int lane;
+
+	lane_align = dp_link_status(link_status, DP_LANE_ALIGN_STATUS_UPDATED);
+	if (!(lane_align & DP_INTERLANE_ALIGN_DONE))
+		return false;
+
+	for (lane = 0; lane < lane_count; lane++) {
+		lane_status = dp_get_lane_status(link_status, lane);
+		if (!(lane_status & DP_LANE_CHANNEL_EQ_DONE))
+			return false;
+	}
+
+	return true;
+}
+EXPORT_SYMBOL(drm_dp_128b132b_lane_channel_eq_done);
+
+/* DP 2.0 errata for 128b/132b */
+bool drm_dp_128b132b_lane_symbol_locked(const u8 link_status[DP_LINK_STATUS_SIZE],
+					int lane_count)
+{
+	u8 lane_status;
+	int lane;
+
+	for (lane = 0; lane < lane_count; lane++) {
+		lane_status = dp_get_lane_status(link_status, lane);
+		if (!(lane_status & DP_LANE_SYMBOL_LOCKED))
+			return false;
+	}
+
+	return true;
+}
+EXPORT_SYMBOL(drm_dp_128b132b_lane_symbol_locked);
+
+/* DP 2.0 errata for 128b/132b */
+bool drm_dp_128b132b_eq_interlane_align_done(const u8 link_status[DP_LINK_STATUS_SIZE])
+{
+	u8 status = dp_link_status(link_status, DP_LANE_ALIGN_STATUS_UPDATED);
+
+	return status & DP_128B132B_DPRX_EQ_INTERLANE_ALIGN_DONE;
+}
+EXPORT_SYMBOL(drm_dp_128b132b_eq_interlane_align_done);
+
+/* DP 2.0 errata for 128b/132b */
+bool drm_dp_128b132b_cds_interlane_align_done(const u8 link_status[DP_LINK_STATUS_SIZE])
+{
+	u8 status = dp_link_status(link_status, DP_LANE_ALIGN_STATUS_UPDATED);
+
+	return status & DP_128B132B_DPRX_CDS_INTERLANE_ALIGN_DONE;
+}
+EXPORT_SYMBOL(drm_dp_128b132b_cds_interlane_align_done);
+
+/* DP 2.0 errata for 128b/132b */
+bool drm_dp_128b132b_link_training_failed(const u8 link_status[DP_LINK_STATUS_SIZE])
+{
+	u8 status = dp_link_status(link_status, DP_LANE_ALIGN_STATUS_UPDATED);
+
+	return status & DP_128B132B_LT_FAILED;
+}
+EXPORT_SYMBOL(drm_dp_128b132b_link_training_failed);
+
 u8 drm_dp_get_adjust_request_post_cursor(const u8 link_status[DP_LINK_STATUS_SIZE],
					  unsigned int lane)
 {

@@ -281,6 +344,26 @@ int drm_dp_read_channel_eq_delay(struct drm_dp_aux *aux, const u8 dpcd[DP_RECEIVER_CAP_SIZE])
 }
 EXPORT_SYMBOL(drm_dp_read_channel_eq_delay);

+/* Per DP 2.0 Errata */
+int drm_dp_128b132b_read_aux_rd_interval(struct drm_dp_aux *aux)
+{
+	int unit;
+	u8 val;
+
+	if (drm_dp_dpcd_readb(aux, DP_128B132B_TRAINING_AUX_RD_INTERVAL, &val) != 1) {
+		drm_err(aux->drm_dev, "%s: failed rd interval read\n",
+			aux->name);
+		/* default to max */
+		val = DP_128B132B_TRAINING_AUX_RD_INTERVAL_MASK;
+	}
+
+	unit = (val & DP_128B132B_TRAINING_AUX_RD_INTERVAL_1MS_UNIT) ? 1 : 2;
+	val &= DP_128B132B_TRAINING_AUX_RD_INTERVAL_MASK;
+
+	return (val + 1) * unit * 1000;
+}
+EXPORT_SYMBOL(drm_dp_128b132b_read_aux_rd_interval);
+
 void drm_dp_link_train_clock_recovery_delay(const struct drm_dp_aux *aux,
					     const u8 dpcd[DP_RECEIVER_CAP_SIZE])
 {
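A hedged sketch of how the errata helpers above compose during 128b/132b training: poll the sink's link status, fail fast when the LT_FAILED bit is set, and sleep the sink-provided interval between polls. The poll-loop shape, retry count, and header path are assumptions for illustration; only the drm_dp_128b132b_* helpers are from the diff above.

#include <linux/delay.h>
#include <drm/drm_dp_helper.h>	/* header location varies across kernel versions */

/* Illustrative only: wait for 128b/132b channel equalization to complete. */
static int wait_for_128b132b_channel_eq(struct drm_dp_aux *aux, int lane_count)
{
	u8 link_status[DP_LINK_STATUS_SIZE];
	int tries, interval;

	for (tries = 0; tries < 20; tries++) {
		if (drm_dp_dpcd_read_link_status(aux, link_status) < 0)
			return -EIO;

		if (drm_dp_128b132b_link_training_failed(link_status))
			return -EINVAL;

		if (drm_dp_128b132b_lane_channel_eq_done(link_status, lane_count) &&
		    drm_dp_128b132b_eq_interlane_align_done(link_status))
			return 0;

		/* Interval comes back in microseconds: (val + 1) * unit * 1000. */
		interval = drm_dp_128b132b_read_aux_rd_interval(aux);
		usleep_range(interval, interval + 100);
	}

	return -ETIMEDOUT;
}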
@@ -221,7 +221,7 @@ static void memcpy_fallback(struct iosys_map *dst,
 	if (!dst->is_iomem && !src->is_iomem) {
 		memcpy(dst->vaddr, src->vaddr, len);
 	} else if (!src->is_iomem) {
-		iosys_map_memcpy_to(dst, src->vaddr, len);
+		iosys_map_memcpy_to(dst, 0, src->vaddr, len);
 	} else if (!dst->is_iomem) {
 		memcpy_fromio(dst->vaddr, src->vaddr_iomem, len);
 	} else {
@@ -385,7 +385,7 @@ static void drm_fb_helper_damage_blit_real(struct drm_fb_helper *fb_helper,
 	iosys_map_incr(dst, offset); /* go to first pixel within clip rect */

 	for (y = clip->y1; y < clip->y2; y++) {
-		iosys_map_memcpy_to(dst, src, len);
+		iosys_map_memcpy_to(dst, 0, src, len);
 		iosys_map_incr(dst, fb->pitches[0]);
 		src += fb->pitches[0];
 	}
@@ -13,6 +13,7 @@
 # will most likely get a sudden build breakage... Hopefully we will fix
 # new warnings before CI updates!
 subdir-ccflags-y := -Wall -Wextra
 subdir-ccflags-y += -Wno-format-security
 subdir-ccflags-y += -Wno-unused-parameter
 subdir-ccflags-y += -Wno-type-limits
 subdir-ccflags-y += -Wno-missing-field-initializers

@@ -174,7 +175,7 @@ i915-y += \
 	i915_trace_points.o \
 	i915_ttm_buddy_manager.o \
 	i915_vma.o \
-	i915_vma_snapshot.o \
+	i915_vma_resource.o \
 	intel_wopcm.o

 # general-purpose microcontroller (GuC) support
@@ -197,6 +198,7 @@ i915-y += gt/uc/intel_uc.o \

 # modesetting core code
 i915-y += \
+	display/hsw_ips.o \
 	display/intel_atomic.o \
 	display/intel_atomic_plane.o \
 	display/intel_audio.o \
drivers/gpu/drm/i915/display/hsw_ips.c (new file, 271 lines)
@@ -0,0 +1,271 @@
// SPDX-License-Identifier: MIT
/*
 * Copyright © 2022 Intel Corporation
 */

#include "hsw_ips.h"
#include "i915_drv.h"
#include "i915_reg.h"
#include "intel_de.h"
#include "intel_display_types.h"
#include "intel_pcode.h"

static void hsw_ips_enable(const struct intel_crtc_state *crtc_state)
{
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
	struct drm_i915_private *i915 = to_i915(crtc->base.dev);

	if (!crtc_state->ips_enabled)
		return;

	/*
	 * We can only enable IPS after we enable a plane and wait for a vblank.
	 * This function is called from post_plane_update, which is run after
	 * a vblank wait.
	 */
	drm_WARN_ON(&i915->drm,
		    !(crtc_state->active_planes & ~BIT(PLANE_CURSOR)));

	if (IS_BROADWELL(i915)) {
		drm_WARN_ON(&i915->drm,
			    snb_pcode_write(i915, DISPLAY_IPS_CONTROL,
					    IPS_ENABLE | IPS_PCODE_CONTROL));
		/*
		 * Quoting Art Runyan: "its not safe to expect any particular
		 * value in IPS_CTL bit 31 after enabling IPS through the
		 * mailbox." Moreover, the mailbox may return a bogus state,
		 * so we need to just enable it and continue on.
		 */
	} else {
		intel_de_write(i915, IPS_CTL, IPS_ENABLE);
		/*
		 * The bit only becomes 1 in the next vblank, so this wait here
		 * is essentially intel_wait_for_vblank. If we don't have this
		 * and don't wait for vblanks until the end of crtc_enable, then
		 * the HW state readout code will complain that the expected
		 * IPS_CTL value is not the one we read.
		 */
		if (intel_de_wait_for_set(i915, IPS_CTL, IPS_ENABLE, 50))
			drm_err(&i915->drm,
				"Timed out waiting for IPS enable\n");
	}
}

bool hsw_ips_disable(const struct intel_crtc_state *crtc_state)
{
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
	struct drm_i915_private *i915 = to_i915(crtc->base.dev);
	bool need_vblank_wait = false;

	if (!crtc_state->ips_enabled)
		return need_vblank_wait;

	if (IS_BROADWELL(i915)) {
		drm_WARN_ON(&i915->drm,
			    snb_pcode_write(i915, DISPLAY_IPS_CONTROL, 0));
		/*
		 * Wait for PCODE to finish disabling IPS. The BSpec specified
		 * 42ms timeout value leads to occasional timeouts so use 100ms
		 * instead.
		 */
		if (intel_de_wait_for_clear(i915, IPS_CTL, IPS_ENABLE, 100))
			drm_err(&i915->drm,
				"Timed out waiting for IPS disable\n");
	} else {
		intel_de_write(i915, IPS_CTL, 0);
		intel_de_posting_read(i915, IPS_CTL);
	}

	/* We need to wait for a vblank before we can disable the plane. */
	need_vblank_wait = true;

	return need_vblank_wait;
}

static bool hsw_ips_need_disable(struct intel_atomic_state *state,
				 struct intel_crtc *crtc)
{
	struct drm_i915_private *i915 = to_i915(state->base.dev);
	const struct intel_crtc_state *old_crtc_state =
		intel_atomic_get_old_crtc_state(state, crtc);
	const struct intel_crtc_state *new_crtc_state =
		intel_atomic_get_new_crtc_state(state, crtc);

	if (!old_crtc_state->ips_enabled)
		return false;

	if (intel_crtc_needs_modeset(new_crtc_state))
		return true;

	/*
	 * Workaround : Do not read or write the pipe palette/gamma data while
	 * GAMMA_MODE is configured for split gamma and IPS_CTL has IPS enabled.
	 *
	 * Disable IPS before we program the LUT.
	 */
	if (IS_HASWELL(i915) &&
	    (new_crtc_state->uapi.color_mgmt_changed ||
	     new_crtc_state->update_pipe) &&
	    new_crtc_state->gamma_mode == GAMMA_MODE_MODE_SPLIT)
		return true;

	return !new_crtc_state->ips_enabled;
}

bool hsw_ips_pre_update(struct intel_atomic_state *state,
			struct intel_crtc *crtc)
{
	const struct intel_crtc_state *old_crtc_state =
		intel_atomic_get_old_crtc_state(state, crtc);

	if (!hsw_ips_need_disable(state, crtc))
		return false;

	return hsw_ips_disable(old_crtc_state);
}

static bool hsw_ips_need_enable(struct intel_atomic_state *state,
				struct intel_crtc *crtc)
{
	struct drm_i915_private *i915 = to_i915(state->base.dev);
	const struct intel_crtc_state *old_crtc_state =
		intel_atomic_get_old_crtc_state(state, crtc);
	const struct intel_crtc_state *new_crtc_state =
		intel_atomic_get_new_crtc_state(state, crtc);

	if (!new_crtc_state->ips_enabled)
		return false;

	if (intel_crtc_needs_modeset(new_crtc_state))
		return true;

	/*
	 * Workaround : Do not read or write the pipe palette/gamma data while
	 * GAMMA_MODE is configured for split gamma and IPS_CTL has IPS enabled.
	 *
	 * Re-enable IPS after the LUT has been programmed.
	 */
	if (IS_HASWELL(i915) &&
	    (new_crtc_state->uapi.color_mgmt_changed ||
	     new_crtc_state->update_pipe) &&
	    new_crtc_state->gamma_mode == GAMMA_MODE_MODE_SPLIT)
		return true;

	/*
	 * We can't read out IPS on broadwell, assume the worst and
	 * forcibly enable IPS on the first fastset.
	 */
	if (new_crtc_state->update_pipe && old_crtc_state->inherited)
		return true;

	return !old_crtc_state->ips_enabled;
}

void hsw_ips_post_update(struct intel_atomic_state *state,
			 struct intel_crtc *crtc)
{
	const struct intel_crtc_state *new_crtc_state =
		intel_atomic_get_new_crtc_state(state, crtc);

	if (!hsw_ips_need_enable(state, crtc))
		return;

	hsw_ips_enable(new_crtc_state);
}

/* IPS only exists on ULT machines and is tied to pipe A. */
bool hsw_crtc_supports_ips(struct intel_crtc *crtc)
{
	return HAS_IPS(to_i915(crtc->base.dev)) && crtc->pipe == PIPE_A;
}

bool hsw_crtc_state_ips_capable(const struct intel_crtc_state *crtc_state)
{
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
	struct drm_i915_private *i915 = to_i915(crtc->base.dev);

	/* IPS only exists on ULT machines and is tied to pipe A. */
	if (!hsw_crtc_supports_ips(crtc))
		return false;

	if (!i915->params.enable_ips)
		return false;

	if (crtc_state->pipe_bpp > 24)
		return false;

	/*
	 * We compare against max which means we must take
	 * the increased cdclk requirement into account when
	 * calculating the new cdclk.
	 *
	 * Should measure whether using a lower cdclk w/o IPS
	 */
	if (IS_BROADWELL(i915) &&
	    crtc_state->pixel_rate > i915->max_cdclk_freq * 95 / 100)
		return false;

	return true;
}

int hsw_ips_compute_config(struct intel_atomic_state *state,
			   struct intel_crtc *crtc)
{
	struct drm_i915_private *i915 = to_i915(state->base.dev);
	struct intel_crtc_state *crtc_state =
		intel_atomic_get_new_crtc_state(state, crtc);

	crtc_state->ips_enabled = false;

	if (!hsw_crtc_state_ips_capable(crtc_state))
		return 0;

	/*
	 * When IPS gets enabled, the pipe CRC changes. Since IPS gets
	 * enabled and disabled dynamically based on package C states,
	 * user space can't make reliable use of the CRCs, so let's just
	 * completely disable it.
	 */
	if (crtc_state->crc_enabled)
		return 0;

	/* IPS should be fine as long as at least one plane is enabled. */
	if (!(crtc_state->active_planes & ~BIT(PLANE_CURSOR)))
		return 0;

	if (IS_BROADWELL(i915)) {
		const struct intel_cdclk_state *cdclk_state;

		cdclk_state = intel_atomic_get_cdclk_state(state);
		if (IS_ERR(cdclk_state))
			return PTR_ERR(cdclk_state);

		/* pixel rate mustn't exceed 95% of cdclk with IPS on BDW */
		if (crtc_state->pixel_rate > cdclk_state->logical.cdclk * 95 / 100)
			return 0;
	}

	crtc_state->ips_enabled = true;

	return 0;
}

void hsw_ips_get_config(struct intel_crtc_state *crtc_state)
{
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
	struct drm_i915_private *i915 = to_i915(crtc->base.dev);

	if (!hsw_crtc_supports_ips(crtc))
		return;

	if (IS_HASWELL(i915)) {
		crtc_state->ips_enabled = intel_de_read(i915, IPS_CTL) & IPS_ENABLE;
	} else {
		/*
		 * We cannot readout IPS state on broadwell, set to
		 * true so we can set it to a defined state on first
		 * commit.
		 */
		crtc_state->ips_enabled = true;
	}
}
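The file above centralizes the IPS sequencing that intel_display.c previously open-coded. A sketch of the intended call order around a plane update, assuming a caller shape like the pre/post plane-update hooks (the wrapper and the vblank-wait helper name are assumptions):

/* Hypothetical caller, mirroring the pre/post plane-update hooks. */
static void example_crtc_update(struct intel_atomic_state *state,
				struct intel_crtc *crtc)
{
	struct drm_i915_private *i915 = to_i915(state->base.dev);

	/* May turn IPS off; returns true when a vblank wait is needed. */
	if (hsw_ips_pre_update(state, crtc))
		intel_wait_for_vblank(i915, crtc->pipe);	/* assumed helper */

	/* ... program planes / LUTs here ... */

	/* Turns IPS back on only once a non-cursor plane is active. */
	hsw_ips_post_update(state, crtc);
}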
drivers/gpu/drm/i915/display/hsw_ips.h (new file, 26 lines)
@@ -0,0 +1,26 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2022 Intel Corporation
 */

#ifndef __HSW_IPS_H__
#define __HSW_IPS_H__

#include <linux/types.h>

struct intel_atomic_state;
struct intel_crtc;
struct intel_crtc_state;

bool hsw_ips_disable(const struct intel_crtc_state *crtc_state);
bool hsw_ips_pre_update(struct intel_atomic_state *state,
			struct intel_crtc *crtc);
void hsw_ips_post_update(struct intel_atomic_state *state,
			 struct intel_crtc *crtc);
bool hsw_crtc_supports_ips(struct intel_crtc *crtc);
bool hsw_crtc_state_ips_capable(const struct intel_crtc_state *crtc_state);
int hsw_ips_compute_config(struct intel_atomic_state *state,
			   struct intel_crtc *crtc);
void hsw_ips_get_config(struct intel_crtc_state *crtc_state);

#endif /* __HSW_IPS_H__ */
@@ -29,6 +29,7 @@
 #include <drm/drm_mipi_dsi.h>

 #include "icl_dsi.h"
+#include "icl_dsi_regs.h"
 #include "intel_atomic.h"
 #include "intel_backlight.h"
 #include "intel_combo_phy.h"
@@ -570,7 +571,7 @@ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,
 	/* Program T-INIT master registers */
 	for_each_dsi_port(port, intel_dsi->ports) {
 		tmp = intel_de_read(dev_priv, ICL_DSI_T_INIT_MASTER(port));
-		tmp &= ~MASTER_INIT_TIMER_MASK;
+		tmp &= ~DSI_T_INIT_MASTER_MASK;
 		tmp |= intel_dsi->init_count;
 		intel_de_write(dev_priv, ICL_DSI_T_INIT_MASTER(port), tmp);
 	}
@@ -788,14 +789,14 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
 	/* program DSI operation mode */
 	if (is_vid_mode(intel_dsi)) {
 		tmp &= ~OP_MODE_MASK;
-		switch (intel_dsi->video_mode_format) {
+		switch (intel_dsi->video_mode) {
 		default:
-			MISSING_CASE(intel_dsi->video_mode_format);
+			MISSING_CASE(intel_dsi->video_mode);
 			fallthrough;
-		case VIDEO_MODE_NON_BURST_WITH_SYNC_EVENTS:
+		case NON_BURST_SYNC_EVENTS:
 			tmp |= VIDEO_MODE_SYNC_EVENT;
 			break;
-		case VIDEO_MODE_NON_BURST_WITH_SYNC_PULSE:
+		case NON_BURST_SYNC_PULSE:
 			tmp |= VIDEO_MODE_SYNC_PULSE;
 			break;
 		}
@@ -960,8 +961,7 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,

 	/* TRANS_HSYNC register to be programmed only for video mode */
 	if (is_vid_mode(intel_dsi)) {
-		if (intel_dsi->video_mode_format ==
-		    VIDEO_MODE_NON_BURST_WITH_SYNC_PULSE) {
+		if (intel_dsi->video_mode == NON_BURST_SYNC_PULSE) {
 			/* BSPEC: hsync size should be atleast 16 pixels */
 			if (hsync_size < 16)
 				drm_err(&dev_priv->drm,
drivers/gpu/drm/i915/display/icl_dsi_regs.h (new file, 342 lines)
@@ -0,0 +1,342 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2022 Intel Corporation
 */

#ifndef __ICL_DSI_REGS_H__
#define __ICL_DSI_REGS_H__

#include "i915_reg_defs.h"

/* Gen11 DSI */
#define _MMIO_DSI(tc, dsi0, dsi1)	_MMIO_TRANS((tc) - TRANSCODER_DSI_0, \
						    dsi0, dsi1)
#define _ICL_DSI_ESC_CLK_DIV0		0x6b090
#define _ICL_DSI_ESC_CLK_DIV1		0x6b890
#define ICL_DSI_ESC_CLK_DIV(port)	_MMIO_PORT((port), \
						   _ICL_DSI_ESC_CLK_DIV0, \
						   _ICL_DSI_ESC_CLK_DIV1)
#define _ICL_DPHY_ESC_CLK_DIV0		0x162190
#define _ICL_DPHY_ESC_CLK_DIV1		0x6C190
#define ICL_DPHY_ESC_CLK_DIV(port)	_MMIO_PORT((port), \
						   _ICL_DPHY_ESC_CLK_DIV0, \
						   _ICL_DPHY_ESC_CLK_DIV1)
#define  ICL_BYTE_CLK_PER_ESC_CLK_MASK	(0x1f << 16)
#define  ICL_BYTE_CLK_PER_ESC_CLK_SHIFT	16
#define  ICL_ESC_CLK_DIV_MASK		0x1ff
#define  ICL_ESC_CLK_DIV_SHIFT		0
#define DSI_MAX_ESC_CLK			20000 /* in KHz */

#define _ADL_MIPIO_REG			0x180
#define ADL_MIPIO_DW(port, dw)		_MMIO(_ICL_COMBOPHY(port) + _ADL_MIPIO_REG + 4 * (dw))
#define   TX_ESC_CLK_DIV_PHY_SEL	REGBIT(16)
#define   TX_ESC_CLK_DIV_PHY_MASK	REG_GENMASK(23, 16)
#define   TX_ESC_CLK_DIV_PHY		REG_FIELD_PREP(TX_ESC_CLK_DIV_PHY_MASK, 0x7f)

#define _DSI_CMD_FRMCTL_0		0x6b034
#define _DSI_CMD_FRMCTL_1		0x6b834
#define DSI_CMD_FRMCTL(port)		_MMIO_PORT(port, \
						   _DSI_CMD_FRMCTL_0, \
						   _DSI_CMD_FRMCTL_1)
#define   DSI_FRAME_UPDATE_REQUEST	(1 << 31)
#define   DSI_PERIODIC_FRAME_UPDATE_ENABLE	(1 << 29)
#define   DSI_NULL_PACKET_ENABLE	(1 << 28)
#define   DSI_FRAME_IN_PROGRESS		(1 << 0)

#define _DSI_INTR_MASK_REG_0		0x6b070
#define _DSI_INTR_MASK_REG_1		0x6b870
#define DSI_INTR_MASK_REG(port)		_MMIO_PORT(port, \
						   _DSI_INTR_MASK_REG_0, \
						   _DSI_INTR_MASK_REG_1)

#define _DSI_INTR_IDENT_REG_0		0x6b074
#define _DSI_INTR_IDENT_REG_1		0x6b874
#define DSI_INTR_IDENT_REG(port)	_MMIO_PORT(port, \
						   _DSI_INTR_IDENT_REG_0, \
						   _DSI_INTR_IDENT_REG_1)
#define   DSI_TE_EVENT			(1 << 31)
#define   DSI_RX_DATA_OR_BTA_TERMINATED	(1 << 30)
#define   DSI_TX_DATA			(1 << 29)
#define   DSI_ULPS_ENTRY_DONE		(1 << 28)
#define   DSI_NON_TE_TRIGGER_RECEIVED	(1 << 27)
#define   DSI_HOST_CHKSUM_ERROR		(1 << 26)
#define   DSI_HOST_MULTI_ECC_ERROR	(1 << 25)
#define   DSI_HOST_SINGL_ECC_ERROR	(1 << 24)
#define   DSI_HOST_CONTENTION_DETECTED	(1 << 23)
#define   DSI_HOST_FALSE_CONTROL_ERROR	(1 << 22)
#define   DSI_HOST_TIMEOUT_ERROR	(1 << 21)
#define   DSI_HOST_LOW_POWER_TX_SYNC_ERROR	(1 << 20)
#define   DSI_HOST_ESCAPE_MODE_ENTRY_ERROR	(1 << 19)
#define   DSI_FRAME_UPDATE_DONE		(1 << 16)
#define   DSI_PROTOCOL_VIOLATION_REPORTED	(1 << 15)
#define   DSI_INVALID_TX_LENGTH		(1 << 13)
#define   DSI_INVALID_VC		(1 << 12)
#define   DSI_INVALID_DATA_TYPE		(1 << 11)
#define   DSI_PERIPHERAL_CHKSUM_ERROR	(1 << 10)
#define   DSI_PERIPHERAL_MULTI_ECC_ERROR	(1 << 9)
#define   DSI_PERIPHERAL_SINGLE_ECC_ERROR	(1 << 8)
#define   DSI_PERIPHERAL_CONTENTION_DETECTED	(1 << 7)
#define   DSI_PERIPHERAL_FALSE_CTRL_ERROR	(1 << 6)
#define   DSI_PERIPHERAL_TIMEOUT_ERROR	(1 << 5)
#define   DSI_PERIPHERAL_LP_TX_SYNC_ERROR	(1 << 4)
#define   DSI_PERIPHERAL_ESC_MODE_ENTRY_CMD_ERR	(1 << 3)
#define   DSI_EOT_SYNC_ERROR		(1 << 2)
#define   DSI_SOT_SYNC_ERROR		(1 << 1)
#define   DSI_SOT_ERROR			(1 << 0)

/* ICL DSI MODE control */
#define _ICL_DSI_IO_MODECTL_0		0x6B094
#define _ICL_DSI_IO_MODECTL_1		0x6B894
#define ICL_DSI_IO_MODECTL(port)	_MMIO_PORT(port, \
						   _ICL_DSI_IO_MODECTL_0, \
						   _ICL_DSI_IO_MODECTL_1)
#define   COMBO_PHY_MODE_DSI		(1 << 0)

/* TGL DSI Chicken register */
#define _TGL_DSI_CHKN_REG_0		0x6B0C0
#define _TGL_DSI_CHKN_REG_1		0x6B8C0
#define TGL_DSI_CHKN_REG(port)		_MMIO_PORT(port, \
						   _TGL_DSI_CHKN_REG_0, \
						   _TGL_DSI_CHKN_REG_1)
#define   TGL_DSI_CHKN_LSHS_GB_MASK	REG_GENMASK(15, 12)
#define   TGL_DSI_CHKN_LSHS_GB(byte_clocks)	REG_FIELD_PREP(TGL_DSI_CHKN_LSHS_GB_MASK, \
							       (byte_clocks))
#define _ICL_DSI_T_INIT_MASTER_0	0x6b088
#define _ICL_DSI_T_INIT_MASTER_1	0x6b888
#define ICL_DSI_T_INIT_MASTER(port)	_MMIO_PORT(port, \
						   _ICL_DSI_T_INIT_MASTER_0, \
						   _ICL_DSI_T_INIT_MASTER_1)
#define   DSI_T_INIT_MASTER_MASK	REG_GENMASK(15, 0)

#define _DPHY_CLK_TIMING_PARAM_0	0x162180
#define _DPHY_CLK_TIMING_PARAM_1	0x6c180
#define DPHY_CLK_TIMING_PARAM(port)	_MMIO_PORT(port, \
						   _DPHY_CLK_TIMING_PARAM_0, \
						   _DPHY_CLK_TIMING_PARAM_1)
#define _DSI_CLK_TIMING_PARAM_0		0x6b080
#define _DSI_CLK_TIMING_PARAM_1		0x6b880
#define DSI_CLK_TIMING_PARAM(port)	_MMIO_PORT(port, \
						   _DSI_CLK_TIMING_PARAM_0, \
						   _DSI_CLK_TIMING_PARAM_1)
#define   CLK_PREPARE_OVERRIDE		(1 << 31)
#define   CLK_PREPARE(x)		((x) << 28)
#define   CLK_PREPARE_MASK		(0x7 << 28)
#define   CLK_PREPARE_SHIFT		28
#define   CLK_ZERO_OVERRIDE		(1 << 27)
#define   CLK_ZERO(x)			((x) << 20)
#define   CLK_ZERO_MASK			(0xf << 20)
#define   CLK_ZERO_SHIFT		20
#define   CLK_PRE_OVERRIDE		(1 << 19)
#define   CLK_PRE(x)			((x) << 16)
#define   CLK_PRE_MASK			(0x3 << 16)
#define   CLK_PRE_SHIFT			16
#define   CLK_POST_OVERRIDE		(1 << 15)
#define   CLK_POST(x)			((x) << 8)
#define   CLK_POST_MASK			(0x7 << 8)
#define   CLK_POST_SHIFT		8
#define   CLK_TRAIL_OVERRIDE		(1 << 7)
#define   CLK_TRAIL(x)			((x) << 0)
#define   CLK_TRAIL_MASK		(0xf << 0)
#define   CLK_TRAIL_SHIFT		0

#define _DPHY_DATA_TIMING_PARAM_0	0x162184
#define _DPHY_DATA_TIMING_PARAM_1	0x6c184
#define DPHY_DATA_TIMING_PARAM(port)	_MMIO_PORT(port, \
						   _DPHY_DATA_TIMING_PARAM_0, \
						   _DPHY_DATA_TIMING_PARAM_1)
#define _DSI_DATA_TIMING_PARAM_0	0x6B084
#define _DSI_DATA_TIMING_PARAM_1	0x6B884
#define DSI_DATA_TIMING_PARAM(port)	_MMIO_PORT(port, \
						   _DSI_DATA_TIMING_PARAM_0, \
						   _DSI_DATA_TIMING_PARAM_1)
#define   HS_PREPARE_OVERRIDE		(1 << 31)
#define   HS_PREPARE(x)			((x) << 24)
#define   HS_PREPARE_MASK		(0x7 << 24)
#define   HS_PREPARE_SHIFT		24
#define   HS_ZERO_OVERRIDE		(1 << 23)
#define   HS_ZERO(x)			((x) << 16)
#define   HS_ZERO_MASK			(0xf << 16)
#define   HS_ZERO_SHIFT			16
#define   HS_TRAIL_OVERRIDE		(1 << 15)
#define   HS_TRAIL(x)			((x) << 8)
#define   HS_TRAIL_MASK			(0x7 << 8)
#define   HS_TRAIL_SHIFT		8
#define   HS_EXIT_OVERRIDE		(1 << 7)
#define   HS_EXIT(x)			((x) << 0)
#define   HS_EXIT_MASK			(0x7 << 0)
#define   HS_EXIT_SHIFT			0

#define _DPHY_TA_TIMING_PARAM_0		0x162188
#define _DPHY_TA_TIMING_PARAM_1		0x6c188
#define DPHY_TA_TIMING_PARAM(port)	_MMIO_PORT(port, \
						   _DPHY_TA_TIMING_PARAM_0, \
						   _DPHY_TA_TIMING_PARAM_1)
#define _DSI_TA_TIMING_PARAM_0		0x6b098
#define _DSI_TA_TIMING_PARAM_1		0x6b898
#define DSI_TA_TIMING_PARAM(port)	_MMIO_PORT(port, \
						   _DSI_TA_TIMING_PARAM_0, \
						   _DSI_TA_TIMING_PARAM_1)
#define   TA_SURE_OVERRIDE		(1 << 31)
#define   TA_SURE(x)			((x) << 16)
#define   TA_SURE_MASK			(0x1f << 16)
#define   TA_SURE_SHIFT			16
#define   TA_GO_OVERRIDE		(1 << 15)
#define   TA_GO(x)			((x) << 8)
#define   TA_GO_MASK			(0xf << 8)
#define   TA_GO_SHIFT			8
#define   TA_GET_OVERRIDE		(1 << 7)
#define   TA_GET(x)			((x) << 0)
#define   TA_GET_MASK			(0xf << 0)
#define   TA_GET_SHIFT			0

/* DSI transcoder configuration */
#define _DSI_TRANS_FUNC_CONF_0		0x6b030
#define _DSI_TRANS_FUNC_CONF_1		0x6b830
#define DSI_TRANS_FUNC_CONF(tc)		_MMIO_DSI(tc, \
						  _DSI_TRANS_FUNC_CONF_0, \
						  _DSI_TRANS_FUNC_CONF_1)
#define   OP_MODE_MASK			(0x3 << 28)
#define   OP_MODE_SHIFT			28
#define   CMD_MODE_NO_GATE		(0x0 << 28)
#define   CMD_MODE_TE_GATE		(0x1 << 28)
#define   VIDEO_MODE_SYNC_EVENT		(0x2 << 28)
#define   VIDEO_MODE_SYNC_PULSE		(0x3 << 28)
#define   TE_SOURCE_GPIO		(1 << 27)
#define   LINK_READY			(1 << 20)
#define   PIX_FMT_MASK			(0x3 << 16)
#define   PIX_FMT_SHIFT			16
#define   PIX_FMT_RGB565		(0x0 << 16)
#define   PIX_FMT_RGB666_PACKED		(0x1 << 16)
#define   PIX_FMT_RGB666_LOOSE		(0x2 << 16)
#define   PIX_FMT_RGB888		(0x3 << 16)
#define   PIX_FMT_RGB101010		(0x4 << 16)
#define   PIX_FMT_RGB121212		(0x5 << 16)
#define   PIX_FMT_COMPRESSED		(0x6 << 16)
#define   BGR_TRANSMISSION		(1 << 15)
#define   PIX_VIRT_CHAN(x)		((x) << 12)
#define   PIX_VIRT_CHAN_MASK		(0x3 << 12)
#define   PIX_VIRT_CHAN_SHIFT		12
#define   PIX_BUF_THRESHOLD_MASK	(0x3 << 10)
#define   PIX_BUF_THRESHOLD_SHIFT	10
#define   PIX_BUF_THRESHOLD_1_4		(0x0 << 10)
#define   PIX_BUF_THRESHOLD_1_2		(0x1 << 10)
#define   PIX_BUF_THRESHOLD_3_4		(0x2 << 10)
#define   PIX_BUF_THRESHOLD_FULL	(0x3 << 10)
#define   CONTINUOUS_CLK_MASK		(0x3 << 8)
#define   CONTINUOUS_CLK_SHIFT		8
#define   CLK_ENTER_LP_AFTER_DATA	(0x0 << 8)
#define   CLK_HS_OR_LP			(0x2 << 8)
#define   CLK_HS_CONTINUOUS		(0x3 << 8)
#define   LINK_CALIBRATION_MASK		(0x3 << 4)
#define   LINK_CALIBRATION_SHIFT	4
#define   CALIBRATION_DISABLED		(0x0 << 4)
#define   CALIBRATION_ENABLED_INITIAL_ONLY	(0x2 << 4)
#define   CALIBRATION_ENABLED_INITIAL_PERIODIC	(0x3 << 4)
#define   BLANKING_PACKET_ENABLE	(1 << 2)
#define   S3D_ORIENTATION_LANDSCAPE	(1 << 1)
#define   EOTP_DISABLED			(1 << 0)

#define _DSI_CMD_RXCTL_0		0x6b0d4
#define _DSI_CMD_RXCTL_1		0x6b8d4
#define DSI_CMD_RXCTL(tc)		_MMIO_DSI(tc, \
						  _DSI_CMD_RXCTL_0, \
						  _DSI_CMD_RXCTL_1)
#define   READ_UNLOADS_DW		(1 << 16)
#define   RECEIVED_UNASSIGNED_TRIGGER	(1 << 15)
#define   RECEIVED_ACKNOWLEDGE_TRIGGER	(1 << 14)
#define   RECEIVED_TEAR_EFFECT_TRIGGER	(1 << 13)
#define   RECEIVED_RESET_TRIGGER	(1 << 12)
#define   RECEIVED_PAYLOAD_WAS_LOST	(1 << 11)
#define   RECEIVED_CRC_WAS_LOST		(1 << 10)
#define   NUMBER_RX_PLOAD_DW_MASK	(0xff << 0)
#define   NUMBER_RX_PLOAD_DW_SHIFT	0

#define _DSI_CMD_TXCTL_0		0x6b0d0
#define _DSI_CMD_TXCTL_1		0x6b8d0
#define DSI_CMD_TXCTL(tc)		_MMIO_DSI(tc, \
						  _DSI_CMD_TXCTL_0, \
						  _DSI_CMD_TXCTL_1)
#define   KEEP_LINK_IN_HS		(1 << 24)
#define   FREE_HEADER_CREDIT_MASK	(0x1f << 8)
#define   FREE_HEADER_CREDIT_SHIFT	0x8
#define   FREE_PLOAD_CREDIT_MASK	(0xff << 0)
#define   FREE_PLOAD_CREDIT_SHIFT	0
#define MAX_HEADER_CREDIT		0x10
#define MAX_PLOAD_CREDIT		0x40

#define _DSI_CMD_TXHDR_0		0x6b100
#define _DSI_CMD_TXHDR_1		0x6b900
#define DSI_CMD_TXHDR(tc)		_MMIO_DSI(tc, \
						  _DSI_CMD_TXHDR_0, \
						  _DSI_CMD_TXHDR_1)
#define   PAYLOAD_PRESENT		(1 << 31)
#define   LP_DATA_TRANSFER		(1 << 30)
#define   VBLANK_FENCE			(1 << 29)
#define   PARAM_WC_MASK			(0xffff << 8)
#define   PARAM_WC_LOWER_SHIFT		8
#define   PARAM_WC_UPPER_SHIFT		16
#define   VC_MASK			(0x3 << 6)
#define   VC_SHIFT			6
#define   DT_MASK			(0x3f << 0)
#define   DT_SHIFT			0

#define _DSI_CMD_TXPYLD_0		0x6b104
#define _DSI_CMD_TXPYLD_1		0x6b904
#define DSI_CMD_TXPYLD(tc)		_MMIO_DSI(tc, \
						  _DSI_CMD_TXPYLD_0, \
						  _DSI_CMD_TXPYLD_1)

#define _DSI_LP_MSG_0			0x6b0d8
#define _DSI_LP_MSG_1			0x6b8d8
#define DSI_LP_MSG(tc)			_MMIO_DSI(tc, \
						  _DSI_LP_MSG_0, \
						  _DSI_LP_MSG_1)
#define   LPTX_IN_PROGRESS		(1 << 17)
#define   LINK_IN_ULPS			(1 << 16)
#define   LINK_ULPS_TYPE_LP11		(1 << 8)
#define   LINK_ENTER_ULPS		(1 << 0)

/* DSI timeout registers */
#define _DSI_HSTX_TO_0			0x6b044
#define _DSI_HSTX_TO_1			0x6b844
#define DSI_HSTX_TO(tc)			_MMIO_DSI(tc, \
						  _DSI_HSTX_TO_0, \
						  _DSI_HSTX_TO_1)
#define   HSTX_TIMEOUT_VALUE_MASK	(0xffff << 16)
#define   HSTX_TIMEOUT_VALUE_SHIFT	16
#define   HSTX_TIMEOUT_VALUE(x)		((x) << 16)
#define   HSTX_TIMED_OUT		(1 << 0)

#define _DSI_LPRX_HOST_TO_0		0x6b048
#define _DSI_LPRX_HOST_TO_1		0x6b848
#define DSI_LPRX_HOST_TO(tc)		_MMIO_DSI(tc, \
						  _DSI_LPRX_HOST_TO_0, \
						  _DSI_LPRX_HOST_TO_1)
#define   LPRX_TIMED_OUT		(1 << 16)
#define   LPRX_TIMEOUT_VALUE_MASK	(0xffff << 0)
#define   LPRX_TIMEOUT_VALUE_SHIFT	0
#define   LPRX_TIMEOUT_VALUE(x)		((x) << 0)

#define _DSI_PWAIT_TO_0			0x6b040
#define _DSI_PWAIT_TO_1			0x6b840
#define DSI_PWAIT_TO(tc)		_MMIO_DSI(tc, \
						  _DSI_PWAIT_TO_0, \
						  _DSI_PWAIT_TO_1)
#define   PRESET_TIMEOUT_VALUE_MASK	(0xffff << 16)
#define   PRESET_TIMEOUT_VALUE_SHIFT	16
#define   PRESET_TIMEOUT_VALUE(x)	((x) << 16)
#define   PRESPONSE_TIMEOUT_VALUE_MASK	(0xffff << 0)
#define   PRESPONSE_TIMEOUT_VALUE_SHIFT	0
#define   PRESPONSE_TIMEOUT_VALUE(x)	((x) << 0)

#define _DSI_TA_TO_0			0x6b04c
#define _DSI_TA_TO_1			0x6b84c
#define DSI_TA_TO(tc)			_MMIO_DSI(tc, \
						  _DSI_TA_TO_0, \
						  _DSI_TA_TO_1)
#define   TA_TIMED_OUT			(1 << 16)
#define   TA_TIMEOUT_VALUE_MASK		(0xffff << 0)
#define   TA_TIMEOUT_VALUE_SHIFT	0
#define   TA_TIMEOUT_VALUE(x)		((x) << 0)

#endif /* __ICL_DSI_REGS_H__ */
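The header above mixes older hand-rolled (value << SHIFT) macros with the newer REG_GENMASK()/REG_FIELD_PREP() style, e.g. DSI_T_INIT_MASTER_MASK, which replaces MASTER_INIT_TIMER_MASK in the icl_dsi.c hunk earlier. A hedged sketch of the usual read-modify-write against such a field; since this field starts at bit 0, the REG_FIELD_PREP() below is equivalent to the plain OR in that hunk (the wrapper function is hypothetical):

/* Sketch: update only the T_INIT field, preserving the other bits. */
static void program_t_init_sketch(struct drm_i915_private *dev_priv,
				  struct intel_dsi *intel_dsi, enum port port)
{
	u32 tmp = intel_de_read(dev_priv, ICL_DSI_T_INIT_MASTER(port));

	tmp &= ~DSI_T_INIT_MASTER_MASK;
	tmp |= REG_FIELD_PREP(DSI_T_INIT_MASTER_MASK, intel_dsi->init_count);
	intel_de_write(dev_priv, ICL_DSI_T_INIT_MASTER(port), tmp);
}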
@@ -281,17 +281,6 @@ void intel_crtc_free_hw_state(struct intel_crtc_state *crtc_state)
 	intel_crtc_put_color_blobs(crtc_state);
 }

-void intel_crtc_copy_color_blobs(struct intel_crtc_state *crtc_state,
-				 const struct intel_crtc_state *from_crtc_state)
-{
-	drm_property_replace_blob(&crtc_state->hw.degamma_lut,
-				  from_crtc_state->uapi.degamma_lut);
-	drm_property_replace_blob(&crtc_state->hw.gamma_lut,
-				  from_crtc_state->uapi.gamma_lut);
-	drm_property_replace_blob(&crtc_state->hw.ctm,
-				  from_crtc_state->uapi.ctm);
-}
-
 /**
  * intel_crtc_destroy_state - destroy crtc state
  * @crtc: drm crtc
@@ -44,8 +44,6 @@ struct drm_crtc_state *intel_crtc_duplicate_state(struct drm_crtc *crtc);
 void intel_crtc_destroy_state(struct drm_crtc *crtc,
			       struct drm_crtc_state *state);
 void intel_crtc_free_hw_state(struct intel_crtc_state *crtc_state);
-void intel_crtc_copy_color_blobs(struct intel_crtc_state *crtc_state,
-				 const struct intel_crtc_state *from_crtc_state);
 struct drm_atomic_state *intel_atomic_state_alloc(struct drm_device *dev);
 void intel_atomic_state_free(struct drm_atomic_state *state);
 void intel_atomic_state_clear(struct drm_atomic_state *state);
@@ -45,6 +45,7 @@
 #include "intel_fb_pin.h"
 #include "intel_pm.h"
 #include "intel_sprite.h"
+#include "skl_scaler.h"

 static void intel_plane_state_reset(struct intel_plane_state *plane_state,
				     struct intel_plane *plane)
@@ -322,6 +323,7 @@ void intel_plane_set_invisible(struct intel_crtc_state *crtc_state,
 	struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);

 	crtc_state->active_planes &= ~BIT(plane->id);
+	crtc_state->scaled_planes &= ~BIT(plane->id);
 	crtc_state->nv12_planes &= ~BIT(plane->id);
 	crtc_state->c8_planes &= ~BIT(plane->id);
 	crtc_state->data_rate[plane->id] = 0;
@@ -330,6 +332,185 @@ void intel_plane_set_invisible(struct intel_crtc_state *crtc_state,
 	plane_state->uapi.visible = false;
 }

+/* FIXME nuke when all wm code is atomic */
+static bool intel_wm_need_update(const struct intel_plane_state *cur,
+				 struct intel_plane_state *new)
+{
+	/* Update watermarks on tiling or size changes. */
+	if (new->uapi.visible != cur->uapi.visible)
+		return true;
+
+	if (!cur->hw.fb || !new->hw.fb)
+		return false;
+
+	if (cur->hw.fb->modifier != new->hw.fb->modifier ||
+	    cur->hw.rotation != new->hw.rotation ||
+	    drm_rect_width(&new->uapi.src) != drm_rect_width(&cur->uapi.src) ||
+	    drm_rect_height(&new->uapi.src) != drm_rect_height(&cur->uapi.src) ||
+	    drm_rect_width(&new->uapi.dst) != drm_rect_width(&cur->uapi.dst) ||
+	    drm_rect_height(&new->uapi.dst) != drm_rect_height(&cur->uapi.dst))
+		return true;
+
+	return false;
+}
+
+static bool intel_plane_is_scaled(const struct intel_plane_state *plane_state)
+{
+	int src_w = drm_rect_width(&plane_state->uapi.src) >> 16;
+	int src_h = drm_rect_height(&plane_state->uapi.src) >> 16;
+	int dst_w = drm_rect_width(&plane_state->uapi.dst);
+	int dst_h = drm_rect_height(&plane_state->uapi.dst);
+
+	return src_w != dst_w || src_h != dst_h;
+}
+
+static bool intel_plane_do_async_flip(struct intel_plane *plane,
+				      const struct intel_crtc_state *old_crtc_state,
+				      const struct intel_crtc_state *new_crtc_state)
+{
+	struct drm_i915_private *i915 = to_i915(plane->base.dev);
+
+	if (!plane->async_flip)
+		return false;
+
+	if (!new_crtc_state->uapi.async_flip)
+		return false;
+
+	/*
+	 * In platforms after DISPLAY13, we might need to override
+	 * first async flip in order to change watermark levels
+	 * as part of optimization.
+	 * So for those, we are checking if this is a first async flip.
+	 * For platforms earlier than DISPLAY13 we always do async flip.
+	 */
+	return DISPLAY_VER(i915) < 13 || old_crtc_state->uapi.async_flip;
+}
+
+static int intel_plane_atomic_calc_changes(const struct intel_crtc_state *old_crtc_state,
+					   struct intel_crtc_state *new_crtc_state,
+					   const struct intel_plane_state *old_plane_state,
+					   struct intel_plane_state *new_plane_state)
+{
+	struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
+	struct intel_plane *plane = to_intel_plane(new_plane_state->uapi.plane);
+	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+	bool mode_changed = intel_crtc_needs_modeset(new_crtc_state);
+	bool was_crtc_enabled = old_crtc_state->hw.active;
+	bool is_crtc_enabled = new_crtc_state->hw.active;
+	bool turn_off, turn_on, visible, was_visible;
+	int ret;
+
+	if (DISPLAY_VER(dev_priv) >= 9 && plane->id != PLANE_CURSOR) {
+		ret = skl_update_scaler_plane(new_crtc_state, new_plane_state);
+		if (ret)
+			return ret;
+	}
+
+	was_visible = old_plane_state->uapi.visible;
+	visible = new_plane_state->uapi.visible;
+
+	if (!was_crtc_enabled && drm_WARN_ON(&dev_priv->drm, was_visible))
+		was_visible = false;
+
+	/*
+	 * Visibility is calculated as if the crtc was on, but
+	 * after scaler setup everything depends on it being off
+	 * when the crtc isn't active.
+	 *
+	 * FIXME this is wrong for watermarks. Watermarks should also
+	 * be computed as if the pipe would be active. Perhaps move
+	 * per-plane wm computation to the .check_plane() hook, and
+	 * only combine the results from all planes in the current place?
+	 */
+	if (!is_crtc_enabled) {
+		intel_plane_set_invisible(new_crtc_state, new_plane_state);
+		visible = false;
+	}
+
+	if (!was_visible && !visible)
+		return 0;
+
+	turn_off = was_visible && (!visible || mode_changed);
+	turn_on = visible && (!was_visible || mode_changed);
+
+	drm_dbg_atomic(&dev_priv->drm,
+		       "[CRTC:%d:%s] with [PLANE:%d:%s] visible %i -> %i, off %i, on %i, ms %i\n",
+		       crtc->base.base.id, crtc->base.name,
+		       plane->base.base.id, plane->base.name,
+		       was_visible, visible,
+		       turn_off, turn_on, mode_changed);
+
+	if (turn_on) {
+		if (DISPLAY_VER(dev_priv) < 5 && !IS_G4X(dev_priv))
+			new_crtc_state->update_wm_pre = true;
+
+		/* must disable cxsr around plane enable/disable */
+		if (plane->id != PLANE_CURSOR)
+			new_crtc_state->disable_cxsr = true;
+	} else if (turn_off) {
+		if (DISPLAY_VER(dev_priv) < 5 && !IS_G4X(dev_priv))
+			new_crtc_state->update_wm_post = true;
+
+		/* must disable cxsr around plane enable/disable */
+		if (plane->id != PLANE_CURSOR)
+			new_crtc_state->disable_cxsr = true;
+	} else if (intel_wm_need_update(old_plane_state, new_plane_state)) {
+		if (DISPLAY_VER(dev_priv) < 5 && !IS_G4X(dev_priv)) {
+			/* FIXME bollocks */
+			new_crtc_state->update_wm_pre = true;
+			new_crtc_state->update_wm_post = true;
+		}
+	}
+
+	if (visible || was_visible)
+		new_crtc_state->fb_bits |= plane->frontbuffer_bit;
+
+	/*
+	 * ILK/SNB DVSACNTR/Sprite Enable
+	 * IVB SPR_CTL/Sprite Enable
+	 * "When in Self Refresh Big FIFO mode, a write to enable the
+	 *  plane will be internally buffered and delayed while Big FIFO
+	 *  mode is exiting."
+	 *
+	 * Which means that enabling the sprite can take an extra frame
+	 * when we start in big FIFO mode (LP1+). Thus we need to drop
+	 * down to LP0 and wait for vblank in order to make sure the
+	 * sprite gets enabled on the next vblank after the register write.
+	 * Doing otherwise would risk enabling the sprite one frame after
+	 * we've already signalled flip completion. We can resume LP1+
+	 * once the sprite has been enabled.
+	 *
+	 * WaCxSRDisabledForSpriteScaling:ivb
+	 * IVB SPR_SCALE/Scaling Enable
+	 * "Low Power watermarks must be disabled for at least one
+	 *  frame before enabling sprite scaling, and kept disabled
+	 *  until sprite scaling is disabled."
+	 *
+	 * ILK/SNB DVSASCALE/Scaling Enable
+	 * "When in Self Refresh Big FIFO mode, scaling enable will be
+	 *  masked off while Big FIFO mode is exiting."
+	 *
+	 * Despite the w/a only being listed for IVB we assume that
+	 * the ILK/SNB note has similar ramifications, hence we apply
+	 * the w/a on all three platforms.
+	 *
+	 * With experimental results seems this is needed also for primary
+	 * plane, not only sprite plane.
+	 */
+	if (plane->id != PLANE_CURSOR &&
+	    (IS_IRONLAKE(dev_priv) || IS_SANDYBRIDGE(dev_priv) ||
+	     IS_IVYBRIDGE(dev_priv)) &&
+	    (turn_on || (!intel_plane_is_scaled(old_plane_state) &&
+			 intel_plane_is_scaled(new_plane_state))))
+		new_crtc_state->disable_lp_wm = true;
+
+	if (intel_plane_do_async_flip(plane, old_crtc_state, new_crtc_state))
+		new_plane_state->do_async_flip = true;
+
+	return 0;
+}
+
 int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_state,
					 struct intel_crtc_state *new_crtc_state,
					 const struct intel_plane_state *old_plane_state,
@@ -356,6 +537,10 @@ int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_state,
 	if (new_plane_state->uapi.visible)
 		new_crtc_state->active_planes |= BIT(plane->id);

+	if (new_plane_state->uapi.visible &&
+	    intel_plane_is_scaled(new_plane_state))
+		new_crtc_state->scaled_planes |= BIT(plane->id);
+
 	if (new_plane_state->uapi.visible &&
	    intel_format_info_is_yuv_semiplanar(fb->format, fb->modifier))
		new_crtc_state->nv12_planes |= BIT(plane->id);
@@ -403,10 +588,11 @@ int intel_plane_atomic_check(struct intel_atomic_state *state,
 	struct intel_crtc_state *new_crtc_state =
		intel_atomic_get_new_crtc_state(state, crtc);

-	if (new_crtc_state && new_crtc_state->bigjoiner_slave) {
+	if (new_crtc_state && intel_crtc_is_bigjoiner_slave(new_crtc_state)) {
+		struct intel_crtc *master_crtc =
+			intel_master_crtc(new_crtc_state);
 		struct intel_plane *master_plane =
-			intel_crtc_get_plane(new_crtc_state->bigjoiner_linked_crtc,
-					     plane->id);
+			intel_crtc_get_plane(master_crtc, plane->id);

 		new_master_plane_state =
			intel_atomic_get_new_plane_state(state, master_plane);
@@ -507,8 +693,8 @@ void intel_plane_disable_arm(struct intel_plane *plane,
 	plane->disable_arm(plane, crtc_state);
 }

-void intel_update_planes_on_crtc(struct intel_atomic_state *state,
-				 struct intel_crtc *crtc)
+void intel_crtc_planes_update_noarm(struct intel_atomic_state *state,
+				    struct intel_crtc *crtc)
 {
 	struct intel_crtc_state *new_crtc_state =
		intel_atomic_get_new_crtc_state(state, crtc);
@@ -536,8 +722,8 @@ void intel_update_planes_on_crtc(struct intel_atomic_state *state,
 	}
 }

-void skl_arm_planes_on_crtc(struct intel_atomic_state *state,
-			    struct intel_crtc *crtc)
+static void skl_crtc_planes_update_arm(struct intel_atomic_state *state,
+				       struct intel_crtc *crtc)
 {
 	struct intel_crtc_state *old_crtc_state =
		intel_atomic_get_old_crtc_state(state, crtc);
@@ -571,8 +757,8 @@ void skl_arm_planes_on_crtc(struct intel_atomic_state *state,
 	}
 }

-void i9xx_arm_planes_on_crtc(struct intel_atomic_state *state,
-			     struct intel_crtc *crtc)
+static void i9xx_crtc_planes_update_arm(struct intel_atomic_state *state,
+					struct intel_crtc *crtc)
 {
 	struct intel_crtc_state *new_crtc_state =
		intel_atomic_get_new_crtc_state(state, crtc);
@@ -597,6 +783,17 @@ void i9xx_arm_planes_on_crtc(struct intel_atomic_state *state,
 	}
 }

+void intel_crtc_planes_update_arm(struct intel_atomic_state *state,
+				  struct intel_crtc *crtc)
+{
+	struct drm_i915_private *i915 = to_i915(state->base.dev);
+
+	if (DISPLAY_VER(i915) >= 9)
+		skl_crtc_planes_update_arm(state, crtc);
+	else
+		i9xx_crtc_planes_update_arm(state, crtc);
+}
+
 int intel_atomic_plane_check_clipping(struct intel_plane_state *plane_state,
				       struct intel_crtc_state *crtc_state,
				       int min_scale, int max_scale,
@@ -633,7 +830,7 @@ int intel_atomic_plane_check_clipping(struct intel_plane_state *plane_state,
 	}

 	/* right side of the image is on the slave crtc, adjust dst to match */
-	if (crtc_state->bigjoiner_slave)
+	if (intel_crtc_is_bigjoiner_slave(crtc_state))
		drm_rect_translate(dst, -crtc_state->pipe_src_w, 0);

 	/*
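The rename above also makes the split explicit: a "noarm" phase that can run outside the vblank-evasion critical section, and an "arm" phase that latches the double-buffered registers. A hedged sketch of the commit-path ordering this implies (the wrapper function and the critical-section framing are assumptions based on the naming):

/* Illustrative commit-path usage of the split plane-update hooks. */
static void example_commit_planes(struct intel_atomic_state *state,
				  struct intel_crtc *crtc)
{
	/* Writes that do not latch the update; safe outside vblank evasion. */
	intel_crtc_planes_update_noarm(state, crtc);

	/* ... enter the vblank-evasion critical section ... */

	/* Arms the double-buffered registers; takes effect on the next vblank. */
	intel_crtc_planes_update_arm(state, crtc);

	/* ... leave the critical section ... */
}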
@@ -44,22 +44,16 @@ void intel_plane_free(struct intel_plane *plane);
 struct drm_plane_state *intel_plane_duplicate_state(struct drm_plane *plane);
 void intel_plane_destroy_state(struct drm_plane *plane,
			        struct drm_plane_state *state);
-void intel_update_planes_on_crtc(struct intel_atomic_state *state,
-				 struct intel_crtc *crtc);
-void skl_arm_planes_on_crtc(struct intel_atomic_state *state,
-			    struct intel_crtc *crtc);
-void i9xx_arm_planes_on_crtc(struct intel_atomic_state *state,
-			     struct intel_crtc *crtc);
+void intel_crtc_planes_update_noarm(struct intel_atomic_state *state,
+				    struct intel_crtc *crtc);
+void intel_crtc_planes_update_arm(struct intel_atomic_state *state,
+				  struct intel_crtc *crtc);
 int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_state,
					 struct intel_crtc_state *crtc_state,
					 const struct intel_plane_state *old_plane_state,
					 struct intel_plane_state *intel_state);
 int intel_plane_atomic_check(struct intel_atomic_state *state,
			      struct intel_plane *plane);
-int intel_plane_atomic_calc_changes(const struct intel_crtc_state *old_crtc_state,
-				    struct intel_crtc_state *crtc_state,
-				    const struct intel_plane_state *old_plane_state,
-				    struct intel_plane_state *plane_state);
 int intel_plane_calc_min_cdclk(struct intel_atomic_state *state,
				struct intel_plane *plane,
				bool *need_cdclk_calc);
@@ -596,6 +596,12 @@ parse_general_features(struct drm_i915_private *i915,
 	} else {
		i915->vbt.orientation = DRM_MODE_PANEL_ORIENTATION_UNKNOWN;
 	}

+	if (bdb->version >= 249 && general->afc_startup_config) {
+		i915->vbt.override_afc_startup = true;
+		i915->vbt.override_afc_startup_val = general->afc_startup_config == 0x1 ? 0x0 : 0x7;
+	}
+
 	drm_dbg_kms(&i915->drm,
		     "BDB_GENERAL_FEATURES int_tv_support %d int_crt_support %d lvds_use_ssc %d lvds_ssc_freq %d display_clock_mode %d fdi_rx_polarity_inverted %d\n",
		     i915->vbt.int_tv_support,
@ -10,6 +10,7 @@
|
||||
#include "intel_bw.h"
|
||||
#include "intel_cdclk.h"
|
||||
#include "intel_display_types.h"
|
||||
#include "intel_mchbar_regs.h"
|
||||
#include "intel_pcode.h"
|
||||
#include "intel_pm.h"
|
||||
|
||||
@ -673,6 +674,49 @@ intel_atomic_get_bw_state(struct intel_atomic_state *state)
|
||||
return to_intel_bw_state(bw_state);
|
||||
}
|
||||
|
||||
static void skl_crtc_calc_dbuf_bw(struct intel_bw_state *bw_state,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
struct drm_i915_private *i915 = to_i915(crtc->base.dev);
|
||||
struct intel_dbuf_bw *crtc_bw = &bw_state->dbuf_bw[crtc->pipe];
|
||||
enum plane_id plane_id;
|
||||
|
||||
memset(&crtc_bw->used_bw, 0, sizeof(crtc_bw->used_bw));
|
||||
|
||||
if (!crtc_state->hw.active)
|
||||
return;
|
||||
|
||||
for_each_plane_id_on_crtc(crtc, plane_id) {
|
||||
const struct skl_ddb_entry *ddb_y =
|
||||
&crtc_state->wm.skl.plane_ddb_y[plane_id];
|
||||
const struct skl_ddb_entry *ddb_uv =
|
||||
&crtc_state->wm.skl.plane_ddb_uv[plane_id];
|
||||
unsigned int data_rate = crtc_state->data_rate[plane_id];
|
||||
unsigned int dbuf_mask = 0;
|
||||
enum dbuf_slice slice;
|
||||
|
||||
dbuf_mask |= skl_ddb_dbuf_slice_mask(i915, ddb_y);
|
||||
dbuf_mask |= skl_ddb_dbuf_slice_mask(i915, ddb_uv);
|
||||
|
||||
/*
|
||||
* FIXME: To calculate that more properly we probably
|
||||
* need to split per plane data_rate into data_rate_y
|
||||
* and data_rate_uv for multiplanar formats in order not
|
||||
* to get accounted those twice if they happen to reside
|
||||
* on different slices.
|
||||
* However for pre-icl this would work anyway because
|
||||
* we have only single slice and for icl+ uv plane has
|
||||
* non-zero data rate.
|
||||
* So in worst case those calculation are a bit
|
||||
* pessimistic, which shouldn't pose any significant
|
||||
* problem anyway.
|
||||
*/
|
||||
for_each_dbuf_slice_in_mask(i915, slice, dbuf_mask)
|
||||
crtc_bw->used_bw[slice] += data_rate;
|
||||
}
|
||||
}

int skl_bw_calc_min_cdclk(struct intel_atomic_state *state)
{
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
@@ -685,50 +729,13 @@ int skl_bw_calc_min_cdclk(struct intel_atomic_state *state)
int i;

for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
enum plane_id plane_id;
struct intel_dbuf_bw *crtc_bw;

new_bw_state = intel_atomic_get_bw_state(state);
if (IS_ERR(new_bw_state))
return PTR_ERR(new_bw_state);

old_bw_state = intel_atomic_get_old_bw_state(state);

crtc_bw = &new_bw_state->dbuf_bw[crtc->pipe];

memset(&crtc_bw->used_bw, 0, sizeof(crtc_bw->used_bw));

if (!crtc_state->hw.active)
continue;

for_each_plane_id_on_crtc(crtc, plane_id) {
const struct skl_ddb_entry *plane_alloc =
&crtc_state->wm.skl.plane_ddb_y[plane_id];
const struct skl_ddb_entry *uv_plane_alloc =
&crtc_state->wm.skl.plane_ddb_uv[plane_id];
unsigned int data_rate = crtc_state->data_rate[plane_id];
unsigned int dbuf_mask = 0;
enum dbuf_slice slice;

dbuf_mask |= skl_ddb_dbuf_slice_mask(dev_priv, plane_alloc);
dbuf_mask |= skl_ddb_dbuf_slice_mask(dev_priv, uv_plane_alloc);

/*
* FIXME: To calculate that more properly we probably
* need to to split per plane data_rate into data_rate_y
* and data_rate_uv for multiplanar formats in order not
* to get accounted those twice if they happen to reside
* on different slices.
* However for pre-icl this would work anyway because
* we have only single slice and for icl+ uv plane has
* non-zero data rate.
* So in worst case those calculation are a bit
* pessimistic, which shouldn't pose any significant
* problem anyway.
*/
for_each_dbuf_slice_in_mask(dev_priv, slice, dbuf_mask)
crtc_bw->used_bw[slice] += data_rate;
}
skl_crtc_calc_dbuf_bw(new_bw_state, crtc_state);
}

if (!old_bw_state)
@@ -809,25 +816,11 @@ int intel_bw_calc_min_cdclk(struct intel_atomic_state *state)
return 0;
}

int intel_bw_atomic_check(struct intel_atomic_state *state)
static u16 icl_qgv_points_mask(struct drm_i915_private *i915)
{
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
struct intel_crtc_state *new_crtc_state, *old_crtc_state;
struct intel_bw_state *new_bw_state = NULL;
const struct intel_bw_state *old_bw_state = NULL;
unsigned int data_rate;
unsigned int num_active_planes;
struct intel_crtc *crtc;
int i, ret;
u32 allowed_points = 0;
unsigned int max_bw_point = 0, max_bw = 0;
unsigned int num_qgv_points = dev_priv->max_bw[0].num_qgv_points;
unsigned int num_psf_gv_points = dev_priv->max_bw[0].num_psf_gv_points;
u32 mask = 0;

/* FIXME earlier gens need some checks too */
if (DISPLAY_VER(dev_priv) < 11)
return 0;
unsigned int num_psf_gv_points = i915->max_bw[0].num_psf_gv_points;
unsigned int num_qgv_points = i915->max_bw[0].num_qgv_points;
u16 mask = 0;

/*
* We can _not_ use the whole ADLS_QGV_PT_MASK here, as PCode rejects
@@ -840,6 +833,16 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
if (num_psf_gv_points > 0)
mask |= REG_GENMASK(num_psf_gv_points - 1, 0) << ADLS_PSF_PT_SHIFT;

return mask;
}
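
icl_qgv_points_mask() packs the QGV point bits into the low byte and the PSF GV point bits above them via ADLS_PSF_PT_SHIFT. A small standalone sketch of the same packing, assuming a shift of 8; the real value of ADLS_PSF_PT_SHIFT should be taken from the register headers, not from this sketch:

#include <stdio.h>
#include <stdint.h>

/* GENMASK(h, l): bits h..l set, mirroring the kernel macro. */
#define GENMASK_U16(h, l) ((uint16_t)(((1u << ((h) - (l) + 1)) - 1) << (l)))
#define PSF_PT_SHIFT 8 /* assumed value of ADLS_PSF_PT_SHIFT */

static uint16_t qgv_points_mask(unsigned int num_qgv, unsigned int num_psf)
{
        uint16_t mask = 0;

        if (num_qgv > 0)
                mask |= GENMASK_U16(num_qgv - 1, 0);
        if (num_psf > 0)
                mask |= GENMASK_U16(num_psf - 1, 0) << PSF_PT_SHIFT;
        return mask;
}

int main(void)
{
        /* e.g. 8 QGV points and 3 PSF points -> 0x07ff */
        printf("0x%04x\n", qgv_points_mask(8, 3));
        return 0;
}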

static int intel_bw_check_data_rate(struct intel_atomic_state *state, bool *changed)
{
struct drm_i915_private *i915 = to_i915(state->base.dev);
const struct intel_crtc_state *new_crtc_state, *old_crtc_state;
struct intel_crtc *crtc;
int i;

for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
new_crtc_state, i) {
unsigned int old_data_rate =
@@ -850,6 +853,7 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
intel_bw_crtc_num_active_planes(old_crtc_state);
unsigned int new_active_planes =
intel_bw_crtc_num_active_planes(new_crtc_state);
struct intel_bw_state *new_bw_state;

/*
* Avoid locking the bw state when
@@ -866,14 +870,53 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
new_bw_state->data_rate[crtc->pipe] = new_data_rate;
new_bw_state->num_active_planes[crtc->pipe] = new_active_planes;

drm_dbg_kms(&dev_priv->drm,
"pipe %c data rate %u num active planes %u\n",
pipe_name(crtc->pipe),
*changed = true;

drm_dbg_kms(&i915->drm,
"[CRTC:%d:%s] data rate %u num active planes %u\n",
crtc->base.base.id, crtc->base.name,
new_bw_state->data_rate[crtc->pipe],
new_bw_state->num_active_planes[crtc->pipe]);
}

if (!new_bw_state)
return 0;
}

int intel_bw_atomic_check(struct intel_atomic_state *state)
{
struct drm_i915_private *dev_priv = to_i915(state->base.dev);
const struct intel_bw_state *old_bw_state;
struct intel_bw_state *new_bw_state;
unsigned int data_rate;
unsigned int num_active_planes;
int i, ret;
u32 allowed_points = 0;
unsigned int max_bw_point = 0, max_bw = 0;
unsigned int num_qgv_points = dev_priv->max_bw[0].num_qgv_points;
unsigned int num_psf_gv_points = dev_priv->max_bw[0].num_psf_gv_points;
bool changed = false;

/* FIXME earlier gens need some checks too */
if (DISPLAY_VER(dev_priv) < 11)
return 0;

ret = intel_bw_check_data_rate(state, &changed);
if (ret)
return ret;

old_bw_state = intel_atomic_get_old_bw_state(state);
new_bw_state = intel_atomic_get_new_bw_state(state);

if (new_bw_state &&
intel_can_enable_sagv(dev_priv, old_bw_state) !=
intel_can_enable_sagv(dev_priv, new_bw_state))
changed = true;

/*
* If none of our inputs (data rates, number of active
* planes, SAGV yes/no) changed then nothing to do here.
*/
if (!changed)
return 0;

ret = intel_atomic_lock_global_state(&new_bw_state->base);
@@ -957,9 +1000,9 @@ int intel_bw_atomic_check(struct intel_atomic_state *state)
* We store the ones which need to be masked as that is what PCode
* actually accepts as a parameter.
*/
new_bw_state->qgv_points_mask = ~allowed_points & mask;
new_bw_state->qgv_points_mask = ~allowed_points &
icl_qgv_points_mask(dev_priv);

old_bw_state = intel_atomic_get_old_bw_state(state);
/*
* If the actual mask had changed we need to make sure that
* the commits are serialized(in case this is a nomodeset, nonblocking)

@@ -30,19 +30,19 @@ struct intel_bw_state {
*/
u8 pipe_sagv_reject;

/* bitmask of active pipes */
u8 active_pipes;

/*
* Current QGV points mask, which restricts
* some particular SAGV states, not to confuse
* with pipe_sagv_mask.
*/
u8 qgv_points_mask;
u16 qgv_points_mask;

unsigned int data_rate[I915_MAX_PIPES];
u8 num_active_planes[I915_MAX_PIPES];

/* bitmask of active pipes */
u8 active_pipes;

int min_cdclk;
};
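
The qgv_points_mask field is widened from u8 to u16 here because, with the PSF GV point bits placed above the QGV byte (see icl_qgv_points_mask() earlier), the combined mask no longer fits in 8 bits. A compile-time restatement of that reasoning, reusing the assumed bit layout from the previous sketch:

#include <stdint.h>

#define PSF_PT_SHIFT      8 /* assumed, as in the earlier sketch */
#define MAX_PSF_GV_POINTS 3

/* The top PSF bit lands at bit PSF_PT_SHIFT + MAX_PSF_GV_POINTS - 1 = 10,
 * which a u8 would silently truncate.
 */
_Static_assert(PSF_PT_SHIFT + MAX_PSF_GV_POINTS - 1 > 7,
               "combined QGV+PSF mask needs more than 8 bits");

typedef uint16_t qgv_points_mask_t; /* wide enough for bits 0..10 */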

@@ -23,6 +23,7 @@

#include <linux/time.h>

#include "hsw_ips.h"
#include "intel_atomic.h"
#include "intel_atomic_plane.h"
#include "intel_audio.h"
@@ -31,6 +32,7 @@
#include "intel_crtc.h"
#include "intel_de.h"
#include "intel_display_types.h"
#include "intel_mchbar_regs.h"
#include "intel_pci_config.h"
#include "intel_pcode.h"
#include "intel_psr.h"

@@ -28,6 +28,25 @@
#include "intel_dpll.h"
#include "vlv_dsi_pll.h"

struct intel_color_funcs {
int (*color_check)(struct intel_crtc_state *crtc_state);
/*
* Program double buffered color management registers during
* vblank evasion. The registers should then latch during the
* next vblank start, alongside any other double buffered registers
* involved with the same commit.
*/
void (*color_commit)(const struct intel_crtc_state *crtc_state);
/*
* Load LUTs (and other single buffered color management
* registers). Will (hopefully) be called during the vblank
* following the latching of any double buffered registers
* involved with the same commit.
*/
void (*load_luts)(const struct intel_crtc_state *crtc_state);
void (*read_luts)(struct intel_crtc_state *crtc_state);
};

#define CTM_COEFF_SIGN (1ULL << 63)

#define CTM_COEFF_1_0 (1ULL << 32)
@@ -160,29 +179,29 @@ static void ilk_update_pipe_csc(struct intel_crtc *crtc,
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;

intel_de_write(dev_priv, PIPE_CSC_PREOFF_HI(pipe), preoff[0]);
intel_de_write(dev_priv, PIPE_CSC_PREOFF_ME(pipe), preoff[1]);
intel_de_write(dev_priv, PIPE_CSC_PREOFF_LO(pipe), preoff[2]);
intel_de_write_fw(dev_priv, PIPE_CSC_PREOFF_HI(pipe), preoff[0]);
intel_de_write_fw(dev_priv, PIPE_CSC_PREOFF_ME(pipe), preoff[1]);
intel_de_write_fw(dev_priv, PIPE_CSC_PREOFF_LO(pipe), preoff[2]);

intel_de_write(dev_priv, PIPE_CSC_COEFF_RY_GY(pipe),
coeff[0] << 16 | coeff[1]);
intel_de_write(dev_priv, PIPE_CSC_COEFF_BY(pipe), coeff[2] << 16);
intel_de_write_fw(dev_priv, PIPE_CSC_COEFF_RY_GY(pipe),
coeff[0] << 16 | coeff[1]);
intel_de_write_fw(dev_priv, PIPE_CSC_COEFF_BY(pipe), coeff[2] << 16);

intel_de_write(dev_priv, PIPE_CSC_COEFF_RU_GU(pipe),
coeff[3] << 16 | coeff[4]);
intel_de_write(dev_priv, PIPE_CSC_COEFF_BU(pipe), coeff[5] << 16);
intel_de_write_fw(dev_priv, PIPE_CSC_COEFF_RU_GU(pipe),
coeff[3] << 16 | coeff[4]);
intel_de_write_fw(dev_priv, PIPE_CSC_COEFF_BU(pipe), coeff[5] << 16);

intel_de_write(dev_priv, PIPE_CSC_COEFF_RV_GV(pipe),
coeff[6] << 16 | coeff[7]);
intel_de_write(dev_priv, PIPE_CSC_COEFF_BV(pipe), coeff[8] << 16);
intel_de_write_fw(dev_priv, PIPE_CSC_COEFF_RV_GV(pipe),
coeff[6] << 16 | coeff[7]);
intel_de_write_fw(dev_priv, PIPE_CSC_COEFF_BV(pipe), coeff[8] << 16);

if (DISPLAY_VER(dev_priv) >= 7) {
intel_de_write(dev_priv, PIPE_CSC_POSTOFF_HI(pipe),
postoff[0]);
intel_de_write(dev_priv, PIPE_CSC_POSTOFF_ME(pipe),
postoff[1]);
intel_de_write(dev_priv, PIPE_CSC_POSTOFF_LO(pipe),
postoff[2]);
intel_de_write_fw(dev_priv, PIPE_CSC_POSTOFF_HI(pipe),
postoff[0]);
intel_de_write_fw(dev_priv, PIPE_CSC_POSTOFF_ME(pipe),
postoff[1]);
intel_de_write_fw(dev_priv, PIPE_CSC_POSTOFF_LO(pipe),
postoff[2]);
}
}

@@ -194,28 +213,28 @@ static void icl_update_output_csc(struct intel_crtc *crtc,
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;

intel_de_write(dev_priv, PIPE_CSC_OUTPUT_PREOFF_HI(pipe), preoff[0]);
intel_de_write(dev_priv, PIPE_CSC_OUTPUT_PREOFF_ME(pipe), preoff[1]);
intel_de_write(dev_priv, PIPE_CSC_OUTPUT_PREOFF_LO(pipe), preoff[2]);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_PREOFF_HI(pipe), preoff[0]);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_PREOFF_ME(pipe), preoff[1]);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_PREOFF_LO(pipe), preoff[2]);

intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_RY_GY(pipe),
coeff[0] << 16 | coeff[1]);
intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_BY(pipe),
coeff[2] << 16);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_COEFF_RY_GY(pipe),
coeff[0] << 16 | coeff[1]);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_COEFF_BY(pipe),
coeff[2] << 16);

intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_RU_GU(pipe),
coeff[3] << 16 | coeff[4]);
intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_BU(pipe),
coeff[5] << 16);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_COEFF_RU_GU(pipe),
coeff[3] << 16 | coeff[4]);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_COEFF_BU(pipe),
coeff[5] << 16);

intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_RV_GV(pipe),
coeff[6] << 16 | coeff[7]);
intel_de_write(dev_priv, PIPE_CSC_OUTPUT_COEFF_BV(pipe),
coeff[8] << 16);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_COEFF_RV_GV(pipe),
coeff[6] << 16 | coeff[7]);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_COEFF_BV(pipe),
coeff[8] << 16);

intel_de_write(dev_priv, PIPE_CSC_OUTPUT_POSTOFF_HI(pipe), postoff[0]);
intel_de_write(dev_priv, PIPE_CSC_OUTPUT_POSTOFF_ME(pipe), postoff[1]);
intel_de_write(dev_priv, PIPE_CSC_OUTPUT_POSTOFF_LO(pipe), postoff[2]);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_POSTOFF_HI(pipe), postoff[0]);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_POSTOFF_ME(pipe), postoff[1]);
intel_de_write_fw(dev_priv, PIPE_CSC_OUTPUT_POSTOFF_LO(pipe), postoff[2]);
}

static bool ilk_csc_limited_range(const struct intel_crtc_state *crtc_state)
@@ -319,8 +338,8 @@ static void ilk_load_csc_matrix(const struct intel_crtc_state *crtc_state)
ilk_csc_off_zero);
}

intel_de_write(dev_priv, PIPE_CSC_MODE(crtc->pipe),
crtc_state->csc_mode);
intel_de_write_fw(dev_priv, PIPE_CSC_MODE(crtc->pipe),
crtc_state->csc_mode);
}

static void icl_load_csc_matrix(const struct intel_crtc_state *crtc_state)
@@ -346,8 +365,8 @@ static void icl_load_csc_matrix(const struct intel_crtc_state *crtc_state)
ilk_csc_postoff_limited_range);
}

intel_de_write(dev_priv, PIPE_CSC_MODE(crtc->pipe),
crtc_state->csc_mode);
intel_de_write_fw(dev_priv, PIPE_CSC_MODE(crtc->pipe),
crtc_state->csc_mode);
}

static void chv_load_cgm_csc(struct intel_crtc *crtc,
@@ -377,16 +396,16 @@ static void chv_load_cgm_csc(struct intel_crtc *crtc,
coeffs[i] |= (abs_coeff >> 20) & 0xfff;
}

intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF01(pipe),
coeffs[1] << 16 | coeffs[0]);
intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF23(pipe),
coeffs[3] << 16 | coeffs[2]);
intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF45(pipe),
coeffs[5] << 16 | coeffs[4]);
intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF67(pipe),
coeffs[7] << 16 | coeffs[6]);
intel_de_write(dev_priv, CGM_PIPE_CSC_COEFF8(pipe),
coeffs[8]);
intel_de_write_fw(dev_priv, CGM_PIPE_CSC_COEFF01(pipe),
coeffs[1] << 16 | coeffs[0]);
intel_de_write_fw(dev_priv, CGM_PIPE_CSC_COEFF23(pipe),
coeffs[3] << 16 | coeffs[2]);
intel_de_write_fw(dev_priv, CGM_PIPE_CSC_COEFF45(pipe),
coeffs[5] << 16 | coeffs[4]);
intel_de_write_fw(dev_priv, CGM_PIPE_CSC_COEFF67(pipe),
coeffs[7] << 16 | coeffs[6]);
intel_de_write_fw(dev_priv, CGM_PIPE_CSC_COEFF8(pipe),
coeffs[8]);
}

/* convert hw value with given bit_precision to lut property val */

@@ -2703,6 +2703,7 @@ static void intel_ddi_post_disable(struct intel_atomic_state *state,
struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
bool is_tc_port = intel_phy_is_tc(dev_priv, phy);
struct intel_crtc *slave_crtc;

if (!intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_DP_MST)) {
intel_crtc_vblank_off(old_crtc_state);
@@ -2721,9 +2722,8 @@ static void intel_ddi_post_disable(struct intel_atomic_state *state,
ilk_pfit_disable(old_crtc_state);
}

if (old_crtc_state->bigjoiner_linked_crtc) {
struct intel_crtc *slave_crtc =
old_crtc_state->bigjoiner_linked_crtc;
for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, slave_crtc,
intel_crtc_bigjoiner_slave_pipes(old_crtc_state)) {
const struct intel_crtc_state *old_slave_crtc_state =
intel_atomic_get_old_crtc_state(state, slave_crtc);

@@ -2926,7 +2926,7 @@ static void intel_enable_ddi(struct intel_atomic_state *state,
{
drm_WARN_ON(state->base.dev, crtc_state->has_pch_encoder);

if (!crtc_state->bigjoiner_slave)
if (!intel_crtc_is_bigjoiner_slave(crtc_state))
intel_ddi_enable_transcoder_func(encoder, crtc_state);

intel_vrr_enable(encoder, crtc_state);
@@ -3041,6 +3041,7 @@ intel_ddi_update_prepare(struct intel_atomic_state *state,
struct intel_encoder *encoder,
struct intel_crtc *crtc)
{
struct drm_i915_private *i915 = to_i915(state->base.dev);
struct intel_crtc_state *crtc_state =
crtc ? intel_atomic_get_new_crtc_state(state, crtc) : NULL;
int required_lanes = crtc_state ? crtc_state->lane_count : 1;
@@ -3050,11 +3051,12 @@ intel_ddi_update_prepare(struct intel_atomic_state *state,
intel_tc_port_get_link(enc_to_dig_port(encoder),
required_lanes);
if (crtc_state && crtc_state->hw.active) {
struct intel_crtc *slave_crtc = crtc_state->bigjoiner_linked_crtc;
struct intel_crtc *slave_crtc;

intel_update_active_dpll(state, crtc, encoder);

if (slave_crtc)
for_each_intel_crtc_in_pipe_mask(&i915->drm, slave_crtc,
intel_crtc_bigjoiner_slave_pipes(crtc_state))
intel_update_active_dpll(state, slave_crtc, encoder);
}
}
@@ -3099,10 +3101,23 @@ intel_ddi_pre_pll_enable(struct intel_atomic_state *state,
crtc_state->lane_lat_optim_mask);
}

static void adlp_tbt_to_dp_alt_switch_wa(struct intel_encoder *encoder)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
enum tc_port tc_port = intel_port_to_tc(i915, encoder->port);
int ln;

for (ln = 0; ln < 2; ln++) {
intel_de_write(i915, HIP_INDEX_REG(tc_port), HIP_INDEX_VAL(tc_port, ln));
intel_de_rmw(i915, DKL_PCS_DW5(tc_port), DKL_PCS_DW5_CORE_SOFTRESET, 0);
}
}
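
adlp_tbt_to_dp_alt_switch_wa() uses the common banked-register idiom: write a lane number into an index register, then read-modify-write the register the index currently selects. A self-contained model of that idiom (the register layout and helpers here are invented for illustration and do not correspond to real i915 offsets):

#include <stdint.h>
#include <stdio.h>

/* Fake MMIO: two banks of one register, selected by an index write. */
static uint32_t index_reg;
static uint32_t banked_reg[2] = { 0x11, 0x11 }; /* bit 0: soft reset */

#define CORE_SOFTRESET (1u << 0)

static uint32_t mmio_read(void) { return banked_reg[index_reg]; }
static void mmio_write(uint32_t v) { banked_reg[index_reg] = v; }

/* Clear 'clear' bits, set 'set' bits, in the spirit of intel_de_rmw(). */
static void mmio_rmw(uint32_t clear, uint32_t set)
{
        mmio_write((mmio_read() & ~clear) | set);
}

int main(void)
{
        for (int ln = 0; ln < 2; ln++) {
                index_reg = ln;              /* select the lane's bank */
                mmio_rmw(CORE_SOFTRESET, 0); /* deassert soft reset */
        }
        printf("lane0=0x%x lane1=0x%x\n", banked_reg[0], banked_reg[1]);
        return 0;
}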

static void intel_ddi_prepare_link_retrain(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct intel_encoder *encoder = &dig_port->base;
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum port port = encoder->port;
u32 dp_tp_ctl, ddi_buf_ctl;
@@ -3138,6 +3153,10 @@ static void intel_ddi_prepare_link_retrain(struct intel_dp *intel_dp,
intel_de_write(dev_priv, dp_tp_ctl_reg(encoder, crtc_state), dp_tp_ctl);
intel_de_posting_read(dev_priv, dp_tp_ctl_reg(encoder, crtc_state));

if (IS_ALDERLAKE_P(dev_priv) &&
(intel_tc_port_in_dp_alt_mode(dig_port) || intel_tc_port_in_legacy_mode(dig_port)))
adlp_tbt_to_dp_alt_switch_wa(encoder);

intel_dp->DP |= DDI_BUF_CTL_ENABLE;
intel_de_write(dev_priv, DDI_BUF_CTL(port), intel_dp->DP);
intel_de_posting_read(dev_priv, DDI_BUF_CTL(port));

[File diff suppressed because it is too large]

@@ -430,11 +430,11 @@ enum hpd_pin {
&(dev)->mode_config.crtc_list, \
base.head)

#define for_each_intel_crtc_mask(dev, intel_crtc, crtc_mask) \
#define for_each_intel_crtc_in_pipe_mask(dev, intel_crtc, pipe_mask) \
list_for_each_entry(intel_crtc, \
&(dev)->mode_config.crtc_list, \
base.head) \
for_each_if((crtc_mask) & drm_crtc_mask(&intel_crtc->base))
for_each_if((pipe_mask) & BIT(intel_crtc->pipe))
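
The rewritten macro filters on a pipe bitmask (BIT(intel_crtc->pipe)) rather than on drm_crtc_mask(), which is why callers can now carry the selection in a u8. A toy model of the same filtering over an array instead of the DRM mode_config list (the struct and macro are illustrative, not the driver's):

#include <stdio.h>

struct crtc { int pipe; };

#define BIT(n) (1u << (n))
#define for_each_crtc_in_pipe_mask(crtcs, n, c, mask)        \
        for (c = (crtcs); c < (crtcs) + (n); c++)             \
                if ((mask) & BIT((c)->pipe))

int main(void)
{
        struct crtc crtcs[] = { { 0 }, { 1 }, { 2 }, { 3 } };
        struct crtc *c;
        unsigned int pipe_mask = BIT(1) | BIT(3);

        /* Visits only the CRTCs whose pipe bit is set: pipes 1 and 3. */
        for_each_crtc_in_pipe_mask(crtcs, 4, c, pipe_mask)
                printf("pipe %d selected\n", c->pipe);
        return 0;
}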

#define for_each_intel_encoder(dev, intel_encoder) \
list_for_each_entry(intel_encoder, \
@@ -555,6 +555,10 @@ intel_mode_valid_max_plane_size(struct drm_i915_private *dev_priv,
bool bigjoiner);
enum phy intel_port_to_phy(struct drm_i915_private *i915, enum port port);
bool is_trans_port_sync_mode(const struct intel_crtc_state *state);
bool intel_crtc_is_bigjoiner_slave(const struct intel_crtc_state *crtc_state);
bool intel_crtc_is_bigjoiner_master(const struct intel_crtc_state *crtc_state);
u8 intel_crtc_bigjoiner_slave_pipes(const struct intel_crtc_state *crtc_state);
struct intel_crtc *intel_master_crtc(const struct intel_crtc_state *crtc_state);

void intel_plane_destroy(struct drm_plane *plane);
void intel_enable_transcoder(const struct intel_crtc_state *new_crtc_state);
@@ -632,9 +636,6 @@ void intel_cpu_transcoder_get_m2_n2(struct intel_crtc *crtc,
void i9xx_crtc_clock_get(struct intel_crtc *crtc,
struct intel_crtc_state *pipe_config);
int intel_dotclock_calculate(int link_freq, const struct intel_link_m_n *m_n);
bool hsw_crtc_state_ips_capable(const struct intel_crtc_state *crtc_state);
void hsw_enable_ips(const struct intel_crtc_state *crtc_state);
void hsw_disable_ips(const struct intel_crtc_state *crtc_state);
enum intel_display_power_domain intel_port_to_power_domain(enum port port);
enum intel_display_power_domain
intel_aux_power_domain(struct intel_digital_port *dig_port);

@@ -16,6 +16,7 @@
#include "intel_dp_mst.h"
#include "intel_drrs.h"
#include "intel_fbc.h"
#include "intel_fbdev.h"
#include "intel_hdcp.h"
#include "intel_hdmi.h"
#include "intel_pm.h"
@@ -78,7 +79,7 @@ static int i915_sr_status(struct seq_file *m, void *unused)
if (DISPLAY_VER(dev_priv) >= 9)
/* no global SR status; inspect per-plane WM */;
else if (HAS_PCH_SPLIT(dev_priv))
sr_enabled = intel_de_read(dev_priv, WM1_LP_ILK) & WM1_LP_SR_EN;
sr_enabled = intel_de_read(dev_priv, WM1_LP_ILK) & WM_LP_ENABLE;
else if (IS_I965GM(dev_priv) || IS_G4X(dev_priv) ||
IS_I945G(dev_priv) || IS_I945GM(dev_priv))
sr_enabled = intel_de_read(dev_priv, FW_BLC_SELF) & FW_BLC_SELF_EN;
@@ -124,9 +125,8 @@ static int i915_gem_framebuffer_info(struct seq_file *m, void *data)
struct drm_framebuffer *drm_fb;

#ifdef CONFIG_DRM_FBDEV_EMULATION
if (dev_priv->fbdev && dev_priv->fbdev->helper.fb) {
fbdev_fb = to_intel_framebuffer(dev_priv->fbdev->helper.fb);

fbdev_fb = intel_fbdev_framebuffer(dev_priv->fbdev);
if (fbdev_fb) {
seq_printf(m, "fbcon size: %d x %d, depth %d, %d bpp, modifier 0x%llx, refcount %d, obj ",
fbdev_fb->base.width,
fbdev_fb->base.height,
@@ -474,8 +474,8 @@ static int i915_dmc_info(struct seq_file *m, void *unused)
* reg for DC3CO debugging and validation,
* but TGL DMC f/w is using DMC_DEBUG3 reg for DC3CO counter.
*/
seq_printf(m, "DC3CO count: %d\n",
intel_de_read(dev_priv, DMC_DEBUG3));
seq_printf(m, "DC3CO count: %d\n", intel_de_read(dev_priv, IS_DGFX(dev_priv) ?
DG1_DMC_DEBUG3 : TGL_DMC_DEBUG3));
} else {
dc5_reg = IS_BROXTON(dev_priv) ? BXT_DMC_DC3_DC5_COUNT :
SKL_DMC_DC3_DC5_COUNT;
@@ -923,23 +923,23 @@ static void intel_crtc_info(struct seq_file *m, struct intel_crtc *crtc)
yesno(crtc_state->uapi.active),
DRM_MODE_ARG(&crtc_state->uapi.mode));

if (crtc_state->hw.enable) {
seq_printf(m, "\thw: active=%s, adjusted_mode=" DRM_MODE_FMT "\n",
yesno(crtc_state->hw.active),
DRM_MODE_ARG(&crtc_state->hw.adjusted_mode));
seq_printf(m, "\thw: enable=%s, active=%s\n",
yesno(crtc_state->hw.enable), yesno(crtc_state->hw.active));
seq_printf(m, "\tadjusted_mode=" DRM_MODE_FMT "\n",
DRM_MODE_ARG(&crtc_state->hw.adjusted_mode));
seq_printf(m, "\tpipe mode=" DRM_MODE_FMT "\n",
DRM_MODE_ARG(&crtc_state->hw.pipe_mode));

seq_printf(m, "\tpipe src size=%dx%d, dither=%s, bpp=%d\n",
crtc_state->pipe_src_w, crtc_state->pipe_src_h,
yesno(crtc_state->dither), crtc_state->pipe_bpp);
seq_printf(m, "\tpipe src size=%dx%d, dither=%s, bpp=%d\n",
crtc_state->pipe_src_w, crtc_state->pipe_src_h,
yesno(crtc_state->dither), crtc_state->pipe_bpp);

intel_scaler_info(m, crtc);
}
intel_scaler_info(m, crtc);

if (crtc_state->bigjoiner)
seq_printf(m, "\tLinked to [CRTC:%d:%s] as a %s\n",
crtc_state->bigjoiner_linked_crtc->base.base.id,
crtc_state->bigjoiner_linked_crtc->base.name,
crtc_state->bigjoiner_slave ? "slave" : "master");
seq_printf(m, "\tLinked to 0x%x pipes as a %s\n",
crtc_state->bigjoiner_pipes,
intel_crtc_is_bigjoiner_slave(crtc_state) ? "slave" : "master");

for_each_intel_encoder_mask(&dev_priv->drm, encoder,
crtc_state->uapi.encoder_mask)
@@ -1015,6 +1015,7 @@ static int i915_shared_dplls_info(struct seq_file *m, void *unused)
seq_printf(m, " wrpll: 0x%08x\n", pll->state.hw_state.wrpll);
seq_printf(m, " cfgcr0: 0x%08x\n", pll->state.hw_state.cfgcr0);
seq_printf(m, " cfgcr1: 0x%08x\n", pll->state.hw_state.cfgcr1);
seq_printf(m, " div0: 0x%08x\n", pll->state.hw_state.div0);
seq_printf(m, " mg_refclkin_ctl: 0x%08x\n",
pll->state.hw_state.mg_refclkin_ctl);
seq_printf(m, " mg_clktop2_coreclkctl1: 0x%08x\n",

@@ -16,6 +16,7 @@
#include "intel_dpio_phy.h"
#include "intel_dpll.h"
#include "intel_hotplug.h"
#include "intel_mchbar_regs.h"
#include "intel_pch_refclk.h"
#include "intel_pcode.h"
#include "intel_pm.h"

@@ -26,7 +26,6 @@
#ifndef __INTEL_DISPLAY_TYPES_H__
#define __INTEL_DISPLAY_TYPES_H__

#include <linux/async.h>
#include <linux/i2c.h>
#include <linux/pm_qos.h>
#include <linux/pwm.h>
@@ -38,7 +37,6 @@
#include <drm/drm_crtc.h>
#include <drm/drm_dsc.h>
#include <drm/drm_encoder.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_rect.h>
@@ -145,25 +143,6 @@ struct intel_framebuffer {
struct i915_address_space *dpt_vm;
};

struct intel_fbdev {
struct drm_fb_helper helper;
struct intel_framebuffer *fb;
struct i915_vma *vma;
unsigned long vma_flags;
async_cookie_t cookie;
int preferred_bpp;

/* Whether or not fbdev hpd processing is temporarily suspended */
bool hpd_suspended : 1;
/* Set when a hotplug was received while HPD processing was
* suspended
*/
bool hpd_waiting : 1;

/* Protects hpd_suspended */
struct mutex hpd_lock;
};

enum intel_hotplug_state {
INTEL_HOTPLUG_UNCHANGED,
INTEL_HOTPLUG_CHANGED,
@@ -1168,6 +1147,7 @@ struct intel_crtc_state {

/* bitmask of actually visible planes (enum plane_id) */
u8 active_planes;
u8 scaled_planes;
u8 nv12_planes;
u8 c8_planes;

@@ -1202,11 +1182,8 @@ struct intel_crtc_state {
/* enable pipe big joiner? */
bool bigjoiner;

/* big joiner slave crtc? */
bool bigjoiner_slave;

/* linked crtc for bigjoiner, either slave or master */
struct intel_crtc *bigjoiner_linked_crtc;
/* big joiner pipe bitmask */
u8 bigjoiner_pipes;

/* Display Stream compression state */
struct {

@@ -886,9 +886,8 @@ intel_dp_mode_valid_downstream(struct intel_connector *connector,
return MODE_CLOCK_HIGH;

/* Assume 8bpc for the DP++/HDMI/DVI TMDS clock check */
tmds_clock = target_clock;
if (drm_mode_is_420_only(info, mode))
tmds_clock /= 2;
tmds_clock = intel_hdmi_tmds_clock(target_clock, 8,
drm_mode_is_420_only(info, mode));

if (intel_dp->dfp.min_tmds_clock &&
tmds_clock < intel_dp->dfp.min_tmds_clock)
@@ -1139,21 +1138,12 @@ static bool intel_dp_hdmi_ycbcr420(struct intel_dp *intel_dp,
intel_dp->dfp.ycbcr_444_to_420);
}

static int intel_dp_hdmi_tmds_clock(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state, int bpc)
{
int clock = crtc_state->hw.adjusted_mode.crtc_clock * bpc / 8;

if (intel_dp_hdmi_ycbcr420(intel_dp, crtc_state))
clock /= 2;

return clock;
}

static bool intel_dp_hdmi_tmds_clock_valid(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state, int bpc)
{
int tmds_clock = intel_dp_hdmi_tmds_clock(intel_dp, crtc_state, bpc);
int clock = crtc_state->hw.adjusted_mode.crtc_clock;
int tmds_clock = intel_hdmi_tmds_clock(clock, bpc,
intel_dp_hdmi_ycbcr420(intel_dp, crtc_state));

if (intel_dp->dfp.min_tmds_clock &&
tmds_clock < intel_dp->dfp.min_tmds_clock)
@@ -3628,6 +3618,32 @@ update_status:
"Could not write test response to sink\n");
}

static bool intel_dp_link_ok(struct intel_dp *intel_dp,
u8 link_status[DP_LINK_STATUS_SIZE])
{
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
bool uhbr = intel_dp->link_rate >= 1000000;
bool ok;

if (uhbr)
ok = drm_dp_128b132b_lane_channel_eq_done(link_status,
intel_dp->lane_count);
else
ok = drm_dp_channel_eq_ok(link_status, intel_dp->lane_count);

if (ok)
return true;

intel_dp_dump_link_status(intel_dp, DP_PHY_DPRX, link_status);
drm_dbg_kms(&i915->drm,
"[ENCODER:%d:%s] %s link not ok, retraining\n",
encoder->base.base.id, encoder->base.name,
uhbr ? "128b/132b" : "8b/10b");

return false;
}

static void
intel_dp_mst_hpd_irq(struct intel_dp *intel_dp, u8 *esi, u8 *ack)
{
@@ -3658,14 +3674,7 @@ static bool intel_dp_mst_link_status(struct intel_dp *intel_dp)
return false;
}

if (!drm_dp_channel_eq_ok(link_status, intel_dp->lane_count)) {
drm_dbg_kms(&i915->drm,
"[ENCODER:%d:%s] channel EQ not ok, retraining\n",
encoder->base.base.id, encoder->base.name);
return false;
}

return true;
return intel_dp_link_ok(intel_dp, link_status);
}

/**
@@ -3779,8 +3788,8 @@ intel_dp_needs_link_retrain(struct intel_dp *intel_dp)
intel_dp->lane_count))
return false;

/* Retrain if Channel EQ or CR not ok */
return !drm_dp_channel_eq_ok(link_status, intel_dp->lane_count);
/* Retrain if link not ok */
return !intel_dp_link_ok(intel_dp, link_status);
}

static bool intel_dp_has_connector(struct intel_dp *intel_dp,
@@ -3810,14 +3819,14 @@ static bool intel_dp_has_connector(struct intel_dp *intel_dp,

static int intel_dp_prep_link_retrain(struct intel_dp *intel_dp,
struct drm_modeset_acquire_ctx *ctx,
u32 *crtc_mask)
u8 *pipe_mask)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct drm_connector_list_iter conn_iter;
struct intel_connector *connector;
int ret = 0;

*crtc_mask = 0;
*pipe_mask = 0;

if (!intel_dp_needs_link_retrain(intel_dp))
return 0;
@@ -3851,12 +3860,12 @@ static int intel_dp_prep_link_retrain(struct intel_dp *intel_dp,
!try_wait_for_completion(&conn_state->commit->hw_done))
continue;

*crtc_mask |= drm_crtc_mask(&crtc->base);
*pipe_mask |= BIT(crtc->pipe);
}
drm_connector_list_iter_end(&conn_iter);

if (!intel_dp_needs_link_retrain(intel_dp))
*crtc_mask = 0;
*pipe_mask = 0;

return ret;
}
@@ -3875,7 +3884,7 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
struct intel_crtc *crtc;
u32 crtc_mask;
u8 pipe_mask;
int ret;

if (!intel_dp_is_connected(intel_dp))
@@ -3886,17 +3895,17 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
if (ret)
return ret;

ret = intel_dp_prep_link_retrain(intel_dp, ctx, &crtc_mask);
ret = intel_dp_prep_link_retrain(intel_dp, ctx, &pipe_mask);
if (ret)
return ret;

if (crtc_mask == 0)
if (pipe_mask == 0)
return 0;

drm_dbg_kms(&dev_priv->drm, "[ENCODER:%d:%s] retraining link\n",
encoder->base.base.id, encoder->base.name);

for_each_intel_crtc_mask(&dev_priv->drm, crtc, crtc_mask) {
for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
const struct intel_crtc_state *crtc_state =
to_intel_crtc_state(crtc->base.state);

@@ -3907,7 +3916,7 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
intel_crtc_pch_transcoder(crtc), false);
}

for_each_intel_crtc_mask(&dev_priv->drm, crtc, crtc_mask) {
for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
const struct intel_crtc_state *crtc_state =
to_intel_crtc_state(crtc->base.state);

@@ -3924,7 +3933,7 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
break;
}

for_each_intel_crtc_mask(&dev_priv->drm, crtc, crtc_mask) {
for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
const struct intel_crtc_state *crtc_state =
to_intel_crtc_state(crtc->base.state);

@@ -3942,14 +3951,14 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,

static int intel_dp_prep_phy_test(struct intel_dp *intel_dp,
struct drm_modeset_acquire_ctx *ctx,
u32 *crtc_mask)
u8 *pipe_mask)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct drm_connector_list_iter conn_iter;
struct intel_connector *connector;
int ret = 0;

*crtc_mask = 0;
*pipe_mask = 0;

drm_connector_list_iter_begin(&i915->drm, &conn_iter);
for_each_intel_connector_iter(connector, &conn_iter) {
@@ -3980,7 +3989,7 @@ static int intel_dp_prep_phy_test(struct intel_dp *intel_dp,
!try_wait_for_completion(&conn_state->commit->hw_done))
continue;

*crtc_mask |= drm_crtc_mask(&crtc->base);
*pipe_mask |= BIT(crtc->pipe);
}
drm_connector_list_iter_end(&conn_iter);

@@ -3993,7 +4002,7 @@ static int intel_dp_do_phy_test(struct intel_encoder *encoder,
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
struct intel_crtc *crtc;
u32 crtc_mask;
u8 pipe_mask;
int ret;

ret = drm_modeset_lock(&dev_priv->drm.mode_config.connection_mutex,
@@ -4001,17 +4010,17 @@ static int intel_dp_do_phy_test(struct intel_encoder *encoder,
if (ret)
return ret;

ret = intel_dp_prep_phy_test(intel_dp, ctx, &crtc_mask);
ret = intel_dp_prep_phy_test(intel_dp, ctx, &pipe_mask);
if (ret)
return ret;

if (crtc_mask == 0)
if (pipe_mask == 0)
return 0;

drm_dbg_kms(&dev_priv->drm, "[ENCODER:%d:%s] PHY test\n",
encoder->base.base.id, encoder->base.name);

for_each_intel_crtc_mask(&dev_priv->drm, crtc, crtc_mask) {
for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
const struct intel_crtc_state *crtc_state =
to_intel_crtc_state(crtc->base.state);

@@ -712,7 +712,7 @@ static bool intel_dp_adjust_request_changed(const struct intel_crtc_state *crtc_
return false;
}

static void
void
intel_dp_dump_link_status(struct intel_dp *intel_dp, enum drm_dp_phy dp_phy,
const u8 link_status[DP_LINK_STATUS_SIZE])
{
@@ -996,6 +996,23 @@ static bool intel_dp_disable_dpcd_training_pattern(struct intel_dp *intel_dp,
return drm_dp_dpcd_write(&intel_dp->aux, reg, &val, 1) == 1;
}

static int
intel_dp_128b132b_intra_hop(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
u8 sink_status;
int ret;

ret = drm_dp_dpcd_readb(&intel_dp->aux, DP_SINK_STATUS, &sink_status);
if (ret != 1) {
drm_dbg_kms(&i915->drm, "Failed to read sink status\n");
return ret < 0 ? ret : -EIO;
}

return sink_status & DP_INTRA_HOP_AUX_REPLY_INDICATION ? 1 : 0;
}
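
intel_dp_128b132b_intra_hop() is deliberately tri-state: a negative errno on AUX failure, 1 while the intra-hop indication is set, 0 once it clears; callers below poll it with wait_for(... == 0, 500). A rough userspace model of that bounded poll (the status source is simulated, and the 500 ms budget mirrors the callers):

#include <stdio.h>
#include <time.h>

/* Simulated DPCD sink status: pretend intra-hop clears on the 3rd read. */
static int read_intra_hop_pending(void)
{
        static int reads;
        return ++reads < 3; /* 1 = pending, 0 = clear */
}

/* Poll until the condition clears or ~500 ms pass, like wait_for(). */
static int wait_intra_hop_clear(void)
{
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (;;) {
                if (read_intra_hop_pending() == 0)
                        return 0;
                clock_gettime(CLOCK_MONOTONIC, &now);
                if ((now.tv_sec - start.tv_sec) * 1000 +
                    (now.tv_nsec - start.tv_nsec) / 1000000 > 500)
                        return -1; /* timed out */
        }
}

int main(void)
{
        printf("intra-hop %s\n", wait_intra_hop_clear() ? "stuck" : "clear");
        return 0;
}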

/**
* intel_dp_stop_link_train - stop link training
* @intel_dp: DP struct
@@ -1015,11 +1032,21 @@ static bool intel_dp_disable_dpcd_training_pattern(struct intel_dp *intel_dp,
void intel_dp_stop_link_train(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;

intel_dp->link_trained = true;

intel_dp_disable_dpcd_training_pattern(intel_dp, DP_PHY_DPRX);
intel_dp_program_link_training_pattern(intel_dp, crtc_state, DP_PHY_DPRX,
DP_TRAINING_PATTERN_DISABLE);

if (intel_dp_is_uhbr(crtc_state) &&
wait_for(intel_dp_128b132b_intra_hop(intel_dp, crtc_state) == 0, 500)) {
drm_dbg_kms(&i915->drm,
"[ENCODER:%d:%s] 128b/132b intra-hop not clearing\n",
encoder->base.base.id, encoder->base.name);
}
}

static bool
@@ -1083,8 +1110,6 @@ intel_dp_link_train_all_phys(struct intel_dp *intel_dp,
bool ret = true;
int i;

intel_dp_prepare_link_train(intel_dp, crtc_state);

for (i = lttpr_count - 1; i >= 0; i--) {
enum drm_dp_phy dp_phy = DP_PHY_LTTPR(i);

@@ -1104,6 +1129,272 @@ intel_dp_link_train_all_phys(struct intel_dp *intel_dp,
return ret;
}

/*
* 128b/132b DP LANEx_EQ_DONE Sequence (DP 2.0 E11 3.5.2.16.1)
*/
static bool
intel_dp_128b132b_lane_eq(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
u8 link_status[DP_LINK_STATUS_SIZE];
int delay_us;
int try, max_tries = 20;
unsigned long deadline;
bool timeout = false;

/*
* Reset signal levels. Start transmitting 128b/132b TPS1.
*
* Put DPRX and LTTPRs (if any) into intra-hop AUX mode by writing TPS1
* in DP_TRAINING_PATTERN_SET.
*/
if (!intel_dp_reset_link_train(intel_dp, crtc_state, DP_PHY_DPRX,
DP_TRAINING_PATTERN_1)) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] Failed to start 128b/132b TPS1\n",
encoder->base.base.id, encoder->base.name);
return false;
}

delay_us = drm_dp_128b132b_read_aux_rd_interval(&intel_dp->aux);

/* Read the initial TX FFE settings. */
if (drm_dp_dpcd_read_link_status(&intel_dp->aux, link_status) < 0) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] Failed to read TX FFE presets\n",
encoder->base.base.id, encoder->base.name);
return false;
}

/* Update signal levels and training set as requested. */
intel_dp_get_adjust_train(intel_dp, crtc_state, DP_PHY_DPRX, link_status);
if (!intel_dp_update_link_train(intel_dp, crtc_state, DP_PHY_DPRX)) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] Failed to set initial TX FFE settings\n",
encoder->base.base.id, encoder->base.name);
return false;
}

/* Start transmitting 128b/132b TPS2. */
if (!intel_dp_set_link_train(intel_dp, crtc_state, DP_PHY_DPRX,
DP_TRAINING_PATTERN_2)) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] Failed to start 128b/132b TPS2\n",
encoder->base.base.id, encoder->base.name);
return false;
}

/* Time budget for the LANEx_EQ_DONE Sequence */
deadline = jiffies + msecs_to_jiffies_timeout(400);

for (try = 0; try < max_tries; try++) {
usleep_range(delay_us, 2 * delay_us);

/*
* The delay may get updated. The transmitter shall read the
* delay before link status during link training.
*/
delay_us = drm_dp_128b132b_read_aux_rd_interval(&intel_dp->aux);

if (drm_dp_dpcd_read_link_status(&intel_dp->aux, link_status) < 0) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] Failed to read link status\n",
encoder->base.base.id, encoder->base.name);
return false;
}

if (drm_dp_128b132b_link_training_failed(link_status)) {
intel_dp_dump_link_status(intel_dp, DP_PHY_DPRX, link_status);
drm_err(&i915->drm,
"[ENCODER:%d:%s] Downstream link training failure\n",
encoder->base.base.id, encoder->base.name);
return false;
}

if (drm_dp_128b132b_lane_channel_eq_done(link_status, crtc_state->lane_count)) {
drm_dbg_kms(&i915->drm,
"[ENCODER:%d:%s] Lane channel eq done\n",
encoder->base.base.id, encoder->base.name);
break;
}

if (timeout) {
intel_dp_dump_link_status(intel_dp, DP_PHY_DPRX, link_status);
drm_err(&i915->drm,
"[ENCODER:%d:%s] Lane channel eq timeout\n",
encoder->base.base.id, encoder->base.name);
return false;
}

if (time_after(jiffies, deadline))
timeout = true; /* try one last time after deadline */

/* Update signal levels and training set as requested. */
intel_dp_get_adjust_train(intel_dp, crtc_state, DP_PHY_DPRX, link_status);
if (!intel_dp_update_link_train(intel_dp, crtc_state, DP_PHY_DPRX)) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] Failed to update TX FFE settings\n",
encoder->base.base.id, encoder->base.name);
return false;
}
}

if (try == max_tries) {
intel_dp_dump_link_status(intel_dp, DP_PHY_DPRX, link_status);
drm_err(&i915->drm,
"[ENCODER:%d:%s] Max loop count reached\n",
encoder->base.base.id, encoder->base.name);
return false;
}

for (;;) {
if (time_after(jiffies, deadline))
timeout = true; /* try one last time after deadline */

if (drm_dp_dpcd_read_link_status(&intel_dp->aux, link_status) < 0) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] Failed to read link status\n",
encoder->base.base.id, encoder->base.name);
return false;
}

if (drm_dp_128b132b_link_training_failed(link_status)) {
intel_dp_dump_link_status(intel_dp, DP_PHY_DPRX, link_status);
drm_err(&i915->drm,
"[ENCODER:%d:%s] Downstream link training failure\n",
encoder->base.base.id, encoder->base.name);
return false;
}

if (drm_dp_128b132b_eq_interlane_align_done(link_status)) {
drm_dbg_kms(&i915->drm,
"[ENCODER:%d:%s] Interlane align done\n",
encoder->base.base.id, encoder->base.name);
break;
}

if (timeout) {
intel_dp_dump_link_status(intel_dp, DP_PHY_DPRX, link_status);
drm_err(&i915->drm,
"[ENCODER:%d:%s] Interlane align timeout\n",
encoder->base.base.id, encoder->base.name);
return false;
}

usleep_range(2000, 3000);
}

return true;
}
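
Worth noting in intel_dp_128b132b_lane_eq() above: the deadline does not abort the loop directly. It only arms a timeout flag, and the status is sampled once more after the deadline passes, so equalization that completes right at the budget boundary still succeeds. A condensed sketch of that "one last try" pattern (the condition is simulated):

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool condition_met(int iteration)
{
        return iteration >= 5; /* simulated: succeeds on the 5th poll */
}

int main(void)
{
        time_t deadline = time(NULL) + 1; /* 1 s budget */
        bool timeout = false;

        for (int i = 0; ; i++) {
                if (condition_met(i)) {
                        puts("done");
                        break;
                }
                if (timeout) { /* already polled once past the deadline */
                        puts("timed out");
                        return 1;
                }
                if (time(NULL) > deadline)
                        timeout = true; /* poll one last time */
        }
        return 0;
}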

/*
* 128b/132b DP LANEx_CDS_DONE Sequence (DP 2.0 E11 3.5.2.16.2)
*/
static bool
intel_dp_128b132b_lane_cds(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state,
int lttpr_count)
{
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
u8 link_status[DP_LINK_STATUS_SIZE];
unsigned long deadline;

if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_TRAINING_PATTERN_SET,
DP_TRAINING_PATTERN_2_CDS) != 1) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] Failed to start 128b/132b TPS2 CDS\n",
encoder->base.base.id, encoder->base.name);
return false;
}

/* Time budget for the LANEx_CDS_DONE Sequence */
deadline = jiffies + msecs_to_jiffies_timeout((lttpr_count + 1) * 20);

for (;;) {
bool timeout = false;

if (time_after(jiffies, deadline))
timeout = true; /* try one last time after deadline */

usleep_range(2000, 3000);

if (drm_dp_dpcd_read_link_status(&intel_dp->aux, link_status) < 0) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] Failed to read link status\n",
encoder->base.base.id, encoder->base.name);
return false;
}

if (drm_dp_128b132b_eq_interlane_align_done(link_status) &&
drm_dp_128b132b_cds_interlane_align_done(link_status) &&
drm_dp_128b132b_lane_symbol_locked(link_status, crtc_state->lane_count)) {
drm_dbg_kms(&i915->drm,
"[ENCODER:%d:%s] CDS interlane align done\n",
encoder->base.base.id, encoder->base.name);
break;
}

if (drm_dp_128b132b_link_training_failed(link_status)) {
intel_dp_dump_link_status(intel_dp, DP_PHY_DPRX, link_status);
drm_err(&i915->drm,
"[ENCODER:%d:%s] Downstream link training failure\n",
encoder->base.base.id, encoder->base.name);
return false;
}

if (timeout) {
intel_dp_dump_link_status(intel_dp, DP_PHY_DPRX, link_status);
drm_err(&i915->drm,
"[ENCODER:%d:%s] CDS timeout\n",
encoder->base.base.id, encoder->base.name);
return false;
}
}

/* FIXME: Should DP_TRAINING_PATTERN_DISABLE be written first? */
if (intel_dp->set_idle_link_train)
intel_dp->set_idle_link_train(intel_dp, crtc_state);

return true;
}

/*
* 128b/132b link training sequence. (DP 2.0 E11 SCR on link training.)
*/
static bool
intel_dp_128b132b_link_train(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state,
int lttpr_count)
{
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
struct intel_connector *connector = intel_dp->attached_connector;
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
bool passed = false;

if (wait_for(intel_dp_128b132b_intra_hop(intel_dp, crtc_state) == 0, 500)) {
drm_err(&i915->drm,
"[ENCODER:%d:%s] 128b/132b intra-hop not clear\n",
encoder->base.base.id, encoder->base.name);
return false;
}

if (intel_dp_128b132b_lane_eq(intel_dp, crtc_state) &&
intel_dp_128b132b_lane_cds(intel_dp, crtc_state, lttpr_count))
passed = true;

drm_dbg_kms(&i915->drm,
"[CONNECTOR:%d:%s][ENCODER:%d:%s] 128b/132b Link Training %s at link rate = %d, lane count = %d\n",
connector->base.base.id, connector->base.name,
encoder->base.base.id, encoder->base.name,
passed ? "passed" : "failed",
crtc_state->port_clock, crtc_state->lane_count);

return passed;
}

/**
* intel_dp_start_link_train - start link training
* @intel_dp: DP struct
@@ -1117,6 +1408,7 @@ intel_dp_link_train_all_phys(struct intel_dp *intel_dp,
void intel_dp_start_link_train(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
bool passed;
/*
* TODO: Reiniting LTTPRs here won't be needed once proper connector
* HW state readout is added.
@@ -1127,6 +1419,13 @@ void intel_dp_start_link_train(struct intel_dp *intel_dp,
/* Still continue with enabling the port and link training. */
lttpr_count = 0;

if (!intel_dp_link_train_all_phys(intel_dp, crtc_state, lttpr_count))
intel_dp_prepare_link_train(intel_dp, crtc_state);

if (intel_dp_is_uhbr(crtc_state))
passed = intel_dp_128b132b_link_train(intel_dp, crtc_state, lttpr_count);
else
passed = intel_dp_link_train_all_phys(intel_dp, crtc_state, lttpr_count);

if (!passed)
intel_dp_schedule_fallback_link_training(intel_dp, crtc_state);
}

@@ -29,6 +29,10 @@ void intel_dp_start_link_train(struct intel_dp *intel_dp,
void intel_dp_stop_link_train(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state);

void
intel_dp_dump_link_status(struct intel_dp *intel_dp, enum drm_dp_phy dp_phy,
const u8 link_status[DP_LINK_STATUS_SIZE]);

/* Get the TPSx symbol type of the value programmed to DP_TRAINING_PATTERN_SET */
static inline u8 intel_dp_training_pattern_symbol(u8 pattern)
{

@@ -99,6 +99,29 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
return 0;
}

static int intel_dp_mst_update_slots(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
struct intel_dp *intel_dp = &intel_mst->primary->dp;
struct drm_dp_mst_topology_mgr *mgr = &intel_dp->mst_mgr;
struct drm_dp_mst_topology_state *topology_state;
u8 link_coding_cap = intel_dp_is_uhbr(crtc_state) ?
DP_CAP_ANSI_128B132B : DP_CAP_ANSI_8B10B;

topology_state = drm_atomic_get_mst_topology_state(conn_state->state, mgr);
if (IS_ERR(topology_state)) {
drm_dbg_kms(&i915->drm, "slot update failed\n");
return PTR_ERR(topology_state);
}

drm_dp_mst_update_slots(topology_state, link_coding_cap);

return 0;
}
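
intel_dp_mst_update_slots() picks the MST channel coding from the link rate, and the related hunks below move payloads to start slot 0 for 128b/132b (with 8b/10b, slot 0 carries the MST header, so payloads start at slot 1). A compact sketch of that selection; the DP_CAP_* values mirror the DPCD channel-coding capability bits, but verify them against the spec rather than this sketch:

#include <stdbool.h>
#include <stdio.h>

#define DP_CAP_ANSI_8B10B    0x1 /* DPCD MAIN_LINK_CHANNEL_CODING values */
#define DP_CAP_ANSI_128B132B 0x2

static bool is_uhbr(int link_rate)
{
        return link_rate >= 1000000; /* same threshold the driver uses */
}

int main(void)
{
        int rate = 1000000; /* UHBR10 */
        int coding = is_uhbr(rate) ? DP_CAP_ANSI_128B132B : DP_CAP_ANSI_8B10B;
        int start_slot = is_uhbr(rate) ? 0 : 1;

        printf("coding=0x%x start_slot=%d\n", coding, start_slot);
        return 0;
}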
|
||||
|
||||
static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
|
||||
struct intel_crtc_state *pipe_config,
|
||||
struct drm_connector_state *conn_state)
|
||||
@ -155,6 +178,10 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
ret = intel_dp_mst_update_slots(encoder, pipe_config, conn_state);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
pipe_config->limited_color_range =
|
||||
intel_dp_limited_color_range(pipe_config, conn_state);
|
||||
|
||||
@ -357,6 +384,7 @@ static void intel_mst_disable_dp(struct intel_atomic_state *state,
|
||||
struct intel_connector *connector =
|
||||
to_intel_connector(old_conn_state->connector);
|
||||
struct drm_i915_private *i915 = to_i915(connector->base.dev);
|
||||
int start_slot = intel_dp_is_uhbr(old_crtc_state) ? 0 : 1;
|
||||
int ret;
|
||||
|
||||
drm_dbg_kms(&i915->drm, "active links %d\n",
|
||||
@ -366,7 +394,7 @@ static void intel_mst_disable_dp(struct intel_atomic_state *state,
|
||||
|
||||
drm_dp_mst_reset_vcpi_slots(&intel_dp->mst_mgr, connector->port);
|
||||
|
||||
ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, 1);
|
||||
ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot);
|
||||
if (ret) {
|
||||
drm_dbg_kms(&i915->drm, "failed to update payload %d\n", ret);
|
||||
}
|
||||
@ -475,6 +503,7 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
|
||||
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
||||
struct intel_connector *connector =
|
||||
to_intel_connector(conn_state->connector);
|
||||
int start_slot = intel_dp_is_uhbr(pipe_config) ? 0 : 1;
|
||||
int ret;
|
||||
bool first_mst_stream;
|
||||
|
||||
@ -509,7 +538,7 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
|
||||
|
||||
intel_dp->active_mst_links++;
|
||||
|
||||
ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, 1);
|
||||
ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot);
|
||||
|
||||
/*
|
||||
* Before Gen 12 this is not done as part of
|
||||
|
@ -16,6 +16,10 @@
|
||||
#include "intel_snps_phy.h"
#include "vlv_sideband.h"

struct intel_dpll_funcs {
int (*crtc_compute_clock)(struct intel_crtc_state *crtc_state);
};

struct intel_limit {
struct {
int min, max;

@@ -1400,6 +1404,14 @@ static const struct intel_dpll_funcs i8xx_dpll_funcs = {
.crtc_compute_clock = i8xx_crtc_compute_clock,
};

int intel_dpll_crtc_compute_clock(struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *i915 = to_i915(crtc->base.dev);

return i915->dpll_funcs->crtc_compute_clock(crtc_state);
}

void
intel_dpll_init_clock_hook(struct drm_i915_private *dev_priv)
{

@@ -15,6 +15,7 @@ struct intel_crtc_state;
enum pipe;

void intel_dpll_init_clock_hook(struct drm_i915_private *dev_priv);
int intel_dpll_crtc_compute_clock(struct intel_crtc_state *crtc_state);
int vlv_calc_dpll_params(int refclk, struct dpll *clock);
int pnv_calc_dpll_params(int refclk, struct dpll *clock);
int i9xx_calc_dpll_params(int refclk, struct dpll *clock);

@@ -2748,6 +2748,9 @@ static void icl_calc_dpll_state(struct drm_i915_private *i915,
pll_state->cfgcr1 |= TGL_DPLL_CFGCR1_CFSELOVRD_NORMAL_XTAL;
else
pll_state->cfgcr1 |= DPLL_CFGCR1_CENTRAL_FREQ_8400;

if (i915->vbt.override_afc_startup)
pll_state->div0 = TGL_DPLL0_DIV0_AFC_STARTUP(i915->vbt.override_afc_startup_val);
}

static bool icl_mg_pll_find_divisors(int clock_khz, bool is_dp, bool use_ssc,

@@ -2949,6 +2952,11 @@ static bool icl_calc_mg_pll_state(struct intel_crtc_state *crtc_state,
DKL_PLL_DIV0_PROP_COEFF(prop_coeff) |
DKL_PLL_DIV0_FBPREDIV(m1div) |
DKL_PLL_DIV0_FBDIV_INT(m2div_int);
if (dev_priv->vbt.override_afc_startup) {
u8 val = dev_priv->vbt.override_afc_startup_val;

pll_state->mg_pll_div0 |= DKL_PLL_DIV0_AFC_STARTUP(val);
}

pll_state->mg_pll_div1 = DKL_PLL_DIV1_IREF_TRIM(iref_trim) |
DKL_PLL_DIV1_TDC_TARGET_CNT(tdc_targetcnt);

@@ -3448,10 +3456,10 @@ static bool dkl_pll_get_hw_state(struct drm_i915_private *dev_priv,
MG_CLKTOP2_CORECLKCTL1_A_DIVRATIO_MASK;

hw_state->mg_pll_div0 = intel_de_read(dev_priv, DKL_PLL_DIV0(tc_port));
hw_state->mg_pll_div0 &= (DKL_PLL_DIV0_INTEG_COEFF_MASK |
DKL_PLL_DIV0_PROP_COEFF_MASK |
DKL_PLL_DIV0_FBPREDIV_MASK |
DKL_PLL_DIV0_FBDIV_INT_MASK);
val = DKL_PLL_DIV0_MASK;
if (dev_priv->vbt.override_afc_startup)
val |= DKL_PLL_DIV0_AFC_STARTUP_MASK;
hw_state->mg_pll_div0 &= val;

hw_state->mg_pll_div1 = intel_de_read(dev_priv, DKL_PLL_DIV1(tc_port));
hw_state->mg_pll_div1 &= (DKL_PLL_DIV1_IREF_TRIM_MASK |

@@ -3513,6 +3521,10 @@ static bool icl_pll_get_hw_state(struct drm_i915_private *dev_priv,
TGL_DPLL_CFGCR0(id));
hw_state->cfgcr1 = intel_de_read(dev_priv,
TGL_DPLL_CFGCR1(id));
if (dev_priv->vbt.override_afc_startup) {
hw_state->div0 = intel_de_read(dev_priv, TGL_DPLL0_DIV0(id));
hw_state->div0 &= TGL_DPLL0_DIV0_AFC_STARTUP_MASK;
}
} else {
if (IS_JSL_EHL(dev_priv) && id == DPLL_ID_EHL_DPLL4) {
hw_state->cfgcr0 = intel_de_read(dev_priv,

@@ -3554,7 +3566,7 @@ static void icl_dpll_write(struct drm_i915_private *dev_priv,
{
struct intel_dpll_hw_state *hw_state = &pll->state.hw_state;
const enum intel_dpll_id id = pll->info->id;
i915_reg_t cfgcr0_reg, cfgcr1_reg;
i915_reg_t cfgcr0_reg, cfgcr1_reg, div0_reg = INVALID_MMIO_REG;

if (IS_ALDERLAKE_S(dev_priv)) {
cfgcr0_reg = ADLS_DPLL_CFGCR0(id);

@@ -3568,6 +3580,7 @@ static void icl_dpll_write(struct drm_i915_private *dev_priv,
} else if (DISPLAY_VER(dev_priv) >= 12) {
cfgcr0_reg = TGL_DPLL_CFGCR0(id);
cfgcr1_reg = TGL_DPLL_CFGCR1(id);
div0_reg = TGL_DPLL0_DIV0(id);
} else {
if (IS_JSL_EHL(dev_priv) && id == DPLL_ID_EHL_DPLL4) {
cfgcr0_reg = ICL_DPLL_CFGCR0(4);

@@ -3580,6 +3593,12 @@ static void icl_dpll_write(struct drm_i915_private *dev_priv,

intel_de_write(dev_priv, cfgcr0_reg, hw_state->cfgcr0);
intel_de_write(dev_priv, cfgcr1_reg, hw_state->cfgcr1);
drm_WARN_ON_ONCE(&dev_priv->drm, dev_priv->vbt.override_afc_startup &&
!i915_mmio_reg_valid(div0_reg));
if (dev_priv->vbt.override_afc_startup &&
i915_mmio_reg_valid(div0_reg))
intel_de_rmw(dev_priv, div0_reg, TGL_DPLL0_DIV0_AFC_STARTUP_MASK,
hw_state->div0);
intel_de_posting_read(dev_priv, cfgcr1_reg);
}
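Both the DIV0 update above and several hunks below lean on intel_de_rmw(); for readers new to the helper, here is a minimal sketch of its read-modify-write semantics (an approximation only -- the real helper routes through the display uncore, so treat anything beyond what the diff shows as an assumption):

static u32 de_rmw_sketch(struct drm_i915_private *i915, i915_reg_t reg,
                         u32 clear, u32 set)
{
        /* Read the current value, drop the bits in 'clear', OR in 'set'. */
        u32 old = intel_de_read(i915, reg);

        intel_de_write(i915, reg, (old & ~clear) | set);
        return old;
}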
@@ -3667,13 +3686,11 @@ static void dkl_pll_write(struct drm_i915_private *dev_priv,
val |= hw_state->mg_clktop2_hsclkctl;
intel_de_write(dev_priv, DKL_CLKTOP2_HSCLKCTL(tc_port), val);

val = intel_de_read(dev_priv, DKL_PLL_DIV0(tc_port));
val &= ~(DKL_PLL_DIV0_INTEG_COEFF_MASK |
DKL_PLL_DIV0_PROP_COEFF_MASK |
DKL_PLL_DIV0_FBPREDIV_MASK |
DKL_PLL_DIV0_FBDIV_INT_MASK);
val |= hw_state->mg_pll_div0;
intel_de_write(dev_priv, DKL_PLL_DIV0(tc_port), val);
val = DKL_PLL_DIV0_MASK;
if (dev_priv->vbt.override_afc_startup)
val |= DKL_PLL_DIV0_AFC_STARTUP_MASK;
intel_de_rmw(dev_priv, DKL_PLL_DIV0(tc_port), val,
hw_state->mg_pll_div0);

val = intel_de_read(dev_priv, DKL_PLL_DIV1(tc_port));
val &= ~(DKL_PLL_DIV1_IREF_TRIM_MASK |

@@ -3912,13 +3929,14 @@ static void icl_dump_hw_state(struct drm_i915_private *dev_priv,
const struct intel_dpll_hw_state *hw_state)
{
drm_dbg_kms(&dev_priv->drm,
"dpll_hw_state: cfgcr0: 0x%x, cfgcr1: 0x%x, "
"dpll_hw_state: cfgcr0: 0x%x, cfgcr1: 0x%x, div0: 0x%x, "
"mg_refclkin_ctl: 0x%x, hg_clktop2_coreclkctl1: 0x%x, "
"mg_clktop2_hsclkctl: 0x%x, mg_pll_div0: 0x%x, "
"mg_pll_div2: 0x%x, mg_pll_lf: 0x%x, "
"mg_pll_frac_lock: 0x%x, mg_pll_ssc: 0x%x, "
"mg_pll_bias: 0x%x, mg_pll_tdc_coldst_bias: 0x%x\n",
hw_state->cfgcr0, hw_state->cfgcr1,
hw_state->div0,
hw_state->mg_refclkin_ctl,
hw_state->mg_clktop2_coreclkctl1,
hw_state->mg_clktop2_hsclkctl,

@@ -208,6 +208,9 @@ struct intel_dpll_hw_state {
/* icl */
u32 cfgcr0;

/* tgl */
u32 div0;

/* bxt */
u32 ebb0, ebb4, pll0, pll1, pll2, pll3, pll6, pll8, pll9, pll10, pcsdw12;

@@ -3,11 +3,13 @@
* Copyright © 2021 Intel Corporation
*/

#include "gem/i915_gem_domain.h"
#include "gt/gen8_ppgtt.h"

#include "i915_drv.h"
#include "intel_display_types.h"
#include "intel_dpt.h"
#include "intel_fb.h"
#include "gt/gen8_ppgtt.h"

struct i915_dpt {
struct i915_address_space vm;

@@ -48,7 +50,7 @@ static void dpt_insert_page(struct i915_address_space *vm,
}

static void dpt_insert_entries(struct i915_address_space *vm,
struct i915_vma *vma,
struct i915_vma_resource *vma_res,
enum i915_cache_level level,
u32 flags)
{

@@ -64,8 +66,8 @@ static void dpt_insert_entries(struct i915_address_space *vm,
* not to allow the user to override access to a read only page.
*/

i = vma->node.start / I915_GTT_PAGE_SIZE;
for_each_sgt_daddr(addr, sgt_iter, vma->pages)
i = vma_res->start / I915_GTT_PAGE_SIZE;
for_each_sgt_daddr(addr, sgt_iter, vma_res->bi.pages)
gen8_set_pte(&base[i++], pte_encode | addr);
}

@@ -76,35 +78,38 @@ static void dpt_clear_range(struct i915_address_space *vm,

static void dpt_bind_vma(struct i915_address_space *vm,
struct i915_vm_pt_stash *stash,
struct i915_vma *vma,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
u32 flags)
{
struct drm_i915_gem_object *obj = vma->obj;
u32 pte_flags;

if (vma_res->bound_flags)
return;

/* Applicable to VLV (gen8+ do not support RO in the GGTT) */
pte_flags = 0;
if (vma->vm->has_read_only && i915_gem_object_is_readonly(obj))
if (vm->has_read_only && vma_res->bi.readonly)
pte_flags |= PTE_READ_ONLY;
if (i915_gem_object_is_lmem(obj))
if (vma_res->bi.lmem)
pte_flags |= PTE_LM;

vma->vm->insert_entries(vma->vm, vma, cache_level, pte_flags);
vm->insert_entries(vm, vma_res, cache_level, pte_flags);

vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
vma_res->page_sizes_gtt = I915_GTT_PAGE_SIZE;

/*
* Without aliasing PPGTT there's no difference between
* GLOBAL/LOCAL_BIND, it's all the same ptes. Hence unconditionally
* upgrade to both bound if we bind either to avoid double-binding.
*/
atomic_or(I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND, &vma->flags);
vma_res->bound_flags = I915_VMA_GLOBAL_BIND | I915_VMA_LOCAL_BIND;
}

static void dpt_unbind_vma(struct i915_address_space *vm, struct i915_vma *vma)
static void dpt_unbind_vma(struct i915_address_space *vm,
struct i915_vma_resource *vma_res)
{
vm->clear_range(vm, vma->node.start, vma->size);
vm->clear_range(vm, vma_res->start, vma_res->vma_size);
}

static void dpt_cleanup(struct i915_address_space *vm)

@@ -250,7 +255,11 @@ intel_dpt_create(struct intel_framebuffer *fb)
if (IS_ERR(dpt_obj))
return ERR_CAST(dpt_obj);

ret = i915_gem_object_set_cache_level(dpt_obj, I915_CACHE_NONE);
ret = i915_gem_object_lock_interruptible(dpt_obj, NULL);
if (!ret) {
ret = i915_gem_object_set_cache_level(dpt_obj, I915_CACHE_NONE);
i915_gem_object_unlock(dpt_obj);
}
if (ret) {
i915_gem_object_put(dpt_obj);
return ERR_PTR(ret);

@@ -4,6 +4,8 @@
*
*/

#include "gem/i915_gem_internal.h"

#include "i915_drv.h"
#include "intel_de.h"
#include "intel_display_types.h"

@@ -79,8 +79,8 @@ struct intel_dsi {
*/
enum mipi_dsi_pixel_format pixel_format;

/* video mode format for MIPI_VIDEO_MODE_FORMAT register */
u32 video_mode_format;
/* NON_BURST_SYNC_PULSE, NON_BURST_SYNC_EVENTS, or BURST_MODE */
int video_mode;

/* eot for MIPI_EOT_DISABLE register */
u8 eotp_pkt;

@@ -44,6 +44,7 @@
#include "intel_dsi.h"
#include "intel_dsi_vbt.h"
#include "vlv_dsi.h"
#include "vlv_dsi_regs.h"
#include "vlv_sideband.h"

#define MIPI_TRANSFER_MODE_SHIFT 0

@@ -675,11 +676,11 @@ void intel_dsi_log_params(struct intel_dsi *intel_dsi)
drm_dbg_kms(&i915->drm, "Lane count %d\n", intel_dsi->lane_count);
drm_dbg_kms(&i915->drm, "DPHY param reg 0x%x\n", intel_dsi->dphy_reg);
drm_dbg_kms(&i915->drm, "Video mode format %s\n",
intel_dsi->video_mode_format == VIDEO_MODE_NON_BURST_WITH_SYNC_PULSE ?
intel_dsi->video_mode == NON_BURST_SYNC_PULSE ?
"non-burst with sync pulse" :
intel_dsi->video_mode_format == VIDEO_MODE_NON_BURST_WITH_SYNC_EVENTS ?
intel_dsi->video_mode == NON_BURST_SYNC_EVENTS ?
"non-burst with sync events" :
intel_dsi->video_mode_format == VIDEO_MODE_BURST ?
intel_dsi->video_mode == BURST_MODE ?
"burst" : "<unknown>");
drm_dbg_kms(&i915->drm, "Burst mode ratio %d\n",
intel_dsi->burst_mode_ratio);
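The chained ternaries above map the video mode enum to a log string; a table-driven alternative is sketched here for comparison (the helper name is illustrative, not part of the patch):

static const char *video_mode_name_sketch(int video_mode)
{
        switch (video_mode) {
        case NON_BURST_SYNC_PULSE:
                return "non-burst with sync pulse";
        case NON_BURST_SYNC_EVENTS:
                return "non-burst with sync events";
        case BURST_MODE:
                return "burst";
        default:
                return "<unknown>";
        }
}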
@@ -739,7 +740,7 @@ bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
intel_dsi->dual_link = mipi_config->dual_link;
intel_dsi->pixel_overlap = mipi_config->pixel_overlap;
intel_dsi->operation_mode = mipi_config->is_cmd_mode;
intel_dsi->video_mode_format = mipi_config->video_transfer_mode;
intel_dsi->video_mode = mipi_config->video_transfer_mode;
intel_dsi->escape_clk_div = mipi_config->byte_clk_sel;
intel_dsi->lp_rx_timeout = mipi_config->lp_rx_timeout;
intel_dsi->hs_tx_timeout = mipi_config->hs_tx_timeout;

@@ -770,7 +771,7 @@ bool intel_dsi_vbt_init(struct intel_dsi *intel_dsi, u16 panel_id)
* Target ddr frequency from VBT / non burst ddr freq
* multiply by 100 to preserve remainder
*/
if (intel_dsi->video_mode_format == VIDEO_MODE_BURST) {
if (intel_dsi->video_mode == BURST_MODE) {
if (mipi_config->target_burst_mode_freq) {
u32 bitrate = intel_dsi_bitrate(intel_dsi);

@@ -7,6 +7,7 @@
* DOC: display pinning helpers
*/

#include "gem/i915_gem_domain.h"
#include "gem/i915_gem_object.h"

#include "i915_drv.h"

@@ -36,7 +37,11 @@ intel_pin_fb_obj_dpt(struct drm_framebuffer *fb,

atomic_inc(&dev_priv->gpu_error.pending_fb_pin);

ret = i915_gem_object_set_cache_level(obj, I915_CACHE_NONE);
ret = i915_gem_object_lock_interruptible(obj, NULL);
if (!ret) {
ret = i915_gem_object_set_cache_level(obj, I915_CACHE_NONE);
i915_gem_object_unlock(obj);
}
if (ret) {
vma = ERR_PTR(ret);
goto err;

@@ -47,7 +52,7 @@ intel_pin_fb_obj_dpt(struct drm_framebuffer *fb,
goto err;

if (i915_vma_misplaced(vma, 0, alignment, 0)) {
ret = i915_vma_unbind(vma);
ret = i915_vma_unbind_unlocked(vma);
if (ret) {
vma = ERR_PTR(ret);
goto err;

@@ -605,7 +605,7 @@ static void ivb_fbc_activate(struct intel_fbc *fbc)
else if (DISPLAY_VER(i915) == 9)
skl_fbc_program_cfb_stride(fbc);

if (i915->ggtt.num_fences)
if (to_gt(i915)->ggtt->num_fences)
snb_fbc_program_fence(fbc);

intel_de_write(i915, ILK_DPFC_CONTROL(fbc->id),

@@ -1125,7 +1125,8 @@ static int intel_fbc_check_plane(struct intel_atomic_state *state,

/* Wa_22010751166: icl, ehl, tgl, dg1, rkl */
if (DISPLAY_VER(i915) >= 11 &&
(plane_state->view.color_plane[0].y + drm_rect_height(&plane_state->uapi.src)) & 3) {
(plane_state->view.color_plane[0].y +
(drm_rect_height(&plane_state->uapi.src) >> 16)) & 3) {
plane_state->no_fbc_reason = "plane end Y offset misaligned";
return false;
}

@@ -50,6 +50,23 @@
#include "intel_fbdev.h"
#include "intel_frontbuffer.h"

struct intel_fbdev {
struct drm_fb_helper helper;
struct intel_framebuffer *fb;
struct i915_vma *vma;
unsigned long vma_flags;
async_cookie_t cookie;
int preferred_bpp;

/* Whether or not fbdev hpd processing is temporarily suspended */
bool hpd_suspended: 1;
/* Set when a hotplug was received while HPD processing was suspended */
bool hpd_waiting: 1;

/* Protects hpd_suspended */
struct mutex hpd_lock;
};

static struct intel_frontbuffer *to_frontbuffer(struct intel_fbdev *ifbdev)
{
return ifbdev->fb->frontbuffer;

@@ -180,7 +197,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
struct drm_device *dev = helper->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
struct i915_ggtt *ggtt = &dev_priv->ggtt;
struct i915_ggtt *ggtt = to_gt(dev_priv)->ggtt;
const struct i915_ggtt_view view = {
.type = I915_GGTT_VIEW_NORMAL,
};

@@ -680,3 +697,11 @@ void intel_fbdev_restore_mode(struct drm_device *dev)
if (drm_fb_helper_restore_fbdev_mode_unlocked(&ifbdev->helper) == 0)
intel_fbdev_invalidate(ifbdev);
}

struct intel_framebuffer *intel_fbdev_framebuffer(struct intel_fbdev *fbdev)
{
if (!fbdev || !fbdev->helper.fb)
return NULL;

return to_intel_framebuffer(fbdev->helper.fb);
}
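The new intel_fbdev_framebuffer() accessor is NULL-safe for both a missing fbdev and an fbdev without a framebuffer; a minimal usage sketch (the caller context and fbdev pointer are assumptions):

/* Illustrative caller: skip quietly when fbdev emulation never came up. */
struct intel_framebuffer *fb = intel_fbdev_framebuffer(dev_priv->fbdev);

if (!fb)
        return;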
@@ -10,6 +10,8 @@

struct drm_device;
struct drm_i915_private;
struct intel_fbdev;
struct intel_framebuffer;

#ifdef CONFIG_DRM_FBDEV_EMULATION
int intel_fbdev_init(struct drm_device *dev);

@@ -19,6 +21,7 @@ void intel_fbdev_fini(struct drm_i915_private *dev_priv);
void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous);
void intel_fbdev_output_poll_changed(struct drm_device *dev);
void intel_fbdev_restore_mode(struct drm_device *dev);
struct intel_framebuffer *intel_fbdev_framebuffer(struct intel_fbdev *fbdev);
#else
static inline int intel_fbdev_init(struct drm_device *dev)
{

@@ -48,6 +51,10 @@ static inline void intel_fbdev_output_poll_changed(struct drm_device *dev)
static inline void intel_fbdev_restore_mode(struct drm_device *dev)
{
}
static inline struct intel_framebuffer *intel_fbdev_framebuffer(struct intel_fbdev *fbdev)
{
return NULL;
}
#endif

#endif /* __INTEL_FBDEV_H__ */

@@ -10,6 +10,11 @@
#include "intel_display_types.h"
#include "intel_fdi.h"

struct intel_fdi_funcs {
void (*fdi_link_train)(struct intel_crtc *crtc,
const struct intel_crtc_state *crtc_state);
};

static void assert_fdi_tx(struct drm_i915_private *dev_priv,
enum pipe pipe, bool state)
{

@@ -98,11 +98,21 @@ static const struct gmbus_pin gmbus_pins_dg1[] = {
[GMBUS_PIN_4_CNP] = { "dpd", GPIOE },
};

static const struct gmbus_pin gmbus_pins_dg2[] = {
[GMBUS_PIN_1_BXT] = { "dpa", GPIOB },
[GMBUS_PIN_2_BXT] = { "dpb", GPIOC },
[GMBUS_PIN_3_BXT] = { "dpc", GPIOD },
[GMBUS_PIN_4_CNP] = { "dpd", GPIOE },
[GMBUS_PIN_9_TC1_ICP] = { "tc1", GPIOJ },
};

/* pin is expected to be valid */
static const struct gmbus_pin *get_gmbus_pin(struct drm_i915_private *dev_priv,
unsigned int pin)
{
if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG1)
if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG2)
return &gmbus_pins_dg2[pin];
else if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG1)
return &gmbus_pins_dg1[pin];
else if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP)
return &gmbus_pins_icp[pin];

@@ -123,7 +133,9 @@ bool intel_gmbus_is_valid_pin(struct drm_i915_private *dev_priv,
{
unsigned int size;

if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG1)
if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG2)
size = ARRAY_SIZE(gmbus_pins_dg2);
else if (INTEL_PCH_TYPE(dev_priv) >= PCH_DG1)
size = ARRAY_SIZE(gmbus_pins_dg1);
else if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP)
size = ARRAY_SIZE(gmbus_pins_icp);

@@ -1869,7 +1869,7 @@ hdmi_port_clock_valid(struct intel_hdmi *hdmi,
return MODE_OK;
}

static int intel_hdmi_tmds_clock(int clock, int bpc, bool ycbcr420_output)
int intel_hdmi_tmds_clock(int clock, int bpc, bool ycbcr420_output)
{
/* YCBCR420 TMDS rate requirement is half the pixel clock */
if (ycbcr420_output)

@@ -1935,25 +1935,30 @@ intel_hdmi_mode_clock_valid(struct drm_connector *connector, int clock,
{
struct drm_i915_private *i915 = to_i915(connector->dev);
struct intel_hdmi *hdmi = intel_attached_hdmi(to_intel_connector(connector));
enum drm_mode_status status;
enum drm_mode_status status = MODE_OK;
int bpc;

/* check if we can do 8bpc */
status = hdmi_port_clock_valid(hdmi, intel_hdmi_tmds_clock(clock, 8, ycbcr420_output),
true, has_hdmi_sink);
/*
* Try all color depths since valid port clock range
* can have holes. Any mode that can be used with at
* least one color depth is accepted.
*/
for (bpc = 12; bpc >= 8; bpc -= 2) {
int tmds_clock = intel_hdmi_tmds_clock(clock, bpc, ycbcr420_output);

/* if we can't do 8bpc we may still be able to do 12bpc */
if (status != MODE_OK &&
intel_hdmi_source_bpc_possible(i915, 12) &&
intel_hdmi_sink_bpc_possible(connector, 12, has_hdmi_sink, ycbcr420_output))
status = hdmi_port_clock_valid(hdmi, intel_hdmi_tmds_clock(clock, 12, ycbcr420_output),
true, has_hdmi_sink);
if (!intel_hdmi_source_bpc_possible(i915, bpc))
continue;

/* if we can't do 8,12bpc we may still be able to do 10bpc */
if (status != MODE_OK &&
intel_hdmi_source_bpc_possible(i915, 10) &&
intel_hdmi_sink_bpc_possible(connector, 10, has_hdmi_sink, ycbcr420_output))
status = hdmi_port_clock_valid(hdmi, intel_hdmi_tmds_clock(clock, 10, ycbcr420_output),
true, has_hdmi_sink);
if (!intel_hdmi_sink_bpc_possible(connector, bpc, has_hdmi_sink, ycbcr420_output))
continue;

status = hdmi_port_clock_valid(hdmi, tmds_clock, true, has_hdmi_sink);
if (status == MODE_OK)
return MODE_OK;
}

/* can never happen */
drm_WARN_ON(&i915->drm, status == MODE_OK);

return status;
}
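For context on the helper the loop above calls: deep color scales the TMDS character rate by bpc/8, and YCbCr 4:2:0 halves the rate on the cable. A minimal sketch of that relation (the exact rounding used by the driver is an assumption here):

static int tmds_clock_sketch(int clock, int bpc, bool ycbcr420_output)
{
        /* YCbCr 4:2:0 carries half the pixel rate over TMDS. */
        if (ycbcr420_output)
                clock /= 2;

        /* 8bpc is the 1:1 baseline; 10/12bpc scale the rate by bpc/8. */
        return DIV_ROUND_CLOSEST(clock * bpc, 8);
}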
@@ -46,6 +46,7 @@ bool intel_hdmi_limited_color_range(const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state);
bool intel_hdmi_bpc_possible(const struct intel_crtc_state *crtc_state,
int bpc, bool has_hdmi_sink, bool ycbcr420_output);
int intel_hdmi_tmds_clock(int clock, int bpc, bool ycbcr420_output);
int intel_hdmi_dsc_get_bpp(int src_fractional_bpp, int slice_width,
int num_slices, int output_format, bool hdmi_all_bpp,
int hdmi_max_chunk_bytes);

@@ -24,6 +24,7 @@
#include <linux/kernel.h>

#include "i915_drv.h"
#include "i915_irq.h"
#include "intel_display_types.h"
#include "intel_hotplug.h"

@@ -213,12 +214,6 @@ intel_hpd_irq_storm_switch_to_polling(struct drm_i915_private *dev_priv)
}
}

static void intel_hpd_irq_setup(struct drm_i915_private *i915)
{
if (i915->display_irqs_enabled && i915->hotplug_funcs)
i915->hotplug_funcs->hpd_irq_setup(i915);
}

static void intel_hpd_irq_storm_reenable_work(struct work_struct *work)
{
struct drm_i915_private *dev_priv =

@@ -47,10 +47,11 @@
#define OPREGION_ASLE_EXT_OFFSET 0x1C00

#define OPREGION_SIGNATURE "IntelGraphicsMem"
#define MBOX_ACPI (1<<0)
#define MBOX_SWSCI (1<<1)
#define MBOX_ASLE (1<<2)
#define MBOX_ASLE_EXT (1<<4)
#define MBOX_ACPI BIT(0) /* Mailbox #1 */
#define MBOX_SWSCI BIT(1) /* Mailbox #2 (obsolete from v2.x) */
#define MBOX_ASLE BIT(2) /* Mailbox #3 */
#define MBOX_ASLE_EXT BIT(4) /* Mailbox #5 */
#define MBOX_BACKLIGHT BIT(5) /* Mailbox #2 (valid from v3.x) */
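For reference, BIT() is the kernel helper from <linux/bits.h>, effectively:

#define BIT(nr) (1UL << (nr))	/* e.g. MBOX_ASLE_EXT == BIT(4) == 0x10 */

so the new defines keep the same values as the open-coded shifts while documenting which OpRegion mailbox each flag covers.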
struct opregion_header {
u8 signature[16];

@@ -245,14 +246,10 @@ struct opregion_asle_ext {

#define MAX_DSLP 1500

static int swsci(struct drm_i915_private *dev_priv,
u32 function, u32 parm, u32 *parm_out)
static int check_swsci_function(struct drm_i915_private *i915, u32 function)
{
struct opregion_swsci *swsci = dev_priv->opregion.swsci;
struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
u32 main_function, sub_function, scic;
u16 swsci_val;
u32 dslp;
struct opregion_swsci *swsci = i915->opregion.swsci;
u32 main_function, sub_function;

if (!swsci)
return -ENODEV;

@@ -264,15 +261,31 @@ static int swsci(struct drm_i915_private *dev_priv,

/* Check if we can call the function. See swsci_setup for details. */
if (main_function == SWSCI_SBCB) {
if ((dev_priv->opregion.swsci_sbcb_sub_functions &
if ((i915->opregion.swsci_sbcb_sub_functions &
(1 << sub_function)) == 0)
return -EINVAL;
} else if (main_function == SWSCI_GBDA) {
if ((dev_priv->opregion.swsci_gbda_sub_functions &
if ((i915->opregion.swsci_gbda_sub_functions &
(1 << sub_function)) == 0)
return -EINVAL;
}

return 0;
}

static int swsci(struct drm_i915_private *dev_priv,
u32 function, u32 parm, u32 *parm_out)
{
struct opregion_swsci *swsci = dev_priv->opregion.swsci;
struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev);
u32 scic, dslp;
u16 swsci_val;
int ret;

ret = check_swsci_function(dev_priv, function);
if (ret)
return ret;

/* Driver sleep timeout in ms. */
dslp = swsci->dslp;
if (!dslp) {

@@ -346,11 +359,17 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
u32 parm = 0;
u32 type = 0;
u32 port;
int ret;

/* don't care about old stuff for now */
if (!HAS_DDI(dev_priv))
return 0;

/* Avoid port out of bounds checks if SWSCI isn't there. */
ret = check_swsci_function(dev_priv, SWSCI_SBCB_DISPLAY_POWER_STATE);
if (ret)
return ret;

if (intel_encoder->type == INTEL_OUTPUT_DSI)
port = 0;
else

@@ -363,6 +382,21 @@ int intel_opregion_notify_encoder(struct intel_encoder *intel_encoder,
port++;
}

/*
* The port numbering and mapping here is bizarre. The now-obsolete
* swsci spec supports ports numbered [0..4]. Port E is handled as a
* special case, but port F and beyond are not. The functionality is
* supposed to be obsolete for new platforms. Just bail out if the port
* number is out of bounds after mapping.
*/
if (port > 4) {
drm_dbg_kms(&dev_priv->drm,
"[ENCODER:%d:%s] port %c (index %u) out of bounds for display power state notification\n",
intel_encoder->base.base.id, intel_encoder->base.name,
port_name(intel_encoder->port), port);
return -EINVAL;
}

if (!enable)
parm |= 4 << 8;

@@ -899,9 +933,17 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
}

if (mboxes & MBOX_SWSCI) {
drm_dbg(&dev_priv->drm, "SWSCI supported\n");
opregion->swsci = base + OPREGION_SWSCI_OFFSET;
swsci_setup(dev_priv);
u8 major = opregion->header->over.major;

if (major >= 3) {
drm_err(&dev_priv->drm, "SWSCI Mailbox #2 present for opregion v3.x, ignoring\n");
} else {
if (major >= 2)
drm_dbg(&dev_priv->drm, "SWSCI Mailbox #2 present for opregion v2.x\n");
drm_dbg(&dev_priv->drm, "SWSCI supported\n");
opregion->swsci = base + OPREGION_SWSCI_OFFSET;
swsci_setup(dev_priv);
}
}

if (mboxes & MBOX_ASLE) {

@@ -916,6 +958,10 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
opregion->asle_ext = base + OPREGION_ASLE_EXT_OFFSET;
}

if (mboxes & MBOX_BACKLIGHT) {
drm_dbg(&dev_priv->drm, "Mailbox #2 for backlight present\n");
}

if (intel_load_vbt_firmware(dev_priv) == 0)
goto out;

@@ -28,6 +28,7 @@

#include <drm/drm_fourcc.h>

#include "gem/i915_gem_internal.h"
#include "gem/i915_gem_pm.h"
#include "gt/intel_gpu_commands.h"
#include "gt/intel_ring.h"

@@ -46,17 +46,18 @@ static struct i915_vma *
initial_plane_vma(struct drm_i915_private *i915,
struct intel_initial_plane_config *plane_config)
{
struct intel_memory_region *mem = i915->mm.stolen_region;
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
u32 base, size;

if (plane_config->size == 0)
if (!mem || plane_config->size == 0)
return NULL;

base = round_down(plane_config->base,
I915_GTT_MIN_ALIGNMENT);
size = round_up(plane_config->base + plane_config->size,
I915_GTT_MIN_ALIGNMENT);
mem->min_page_size);
size -= base;
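/*
 * Worked example of the rounding above (all values assumed): with
 * plane_config->base = 0x12345000, plane_config->size = 0x1000 and
 * mem->min_page_size = 0x10000:
 *   base = round_down(0x12345000, 0x10000)            = 0x12340000
 *   size = round_up(0x12346000, 0x10000) - 0x12340000 = 0x00010000
 */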
/*

@@ -94,7 +95,7 @@ initial_plane_vma(struct drm_i915_private *i915,
goto err_obj;
}

vma = i915_vma_instance(obj, &i915->ggtt.vm, NULL);
vma = i915_vma_instance(obj, &to_gt(i915)->ggtt->vm, NULL);
if (IS_ERR(vma))
goto err_obj;

@@ -165,8 +166,6 @@ intel_find_initial_plane_obj(struct intel_crtc *crtc,
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_crtc_state *crtc_state =
to_intel_crtc_state(crtc->base.state);
struct intel_plane *plane =
to_intel_plane(crtc->base.primary);
struct intel_plane_state *plane_state =

@@ -203,11 +202,6 @@ intel_find_initial_plane_obj(struct intel_crtc *crtc,
* pretend the BIOS never had it enabled.
*/
intel_plane_disable_noatomic(crtc, plane);
if (crtc_state->bigjoiner) {
struct intel_crtc *slave =
crtc_state->bigjoiner_linked_crtc;
intel_plane_disable_noatomic(slave, to_intel_plane(slave->base.primary));
}

return;

@@ -1063,31 +1063,28 @@ static void intel_psr_activate(struct intel_dp *intel_dp)
intel_dp->psr.active = true;
}

static void intel_psr_enable_source(struct intel_dp *intel_dp)
static u32 wa_16013835468_bit_get(struct intel_dp *intel_dp)
{
switch (intel_dp->psr.pipe) {
case PIPE_A:
return LATENCY_REPORTING_REMOVED_PIPE_A;
case PIPE_B:
return LATENCY_REPORTING_REMOVED_PIPE_B;
case PIPE_C:
return LATENCY_REPORTING_REMOVED_PIPE_C;
default:
MISSING_CASE(intel_dp->psr.pipe);
return 0;
}
}

static void intel_psr_enable_source(struct intel_dp *intel_dp,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
enum transcoder cpu_transcoder = intel_dp->psr.transcoder;
u32 mask;

if (intel_dp->psr.psr2_enabled && DISPLAY_VER(dev_priv) == 9) {
i915_reg_t reg = CHICKEN_TRANS(cpu_transcoder);
u32 chicken = intel_de_read(dev_priv, reg);

chicken |= PSR2_VSC_ENABLE_PROG_HEADER |
PSR2_ADD_VERTICAL_LINE_COUNT;
intel_de_write(dev_priv, reg, chicken);
}

/*
* Wa_16014451276:adlp
* All supported adlp panels have 1-based X granularity, this may
* cause issues if non-supported panels are used.
*/
if (IS_ALDERLAKE_P(dev_priv) &&
intel_dp->psr.psr2_enabled)
intel_de_rmw(dev_priv, CHICKEN_TRANS(cpu_transcoder), 0,
ADLP_1_BASED_X_GRANULARITY);

/*
* Per Spec: Avoid continuous PSR exit by masking MEMUP and HPD also
* mask LPSP to avoid dependency on other drivers that might block

@@ -1126,18 +1123,47 @@ static void intel_psr_enable_source(struct intel_dp *intel_dp)
intel_dp->psr.psr2_sel_fetch_enabled ?
IGNORE_PSR2_HW_TRACKING : 0);

/* Wa_16011168373:adl-p */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0) &&
intel_dp->psr.psr2_enabled)
intel_de_rmw(dev_priv,
TRANS_SET_CONTEXT_LATENCY(intel_dp->psr.transcoder),
TRANS_SET_CONTEXT_LATENCY_MASK,
TRANS_SET_CONTEXT_LATENCY_VALUE(1));
if (intel_dp->psr.psr2_enabled) {
if (DISPLAY_VER(dev_priv) == 9)
intel_de_rmw(dev_priv, CHICKEN_TRANS(cpu_transcoder), 0,
PSR2_VSC_ENABLE_PROG_HEADER |
PSR2_ADD_VERTICAL_LINE_COUNT);

/* Wa_16012604467:adlp */
if (IS_ALDERLAKE_P(dev_priv) && intel_dp->psr.psr2_enabled)
intel_de_rmw(dev_priv, CLKGATE_DIS_MISC, 0,
CLKGATE_DIS_MISC_DMASC_GATING_DIS);
/*
* Wa_16014451276:adlp
* All supported adlp panels have 1-based X granularity, this may
* cause issues if non-supported panels are used.
*/
if (IS_ALDERLAKE_P(dev_priv))
intel_de_rmw(dev_priv, CHICKEN_TRANS(cpu_transcoder), 0,
ADLP_1_BASED_X_GRANULARITY);

/* Wa_16011168373:adl-p */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
intel_de_rmw(dev_priv,
TRANS_SET_CONTEXT_LATENCY(intel_dp->psr.transcoder),
TRANS_SET_CONTEXT_LATENCY_MASK,
TRANS_SET_CONTEXT_LATENCY_VALUE(1));

/* Wa_16012604467:adlp */
if (IS_ALDERLAKE_P(dev_priv))
intel_de_rmw(dev_priv, CLKGATE_DIS_MISC, 0,
CLKGATE_DIS_MISC_DMASC_GATING_DIS);

/* Wa_16013835468:tgl[b0+], dg1 */
if (IS_TGL_DISPLAY_STEP(dev_priv, STEP_B0, STEP_FOREVER) ||
IS_DG1(dev_priv)) {
u16 vtotal, vblank;

vtotal = crtc_state->uapi.adjusted_mode.crtc_vtotal -
crtc_state->uapi.adjusted_mode.crtc_vdisplay;
vblank = crtc_state->uapi.adjusted_mode.crtc_vblank_end -
crtc_state->uapi.adjusted_mode.crtc_vblank_start;
if (vblank > vtotal)
intel_de_rmw(dev_priv, GEN8_CHICKEN_DCPR_1, 0,
wa_16013835468_bit_get(intel_dp));
}
}
}
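To make the Wa_16013835468 condition concrete, a worked case with an illustrative timing (all numbers assumed, not taken from the patch):

/*
 * Assume crtc_vtotal = 1125, crtc_vdisplay = 1080,
 * crtc_vblank_end = 1125, crtc_vblank_start = 1084:
 *   vtotal = 1125 - 1080 = 45
 *   vblank = 1125 - 1084 = 41
 * vblank <= vtotal, so the DCPR chicken bit stays untouched for this mode.
 */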
static bool psr_interrupt_error_check(struct intel_dp *intel_dp)

@@ -1202,7 +1228,7 @@ static void intel_psr_enable_locked(struct intel_dp *intel_dp,
intel_write_dp_vsc_sdp(encoder, crtc_state, &crtc_state->psr_vsc);
intel_snps_phy_update_psr_power_state(dev_priv, phy, true);
intel_psr_enable_sink(intel_dp);
intel_psr_enable_source(intel_dp);
intel_psr_enable_source(intel_dp, crtc_state);
intel_dp->psr.enabled = true;
intel_dp->psr.paused = false;

@@ -1290,17 +1316,24 @@ static void intel_psr_disable_locked(struct intel_dp *intel_dp)
intel_de_rmw(dev_priv, CHICKEN_PAR1_1,
DIS_RAM_BYPASS_PSR2_MAN_TRACK, 0);

/* Wa_16011168373:adl-p */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0) &&
intel_dp->psr.psr2_enabled)
intel_de_rmw(dev_priv,
TRANS_SET_CONTEXT_LATENCY(intel_dp->psr.transcoder),
TRANS_SET_CONTEXT_LATENCY_MASK, 0);
if (intel_dp->psr.psr2_enabled) {
/* Wa_16011168373:adl-p */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_B0))
intel_de_rmw(dev_priv,
TRANS_SET_CONTEXT_LATENCY(intel_dp->psr.transcoder),
TRANS_SET_CONTEXT_LATENCY_MASK, 0);

/* Wa_16012604467:adlp */
if (IS_ALDERLAKE_P(dev_priv) && intel_dp->psr.psr2_enabled)
intel_de_rmw(dev_priv, CLKGATE_DIS_MISC,
CLKGATE_DIS_MISC_DMASC_GATING_DIS, 0);
/* Wa_16012604467:adlp */
if (IS_ALDERLAKE_P(dev_priv))
intel_de_rmw(dev_priv, CLKGATE_DIS_MISC,
CLKGATE_DIS_MISC_DMASC_GATING_DIS, 0);

/* Wa_16013835468:tgl[b0+], dg1 */
if (IS_TGL_DISPLAY_STEP(dev_priv, STEP_B0, STEP_FOREVER) ||
IS_DG1(dev_priv))
intel_de_rmw(dev_priv, GEN8_CHICKEN_DCPR_1,
wa_16013835468_bit_get(intel_dp), 0);
}

intel_snps_phy_update_psr_power_state(dev_priv, phy, false);

@@ -32,10 +32,10 @@ void intel_snps_phy_wait_for_calibration(struct drm_i915_private *i915)
if (!intel_phy_is_snps(i915, phy))
continue;

if (intel_de_wait_for_clear(i915, ICL_PHY_MISC(phy),
if (intel_de_wait_for_clear(i915, DG2_PHY_MISC(phy),
DG2_PHY_DP_TX_ACK_MASK, 25))
drm_err(&i915->drm, "SNPS PHY %c failed to calibrate after 25ms.\n",
phy);
phy_name(phy));
}
}

@@ -250,197 +250,6 @@ static const struct intel_mpllb_state * const dg2_dp_100_tables[] = {
NULL,
};

/*
* Basic DP link rates with 38.4 MHz reference clock.
*/

static const struct intel_mpllb_state dg2_dp_rbr_38_4 = {
.clock = 162000,
.ref_control =
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 1),
.mpllb_cp =
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 5) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 25) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
.mpllb_div =
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_TX_CLK_DIV, 2) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 2) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FREQ_VCO, 2),
.mpllb_div2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 304),
.mpllb_fracn1 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 1),
.mpllb_fracn2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_QUOT, 49152),
};

static const struct intel_mpllb_state dg2_dp_hbr1_38_4 = {
.clock = 270000,
.ref_control =
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 1),
.mpllb_cp =
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 5) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 25) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
.mpllb_div =
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_TX_CLK_DIV, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 2) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FREQ_VCO, 3),
.mpllb_div2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 248),
.mpllb_fracn1 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 1),
.mpllb_fracn2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_QUOT, 40960),
};

static const struct intel_mpllb_state dg2_dp_hbr2_38_4 = {
.clock = 540000,
.ref_control =
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 1),
.mpllb_cp =
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 5) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 25) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
.mpllb_div =
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 2) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FREQ_VCO, 3),
.mpllb_div2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 248),
.mpllb_fracn1 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 1),
.mpllb_fracn2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_QUOT, 40960),
};

static const struct intel_mpllb_state dg2_dp_hbr3_38_4 = {
.clock = 810000,
.ref_control =
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 1),
.mpllb_cp =
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 6) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 26) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
.mpllb_div =
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 2),
.mpllb_div2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 388),
.mpllb_fracn1 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 1),
.mpllb_fracn2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_QUOT, 61440),
};

static const struct intel_mpllb_state dg2_dp_uhbr10_38_4 = {
.clock = 1000000,
.ref_control =
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 1),
.mpllb_cp =
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 5) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 26) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
.mpllb_div =
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_CLK_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_MULTIPLIER, 8) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_WORD_DIV2_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_DP2_MODE, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_SHIM_DIV32_CLK_SEL, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 2),
.mpllb_div2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 488),
.mpllb_fracn1 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 3),
.mpllb_fracn2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_REM, 2) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_QUOT, 27306),

/*
* SSC will be enabled, DP UHBR has a minimum SSC requirement.
*/
.mpllb_sscen =
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_PEAK, 76800),
.mpllb_sscstep =
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_STEPSIZE, 129024),
};

static const struct intel_mpllb_state dg2_dp_uhbr13_38_4 = {
.clock = 1350000,
.ref_control =
REG_FIELD_PREP(SNPS_PHY_REF_CONTROL_REF_RANGE, 1),
.mpllb_cp =
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT, 6) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP, 56) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_INT_GS, 65) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_CP_PROP_GS, 127),
.mpllb_div =
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV5_CLK_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_CLK_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_DIV_MULTIPLIER, 8) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_PMIX_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_WORD_DIV2_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_DP2_MODE, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_V2I, 3),
.mpllb_div2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_REF_CLK_DIV, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 670),
.mpllb_fracn1 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_CGG_UPDATE_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_DEN, 1),
.mpllb_fracn2 =
REG_FIELD_PREP(SNPS_PHY_MPLLB_FRACN_QUOT, 36864),

/*
* SSC will be enabled, DP UHBR has a minimum SSC requirement.
*/
.mpllb_sscen =
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_EN, 1) |
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_PEAK, 103680),
.mpllb_sscstep =
REG_FIELD_PREP(SNPS_PHY_MPLLB_SSC_STEPSIZE, 174182),
};

static const struct intel_mpllb_state * const dg2_dp_38_4_tables[] = {
&dg2_dp_rbr_38_4,
&dg2_dp_hbr1_38_4,
&dg2_dp_hbr2_38_4,
&dg2_dp_hbr3_38_4,
&dg2_dp_uhbr10_38_4,
&dg2_dp_uhbr13_38_4,
NULL,
};
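Every entry above packs register fields with REG_FIELD_PREP(), which behaves like the kernel's FIELD_PREP: shift a value to the position given by a mask's lowest set bit, then mask it. A simplified sketch (the real macro adds compile-time mask validation; the name below is illustrative, not the driver's):

/* Simplified stand-in for REG_FIELD_PREP(). */
#define REG_FIELD_PREP_SKETCH(mask, val) \
        ((u32)(((val) << (__builtin_ffs(mask) - 1)) & (mask)))

e.g. REG_FIELD_PREP(SNPS_PHY_MPLLB_MULTIPLIER, 304) places the feedback multiplier into its bits of the mpllb_div2 register image.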
/*
* eDP link rates with 100 MHz reference clock.
*/

@@ -749,22 +558,7 @@ intel_mpllb_tables_get(struct intel_crtc_state *crtc_state,
if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_EDP)) {
return dg2_edp_tables;
} else if (intel_crtc_has_dp_encoder(crtc_state)) {
/*
* FIXME: Initially we're just enabling the "combo" outputs on
* port A-D. The MPLLB for those ports takes an input from the
* "Display Filter PLL" which always has an output frequency
* of 100 MHz, hence the use of the _100 tables below.
*
* Once we enable port TC1 it will either use the same 100 MHz
* "Display Filter PLL" (when strapped to support a native
* display connection) or different 38.4 MHz "Filter PLL" when
* strapped to support a USB connection, so we'll need to check
* that to determine which table to use.
*/
if (0)
return dg2_dp_38_4_tables;
else
return dg2_dp_100_tables;
return dg2_dp_100_tables;
} else if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI)) {
return dg2_hdmi_tables;
}

@@ -693,6 +693,8 @@ void intel_tc_port_sanitize(struct intel_digital_port *dig_port)
{
struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
struct intel_encoder *encoder = &dig_port->base;
intel_wakeref_t tc_cold_wref;
enum intel_display_power_domain domain;
int active_links = 0;

mutex_lock(&dig_port->tc_lock);

@@ -704,12 +706,11 @@ void intel_tc_port_sanitize(struct intel_digital_port *dig_port)

drm_WARN_ON(&i915->drm, dig_port->tc_mode != TC_PORT_DISCONNECTED);
drm_WARN_ON(&i915->drm, dig_port->tc_lock_wakeref);

tc_cold_wref = tc_cold_block(dig_port, &domain);

dig_port->tc_mode = intel_tc_port_get_current_mode(dig_port);
if (active_links) {
enum intel_display_power_domain domain;
intel_wakeref_t tc_cold_wref = tc_cold_block(dig_port, &domain);

dig_port->tc_mode = intel_tc_port_get_current_mode(dig_port);

if (!icl_tc_phy_is_connected(dig_port))
drm_dbg_kms(&i915->drm,
"Port %s: PHY disconnected with %d active link(s)\n",

@@ -718,10 +719,23 @@ void intel_tc_port_sanitize(struct intel_digital_port *dig_port)

dig_port->tc_lock_wakeref = tc_cold_block(dig_port,
&dig_port->tc_lock_power_domain);

tc_cold_unblock(dig_port, domain, tc_cold_wref);
} else {
/*
* TBT-alt is the default mode in any case the PHY ownership is not
* held (regardless of the sink's connected live state), so
* we'll just switch to disconnected mode from it here without
* a note.
*/
if (dig_port->tc_mode != TC_PORT_TBT_ALT)
drm_dbg_kms(&i915->drm,
"Port %s: PHY left in %s mode on disabled port, disconnecting it\n",
dig_port->tc_port_name,
tc_port_mode_name(dig_port->tc_mode));
icl_tc_phy_disconnect(dig_port);
}

tc_cold_unblock(dig_port, domain, tc_cold_wref);

drm_dbg_kms(&i915->drm, "Port %s: sanitize mode (%s)\n",
dig_port->tc_port_name,
tc_port_mode_name(dig_port->tc_mode));

@@ -162,6 +162,14 @@ struct bdb_general_features {
u8 dp_ssc_freq:1; /* SSC freq for PCH attached eDP */
u8 dp_ssc_dongle_supported:1;
u8 rsvd11:2; /* finish byte */

/* bits 6 */
u8 tc_hpd_retry_timeout:7; /* 242 */
u8 rsvd12:1;

/* bits 7 */
u8 afc_startup_config:2;/* 249 */
u8 rsvd13:6;
} __packed;

/*

@@ -1107,18 +1107,6 @@ static i915_reg_t dss_ctl2_reg(struct intel_crtc *crtc, enum transcoder cpu_tran
ICL_PIPE_DSS_CTL2(crtc->pipe) : DSS_CTL2;
}

struct intel_crtc *
intel_dsc_get_bigjoiner_secondary(const struct intel_crtc *primary_crtc)
{
return intel_crtc_for_pipe(to_i915(primary_crtc->base.dev), primary_crtc->pipe + 1);
}

static struct intel_crtc *
intel_dsc_get_bigjoiner_primary(const struct intel_crtc *secondary_crtc)
{
return intel_crtc_for_pipe(to_i915(secondary_crtc->base.dev), secondary_crtc->pipe - 1);
}

void intel_uncompressed_joiner_enable(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);

@@ -1126,7 +1114,7 @@ void intel_uncompressed_joiner_enable(const struct intel_crtc_state *crtc_state)
u32 dss_ctl1_val = 0;

if (crtc_state->bigjoiner && !crtc_state->dsc.compression_enable) {
if (crtc_state->bigjoiner_slave)
if (intel_crtc_is_bigjoiner_slave(crtc_state))
dss_ctl1_val |= UNCOMPRESSED_JOINER_SLAVE;
else
dss_ctl1_val |= UNCOMPRESSED_JOINER_MASTER;

@@ -1154,7 +1142,7 @@ void intel_dsc_enable(const struct intel_crtc_state *crtc_state)
}
if (crtc_state->bigjoiner) {
dss_ctl1_val |= BIG_JOINER_ENABLE;
if (!crtc_state->bigjoiner_slave)
if (!intel_crtc_is_bigjoiner_slave(crtc_state))
dss_ctl1_val |= MASTER_BIG_JOINER_ENABLE;
}
intel_de_write(dev_priv, dss_ctl1_reg(crtc, crtc_state->cpu_transcoder), dss_ctl1_val);

@@ -1174,25 +1162,6 @@ void intel_dsc_disable(const struct intel_crtc_state *old_crtc_state)
}
}

void intel_uncompressed_joiner_get_config(struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
u32 dss_ctl1;

dss_ctl1 = intel_de_read(dev_priv, dss_ctl1_reg(crtc, crtc_state->cpu_transcoder));
if (dss_ctl1 & UNCOMPRESSED_JOINER_MASTER) {
crtc_state->bigjoiner = true;
crtc_state->bigjoiner_linked_crtc = intel_dsc_get_bigjoiner_secondary(crtc);
drm_WARN_ON(&dev_priv->drm, !crtc_state->bigjoiner_linked_crtc);
} else if (dss_ctl1 & UNCOMPRESSED_JOINER_SLAVE) {
crtc_state->bigjoiner = true;
crtc_state->bigjoiner_slave = true;
crtc_state->bigjoiner_linked_crtc = intel_dsc_get_bigjoiner_primary(crtc);
drm_WARN_ON(&dev_priv->drm, !crtc_state->bigjoiner_linked_crtc);
}
}

void intel_dsc_get_config(struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);

@@ -1223,18 +1192,6 @@ void intel_dsc_get_config(struct intel_crtc_state *crtc_state)
crtc_state->dsc.dsc_split = (dss_ctl2 & RIGHT_BRANCH_VDSC_ENABLE) &&
(dss_ctl1 & JOINER_ENABLE);

if (dss_ctl1 & BIG_JOINER_ENABLE) {
crtc_state->bigjoiner = true;

if (!(dss_ctl1 & MASTER_BIG_JOINER_ENABLE)) {
crtc_state->bigjoiner_slave = true;
crtc_state->bigjoiner_linked_crtc = intel_dsc_get_bigjoiner_primary(crtc);
} else {
crtc_state->bigjoiner_linked_crtc = intel_dsc_get_bigjoiner_secondary(crtc);
}
drm_WARN_ON(&dev_priv->drm, !crtc_state->bigjoiner_linked_crtc);
}

/* FIXME: add more state readout as needed */

/* PPS1 */

@@ -18,7 +18,6 @@ void intel_uncompressed_joiner_enable(const struct intel_crtc_state *crtc_state)
void intel_dsc_enable(const struct intel_crtc_state *crtc_state);
void intel_dsc_disable(const struct intel_crtc_state *crtc_state);
int intel_dsc_compute_params(struct intel_crtc_state *pipe_config);
void intel_uncompressed_joiner_get_config(struct intel_crtc_state *crtc_state);
void intel_dsc_get_config(struct intel_crtc_state *crtc_state);
enum intel_display_power_domain
intel_dsc_power_domain(struct intel_crtc *crtc, enum transcoder cpu_transcoder);

@@ -44,6 +44,7 @@
#include "skl_scaler.h"
#include "vlv_dsi.h"
#include "vlv_dsi_pll.h"
#include "vlv_dsi_regs.h"
#include "vlv_sideband.h"

/* return pixels in terms of txbyteclkhs */

@@ -1492,7 +1493,7 @@ static void intel_dsi_prepare(struct intel_encoder *intel_encoder,
*/

if (is_vid_mode(intel_dsi) &&
intel_dsi->video_mode_format == VIDEO_MODE_BURST) {
intel_dsi->video_mode == BURST_MODE) {
intel_de_write(dev_priv, MIPI_HS_TX_TIMEOUT(port),
txbyteclkhs(adjusted_mode->crtc_htotal, bpp, intel_dsi->lane_count, intel_dsi->burst_mode_ratio) + 1);
} else {

@@ -1568,12 +1569,33 @@ static void intel_dsi_prepare(struct intel_encoder *intel_encoder,
intel_de_write(dev_priv, MIPI_CLK_LANE_SWITCH_TIME_CNT(port),
intel_dsi->clk_lp_to_hs_count << LP_HS_SSW_CNT_SHIFT | intel_dsi->clk_hs_to_lp_count << HS_LP_PWR_SW_CNT_SHIFT);

if (is_vid_mode(intel_dsi))
/* Some panels might have resolution which is not a
if (is_vid_mode(intel_dsi)) {
u32 fmt = intel_dsi->video_frmt_cfg_bits | IP_TG_CONFIG;

/*
* Some panels might have resolution which is not a
* multiple of 64 like 1366 x 768. Enable RANDOM
* resolution support for such panels by default */
intel_de_write(dev_priv, MIPI_VIDEO_MODE_FORMAT(port),
intel_dsi->video_frmt_cfg_bits | intel_dsi->video_mode_format | IP_TG_CONFIG | RANDOM_DPI_DISPLAY_RESOLUTION);
* resolution support for such panels by default.
*/
fmt |= RANDOM_DPI_DISPLAY_RESOLUTION;

switch (intel_dsi->video_mode) {
default:
MISSING_CASE(intel_dsi->video_mode);
fallthrough;
case NON_BURST_SYNC_EVENTS:
fmt |= VIDEO_MODE_NON_BURST_WITH_SYNC_EVENTS;
break;
case NON_BURST_SYNC_PULSE:
fmt |= VIDEO_MODE_NON_BURST_WITH_SYNC_PULSE;
break;
case BURST_MODE:
fmt |= VIDEO_MODE_BURST;
break;
}

intel_de_write(dev_priv, MIPI_VIDEO_MODE_FORMAT(port), fmt);
}
}
}

@@ -32,6 +32,7 @@
#include "intel_display_types.h"
#include "intel_dsi.h"
#include "vlv_dsi_pll.h"
#include "vlv_dsi_pll_regs.h"
#include "vlv_sideband.h"

static const u16 lfsr_converts[] = {
drivers/gpu/drm/i915/display/vlv_dsi_pll_regs.h (new file, 109 lines)
@@ -0,0 +1,109 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2022 Intel Corporation
*/

#ifndef __VLV_DSI_PLL_REGS_H__
#define __VLV_DSI_PLL_REGS_H__

#include "vlv_dsi_regs.h"

#define MIPIO_TXESC_CLK_DIV1 _MMIO(0x160004)
#define GLK_TX_ESC_CLK_DIV1_MASK 0x3FF
#define MIPIO_TXESC_CLK_DIV2 _MMIO(0x160008)
#define GLK_TX_ESC_CLK_DIV2_MASK 0x3FF

#define BXT_MAX_VAR_OUTPUT_KHZ 39500

#define BXT_MIPI_CLOCK_CTL _MMIO(0x46090)
#define BXT_MIPI1_DIV_SHIFT 26
#define BXT_MIPI2_DIV_SHIFT 10
#define BXT_MIPI_DIV_SHIFT(port) \
_MIPI_PORT(port, BXT_MIPI1_DIV_SHIFT, \
BXT_MIPI2_DIV_SHIFT)

/* TX control divider to select actual TX clock output from (8x/var) */
#define BXT_MIPI1_TX_ESCLK_SHIFT 26
#define BXT_MIPI2_TX_ESCLK_SHIFT 10
#define BXT_MIPI_TX_ESCLK_SHIFT(port) \
_MIPI_PORT(port, BXT_MIPI1_TX_ESCLK_SHIFT, \
BXT_MIPI2_TX_ESCLK_SHIFT)
#define BXT_MIPI1_TX_ESCLK_FIXDIV_MASK (0x3F << 26)
#define BXT_MIPI2_TX_ESCLK_FIXDIV_MASK (0x3F << 10)
#define BXT_MIPI_TX_ESCLK_FIXDIV_MASK(port) \
_MIPI_PORT(port, BXT_MIPI1_TX_ESCLK_FIXDIV_MASK, \
BXT_MIPI2_TX_ESCLK_FIXDIV_MASK)
#define BXT_MIPI_TX_ESCLK_DIVIDER(port, val) \
(((val) & 0x3F) << BXT_MIPI_TX_ESCLK_SHIFT(port))
/* RX upper control divider to select actual RX clock output from 8x */
#define BXT_MIPI1_RX_ESCLK_UPPER_SHIFT 21
#define BXT_MIPI2_RX_ESCLK_UPPER_SHIFT 5
#define BXT_MIPI_RX_ESCLK_UPPER_SHIFT(port) \
_MIPI_PORT(port, BXT_MIPI1_RX_ESCLK_UPPER_SHIFT, \
BXT_MIPI2_RX_ESCLK_UPPER_SHIFT)
#define BXT_MIPI1_RX_ESCLK_UPPER_FIXDIV_MASK (3 << 21)
#define BXT_MIPI2_RX_ESCLK_UPPER_FIXDIV_MASK (3 << 5)
#define BXT_MIPI_RX_ESCLK_UPPER_FIXDIV_MASK(port) \
_MIPI_PORT(port, BXT_MIPI1_RX_ESCLK_UPPER_FIXDIV_MASK, \
BXT_MIPI2_RX_ESCLK_UPPER_FIXDIV_MASK)
#define BXT_MIPI_RX_ESCLK_UPPER_DIVIDER(port, val) \
(((val) & 3) << BXT_MIPI_RX_ESCLK_UPPER_SHIFT(port))
/* 8/3X divider to select the actual 8/3X clock output from 8x */
#define BXT_MIPI1_8X_BY3_SHIFT 19
#define BXT_MIPI2_8X_BY3_SHIFT 3
#define BXT_MIPI_8X_BY3_SHIFT(port) \
_MIPI_PORT(port, BXT_MIPI1_8X_BY3_SHIFT, \
BXT_MIPI2_8X_BY3_SHIFT)
#define BXT_MIPI1_8X_BY3_DIVIDER_MASK (3 << 19)
#define BXT_MIPI2_8X_BY3_DIVIDER_MASK (3 << 3)
#define BXT_MIPI_8X_BY3_DIVIDER_MASK(port) \
_MIPI_PORT(port, BXT_MIPI1_8X_BY3_DIVIDER_MASK, \
BXT_MIPI2_8X_BY3_DIVIDER_MASK)
#define BXT_MIPI_8X_BY3_DIVIDER(port, val) \
(((val) & 3) << BXT_MIPI_8X_BY3_SHIFT(port))
/* RX lower control divider to select actual RX clock output from 8x */
#define BXT_MIPI1_RX_ESCLK_LOWER_SHIFT 16
#define BXT_MIPI2_RX_ESCLK_LOWER_SHIFT 0
#define BXT_MIPI_RX_ESCLK_LOWER_SHIFT(port) \
_MIPI_PORT(port, BXT_MIPI1_RX_ESCLK_LOWER_SHIFT, \
BXT_MIPI2_RX_ESCLK_LOWER_SHIFT)
#define BXT_MIPI1_RX_ESCLK_LOWER_FIXDIV_MASK (3 << 16)
#define BXT_MIPI2_RX_ESCLK_LOWER_FIXDIV_MASK (3 << 0)
#define BXT_MIPI_RX_ESCLK_LOWER_FIXDIV_MASK(port) \
_MIPI_PORT(port, BXT_MIPI1_RX_ESCLK_LOWER_FIXDIV_MASK, \
BXT_MIPI2_RX_ESCLK_LOWER_FIXDIV_MASK)
#define BXT_MIPI_RX_ESCLK_LOWER_DIVIDER(port, val) \
(((val) & 3) << BXT_MIPI_RX_ESCLK_LOWER_SHIFT(port))

#define RX_DIVIDER_BIT_1_2 0x3
#define RX_DIVIDER_BIT_3_4 0xC

#define BXT_DSI_PLL_CTL _MMIO(0x161000)
#define BXT_DSI_PLL_PVD_RATIO_SHIFT 16
#define BXT_DSI_PLL_PVD_RATIO_MASK (3 << BXT_DSI_PLL_PVD_RATIO_SHIFT)
#define BXT_DSI_PLL_PVD_RATIO_1 (1 << BXT_DSI_PLL_PVD_RATIO_SHIFT)
#define BXT_DSIC_16X_BY1 (0 << 10)
#define BXT_DSIC_16X_BY2 (1 << 10)
#define BXT_DSIC_16X_BY3 (2 << 10)
#define BXT_DSIC_16X_BY4 (3 << 10)
#define BXT_DSIC_16X_MASK (3 << 10)
#define BXT_DSIA_16X_BY1 (0 << 8)
#define BXT_DSIA_16X_BY2 (1 << 8)
#define BXT_DSIA_16X_BY3 (2 << 8)
#define BXT_DSIA_16X_BY4 (3 << 8)
#define BXT_DSIA_16X_MASK (3 << 8)
#define BXT_DSI_FREQ_SEL_SHIFT 8
#define BXT_DSI_FREQ_SEL_MASK (0xF << BXT_DSI_FREQ_SEL_SHIFT)

#define BXT_DSI_PLL_RATIO_MAX 0x7D
#define BXT_DSI_PLL_RATIO_MIN 0x22
#define GLK_DSI_PLL_RATIO_MAX 0x6F
#define GLK_DSI_PLL_RATIO_MIN 0x22
#define BXT_DSI_PLL_RATIO_MASK 0xFF
#define BXT_REF_CLOCK_KHZ 19200

#define BXT_DSI_PLL_ENABLE _MMIO(0x46080)
#define BXT_DSI_PLL_DO_ENABLE (1 << 31)
#define BXT_DSI_PLL_LOCKED (1 << 30)

#endif /* __VLV_DSI_PLL_REGS_H__ */
drivers/gpu/drm/i915/display/vlv_dsi_regs.h (new file, 480 lines)
@@ -0,0 +1,480 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2022 Intel Corporation
 */

#ifndef __VLV_DSI_REGS_H__
#define __VLV_DSI_REGS_H__

#include "i915_reg_defs.h"

#define VLV_MIPI_BASE	VLV_DISPLAY_BASE
#define BXT_MIPI_BASE	0x60000

#define _MIPI_PORT(port, a, c)	(((port) == PORT_A) ? a : c)	/* ports A and C only */
#define _MMIO_MIPI(port, a, c)	_MMIO(_MIPI_PORT(port, a, c))

/* BXT MIPI mode configure */
#define _BXT_MIPIA_TRANS_HACTIVE	0x6B0F8
#define _BXT_MIPIC_TRANS_HACTIVE	0x6B8F8
#define BXT_MIPI_TRANS_HACTIVE(tc)	_MMIO_MIPI(tc, \
		_BXT_MIPIA_TRANS_HACTIVE, _BXT_MIPIC_TRANS_HACTIVE)

#define _BXT_MIPIA_TRANS_VACTIVE	0x6B0FC
#define _BXT_MIPIC_TRANS_VACTIVE	0x6B8FC
#define BXT_MIPI_TRANS_VACTIVE(tc)	_MMIO_MIPI(tc, \
		_BXT_MIPIA_TRANS_VACTIVE, _BXT_MIPIC_TRANS_VACTIVE)

#define _BXT_MIPIA_TRANS_VTOTAL		0x6B100
#define _BXT_MIPIC_TRANS_VTOTAL		0x6B900
#define BXT_MIPI_TRANS_VTOTAL(tc)	_MMIO_MIPI(tc, \
		_BXT_MIPIA_TRANS_VTOTAL, _BXT_MIPIC_TRANS_VTOTAL)

#define BXT_P_DSI_REGULATOR_CFG		_MMIO(0x160020)
#define  STAP_SELECT			(1 << 0)

#define BXT_P_DSI_REGULATOR_TX_CTRL	_MMIO(0x160054)
#define  HS_IO_CTRL_SELECT		(1 << 0)

#define _MIPIA_PORT_CTRL	(VLV_DISPLAY_BASE + 0x61190)
#define _MIPIC_PORT_CTRL	(VLV_DISPLAY_BASE + 0x61700)
#define MIPI_PORT_CTRL(port)	_MMIO_MIPI(port, _MIPIA_PORT_CTRL, _MIPIC_PORT_CTRL)

/* BXT port control */
#define _BXT_MIPIA_PORT_CTRL	0x6B0C0
#define _BXT_MIPIC_PORT_CTRL	0x6B8C0
#define BXT_MIPI_PORT_CTRL(tc)	_MMIO_MIPI(tc, _BXT_MIPIA_PORT_CTRL, _BXT_MIPIC_PORT_CTRL)

#define  DPI_ENABLE		(1 << 31)	/* A + C */
#define  MIPIA_MIPI4DPHY_DELAY_COUNT_SHIFT	27
#define  MIPIA_MIPI4DPHY_DELAY_COUNT_MASK	(0xf << 27)
#define  DUAL_LINK_MODE_SHIFT	26
#define  DUAL_LINK_MODE_MASK	(1 << 26)
#define  DUAL_LINK_MODE_FRONT_BACK	(0 << 26)
#define  DUAL_LINK_MODE_PIXEL_ALTERNATIVE	(1 << 26)
#define  DITHERING_ENABLE	(1 << 25)	/* A + C */
#define  FLOPPED_HSTX		(1 << 23)
#define  DE_INVERT		(1 << 19)	/* XXX */
#define  MIPIA_FLISDSI_DELAY_COUNT_SHIFT	18
#define  MIPIA_FLISDSI_DELAY_COUNT_MASK		(0xf << 18)
#define  AFE_LATCHOUT		(1 << 17)
#define  LP_OUTPUT_HOLD		(1 << 16)
#define  MIPIC_FLISDSI_DELAY_COUNT_HIGH_SHIFT	15
#define  MIPIC_FLISDSI_DELAY_COUNT_HIGH_MASK	(1 << 15)
#define  MIPIC_MIPI4DPHY_DELAY_COUNT_SHIFT	11
#define  MIPIC_MIPI4DPHY_DELAY_COUNT_MASK	(0xf << 11)
#define  CSB_SHIFT		9
#define  CSB_MASK		(3 << 9)
#define  CSB_20MHZ		(0 << 9)
#define  CSB_10MHZ		(1 << 9)
#define  CSB_40MHZ		(2 << 9)
#define  BANDGAP_MASK		(1 << 8)
#define  BANDGAP_PNW_CIRCUIT	(0 << 8)
#define  BANDGAP_LNC_CIRCUIT	(1 << 8)
#define  MIPIC_FLISDSI_DELAY_COUNT_LOW_SHIFT	5
#define  MIPIC_FLISDSI_DELAY_COUNT_LOW_MASK	(7 << 5)
#define  TEARING_EFFECT_DELAY	(1 << 4)	/* A + C */
#define  TEARING_EFFECT_SHIFT	2		/* A + C */
#define  TEARING_EFFECT_MASK	(3 << 2)
#define  TEARING_EFFECT_OFF	(0 << 2)
#define  TEARING_EFFECT_DSI	(1 << 2)
#define  TEARING_EFFECT_GPIO	(2 << 2)
#define  LANE_CONFIGURATION_SHIFT	0
#define  LANE_CONFIGURATION_MASK	(3 << 0)
#define  LANE_CONFIGURATION_4LANE	(0 << 0)
#define  LANE_CONFIGURATION_DUAL_LINK_A	(1 << 0)
#define  LANE_CONFIGURATION_DUAL_LINK_B	(2 << 0)

#define _MIPIA_TEARING_CTRL	(VLV_DISPLAY_BASE + 0x61194)
#define _MIPIC_TEARING_CTRL	(VLV_DISPLAY_BASE + 0x61704)
#define MIPI_TEARING_CTRL(port)	_MMIO_MIPI(port, _MIPIA_TEARING_CTRL, _MIPIC_TEARING_CTRL)
#define  TEARING_EFFECT_DELAY_SHIFT	0
#define  TEARING_EFFECT_DELAY_MASK	(0xffff << 0)

/* XXX: all bits reserved */
#define _MIPIA_AUTOPWG		(VLV_DISPLAY_BASE + 0x611a0)

/* MIPI DSI Controller and D-PHY registers */

#define _MIPIA_DEVICE_READY	(dev_priv->mipi_mmio_base + 0xb000)
#define _MIPIC_DEVICE_READY	(dev_priv->mipi_mmio_base + 0xb800)
#define MIPI_DEVICE_READY(port)	_MMIO_MIPI(port, _MIPIA_DEVICE_READY, _MIPIC_DEVICE_READY)
#define  BUS_POSSESSION		(1 << 3)	/* set to give bus to receiver */
#define  ULPS_STATE_MASK	(3 << 1)
#define  ULPS_STATE_ENTER	(2 << 1)
#define  ULPS_STATE_EXIT	(1 << 1)
#define  ULPS_STATE_NORMAL_OPERATION	(0 << 1)
#define  DEVICE_READY		(1 << 0)

#define _MIPIA_INTR_STAT	(dev_priv->mipi_mmio_base + 0xb004)
#define _MIPIC_INTR_STAT	(dev_priv->mipi_mmio_base + 0xb804)
#define MIPI_INTR_STAT(port)	_MMIO_MIPI(port, _MIPIA_INTR_STAT, _MIPIC_INTR_STAT)
#define _MIPIA_INTR_EN		(dev_priv->mipi_mmio_base + 0xb008)
#define _MIPIC_INTR_EN		(dev_priv->mipi_mmio_base + 0xb808)
#define MIPI_INTR_EN(port)	_MMIO_MIPI(port, _MIPIA_INTR_EN, _MIPIC_INTR_EN)
#define  TEARING_EFFECT		(1 << 31)
#define  SPL_PKT_SENT_INTERRUPT	(1 << 30)
#define  GEN_READ_DATA_AVAIL	(1 << 29)
#define  LP_GENERIC_WR_FIFO_FULL	(1 << 28)
#define  HS_GENERIC_WR_FIFO_FULL	(1 << 27)
#define  RX_PROT_VIOLATION	(1 << 26)
#define  RX_INVALID_TX_LENGTH	(1 << 25)
#define  ACK_WITH_NO_ERROR	(1 << 24)
#define  TURN_AROUND_ACK_TIMEOUT	(1 << 23)
#define  LP_RX_TIMEOUT		(1 << 22)
#define  HS_TX_TIMEOUT		(1 << 21)
#define  DPI_FIFO_UNDERRUN	(1 << 20)
#define  LOW_CONTENTION		(1 << 19)
#define  HIGH_CONTENTION	(1 << 18)
#define  TXDSI_VC_ID_INVALID	(1 << 17)
#define  TXDSI_DATA_TYPE_NOT_RECOGNISED	(1 << 16)
#define  TXCHECKSUM_ERROR	(1 << 15)
#define  TXECC_MULTIBIT_ERROR	(1 << 14)
#define  TXECC_SINGLE_BIT_ERROR	(1 << 13)
#define  TXFALSE_CONTROL_ERROR	(1 << 12)
#define  RXDSI_VC_ID_INVALID	(1 << 11)
#define  RXDSI_DATA_TYPE_NOT_REGOGNISED	(1 << 10)
#define  RXCHECKSUM_ERROR	(1 << 9)
#define  RXECC_MULTIBIT_ERROR	(1 << 8)
#define  RXECC_SINGLE_BIT_ERROR	(1 << 7)
#define  RXFALSE_CONTROL_ERROR	(1 << 6)
#define  RXHS_RECEIVE_TIMEOUT_ERROR	(1 << 5)
#define  RX_LP_TX_SYNC_ERROR	(1 << 4)
#define  RXEXCAPE_MODE_ENTRY_ERROR	(1 << 3)
#define  RXEOT_SYNC_ERROR	(1 << 2)
#define  RXSOT_SYNC_ERROR	(1 << 1)
#define  RXSOT_ERROR		(1 << 0)

#define _MIPIA_DSI_FUNC_PRG	(dev_priv->mipi_mmio_base + 0xb00c)
#define _MIPIC_DSI_FUNC_PRG	(dev_priv->mipi_mmio_base + 0xb80c)
#define MIPI_DSI_FUNC_PRG(port)	_MMIO_MIPI(port, _MIPIA_DSI_FUNC_PRG, _MIPIC_DSI_FUNC_PRG)
#define  CMD_MODE_DATA_WIDTH_MASK	(7 << 13)
#define  CMD_MODE_NOT_SUPPORTED		(0 << 13)
#define  CMD_MODE_DATA_WIDTH_16_BIT	(1 << 13)
#define  CMD_MODE_DATA_WIDTH_9_BIT	(2 << 13)
#define  CMD_MODE_DATA_WIDTH_8_BIT	(3 << 13)
#define  CMD_MODE_DATA_WIDTH_OPTION1	(4 << 13)
#define  CMD_MODE_DATA_WIDTH_OPTION2	(5 << 13)
#define  VID_MODE_FORMAT_MASK		(0xf << 7)
#define  VID_MODE_NOT_SUPPORTED		(0 << 7)
#define  VID_MODE_FORMAT_RGB565		(1 << 7)
#define  VID_MODE_FORMAT_RGB666_PACKED	(2 << 7)
#define  VID_MODE_FORMAT_RGB666		(3 << 7)
#define  VID_MODE_FORMAT_RGB888		(4 << 7)
#define  CMD_MODE_CHANNEL_NUMBER_SHIFT	5
#define  CMD_MODE_CHANNEL_NUMBER_MASK	(3 << 5)
#define  VID_MODE_CHANNEL_NUMBER_SHIFT	3
#define  VID_MODE_CHANNEL_NUMBER_MASK	(3 << 3)
#define  DATA_LANES_PRG_REG_SHIFT	0
#define  DATA_LANES_PRG_REG_MASK	(7 << 0)

#define _MIPIA_HS_TX_TIMEOUT	(dev_priv->mipi_mmio_base + 0xb010)
#define _MIPIC_HS_TX_TIMEOUT	(dev_priv->mipi_mmio_base + 0xb810)
#define MIPI_HS_TX_TIMEOUT(port)	_MMIO_MIPI(port, _MIPIA_HS_TX_TIMEOUT, _MIPIC_HS_TX_TIMEOUT)
#define  HIGH_SPEED_TX_TIMEOUT_COUNTER_MASK	0xffffff

#define _MIPIA_LP_RX_TIMEOUT	(dev_priv->mipi_mmio_base + 0xb014)
#define _MIPIC_LP_RX_TIMEOUT	(dev_priv->mipi_mmio_base + 0xb814)
#define MIPI_LP_RX_TIMEOUT(port)	_MMIO_MIPI(port, _MIPIA_LP_RX_TIMEOUT, _MIPIC_LP_RX_TIMEOUT)
#define  LOW_POWER_RX_TIMEOUT_COUNTER_MASK	0xffffff

#define _MIPIA_TURN_AROUND_TIMEOUT	(dev_priv->mipi_mmio_base + 0xb018)
#define _MIPIC_TURN_AROUND_TIMEOUT	(dev_priv->mipi_mmio_base + 0xb818)
#define MIPI_TURN_AROUND_TIMEOUT(port)	_MMIO_MIPI(port, _MIPIA_TURN_AROUND_TIMEOUT, _MIPIC_TURN_AROUND_TIMEOUT)
#define  TURN_AROUND_TIMEOUT_MASK	0x3f

#define _MIPIA_DEVICE_RESET_TIMER	(dev_priv->mipi_mmio_base + 0xb01c)
#define _MIPIC_DEVICE_RESET_TIMER	(dev_priv->mipi_mmio_base + 0xb81c)
#define MIPI_DEVICE_RESET_TIMER(port)	_MMIO_MIPI(port, _MIPIA_DEVICE_RESET_TIMER, _MIPIC_DEVICE_RESET_TIMER)
#define  DEVICE_RESET_TIMER_MASK	0xffff

#define _MIPIA_DPI_RESOLUTION	(dev_priv->mipi_mmio_base + 0xb020)
#define _MIPIC_DPI_RESOLUTION	(dev_priv->mipi_mmio_base + 0xb820)
#define MIPI_DPI_RESOLUTION(port)	_MMIO_MIPI(port, _MIPIA_DPI_RESOLUTION, _MIPIC_DPI_RESOLUTION)
#define  VERTICAL_ADDRESS_SHIFT		16
#define  VERTICAL_ADDRESS_MASK		(0xffff << 16)
#define  HORIZONTAL_ADDRESS_SHIFT	0
#define  HORIZONTAL_ADDRESS_MASK	0xffff

#define _MIPIA_DBI_FIFO_THROTTLE	(dev_priv->mipi_mmio_base + 0xb024)
#define _MIPIC_DBI_FIFO_THROTTLE	(dev_priv->mipi_mmio_base + 0xb824)
#define MIPI_DBI_FIFO_THROTTLE(port)	_MMIO_MIPI(port, _MIPIA_DBI_FIFO_THROTTLE, _MIPIC_DBI_FIFO_THROTTLE)
#define  DBI_FIFO_EMPTY_HALF		(0 << 0)
#define  DBI_FIFO_EMPTY_QUARTER		(1 << 0)
#define  DBI_FIFO_EMPTY_7_LOCATIONS	(2 << 0)

/* regs below are bits 15:0 */
#define _MIPIA_HSYNC_PADDING_COUNT	(dev_priv->mipi_mmio_base + 0xb028)
#define _MIPIC_HSYNC_PADDING_COUNT	(dev_priv->mipi_mmio_base + 0xb828)
#define MIPI_HSYNC_PADDING_COUNT(port)	_MMIO_MIPI(port, _MIPIA_HSYNC_PADDING_COUNT, _MIPIC_HSYNC_PADDING_COUNT)

#define _MIPIA_HBP_COUNT	(dev_priv->mipi_mmio_base + 0xb02c)
#define _MIPIC_HBP_COUNT	(dev_priv->mipi_mmio_base + 0xb82c)
#define MIPI_HBP_COUNT(port)	_MMIO_MIPI(port, _MIPIA_HBP_COUNT, _MIPIC_HBP_COUNT)

#define _MIPIA_HFP_COUNT	(dev_priv->mipi_mmio_base + 0xb030)
#define _MIPIC_HFP_COUNT	(dev_priv->mipi_mmio_base + 0xb830)
#define MIPI_HFP_COUNT(port)	_MMIO_MIPI(port, _MIPIA_HFP_COUNT, _MIPIC_HFP_COUNT)

#define _MIPIA_HACTIVE_AREA_COUNT	(dev_priv->mipi_mmio_base + 0xb034)
#define _MIPIC_HACTIVE_AREA_COUNT	(dev_priv->mipi_mmio_base + 0xb834)
#define MIPI_HACTIVE_AREA_COUNT(port)	_MMIO_MIPI(port, _MIPIA_HACTIVE_AREA_COUNT, _MIPIC_HACTIVE_AREA_COUNT)

#define _MIPIA_VSYNC_PADDING_COUNT	(dev_priv->mipi_mmio_base + 0xb038)
#define _MIPIC_VSYNC_PADDING_COUNT	(dev_priv->mipi_mmio_base + 0xb838)
#define MIPI_VSYNC_PADDING_COUNT(port)	_MMIO_MIPI(port, _MIPIA_VSYNC_PADDING_COUNT, _MIPIC_VSYNC_PADDING_COUNT)

#define _MIPIA_VBP_COUNT	(dev_priv->mipi_mmio_base + 0xb03c)
#define _MIPIC_VBP_COUNT	(dev_priv->mipi_mmio_base + 0xb83c)
#define MIPI_VBP_COUNT(port)	_MMIO_MIPI(port, _MIPIA_VBP_COUNT, _MIPIC_VBP_COUNT)

#define _MIPIA_VFP_COUNT	(dev_priv->mipi_mmio_base + 0xb040)
#define _MIPIC_VFP_COUNT	(dev_priv->mipi_mmio_base + 0xb840)
#define MIPI_VFP_COUNT(port)	_MMIO_MIPI(port, _MIPIA_VFP_COUNT, _MIPIC_VFP_COUNT)

#define _MIPIA_HIGH_LOW_SWITCH_COUNT	(dev_priv->mipi_mmio_base + 0xb044)
#define _MIPIC_HIGH_LOW_SWITCH_COUNT	(dev_priv->mipi_mmio_base + 0xb844)
#define MIPI_HIGH_LOW_SWITCH_COUNT(port)	_MMIO_MIPI(port, _MIPIA_HIGH_LOW_SWITCH_COUNT, _MIPIC_HIGH_LOW_SWITCH_COUNT)

#define _MIPIA_DPI_CONTROL	(dev_priv->mipi_mmio_base + 0xb048)
#define _MIPIC_DPI_CONTROL	(dev_priv->mipi_mmio_base + 0xb848)
#define MIPI_DPI_CONTROL(port)	_MMIO_MIPI(port, _MIPIA_DPI_CONTROL, _MIPIC_DPI_CONTROL)
#define  DPI_LP_MODE		(1 << 6)
#define  BACKLIGHT_OFF		(1 << 5)
#define  BACKLIGHT_ON		(1 << 4)
#define  COLOR_MODE_OFF		(1 << 3)
#define  COLOR_MODE_ON		(1 << 2)
#define  TURN_ON		(1 << 1)
#define  SHUTDOWN		(1 << 0)

#define _MIPIA_DPI_DATA		(dev_priv->mipi_mmio_base + 0xb04c)
#define _MIPIC_DPI_DATA		(dev_priv->mipi_mmio_base + 0xb84c)
#define MIPI_DPI_DATA(port)	_MMIO_MIPI(port, _MIPIA_DPI_DATA, _MIPIC_DPI_DATA)
#define  COMMAND_BYTE_SHIFT	0
#define  COMMAND_BYTE_MASK	(0x3f << 0)

#define _MIPIA_INIT_COUNT	(dev_priv->mipi_mmio_base + 0xb050)
#define _MIPIC_INIT_COUNT	(dev_priv->mipi_mmio_base + 0xb850)
#define MIPI_INIT_COUNT(port)	_MMIO_MIPI(port, _MIPIA_INIT_COUNT, _MIPIC_INIT_COUNT)
#define  MASTER_INIT_TIMER_SHIFT	0
#define  MASTER_INIT_TIMER_MASK		(0xffff << 0)

#define _MIPIA_MAX_RETURN_PKT_SIZE	(dev_priv->mipi_mmio_base + 0xb054)
#define _MIPIC_MAX_RETURN_PKT_SIZE	(dev_priv->mipi_mmio_base + 0xb854)
#define MIPI_MAX_RETURN_PKT_SIZE(port)	_MMIO_MIPI(port, \
		_MIPIA_MAX_RETURN_PKT_SIZE, _MIPIC_MAX_RETURN_PKT_SIZE)
#define  MAX_RETURN_PKT_SIZE_SHIFT	0
#define  MAX_RETURN_PKT_SIZE_MASK	(0x3ff << 0)

#define _MIPIA_VIDEO_MODE_FORMAT	(dev_priv->mipi_mmio_base + 0xb058)
#define _MIPIC_VIDEO_MODE_FORMAT	(dev_priv->mipi_mmio_base + 0xb858)
#define MIPI_VIDEO_MODE_FORMAT(port)	_MMIO_MIPI(port, _MIPIA_VIDEO_MODE_FORMAT, _MIPIC_VIDEO_MODE_FORMAT)
#define  RANDOM_DPI_DISPLAY_RESOLUTION	(1 << 4)
#define  DISABLE_VIDEO_BTA		(1 << 3)
#define  IP_TG_CONFIG			(1 << 2)
#define  VIDEO_MODE_NON_BURST_WITH_SYNC_PULSE	(1 << 0)
#define  VIDEO_MODE_NON_BURST_WITH_SYNC_EVENTS	(2 << 0)
#define  VIDEO_MODE_BURST		(3 << 0)

#define _MIPIA_EOT_DISABLE	(dev_priv->mipi_mmio_base + 0xb05c)
#define _MIPIC_EOT_DISABLE	(dev_priv->mipi_mmio_base + 0xb85c)
#define MIPI_EOT_DISABLE(port)	_MMIO_MIPI(port, _MIPIA_EOT_DISABLE, _MIPIC_EOT_DISABLE)
#define  BXT_DEFEATURE_DPI_FIFO_CTR	(1 << 9)
#define  BXT_DPHY_DEFEATURE_EN		(1 << 8)
#define  LP_RX_TIMEOUT_ERROR_RECOVERY_DISABLE	(1 << 7)
#define  HS_RX_TIMEOUT_ERROR_RECOVERY_DISABLE	(1 << 6)
#define  LOW_CONTENTION_RECOVERY_DISABLE	(1 << 5)
#define  HIGH_CONTENTION_RECOVERY_DISABLE	(1 << 4)
#define  TXDSI_TYPE_NOT_RECOGNISED_ERROR_RECOVERY_DISABLE	(1 << 3)
#define  TXECC_MULTIBIT_ERROR_RECOVERY_DISABLE	(1 << 2)
#define  CLOCKSTOP			(1 << 1)
#define  EOT_DISABLE			(1 << 0)

#define _MIPIA_LP_BYTECLK	(dev_priv->mipi_mmio_base + 0xb060)
#define _MIPIC_LP_BYTECLK	(dev_priv->mipi_mmio_base + 0xb860)
#define MIPI_LP_BYTECLK(port)	_MMIO_MIPI(port, _MIPIA_LP_BYTECLK, _MIPIC_LP_BYTECLK)
#define  LP_BYTECLK_SHIFT	0
#define  LP_BYTECLK_MASK	(0xffff << 0)

#define _MIPIA_TLPX_TIME_COUNT	(dev_priv->mipi_mmio_base + 0xb0a4)
#define _MIPIC_TLPX_TIME_COUNT	(dev_priv->mipi_mmio_base + 0xb8a4)
#define MIPI_TLPX_TIME_COUNT(port)	_MMIO_MIPI(port, _MIPIA_TLPX_TIME_COUNT, _MIPIC_TLPX_TIME_COUNT)

#define _MIPIA_CLK_LANE_TIMING	(dev_priv->mipi_mmio_base + 0xb098)
#define _MIPIC_CLK_LANE_TIMING	(dev_priv->mipi_mmio_base + 0xb898)
#define MIPI_CLK_LANE_TIMING(port)	_MMIO_MIPI(port, _MIPIA_CLK_LANE_TIMING, _MIPIC_CLK_LANE_TIMING)

/* bits 31:0 */
#define _MIPIA_LP_GEN_DATA	(dev_priv->mipi_mmio_base + 0xb064)
#define _MIPIC_LP_GEN_DATA	(dev_priv->mipi_mmio_base + 0xb864)
#define MIPI_LP_GEN_DATA(port)	_MMIO_MIPI(port, _MIPIA_LP_GEN_DATA, _MIPIC_LP_GEN_DATA)

/* bits 31:0 */
#define _MIPIA_HS_GEN_DATA	(dev_priv->mipi_mmio_base + 0xb068)
#define _MIPIC_HS_GEN_DATA	(dev_priv->mipi_mmio_base + 0xb868)
#define MIPI_HS_GEN_DATA(port)	_MMIO_MIPI(port, _MIPIA_HS_GEN_DATA, _MIPIC_HS_GEN_DATA)

#define _MIPIA_LP_GEN_CTRL	(dev_priv->mipi_mmio_base + 0xb06c)
#define _MIPIC_LP_GEN_CTRL	(dev_priv->mipi_mmio_base + 0xb86c)
#define MIPI_LP_GEN_CTRL(port)	_MMIO_MIPI(port, _MIPIA_LP_GEN_CTRL, _MIPIC_LP_GEN_CTRL)
#define _MIPIA_HS_GEN_CTRL	(dev_priv->mipi_mmio_base + 0xb070)
#define _MIPIC_HS_GEN_CTRL	(dev_priv->mipi_mmio_base + 0xb870)
#define MIPI_HS_GEN_CTRL(port)	_MMIO_MIPI(port, _MIPIA_HS_GEN_CTRL, _MIPIC_HS_GEN_CTRL)
#define  LONG_PACKET_WORD_COUNT_SHIFT	8
#define  LONG_PACKET_WORD_COUNT_MASK	(0xffff << 8)
#define  SHORT_PACKET_PARAM_SHIFT	8
#define  SHORT_PACKET_PARAM_MASK	(0xffff << 8)
#define  VIRTUAL_CHANNEL_SHIFT		6
#define  VIRTUAL_CHANNEL_MASK		(3 << 6)
#define  DATA_TYPE_SHIFT		0
#define  DATA_TYPE_MASK			(0x3f << 0)
/* data type values, see include/video/mipi_display.h */

#define _MIPIA_GEN_FIFO_STAT	(dev_priv->mipi_mmio_base + 0xb074)
#define _MIPIC_GEN_FIFO_STAT	(dev_priv->mipi_mmio_base + 0xb874)
#define MIPI_GEN_FIFO_STAT(port)	_MMIO_MIPI(port, _MIPIA_GEN_FIFO_STAT, _MIPIC_GEN_FIFO_STAT)
#define  DPI_FIFO_EMPTY		(1 << 28)
#define  DBI_FIFO_EMPTY		(1 << 27)
#define  LP_CTRL_FIFO_EMPTY	(1 << 26)
#define  LP_CTRL_FIFO_HALF_EMPTY	(1 << 25)
#define  LP_CTRL_FIFO_FULL	(1 << 24)
#define  HS_CTRL_FIFO_EMPTY	(1 << 18)
#define  HS_CTRL_FIFO_HALF_EMPTY	(1 << 17)
#define  HS_CTRL_FIFO_FULL	(1 << 16)
#define  LP_DATA_FIFO_EMPTY	(1 << 10)
#define  LP_DATA_FIFO_HALF_EMPTY	(1 << 9)
#define  LP_DATA_FIFO_FULL	(1 << 8)
#define  HS_DATA_FIFO_EMPTY	(1 << 2)
#define  HS_DATA_FIFO_HALF_EMPTY	(1 << 1)
#define  HS_DATA_FIFO_FULL	(1 << 0)

#define _MIPIA_HS_LS_DBI_ENABLE	(dev_priv->mipi_mmio_base + 0xb078)
#define _MIPIC_HS_LS_DBI_ENABLE	(dev_priv->mipi_mmio_base + 0xb878)
#define MIPI_HS_LP_DBI_ENABLE(port)	_MMIO_MIPI(port, _MIPIA_HS_LS_DBI_ENABLE, _MIPIC_HS_LS_DBI_ENABLE)
#define  DBI_HS_LP_MODE_MASK	(1 << 0)
#define  DBI_LP_MODE		(1 << 0)
#define  DBI_HS_MODE		(0 << 0)

#define _MIPIA_DPHY_PARAM	(dev_priv->mipi_mmio_base + 0xb080)
#define _MIPIC_DPHY_PARAM	(dev_priv->mipi_mmio_base + 0xb880)
#define MIPI_DPHY_PARAM(port)	_MMIO_MIPI(port, _MIPIA_DPHY_PARAM, _MIPIC_DPHY_PARAM)
#define  EXIT_ZERO_COUNT_SHIFT	24
#define  EXIT_ZERO_COUNT_MASK	(0x3f << 24)
#define  TRAIL_COUNT_SHIFT	16
#define  TRAIL_COUNT_MASK	(0x1f << 16)
#define  CLK_ZERO_COUNT_SHIFT	8
#define  CLK_ZERO_COUNT_MASK	(0xff << 8)
#define  PREPARE_COUNT_SHIFT	0
#define  PREPARE_COUNT_MASK	(0x3f << 0)

#define _MIPIA_DBI_BW_CTRL	(dev_priv->mipi_mmio_base + 0xb084)
#define _MIPIC_DBI_BW_CTRL	(dev_priv->mipi_mmio_base + 0xb884)
#define MIPI_DBI_BW_CTRL(port)	_MMIO_MIPI(port, _MIPIA_DBI_BW_CTRL, _MIPIC_DBI_BW_CTRL)

#define _MIPIA_CLK_LANE_SWITCH_TIME_CNT	(dev_priv->mipi_mmio_base + 0xb088)
#define _MIPIC_CLK_LANE_SWITCH_TIME_CNT	(dev_priv->mipi_mmio_base + 0xb888)
#define MIPI_CLK_LANE_SWITCH_TIME_CNT(port)	_MMIO_MIPI(port, _MIPIA_CLK_LANE_SWITCH_TIME_CNT, _MIPIC_CLK_LANE_SWITCH_TIME_CNT)
#define  LP_HS_SSW_CNT_SHIFT	16
#define  LP_HS_SSW_CNT_MASK	(0xffff << 16)
#define  HS_LP_PWR_SW_CNT_SHIFT	0
#define  HS_LP_PWR_SW_CNT_MASK	(0xffff << 0)

#define _MIPIA_STOP_STATE_STALL	(dev_priv->mipi_mmio_base + 0xb08c)
#define _MIPIC_STOP_STATE_STALL	(dev_priv->mipi_mmio_base + 0xb88c)
#define MIPI_STOP_STATE_STALL(port)	_MMIO_MIPI(port, _MIPIA_STOP_STATE_STALL, _MIPIC_STOP_STATE_STALL)
#define  STOP_STATE_STALL_COUNTER_SHIFT	0
#define  STOP_STATE_STALL_COUNTER_MASK	(0xff << 0)

#define _MIPIA_INTR_STAT_REG_1	(dev_priv->mipi_mmio_base + 0xb090)
#define _MIPIC_INTR_STAT_REG_1	(dev_priv->mipi_mmio_base + 0xb890)
#define MIPI_INTR_STAT_REG_1(port)	_MMIO_MIPI(port, _MIPIA_INTR_STAT_REG_1, _MIPIC_INTR_STAT_REG_1)
#define _MIPIA_INTR_EN_REG_1	(dev_priv->mipi_mmio_base + 0xb094)
#define _MIPIC_INTR_EN_REG_1	(dev_priv->mipi_mmio_base + 0xb894)
#define MIPI_INTR_EN_REG_1(port)	_MMIO_MIPI(port, _MIPIA_INTR_EN_REG_1, _MIPIC_INTR_EN_REG_1)
#define  RX_CONTENTION_DETECTED	(1 << 0)

/* XXX: only pipe A ?!? */
#define MIPIA_DBI_TYPEC_CTRL	(dev_priv->mipi_mmio_base + 0xb100)
#define  DBI_TYPEC_ENABLE	(1 << 31)
#define  DBI_TYPEC_WIP		(1 << 30)
#define  DBI_TYPEC_OPTION_SHIFT	28
#define  DBI_TYPEC_OPTION_MASK	(3 << 28)
#define  DBI_TYPEC_FREQ_SHIFT	24
#define  DBI_TYPEC_FREQ_MASK	(0xf << 24)
#define  DBI_TYPEC_OVERRIDE	(1 << 8)
#define  DBI_TYPEC_OVERRIDE_COUNTER_SHIFT	0
#define  DBI_TYPEC_OVERRIDE_COUNTER_MASK	(0xff << 0)

/* MIPI adapter registers */

#define _MIPIA_CTRL		(dev_priv->mipi_mmio_base + 0xb104)
#define _MIPIC_CTRL		(dev_priv->mipi_mmio_base + 0xb904)
#define MIPI_CTRL(port)		_MMIO_MIPI(port, _MIPIA_CTRL, _MIPIC_CTRL)
#define  ESCAPE_CLOCK_DIVIDER_SHIFT	5	/* A only */
#define  ESCAPE_CLOCK_DIVIDER_MASK	(3 << 5)
#define  ESCAPE_CLOCK_DIVIDER_1		(0 << 5)
#define  ESCAPE_CLOCK_DIVIDER_2		(1 << 5)
#define  ESCAPE_CLOCK_DIVIDER_4		(2 << 5)
#define  READ_REQUEST_PRIORITY_SHIFT	3
#define  READ_REQUEST_PRIORITY_MASK	(3 << 3)
#define  READ_REQUEST_PRIORITY_LOW	(0 << 3)
#define  READ_REQUEST_PRIORITY_HIGH	(3 << 3)
#define  RGB_FLIP_TO_BGR		(1 << 2)

#define  BXT_PIPE_SELECT_SHIFT		7
#define  BXT_PIPE_SELECT_MASK		(7 << 7)
#define  BXT_PIPE_SELECT(pipe)		((pipe) << 7)
#define  GLK_PHY_STATUS_PORT_READY	(1 << 31)	/* RO */
#define  GLK_ULPS_NOT_ACTIVE		(1 << 30)	/* RO */
#define  GLK_MIPIIO_RESET_RELEASED	(1 << 28)
#define  GLK_CLOCK_LANE_STOP_STATE	(1 << 27)	/* RO */
#define  GLK_DATA_LANE_STOP_STATE	(1 << 26)	/* RO */
#define  GLK_LP_WAKE			(1 << 22)
#define  GLK_LP11_LOW_PWR_MODE		(1 << 21)
#define  GLK_LP00_LOW_PWR_MODE		(1 << 20)
#define  GLK_FIREWALL_ENABLE		(1 << 16)
#define  BXT_PIXEL_OVERLAP_CNT_MASK	(0xf << 10)
#define  BXT_PIXEL_OVERLAP_CNT_SHIFT	10
#define  BXT_DSC_ENABLE			(1 << 3)
#define  BXT_RGB_FLIP			(1 << 2)
#define  GLK_MIPIIO_PORT_POWERED	(1 << 1)	/* RO */
#define  GLK_MIPIIO_ENABLE		(1 << 0)

#define _MIPIA_DATA_ADDRESS	(dev_priv->mipi_mmio_base + 0xb108)
#define _MIPIC_DATA_ADDRESS	(dev_priv->mipi_mmio_base + 0xb908)
#define MIPI_DATA_ADDRESS(port)	_MMIO_MIPI(port, _MIPIA_DATA_ADDRESS, _MIPIC_DATA_ADDRESS)
#define  DATA_MEM_ADDRESS_SHIFT	5
#define  DATA_MEM_ADDRESS_MASK	(0x7ffffff << 5)
#define  DATA_VALID		(1 << 0)

#define _MIPIA_DATA_LENGTH	(dev_priv->mipi_mmio_base + 0xb10c)
#define _MIPIC_DATA_LENGTH	(dev_priv->mipi_mmio_base + 0xb90c)
#define MIPI_DATA_LENGTH(port)	_MMIO_MIPI(port, _MIPIA_DATA_LENGTH, _MIPIC_DATA_LENGTH)
#define  DATA_LENGTH_SHIFT	0
#define  DATA_LENGTH_MASK	(0xfffff << 0)

#define _MIPIA_COMMAND_ADDRESS	(dev_priv->mipi_mmio_base + 0xb110)
#define _MIPIC_COMMAND_ADDRESS	(dev_priv->mipi_mmio_base + 0xb910)
#define MIPI_COMMAND_ADDRESS(port)	_MMIO_MIPI(port, _MIPIA_COMMAND_ADDRESS, _MIPIC_COMMAND_ADDRESS)
#define  COMMAND_MEM_ADDRESS_SHIFT	5
#define  COMMAND_MEM_ADDRESS_MASK	(0x7ffffff << 5)
#define  AUTO_PWG_ENABLE		(1 << 2)
#define  MEMORY_WRITE_DATA_FROM_PIPE_RENDERING	(1 << 1)
#define  COMMAND_VALID			(1 << 0)

#define _MIPIA_COMMAND_LENGTH	(dev_priv->mipi_mmio_base + 0xb114)
#define _MIPIC_COMMAND_LENGTH	(dev_priv->mipi_mmio_base + 0xb914)
#define MIPI_COMMAND_LENGTH(port)	_MMIO_MIPI(port, _MIPIA_COMMAND_LENGTH, _MIPIC_COMMAND_LENGTH)
#define  COMMAND_LENGTH_SHIFT(n)	(8 * (n))	/* n: 0...3 */
#define  COMMAND_LENGTH_MASK(n)		(0xff << (8 * (n)))

#define _MIPIA_READ_DATA_RETURN0	(dev_priv->mipi_mmio_base + 0xb118)
#define _MIPIC_READ_DATA_RETURN0	(dev_priv->mipi_mmio_base + 0xb918)
#define MIPI_READ_DATA_RETURN(port, n)	_MMIO(_MIPI_PORT(port, _MIPIA_READ_DATA_RETURN0, _MIPIC_READ_DATA_RETURN0) + 4 * (n))	/* n: 0...7 */

#define _MIPIA_READ_DATA_VALID	(dev_priv->mipi_mmio_base + 0xb138)
#define _MIPIC_READ_DATA_VALID	(dev_priv->mipi_mmio_base + 0xb938)
#define MIPI_READ_DATA_VALID(port)	_MMIO_MIPI(port, _MIPIA_READ_DATA_VALID, _MIPIC_READ_DATA_VALID)
#define  READ_DATA_VALID(n)	(1 << (n))

#endif /* __VLV_DSI_REGS_H__ */
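To make the pattern above concrete: every `MIPI_*` accessor funnels through `_MMIO_MIPI()`, which picks the port A or port C register instance. A minimal illustration using the usual i915 display helpers (`intel_de_read()`/`intel_de_write()`); the values written are an example, not driver code:

```c
/* Port A resolves to the 0xb000-based instance, port C to 0xb800. */
u32 ready = intel_de_read(dev_priv, MIPI_DEVICE_READY(PORT_A));

if (!(ready & DEVICE_READY))
	/* e.g. request ULPS exit and mark the device ready */
	intel_de_write(dev_priv, MIPI_DEVICE_READY(PORT_A),
		       ULPS_STATE_EXIT | DEVICE_READY);
```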
@@ -4,6 +4,8 @@
  * Copyright © 2016 Intel Corporation
  */
 
+#include <drm/drm_cache.h>
+
 #include "display/intel_frontbuffer.h"
 
 #include "i915_drv.h"
@@ -67,6 +67,7 @@
 #include <linux/log2.h>
 #include <linux/nospec.h>
 
+#include <drm/drm_cache.h>
 #include <drm/drm_syncobj.h>
 
 #include "gt/gen6_ppgtt.h"
@@ -79,6 +80,7 @@
 
 #include "pxp/intel_pxp.h"
 
+#include "i915_file_private.h"
 #include "i915_gem_context.h"
 #include "i915_trace.h"
 #include "i915_user_extensions.h"
@@ -343,6 +345,20 @@ static int proto_context_register(struct drm_i915_file_private *fpriv,
 	return ret;
 }
 
+static struct i915_address_space *
+i915_gem_vm_lookup(struct drm_i915_file_private *file_priv, u32 id)
+{
+	struct i915_address_space *vm;
+
+	xa_lock(&file_priv->vm_xa);
+	vm = xa_load(&file_priv->vm_xa, id);
+	if (vm)
+		kref_get(&vm->ref);
+	xa_unlock(&file_priv->vm_xa);
+
+	return vm;
+}
+
 static int set_proto_ctx_vm(struct drm_i915_file_private *fpriv,
 			    struct i915_gem_proto_context *pc,
 			    const struct drm_i915_gem_context_param *args)
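Note that the reference is taken under `xa_lock`, so the address space cannot be freed between `xa_load()` and `kref_get()`. A typical caller (illustrative only) pairs the lookup with `i915_vm_put()`:

```c
/* Illustrative caller: the returned reference must be dropped. */
struct i915_address_space *vm = i915_gem_vm_lookup(file_priv, id);

if (!vm)
	return -ENOENT;
/* ... use vm ... */
i915_vm_put(vm);
```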
@@ -571,10 +587,6 @@ set_proto_ctx_engines_parallel_submit(struct i915_user_extension __user *base,
 	struct intel_engine_cs **siblings = NULL;
 	intel_engine_mask_t prev_mask;
 
-	/* FIXME: This is NIY for execlists */
-	if (!(intel_uc_uses_guc_submission(&to_gt(i915)->uc)))
-		return -ENODEV;
-
 	if (get_user(slot, &ext->engine_index))
 		return -EFAULT;
 
@@ -584,6 +596,13 @@ set_proto_ctx_engines_parallel_submit(struct i915_user_extension __user *base,
 	if (get_user(num_siblings, &ext->num_siblings))
 		return -EFAULT;
 
+	if (!intel_uc_uses_guc_submission(&to_gt(i915)->uc) &&
+	    num_siblings != 1) {
+		drm_dbg(&i915->drm, "Only 1 sibling (%d) supported in non-GuC mode\n",
+			num_siblings);
+		return -EINVAL;
+	}
+
 	if (slot >= set->num_engines) {
 		drm_dbg(&i915->drm, "Invalid placement value, %d >= %d\n",
 			slot, set->num_engines);
@@ -174,7 +174,7 @@ i915_gem_context_get_eb_vm(struct i915_gem_context *ctx)
 
 	vm = ctx->vm;
 	if (!vm)
-		vm = &ctx->i915->ggtt.vm;
+		vm = &to_gt(ctx->i915)->ggtt->vm;
 	vm = i915_vm_get(vm);
 
 	return vm;
@@ -3,12 +3,15 @@
  * Copyright © 2020 Intel Corporation
  */
 
+#include <drm/drm_fourcc.h>
+
 #include "gem/i915_gem_ioctls.h"
 #include "gem/i915_gem_lmem.h"
 #include "gem/i915_gem_region.h"
 #include "pxp/intel_pxp.h"
 
 #include "i915_drv.h"
+#include "i915_gem_create.h"
 #include "i915_trace.h"
 #include "i915_user_extensions.h"
 
drivers/gpu/drm/i915/gem/i915_gem_create.h (new file, 17 lines)
@@ -0,0 +1,17 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2021 Intel Corporation
 */

#ifndef __I915_GEM_CREATE_H__
#define __I915_GEM_CREATE_H__

struct drm_file;
struct drm_device;
struct drm_mode_create_dumb;

int i915_gem_dumb_create(struct drm_file *file_priv,
			 struct drm_device *dev,
			 struct drm_mode_create_dumb *args);

#endif /* __I915_GEM_CREATE_H__ */
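This declaration exists so the dumb-buffer hook can live outside i915_gem.c; the driver wires it up as its `drm_driver.dumb_create` callback, roughly as in this sketch (field placement illustrative):

```c
/* Sketch: how the helper above is consumed by the driver structure. */
static const struct drm_driver example_drm_driver = {
	/* ... other ops ... */
	.dumb_create = i915_gem_dumb_create,
};
```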
@@ -11,6 +11,7 @@
 
 #include <asm/smp.h>
 
+#include "gem/i915_gem_dmabuf.h"
 #include "i915_drv.h"
 #include "i915_gem_object.h"
 #include "i915_scatterlist.h"
drivers/gpu/drm/i915/gem/i915_gem_dmabuf.h (new file, 18 lines)
@@ -0,0 +1,18 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2022 Intel Corporation
 */

#ifndef __I915_GEM_DMABUF_H__
#define __I915_GEM_DMABUF_H__

struct drm_gem_object;
struct drm_device;
struct dma_buf;

struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
					     struct dma_buf *dma_buf);

struct dma_buf *i915_gem_prime_export(struct drm_gem_object *gem_obj, int flags);

#endif /* __I915_GEM_DMABUF_H__ */
@@ -9,12 +9,13 @@
 
 #include "i915_drv.h"
 #include "i915_gem_clflush.h"
+#include "i915_gem_domain.h"
 #include "i915_gem_gtt.h"
 #include "i915_gem_ioctls.h"
-#include "i915_gem_object.h"
-#include "i915_vma.h"
 #include "i915_gem_lmem.h"
 #include "i915_gem_mman.h"
+#include "i915_gem_object.h"
+#include "i915_vma.h"
 
 static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj)
 {
drivers/gpu/drm/i915/gem/i915_gem_domain.h (new file, 15 lines)
@@ -0,0 +1,15 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2022 Intel Corporation
 */

#ifndef __I915_GEM_DOMAIN_H__
#define __I915_GEM_DOMAIN_H__

struct drm_i915_gem_object;
enum i915_cache_level;

int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
				    enum i915_cache_level cache_level);

#endif /* __I915_GEM_DOMAIN_H__ */
@@ -25,13 +25,13 @@
 #include "i915_cmd_parser.h"
 #include "i915_drv.h"
 #include "i915_file_private.h"
 #include "i915_gem_clflush.h"
 #include "i915_gem_context.h"
 #include "i915_gem_evict.h"
 #include "i915_gem_ioctls.h"
 #include "i915_trace.h"
 #include "i915_user_extensions.h"
 #include "i915_vma_snapshot.h"
 
 struct eb_vma {
 	struct i915_vma *vma;
@@ -443,7 +443,7 @@ eb_pin_vma(struct i915_execbuffer *eb,
 	else
 		pin_flags = entry->offset & PIN_OFFSET_MASK;
 
-	pin_flags |= PIN_USER | PIN_NOEVICT | PIN_OFFSET_FIXED;
+	pin_flags |= PIN_USER | PIN_NOEVICT | PIN_OFFSET_FIXED | PIN_VALIDATE;
 	if (unlikely(ev->flags & EXEC_OBJECT_NEEDS_GTT))
 		pin_flags |= PIN_GLOBAL;
 
@@ -461,17 +461,15 @@ eb_pin_vma(struct i915_execbuffer *eb,
 					     entry->pad_to_size,
 					     entry->alignment,
 					     eb_pin_flags(entry, ev->flags) |
-					     PIN_USER | PIN_NOEVICT);
+					     PIN_USER | PIN_NOEVICT | PIN_VALIDATE);
 		if (unlikely(err))
 			return err;
 	}
 
 	if (unlikely(ev->flags & EXEC_OBJECT_NEEDS_FENCE)) {
 		err = i915_vma_pin_fence(vma);
-		if (unlikely(err)) {
-			i915_vma_unpin(vma);
+		if (unlikely(err))
 			return err;
-		}
 
 		if (vma->fence)
 			ev->flags |= __EXEC_OBJECT_HAS_FENCE;
@@ -487,13 +485,9 @@ eb_pin_vma(struct i915_execbuffer *eb,
 static inline void
 eb_unreserve_vma(struct eb_vma *ev)
 {
-	if (!(ev->flags & __EXEC_OBJECT_HAS_PIN))
-		return;
-
 	if (unlikely(ev->flags & __EXEC_OBJECT_HAS_FENCE))
 		__i915_vma_unpin_fence(ev->vma);
 
-	__i915_vma_unpin(ev->vma);
 	ev->flags &= ~__EXEC_OBJECT_RESERVED;
 }
 
@@ -675,10 +669,8 @@ static int eb_reserve_vma(struct i915_execbuffer *eb,
 
 	if (unlikely(ev->flags & EXEC_OBJECT_NEEDS_FENCE)) {
 		err = i915_vma_pin_fence(vma);
-		if (unlikely(err)) {
-			i915_vma_unpin(vma);
+		if (unlikely(err))
 			return err;
-		}
 
 		if (vma->fence)
 			ev->flags |= __EXEC_OBJECT_HAS_FENCE;
@@ -690,85 +682,95 @@ static int eb_reserve_vma(struct i915_execbuffer *eb,
 	return 0;
 }
 
-static int eb_reserve(struct i915_execbuffer *eb)
+static bool eb_unbind(struct i915_execbuffer *eb, bool force)
 {
 	const unsigned int count = eb->buffer_count;
-	unsigned int pin_flags = PIN_USER | PIN_NONBLOCK;
 	unsigned int i;
 	struct list_head last;
+	bool unpinned = false;
+
+	/* Resort *all* the objects into priority order */
+	INIT_LIST_HEAD(&eb->unbound);
+	INIT_LIST_HEAD(&last);
+
+	for (i = 0; i < count; i++) {
+		struct eb_vma *ev = &eb->vma[i];
+		unsigned int flags = ev->flags;
+
+		if (!force && flags & EXEC_OBJECT_PINNED &&
+		    flags & __EXEC_OBJECT_HAS_PIN)
+			continue;
+
+		unpinned = true;
+		eb_unreserve_vma(ev);
+
+		if (flags & EXEC_OBJECT_PINNED)
+			/* Pinned must have their slot */
+			list_add(&ev->bind_link, &eb->unbound);
+		else if (flags & __EXEC_OBJECT_NEEDS_MAP)
+			/* Map require the lowest 256MiB (aperture) */
+			list_add_tail(&ev->bind_link, &eb->unbound);
+		else if (!(flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS))
+			/* Prioritise 4GiB region for restricted bo */
+			list_add(&ev->bind_link, &last);
+		else
+			list_add_tail(&ev->bind_link, &last);
+	}
+
+	list_splice_tail(&last, &eb->unbound);
+	return unpinned;
+}
+
+static int eb_reserve(struct i915_execbuffer *eb)
+{
 	struct eb_vma *ev;
-	unsigned int i, pass;
+	unsigned int pass;
 	int err = 0;
+	bool unpinned;
 
 	/*
 	 * Attempt to pin all of the buffers into the GTT.
-	 * This is done in 3 phases:
+	 * This is done in 2 phases:
 	 *
-	 * 1a. Unbind all objects that do not match the GTT constraints for
-	 *     the execbuffer (fenceable, mappable, alignment etc).
-	 * 1b. Increment pin count for already bound objects.
-	 * 2. Bind new objects.
-	 * 3. Decrement pin count.
+	 * 1. Unbind all objects that do not match the GTT constraints for
+	 *    the execbuffer (fenceable, mappable, alignment etc).
+	 * 2. Bind new objects.
 	 *
-	 * This avoid unnecessary unbinding of later objects in order to make
+	 * This avoids unnecessary unbinding of later objects in order to make
 	 * room for the earlier objects *unless* we need to defragment.
+	 *
+	 * Defragmenting is skipped if all objects are pinned at a fixed location.
 	 */
-	pass = 0;
-	do {
+	for (pass = 0; pass <= 2; pass++) {
+		int pin_flags = PIN_USER | PIN_VALIDATE;
+
+		if (pass == 0)
+			pin_flags |= PIN_NONBLOCK;
+
+		if (pass >= 1)
+			unpinned = eb_unbind(eb, pass == 2);
+
+		if (pass == 2) {
+			err = mutex_lock_interruptible(&eb->context->vm->mutex);
+			if (!err) {
+				err = i915_gem_evict_vm(eb->context->vm, &eb->ww);
+				mutex_unlock(&eb->context->vm->mutex);
+			}
+			if (err)
+				return err;
+		}
+
 		list_for_each_entry(ev, &eb->unbound, bind_link) {
 			err = eb_reserve_vma(eb, ev, pin_flags);
 			if (err)
 				break;
 		}
+
 		if (err != -ENOSPC)
 			return err;
+	}
 
-		/* Resort *all* the objects into priority order */
-		INIT_LIST_HEAD(&eb->unbound);
-		INIT_LIST_HEAD(&last);
-		for (i = 0; i < count; i++) {
-			unsigned int flags;
-
-			ev = &eb->vma[i];
-			flags = ev->flags;
-			if (flags & EXEC_OBJECT_PINNED &&
-			    flags & __EXEC_OBJECT_HAS_PIN)
-				continue;
-
-			eb_unreserve_vma(ev);
-
-			if (flags & EXEC_OBJECT_PINNED)
-				/* Pinned must have their slot */
-				list_add(&ev->bind_link, &eb->unbound);
-			else if (flags & __EXEC_OBJECT_NEEDS_MAP)
-				/* Map require the lowest 256MiB (aperture) */
-				list_add_tail(&ev->bind_link, &eb->unbound);
-			else if (!(flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS))
-				/* Prioritise 4GiB region for restricted bo */
-				list_add(&ev->bind_link, &last);
-			else
-				list_add_tail(&ev->bind_link, &last);
-		}
-		list_splice_tail(&last, &eb->unbound);
-
-		switch (pass++) {
-		case 0:
-			break;
-
-		case 1:
-			/* Too fragmented, unbind everything and retry */
-			mutex_lock(&eb->context->vm->mutex);
-			err = i915_gem_evict_vm(eb->context->vm);
-			mutex_unlock(&eb->context->vm->mutex);
-			if (err)
-				return err;
-			break;
-
-		default:
-			return -ENOSPC;
-		}
-
-		pin_flags = PIN_USER;
-	} while (1);
+	return err;
 }
 
 static int eb_select_context(struct i915_execbuffer *eb)
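Condensed, the reworked reservation loop behaves as follows; this is a summary of the diff above, not additional driver code:

```c
/*
 * pass 0: try to bind with PIN_NONBLOCK - touch nothing already bound.
 * pass 1: eb_unbind(eb, false) - unbind everything that is not
 *         user-pinned and still resident, then retry the binds.
 * pass 2: eb_unbind(eb, true) + i915_gem_evict_vm() - force-unbind and
 *         evict the whole address space, then rebind from scratch.
 * Any error other than -ENOSPC aborts the loop immediately.
 */
```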
@@ -1097,7 +1099,7 @@ static inline struct i915_ggtt *cache_to_ggtt(struct reloc_cache *cache)
 {
 	struct drm_i915_private *i915 =
 		container_of(cache, struct i915_execbuffer, reloc_cache)->i915;
-	return &i915->ggtt;
+	return to_gt(i915)->ggtt;
 }
 
 static void reloc_cache_unmap(struct reloc_cache *cache)
@@ -1216,10 +1218,11 @@ static void *reloc_kmap(struct drm_i915_gem_object *obj,
 	return vaddr;
 }
 
-static void *reloc_iomap(struct drm_i915_gem_object *obj,
+static void *reloc_iomap(struct i915_vma *batch,
 			 struct i915_execbuffer *eb,
 			 unsigned long page)
 {
+	struct drm_i915_gem_object *obj = batch->obj;
 	struct reloc_cache *cache = &eb->reloc_cache;
 	struct i915_ggtt *ggtt = cache_to_ggtt(cache);
 	unsigned long offset;
@@ -1229,7 +1232,7 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
 		intel_gt_flush_ggtt_writes(ggtt->vm.gt);
 		io_mapping_unmap_atomic((void __force __iomem *) unmask_page(cache->vaddr));
 	} else {
-		struct i915_vma *vma;
+		struct i915_vma *vma = ERR_PTR(-ENODEV);
 		int err;
 
 		if (i915_gem_object_is_tiled(obj))
@@ -1242,10 +1245,23 @@ static void *reloc_iomap(struct drm_i915_gem_object *obj,
 		if (err)
 			return ERR_PTR(err);
 
-		vma = i915_gem_object_ggtt_pin_ww(obj, &eb->ww, NULL, 0, 0,
-						  PIN_MAPPABLE |
-						  PIN_NONBLOCK /* NOWARN */ |
-						  PIN_NOEVICT);
+		/*
+		 * i915_gem_object_ggtt_pin_ww may attempt to remove the batch
+		 * VMA from the object list because we no longer pin.
+		 *
+		 * Only attempt to pin the batch buffer to ggtt if the current batch
+		 * is not inside ggtt, or the batch buffer is not misplaced.
+		 */
+		if (!i915_is_ggtt(batch->vm)) {
+			vma = i915_gem_object_ggtt_pin_ww(obj, &eb->ww, NULL, 0, 0,
+							  PIN_MAPPABLE |
+							  PIN_NONBLOCK /* NOWARN */ |
+							  PIN_NOEVICT);
+		} else if (i915_vma_is_map_and_fenceable(batch)) {
+			__i915_vma_pin(batch);
+			vma = batch;
+		}
+
 		if (vma == ERR_PTR(-EDEADLK))
 			return vma;
 
@@ -1283,7 +1299,7 @@ static void *reloc_iomap(struct i915_vma *batch,
 	return vaddr;
 }
 
-static void *reloc_vaddr(struct drm_i915_gem_object *obj,
+static void *reloc_vaddr(struct i915_vma *vma,
 			 struct i915_execbuffer *eb,
 			 unsigned long page)
 {
@@ -1295,9 +1311,9 @@ static void *reloc_vaddr(struct i915_vma *vma,
 	} else {
 		vaddr = NULL;
 		if ((cache->vaddr & KMAP) == 0)
-			vaddr = reloc_iomap(obj, eb, page);
+			vaddr = reloc_iomap(vma, eb, page);
 		if (!vaddr)
-			vaddr = reloc_kmap(obj, cache, page);
+			vaddr = reloc_kmap(vma->obj, cache, page);
 	}
 
 	return vaddr;
@@ -1338,7 +1354,7 @@ relocate_entry(struct i915_vma *vma,
 	void *vaddr;
 
 repeat:
-	vaddr = reloc_vaddr(vma->obj, eb,
+	vaddr = reloc_vaddr(vma, eb,
 			    offset >> PAGE_SHIFT);
 	if (IS_ERR(vaddr))
 		return PTR_ERR(vaddr);
@@ -1413,7 +1429,7 @@ eb_relocate_entry(struct i915_execbuffer *eb,
 		mutex_lock(&vma->vm->mutex);
 		err = i915_vma_bind(target->vma,
 				    target->vma->obj->cache_level,
-				    PIN_GLOBAL, NULL);
+				    PIN_GLOBAL, NULL, NULL);
 		mutex_unlock(&vma->vm->mutex);
 		reloc_cache_remap(&eb->reloc_cache, ev->vma->obj);
 		if (err)
@@ -1943,7 +1959,6 @@ static void eb_capture_stage(struct i915_execbuffer *eb)
 {
 	const unsigned int count = eb->buffer_count;
 	unsigned int i = count, j;
-	struct i915_vma_snapshot *vsnap;
 
 	while (i--) {
 		struct eb_vma *ev = &eb->vma[i];
@@ -1953,11 +1968,6 @@ static void eb_capture_stage(struct i915_execbuffer *eb)
 		if (!(flags & EXEC_OBJECT_CAPTURE))
 			continue;
 
-		vsnap = i915_vma_snapshot_alloc(GFP_KERNEL);
-		if (!vsnap)
-			continue;
-
-		i915_vma_snapshot_init(vsnap, vma, "user");
 		for_each_batch_create_order(eb, j) {
 			struct i915_capture_list *capture;
 
@@ -1966,10 +1976,9 @@ static void eb_capture_stage(struct i915_execbuffer *eb)
 				continue;
 
 			capture->next = eb->capture_lists[j];
-			capture->vma_snapshot = i915_vma_snapshot_get(vsnap);
+			capture->vma_res = i915_vma_resource_get(vma->resource);
 			eb->capture_lists[j] = capture;
 		}
-		i915_vma_snapshot_put(vsnap);
 	}
 }
 
@@ -2200,7 +2209,7 @@ shadow_batch_pin(struct i915_execbuffer *eb,
 	if (IS_ERR(vma))
 		return vma;
 
-	err = i915_vma_pin_ww(vma, &eb->ww, 0, 0, flags);
+	err = i915_vma_pin_ww(vma, &eb->ww, 0, 0, flags | PIN_VALIDATE);
 	if (err)
 		return ERR_PTR(err);
 
@@ -2214,7 +2223,7 @@ static struct i915_vma *eb_dispatch_secure(struct i915_execbuffer *eb, struct i9
 	 * batch" bit. Hence we need to pin secure batches into the global gtt.
 	 * hsw should have this fixed, but bdw mucks it up again. */
 	if (eb->batch_flags & I915_DISPATCH_SECURE)
-		return i915_gem_object_ggtt_pin_ww(vma->obj, &eb->ww, NULL, 0, 0, 0);
+		return i915_gem_object_ggtt_pin_ww(vma->obj, &eb->ww, NULL, 0, 0, PIN_VALIDATE);
 
 	return NULL;
 }
@@ -2265,13 +2274,12 @@ static int eb_parse(struct i915_execbuffer *eb)
 
 	err = i915_gem_object_lock(pool->obj, &eb->ww);
 	if (err)
-		goto err;
+		return err;
 
 	shadow = shadow_batch_pin(eb, pool->obj, eb->context->vm, PIN_USER);
-	if (IS_ERR(shadow)) {
-		err = PTR_ERR(shadow);
-		goto err;
-	}
+	if (IS_ERR(shadow))
+		return PTR_ERR(shadow);
 
 	intel_gt_buffer_pool_mark_used(pool);
 	i915_gem_object_set_readonly(shadow->obj);
 	shadow->private = pool;
@@ -2283,25 +2291,21 @@ static int eb_parse(struct i915_execbuffer *eb)
 		shadow = shadow_batch_pin(eb, pool->obj,
 					  &eb->gt->ggtt->vm,
 					  PIN_GLOBAL);
-		if (IS_ERR(shadow)) {
-			err = PTR_ERR(shadow);
-			shadow = trampoline;
-			goto err_shadow;
-		}
+		if (IS_ERR(shadow))
+			return PTR_ERR(shadow);
+
 		shadow->private = pool;
 
 		eb->batch_flags |= I915_DISPATCH_SECURE;
 	}
 
 	batch = eb_dispatch_secure(eb, shadow);
-	if (IS_ERR(batch)) {
-		err = PTR_ERR(batch);
-		goto err_trampoline;
-	}
+	if (IS_ERR(batch))
+		return PTR_ERR(batch);
 
 	err = dma_resv_reserve_shared(shadow->obj->base.resv, 1);
 	if (err)
-		goto err_trampoline;
+		return err;
 
 	err = intel_engine_cmd_parser(eb->context->engine,
 				      eb->batches[0]->vma,
@@ -2309,7 +2313,7 @@ static int eb_parse(struct i915_execbuffer *eb)
 				      eb->batch_len[0],
 				      shadow, trampoline);
 	if (err)
-		goto err_unpin_batch;
+		return err;
 
 	eb->batches[0] = &eb->vma[eb->buffer_count++];
 	eb->batches[0]->vma = i915_vma_get(shadow);
@@ -2328,17 +2332,6 @@ secure_batch:
 		eb->batches[0]->vma = i915_vma_get(batch);
 	}
 	return 0;
-
-err_unpin_batch:
-	if (batch)
-		i915_vma_unpin(batch);
-err_trampoline:
-	if (trampoline)
-		i915_vma_unpin(trampoline);
-err_shadow:
-	i915_vma_unpin(shadow);
-err:
-	return err;
 }
 
 static int eb_request_submit(struct i915_execbuffer *eb,
@@ -3277,9 +3270,8 @@ eb_requests_create(struct i915_execbuffer *eb, struct dma_fence *in_fence,
 		 * _onstack interface.
 		 */
 		if (eb->batches[i]->vma)
-			i915_vma_snapshot_init_onstack(&eb->requests[i]->batch_snapshot,
-						       eb->batches[i]->vma,
-						       "batch");
+			eb->requests[i]->batch_res =
+				i915_vma_resource_get(eb->batches[i]->vma->resource);
 		if (eb->batch_pool) {
 			GEM_BUG_ON(intel_context_is_parallel(eb->context));
 			intel_gt_buffer_pool_mark_active(eb->batch_pool,
@@ -3464,8 +3456,6 @@ err_request:
 
 err_vma:
 	eb_release_vmas(&eb, true);
-	if (eb.trampoline)
-		i915_vma_unpin(eb.trampoline);
 	WARN_ON(err == -EDEADLK);
 	i915_gem_ww_ctx_fini(&eb.ww);
 
@@ -10,6 +10,7 @@
 
 #include "i915_drv.h"
 #include "i915_gem.h"
+#include "i915_gem_internal.h"
 #include "i915_gem_object.h"
 #include "i915_scatterlist.h"
 #include "i915_utils.h"
drivers/gpu/drm/i915/gem/i915_gem_internal.h (new file, 23 lines)
@@ -0,0 +1,23 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2022 Intel Corporation
 */

#ifndef __I915_GEM_INTERNAL_H__
#define __I915_GEM_INTERNAL_H__

#include <linux/types.h>

struct drm_i915_gem_object;
struct drm_i915_gem_object_ops;
struct drm_i915_private;

struct drm_i915_gem_object *
i915_gem_object_create_internal(struct drm_i915_private *i915,
				phys_addr_t size);
struct drm_i915_gem_object *
__i915_gem_object_create_internal(struct drm_i915_private *i915,
				  const struct drm_i915_gem_object_ops *ops,
				  phys_addr_t size);

#endif /* __I915_GEM_INTERNAL_H__ */
@@ -9,10 +9,13 @@
 #include <linux/pfn_t.h>
 #include <linux/sizes.h>
 
+#include <drm/drm_cache.h>
+
 #include "gt/intel_gt.h"
 #include "gt/intel_gt_requests.h"
 
 #include "i915_drv.h"
+#include "i915_gem_evict.h"
 #include "i915_gem_gtt.h"
 #include "i915_gem_ioctls.h"
 #include "i915_gem_object.h"
@@ -295,7 +298,7 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
 	struct drm_device *dev = obj->base.dev;
 	struct drm_i915_private *i915 = to_i915(dev);
 	struct intel_runtime_pm *rpm = &i915->runtime_pm;
-	struct i915_ggtt *ggtt = &i915->ggtt;
+	struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
 	bool write = area->vm_flags & VM_WRITE;
 	struct i915_gem_ww_ctx ww;
 	intel_wakeref_t wakeref;
@@ -358,8 +361,21 @@ retry:
 			vma = i915_gem_object_ggtt_pin_ww(obj, &ww, &view, 0, 0, flags);
 		}
 
-		/* The entire mappable GGTT is pinned? Unexpected! */
-		GEM_BUG_ON(vma == ERR_PTR(-ENOSPC));
+		/*
+		 * The entire mappable GGTT is pinned? Unexpected!
+		 * Try to evict the object we locked too, as normally we skip it
+		 * due to lack of short term pinning inside execbuf.
+		 */
+		if (vma == ERR_PTR(-ENOSPC)) {
+			ret = mutex_lock_interruptible(&ggtt->vm.mutex);
+			if (!ret) {
+				ret = i915_gem_evict_vm(&ggtt->vm, &ww);
+				mutex_unlock(&ggtt->vm.mutex);
+			}
+			if (ret)
+				goto err_reset;
+			vma = i915_gem_object_ggtt_pin_ww(obj, &ww, &view, 0, 0, flags);
+		}
 	}
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
@@ -388,16 +404,16 @@ retry:
 	assert_rpm_wakelock_held(rpm);
 
 	/* Mark as being mmapped into userspace for later revocation */
-	mutex_lock(&i915->ggtt.vm.mutex);
+	mutex_lock(&to_gt(i915)->ggtt->vm.mutex);
 	if (!i915_vma_set_userfault(vma) && !obj->userfault_count++)
-		list_add(&obj->userfault_link, &i915->ggtt.userfault_list);
-	mutex_unlock(&i915->ggtt.vm.mutex);
+		list_add(&obj->userfault_link, &to_gt(i915)->ggtt->userfault_list);
+	mutex_unlock(&to_gt(i915)->ggtt->vm.mutex);
 
 	/* Track the mmo associated with the fenced vma */
 	vma->mmo = mmo;
 
 	if (CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND)
-		intel_wakeref_auto(&i915->ggtt.userfault_wakeref,
+		intel_wakeref_auto(&to_gt(i915)->ggtt->userfault_wakeref,
 				   msecs_to_jiffies_timeout(CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND));
 
 	if (write) {
@@ -512,7 +528,7 @@ void i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj)
 	 * wakeref.
 	 */
 	wakeref = intel_runtime_pm_get(&i915->runtime_pm);
-	mutex_lock(&i915->ggtt.vm.mutex);
+	mutex_lock(&to_gt(i915)->ggtt->vm.mutex);
 
 	if (!obj->userfault_count)
 		goto out;
@@ -530,7 +546,7 @@ void i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj)
 	wmb();
 
 out:
-	mutex_unlock(&i915->ggtt.vm.mutex);
+	mutex_unlock(&to_gt(i915)->ggtt->vm.mutex);
 	intel_runtime_pm_put(&i915->runtime_pm, wakeref);
 }
 
@@ -736,13 +752,14 @@ i915_gem_dumb_mmap_offset(struct drm_file *file,
 			  u32 handle,
 			  u64 *offset)
 {
+	struct drm_i915_private *i915 = to_i915(dev);
 	enum i915_mmap_type mmap_type;
 
 	if (HAS_LMEM(to_i915(dev)))
 		mmap_type = I915_MMAP_TYPE_FIXED;
 	else if (pat_enabled())
 		mmap_type = I915_MMAP_TYPE_WC;
-	else if (!i915_ggtt_has_aperture(&to_i915(dev)->ggtt))
+	else if (!i915_ggtt_has_aperture(to_gt(i915)->ggtt))
 		return -ENODEV;
 	else
 		mmap_type = I915_MMAP_TYPE_GTT;
@@ -790,7 +807,7 @@ i915_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
 
 	switch (args->flags) {
 	case I915_MMAP_OFFSET_GTT:
-		if (!i915_ggtt_has_aperture(&i915->ggtt))
+		if (!i915_ggtt_has_aperture(to_gt(i915)->ggtt))
 			return -ENODEV;
 		type = I915_MMAP_TYPE_GTT;
 		break;
@@ -24,11 +24,16 @@
 
 #include <linux/sched/mm.h>
 
+#include <drm/drm_cache.h>
+
 #include "display/intel_frontbuffer.h"
 #include "pxp/intel_pxp.h"
+
 #include "i915_drv.h"
+#include "i915_file_private.h"
 #include "i915_gem_clflush.h"
 #include "i915_gem_context.h"
+#include "i915_gem_dmabuf.h"
 #include "i915_gem_mman.h"
 #include "i915_gem_object.h"
 #include "i915_gem_ttm.h"
@@ -280,6 +285,12 @@ void __i915_gem_object_pages_fini(struct drm_i915_gem_object *obj)
 		GEM_BUG_ON(vma->obj != obj);
 		spin_unlock(&obj->vma.lock);
 
+		/* Verify that the vma is unbound under the vm mutex. */
+		mutex_lock(&vma->vm->mutex);
+		atomic_and(~I915_VMA_PIN_MASK, &vma->flags);
+		__i915_vma_unbind(vma);
+		mutex_unlock(&vma->vm->mutex);
+
 		__i915_vma_put(vma);
 
 		spin_lock(&obj->vma.lock);
@@ -756,6 +767,18 @@ i915_gem_object_get_moving_fence(struct drm_i915_gem_object *obj)
 	return dma_fence_get(i915_gem_to_ttm(obj)->moving);
 }
 
+void i915_gem_object_set_moving_fence(struct drm_i915_gem_object *obj,
+				      struct dma_fence *fence)
+{
+	struct dma_fence **moving = &i915_gem_to_ttm(obj)->moving;
+
+	if (*moving == fence)
+		return;
+
+	dma_fence_put(*moving);
+	*moving = dma_fence_get(fence);
+}
+
 /**
  * i915_gem_object_wait_moving_fence - Wait for the object's moving fence if any
  * @obj: The object whose moving fence to wait for.
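The setter is reference-neutral for the caller: it drops the stored reference on the old fence and takes its own on the new one. A sketch of the intended pairing with the getter (illustrative, not driver code):

```c
/* Illustrative pairing with i915_gem_object_get_moving_fence(). */
struct dma_fence *fence = i915_gem_object_get_moving_fence(obj); /* +1 ref */

if (fence) {
	/* ... wait on or inspect the fence ... */
	dma_fence_put(fence);	/* caller drops its own reference */
}
```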
@@ -459,7 +459,6 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
 
 int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
 int i915_gem_object_truncate(struct drm_i915_gem_object *obj);
-void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
 
 /**
  * i915_gem_object_pin_map - return a contiguous mapping of the entire object
@@ -524,6 +523,9 @@ i915_gem_object_finish_access(struct drm_i915_gem_object *obj)
 struct dma_fence *
 i915_gem_object_get_moving_fence(struct drm_i915_gem_object *obj);
 
+void i915_gem_object_set_moving_fence(struct drm_i915_gem_object *obj,
+				      struct dma_fence *fence);
+
 int i915_gem_object_wait_moving_fence(struct drm_i915_gem_object *obj,
 				      bool intr);
 
|
@ -15,6 +15,7 @@
|
||||
|
||||
#include "i915_active.h"
|
||||
#include "i915_selftest.h"
|
||||
#include "i915_vma_resource.h"
|
||||
|
||||
struct drm_i915_gem_object;
|
||||
struct intel_fronbuffer;
|
||||
@@ -57,10 +58,26 @@ struct drm_i915_gem_object_ops {
 	void (*put_pages)(struct drm_i915_gem_object *obj,
 			  struct sg_table *pages);
 	int (*truncate)(struct drm_i915_gem_object *obj);
-	void (*writeback)(struct drm_i915_gem_object *obj);
-	int (*shrinker_release_pages)(struct drm_i915_gem_object *obj,
-				      bool no_gpu_wait,
-				      bool should_writeback);
+	/**
+	 * shrink - Perform further backend specific actions to facilitate
+	 * shrinking.
+	 * @obj: The gem object
+	 * @flags: Extra flags to control shrinking behaviour in the backend
+	 *
+	 * Possible values for @flags:
+	 *
+	 * I915_GEM_OBJECT_SHRINK_WRITEBACK - Try to perform writeback of the
+	 * backing pages, if supported.
+	 *
+	 * I915_GEM_OBJECT_SHRINK_NO_GPU_WAIT - Don't wait for the object to
+	 * idle. Active objects can be considered later. The TTM backend for
+	 * example might have async migrations going on, which don't use any
+	 * i915_vma to track the active GTT binding, and hence having an unbound
+	 * object might not be enough.
+	 */
+#define I915_GEM_OBJECT_SHRINK_WRITEBACK	BIT(0)
+#define I915_GEM_OBJECT_SHRINK_NO_GPU_WAIT	BIT(1)
+	int (*shrink)(struct drm_i915_gem_object *obj, unsigned int flags);
 
 	int (*pread)(struct drm_i915_gem_object *obj,
 		     const struct drm_i915_gem_pread *arg);
@@ -551,31 +568,7 @@ struct drm_i915_gem_object {
 			struct sg_table *pages;
 			void *mapping;
 
-			struct i915_page_sizes {
-				/**
-				 * The sg mask of the pages sg_table. i.e the mask of
-				 * of the lengths for each sg entry.
-				 */
-				unsigned int phys;
-
-				/**
-				 * The gtt page sizes we are allowed to use given the
-				 * sg mask and the supported page sizes. This will
-				 * express the smallest unit we can use for the whole
-				 * object, as well as the larger sizes we may be able
-				 * to use opportunistically.
-				 */
-				unsigned int sg;
-
-				/**
-				 * The actual gtt page size usage. Since we can have
-				 * multiple vma associated with this object we need to
-				 * prevent any trampling of state, hence a copy of this
-				 * struct also lives in each vma, therefore the gtt
-				 * value here should only be read/write through the vma.
-				 */
-				unsigned int gtt;
-			} page_sizes;
+			struct i915_page_sizes page_sizes;
 
 			I915_SELFTEST_DECLARE(unsigned int page_mask);
 
@@ -4,6 +4,8 @@
  * Copyright © 2014-2016 Intel Corporation
  */
 
+#include <drm/drm_cache.h>
+
 #include "i915_drv.h"
 #include "i915_gem_object.h"
 #include "i915_scatterlist.h"
@@ -169,16 +171,6 @@ int i915_gem_object_truncate(struct drm_i915_gem_object *obj)
 	return 0;
 }
 
-/* Try to discard unwanted pages */
-void i915_gem_object_writeback(struct drm_i915_gem_object *obj)
-{
-	assert_object_held_shared(obj);
-	GEM_BUG_ON(i915_gem_object_has_pages(obj));
-
-	if (obj->ops->writeback)
-		obj->ops->writeback(obj);
-}
-
 static void __i915_gem_object_reset_page_iter(struct drm_i915_gem_object *obj)
 {
 	struct radix_tree_iter iter;
@@ -10,6 +10,7 @@
 #include "gt/intel_gt_pm.h"
 #include "gt/intel_gt_requests.h"
 
+#include "i915_driver.h"
 #include "i915_drv.h"
 
 #if defined(CONFIG_X86)
@@ -23,7 +24,7 @@ void i915_gem_suspend(struct drm_i915_private *i915)
 {
 	GEM_TRACE("%s\n", dev_name(i915->drm.dev));
 
-	intel_wakeref_auto(&i915->ggtt.userfault_wakeref, 0);
+	intel_wakeref_auto(&to_gt(i915)->ggtt->userfault_wakeref, 0);
 	flush_workqueue(i915->wq);
 
 	/*
@@ -5,8 +5,11 @@
  */
 
 #include <linux/pagevec.h>
+#include <linux/shmem_fs.h>
 #include <linux/swap.h>
 
+#include <drm/drm_cache.h>
+
 #include "gem/i915_gem_region.h"
 #include "i915_drv.h"
 #include "i915_gemfs.h"
@@ -331,6 +334,21 @@ shmem_writeback(struct drm_i915_gem_object *obj)
 	__shmem_writeback(obj->base.size, obj->base.filp->f_mapping);
 }
 
+static int shmem_shrink(struct drm_i915_gem_object *obj, unsigned int flags)
+{
+	switch (obj->mm.madv) {
+	case I915_MADV_DONTNEED:
+		return i915_gem_object_truncate(obj);
+	case __I915_MADV_PURGED:
+		return 0;
+	}
+
+	if (flags & I915_GEM_OBJECT_SHRINK_WRITEBACK)
+		shmem_writeback(obj);
+
+	return 0;
+}
+
 void
 __i915_gem_object_release_shmem(struct drm_i915_gem_object *obj,
 				struct sg_table *pages,
@@ -503,7 +521,7 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
	.get_pages = shmem_get_pages,
	.put_pages = shmem_put_pages,
	.truncate = shmem_truncate,
	.writeback = shmem_writeback,
	.shrink = shmem_shrink,

	.pwrite = shmem_pwrite,
	.pread = shmem_pread,
@@ -57,22 +57,18 @@ static int drop_pages(struct drm_i915_gem_object *obj,

static int try_to_writeback(struct drm_i915_gem_object *obj, unsigned int flags)
{
	if (obj->ops->shrinker_release_pages)
		return obj->ops->shrinker_release_pages(obj,
							!(flags & I915_SHRINK_ACTIVE),
							flags & I915_SHRINK_WRITEBACK);
	if (obj->ops->shrink) {
		unsigned int shrink_flags = 0;

	switch (obj->mm.madv) {
	case I915_MADV_DONTNEED:
		i915_gem_object_truncate(obj);
		return 0;
	case __I915_MADV_PURGED:
		return 0;
		if (!(flags & I915_SHRINK_ACTIVE))
			shrink_flags |= I915_GEM_OBJECT_SHRINK_NO_GPU_WAIT;

		if (flags & I915_SHRINK_WRITEBACK)
			shrink_flags |= I915_GEM_OBJECT_SHRINK_WRITEBACK;

		return obj->ops->shrink(obj, shrink_flags);
	}

	if (flags & I915_SHRINK_WRITEBACK)
		i915_gem_object_writeback(obj);

	return 0;
}

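The hunk above is the caller-side half of the contract: shrinker-level
I915_SHRINK_* flags are translated into the object-level
I915_GEM_OBJECT_SHRINK_* flags before the backend hook runs. A hedged
sketch of an invocation (the surrounding scan logic is invented):

	/* Hypothetical fragment of a shrinker scan pass. */
	unsigned int shrink = I915_SHRINK_BOUND | I915_SHRINK_UNBOUND;

	if (can_writeback)			/* invented condition */
		shrink |= I915_SHRINK_WRITEBACK;

	err = try_to_writeback(obj, shrink);	/* ends up in obj->ops->shrink() */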
@@ -401,9 +397,9 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr
					     I915_SHRINK_VMAPS);

	/* We also want to clear any cached iomaps as they wrap vmap */
	mutex_lock(&i915->ggtt.vm.mutex);
	mutex_lock(&to_gt(i915)->ggtt->vm.mutex);
	list_for_each_entry_safe(vma, next,
				 &i915->ggtt.vm.bound_list, vm_link) {
				 &to_gt(i915)->ggtt->vm.bound_list, vm_link) {
		unsigned long count = vma->node.size >> PAGE_SHIFT;
		struct drm_i915_gem_object *obj = vma->obj;

@@ -418,7 +414,7 @@ i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr

		i915_gem_object_unlock(obj);
	}
	mutex_unlock(&i915->ggtt.vm.mutex);
	mutex_unlock(&to_gt(i915)->ggtt->vm.mutex);

	*(unsigned long *)ptr += freed_pages;
	return NOTIFY_DONE;
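This hunk and many that follow are one mechanical conversion: the GGTT is
no longer reached through the ggtt embedded in drm_i915_private but via
the per-GT pointer. A hedged one-function summary (the wrapper itself is
invented; both expressions come from the diff):

	static struct i915_ggtt *my_ggtt(struct drm_i915_private *i915)
	{
		/* Before this series: return &i915->ggtt; */
		return to_gt(i915)->ggtt;	/* now reached via the primary GT */
	}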
@@ -16,6 +16,7 @@
#include "i915_gem_stolen.h"
#include "i915_reg.h"
#include "i915_vgpu.h"
#include "intel_mchbar_regs.h"

/*
 * The BIOS typically reserves some of the system's memory for the exclusive
@@ -72,7 +73,7 @@ void i915_gem_stolen_remove_node(struct drm_i915_private *i915,
static int i915_adjust_stolen(struct drm_i915_private *i915,
			      struct resource *dsm)
{
	struct i915_ggtt *ggtt = &i915->ggtt;
	struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
	struct intel_uncore *uncore = ggtt->vm.gt->uncore;
	struct resource *r;

@@ -583,6 +584,7 @@ i915_pages_create_for_stolen(struct drm_device *dev,

static int i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj)
{
	struct drm_i915_private *i915 = to_i915(obj->base.dev);
	struct sg_table *pages =
		i915_pages_create_for_stolen(obj->base.dev,
					     obj->stolen->start,
@@ -590,7 +592,7 @@ static int i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj)
	if (IS_ERR(pages))
		return PTR_ERR(pages);

	dbg_poison(&to_i915(obj->base.dev)->ggtt,
	dbg_poison(to_gt(i915)->ggtt,
		   sg_dma_address(pages->sgl),
		   sg_dma_len(pages->sgl),
		   POISON_INUSE);
@@ -603,9 +605,10 @@ static int i915_gem_object_get_pages_stolen(struct drm_i915_gem_object *obj)
static void i915_gem_object_put_pages_stolen(struct drm_i915_gem_object *obj,
					     struct sg_table *pages)
{
	struct drm_i915_private *i915 = to_i915(obj->base.dev);
	/* Should only be called from i915_gem_object_release_stolen() */

	dbg_poison(&to_i915(obj->base.dev)->ggtt,
	dbg_poison(to_gt(i915)->ggtt,
		   sg_dma_address(pages->sgl),
		   sg_dma_len(pages->sgl),
		   POISON_FREE);
@@ -9,6 +9,7 @@
#include <drm/drm_file.h>

#include "i915_drv.h"
#include "i915_file_private.h"
#include "i915_gem_context.h"
#include "i915_gem_ioctls.h"
#include "i915_gem_object.h"
@@ -183,7 +183,8 @@ static int
i915_gem_object_fence_prepare(struct drm_i915_gem_object *obj,
			      int tiling_mode, unsigned int stride)
{
	struct i915_ggtt *ggtt = &to_i915(obj->base.dev)->ggtt;
	struct drm_i915_private *i915 = to_i915(obj->base.dev);
	struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
	struct i915_vma *vma, *vn;
	LIST_HEAD(unbind);
	int ret = 0;
@@ -338,7 +339,7 @@ i915_gem_set_tiling_ioctl(struct drm_device *dev, void *data,
	struct drm_i915_gem_object *obj;
	int err;

	if (!dev_priv->ggtt.num_fences)
	if (!to_gt(dev_priv)->ggtt->num_fences)
		return -EOPNOTSUPP;

	obj = i915_gem_object_lookup(file, args->handle);
@@ -364,9 +365,9 @@ i915_gem_set_tiling_ioctl(struct drm_device *dev, void *data,
		args->stride = 0;
	} else {
		if (args->tiling_mode == I915_TILING_X)
			args->swizzle_mode = to_i915(dev)->ggtt.bit_6_swizzle_x;
			args->swizzle_mode = to_gt(dev_priv)->ggtt->bit_6_swizzle_x;
		else
			args->swizzle_mode = to_i915(dev)->ggtt.bit_6_swizzle_y;
			args->swizzle_mode = to_gt(dev_priv)->ggtt->bit_6_swizzle_y;

		/* Hide bit 17 swizzling from the user. This prevents old Mesa
		 * from aborting the application on sw fallbacks to bit 17,
@@ -421,7 +422,7 @@ i915_gem_get_tiling_ioctl(struct drm_device *dev, void *data,
	struct drm_i915_gem_object *obj;
	int err = -ENOENT;

	if (!dev_priv->ggtt.num_fences)
	if (!to_gt(dev_priv)->ggtt->num_fences)
		return -EOPNOTSUPP;

	rcu_read_lock();
@@ -437,10 +438,10 @@ i915_gem_get_tiling_ioctl(struct drm_device *dev, void *data,

	switch (args->tiling_mode) {
	case I915_TILING_X:
		args->swizzle_mode = dev_priv->ggtt.bit_6_swizzle_x;
		args->swizzle_mode = to_gt(dev_priv)->ggtt->bit_6_swizzle_x;
		break;
	case I915_TILING_Y:
		args->swizzle_mode = dev_priv->ggtt.bit_6_swizzle_y;
		args->swizzle_mode = to_gt(dev_priv)->ggtt->bit_6_swizzle_y;
		break;
	default:
	case I915_TILING_NONE:
@@ -3,6 +3,8 @@
 * Copyright © 2021 Intel Corporation
 */

#include <linux/shmem_fs.h>

#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>

@@ -424,16 +426,14 @@ int i915_ttm_purge(struct drm_i915_gem_object *obj)
	return 0;
}

static int i915_ttm_shrinker_release_pages(struct drm_i915_gem_object *obj,
					   bool no_wait_gpu,
					   bool should_writeback)
static int i915_ttm_shrink(struct drm_i915_gem_object *obj, unsigned int flags)
{
	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
	struct i915_ttm_tt *i915_tt =
		container_of(bo->ttm, typeof(*i915_tt), ttm);
	struct ttm_operation_ctx ctx = {
		.interruptible = true,
		.no_wait_gpu = no_wait_gpu,
		.no_wait_gpu = flags & I915_GEM_OBJECT_SHRINK_NO_GPU_WAIT,
	};
	struct ttm_placement place = {};
	int ret;
@@ -467,7 +467,7 @@ static int i915_ttm_shrinker_release_pages(struct drm_i915_gem_object *obj,
		return ret;
	}

	if (should_writeback)
	if (flags & I915_GEM_OBJECT_SHRINK_WRITEBACK)
		__shmem_writeback(obj->base.size, i915_tt->filp->f_mapping);

	return 0;
@@ -842,11 +842,9 @@ void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj)
	} else if (obj->mm.madv != I915_MADV_WILLNEED) {
		bo->priority = I915_TTM_PRIO_PURGE;
	} else if (!i915_gem_object_has_pages(obj)) {
		if (bo->priority < I915_TTM_PRIO_HAS_PAGES)
			bo->priority = I915_TTM_PRIO_HAS_PAGES;
		bo->priority = I915_TTM_PRIO_NO_PAGES;
	} else {
		if (bo->priority > I915_TTM_PRIO_NO_PAGES)
			bo->priority = I915_TTM_PRIO_NO_PAGES;
		bo->priority = I915_TTM_PRIO_HAS_PAGES;
	}

	ttm_bo_move_to_lru_tail(bo, bo->resource, NULL);
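The adjust_lru hunk fixes priorities that were previously assigned the
wrong way round: page-less objects now land in the cheap-to-evict
NO_PAGES bucket and populated objects in HAS_PAGES. Assuming the usual
numeric ordering of these i915 TTM priorities (lower buckets are shrunk
first), a hypothetical compile-time sanity check would be:

	/* Hypothetical check of the intended eviction order: purged
	 * objects first, then page-less ones, populated objects last. */
	BUILD_BUG_ON(!(I915_TTM_PRIO_PURGE < I915_TTM_PRIO_NO_PAGES &&
		       I915_TTM_PRIO_NO_PAGES < I915_TTM_PRIO_HAS_PAGES));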
@@ -977,7 +975,7 @@ static const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
	.get_pages = i915_ttm_get_pages,
	.put_pages = i915_ttm_put_pages,
	.truncate = i915_ttm_truncate,
	.shrinker_release_pages = i915_ttm_shrinker_release_pages,
	.shrink = i915_ttm_shrink,

	.adjust_lru = i915_ttm_adjust_lru,
	.delayed_free = i915_ttm_delayed_free,
@@ -142,7 +142,16 @@ int i915_ttm_move_notify(struct ttm_buffer_object *bo)
	struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
	int ret;

	ret = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
	/*
	 * Note: The async unbinding here will actually transform the
	 * blocking wait for unbind into a wait before finally submitting
	 * the evict / migration blit and thus stall the migration timeline
	 * which may not be good for overall throughput. We should make
	 * sure we await the unbind fences *after* the migration blit
	 * instead of *before* as we currently do.
	 */
	ret = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE |
				     I915_GEM_OBJECT_UNBIND_ASYNC);
	if (ret)
		return ret;

@@ -531,7 +540,7 @@ int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
			return ret;
		}

		migration_fence = __i915_ttm_move(bo, ctx, clear, dst_mem, bo->ttm,
		migration_fence = __i915_ttm_move(bo, ctx, clear, dst_mem, ttm,
						  dst_rsgt, true, &deps);
		i915_deps_fini(&deps);
	}
@@ -8,9 +8,10 @@

#include "i915_selftest.h"

#include "gem/i915_gem_region.h"
#include "gem/i915_gem_internal.h"
#include "gem/i915_gem_lmem.h"
#include "gem/i915_gem_pm.h"
#include "gem/i915_gem_region.h"

#include "gt/intel_gt.h"
@@ -370,9 +371,9 @@ static int igt_check_page_sizes(struct i915_vma *vma)
		err = -EINVAL;
	}

	if (!HAS_PAGE_SIZES(i915, vma->page_sizes.gtt)) {
	if (!HAS_PAGE_SIZES(i915, vma->resource->page_sizes_gtt)) {
		pr_err("unsupported page_sizes.gtt=%u, supported=%u\n",
		       vma->page_sizes.gtt & ~supported, supported);
		       vma->resource->page_sizes_gtt & ~supported, supported);
		err = -EINVAL;
	}

@@ -403,15 +404,9 @@ static int igt_check_page_sizes(struct i915_vma *vma)
	if (i915_gem_object_is_lmem(obj) &&
	    IS_ALIGNED(vma->node.start, SZ_2M) &&
	    vma->page_sizes.sg & SZ_2M &&
	    vma->page_sizes.gtt < SZ_2M) {
	    vma->resource->page_sizes_gtt < SZ_2M) {
		pr_err("gtt pages mismatch for LMEM, expected 2M GTT pages, sg(%u), gtt(%u)\n",
		       vma->page_sizes.sg, vma->page_sizes.gtt);
		err = -EINVAL;
	}

	if (obj->mm.page_sizes.gtt) {
		pr_err("obj->page_sizes.gtt(%u) should never be set\n",
		       obj->mm.page_sizes.gtt);
		       vma->page_sizes.sg, vma->resource->page_sizes_gtt);
		err = -EINVAL;
	}

@@ -547,9 +542,9 @@ static int igt_mock_memory_region_huge_pages(void *arg)
			goto out_unpin;
		}

		if (vma->page_sizes.gtt != page_size) {
		if (vma->resource->page_sizes_gtt != page_size) {
			pr_err("%s page_sizes.gtt=%u, expected=%u\n",
			       __func__, vma->page_sizes.gtt,
			       __func__, vma->resource->page_sizes_gtt,
			       page_size);
			err = -EINVAL;
			goto out_unpin;
@@ -630,9 +625,9 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)

		err = igt_check_page_sizes(vma);

		if (vma->page_sizes.gtt != page_size) {
		if (vma->resource->page_sizes_gtt != page_size) {
			pr_err("page_sizes.gtt=%u, expected %u\n",
			       vma->page_sizes.gtt, page_size);
			       vma->resource->page_sizes_gtt, page_size);
			err = -EINVAL;
		}

@@ -647,7 +642,7 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
		 * pages.
		 */
		for (offset = 4096; offset < page_size; offset += 4096) {
			err = i915_vma_unbind(vma);
			err = i915_vma_unbind_unlocked(vma);
			if (err)
				goto out_unpin;

@@ -657,9 +652,10 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)

			err = igt_check_page_sizes(vma);

			if (vma->page_sizes.gtt != I915_GTT_PAGE_SIZE_4K) {
			if (vma->resource->page_sizes_gtt != I915_GTT_PAGE_SIZE_4K) {
				pr_err("page_sizes.gtt=%u, expected %llu\n",
				       vma->page_sizes.gtt, I915_GTT_PAGE_SIZE_4K);
				       vma->resource->page_sizes_gtt,
				       I915_GTT_PAGE_SIZE_4K);
				err = -EINVAL;
			}

@@ -805,9 +801,9 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
			}
		}

		if (vma->page_sizes.gtt != expected_gtt) {
		if (vma->resource->page_sizes_gtt != expected_gtt) {
			pr_err("gtt=%u, expected=%u, size=%zd, single=%s\n",
			       vma->page_sizes.gtt, expected_gtt,
			       vma->resource->page_sizes_gtt, expected_gtt,
			       obj->base.size, yesno(!!single));
			err = -EINVAL;
			break;
@@ -961,10 +957,10 @@ static int igt_mock_ppgtt_64K(void *arg)
			}
		}

		if (vma->page_sizes.gtt != expected_gtt) {
		if (vma->resource->page_sizes_gtt != expected_gtt) {
			pr_err("gtt=%u, expected=%u, i=%d, single=%s\n",
			       vma->page_sizes.gtt, expected_gtt, i,
			       yesno(!!single));
			       vma->resource->page_sizes_gtt,
			       expected_gtt, i, yesno(!!single));
			err = -EINVAL;
			goto out_vma_unpin;
		}
@@ -319,7 +319,7 @@ static int pin_buffer(struct i915_vma *vma, u64 addr)
	int err;

	if (drm_mm_node_allocated(&vma->node) && vma->node.start != addr) {
		err = i915_vma_unbind(vma);
		err = i915_vma_unbind_unlocked(vma);
		if (err)
			return err;
	}
@@ -544,7 +544,7 @@ static bool has_bit17_swizzle(int sw)

static bool bad_swizzling(struct drm_i915_private *i915)
{
	struct i915_ggtt *ggtt = &i915->ggtt;
	struct i915_ggtt *ggtt = to_gt(i915)->ggtt;

	if (i915->quirks & QUIRK_PIN_SWIZZLED_PAGES)
		return true;
@@ -6,6 +6,7 @@

#include <linux/prime_numbers.h>

#include "gem/i915_gem_internal.h"
#include "gem/i915_gem_pm.h"
#include "gt/intel_engine_pm.h"
#include "gt/intel_engine_regs.h"
@@ -1375,7 +1376,7 @@ static int igt_ctx_readonly(void *arg)
		goto out_file;
	}

	vm = ctx->vm ?: &i915->ggtt.alias->vm;
	vm = ctx->vm ?: &to_gt(i915)->ggtt->alias->vm;
	if (!vm || !vm->has_read_only) {
		err = 0;
		goto out_file;
@@ -4,8 +4,13 @@
 */

#include "gt/intel_migrate.h"
#include "gt/intel_gpu_commands.h"
#include "gem/i915_gem_ttm_move.h"

#include "i915_deps.h"

#include "selftests/igt_spinner.h"

static int igt_fill_check_buffer(struct drm_i915_gem_object *obj,
				 bool fill)
{
@@ -101,7 +106,8 @@ static int igt_same_create_migrate(void *arg)
}

static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
				  struct drm_i915_gem_object *obj)
				  struct drm_i915_gem_object *obj,
				  struct i915_vma *vma)
{
	int err;

@@ -109,6 +115,24 @@ static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
	if (err)
		return err;

	if (vma) {
		err = i915_vma_pin_ww(vma, ww, obj->base.size, 0,
				      0UL | PIN_OFFSET_FIXED |
				      PIN_USER);
		if (err) {
			if (err != -EINTR && err != -ERESTARTSYS &&
			    err != -EDEADLK)
				pr_err("Failed to pin vma.\n");
			return err;
		}

		i915_vma_unpin(vma);
	}

	/*
	 * Migration will implicitly unbind (asynchronously) any bound
	 * vmas.
	 */
	if (i915_gem_object_is_lmem(obj)) {
		err = i915_gem_object_migrate(obj, ww, INTEL_REGION_SMEM);
		if (err) {
@@ -149,11 +173,15 @@ static int lmem_pages_migrate_one(struct i915_gem_ww_ctx *ww,
	return err;
}

static int igt_lmem_pages_migrate(void *arg)
static int __igt_lmem_pages_migrate(struct intel_gt *gt,
				    struct i915_address_space *vm,
				    struct i915_deps *deps,
				    struct igt_spinner *spin,
				    struct dma_fence *spin_fence)
{
	struct intel_gt *gt = arg;
	struct drm_i915_private *i915 = gt->i915;
	struct drm_i915_gem_object *obj;
	struct i915_vma *vma = NULL;
	struct i915_gem_ww_ctx ww;
	struct i915_request *rq;
	int err;
@@ -165,6 +193,14 @@ static int igt_lmem_pages_migrate(void *arg)
	if (IS_ERR(obj))
		return PTR_ERR(obj);

	if (vm) {
		vma = i915_vma_instance(obj, vm, NULL);
		if (IS_ERR(vma)) {
			err = PTR_ERR(vma);
			goto out_put;
		}
	}

	/* Initial GPU fill, sync, CPU initialization. */
	for_i915_gem_ww(&ww, err, true) {
		err = i915_gem_object_lock(obj, &ww);
@@ -175,25 +211,23 @@ static int igt_lmem_pages_migrate(void *arg)
		if (err)
			continue;

		err = intel_migrate_clear(&gt->migrate, &ww, NULL,
		err = intel_migrate_clear(&gt->migrate, &ww, deps,
					  obj->mm.pages->sgl, obj->cache_level,
					  i915_gem_object_is_lmem(obj),
					  0xdeadbeaf, &rq);
		if (rq) {
			dma_resv_add_excl_fence(obj->base.resv, &rq->fence);
			i915_gem_object_set_moving_fence(obj, &rq->fence);
			i915_request_put(rq);
		}
		if (err)
			continue;

		err = i915_gem_object_wait(obj, I915_WAIT_INTERRUPTIBLE,
					   5 * HZ);
		if (err)
			continue;

		err = igt_fill_check_buffer(obj, true);
		if (err)
			continue;
		if (!vma) {
			err = igt_fill_check_buffer(obj, true);
			if (err)
				continue;
		}
	}
	if (err)
		goto out_put;
@@ -204,7 +238,7 @@ static int igt_lmem_pages_migrate(void *arg)
	 */
	for (i = 1; i <= 5; ++i) {
		for_i915_gem_ww(&ww, err, true)
			err = lmem_pages_migrate_one(&ww, obj);
			err = lmem_pages_migrate_one(&ww, obj, vma);
		if (err)
			goto out_put;
	}
@@ -213,12 +247,27 @@ static int igt_lmem_pages_migrate(void *arg)
	if (err)
		goto out_put;

	if (spin) {
		if (dma_fence_is_signaled(spin_fence)) {
			pr_err("Spinner was terminated by hangcheck.\n");
			err = -EBUSY;
			goto out_unlock;
		}
		igt_spinner_end(spin);
	}

	/* Finally sync migration and check content. */
	err = i915_gem_object_wait_migration(obj, true);
	if (err)
		goto out_unlock;

	err = igt_fill_check_buffer(obj, false);
	if (vma) {
		err = i915_vma_wait_for_bind(vma);
		if (err)
			goto out_unlock;
	} else {
		err = igt_fill_check_buffer(obj, false);
	}

out_unlock:
	i915_gem_object_unlock(obj);
@@ -231,6 +280,7 @@ out_put:
static int igt_lmem_pages_failsafe_migrate(void *arg)
{
	int fail_gpu, fail_alloc, ret;
	struct intel_gt *gt = arg;

	for (fail_gpu = 0; fail_gpu < 2; ++fail_gpu) {
		for (fail_alloc = 0; fail_alloc < 2; ++fail_alloc) {
@@ -238,7 +288,118 @@ static int igt_lmem_pages_failsafe_migrate(void *arg)
				fail_gpu, fail_alloc);
			i915_ttm_migrate_set_failure_modes(fail_gpu,
							   fail_alloc);
			ret = igt_lmem_pages_migrate(arg);
			ret = __igt_lmem_pages_migrate(gt, NULL, NULL, NULL, NULL);
			if (ret)
				goto out_err;
		}
	}

out_err:
	i915_ttm_migrate_set_failure_modes(false, false);
	return ret;
}

/*
 * This subtest tests that unbinding at migration is indeed performed
 * async. We launch a spinner and a number of migrations depending on
 * that spinner to have terminated. Before each migration we bind a
 * vma, which should then be async unbound by the migration operation.
 * If we are able to schedule migrations without blocking while the
 * spinner is still running, those unbinds are indeed async and non-
 * blocking.
 *
 * Note that each async bind operation is awaiting the previous migration
 * due to the moving fence resulting from the migration.
 */
static int igt_async_migrate(struct intel_gt *gt)
{
	struct intel_engine_cs *engine;
	enum intel_engine_id id;
	struct i915_ppgtt *ppgtt;
	struct igt_spinner spin;
	int err;

	ppgtt = i915_ppgtt_create(gt, 0);
	if (IS_ERR(ppgtt))
		return PTR_ERR(ppgtt);

	if (igt_spinner_init(&spin, gt)) {
		err = -ENOMEM;
		goto out_spin;
	}

	for_each_engine(engine, gt, id) {
		struct ttm_operation_ctx ctx = {
			.interruptible = true
		};
		struct dma_fence *spin_fence;
		struct intel_context *ce;
		struct i915_request *rq;
		struct i915_deps deps;

		ce = intel_context_create(engine);
		if (IS_ERR(ce)) {
			err = PTR_ERR(ce);
			goto out_ce;
		}

		/*
		 * Use MI_NOOP, making the spinner non-preemptible. If there
		 * is a code path where we fail async operation due to the
		 * running spinner, we will block and fail to end the
		 * spinner resulting in a deadlock. But with a non-
		 * preemptible spinner, hangcheck will terminate the spinner
		 * for us, and we will later detect that and fail the test.
		 */
		rq = igt_spinner_create_request(&spin, ce, MI_NOOP);
		intel_context_put(ce);
		if (IS_ERR(rq)) {
			err = PTR_ERR(rq);
			goto out_ce;
		}

		i915_deps_init(&deps, GFP_KERNEL);
		err = i915_deps_add_dependency(&deps, &rq->fence, &ctx);
		spin_fence = dma_fence_get(&rq->fence);
		i915_request_add(rq);
		if (err)
			goto out_ce;

		err = __igt_lmem_pages_migrate(gt, &ppgtt->vm, &deps, &spin,
					       spin_fence);
		i915_deps_fini(&deps);
		dma_fence_put(spin_fence);
		if (err)
			goto out_ce;
	}

out_ce:
	igt_spinner_fini(&spin);
out_spin:
	i915_vm_put(&ppgtt->vm);

	return err;
}

/*
 * Setting ASYNC_FAIL_ALLOC to 2 will simulate memory allocation failure while
 * arming the migration error check and block async migration. This
 * will cause us to deadlock and hangcheck will terminate the spinner
 * causing the test to fail.
 */
#define ASYNC_FAIL_ALLOC 1
static int igt_lmem_async_migrate(void *arg)
{
	int fail_gpu, fail_alloc, ret;
	struct intel_gt *gt = arg;

	for (fail_gpu = 0; fail_gpu < 2; ++fail_gpu) {
		for (fail_alloc = 0; fail_alloc < ASYNC_FAIL_ALLOC; ++fail_alloc) {
			pr_info("Simulated failure modes: gpu: %d, alloc: %d\n",
				fail_gpu, fail_alloc);
			i915_ttm_migrate_set_failure_modes(fail_gpu,
							   fail_alloc);
			ret = igt_async_migrate(gt);
			if (ret)
				goto out_err;
		}
@@ -256,6 +417,7 @@ int i915_gem_migrate_live_selftests(struct drm_i915_private *i915)
		SUBTEST(igt_lmem_create_migrate),
		SUBTEST(igt_same_create_migrate),
		SUBTEST(igt_lmem_pages_failsafe_migrate),
		SUBTEST(igt_lmem_async_migrate),
	};

	if (!HAS_LMEM(i915))
@@ -6,11 +6,13 @@

#include <linux/prime_numbers.h>

#include "gem/i915_gem_internal.h"
#include "gem/i915_gem_region.h"
#include "gt/intel_engine_pm.h"
#include "gt/intel_gpu_commands.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_pm.h"
#include "gem/i915_gem_region.h"

#include "huge_gem_object.h"
#include "i915_selftest.h"
#include "selftests/i915_random.h"
@@ -166,7 +168,9 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
	kunmap(p);

out:
	i915_gem_object_lock(obj, NULL);
	__i915_vma_put(vma);
	i915_gem_object_unlock(obj);
	return err;
}

@@ -261,7 +265,9 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
	if (err)
		return err;

	i915_gem_object_lock(obj, NULL);
	__i915_vma_put(vma);
	i915_gem_object_unlock(obj);

	if (igt_timeout(end_time,
			"%s: timed out after tiling=%d stride=%d\n",
@@ -307,7 +313,7 @@ static int igt_partial_tiling(void *arg)
	int tiling;
	int err;

	if (!i915_ggtt_has_aperture(&i915->ggtt))
	if (!i915_ggtt_has_aperture(to_gt(i915)->ggtt))
		return 0;

	/* We want to check the page mapping and fencing of a large object
@@ -320,7 +326,7 @@ static int igt_partial_tiling(void *arg)

	obj = huge_gem_object(i915,
			      nreal << PAGE_SHIFT,
			      (1 + next_prime_number(i915->ggtt.vm.total >> PAGE_SHIFT)) << PAGE_SHIFT);
			      (1 + next_prime_number(to_gt(i915)->ggtt->vm.total >> PAGE_SHIFT)) << PAGE_SHIFT);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

@@ -366,10 +372,10 @@ static int igt_partial_tiling(void *arg)
		tile.tiling = tiling;
		switch (tiling) {
		case I915_TILING_X:
			tile.swizzle = i915->ggtt.bit_6_swizzle_x;
			tile.swizzle = to_gt(i915)->ggtt->bit_6_swizzle_x;
			break;
		case I915_TILING_Y:
			tile.swizzle = i915->ggtt.bit_6_swizzle_y;
			tile.swizzle = to_gt(i915)->ggtt->bit_6_swizzle_y;
			break;
		}

@@ -440,7 +446,7 @@ static int igt_smoke_tiling(void *arg)
	IGT_TIMEOUT(end);
	int err;

	if (!i915_ggtt_has_aperture(&i915->ggtt))
	if (!i915_ggtt_has_aperture(to_gt(i915)->ggtt))
		return 0;

	/*
@@ -457,7 +463,7 @@ static int igt_smoke_tiling(void *arg)

	obj = huge_gem_object(i915,
			      nreal << PAGE_SHIFT,
			      (1 + next_prime_number(i915->ggtt.vm.total >> PAGE_SHIFT)) << PAGE_SHIFT);
			      (1 + next_prime_number(to_gt(i915)->ggtt->vm.total >> PAGE_SHIFT)) << PAGE_SHIFT);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

@@ -486,10 +492,10 @@ static int igt_smoke_tiling(void *arg)
			break;

		case I915_TILING_X:
			tile.swizzle = i915->ggtt.bit_6_swizzle_x;
			tile.swizzle = to_gt(i915)->ggtt->bit_6_swizzle_x;
			break;
		case I915_TILING_Y:
			tile.swizzle = i915->ggtt.bit_6_swizzle_y;
			tile.swizzle = to_gt(i915)->ggtt->bit_6_swizzle_y;
			break;
		}

@@ -856,6 +862,7 @@ static int wc_check(struct drm_i915_gem_object *obj)

static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type)
{
	struct drm_i915_private *i915 = to_i915(obj->base.dev);
	bool no_map;

	if (obj->ops->mmap_offset)
@@ -864,7 +871,7 @@ static bool can_mmap(struct drm_i915_gem_object *obj, enum i915_mmap_type type)
		return false;

	if (type == I915_MMAP_TYPE_GTT &&
	    !i915_ggtt_has_aperture(&to_i915(obj->base.dev)->ggtt))
	    !i915_ggtt_has_aperture(to_gt(i915)->ggtt))
		return false;

	i915_gem_object_lock(obj, NULL);
@@ -1351,7 +1358,9 @@ static int __igt_mmap_revoke(struct drm_i915_private *i915,
	 * for other objects. Ergo we have to revoke the previous mmap PTE
	 * access as it no longer points to the same object.
	 */
	i915_gem_object_lock(obj, NULL);
	err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
	i915_gem_object_unlock(obj);
	if (err) {
		pr_err("Failed to unbind object!\n");
		goto out_unmap;
@@ -43,7 +43,7 @@ static int igt_gem_huge(void *arg)

	obj = huge_gem_object(i915,
			      nreal * PAGE_SIZE,
			      i915->ggtt.vm.total + PAGE_SIZE);
			      to_gt(i915)->ggtt->vm.total + PAGE_SIZE);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

@@ -7,6 +7,7 @@
#include "igt_gem_utils.h"

#include "gem/i915_gem_context.h"
#include "gem/i915_gem_internal.h"
#include "gem/i915_gem_pm.h"
#include "gt/intel_context.h"
#include "gt/intel_gpu_commands.h"

@@ -4,6 +4,7 @@
 * Copyright © 2016 Intel Corporation
 */

#include "i915_file_private.h"
#include "mock_context.h"
#include "selftests/mock_drm.h"
#include "selftests/mock_gtt.h"
@@ -5,6 +5,8 @@

#include <linux/log2.h>

#include "gem/i915_gem_internal.h"

#include "gen6_ppgtt.h"
#include "i915_scatterlist.h"
#include "i915_trace.h"
@@ -106,17 +108,17 @@ static void gen6_ppgtt_clear_range(struct i915_address_space *vm,
}

static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
				      struct i915_vma *vma,
				      struct i915_vma_resource *vma_res,
				      enum i915_cache_level cache_level,
				      u32 flags)
{
	struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
	struct i915_page_directory * const pd = ppgtt->pd;
	unsigned int first_entry = vma->node.start / I915_GTT_PAGE_SIZE;
	unsigned int first_entry = vma_res->start / I915_GTT_PAGE_SIZE;
	unsigned int act_pt = first_entry / GEN6_PTES;
	unsigned int act_pte = first_entry % GEN6_PTES;
	const u32 pte_encode = vm->pte_encode(0, cache_level, flags);
	struct sgt_dma iter = sgt_dma(vma);
	struct sgt_dma iter = sgt_dma(vma_res);
	gen6_pte_t *vaddr;

	GEM_BUG_ON(!pd->entry[act_pt]);
@@ -142,7 +144,7 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
		}
	} while (1);

	vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
	vma_res->page_sizes_gtt = I915_GTT_PAGE_SIZE;
}

static void gen6_flush_pd(struct gen6_ppgtt *ppgtt, u64 start, u64 end)
@@ -273,13 +275,13 @@ static void gen6_ppgtt_cleanup(struct i915_address_space *vm)

static void pd_vma_bind(struct i915_address_space *vm,
			struct i915_vm_pt_stash *stash,
			struct i915_vma *vma,
			struct i915_vma_resource *vma_res,
			enum i915_cache_level cache_level,
			u32 unused)
{
	struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
	struct gen6_ppgtt *ppgtt = vma->private;
	u32 ggtt_offset = i915_ggtt_offset(vma) / I915_GTT_PAGE_SIZE;
	struct gen6_ppgtt *ppgtt = vma_res->private;
	u32 ggtt_offset = vma_res->start / I915_GTT_PAGE_SIZE;

	ppgtt->pp_dir = ggtt_offset * sizeof(gen6_pte_t) << 10;
	ppgtt->pd_addr = (gen6_pte_t __iomem *)ggtt->gsm + ggtt_offset;
@@ -287,9 +289,10 @@ static void pd_vma_bind(struct i915_address_space *vm,
	gen6_flush_pd(ppgtt, 0, ppgtt->base.vm.total);
}

static void pd_vma_unbind(struct i915_address_space *vm, struct i915_vma *vma)
static void pd_vma_unbind(struct i915_address_space *vm,
			  struct i915_vma_resource *vma_res)
{
	struct gen6_ppgtt *ppgtt = vma->private;
	struct gen6_ppgtt *ppgtt = vma_res->private;
	struct i915_page_directory * const pd = ppgtt->base.pd;
	struct i915_page_table *pt;
	unsigned int pde;
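From here on, the page-table hooks stop taking the vma and instead take
its i915_vma_resource, which snapshots everything a (possibly async)
unbind still needs after the vma itself may be gone. A hedged caller-side
sketch of the new shape (field names are from the hunks; the wrapper is
hypothetical):

	/* Hypothetical bind-time call into a converted hook: everything
	 * the insertion needs now comes from the vma's resource. */
	static void my_bind(struct i915_address_space *vm, struct i915_vma *vma,
			    enum i915_cache_level cache_level, u32 flags)
	{
		struct i915_vma_resource *vma_res = vma->resource;

		/* vma_res->start and vma_res->bi.pages replace
		 * vma->node.start and vma->pages inside the hook. */
		vm->insert_entries(vm, vma_res, cache_level, flags);
	}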
@@ -453,20 +453,21 @@ gen8_ppgtt_insert_pte(struct i915_ppgtt *ppgtt,
	return idx;
}

static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
static void gen8_ppgtt_insert_huge(struct i915_address_space *vm,
				   struct i915_vma_resource *vma_res,
				   struct sgt_dma *iter,
				   enum i915_cache_level cache_level,
				   u32 flags)
{
	const gen8_pte_t pte_encode = gen8_pte_encode(0, cache_level, flags);
	unsigned int rem = sg_dma_len(iter->sg);
	u64 start = vma->node.start;
	u64 start = vma_res->start;

	GEM_BUG_ON(!i915_vm_is_4lvl(vma->vm));
	GEM_BUG_ON(!i915_vm_is_4lvl(vm));

	do {
		struct i915_page_directory * const pdp =
			gen8_pdp_for_page_address(vma->vm, start);
			gen8_pdp_for_page_address(vm, start);
		struct i915_page_directory * const pd =
			i915_pd_entry(pdp, __gen8_pte_index(start, 2));
		gen8_pte_t encode = pte_encode;
@@ -475,7 +476,7 @@ static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
		gen8_pte_t *vaddr;
		u16 index;

		if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_2M &&
		if (vma_res->bi.page_sizes.sg & I915_GTT_PAGE_SIZE_2M &&
		    IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_2M) &&
		    rem >= I915_GTT_PAGE_SIZE_2M &&
		    !__gen8_pte_index(start, 0)) {
@@ -492,7 +493,7 @@ static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
			page_size = I915_GTT_PAGE_SIZE;

			if (!index &&
			    vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K &&
			    vma_res->bi.page_sizes.sg & I915_GTT_PAGE_SIZE_64K &&
			    IS_ALIGNED(iter->dma, I915_GTT_PAGE_SIZE_64K) &&
			    (IS_ALIGNED(rem, I915_GTT_PAGE_SIZE_64K) ||
			     rem >= (I915_PDES - index) * I915_GTT_PAGE_SIZE))
@@ -541,9 +542,9 @@ static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
		 */
		if (maybe_64K != -1 &&
		    (index == I915_PDES ||
		     (i915_vm_has_scratch_64K(vma->vm) &&
		      !iter->sg && IS_ALIGNED(vma->node.start +
					      vma->node.size,
		     (i915_vm_has_scratch_64K(vm) &&
		      !iter->sg && IS_ALIGNED(vma_res->start +
					      vma_res->node_size,
					      I915_GTT_PAGE_SIZE_2M)))) {
			vaddr = px_vaddr(pd);
			vaddr[maybe_64K] |= GEN8_PDE_IPS_64K;
@@ -559,10 +560,10 @@ static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
			 * instead - which we detect as missing results during
			 * selftests.
			 */
			if (I915_SELFTEST_ONLY(vma->vm->scrub_64K)) {
			if (I915_SELFTEST_ONLY(vm->scrub_64K)) {
				u16 i;

				encode = vma->vm->scratch[0]->encode;
				encode = vm->scratch[0]->encode;
				vaddr = px_vaddr(i915_pt_entry(pd, maybe_64K));

				for (i = 1; i < index; i += 16)
@@ -572,22 +573,22 @@ static void gen8_ppgtt_insert_huge(struct i915_vma *vma,
			}
		}

		vma->page_sizes.gtt |= page_size;
		vma_res->page_sizes_gtt |= page_size;
	} while (iter->sg && sg_dma_len(iter->sg));
}

static void gen8_ppgtt_insert(struct i915_address_space *vm,
			      struct i915_vma *vma,
			      struct i915_vma_resource *vma_res,
			      enum i915_cache_level cache_level,
			      u32 flags)
{
	struct i915_ppgtt * const ppgtt = i915_vm_to_ppgtt(vm);
	struct sgt_dma iter = sgt_dma(vma);
	struct sgt_dma iter = sgt_dma(vma_res);

	if (vma->page_sizes.sg > I915_GTT_PAGE_SIZE) {
		gen8_ppgtt_insert_huge(vma, &iter, cache_level, flags);
	if (vma_res->bi.page_sizes.sg > I915_GTT_PAGE_SIZE) {
		gen8_ppgtt_insert_huge(vm, vma_res, &iter, cache_level, flags);
	} else {
		u64 idx = vma->node.start >> GEN8_PTE_SHIFT;
		u64 idx = vma_res->start >> GEN8_PTE_SHIFT;

		do {
			struct i915_page_directory * const pdp =
@@ -597,7 +598,7 @@ static void gen8_ppgtt_insert(struct i915_address_space *vm,
					 cache_level, flags);
		} while (idx);

		vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
		vma_res->page_sizes_gtt = I915_GTT_PAGE_SIZE;
	}
}
@@ -79,7 +79,8 @@ static int intel_context_active_acquire(struct intel_context *ce)

	__i915_active_acquire(&ce->active);

	if (intel_context_is_barrier(ce) || intel_engine_uses_guc(ce->engine))
	if (intel_context_is_barrier(ce) || intel_engine_uses_guc(ce->engine) ||
	    intel_context_is_parallel(ce))
		return 0;

	/* Preallocate tracking nodes */
@@ -563,7 +564,6 @@ void intel_context_bind_parent_child(struct intel_context *parent,
	 * Caller's responsibility to validate that this function is used
	 * correctly, but we use GEM_BUG_ON here to ensure that they do.
	 */
	GEM_BUG_ON(!intel_engine_uses_guc(parent->engine));
	GEM_BUG_ON(intel_context_is_pinned(parent));
	GEM_BUG_ON(intel_context_is_child(parent));
	GEM_BUG_ON(intel_context_is_pinned(child));
@@ -9,6 +9,7 @@
#include "intel_engine_pm.h"
#include "intel_gpu_commands.h"
#include "intel_lrc.h"
#include "intel_lrc_reg.h"
#include "intel_ring.h"
#include "intel_sseu.h"
@@ -182,6 +182,8 @@ intel_write_status_page(struct intel_engine_cs *engine, int reg, u32 value)
#define I915_HWS_CSB_BUF0_INDEX		0x10
#define I915_HWS_CSB_WRITE_INDEX	0x1f
#define ICL_HWS_CSB_WRITE_INDEX		0x2f
#define INTEL_HWS_CSB_WRITE_INDEX(__i915) \
	(GRAPHICS_VER(__i915) >= 11 ? ICL_HWS_CSB_WRITE_INDEX : I915_HWS_CSB_WRITE_INDEX)

void intel_engine_stop(struct intel_engine_cs *engine);
void intel_engine_cleanup(struct intel_engine_cs *engine);
@@ -6,6 +6,7 @@
#include <drm/drm_print.h>

#include "gem/i915_gem_context.h"
#include "gem/i915_gem_internal.h"
#include "gt/intel_gt_regs.h"

#include "i915_cmd_parser.h"
@@ -1229,17 +1230,6 @@ void intel_engine_cancel_stop_cs(struct intel_engine_cs *engine)
	ENGINE_WRITE_FW(engine, RING_MI_MODE, _MASKED_BIT_DISABLE(STOP_RING));
}

const char *i915_cache_level_str(struct drm_i915_private *i915, int type)
{
	switch (type) {
	case I915_CACHE_NONE: return " uncached";
	case I915_CACHE_LLC: return HAS_LLC(i915) ? " LLC" : " snooped";
	case I915_CACHE_L3_LLC: return " L3+LLC";
	case I915_CACHE_WT: return " WT";
	default: return "";
	}
}

static u32
read_subslice_reg(const struct intel_engine_cs *engine,
		  int slice, int subslice, i915_reg_t reg)
@@ -1710,18 +1700,15 @@ static void intel_engine_print_registers(struct intel_engine_cs *engine,

static void print_request_ring(struct drm_printer *m, struct i915_request *rq)
{
	struct i915_vma_snapshot *vsnap = &rq->batch_snapshot;
	struct i915_vma_resource *vma_res = rq->batch_res;
	void *ring;
	int size;

	if (!i915_vma_snapshot_present(vsnap))
		vsnap = NULL;

	drm_printf(m,
		   "[head %04x, postfix %04x, tail %04x, batch 0x%08x_%08x]:\n",
		   rq->head, rq->postfix, rq->tail,
		   vsnap ? upper_32_bits(vsnap->gtt_offset) : ~0u,
		   vsnap ? lower_32_bits(vsnap->gtt_offset) : ~0u);
		   vma_res ? upper_32_bits(vma_res->start) : ~0u,
		   vma_res ? lower_32_bits(vma_res->start) : ~0u);

	size = rq->tail - rq->head;
	if (rq->tail < rq->head)
@@ -70,6 +70,12 @@
#define RING_NOPID(base)			_MMIO((base) + 0x94)
#define RING_HWSTAM(base)			_MMIO((base) + 0x98)
#define RING_MI_MODE(base)			_MMIO((base) + 0x9c)
#define   ASYNC_FLIP_PERF_DISABLE		REG_BIT(14)
#define   MI_FLUSH_ENABLE			REG_BIT(12)
#define   TGL_NESTED_BB_EN			REG_BIT(12)
#define   MODE_IDLE				REG_BIT(9)
#define   STOP_RING				REG_BIT(8)
#define   VS_TIMER_DISPATCH			REG_BIT(6)
#define RING_IMR(base)				_MMIO((base) + 0xa8)
#define RING_EIR(base)				_MMIO((base) + 0xb0)
#define RING_EMR(base)				_MMIO((base) + 0xb4)
@@ -211,8 +217,25 @@
#define GEN8_RING_CS_GPR(base, n)		_MMIO((base) + 0x600 + (n) * 8)
#define GEN8_RING_CS_GPR_UDW(base, n)		_MMIO((base) + 0x600 + (n) * 8 + 4)

#define GEN11_VCS_SFC_FORCED_LOCK(base)		_MMIO((base) + 0x88c)
#define   GEN11_VCS_SFC_FORCED_LOCK_BIT		(1 << 0)
#define GEN11_VCS_SFC_LOCK_STATUS(base)		_MMIO((base) + 0x890)
#define   GEN11_VCS_SFC_USAGE_BIT		(1 << 0)
#define   GEN11_VCS_SFC_LOCK_ACK_BIT		(1 << 1)

#define GEN11_VECS_SFC_FORCED_LOCK(base)	_MMIO((base) + 0x201c)
#define   GEN11_VECS_SFC_FORCED_LOCK_BIT	(1 << 0)
#define GEN11_VECS_SFC_LOCK_ACK(base)		_MMIO((base) + 0x2018)
#define   GEN11_VECS_SFC_LOCK_ACK_BIT		(1 << 0)
#define GEN11_VECS_SFC_USAGE(base)		_MMIO((base) + 0x2014)
#define   GEN11_VECS_SFC_USAGE_BIT		(1 << 0)

#define RING_HWS_PGA_GEN6(base)			_MMIO((base) + 0x2080)

#define GEN12_HCP_SFC_LOCK_STATUS(base)		_MMIO((base) + 0x2914)
#define   GEN12_HCP_SFC_LOCK_ACK_BIT		REG_BIT(1)
#define   GEN12_HCP_SFC_USAGE_BIT		REG_BIT(0)

#define VDBOX_CGCTL3F10(base)			_MMIO((base) + 0x3f10)
#define   IECPUNIT_CLKGATE_DIS			REG_BIT(22)
@@ -2601,6 +2601,43 @@ static void execlists_context_cancel_request(struct intel_context *ce,
			      current->comm);
}

static struct intel_context *
execlists_create_parallel(struct intel_engine_cs **engines,
			  unsigned int num_siblings,
			  unsigned int width)
{
	struct intel_context *parent = NULL, *ce, *err;
	int i;

	GEM_BUG_ON(num_siblings != 1);

	for (i = 0; i < width; ++i) {
		ce = intel_context_create(engines[i]);
		if (IS_ERR(ce)) {
			err = ce;
			goto unwind;
		}

		if (i == 0)
			parent = ce;
		else
			intel_context_bind_parent_child(parent, ce);
	}

	parent->parallel.fence_context = dma_fence_context_alloc(1);

	intel_context_set_nopreempt(parent);
	for_each_child(parent, ce)
		intel_context_set_nopreempt(ce);

	return parent;

unwind:
	if (parent)
		intel_context_put(parent);
	return err;
}

static const struct intel_context_ops execlists_context_ops = {
	.flags = COPS_HAS_INFLIGHT,

@@ -2619,6 +2656,7 @@ static const struct intel_context_ops execlists_context_ops = {
	.reset = lrc_reset,
	.destroy = lrc_destroy,

	.create_parallel = execlists_create_parallel,
	.create_virtual = execlists_create_virtual,
};
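A hedged illustration of the "weak" part of this support: unlike the GuC
backend, execlists accepts exactly one non-virtual sibling per slot, so
parallel width is built from single-engine slots. The validation helper
below is invented, mirroring only the GEM_BUG_ON above:

	/* Hypothetical caller-side check for execlists parallel args. */
	static bool my_parallel_args_ok(unsigned int num_siblings,
					unsigned int width)
	{
		/* One sibling per slot; a parent plus at least one child. */
		return num_siblings == 1 && width >= 2;
	}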
@@ -3465,7 +3503,7 @@ int intel_execlists_submission_setup(struct intel_engine_cs *engine)
		(u64 *)&engine->status_page.addr[I915_HWS_CSB_BUF0_INDEX];

	execlists->csb_write =
		&engine->status_page.addr[intel_hws_csb_write_index(i915)];
		&engine->status_page.addr[INTEL_HWS_CSB_WRITE_INDEX(i915)];

	if (GRAPHICS_VER(i915) < 11)
		execlists->csb_size = GEN8_CSB_ENTRIES;
@@ -87,7 +87,7 @@ int i915_ggtt_init_hw(struct drm_i915_private *i915)
	 * beyond the end of the batch buffer, across the page boundary,
	 * and beyond the end of the GTT if we do not provide a guard.
	 */
	ret = ggtt_init_hw(&i915->ggtt);
	ret = ggtt_init_hw(to_gt(i915)->ggtt);
	if (ret)
		return ret;
@@ -130,22 +130,51 @@ void i915_ggtt_suspend_vm(struct i915_address_space *vm)

	drm_WARN_ON(&vm->i915->drm, !vm->is_ggtt && !vm->is_dpt);

retry:
	i915_gem_drain_freed_objects(vm->i915);

	mutex_lock(&vm->mutex);

	/* Skip rewriting PTE on VMA unbind. */
	open = atomic_xchg(&vm->open, 0);

	list_for_each_entry_safe(vma, vn, &vm->bound_list, vm_link) {
		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
		i915_vma_wait_for_bind(vma);
		struct drm_i915_gem_object *obj = vma->obj;

		if (i915_vma_is_pinned(vma))
		GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));

		if (i915_vma_is_pinned(vma) || !i915_vma_is_bound(vma, I915_VMA_GLOBAL_BIND))
			continue;

		/* unlikely to race when GPU is idle, so no worry about slowpath.. */
		if (WARN_ON(!i915_gem_object_trylock(obj, NULL))) {
			/*
			 * No dead objects should appear here, GPU should be
			 * completely idle, and userspace suspended
			 */
			i915_gem_object_get(obj);

			atomic_set(&vm->open, open);
			mutex_unlock(&vm->mutex);

			i915_gem_object_lock(obj, NULL);
			open = i915_vma_unbind(vma);
			i915_gem_object_unlock(obj);

			GEM_WARN_ON(open);

			i915_gem_object_put(obj);
			goto retry;
		}

		if (!i915_vma_is_bound(vma, I915_VMA_GLOBAL_BIND)) {
			__i915_vma_evict(vma);
			i915_vma_wait_for_bind(vma);

			__i915_vma_evict(vma, false);
			drm_mm_remove_node(&vma->node);
		}

		i915_gem_object_unlock(obj);
	}

	vm->clear_range(vm, 0, vm->total);
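The flattened hunk above interleaves the old and new bodies of the
suspend loop. The new-side flow, reduced to a skeleton for readability
(names from the diff, structure inferred from it):

	/* Objects whose lock cannot be taken opportunistically are unbound
	 * via the full path, and the walk restarts, since dropping
	 * vm->mutex invalidates the list iteration. */
	if (WARN_ON(!i915_gem_object_trylock(obj, NULL))) {
		i915_gem_object_get(obj);		/* keep obj alive */
		mutex_unlock(&vm->mutex);		/* drop the vm lock */
		i915_gem_object_lock(obj, NULL);	/* sleep on the obj lock */
		i915_vma_unbind(vma);
		i915_gem_object_unlock(obj);
		i915_gem_object_put(obj);
		goto retry;				/* list may have changed */
	}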
@@ -236,7 +265,7 @@ static void gen8_ggtt_insert_page(struct i915_address_space *vm,
}

static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
				     struct i915_vma *vma,
				     struct i915_vma_resource *vma_res,
				     enum i915_cache_level level,
				     u32 flags)
{
@@ -253,10 +282,10 @@ static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
	 */

	gte = (gen8_pte_t __iomem *)ggtt->gsm;
	gte += vma->node.start / I915_GTT_PAGE_SIZE;
	end = gte + vma->node.size / I915_GTT_PAGE_SIZE;
	gte += vma_res->start / I915_GTT_PAGE_SIZE;
	end = gte + vma_res->node_size / I915_GTT_PAGE_SIZE;

	for_each_sgt_daddr(addr, iter, vma->pages)
	for_each_sgt_daddr(addr, iter, vma_res->bi.pages)
		gen8_set_pte(gte++, pte_encode | addr);
	GEM_BUG_ON(gte > end);

@@ -293,7 +322,7 @@ static void gen6_ggtt_insert_page(struct i915_address_space *vm,
 * through the GMADR mapped BAR (i915->mm.gtt->gtt).
 */
static void gen6_ggtt_insert_entries(struct i915_address_space *vm,
				     struct i915_vma *vma,
				     struct i915_vma_resource *vma_res,
				     enum i915_cache_level level,
				     u32 flags)
{
@@ -304,10 +333,10 @@ static void gen6_ggtt_insert_entries(struct i915_address_space *vm,
	dma_addr_t addr;

	gte = (gen6_pte_t __iomem *)ggtt->gsm;
	gte += vma->node.start / I915_GTT_PAGE_SIZE;
	end = gte + vma->node.size / I915_GTT_PAGE_SIZE;
	gte += vma_res->start / I915_GTT_PAGE_SIZE;
	end = gte + vma_res->node_size / I915_GTT_PAGE_SIZE;

	for_each_sgt_daddr(addr, iter, vma->pages)
	for_each_sgt_daddr(addr, iter, vma_res->bi.pages)
		iowrite32(vm->pte_encode(addr, level, flags), gte++);
	GEM_BUG_ON(gte > end);

@@ -390,7 +419,7 @@ static void bxt_vtd_ggtt_insert_page__BKL(struct i915_address_space *vm,

struct insert_entries {
	struct i915_address_space *vm;
	struct i915_vma *vma;
	struct i915_vma_resource *vma_res;
	enum i915_cache_level level;
	u32 flags;
};
@@ -399,18 +428,18 @@ static int bxt_vtd_ggtt_insert_entries__cb(void *_arg)
{
	struct insert_entries *arg = _arg;

	gen8_ggtt_insert_entries(arg->vm, arg->vma, arg->level, arg->flags);
	gen8_ggtt_insert_entries(arg->vm, arg->vma_res, arg->level, arg->flags);
	bxt_vtd_ggtt_wa(arg->vm);

	return 0;
}

static void bxt_vtd_ggtt_insert_entries__BKL(struct i915_address_space *vm,
					     struct i915_vma *vma,
					     struct i915_vma_resource *vma_res,
					     enum i915_cache_level level,
					     u32 flags)
{
	struct insert_entries arg = { vm, vma, level, flags };
	struct insert_entries arg = { vm, vma_res, level, flags };

	stop_machine(bxt_vtd_ggtt_insert_entries__cb, &arg, NULL);
}
@@ -449,14 +478,14 @@ static void i915_ggtt_insert_page(struct i915_address_space *vm,
}

static void i915_ggtt_insert_entries(struct i915_address_space *vm,
				     struct i915_vma *vma,
				     struct i915_vma_resource *vma_res,
				     enum i915_cache_level cache_level,
				     u32 unused)
{
	unsigned int flags = (cache_level == I915_CACHE_NONE) ?
		AGP_USER_MEMORY : AGP_USER_CACHED_MEMORY;

	intel_gtt_insert_sg_entries(vma->pages, vma->node.start >> PAGE_SHIFT,
	intel_gtt_insert_sg_entries(vma_res->bi.pages, vma_res->start >> PAGE_SHIFT,
				    flags);
}
@@ -468,30 +497,32 @@ static void i915_ggtt_clear_range(struct i915_address_space *vm,

static void ggtt_bind_vma(struct i915_address_space *vm,
			  struct i915_vm_pt_stash *stash,
			  struct i915_vma *vma,
			  struct i915_vma_resource *vma_res,
			  enum i915_cache_level cache_level,
			  u32 flags)
{
	struct drm_i915_gem_object *obj = vma->obj;
	u32 pte_flags;

	if (i915_vma_is_bound(vma, ~flags & I915_VMA_BIND_MASK))
	if (vma_res->bound_flags & (~flags & I915_VMA_BIND_MASK))
		return;

	vma_res->bound_flags |= flags;

	/* Applicable to VLV (gen8+ do not support RO in the GGTT) */
	pte_flags = 0;
	if (i915_gem_object_is_readonly(obj))
	if (vma_res->bi.readonly)
		pte_flags |= PTE_READ_ONLY;
	if (i915_gem_object_is_lmem(obj))
	if (vma_res->bi.lmem)
		pte_flags |= PTE_LM;

	vm->insert_entries(vm, vma, cache_level, pte_flags);
	vma->page_sizes.gtt = I915_GTT_PAGE_SIZE;
	vm->insert_entries(vm, vma_res, cache_level, pte_flags);
	vma_res->page_sizes_gtt = I915_GTT_PAGE_SIZE;
}

static void ggtt_unbind_vma(struct i915_address_space *vm, struct i915_vma *vma)
static void ggtt_unbind_vma(struct i915_address_space *vm,
			    struct i915_vma_resource *vma_res)
{
	vm->clear_range(vm, vma->node.start, vma->size);
	vm->clear_range(vm, vma_res->start, vma_res->vma_size);
}

static int ggtt_reserve_guc_top(struct i915_ggtt *ggtt)
@@ -505,7 +536,7 @@ static int ggtt_reserve_guc_top(struct i915_ggtt *ggtt)
	GEM_BUG_ON(ggtt->vm.total <= GUC_GGTT_TOP);
	size = ggtt->vm.total - GUC_GGTT_TOP;

	ret = i915_gem_gtt_reserve(&ggtt->vm, &ggtt->uc_fw, size,
	ret = i915_gem_gtt_reserve(&ggtt->vm, NULL, &ggtt->uc_fw, size,
				   GUC_GGTT_TOP, I915_COLOR_UNEVICTABLE,
				   PIN_NOEVICT);
	if (ret)
@@ -624,7 +655,7 @@ err:

static void aliasing_gtt_bind_vma(struct i915_address_space *vm,
				  struct i915_vm_pt_stash *stash,
				  struct i915_vma *vma,
				  struct i915_vma_resource *vma_res,
				  enum i915_cache_level cache_level,
				  u32 flags)
{
@@ -632,25 +663,27 @@ static void aliasing_gtt_bind_vma(struct i915_address_space *vm,

	/* Currently applicable only to VLV */
	pte_flags = 0;
	if (i915_gem_object_is_readonly(vma->obj))
	if (vma_res->bi.readonly)
		pte_flags |= PTE_READ_ONLY;

	if (flags & I915_VMA_LOCAL_BIND)
		ppgtt_bind_vma(&i915_vm_to_ggtt(vm)->alias->vm,
			       stash, vma, cache_level, flags);
			       stash, vma_res, cache_level, flags);

	if (flags & I915_VMA_GLOBAL_BIND)
		vm->insert_entries(vm, vma, cache_level, pte_flags);
		vm->insert_entries(vm, vma_res, cache_level, pte_flags);

	vma_res->bound_flags |= flags;
}

static void aliasing_gtt_unbind_vma(struct i915_address_space *vm,
				    struct i915_vma *vma)
				    struct i915_vma_resource *vma_res)
{
	if (i915_vma_is_bound(vma, I915_VMA_GLOBAL_BIND))
		vm->clear_range(vm, vma->node.start, vma->size);
	if (vma_res->bound_flags & I915_VMA_GLOBAL_BIND)
		vm->clear_range(vm, vma_res->start, vma_res->vma_size);

	if (i915_vma_is_bound(vma, I915_VMA_LOCAL_BIND))
		ppgtt_unbind_vma(&i915_vm_to_ggtt(vm)->alias->vm, vma);
	if (vma_res->bound_flags & I915_VMA_LOCAL_BIND)
		ppgtt_unbind_vma(&i915_vm_to_ggtt(vm)->alias->vm, vma_res);
}

static int init_aliasing_ppgtt(struct i915_ggtt *ggtt)
@@ -723,14 +756,14 @@ int i915_init_ggtt(struct drm_i915_private *i915)
{
	int ret;

	ret = init_ggtt(&i915->ggtt);
	ret = init_ggtt(to_gt(i915)->ggtt);
	if (ret)
		return ret;

	if (INTEL_PPGTT(i915) == INTEL_PPGTT_ALIASING) {
		ret = init_aliasing_ppgtt(&i915->ggtt);
		ret = init_aliasing_ppgtt(to_gt(i915)->ggtt);
		if (ret)
			cleanup_init_ggtt(&i915->ggtt);
			cleanup_init_ggtt(to_gt(i915)->ggtt);
	}

	return 0;
@@ -743,11 +776,21 @@ static void ggtt_cleanup_hw(struct i915_ggtt *ggtt)
	atomic_set(&ggtt->vm.open, 0);

	flush_workqueue(ggtt->vm.i915->wq);
	i915_gem_drain_freed_objects(ggtt->vm.i915);

	mutex_lock(&ggtt->vm.mutex);

	list_for_each_entry_safe(vma, vn, &ggtt->vm.bound_list, vm_link)
	list_for_each_entry_safe(vma, vn, &ggtt->vm.bound_list, vm_link) {
		struct drm_i915_gem_object *obj = vma->obj;
		bool trylock;

		trylock = i915_gem_object_trylock(obj, NULL);
		WARN_ON(!trylock);

		WARN_ON(__i915_vma_unbind(vma));
		if (trylock)
			i915_gem_object_unlock(obj);
	}

	if (drm_mm_node_allocated(&ggtt->error_capture))
		drm_mm_remove_node(&ggtt->error_capture);
@@ -773,7 +816,7 @@ static void ggtt_cleanup_hw(struct i915_ggtt *ggtt)
 */
void i915_ggtt_driver_release(struct drm_i915_private *i915)
{
	struct i915_ggtt *ggtt = &i915->ggtt;
	struct i915_ggtt *ggtt = to_gt(i915)->ggtt;

	fini_aliasing_ppgtt(ggtt);

@@ -788,7 +831,7 @@ void i915_ggtt_driver_release(struct drm_i915_private *i915)
 */
void i915_ggtt_driver_late_release(struct drm_i915_private *i915)
{
	struct i915_ggtt *ggtt = &i915->ggtt;
	struct i915_ggtt *ggtt = to_gt(i915)->ggtt;

	GEM_WARN_ON(kref_read(&ggtt->vm.resv_ref) != 1);
	dma_resv_fini(&ggtt->vm._resv);
@@ -1209,7 +1252,7 @@ int i915_ggtt_probe_hw(struct drm_i915_private *i915)
{
	int ret;

	ret = ggtt_probe_hw(&i915->ggtt, to_gt(i915));
	ret = ggtt_probe_hw(to_gt(i915)->ggtt, to_gt(i915));
	if (ret)
		return ret;

@@ -1281,7 +1324,7 @@ bool i915_ggtt_resume_vm(struct i915_address_space *vm)
			atomic_read(&vma->flags) & I915_VMA_BIND_MASK;

		GEM_BUG_ON(!was_bound);
		vma->ops->bind_vma(vm, NULL, vma,
		vma->ops->bind_vma(vm, NULL, vma->resource,
				   obj ? obj->cache_level : 0,
				   was_bound);
		if (obj) { /* only used during resume => exclusive access */