492 Commits

Chris Wilson
3c00660db1 drm/i915/execlists: Assert tasklet is locked for process_csb()
We rely on only the tasklet being allowed to call into process_csb(), so
assert that it is locked when we do. As the tasklet uses a simple bitlock,
there is no strong lockdep checking, so we must make do with a plain
assertion that the tasklet is running and assume that we are the
tasklet!

v2: Fixup intel_gt_sanitize() to prepare each engine for the reset so
that the locks are marked as held during the reset
v3: Check for existent function pointers for very early sanitisation.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191014121336.30137-1-chris@chris-wilson.co.uk
2019-10-14 21:10:59 +01:00
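
A minimal sketch of the idea above (illustrative only, not necessarily the
patch's exact code; helper name hypothetical): the tasklet's bitlock is its
TASKLET_STATE_RUN bit, so the assertion reduces to testing that bit.

static inline bool tasklet_is_locked(const struct tasklet_struct *t)
{
	/* set while the tasklet callback is running (SMP builds) */
	return test_bit(TASKLET_STATE_RUN, &t->state);
}

/* in process_csb(), which may only be called from the tasklet */
GEM_BUG_ON(!tasklet_is_locked(&engine->execlists.tasklet));
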
Chris Wilson
89b6d1831d drm/i915/execlists: Tweak virtual unsubmission
Since commit e2144503bf3b ("drm/i915: Prevent bonded requests from
overtaking each other on preemption") we have restricted requests to run
on their chosen engine across preemption events. We can take this
restriction into account to know that we will want to resubmit those
requests onto the same physical engine, and so can short-circuit the
virtual engine selection process and keep the request on the same
engine during unwind.

References: e2144503bf3b ("drm/i915: Prevent bonded requests from overtaking each other on preemption")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191013203012.25208-1-chris@chris-wilson.co.uk
2019-10-14 12:51:17 +01:00
Chris Wilson
9506c23dfa drm/i915/selftests: Check that GPR are cleared for new contexts
We want the general purpose registers to be clear in all new contexts so
that we can be confident that no information is leaked from one to the
next.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191014090757.32111-7-chris@chris-wilson.co.uk
2019-10-14 11:10:28 +01:00
Chris Wilson
9c27462c89 drm/i915/selftests: Check known register values within the context
Check the logical ring context by asserting that the registers hold the
expected state during execution. (It's a bit chicken-and-egg, for how
could we manage to execute our request if the registers were not being
updated? Still, it's nice to verify that the HW is working as expected.)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191014090757.32111-6-chris@chris-wilson.co.uk
2019-10-14 11:10:18 +01:00
Lionel Landwerlin
daed3e4439 drm/i915/perf: implement active wait for noa configurations
NOA configurations take some amount of time to apply. That amount of
time depends on the size of the GT. There is no documented time for
this. For example, past experiments with powergating configuration
changes seem to indicate a 60~70us delay. We go with 500us as the
default for now, which should be more than the required amount of
time (according to HW architects).

v2: Don't forget to save/restore registers used for the wait (Chris)

v3: Name used CS_GPR registers (Chris)
    Fix compile issue due to rebase (Lionel)

v4: Fix save/restore helpers (Umesh)

v5: Move noa_wait from drm_i915_private to i915_perf_stream (Lionel)

v6: Add missing struct declarations in i915_perf.h

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191012072308.30312-2-chris@chris-wilson.co.uk
2019-10-12 09:08:33 +01:00
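
For illustration only (the frequency field name is hypothetical): the 500us
default above translates into command-streamer timestamp ticks as

const u64 delay_ns = 500 * NSEC_PER_USEC;
/* ticks = seconds * Hz = ns * kHz / 1e6 */
const u32 delay_ticks = div_u64(delay_ns * cs_timestamp_frequency_khz,
				1000 * 1000);
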
Lionel Landwerlin
6a45008ab7 drm/i915/perf: allow for CS OA configs to be created lazily
Here we introduce a mechanism by which the execbuf part of the i915
driver will be able to request that a batch buffer containing the
programming for a particular OA config be created.

We'll execute these OA configuration buffers right before executing a
set of userspace commands so that a particular user batchbuffer is
executed with a given OA configuration.

This mechanism essentially allows the userspace driver to go through
several OA configurations without having to open/close the i915/perf
stream.

v2: No need for locking on object OA config object creation (Chris)
    Flush cpu mapping of OA config (Chris)

v3: Properly deal with the perf_metric lock (Chris/Lionel)

v4: Fix oa config unref/put when not found (Lionel)

v5: Allocate BOs for configurations on the stream instead of globally
    (Lionel)

v6: Fix 64bit division (Chris)

v7: Store allocated config BOs into the stream (Lionel)

Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191012072308.30312-1-chris@chris-wilson.co.uk
2019-10-12 09:08:27 +01:00
Chris Wilson
c3eb54aad9 drm/i915: Mark up "sentinel" requests
Sometimes we want to emit a terminator request, a request that flushes
the pipeline and allows no request to come after it. This can be used
for a "preempt-to-idle" to ensure that upon processing the
context-switch to that request, all other active contexts have been
flushed.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191012070136.32058-1-chris@chris-wilson.co.uk
2019-10-12 08:51:17 +01:00
Chris Wilson
d8ad5f5261 drm/i915/execlists: Prevent merging requests with conflicting flags
We set out-of-band parameters inside the i915_request.flags field,
such as disabling preemption or marking the end-of-context. We should
not coalesce consecutive requests if they have differing instructions
as we only inspect the last active request in a context. Thus if we
allow a later request to be merged into the same execution context, it
will mask any of the earlier flags.

References: 2a98f4e65bba ("drm/i915: add infrastructure to hold off preemption on a request")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191011190325.10979-9-chris@chris-wilson.co.uk
2019-10-12 07:54:52 +01:00
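
A simplified sketch of the rule above (not the exact intel_lrc.c code; the
context comparison is reduced to the fence context for brevity): two
requests may only be coalesced into one ELSP submission if their
out-of-band flags agree.

static bool can_merge_rq(const struct i915_request *prev,
			 const struct i915_request *next)
{
	/* only the last request in a context is inspected for its flags */
	if (prev->flags != next->flags)
		return false;	/* e.g. one nopreempt, one sentinel */

	return prev->fence.context == next->fence.context;
}
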
Chris Wilson
cd9ba7b6e4 drm/i915/selftests: Serialise write to scratch with its vma binding
Add the missing serialisation on the request for a write into a vma to
wait until that vma is bound before being executed by the GPU.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191011193620.14026-1-chris@chris-wilson.co.uk
2019-10-11 22:42:31 +01:00
Chris Wilson
cbbf278778 drm/i915/execlists: Only mark incomplete requests as -EIO on cancelling
Only requests that have not yet completed should have their status
changed to signal -EIO when cancelling the in-flight set of requests
upon wedging.

Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191011103345.26013-1-chris@chris-wilson.co.uk
2019-10-11 13:07:24 +01:00
Chris Wilson
c97fb526ca drm/i915/execlists: Leave tell-tales as to why pending[] is bad
Before we BUG out with bad pending state, leave a telltale as to which
test failed.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191010071434.31195-2-chris@chris-wilson.co.uk
2019-10-11 09:43:06 +01:00
Chris Wilson
86027e312c drm/i915/selftests: Check that registers are preserved between virtual engines
Make sure that we copy across the registers from one engine to the next,
as we hop around a virtual engine.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191010110252.17289-1-chris@chris-wilson.co.uk
2019-10-10 13:53:58 +01:00
Chris Wilson
bd9bec5b6a drm/i915/execlists: Mark up expected state during reset
Move the BUG_ONs around slightly and add some explanations for each to
try and capture the expected state more carefully. We want to compare
the expected active state of our bookkeeping against the tracked HW
state.

References: https://bugs.freedesktop.org/show_bug.cgi?id=111937
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191010083242.1387-1-chris@chris-wilson.co.uk
2019-10-10 13:52:34 +01:00
Chris Wilson
542a5c66e0 drm/i915/gt: Warn CI about an unrecoverable wedge
If we have a wedged GPU that we need to recover, but fail, add a taint
for CI to pick up and schedule a reboot.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Petri Latvala <petri.latvala@intel.com>
Reviewed-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002160034.5121-1-chris@chris-wilson.co.uk
2019-10-10 11:19:32 +01:00
Daniele Ceraolo Spurio
9d41318c4e drm/i915/tgl: simplify the lrc register list for !RCS
There are small differences between the blitter and the video engines in
the xcs context image (e.g. registers 0x200 and 0x204 only exist on the
blitter). Since we never explicitly set a value for those registers, and
given that we don't need to update the offsets in the lrc image when we
change engine within the class for a virtual engine (the HW can handle
that), instead of having a separate define for the BCS we can just
restrict the programming to the part we're interested in, which is
common across the engines.

Bspec: 45584
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191009230424.6507-2-daniele.ceraolospurio@intel.com
2019-10-10 10:14:42 +01:00
Daniele Ceraolo Spurio
ba2c74da52 drm/i915/tgl: the BCS engine supports relative MMIO
The specs don't mention any specific HW limitation on the blitter and
manual inspection shows that the HW does set the relative MMIO bit in
the LRI of the blitter context image, so we can remove our limitations.

Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191009230424.6507-1-daniele.ceraolospurio@intel.com
2019-10-10 10:12:18 +01:00
Chris Wilson
c36eebd9ba drm/i915/gt: execlists->active is serialised by the tasklet
The active/pending execlists is no longer protected by the
engine->active.lock, but is serialised by the tasklet instead. Update
the locking around the debug and stats to follow suit.

v2: local_bh_disable() to prevent recursing into the tasklet in case we
trigger a softirq (Tvrtko)

Fixes: df403069029d ("drm/i915/execlists: Lift process_csb() out of the irq-off spinlock")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191009160906.16195-1-chris@chris-wilson.co.uk
2019-10-09 19:54:46 +01:00
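
The locking pattern described above, roughly (illustrative sketch, not the
exact debug/stats code): sampling execlists->active from process context
excludes the tasklet by disabling softirqs locally instead of taking
engine->active.lock.

struct i915_request *rq;

local_bh_disable();	/* the submission tasklet cannot run under us */
rq = READ_ONCE(*execlists->active);
if (rq) {
	/* ... sample the currently active request for debug/stats ... */
}
local_bh_enable();
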
Chris Wilson
c949ae4314 drm/i915/execlists: Protect peeking at execlists->active
Now that we dropped the engine->active.lock serialisation from around
process_csb(), direct submission can run concurrently to the interrupt
handler. As such execlists->active may be advanced as we dequeue,
dropping the reference to the request. We need to employ our RCU request
protection to ensure that the request is not freed too early.

Fixes: df403069029d ("drm/i915/execlists: Lift process_csb() out of the irq-off spinlock")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191009100955.21477-1-chris@chris-wilson.co.uk
2019-10-09 19:46:40 +01:00
Chris Wilson
41f0bc49f7 drm/i915/selftests: Hold request reference over waits
Take a reference on the request before submitting it to the HW and then
waiting on it for selftest_workarounds. Once submitted, the request may
be freed by a background worker, unless we take an extra reference for
ourselves.

References: https://bugs.freedesktop.org/show_bug.cgi?id=111926
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191009061759.3189-1-chris@chris-wilson.co.uk
2019-10-09 08:58:39 +01:00
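
The general shape of the fix, simplified from the selftest (illustrative,
using the existing i915_request helpers):

rq = intel_context_create_request(ce);
if (IS_ERR(rq))
	return PTR_ERR(rq);

i915_request_get(rq);	/* our own reference, surviving retirement */
i915_request_add(rq);	/* after this, a background worker may retire rq */

if (i915_request_wait(rq, 0, HZ / 5) < 0)
	err = -ETIME;

i915_request_put(rq);
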
Chris Wilson
6ad145fe02 drm/i915/gt: Give engine->kernel_context distinct timeline lock classes
Assign a separate lockclass to the perma-pinned timelines of the
kernel_context, such that we can use them from within the user timelines
should we ever need to inject GPU operations to fix up faults during
request construction.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008185941.15228-1-chris@chris-wilson.co.uk
2019-10-08 22:19:00 +01:00
Chris Wilson
d99f7b079c drm/i915/gt: Flush submission tasklet before waiting/retiring
A common bane of ours is arbitrary delays in ksoftirqd processing our
submission tasklet. Give the submission tasklet a kick before we wait to
avoid those delays eating into a tight timeout.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008105655.13256-1-chris@chris-wilson.co.uk
2019-10-08 16:23:55 +01:00
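
Roughly the shape of such a flush (illustrative sketch, not necessarily the
exact helper added here): run the tasklet callback inline if we can claim
its bitlock, rather than waiting for ksoftirqd to get around to it.

struct tasklet_struct *t = &engine->execlists.tasklet;

local_bh_disable();
if (tasklet_trylock(t)) {
	t->func(t->data);	/* run the submission tasklet inline */
	tasklet_unlock(t);
}
local_bh_enable();
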
Chris Wilson
d14a701b00 drm/i915/selftests: Assign the intel_runtime_pm pointer for mock_uncore
Couple up our mock_uncore to know about the fake global device and its
runtime power management.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008145045.23157-1-chris@chris-wilson.co.uk
2019-10-08 16:21:50 +01:00
Chris Wilson
3de1627851 drm/i915/selftests: Assign the mock_engine->uncore shortcut
Set up the engine->uncore shortcut on mock_engine creation.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008071121.25088-1-chris@chris-wilson.co.uk
2019-10-08 10:14:46 +01:00
Chris Wilson
20af04f3dd drm/i915/execlists: Assign virtual_engine->uncore from first sibling
Copy across the engine->uncore shortcut to the virtual_engine from its
first physical engine, similar to the handling of the engine->gt
backpointer.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008070342.4045-1-chris@chris-wilson.co.uk
2019-10-08 10:14:29 +01:00
Chris Wilson
a1b58ee3cb drm/i915/gt: Treat a busy timeline as 'active' while waiting
If we cannot claim the timeline->mutex while preparing for a wait on it,
we have to skip the timeline. In doing so, treat it as active so that
under an intel_gt_wait_for_idle() loop, we repeat the wait after
scheduling away.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191006165002.30312-4-chris@chris-wilson.co.uk
2019-10-07 21:44:02 +01:00
Chris Wilson
1664f35aa7 drm/i915/selftests: Appease lockdep
Disable irqs around updating the context image to keep lockdep happy:

<4>[  673.483340] WARNING: possible irq lock inversion dependency detected
<4>[  673.483342] 5.4.0-rc1-CI-Trybot_5118+ #1 Tainted: G     U
<4>[  673.483342] --------------------------------------------------------
<4>[  673.483343] swapper/2/0 just changed the state of lock:
<4>[  673.483344] ffff88845db885a0 (&i915_request_get(rq)->submit/1){-...}, at: __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.483387] but this lock took another, HARDIRQ-unsafe lock in the past:
<4>[  673.483388]  (&ce->pin_mutex/2){+...}
<4>[  673.483389]

                  and interrupts could create inverse lock ordering between them.

<4>[  673.483390]
                  other info that might help us debug this:
<4>[  673.483390] Chain exists of:
                    &i915_request_get(rq)->submit/1 --> &engine->active.lock --> &ce->pin_mutex/2

<4>[  673.483392]  Possible interrupt unsafe locking scenario:

<4>[  673.483392]        CPU0                    CPU1
<4>[  673.483393]        ----                    ----
<4>[  673.483393]   lock(&ce->pin_mutex/2);
<4>[  673.483394]                                local_irq_disable();
<4>[  673.483395]                                lock(&i915_request_get(rq)->submit/1);
<4>[  673.483396]                                lock(&engine->active.lock);
<4>[  673.483396]   <Interrupt>
<4>[  673.483397]     lock(&i915_request_get(rq)->submit/1);
<4>[  673.483398]
                   *** DEADLOCK ***

<4>[  673.483398] 2 locks held by swapper/2/0:
<4>[  673.483399]  #0: ffff8883f61ac9b0 (&(&gt->irq_lock)->rlock){-.-.}, at: gen11_gt_irq_handler+0x42/0x280 [i915]
<4>[  673.483433]  #1: ffff88845db8c418 (&(&rq->lock)->rlock){-.-.}, at: intel_engine_breadcrumbs_irq+0x34a/0x5a0 [i915]
<4>[  673.483463]
                  the shortest dependencies between 2nd lock and 1st lock:
<4>[  673.483466]   -> (&ce->pin_mutex/2){+...} ops: 614520 {
<4>[  673.483468]      HARDIRQ-ON-W at:
<4>[  673.483471]                         lock_acquire+0xa7/0x1c0
<4>[  673.483501]                         live_unlite_restore+0x1d8/0x6c0 [i915]
<4>[  673.483543]                         __i915_subtests+0xb8/0x210 [i915]
<4>[  673.483581]                         __run_selftests+0x112/0x170 [i915]
<4>[  673.483615]                         i915_live_selftests+0x2c/0x60 [i915]
<4>[  673.483644]                         i915_pci_probe+0x93/0x1b0 [i915]
<4>[  673.483646]                         pci_device_probe+0x9e/0x120
<4>[  673.483648]                         really_probe+0xea/0x420
<4>[  673.483649]                         driver_probe_device+0x10b/0x120
<4>[  673.483651]                         device_driver_attach+0x4a/0x50
<4>[  673.483652]                         __driver_attach+0x97/0x130
<4>[  673.483653]                         bus_for_each_dev+0x74/0xc0
<4>[  673.483654]                         bus_add_driver+0x142/0x220
<4>[  673.483655]                         driver_register+0x56/0xf0
<4>[  673.483657]                         do_one_initcall+0x58/0x2ff
<4>[  673.483659]                         do_init_module+0x56/0x1f8
<4>[  673.483660]                         load_module+0x243e/0x29f0
<4>[  673.483661]                         __do_sys_finit_module+0xe9/0x110
<4>[  673.483662]                         do_syscall_64+0x4f/0x210
<4>[  673.483665]                         entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  673.483665]      INITIAL USE at:
<4>[  673.483667]                        lock_acquire+0xa7/0x1c0
<4>[  673.483698]                        live_unlite_restore+0x1d8/0x6c0 [i915]
<4>[  673.483733]                        __i915_subtests+0xb8/0x210 [i915]
<4>[  673.483764]                        __run_selftests+0x112/0x170 [i915]
<4>[  673.483793]                        i915_live_selftests+0x2c/0x60 [i915]
<4>[  673.483821]                        i915_pci_probe+0x93/0x1b0 [i915]
<4>[  673.483822]                        pci_device_probe+0x9e/0x120
<4>[  673.483824]                        really_probe+0xea/0x420
<4>[  673.483825]                        driver_probe_device+0x10b/0x120
<4>[  673.483826]                        device_driver_attach+0x4a/0x50
<4>[  673.483827]                        __driver_attach+0x97/0x130
<4>[  673.483828]                        bus_for_each_dev+0x74/0xc0
<4>[  673.483829]                        bus_add_driver+0x142/0x220
<4>[  673.483830]                        driver_register+0x56/0xf0
<4>[  673.483831]                        do_one_initcall+0x58/0x2ff
<4>[  673.483833]                        do_init_module+0x56/0x1f8
<4>[  673.483834]                        load_module+0x243e/0x29f0
<4>[  673.483835]                        __do_sys_finit_module+0xe9/0x110
<4>[  673.483836]                        do_syscall_64+0x4f/0x210
<4>[  673.483837]                        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  673.483838]    }
<4>[  673.483868]    ... key      at: [<ffffffffa0a8f132>] __key.70113+0x2/0xffffffffffef2ed0 [i915]
<4>[  673.483869]    ... acquired at:
<4>[  673.483935]    __execlists_reset+0xfb/0xc20 [i915]
<4>[  673.483965]    execlists_reset+0x3d/0x50 [i915]
<4>[  673.483995]    intel_engine_reset+0xdf/0x230 [i915]
<4>[  673.484022]    live_preempt_hang+0x1d7/0x2e0 [i915]
<4>[  673.484064]    __i915_subtests+0xb8/0x210 [i915]
<4>[  673.484130]    __run_selftests+0x112/0x170 [i915]
<4>[  673.484163]    i915_live_selftests+0x2c/0x60 [i915]
<4>[  673.484193]    i915_pci_probe+0x93/0x1b0 [i915]
<4>[  673.484194]    pci_device_probe+0x9e/0x120
<4>[  673.484195]    really_probe+0xea/0x420
<4>[  673.484196]    driver_probe_device+0x10b/0x120
<4>[  673.484197]    device_driver_attach+0x4a/0x50
<4>[  673.484198]    __driver_attach+0x97/0x130
<4>[  673.484199]    bus_for_each_dev+0x74/0xc0
<4>[  673.484200]    bus_add_driver+0x142/0x220
<4>[  673.484202]    driver_register+0x56/0xf0
<4>[  673.484203]    do_one_initcall+0x58/0x2ff
<4>[  673.484204]    do_init_module+0x56/0x1f8
<4>[  673.484205]    load_module+0x243e/0x29f0
<4>[  673.484206]    __do_sys_finit_module+0xe9/0x110
<4>[  673.484207]    do_syscall_64+0x4f/0x210
<4>[  673.484208]    entry_SYSCALL_64_after_hwframe+0x49/0xbe

<4>[  673.484209]  -> (&engine->active.lock){..-.} ops: 972791 {
<4>[  673.484211]     IN-SOFTIRQ-W at:
<4>[  673.484213]                       lock_acquire+0xa7/0x1c0
<4>[  673.484214]                       _raw_spin_lock_irqsave+0x33/0x50
<4>[  673.484244]                       execlists_submission_tasklet+0xaf/0x100 [i915]
<4>[  673.484246]                       tasklet_action_common.isra.18+0x6c/0x1c0
<4>[  673.484247]                       __do_softirq+0xdf/0x47f
<4>[  673.484248]                       irq_exit+0xba/0xc0
<4>[  673.484249]                       do_IRQ+0x83/0x160
<4>[  673.484250]                       ret_from_intr+0x0/0x1d
<4>[  673.484252]                       cpuidle_enter_state+0xb2/0x450
<4>[  673.484253]                       cpuidle_enter+0x24/0x40
<4>[  673.484254]                       do_idle+0x1e7/0x250
<4>[  673.484256]                       cpu_startup_entry+0x14/0x20
<4>[  673.484257]                       start_secondary+0x15f/0x1b0
<4>[  673.484258]                       secondary_startup_64+0xa4/0xb0
<4>[  673.484259]     INITIAL USE at:
<4>[  673.484261]                      lock_acquire+0xa7/0x1c0
<4>[  673.484290]                      intel_engine_init_active+0x7e/0xb0 [i915]
<4>[  673.484305]                      intel_engines_setup+0x1cd/0x3b0 [i915]
<4>[  673.484305]                      i915_gem_init+0x12d/0x900 [i915]
<4>[  673.484305]                      i915_driver_probe+0xb70/0x15d0 [i915]
<4>[  673.484305]                      i915_pci_probe+0x43/0x1b0 [i915]
<4>[  673.484305]                      pci_device_probe+0x9e/0x120
<4>[  673.484305]                      really_probe+0xea/0x420
<4>[  673.484305]                      driver_probe_device+0x10b/0x120
<4>[  673.484305]                      device_driver_attach+0x4a/0x50
<4>[  673.484305]                      __driver_attach+0x97/0x130
<4>[  673.484305]                      bus_for_each_dev+0x74/0xc0
<4>[  673.484305]                      bus_add_driver+0x142/0x220
<4>[  673.484305]                      driver_register+0x56/0xf0
<4>[  673.484305]                      do_one_initcall+0x58/0x2ff
<4>[  673.484305]                      do_init_module+0x56/0x1f8
<4>[  673.484305]                      load_module+0x243e/0x29f0
<4>[  673.484305]                      __do_sys_finit_module+0xe9/0x110
<4>[  673.484305]                      do_syscall_64+0x4f/0x210
<4>[  673.484305]                      entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  673.484305]   }
<4>[  673.484305]   ... key      at: [<ffffffffa0a8f160>] __key.70307+0x0/0xffffffffffef2ea0 [i915]
<4>[  673.484305]   ... acquired at:
<4>[  673.484305]    _raw_spin_lock_irqsave+0x33/0x50
<4>[  673.484305]    execlists_submit_request+0x2b/0x1e0 [i915]
<4>[  673.484305]    submit_notify+0xa8/0x13c [i915]
<4>[  673.484305]    __i915_sw_fence_complete+0x81/0x250 [i915]
<4>[  673.484305]    i915_sw_fence_wake+0x51/0x70 [i915]
<4>[  673.484305]    __i915_sw_fence_complete+0x1ee/0x250 [i915]
<4>[  673.484305]    dma_i915_sw_fence_wake+0x1b/0x30 [i915]
<4>[  673.484305]    dma_fence_signal_locked+0x9e/0x1b0
<4>[  673.484305]    dma_fence_signal+0x1f/0x40
<4>[  673.484305]    fence_work+0x28/0x80 [i915]
<4>[  673.484305]    process_one_work+0x26a/0x620
<4>[  673.484305]    worker_thread+0x37/0x380
<4>[  673.484305]    kthread+0x119/0x130
<4>[  673.484305]    ret_from_fork+0x24/0x50

<4>[  673.484305] -> (&i915_request_get(rq)->submit/1){-...} ops: 857694 {
<4>[  673.484305]    IN-HARDIRQ-W at:
<4>[  673.484305]                     lock_acquire+0xa7/0x1c0
<4>[  673.484305]                     _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  673.484305]                     __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]                     intel_engine_breadcrumbs_irq+0x3d0/0x5a0 [i915]
<4>[  673.484305]                     cs_irq_handler+0x39/0x50 [i915]
<4>[  673.484305]                     gen11_gt_irq_handler+0x17b/0x280 [i915]
<4>[  673.484305]                     gen11_irq_handler+0x54/0xf0 [i915]
<4>[  673.484305]                     __handle_irq_event_percpu+0x41/0x2c0
<4>[  673.484305]                     handle_irq_event_percpu+0x2b/0x70
<4>[  673.484305]                     handle_irq_event+0x2f/0x50
<4>[  673.484305]                     handle_edge_irq+0x99/0x1b0
<4>[  673.484305]                     do_IRQ+0x7e/0x160
<4>[  673.484305]                     ret_from_intr+0x0/0x1d
<4>[  673.484305]                     cpuidle_enter_state+0xb2/0x450
<4>[  673.484305]                     cpuidle_enter+0x24/0x40
<4>[  673.484305]                     do_idle+0x1e7/0x250
<4>[  673.484305]                     cpu_startup_entry+0x14/0x20
<4>[  673.484305]                     start_secondary+0x15f/0x1b0
<4>[  673.484305]                     secondary_startup_64+0xa4/0xb0
<4>[  673.484305]    INITIAL USE at:
<4>[  673.484305]                    lock_acquire+0xa7/0x1c0
<4>[  673.484305]                    _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  673.484305]                    __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]                    __engine_park+0x233/0x420 [i915]
<4>[  673.484305]                    ____intel_wakeref_put_last+0x1c/0x70 [i915]
<4>[  673.484305]                    intel_gt_resume+0x202/0x2c0 [i915]
<4>[  673.484305]                    i915_gem_init+0x36e/0x900 [i915]
<4>[  673.484305]                    i915_driver_probe+0xb70/0x15d0 [i915]
<4>[  673.484305]                    i915_pci_probe+0x43/0x1b0 [i915]
<4>[  673.484305]                    pci_device_probe+0x9e/0x120
<4>[  673.484305]                    really_probe+0xea/0x420
<4>[  673.484305]                    driver_probe_device+0x10b/0x120
<4>[  673.484305]                    device_driver_attach+0x4a/0x50
<4>[  673.484305]                    __driver_attach+0x97/0x130
<4>[  673.484305]                    bus_for_each_dev+0x74/0xc0
<4>[  673.484305]                    bus_add_driver+0x142/0x220
<4>[  673.484305]                    driver_register+0x56/0xf0
<4>[  673.484305]                    do_one_initcall+0x58/0x2ff
<4>[  673.484305]                    do_init_module+0x56/0x1f8
<4>[  673.484305]                    load_module+0x243e/0x29f0
<4>[  673.484305]                    __do_sys_finit_module+0xe9/0x110
<4>[  673.484305]                    do_syscall_64+0x4f/0x210
<4>[  673.484305]                    entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  673.484305]  }
<4>[  673.484305]  ... key      at: [<ffffffffa0a8f6a1>] __key.80173+0x1/0xffffffffffef2960 [i915]
<4>[  673.484305]  ... acquired at:
<4>[  673.484305]    mark_lock+0x382/0x500
<4>[  673.484305]    __lock_acquire+0x7e1/0x15d0
<4>[  673.484305]    lock_acquire+0xa7/0x1c0
<4>[  673.484305]    _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  673.484305]    __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]    intel_engine_breadcrumbs_irq+0x3d0/0x5a0 [i915]
<4>[  673.484305]    cs_irq_handler+0x39/0x50 [i915]
<4>[  673.484305]    gen11_gt_irq_handler+0x17b/0x280 [i915]
<4>[  673.484305]    gen11_irq_handler+0x54/0xf0 [i915]
<4>[  673.484305]    __handle_irq_event_percpu+0x41/0x2c0
<4>[  673.484305]    handle_irq_event_percpu+0x2b/0x70
<4>[  673.484305]    handle_irq_event+0x2f/0x50
<4>[  673.484305]    handle_edge_irq+0x99/0x1b0
<4>[  673.484305]    do_IRQ+0x7e/0x160
<4>[  673.484305]    ret_from_intr+0x0/0x1d
<4>[  673.484305]    cpuidle_enter_state+0xb2/0x450
<4>[  673.484305]    cpuidle_enter+0x24/0x40
<4>[  673.484305]    do_idle+0x1e7/0x250
<4>[  673.484305]    cpu_startup_entry+0x14/0x20
<4>[  673.484305]    start_secondary+0x15f/0x1b0
<4>[  673.484305]    secondary_startup_64+0xa4/0xb0

<4>[  673.484305]
                  stack backtrace:
<4>[  673.484305] CPU: 2 PID: 0 Comm: swapper/2 Tainted: G     U            5.4.0-rc1-CI-Trybot_5118+ #1
<4>[  673.484305] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake U DDR4 SODIMM PD RVP TLC, BIOS ICLSFWR1.R00.3183.A00.1905020411 05/02/2019
<4>[  673.484305] Call Trace:
<4>[  673.484305]  <IRQ>
<4>[  673.484305]  dump_stack+0x67/0x9b
<4>[  673.484305]  check_usage_forwards+0x13c/0x150
<4>[  673.484305]  ? mark_lock+0x382/0x500
<4>[  673.484305]  mark_lock+0x382/0x500
<4>[  673.484305]  ? check_usage_backwards+0x140/0x140
<4>[  673.484305]  __lock_acquire+0x7e1/0x15d0
<4>[  673.484305]  ? debug_object_deactivate+0x17e/0x190
<4>[  673.484305]  lock_acquire+0xa7/0x1c0
<4>[  673.484305]  ? __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]  _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  673.484305]  ? __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]  __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  673.484305]  intel_engine_breadcrumbs_irq+0x3d0/0x5a0 [i915]
<4>[  673.484305]  cs_irq_handler+0x39/0x50 [i915]
<4>[  673.484305]  gen11_gt_irq_handler+0x17b/0x280 [i915]
<4>[  673.484305]  gen11_irq_handler+0x54/0xf0 [i915]
<4>[  673.484305]  __handle_irq_event_percpu+0x41/0x2c0
<4>[  673.484305]  handle_irq_event_percpu+0x2b/0x70
<4>[  673.484305]  handle_irq_event+0x2f/0x50
<4>[  673.484305]  handle_edge_irq+0x99/0x1b0
<4>[  673.484305]  do_IRQ+0x7e/0x160
<4>[  673.484305]  common_interrupt+0xf/0xf
<4>[  673.484305]  </IRQ>

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004203121.31138-1-chris@chris-wilson.co.uk
2019-10-07 21:44:02 +01:00
Chris Wilson
08ad9a3846 drm/i915/execlists: Fix annotation for decoupling virtual request
As we may signal a request and take the engine->active.lock within the
signaler, the engine submission paths have to use a nested annotation on
their requests -- but we guarantee that we can never submit on the same
engine as the signaling fence.

<4>[  723.763281] WARNING: possible circular locking dependency detected
<4>[  723.763285] 5.3.0-g80fa0e042cdb-drmtip_379+ #1 Tainted: G     U
<4>[  723.763288] ------------------------------------------------------
<4>[  723.763291] gem_exec_await/1388 is trying to acquire lock:
<4>[  723.763294] ffff93a7b53221d8 (&engine->active.lock){..-.}, at: execlists_submit_request+0x2b/0x1e0 [i915]
<4>[  723.763378]
                  but task is already holding lock:
<4>[  723.763381] ffff93a7c25f6d20 (&i915_request_get(rq)->submit/1){-.-.}, at: __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  723.763420]
                  which lock already depends on the new lock.

<4>[  723.763423]
                  the existing dependency chain (in reverse order) is:
<4>[  723.763427]
                  -> #2 (&i915_request_get(rq)->submit/1){-.-.}:
<4>[  723.763434]        _raw_spin_lock_irqsave_nested+0x39/0x50
<4>[  723.763478]        __i915_sw_fence_complete+0x1b2/0x250 [i915]
<4>[  723.763513]        intel_engine_breadcrumbs_irq+0x3aa/0x5e0 [i915]
<4>[  723.763600]        cs_irq_handler+0x49/0x50 [i915]
<4>[  723.763659]        gen11_gt_irq_handler+0x17b/0x280 [i915]
<4>[  723.763690]        gen11_irq_handler+0x54/0xf0 [i915]
<4>[  723.763695]        __handle_irq_event_percpu+0x41/0x2d0
<4>[  723.763699]        handle_irq_event_percpu+0x2b/0x70
<4>[  723.763702]        handle_irq_event+0x2f/0x50
<4>[  723.763706]        handle_edge_irq+0xee/0x1a0
<4>[  723.763709]        do_IRQ+0x7e/0x160
<4>[  723.763712]        ret_from_intr+0x0/0x1d
<4>[  723.763717]        __slab_alloc.isra.28.constprop.33+0x4f/0x70
<4>[  723.763720]        kmem_cache_alloc+0x28d/0x2f0
<4>[  723.763724]        vm_area_dup+0x15/0x40
<4>[  723.763727]        dup_mm+0x2dd/0x550
<4>[  723.763730]        copy_process+0xf21/0x1ef0
<4>[  723.763734]        _do_fork+0x71/0x670
<4>[  723.763737]        __se_sys_clone+0x6e/0xa0
<4>[  723.763741]        do_syscall_64+0x4f/0x210
<4>[  723.763744]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  723.763747]
                  -> #1 (&(&rq->lock)->rlock#2){-.-.}:
<4>[  723.763752]        _raw_spin_lock+0x2a/0x40
<4>[  723.763789]        __unwind_incomplete_requests+0x3eb/0x450 [i915]
<4>[  723.763825]        __execlists_submission_tasklet+0x9ec/0x1d60 [i915]
<4>[  723.763864]        execlists_submission_tasklet+0x34/0x50 [i915]
<4>[  723.763874]        tasklet_action_common.isra.5+0x47/0xb0
<4>[  723.763878]        __do_softirq+0xd8/0x4ae
<4>[  723.763881]        irq_exit+0xa9/0xc0
<4>[  723.763883]        smp_apic_timer_interrupt+0xb7/0x280
<4>[  723.763887]        apic_timer_interrupt+0xf/0x20
<4>[  723.763892]        cpuidle_enter_state+0xae/0x450
<4>[  723.763895]        cpuidle_enter+0x24/0x40
<4>[  723.763899]        do_idle+0x1e7/0x250
<4>[  723.763902]        cpu_startup_entry+0x14/0x20
<4>[  723.763905]        start_secondary+0x15f/0x1b0
<4>[  723.763908]        secondary_startup_64+0xa4/0xb0
<4>[  723.763911]
                  -> #0 (&engine->active.lock){..-.}:
<4>[  723.763916]        __lock_acquire+0x15d8/0x1ea0
<4>[  723.763919]        lock_acquire+0xa6/0x1c0
<4>[  723.763922]        _raw_spin_lock_irqsave+0x33/0x50
<4>[  723.763956]        execlists_submit_request+0x2b/0x1e0 [i915]
<4>[  723.764002]        submit_notify+0xa8/0x13c [i915]
<4>[  723.764035]        __i915_sw_fence_complete+0x81/0x250 [i915]
<4>[  723.764054]        i915_sw_fence_wake+0x51/0x64 [i915]
<4>[  723.764054]        __i915_sw_fence_complete+0x1ee/0x250 [i915]
<4>[  723.764054]        dma_i915_sw_fence_wake_timer+0x14/0x20 [i915]
<4>[  723.764054]        dma_fence_signal_locked+0x9e/0x1c0
<4>[  723.764054]        dma_fence_signal+0x1f/0x40
<4>[  723.764054]        vgem_fence_signal_ioctl+0x67/0xc0 [vgem]
<4>[  723.764054]        drm_ioctl_kernel+0x83/0xf0
<4>[  723.764054]        drm_ioctl+0x2f3/0x3b0
<4>[  723.764054]        do_vfs_ioctl+0xa0/0x6f0
<4>[  723.764054]        ksys_ioctl+0x35/0x60
<4>[  723.764054]        __x64_sys_ioctl+0x11/0x20
<4>[  723.764054]        do_syscall_64+0x4f/0x210
<4>[  723.764054]        entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4>[  723.764054]
                  other info that might help us debug this:

<4>[  723.764054] Chain exists of:
                    &engine->active.lock --> &(&rq->lock)->rlock#2 --> &i915_request_get(rq)->submit/1

<4>[  723.764054]  Possible unsafe locking scenario:

<4>[  723.764054]        CPU0                    CPU1
<4>[  723.764054]        ----                    ----
<4>[  723.764054]   lock(&i915_request_get(rq)->submit/1);
<4>[  723.764054]                                lock(&(&rq->lock)->rlock#2);
<4>[  723.764054]                                lock(&i915_request_get(rq)->submit/1);
<4>[  723.764054]   lock(&engine->active.lock);
<4>[  723.764054]
                   *** DEADLOCK ***

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111862
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004194758.19679-1-chris@chris-wilson.co.uk
2019-10-07 21:44:02 +01:00
Chris Wilson
cd6a851385 drm/i915/gt: Prefer local path to runtime powermanagement
Avoid going to the base i915 device when we already have a path from gt
to the runtime power management interface. The benefit is that it looks a
bit more self-consistent to always be acquiring the gt->uncore->rpm for
use with the gt->uncore.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191007154531.1750-1-chris@chris-wilson.co.uk
2019-10-07 21:44:02 +01:00
Colin Ian King
b9dcb97b6c drm/i915: make array hw_engine_mask static, makes object smaller
Don't populate the array hw_engine_mask on the stack but instead make it
static. Makes the object code smaller by 316 bytes.

Before:
   text	   data	    bss	    dec	    hex	filename
  34004	   4388	    320	  38712	   9738	gpu/drm/i915/gt/intel_reset.o

After:
   text	   data	    bss	    dec	    hex	filename
  33528	   4548	    320	  38396	   95fc	gpu/drm/i915/gt/intel_reset.o

(gcc version 9.2.1, amd64)

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191007154151.23245-1-colin.king@canonical.com
2019-10-07 21:44:02 +01:00
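
A generic illustration of the pattern (not the actual i915 hunk; values are
made up): marking a local lookup table static lets the compiler emit it
once instead of rebuilding it on the stack for every call.

static u32 engine_mask_for(int gen)
{
	static const u32 hw_engine_mask[] = { 0x1, 0x3, 0x7 };	/* example values */

	/* emitted once (in .rodata when const) rather than per call */
	return hw_engine_mask[gen];
}
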
Chris Wilson
abc47ff61d drm/i915/gt: Restore dropped 'interruptible' flag
Lost in the rebasing was Tvrtko's reminder that we need to keep an
uninterruptible wait around for the Ironlake VT-d w/a.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191006165002.30312-3-chris@chris-wilson.co.uk
2019-10-07 13:51:59 +01:00
CQ Tang
0e5493cab5 drm/i915/stolen: make the object creation interface consistent
Our other backends return an actual error value upon failure. Do the
same for stolen objects, which currently just return NULL on failure.

Signed-off-by: CQ Tang <cq.tang@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004170452.15410-2-matthew.auld@intel.com
2019-10-04 19:27:41 +01:00
Chris Wilson
2af402982a drm/i915/selftests: Drop vestigal struct_mutex guards
We no longer need struct_mutex to serialise request emission, so remove
it from the gt selftests.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-20-chris@chris-wilson.co.uk
2019-10-04 15:39:41 +01:00
Chris Wilson
cb5eb07278 drm/i915/overlay: Drop struct_mutex guard
The overlay uses the modeset mutex to control itself and only required
the struct_mutex for requests, which is now obsolete.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-16-chris@chris-wilson.co.uk
2019-10-04 15:39:39 +01:00
Chris Wilson
a4e7ccdac3 drm/i915: Move context management under GEM
Keep track of the GEM contexts underneath i915->gem.contexts and assign
them their own lock for the purposes of list management.

v2: Focus on lock tracking; ctx->vm is protected by ctx->mutex
v3: Correct split with removal of logical HW ID

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-15-chris@chris-wilson.co.uk
2019-10-04 15:39:34 +01:00
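
A sketch of the resulting shape (field names hypothetical): the context
list gets its own lock, so list management no longer touches struct_mutex.

struct i915_gem_contexts {
	spinlock_t lock;		/* protects .list */
	struct list_head list;		/* all user GEM contexts */
};

spin_lock(&i915->gem.contexts.lock);
list_add_tail(&ctx->link, &i915->gem.contexts.list);
spin_unlock(&i915->gem.contexts.lock);
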
Chris Wilson
2935ed5339 drm/i915: Remove logical HW ID
With the introduction of ctx->engines[] we allow multiple logical
contexts to be used on the same engine (e.g. with virtual engines).
According to bspec, each logical context requires a unique tag in order
for context-switching to occur correctly between them. [Simple
experiments show that it is not so easy to trick the HW into performing
a lite-restore with matching logical IDs, though my memory from early
Broadwell experiments does suggest that it should be generating
lite-restores.]

We only need to keep a unique tag for the active lifetime of the
context, and for as long as we need to identify that context. The HW
uses the tag to determine if it should use a lite-restore (why not the
LRCA?) and passes the tag back for various status identifiers. The only
status we need to track is for OA, so when using perf, we assign the
specific context a unique tag.

v2: Calculate required number of tags to fill ELSP.

Fixes: 976b55f0e1db ("drm/i915: Allow a context to define its set of engines")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=111895
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-14-chris@chris-wilson.co.uk
2019-10-04 15:39:30 +01:00
Chris Wilson
a2b4dead98 drm/i915: Move global activity tracking from GEM to GT
As our global unpark/park keep track of the number of active users, we
can simply move the accounting from the GEM layer to the base GT layer.
It was placed originally inside GEM to benefit from the 100ms extra
delay on idleness, but that has been eliminated and now there is no
substantive difference between the layers. In moving it, we move another
piece of the puzzle out from underneath struct_mutex.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-13-chris@chris-wilson.co.uk
2019-10-04 15:39:30 +01:00
Chris Wilson
6610197542 drm/i915: Move request runtime management onto gt
Requests are run from the gt and are tied into the gt runtime power
management, so pull the runtime request management under gt/.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-12-chris@chris-wilson.co.uk
2019-10-04 15:39:26 +01:00
Chris Wilson
f33a8a5160 drm/i915: Merge wait_for_timelines with retire_request
wait_for_timelines is essentially the same loop as retiring requests
(with an extra timeout), so merge the two into one routine.

v2: i915_retire_requests_timeout and keep VT'd w/a as !interruptible

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-10-chris@chris-wilson.co.uk
2019-10-04 15:39:23 +01:00
Chris Wilson
7e80576266 drm/i915: Drop struct_mutex from around i915_retire_requests()
We don't need to hold struct_mutex now for retiring requests, so drop it
from i915_retire_requests() and i915_gem_wait_for_idle(), finally
removing I915_WAIT_LOCKED for good.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-8-chris@chris-wilson.co.uk
2019-10-04 15:39:17 +01:00
Chris Wilson
b723484069 drm/i915: Move idle barrier cleanup into engine-pm
Now that we no longer need to guarantee that the active callback is
under the struct_mutex, we can lift it out of the i915_gem_park() and
into the engine parking itself.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-7-chris@chris-wilson.co.uk
2019-10-04 15:39:16 +01:00
Chris Wilson
b1e3177bd1 drm/i915: Coordinate i915_active with its own mutex
Forgo the struct_mutex serialisation for i915_active, and interpose its
own mutex handling for active/retire.

This is a multi-layered sleight-of-hand. First, we had to ensure that no
active/retire callbacks accidentally inverted the mutex ordering rules,
nor assumed that they were themselves serialised by struct_mutex. More
challenging, though, is the rule for updating elements of the active
rbtree. Instead of the whole i915_active now being serialised by
struct_mutex, allocations/rotations of the tree are serialised by the
i915_active.mutex and individual nodes are serialised by the caller
using the i915_timeline.mutex (we need to use nested spinlocks to
interact with the dma_fence callback lists).

The pain point here is that instead of a single mutex around execbuf, we
now have to take a mutex for the active tracker (one for each vma, context,
etc) and a couple of spinlocks for each fence update. The improvement in
fine grained locking allowing for multiple concurrent clients
(eventually!) should be worth it in typical loads.

v2: Add some comments that barely elucidate anything :(

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-6-chris@chris-wilson.co.uk
2019-10-04 15:39:12 +01:00
Chris Wilson
274cbf20fd drm/i915: Push the i915_active.retire into a worker
As we need to use a mutex to serialise i915_active activation
(because we want to allow the callback to sleep), we need to push the
i915_active.retire into a worker callback in case we need to retire
from an atomic context.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-5-chris@chris-wilson.co.uk
2019-10-04 15:39:10 +01:00
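
A generic sketch of the pattern (names hypothetical, not the i915_active
internals): the sleeping retire step runs from a worker, while the final
reference may be dropped from irq/atomic context.

static void active_retire_worker(struct work_struct *wrk)
{
	struct i915_active *ref = container_of(wrk, typeof(*ref), work);

	mutex_lock(&ref->mutex);	/* may sleep: process context only */
	if (ref->retire)
		ref->retire(ref);
	mutex_unlock(&ref->mutex);
}

/* the last reference may go away in atomic context */
if (atomic_dec_and_test(&ref->count))
	queue_work(system_wq, &ref->work);
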
Chris Wilson
2850748ef8 drm/i915: Pull i915_vma_pin under the vm->mutex
Replace the struct_mutex requirement for pinning the i915_vma with the
local vm->mutex instead. Note that the vm->mutex is tainted by the
shrinker (we require unbinding from inside fs-reclaim) and so we cannot
allocate while holding that mutex. Instead we have to preallocate
workers to do the allocation and apply the PTE updates after we have
reserved their slot in the drm_mm (using fences to order the PTE writes
with the GPU work and with later unbind).

In adding the asynchronous vma binding, one subtle requirement is to
avoid coupling the binding fence into the backing object->resv. That is
the asynchronous binding only applies to the vma timeline itself and not
to the pages as that is a more global timeline (the binding of one vma
does not need to be ordered with another vma, nor does the implicit GEM
fencing depend on a vma, only on writes to the backing store). Keeping
the vma binding distinct from the backing store timelines is verified by
a number of async gem_exec_fence and gem_exec_schedule tests. The way we
do this is quite simple, we keep the fence for the vma binding separate
and only wait on it as required, and never add it to the obj->resv
itself.

Another consequence in reducing the locking around the vma is the
destruction of the vma is no longer globally serialised by struct_mutex.
A natural solution would be to add a kref to i915_vma, but that requires
decoupling the reference cycles, possibly by introducing a new
i915_mm_pages object that is owned by both obj->mm and vma->pages.
However, we have not taken that route due to the overshadowing lmem/ttm
discussions, and instead play a series of complicated games with
trylocks to (hopefully) ensure that only one destruction path is called!

v2: Add some commentary, and some helpers to reduce patch churn.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
2019-10-04 15:39:02 +01:00
Chris Wilson
b290a78b5c drm/i915: Use helpers for drm_mm_node booleans
A subset of 71724f708997 ("drm/mm: Use helpers for drm_mm_node booleans")
in order to prepare drm-intel-next-queued for subsequent patches before
we can backmerge 71724f708997 itself.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004142226.13711-1-chris@chris-wilson.co.uk
2019-10-04 15:34:27 +01:00
Chris Wilson
44d0a9c05b drm/i915/execlists: Skip redundant resubmission
If we unwind the active requests, and on resubmission discover that we
intend to preempt the active contexts with themselves, simply skip the
ELSP submission.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191003210100.22250-1-chris@chris-wilson.co.uk
2019-10-04 12:52:24 +01:00
Chris Wilson
fcde8c7eea drm/i915/selftests: Exercise potential false lite-restore
If execlists's lite-restore is based on the common GEM context tag
rather than the per-intel_context LRCA, then a context switch between
two intel_contexts on the same engine derived from the same GEM context
will perform a lite-restore instead of a full context switch. We can
exploit this by poisoning the ringbuffer of the first context and trying
to trick a simple RING_TAIL update (i.e. lite-restore).

v2: Also check what happens if preempt ce[0] with ce[1] (both instances
on the same engine from the same parent context) [Tvrtko]

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191002183459.26614-1-chris@chris-wilson.co.uk
2019-10-03 00:26:02 +01:00
Chris Wilson
f8db4d051b drm/i915: Initialise breadcrumb lists on the virtual engine
With breadcrumb signalling now deferred to the virtual engine (thanks,
preempt-to-busy), we need to make sure the lists and irq-worker are ready
to send a signal.

[41958.710544] BUG: kernel NULL pointer dereference, address: 0000000000000000
[41958.710553] #PF: supervisor write access in kernel mode
[41958.710556] #PF: error_code(0x0002) - not-present page
[41958.710558] PGD 0 P4D 0
[41958.710562] Oops: 0002 [#1] SMP
[41958.710565] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G     U            5.3.0+ #207
[41958.710568] Hardware name: Intel Corporation NUC7i5BNK/NUC7i5BNB, BIOS BNKBL357.86A.0052.2017.0918.1346 09/18/2017
[41958.710602] RIP: 0010:i915_request_enable_breadcrumb+0xe1/0x130 [i915]
[41958.710605] Code: 8b 44 24 30 48 89 41 08 48 89 08 48 8b 85 98 01 00 00 48 8d 8d 90 01 00 00 48 89 95 98 01 00 00 49 89 4c 24 28 49 89 44 24 30 <48> 89 10 f0 80 4b 30 10 c6 85 88 01 00 00 00 e9 1a ff ff ff 48 83
[41958.710609] RSP: 0018:ffffc90000003de0 EFLAGS: 00010046
[41958.710612] RAX: 0000000000000000 RBX: ffff888735424480 RCX: ffff8887cddb2190
[41958.710614] RDX: ffff8887cddb3570 RSI: ffff888850362190 RDI: ffff8887cddb2188
[41958.710617] RBP: ffff8887cddb2000 R08: ffff8888503624a8 R09: 0000000000000100
[41958.710619] R10: 0000000000000001 R11: 0000000000000000 R12: ffff8887cddb3548
[41958.710622] R13: 0000000000000000 R14: 0000000000000046 R15: ffff888850362070
[41958.710625] FS:  0000000000000000(0000) GS:ffff88885ea00000(0000) knlGS:0000000000000000
[41958.710628] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[41958.710630] CR2: 0000000000000000 CR3: 0000000002c09002 CR4: 00000000001606f0
[41958.710633] Call Trace:
[41958.710636]  <IRQ>
[41958.710668]  __i915_request_submit+0x12b/0x160 [i915]
[41958.710693]  virtual_submit_request+0x67/0x120 [i915]
[41958.710720]  __unwind_incomplete_requests+0x131/0x170 [i915]
[41958.710744]  execlists_dequeue+0xb40/0xe00 [i915]
[41958.710771]  execlists_submission_tasklet+0x10f/0x150 [i915]
[41958.710776]  tasklet_action_common.isra.17+0x41/0xa0
[41958.710781]  __do_softirq+0xc8/0x221
[41958.710785]  irq_exit+0xa6/0xb0
[41958.710788]  smp_apic_timer_interrupt+0x4d/0x80
[41958.710791]  apic_timer_interrupt+0xf/0x20
[41958.710794]  </IRQ>

Fixes: cb2377a919bb ("drm/i915: Fixup preempt-to-busy vs reset of a virtual request")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191001103518.9113-1-chris@chris-wilson.co.uk
2019-10-01 11:46:52 +01:00
Chris Wilson
1d6f1d16d3 drm/i915/gt: Only unwedge if we can reset first
Unwedging the GPU requires a successful GPU reset before we restore the
default submission, or else we may see residual context switch events
that we were not expecting.

v2: Pull in the special-case reset_clobbers_display, and explain why it
should be safe in the context of unwedging.

v3: Just forget all about resets before unwedging if it will clobber the
display; risk it all.

Reported-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20190927160335.10622-1-chris@chris-wilson.co.uk
2019-09-30 17:48:14 +01:00
Chris Wilson
4abc6e7c91 drm/i915/selftests: Provide a mock GPU reset routine
For those mock tests that may wish to pretend to trigger a GPU reset and
process the cleanup.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927211749.2181-3-chris@chris-wilson.co.uk
2019-09-27 23:25:46 +01:00
Chris Wilson
4e18ca703f drm/i915/selftests: Distinguish mock device from no wakeref
On systems that have no runtime-pm, we mark the wakeref as being -1. We
therefore cannot use that value for the mock-gt indicator, so opt for
-ENODEV instead. The wakeref should never be an error value -- one
hopes!

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andi Shyti <andi.shyti@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190927211749.2181-2-chris@chris-wilson.co.uk
2019-09-27 23:25:30 +01:00