After staring at the list_for_each_safe macros for a bit, our current
invocation of list_safe_reset_next in execlists_schedule() simply
reduces to list_for_each.
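For reference, the macros in question, paraphrased from
include/linux/list.h (simplified here; the kernel tree is
authoritative):

    #define list_for_each_entry(pos, head, member)                   \
            for (pos = list_first_entry(head, typeof(*pos), member); \
                 &pos->member != (head);                              \
                 pos = list_next_entry(pos, member))

    #define list_for_each_entry_safe(pos, n, head, member)           \
            for (pos = list_first_entry(head, typeof(*pos), member), \
                 n = list_next_entry(pos, member);                    \
                 &pos->member != (head);                              \
                 pos = n, n = list_next_entry(n, member))

    /* re-prime n when pos has been moved inside the loop body */
    #define list_safe_reset_next(pos, n, member)                     \
            n = list_next_entry(pos, member)

If the loop body never removes pos from the list, re-priming n to
pos's next element on every iteration makes the _safe variant behave
exactly like the plain iterator, hence the simplification.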
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180102151235.3949-11-chris@chris-wilson.co.uk
The dependency chain must be an acyclic graph. This is checked by the
swfence, but for sanity, also do a simple check that we do not corrupt
our list iteration in execlists_schedule() by a shallow dependency
cycle.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180102151235.3949-10-chris@chris-wilson.co.uk
Back up our comment that all signalers should have been signaled before
we ourselves were retired with an assert to that effect.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180102151235.3949-9-chris@chris-wilson.co.uk
To modify the global seqno may require rewriting a few registers, which
requires us to hold the rpm wakeref. We must therefore take it around
the call to i915_gem_set_global_seqno() in debugfs, on behalf of the
user.
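The debugfs write handler then takes roughly this shape (a sketch
from memory, not a verbatim copy of the patch):

    ret = mutex_lock_interruptible(&dev->struct_mutex);
    if (ret)
            return ret;

    intel_runtime_pm_get(dev_priv);
    ret = i915_gem_set_global_seqno(dev, val);
    intel_runtime_pm_put(dev_priv);

    mutex_unlock(&dev->struct_mutex);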
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180102151235.3949-15-chris@chris-wilson.co.uk
Move the clearing of the CS-interrupt into the engine reset phase,
before the current init-hw phase. This helps clarify that we clear the
pending interrupts prior to any restarting of the execlists.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180102151235.3949-16-chris@chris-wilson.co.uk
i915_gem_request_assign() is not used since commit 77f0d0e925e8
("drm/i915/execlists: Pack the count into the low bits of the
port.request"), so remove the defunct code
References: 77f0d0e925e8 ("drm/i915/execlists: Pack the count into the low bits of the port.request")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180102151235.3949-1-chris@chris-wilson.co.uk
In the function ttm_page_alloc_init(), a kzalloc() call is made for the
variable _manager; we need to check its return value, as it may return NULL.
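The shape of the fix, roughly (the exact error value is chosen here
for illustration):

    _manager = kzalloc(sizeof(*_manager), GFP_KERNEL);
    if (!_manager)
            return -ENOMEM;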
Signed-off-by: Xiongwei Song <sxwjean@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Fixes a greenish tint on RV displays.
Signed-off-by: Yue Hin Lau <Yuehin.Lau@amd.com>
Reviewed-by: Eric Bernstein <Eric.Bernstein@amd.com>
Acked-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
[drake@endlessm.com: backport to 4.15]
Signed-off-by: Daniel Drake <drake@endlessm.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
When destroying an inactive queue, we don't need to call
execute_queues_cpsch.
Signed-off-by: Yong Zhao <yong.zhao@amd.com>
Reviewed-by: Oak Zeng <oak.zeng@amd.com>
Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
There is no need to hold the GPU lock while freeing the submit
object. Only move the retired submits from the GPU active list to
a temporary retire list under the GPU lock.
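Sketched, with illustrative list and member names rather than the
driver's actual ones:

    struct etnaviv_gem_submit *submit, *tmp;
    LIST_HEAD(retire_list);

    mutex_lock(&gpu->lock);
    list_for_each_entry_safe(submit, tmp, &gpu->active_submit_list, node)
            if (dma_fence_is_signaled(submit->out_fence))
                    list_move_tail(&submit->node, &retire_list);
    mutex_unlock(&gpu->lock);

    /* free the retired submits without holding the GPU lock */
    list_for_each_entry_safe(submit, tmp, &retire_list, node) {
            list_del(&submit->node);
            etnaviv_submit_put(submit);
    }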
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Now that the PMR lifetime issues are solved we can safely re-enable
performance counter profiling support.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
As long as there is an active submit, we want the GPU to stay awake. This
is slightly complicated by the fact that we really want to wake the GPU
at the last possible moment to achieve maximum power savings.
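This maps onto the runtime PM API roughly as follows (a sketch
assuming the autosuspend helpers, not the literal driver code):

    /* before touching the hardware for a new submit */
    ret = pm_runtime_get_sync(gpu->dev);
    if (ret < 0) {
            pm_runtime_put_noidle(gpu->dev);
            return ret;
    }

    /* ... write the command stream, kick the FE ... */

    /* once the submit has retired */
    pm_runtime_mark_last_busy(gpu->dev);
    pm_runtime_put_autosuspend(gpu->dev);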
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The active count is used to check if the BO is idle, where idle is defined
as not active on the GPU and all VM mappings and reference counts dropped
to the initial state. As the idling of the mappings and references now only
happens in the submit cleanup, the active state handling must be moved to
the same location in order to keep the userspace semantics.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Fewer dynamic allocations, and slims down the cmdbuf object to only the
required information, as everything else is already available in the
submit object.
This also simplifies buffer and mappings lifetime management, as they
are now exclusively attached to the submit object and not additionally
to the cmdbuf.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The GPU exec state may have changed at the time when the perfmon sampling
is done, as it reflects the state of the last submission, not the current
GPU execution state.
So for proper sampling we must use the submit exec_state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
We'll need this in some places where only the submit is available. Also
this is a first step at slimming down the cmdbuf object.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
To make them available to the event worker even after the actual
command stream execution has finished.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
The submit object lifetime will get extended to the actual GPU
execution. As multiple users will depend on this, add a kref to
properly control destruction of the object.
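A sketch of the refcounting (helper names follow the driver's
convention, shown here from memory):

    struct etnaviv_gem_submit {
            struct kref refcount;
            /* ... */
    };

    static void submit_cleanup(struct kref *kref)
    {
            struct etnaviv_gem_submit *submit =
                    container_of(kref, struct etnaviv_gem_submit, refcount);

            /* drop BO references, mappings and fences, then free */
            kfree(submit);
    }

    static inline void etnaviv_submit_put(struct etnaviv_gem_submit *submit)
    {
            kref_put(&submit->refcount, submit_cleanup);
    }

kref_init() is done once at allocation time; every user that holds on
to the submit past the ioctl takes an additional reference.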
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
The acquire_ctx is special in that it needs to be released from the same
thread that was used to initialize it. This collides with the intention to
extend the submit lifetime beyond the gem_submit function with potentially
other threads doing the final cleanup.
Move the ww_acquire_ctx to the function local stack as suggested in the
documentation.
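Sketch of the resulting shape (only the ww_mutex API calls are shown,
the BO locking loop is elided):

    struct ww_acquire_ctx ticket;   /* on the ioctl thread's stack */

    ww_acquire_init(&ticket, &reservation_ww_class);
    /* lock all BOs of the submit against &ticket, with backoff/retry */
    /* ... validate and queue the command stream ... */
    /* unlock the BOs */
    ww_acquire_fini(&ticket);       /* always on the initializing thread */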
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
This is safe to call in all paths, as the BO_PINNED flag tells us if the BO
needs unpinning.
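The unpin loop then reduces to something like (member names are
illustrative):

    for (i = 0; i < submit->nr_bos; i++) {
            if (submit->bos[i].flags & BO_PINNED) {
                    etnaviv_gem_mapping_unreference(submit->bos[i].mapping);
                    submit->bos[i].flags &= ~BO_PINNED;
            }
    }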
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Simplifies the cleanup path and moves fence waiting to a central location.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
The object fencing has nothing to do with the actual GPU buffer submit,
so move it to the gem submit path to have a cleaner split.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Use kzalloc so other code doesn't need to worry about uninitialized members.
Drop the non-standard GFP flags, as we really don't want to fail the submit
when under slight memory pressure. Remove one level of indentation by using
an early return if the allocation failed. Also remove the unused drm device
member.
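The allocation then becomes, approximately:

    submit = kzalloc(size, GFP_KERNEL);    /* was kmalloc + special GFP flags */
    if (!submit)
            return NULL;

with all members that are not explicitly set afterwards guaranteed to
be zero.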
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
When manipulating the kernel command buffer the GPU mutex must be held, as
otherwise different callers might try to replace the same part of the
buffer, wreaking havoc in the GPU execution.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Inserting the END command when suspending the GPU is changing the
command buffer state, which requires the GPU lock to be held.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
While the etnaviv workqueue needs to be ordered, as we rely on work items
being executed in queuing order, this is only true for a single GPU.
Having a shared workqueue for all GPUs in the system limits concurrency
artificially.
Giving each GPU its own ordered workqueue still meets our ordering
expectations and enables retire workers to run concurrently.
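In sketch form (field names are assumptions):

    gpu->wq = alloc_ordered_workqueue(dev_name(gpu->dev), 0);
    if (!gpu->wq)
            return -ENOMEM;

    /* retire work now only serializes against work for the same GPU */
    queue_work(gpu->wq, &gpu->retire_work);

    /* on teardown */
    flush_workqueue(gpu->wq);
    destroy_workqueue(gpu->wq);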
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
There is no need to store this in the gpu struct. MMU flushes are triggered
correctly in reaction to MMU maps and unmaps, independent of the current ctx.
Any required pipe switches can be inferred from the current and the desired
GPU exec state.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
There is no need to synchronize with outstanding retire jobs if the object
has gone idle. Retire jobs only ever change the object state from active to
idle, not the other way around.
The IOVA put race is uncritical, as the GEM_WAIT ioctl itself is holding
a reference to the GEM object, so the retire worker will not pull the
object into the CPU domain, which is the thing we are trying to guard
against with etnaviv_gpu_wait_obj_inactive. The ordering of the various
counts and waits may change a bit, but the userspace-visible behavior at
the bounds of the syscall is unchanged.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Flush and prefetch are properly handled in the buffer code; data endianness
would need much wider changes than adding something to this single function.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Now that the userptr BO handling doesn't rely on the userspace restarting
the submit after object population, there is no need to special case the
-EAGAIN return value anymore.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
All code paths which populate userptr BOs are fine with the get_pages
function taking the mmap_sem lock. This allows us to get rid of the pretty
involved architecture with a worker being scheduled if the mmap_sem
needs to be taken, but instead call GUP directly and allow it to take
the lock if necessary.
This simplifies the code a lot and removes the possibility of this
function returning -EAGAIN, which complicates object population
handling at the callers.
A notable change in behavior is that we don't allow a process to populate
objects with user pages from a foreign MM anymore. This would have been an
invalid use before, as it breaks the assumptions made in the etnaviv kernel
driver to enforce cache coherence. We now disallow this by rejecting the
request to populate those objects. Well-behaved userspace is unaffected by
this change.
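The core of the simplified population path, sketched for a 4.15-era
kernel (member names and GUP flags are illustrative):

    struct page **pvec;
    int ret;

    /* reject objects whose pages belong to a foreign MM */
    if (etnaviv_obj->userptr.mm != current->mm)
            return -EPERM;

    pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
    if (!pvec)
            return -ENOMEM;

    down_read(&current->mm->mmap_sem);
    ret = get_user_pages(etnaviv_obj->userptr.ptr, npages,
                         etnaviv_obj->userptr.ro ? 0 : FOLL_WRITE,
                         pvec, NULL);
    up_read(&current->mm->mmap_sem);

The real code additionally has to loop over partial pins; the point is
simply that GUP is now called directly from the get_pages path.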
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
This function never fails, as it does nothing more than adding the GEM
object to the global device list. Making this explicit through the void
return type allows us to drop some unnecessary error handling.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
This function has only one caller and it isn't expected that there will
be any more in the future. Folding this function into the caller helps
readability.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
The current userptr page population will defer work to a work item if
needed to avoid ever taking the mmap_sem in the direct call path. With
the more fine-grained locking in etnaviv this isn't needed anymore, so
a future commit will simplify this code.
Add a lockdep annotation to validate the assumption that the mmap_sem
can be taken in the direct call path.
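The annotation itself is a single line at the top of the get_pages
path (assuming the lock is still named mmap_sem, as it is in 4.15):

    might_lock_read(&current->mm->mmap_sem);

This makes lockdep treat every call as if the semaphore were taken, so
ordering problems show up even on runs where the pages happen to be
resident already.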
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Userptr, prime and shmem buffer objects have different lock ordering
requirements. This is mostly due to the fact that we don't allow mmap of
userptr buffers, so we won't ever end up in our fault handler for those,
so some of the code paths are never called with the mmap_sem held.
To avoid lockdep false positives, split them up into different lock classes.
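One way to express this (key and variable names are illustrative):

    static struct lock_class_key etnaviv_shm_lock_class;
    static struct lock_class_key etnaviv_userptr_lock_class;
    static struct lock_class_key etnaviv_prime_lock_class;

    /* e.g. when a userptr object is created: */
    lockdep_set_class(&etnaviv_obj->lock, &etnaviv_userptr_lock_class);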
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
If the FE is restarted before the sync point event is cleared, the GPU
might trigger a completion IRQ for the next sync point, corrupting
the state of the currently running worker.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reduce the number of GGTT PTE operations to speed the test up, although in
doing so we reduce the likelihood of spotting a coherency error in those
operations. However, Broxton is sporadically timing out on this test,
presumably because its GGTT operations are all uncached.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171223110407.21402-1-chris@chris-wilson.co.uk
The omap4 CEC hardware cannot tell a Nack from a Low Drive from an
Arbitration Lost error, so just report a Nack, which is almost
certainly the reason for the error anyway.
This also simplifies the implementation. The only three interrupts
that need to be enabled are:

- Transmit Buffer Full/Empty Change event: triggered when the
  transmit finished successfully and cleared the buffer.

- Receiver FIFO Not Empty event: triggered when a message was
  received.

- Frame Retransmit Count Exceeded event: triggered when a transmit
  failed repeatedly, usually due to the message being Nacked. Other
  reasons are possible (Low Drive, Arbitration Lost) but there is no
  way to know. If this happens the TX buffer needs to be cleared
  manually.
While testing various error conditions I noticed that the hardware
can receive messages up to 18 bytes in total, which exceeds the legal
maximum of 16. This could cause a buffer overflow, so we check for
this and constrain the size to 16 bytes.
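The guard is a simple clamp against the framework limit
(CEC_MAX_MSG_SIZE is 16; the register and variable names here are
illustrative):

    u8 cnt = rx_byte_count;     /* as reported by the hardware, up to 18 */

    if (cnt > CEC_MAX_MSG_SIZE)
            cnt = CEC_MAX_MSG_SIZE;
    msg.len = cnt;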
The old incorrect interrupt handler could cause the CEC framework to
enter into a bad state because it mis-detected the "Start Bit Irregularity
event" as an ARB_LOST transmit error when it actually is a receive error
which should be ignored.
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Reported-by: Henrik Austad <haustad@cisco.com>
Tested-by: Henrik Austad <haustad@cisco.com>
Tested-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
We have plenty of global registers and whatnot programmed without
any further locking by the modeset code. Currently non-blocking
modesets are allowed to execute in parallel, which could corrupt
said registers.
To avoid the problem let's run all non-blocking modesets on an
ordered workqueue. We still put page flips etc. on system_unbound_wq,
allowing page flips on one pipe to execute in parallel with page flips
or a modeset on another pipe (assuming no known state is shared
between them, at which point they would have been added to the same
atomic commit and serialized that way).
Blocking modesets are already serialized with each other by
connection_mutex, and thus are safe. To serialize them with
non-blocking modesets we just flush the workqueue before executing
blocking modesets.
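In rough terms (a sketch from memory of the i915 commit flow, not the
exact patch):

    /* driver init */
    dev_priv->modeset_wq = alloc_ordered_workqueue("i915_modeset", 0);

    /* in intel_atomic_commit() */
    if (nonblock && intel_state->modeset) {
            queue_work(dev_priv->modeset_wq, &state->commit_work);
    } else if (nonblock) {
            queue_work(system_unbound_wq, &state->commit_work);
    } else {
            if (intel_state->modeset)
                    flush_workqueue(dev_priv->modeset_wq);
            intel_atomic_commit_tail(state);
    }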
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Fixes: 94f050246b42 ("drm/i915: nonblocking commit")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171113133622.8593-1-ville.syrjala@linux.intel.com
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
(cherry picked from commit 757fffcfdffb6c0dd46c1b264091c36b4e5a86ae)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Commit 77affa31722b ("drm/i915/psr: Fix compiler warnings for
hsw_psr_disable()") swapped status and control registers while fixing
indentation. The _ctl at the end of the status register name must have
led to this.
Fixes: 77affa31722b ("drm/i915/psr: Fix compiler warnings for hsw_psr_disable()")
References: https://www.mrc-cbu.cam.ac.uk/people/matt.davis/cmabridge/
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171220043520.2599-1-dhinakaran.pandiyan@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
(cherry picked from commit 14c6547d6df641d3e41fa4f4164f6e267ebfab89)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
The Exynos DRM IPP subsystem is in fact non-functional and, frankly
speaking, dead code. This patch clearly marks that the Exynos DRM IPP
subsystem is broken and was never really functional. It will be replaced by a completely
rewritten API.
Exynos DRM IPP user-space API can be obsoleted for the following
reasons:
1. The Exynos DRM IPP user-space API is optional in Exynos DRM, so
userspace should not rely on it always being available and should have
a software fallback in case it is not there.
2. The only mode which was initially semi-working was memory-to-memory
image processing. The remaining modes (LCD-"writeback" and "output")
were never operational due to missing code (both in mainline and even
vendor kernels).
3. Exynos DRM IPP mainline user-space API compatibility for
memory-to-memory got broken very early by commit 083500baefd5 ("drm:
remove DRM_FORMAT_NV12MT"), which removed the support for tiled formats,
the main feature which made this API somewhat useful on Exynos platforms
(the video codec at that time produced only tiled frames; to implement
xvideo or any other video overlay, one had to de-tile them for proper
display).
4. Broken drivers. Especially once support for IOMMU was added, it
became apparent that drivers don't configure DMA operations properly and
in many cases operate outside the provided buffers, trashing the memory
around them.
5. Need for external patches. Although the IPP user-space API has been
used in some vendor kernels, in such cases additional patches were
applied (like reverting the mentioned 083500baefd5 patch), which means
that userspace apps which might use it still won't work with the
mainline kernel version.
We don't have time machines, so we cannot change it, but Exynos DRM IPP
extension should never have been merged to mainline in that form.
Exynos IPP subsystem and user-space API will be rewritten, so remove
current IPP core code and mark existing drivers as BROKEN.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Daniel Stone <daniels@collabora.com>
Acked-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Although the header is included only once, having an include guard is
still good practice. To avoid confusion, add the SoC prefix to the
existing Exynos5433 header include guard.
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
The DECON headers contain only defines for registers. There are no
other drivers using them, so they should be kept local to the Exynos DRM
driver. Keeping headers local helps with managing the code.
Suggested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Inki Dae <inki.dae@samsung.com>