Moving the driver-specific mmap code into a GEM object function allows
for using DRM helpers for various mmap callbacks.
The respective msm functions are being removed. The file_operations
structure fops is now being created by the helper macro
DEFINE_DRM_GEM_FOPS().
v2:
* rebase onto latest upstream
* remove declaration of msm_gem_mmap_obj() from msm_fbdev.c
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://lore.kernel.org/r/20210706084753.8194-1-tzimmermann@suse.de
[squash in missing VM_DONTEXPAND flag]
Signed-off-by: Rob Clark <robdclark@chromium.org>
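Sketched, the driver-side shape of the conversion (surrounding msm code
abbreviated; only DEFINE_DRM_GEM_FOPS() and the .fops hookup come from
the patch description):
  #include <drm/drm_drv.h>
  #include <drm/drm_gem.h>

  /* Replaces the hand-rolled file_operations with DRM's standard set,
   * including drm_gem_mmap() as the .mmap callback. */
  DEFINE_DRM_GEM_FOPS(fops);

  static const struct drm_driver msm_driver = {
          /* ... */
          .fops = &fops,
  };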
This was only used to detect userspace including the same bo multiple
times in a submit. But ww_mutex can already tell us this.
When we drop struct_mutex around the submit ioctl, we'd otherwise need
to lock the bo before adding it to the bo_list. But since ww_mutex can
already tell us this, it is simpler just to remove the bo_list.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210728010632.2633470-11-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
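As a sketch of the mechanism (illustrative function, not the driver's
code): within a single ww_acquire_ctx, locking the same ww_mutex twice
returns -EALREADY, which is exactly the duplicate-bo signal the bo_list
used to provide.
  #include <linux/ww_mutex.h>

  static int example_lock_bo(struct ww_mutex *resv,
                             struct ww_acquire_ctx *ctx)
  {
          int ret = ww_mutex_lock(resv, ctx);

          /* same bo listed twice in the submit */
          if (ret == -EALREADY)
                  return -EINVAL;

          return ret;
  }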
For existing adrenos, there are one or more ringbuffers, depending on
whether preemption is supported. When preemption is supported, each
ringbuffer has its own priority. A submitqueue (which maps to a GL
context or VK queue in userspace) is mapped to a specific ringbuffer
at creation time, based on the submitqueue's priority.
Each ringbuffer has its own drm_gpu_scheduler. Each submitqueue maps
to a drm_sched_entity. And each submit maps to a drm_sched_job.
Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/4
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
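A sketch of that mapping (struct layout and priority choice are
illustrative; the entity-init signature is the one from this kernel
era):
  #include <drm/gpu_scheduler.h>

  /* ringbuffer -> drm_gpu_scheduler, submitqueue -> drm_sched_entity,
   * submit -> drm_sched_job */
  static int example_submitqueue_init(struct msm_ringbuffer *ring,
                                      struct drm_sched_entity *entity)
  {
          struct drm_gpu_scheduler *sched = &ring->sched;

          /* bind the queue's entity to the ring chosen by its priority */
          return drm_sched_entity_init(entity, DRM_SCHED_PRIORITY_NORMAL,
                                       &sched, 1, NULL);
  }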
Fix the following fall-through warning:
drivers/gpu/drm/msm/msm_gem.c: In function 'msm_gem_new_impl':
drivers/gpu/drm/msm/msm_gem.c:1170:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
1170 | if (priv->has_cached_coherent)
| ^
drivers/gpu/drm/msm/msm_gem.c:1173:2: note: here
1173 | default:
| ^~~~~~~
by replacing the /* fallthrough */ comment with the fallthrough;
pseudo-keyword.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
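Sketched, the resulting code shape (context abbreviated from
msm_gem_new_impl(); the fallthrough; macro lets the compiler see the
fall-through is intentional):
  switch (flags & MSM_BO_CACHE_MASK) {
  /* ... other cache modes elided ... */
  case MSM_BO_CACHED_COHERENT:
          if (priv->has_cached_coherent)
                  break;
          fallthrough;
  default:
          return -EINVAL;
  }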
* devcoredump support for display errors
* dpu: irq cleanup/refactor
* dpu: dt bindings conversion to yaml
* dsi: dt bindings conversion to yaml
* mdp5: alpha/blend_mode/zpos support
* a6xx: cached coherent buffer support
* a660 support
* gpu iova fault improvements:
- info about which block triggered the fault, etc
- generation of gpu devcoredump on fault
* assortment of other cleanups and fixes
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGs4=qsGBBbyn-4JWqW4-YUSTKh67X3DsPQ=T2D9aXKqNA@mail.gmail.com
Our initial logic for excluding dma-bufs was not quite right. In
particular, we want the msm_gem_get/put_pages() path used for exported
dma-bufs to increment/decrement the pin-count.
Also, in case the importer is vmap'ing the dma-buf, we need to be
sure to update the object's status, because it is now no longer
potentially evictable.
Fixes: 63f17ef834 ("drm/msm: Support evicting GEM objects to swap")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210426235326.1230125-1-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
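A sketch of the intent (field and helper names are illustrative, not
the driver's exact identifiers):
  static struct page **example_get_pages(struct msm_gem_object *msm_obj)
  {
          struct page **pages = get_pages(&msm_obj->base);

          /* counted even for exported dma-bufs, so the object is not
           * considered evictable while the importer holds it */
          if (!IS_ERR(pages))
                  msm_obj->pin_count++;

          return pages;
  }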
Now that tracking is wired up for potentially evictable GEM objects,
wire up the shrinker and the remaining GEM bits for unpinning the
backing pages of inactive objects.
Disabled by default for now, with an 'enable_eviction' module param to
turn it on, so that we can get some more testing across the range of
generations (and iommu pairings) supported.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-9-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
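A sketch of the opt-in knob (the parameter name is from this commit;
the description string is assumed):
  static bool enable_eviction = false;
  MODULE_PARM_DESC(enable_eviction, "Enable swappable GEM buffers");
  module_param(enable_eviction, bool, 0600);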
Objects that are candidates for swapping out are (1) willneed (ie. if
they are purgeable/MADV_WONTNEED we can just free the pages without
them having to land in swap), (2) not on an active list, (3) not
dma-buf imported or exported, and (4) not vmap'd. This repurposes the
purged list for objects that do not have backing pages (either because
they have not been pinned for the first time yet, or, in a later
patch, because they have been unpinned/evicted).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
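The four conditions, sketched as a predicate (helper and field names
are illustrative):
  static bool example_is_swappable(struct msm_gem_object *msm_obj)
  {
          struct drm_gem_object *obj = &msm_obj->base;

          return (msm_obj->madv == MSM_MADV_WILLNEED) &&  /* (1) */
                 !is_active(msm_obj) &&                   /* (2) */
                 !obj->import_attach && !obj->dma_buf &&  /* (3) */
                 (msm_obj->vmap_count == 0);              /* (4) */
  }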
Currently these always go together, either when we purge MADV_WONTNEED
objects or when the object is freed. But for unpin, we want to be able
to purge (unmap from iommu) the vma, while keeping the iova range
allocated (so we can remap back to the same GPU virtual address when
the object is re-pinned).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-5-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
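Sketched with the vma helpers this split leads to (pseudo-context;
exact signatures assumed):
  /* unpin: tear down the iommu mapping, keep vma->iova reserved */
  msm_gem_purge_vma(vma->aspace, vma);

  /* re-pin: map back at the same GPU virtual address */
  msm_gem_map_vma(vma->aspace, vma, prot, msm_obj->sgt, npages);

  /* free/purge of the object: additionally release the iova range */
  msm_gem_close_vma(vma->aspace, vma);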
In normal cases the gem obj lock is acquired first before mm_lock. The
exception is iterating the various object lists. In the shrinker path,
deadlock is avoided by using msm_gem_trylock() and skipping over objects
that cannot be locked. But for debugfs the straightforward thing is to
split things out into a separate list of all objects protected by its
own lock.
Fixes: d984457b31 ("drm/msm: Add priv->mm_lock to protect active/inactive lists")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
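The shrinker-side convention, sketched (list name and helpers as used
elsewhere in this series):
  list_for_each_entry(msm_obj, &priv->inactive_dontneed, mm_list) {
          /* skip instead of inverting the obj lock -> mm_lock order */
          if (!msm_gem_trylock(&msm_obj->base))
                  continue;
          /* ... purge ... */
          msm_gem_unlock(&msm_obj->base);
  }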
When the system is under heavy memory pressure, we can end up with lots
of concurrent calls into the shrinker. Keeping a running tab on what we
can shrink avoids grabbing a lock in shrinker->count(), and avoids
shrinker->scan() getting called when not profitable.
Also, we can keep purged objects on their own list to avoid
re-traversing them, which further cuts down time in the critical
section.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-3-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
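A sketch of the lock-free count (the running-tab field name is
assumed):
  static unsigned long
  example_shrinker_count(struct shrinker *shrinker,
                         struct shrink_control *sc)
  {
          struct msm_drm_private *priv =
                  container_of(shrinker, struct msm_drm_private, shrinker);

          /* a running tab, maintained as objects change state */
          return priv->shrinkable_count;
  }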
Fix the following warning reported by coccicheck:
./drivers/gpu/drm/msm/msm_gem.c:991:3-9: WARNING: NULL check before some
freeing functions is not needed.
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Jiapeng Zhong <abaci-bugfix@linux.alibaba.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
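The change, sketched (kvfree() and friends already tolerate a NULL
pointer):
  /* before */
  if (msm_obj->pages)
          kvfree(msm_obj->pages);

  /* after */
  kvfree(msm_obj->pages);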
get_pages is only called in a locked context. Add a WARN_ON to make sure
it stays that way.
Signed-off-by: Iskren Chernev <iskren.chernev@gmail.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
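The guard, sketched (msm_gem_is_locked() is the helper from the msm
GEM locking rework; the body is abbreviated):
  static struct page **get_pages(struct drm_gem_object *obj)
  {
          struct msm_gem_object *msm_obj = to_msm_bo(obj);

          WARN_ON(!msm_gem_is_locked(obj));

          /* ... existing allocation path ... */
          return msm_obj->pages;
  }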
This backmerge is required since we will be based on top of v5.11, and
there has already been a request to backmerge in order to upstream
some features.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
* Shutdown hook for GPU (to ensure GPU is idle before iommu goes away)
* GPU cooling device support
* DSI 7nm and 10nm phy/pll updates
* Additional sm8150/sm8250 DPU support (merge_3d and DSPP color
processing)
* Various DP fixes
* A whole bunch of W=1 fixes from Lee Jones
* GEM locking re-work (no more trylock_recursive in shrinker!)
* LLCC (system cache) support
* Various other fixes/cleanups
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGt0G=H3_RbF_GAQv838z5uujSmFd+7fYhL6Yg=23LwZ=g@mail.gmail.com
When using GEM with the vram carveout, page allocation is managed via
drm_mm. The necessary drm_mm_node is allocated in add_vma, but it is
referenced in msm_gem_object as well. It is freed before the
drm_mm_node has been deallocated, leading to a use-after-free on every
single vram allocation.
Currently put_iova is called before put_pages in both
msm_gem_free_object and msm_gem_purge:
put_iova -> del_vma -> kfree(vma)       // vma holds the drm_mm_node
/* later */
put_pages -> put_pages_vram ->
        drm_mm_remove_node(msm_obj->vram_node)
                                        // vram_node is a ref to the
                                        // drm_mm_node, set in _msm_gem_new
It looks like del_vma does nothing other than freeing the vma object
and removing it from its list, so delaying the deletion should be
harmless.
This patch splits put_iova in put_iova_spaces and put_iova_vmas, so the
vma can be freed after the mm_node has been deallocated with the mm.
Note: The breaking commit moved the vma allocation from within
msm_gem_object to outside it, so the vram_node reference ended up
pointing outside the msm_gem_object allocation, and the freeing order
was therefore overlooked.
Fixes: 4b85f7f5cf ("drm/msm: support for an arbitrary number of address spaces")
Signed-off-by: Iskren Chernev <iskren.chernev@gmail.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
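The corrected teardown order, sketched (function names from this
commit):
  put_iova_spaces(obj);   /* detach the vmas from their address spaces */
  put_pages(obj);         /* drm_mm_remove_node(msm_obj->vram_node) */
  put_iova_vmas(obj);     /* only now kfree the vmas */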
Mapping the imported pages of a DMA-buf into a userspace process
doesn't work as expected.
But we have recurring requests for this approach, so split out the
functions for it and document that dma_buf_mmap() needs to be used
instead.
v2: split it into two functions
v3: rebased on latest changes
v4: update commit message a bit
Signed-off-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/403838/
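The documented rule, sketched from the importer side (illustrative
code; a pgoff of 0 maps from the start of the buffer):
  static int example_importer_mmap(struct drm_gem_object *obj,
                                   struct vm_area_struct *vma)
  {
          /* forward to the exporter instead of mapping the pages */
          if (obj->import_attach)
                  return dma_buf_mmap(obj->import_attach->dmabuf, vma, 0);

          return -ENODEV; /* native objects take the driver's own path */
  }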
In situations where the GPU is mostly idle, all or nearly all buffer
objects will be on the inactive list. But if the system is under
memory pressure (from something other than the GPU), we could still
get a lot of shrinker calls, which results in traversing a list of
thousands of objs and, in the end, finding nothing to shrink. This
isn't very efficient.
Instead split the inactive_list into two lists, one inactive objs which
are shrinkable, and a second one for those that are not. This way we
can avoid traversing objs which we know are not shrinker candidates.
v2: Fix inverted logic think-o
Signed-off-by: Rob Clark <robdclark@chromium.org>
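A sketch of the resulting bookkeeping (list names assumed): objects
migrate between the two inactive lists as their madvise state changes,
so the shrinker only ever walks candidates.
  static void example_update_inactive(struct msm_drm_private *priv,
                                      struct msm_gem_object *msm_obj)
  {
          list_del(&msm_obj->mm_list);
          if (msm_obj->madv == MSM_MADV_DONTNEED)  /* shrinkable */
                  list_add_tail(&msm_obj->mm_list, &priv->inactive_dontneed);
          else
                  list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed);
  }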
Previously we only held obj lock in the _active_get() path, and relied
on atomic_dec_return() to not be racy in the _active_put() path where
obj lock was not held.
But this is a false sense of security. Unlike obj lifetime refcnt,
where you do not expect to *increase* the refcnt after the last put
(which would mean that something has gone horribly wrong with the
object liveness reference counting), the active_count can increase
again from zero. Racing _active_put()s and _active_get()s could leave
the obj on the wrong mm list.
But in the retire path, immediately after the _active_put(), the
_unpin_iova() would acquire obj lock. So just move the locking earlier
and rely on that to protect obj->active_count.
Fixes: c5c1643cef ("drm/msm: Drop struct_mutex from the retire path")
Signed-off-by: Rob Clark <robdclark@chromium.org>
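The resulting rule, sketched (helper names as in the msm locking
rework; update_inactive() is an assumed name for the list move): both
get and put manipulate active_count only under the obj lock.
  static void example_active_put(struct drm_gem_object *obj)
  {
          struct msm_gem_object *msm_obj = to_msm_bo(obj);

          WARN_ON(!msm_gem_is_locked(obj));

          if (--msm_obj->active_count == 0)
                  update_inactive(msm_obj);  /* move to the right mm list */
  }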
Add the new vma_set_file() function to allow changing
vma->vm_file with the necessary refcount dance.
v2: add more users of this.
v3: add missing EXPORT_SYMBOL, rebase on mmap cleanup,
add comments why we drop the reference on two occasions.
v4: make it clear that changing an anonymous vma is illegal.
v5: move vma_set_file to mm/util.c
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> (v2)
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Link: https://patchwork.freedesktop.org/patch/399360/
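The refcount dance, sketched (this is the shape the helper takes in
mm/util.c): take the new reference before dropping the one currently
held through vma->vm_file.
  void vma_set_file(struct vm_area_struct *vma, struct file *file)
  {
          /* Changing an anonymous vma with this is illegal */
          get_file(file);
          swap(vma->vm_file, file);
          fput(file);
  }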
Add the new vma_set_file() function to allow changing
vma->vm_file with the necessary refcount dance.
v2: add more users of this.
v3: add missing EXPORT_SYMBOL, rebase on mmap cleanup,
add comments why we drop the reference on two occasions.
v4: make it clear that changing an anonymous vma is illegal.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> (v2)
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://patchwork.freedesktop.org/patch/394773/