path: root/drivers/gpu/drm/msm/msm_gem.h
2022-08-28  drm/msm/gem: Convert to lockdep assert  (Rob Clark, 1 file, -9/+6)
Utilize the power of lockdep for our GEM locking related sanity checking.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496139/
Link: https://lore.kernel.org/r/20220802155152.1727594-16-robdclark@gmail.com
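For illustration, a minimal sketch of what a lockdep-based assert can look like, assuming the object is locked via its dma_resv (as in the per-BO resv locking further down this log); this is not the driver's exact code:

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

/*
 * Sketch only: dma_resv_assert_held() expands to lockdep_assert_held()
 * on the object's reservation lock, so a missing lock is reported by
 * lockdep instead of open-coded WARN_ON()s.
 */
static inline void
msm_gem_assert_locked(struct drm_gem_object *obj)
{
        dma_resv_assert_held(obj->resv);
}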
2022-08-27  drm/msm/gem: Add msm_gem_assert_locked()  (Rob Clark, 1 file, -1/+7)
All use of msm_gem_is_locked() is just for WARN_ON()s, so extract it out into an msm_gem_assert_locked() helper.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496136/
Link: https://lore.kernel.org/r/20220802155152.1727594-15-robdclark@gmail.com
2022-08-27  drm/msm/gem: Convert to using drm_gem_lru  (Rob Clark, 1 file, -93/+0)
This converts over to using the shared GEM LRU/shrinker helpers. Note that it means we are no longer separately tracking purgeable or willneed buffers that are active. But the most recently pinned buffers should be at the tail of the various LRUs, and the shrinker is already prepared to encounter objects which are still active.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496131/
Link: https://lore.kernel.org/r/20220802155152.1727594-11-robdclark@gmail.com
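As a rough sketch (list names made up, not the driver's actual lists), the shared helpers boil down to per-state LRUs that can share one lock, with an object moved to the tail of its current list whenever it is touched:

#include <linux/mutex.h>
#include <drm/drm_gem.h>

static struct mutex lru_lock;
static struct drm_gem_lru pinned_lru, unpinned_lru;

static void example_lru_init(void)
{
        mutex_init(&lru_lock);
        /* several LRUs can share one lock, so moving an object between
         * them is a cheap list move under that single lock */
        drm_gem_lru_init(&pinned_lru, &lru_lock);
        drm_gem_lru_init(&unpinned_lru, &lru_lock);
}

static void example_mark_pinned(struct drm_gem_object *obj)
{
        /* most recently pinned objects end up at the tail, so a shrinker
         * scanning from the head sees the coldest objects first */
        drm_gem_lru_move_tail(&pinned_lru, obj);
}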
2022-08-27  drm/msm/gem: Remove active refcnt  (Rob Clark, 1 file, -12/+2)
At this point the pinned refcnt is sufficient, and the shrinker is already prepared to encounter objects which are still active according to fences attached to the resv.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496122/
Link: https://lore.kernel.org/r/20220802155152.1727594-9-robdclark@gmail.com
2022-08-27  drm/msm/gem: Rename to pin/unpin_pages  (Rob Clark, 1 file, -2/+2)
Since that is what these functions actually do: they are getting *pinned* pages (as opposed to cases where we need pages, but don't need them pinned, like CPU mappings).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496121/
Link: https://lore.kernel.org/r/20220802155152.1727594-7-robdclark@gmail.com
2022-08-27  drm/msm/gem: Check for active in shrinker path  (Rob Clark, 1 file, -0/+1)
Currently in our shrinker path we shouldn't be encountering anything that is active, but this will change in subsequent patches. So check if there are unsignaled fences.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496117/
Link: https://lore.kernel.org/r/20220802155152.1727594-5-robdclark@gmail.com
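A hedged sketch of such a check, assuming the fences of interest live on the object's reservation object (not the patch's exact code):

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

/* Sketch only: returns true if any read or write fence on the BO is
 * still unsignaled, i.e. the object is still busy on the GPU. */
static bool example_gem_active(struct drm_gem_object *obj)
{
        return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ);
}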
2022-07-06  drm/msm/gem: Drop obj lock in msm_gem_free_object()  (Rob Clark, 1 file, -1/+13)
The only reason we grabbed the lock was to satisfy a bunch of places that WARN_ON() if called without the lock held. But this angers lockdep, which doesn't realize that no one else can be holding the lock by the time we end up destroying the object (and sees what would otherwise be a locking inversion between reservation_ww_class_mutex and fs_reclaim).
Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/14
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/489364/
Link: https://lore.kernel.org/r/20220613205032.2652374-1-robdclark@gmail.com
2022-07-05  drm/msm: Make msm_gem_free_object() static  (Rob Clark, 1 file, -1/+0)
Misc small cleanup I noticed. Not called from another object file since commit 3c9edd9c85f5 ("drm/msm: Introduce GEM object funcs").
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Abhinav Kumar <quic_abhinavk@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/489362/
Link: https://lore.kernel.org/r/20220613204910.2651747-1-robdclark@gmail.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
2022-06-15  drm/msm/gem: Separate object and vma unpin  (Rob Clark, 1 file, -5/+6)
Previously the BO_PINNED state in the submit was tracking two related but different things: (1) that the buffer object was pinned, and (2) that the vma (mapping within a set of pagetables) was pinned. But with fenced vma unpin (needed so that userspace couldn't race with the retire path for releasing a vma) these two were decoupled. The fact that the BO_PINNED flag was already cleared meant that we leaked the bo pin count which should have been dropped when the submit was retired. So split this state into BO_OBJ_PINNED and BO_VMA_PINNED, so they can be dropped independently.
Fixes: 95d1deb02a9c ("drm/msm/gem: Add fenced vma unpin")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/487559/
Link: https://lore.kernel.org/r/20220527172341.2151005-1-robdclark@gmail.com
2022-04-21  drm/msm: Add a way for userspace to allocate GPU iova  (Rob Clark, 1 file, -0/+8)
The motivation at this point is mainly a native userspace mesa driver in a VM guest. The one remaining synchronous "hotpath" is buffer allocation, because the guest needs to wait to know the bo's iova before it can start emitting cmdstream/state that references the new bo. By allocating the iova in guest userspace, we no longer need to wait for a response from the host, but can just rely on the allocation request being processed before the cmdstream submission. Allocation failures (OOM, etc.) would just be treated as context-lost (ie. GL_GUILTY_CONTEXT_RESET), or subsequent allocations (or readpix, etc.) can raise GL_OUT_OF_MEMORY.
v2: Fix inuse check
v3: Change mismatched iova case to -EBUSY
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Reviewed-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Link: https://lore.kernel.org/r/20220411215849.297838-11-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
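A hypothetical userspace-side sketch of how such an interface gets used; MSM_INFO_SET_IOVA and struct drm_msm_gem_info are the uapi pieces involved, while the wrapper function, include paths and error handling here are illustrative assumptions:

#include <stdint.h>
#include <xf86drm.h>
#include "msm_drm.h"

/* Sketch only: assign a userspace-chosen iova to a freshly created BO. */
static int example_set_iova(int fd, uint32_t handle, uint64_t iova)
{
        struct drm_msm_gem_info req = {
                .handle = handle,
                .info   = MSM_INFO_SET_IOVA,
                .value  = iova,
        };

        /* Per the v3 note above, a mismatched/in-use iova is reported
         * as -EBUSY by the kernel. */
        return drmCommandWriteRead(fd, DRM_MSM_GEM_INFO, &req, sizeof(req));
}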
2022-04-21  drm/msm/gem: Add fenced vma unpin  (Rob Clark, 1 file, -2/+12)
With userspace allocated iova (next patch), we can have a race condition where userspace observes the fence completion and deletes the vma before retire_submit() gets around to unpinning the vma. To handle this, add a fenced unpin which drops the refcount but tracks the fence, and update msm_gem_vma_inuse() to check any previously unsignaled fences.
v2: Fix inuse underflow (duplicate unpin)
v3: Fix msm_job_run() vs submit_cleanup() race condition
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220411215849.297838-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
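A simplified sketch of the idea; the struct and its "inuse"/"fence" fields are stand-ins for whatever the driver actually tracks per vma, not the real msm_gem_vma layout:

#include <linux/dma-fence.h>

struct example_vma {
        int inuse;                      /* pin count */
        struct dma_fence *fence;        /* recorded by the fenced unpin */
};

/* Sketch only: a vma stays "in use" until its pin count is zero and the
 * fence recorded at fenced-unpin time has signaled. */
static bool example_vma_inuse(struct example_vma *vma)
{
        if (vma->inuse > 0)
                return true;

        return vma->fence && !dma_fence_is_signaled(vma->fence);
}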
2022-04-21  drm/msm/gem: Split vma lookup and pin  (Rob Clark, 1 file, -4/+5)
This way we only look up the vma once per object per submit, for both the submit and retire path.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220411215849.297838-9-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21  drm/msm: Drop msm_gem_iova()  (Rob Clark, 1 file, -2/+0)
There was only a single user, which could just as easily stash the iova when pinning.
v2: fix prepare->prepare->cleanup->cleanup sequences
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21  drm/msm/gem: Drop PAGE_SHIFT for address space mm  (Rob Clark, 1 file, -2/+2)
Get rid of all the unnecessary conversion between address/size and page offsets. It just confuses things.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-6-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21  drm/msm/gem: Split out inuse helper  (Rob Clark, 1 file, -0/+1)
Prep for a following patch, where it gets a bit more complicated.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-5-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21  drm/msm/gem: Move prototypes  (Rob Clark, 1 file, -0/+22)
These belong more cleanly in the gem header.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-04-21  drm/msm: Remove unused field in submit  (Rob Clark, 1 file, -1/+0)
Noticed this was unused and never set.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220225202614.225197-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2022-02-20  drm/msm/gpu: Track global faults per address-space  (Rob Clark, 1 file, -0/+3)
Other processes don't need to know about faults that they are isolated from by virtue of address space isolation. They are only interested in whether some of their state might have been corrupted. But to be safe, also track unattributed faults. This case should really never happen unless there is a kernel bug (and that would never happen, right?)
v2: Instead of adding a new param, just change the behavior of the existing param to match what userspace actually wants [anholt]
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5934
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220201161618.778455-3-robdclark@gmail.com
Reviewed-by: Emma Anholt <emma@anholt.net>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-09-14  Merge drm/drm-next into drm-misc-next  (Maxime Ripard, 1 file, -3/+0)
Kickstart new drm-misc-next cycle.
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
2021-08-30  drm/msm: Use scheduler dependency handling  (Daniel Vetter, 1 file, -5/+0)
drm_sched_job_init is already at the right place, so this boils down to deleting code.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Link: https://patchwork.freedesktop.org/patch/msgid/20210805104705.862416-13-daniel.vetter@ffwll.ch
2021-08-07  drm/msm: Implement mmap as GEM object function  (Thomas Zimmermann, 1 file, -3/+0)
Moving the driver-specific mmap code into a GEM object function allows for using DRM helpers for various mmap callbacks. The respective msm functions are being removed. The file_operations structure fops is now being created by the helper macro DEFINE_DRM_GEM_FOPS().
v2:
* rebase onto latest upstream
* remove declaration of msm_gem_mmap_obj() from msm_fbdev.c
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://lore.kernel.org/r/20210706084753.8194-1-tzimmermann@suse.de
[squash in missing VM_DONTEXPAND flag]
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-07-28  drm/msm: Drop submit bo_list  (Rob Clark, 1 file, -8/+0)
This was only used to detect userspace including the same bo multiple times in a submit. But ww_mutex can already tell us this. When we drop struct_mutex around the submit ioctl, we'd otherwise need to lock the bo before adding it to the bo_list. But since ww_mutex can already tell us this, it is simpler just to remove the bo_list.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210728010632.2633470-11-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
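A sketch of how ww_mutex reports the duplicate; the function name and the mapping to -EINVAL are illustrative, not the driver's exact submit path:

#include <linux/dma-resv.h>
#include <drm/drm_gem.h>

/* Locking the same reservation twice within one ww_acquire_ctx returns
 * -EALREADY, which the submit path can turn into a userspace error. */
static int example_lock_submit_bo(struct drm_gem_object *obj,
                                  struct ww_acquire_ctx *ticket)
{
        int ret = dma_resv_lock(obj->resv, ticket);

        if (ret == -EALREADY)
                return -EINVAL; /* same BO listed twice in the submit */

        return ret;
}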
2021-07-28  drm/msm: Conversion to drm scheduler  (Rob Clark, 1 file, -3/+23)
For existing adrenos, there are one or more ringbuffers, depending on whether preemption is supported. When preemption is supported, each ringbuffer has its own priority. A submitqueue (which maps to a gl context or vk queue in userspace) is mapped to a specific ringbuffer at creation time, based on the submitqueue's priority. Each ringbuffer has its own drm_gpu_scheduler. Each submitqueue maps to a drm_sched_entity. And each submit maps to a drm_sched_job.
Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/4
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
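A simplified sketch of that mapping, with stripped-down stand-in structs (the real driver structs carry far more state, and the priority handling is richer than a single level):

#include <drm/gpu_scheduler.h>

/* Sketch only: one scheduler per ring, one entity per submitqueue,
 * one job per submit. */
struct example_ring        { struct drm_gpu_scheduler sched; };
struct example_submitqueue { struct drm_sched_entity entity; };
struct example_submit      { struct drm_sched_job base; };

static int example_queue_init(struct example_submitqueue *queue,
                              struct example_ring *ring)
{
        struct drm_gpu_scheduler *sched = &ring->sched;

        /* the queue's priority picks the ring; within that ring's
         * scheduler a single priority level suffices for this sketch */
        return drm_sched_entity_init(&queue->entity,
                                     DRM_SCHED_PRIORITY_NORMAL,
                                     &sched, 1, NULL);
}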
2021-07-27  drm/msm: Track "seqno" fences by idr  (Rob Clark, 1 file, -0/+1)
Previously the (non-fd) fence returned from the submit ioctl was a raw seqno, which is scoped to the ring. But from a UABI standpoint, the ioctls related to seqno fences all specify a submitqueue. We can take advantage of that to replace the seqno fences with a cyclic idr handle. This is in preparation for moving to the drm scheduler, at which point the submit ioctl will return after queuing the submit job to the scheduler, but before the submit is written into the ring (and therefore before a ring seqno has been assigned). Which means we need to replace the dma_fence that userspace may need to wait on with a scheduler fence.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-8-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
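A sketch of the per-submitqueue handle allocation; the struct, its field names and the function name are assumptions for illustration:

#include <linux/idr.h>
#include <linux/mutex.h>

struct example_queue {
        struct mutex idr_lock;
        struct idr fence_idr;   /* maps fence id -> submit */
};

/* Sketch only: hand out a cyclic, per-submitqueue fence id so the submit
 * ioctl can return before the job has reached the ring. */
static int example_alloc_fence_id(struct example_queue *queue, void *submit)
{
        int id;

        mutex_lock(&queue->idr_lock);
        /* cyclic allocation so recently retired ids are not reused at once */
        id = idr_alloc_cyclic(&queue->fence_idr, submit, 1, INT_MAX,
                              GFP_KERNEL);
        mutex_unlock(&queue->idr_lock);

        return id;      /* userspace waits on this instead of a ring seqno */
}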
2021-07-27  drm/msm: Consolidate submit bo state  (Rob Clark, 1 file, -0/+2)
Move all the locked/active/pinned state handling to msm_gem_submit.c. In particular, for drm/scheduler, we'll need to do all this before pushing the submit job to the scheduler. But while we're at it we can get rid of the duplicate pin and refcnt.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-07-27  drm/msm: drop drm_gem_object_put_locked()  (Rob Clark, 1 file, -6/+1)
No idea why we were still using this. It certainly hasn't been needed for some time. So drop the pointless twin codepaths.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-07-27  drm/msm: Docs and misc cleanup  (Rob Clark, 1 file, -2/+1)
Fix a couple of incorrect or misspelled comments, and add a submitqueue doc comment.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-06-23  drm/msm: devcoredump iommu fault support  (Rob Clark, 1 file, -0/+1)
Wire up support to stall the SMMU on iova fault, and collect a devcoredump snapshot for easier debugging of faults. Currently this is a6xx-only, but mostly only because so far it is the only one using adreno-smmu-priv.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Jordan Crouse <jordan@cosmicpenguin.net>
Link: https://lore.kernel.org/r/20210610214431.539029-6-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-27  drm/msm: Do not unpin/evict exported dma-buf's  (Rob Clark, 1 file, -2/+2)
Our initial logic for excluding dma-bufs was not quite right. In particular we want the msm_gem_get/put_pages() path used for exported dma-bufs to increment/decrement the pin-count. Also, in case the importer is vmap'ing the dma-buf, we need to be sure to update the object's status, because it is now no longer potentially evictable.
Fixes: 63f17ef83428 ("drm/msm: Support evicting GEM objects to swap")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210426235326.1230125-1-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07  drm/msm: Track potentially evictable objects  (Rob Clark, 1 file, -1/+46)
Objects that are potential candidates for swapping out are (1) willneed (ie. if they are purgeable/MADV_WONTNEED we can just free the pages without them having to land in swap), (2) not on an active list, (3) not dma-buf imported or exported, and (4) not vmap'd. This repurposes the purged list for objects that do not have backing pages (either because they have not been pinned for the first time yet, or, in a later patch, because they have been unpinned/evicted).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
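For illustration, the four conditions above roughly translate into a check like the following; this is a sketch with field names as they commonly appear in this driver, not guaranteed to match the patch exactly:

#include "msm_gem.h"

/* Sketch of the "is this BO a candidate for swap-out?" test. */
static bool example_potentially_evictable(struct msm_gem_object *msm_obj)
{
        struct drm_gem_object *obj = &msm_obj->base;

        return (msm_obj->madv == MSM_MADV_WILLNEED) && /* (1) willneed      */
               !msm_obj->active_count &&               /* (2) not active    */
               !obj->import_attach && !obj->dma_buf && /* (3) not a dma-buf */
               !msm_obj->vaddr;                        /* (4) not vmap'd    */
}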
2021-04-07  drm/msm: Add $debugfs/gem stats on resident objects  (Rob Clark, 1 file, -2/+2)
Currently nearly everything, other than newly allocated objects which are not yet backed by pages, is pinned and resident in RAM. But it will be nice to have some stats on what is unpinned once that is supported.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-6-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07  drm/msm: ratelimit GEM related WARN_ON()s  (Rob Clark, 1 file, -7/+12)
If you mess something up, you don't really need to see the same warn-on splat 4000 times pumped out over a slow debug UART port.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07  drm/msm: Fix spelling "purgable" -> "purgeable"  (Rob Clark, 1 file, -8/+8)
The previous patch fixes the user visible spelling. This one fixes the code. Oops.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210406151816.1515329-1-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07  drm/msm: Improved debugfs gem stats  (Rob Clark, 1 file, -1/+10)
The last patch lost the breakdown of active vs inactive GEM objects in $debugfs/gem. But we can add some better stats to summarize not just active vs inactive, but also purgeable/purged, to make up for that.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-5-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07  drm/msm: Fix debugfs deadlock  (Rob Clark, 1 file, -2/+8)
In normal cases the gem obj lock is acquired first before mm_lock. The exception is iterating the various object lists. In the shrinker path, deadlock is avoided by using msm_gem_trylock() and skipping over objects that cannot be locked. But for debugfs the straightforward thing is to split things out into a separate list of all objects protected by its own lock.
Fixes: d984457b31c4 ("drm/msm: Add priv->mm_lock to protect active/inactive lists")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2021-04-07  drm/msm: Avoid mutex in shrinker_count()  (Rob Clark, 1 file, -4/+49)
When the system is under heavy memory pressure, we can end up with lots of concurrent calls into the shrinker. Keeping a running tab on what we can shrink avoids grabbing a lock in shrinker->count(), and avoids shrinker->scan() getting called when not profitable. Also, we can keep purged objects in their own list to avoid re-traversing them to help cut down time in the critical section further.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-3-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
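As a sketch (the counter field name is an assumption), a lock-free count_objects() simply reports a running tally that is maintained whenever an object changes state, instead of walking the object lists under a lock:

#include <linux/shrinker.h>
#include "msm_drv.h"

/* Sketch only: report a running tally kept up to date elsewhere. */
static unsigned long example_shrinker_count(struct shrinker *shrinker,
                                            struct shrink_control *sc)
{
        struct msm_drm_private *priv =
                container_of(shrinker, struct msm_drm_private, shrinker);

        /* updated whenever an object becomes purgeable or is purged */
        return priv->shrinkable_count;
}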
2021-04-07  drm/msm: Remove unused freed llist node  (Rob Clark, 1 file, -2/+0)
Unused since commit c951a9b284b9 ("drm/msm: Remove msm_gem_free_work").
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-12-10  Merge tag 'drm-msm-next-2020-12-07' of https://gitlab.freedesktop.org/drm/msm into drm-next  (Dave Airlie, 1 file, -24/+113)
* Shutdown hook for GPU (to ensure GPU is idle before iommu goes away)
* GPU cooling device support
* DSI 7nm and 10nm phy/pll updates
* Additional sm8150/sm8250 DPU support (merge_3d and DSPP color processing)
* Various DP fixes
* A whole bunch of W=1 fixes from Lee Jones
* GEM locking re-work (no more trylock_recursive in shrinker!)
* LLCC (system cache) support
* Various other fixes/cleanups
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGt0G=H3_RbF_GAQv838z5uujSmFd+7fYhL6Yg=23LwZ=g@mail.gmail.com
2020-11-21  drm/msm: Protect obj->active_count under obj lock  (Rob Clark, 1 file, -2/+3)
Previously we only held obj lock in the _active_get() path, and relied on atomic_dec_return() to not be racy in the _active_put() path where obj lock was not held. But this is a false sense of security. Unlike obj lifetime refcnt, where you do not expect to *increase* the refcnt after the last put (which would mean that something has gone horribly wrong with the object liveness reference counting), the active_count can increase again from zero. Racing _active_put()s and _active_get()s could leave the obj on the wrong mm list. But in the retire path, immediately after the _active_put(), the _unpin_iova() would acquire obj lock. So just move the locking earlier and rely on that to protect obj->active_count.
Fixes: c5c1643cef7a ("drm/msm: Drop struct_mutex from the retire path")
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-11-04  drm/msm: Drop struct_mutex in madvise path  (Rob Clark, 1 file, -2/+0)
The obj->lock is sufficient for what we need. This *does* have the implication that userspace can try to shoot themselves in the foot by racing madvise(DONTNEED) with submit. But the result will be about the same as if they did madvise(DONTNEED) before the submit ioctl, ie. they might not get what they want if they race with the shrinker. But iova fault handling is robust enough, so userspace is only shooting its own foot.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-11-04  drm/msm: Remove msm_gem_free_work  (Rob Clark, 1 file, -1/+0)
Now that we don't need struct_mutex in the free path, we can get rid of the asynchronous free altogether.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-11-04  drm/msm: Remove obj->gpu  (Rob Clark, 1 file, -1/+0)
It cannot be atomically updated with obj->active_count, and the only purpose is a useless WARN_ON() (which becomes a buggy WARN_ON() once retire_submits() is not serialized with incoming submits via struct_mutex).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-11-04  drm/msm: Refcount submits  (Rob Clark, 1 file, -0/+13)
Before we remove dev->struct_mutex from the retire path, we have to deal with the situation of a submit retiring before the submit ioctl returns. To deal with this, ring->submits will hold a reference to the submit, which is dropped when the submit is retired. And the submit ioctl path holds its own ref, which it drops when it is done with the submit. Also, add to the submit list *after* getting/pinning bo's, to prevent badness in case the completed fence is corrupted, and retire_worker mistakenly believes the submit is done too early.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
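A minimal sketch of the refcounting pattern described here; the struct, field and function names are assumptions for illustration rather than the driver's actual ones:

#include <linux/kref.h>
#include <linux/slab.h>

struct example_submit {
        struct kref ref;
        /* ... bos, fences, ring, etc. ... */
};

static void example_submit_destroy(struct kref *kref)
{
        struct example_submit *submit =
                container_of(kref, struct example_submit, ref);

        kfree(submit);
}

/* one ref held by ring->submits until retire, one by the submit ioctl */
static inline void example_submit_put(struct example_submit *submit)
{
        kref_put(&submit->ref, example_submit_destroy);
}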
2020-11-04  drm/msm/gem: Switch over to obj->resv for locking  (Rob Clark, 1 file, -11/+5)
This also converts the special msm_gem_get_vaddr_active() to expect the lock to already be held. There are two call-sites for this, one already has the lock held, so it is more straightforward to just open-code the locking for the other caller.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-11-04  drm/msm/submit: Move copy_from_user ahead of locking bos  (Rob Clark, 1 file, -0/+3)
We cannot switch to using obj->resv for locking without first moving all the copy_from_user() ahead of submit_lock_objects(). Otherwise in the mm fault path we acquire mm->mmap_sem before the obj lock, but in the submit path the order is reversed.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-11-04  drm/msm/gem: Move locking in shrinker path  (Rob Clark, 1 file, -18/+11)
Move grabbing the bo lock into the shrinker, with a msm_gem_trylock() to skip over bo's that are already locked. This gets rid of the nested lock classes.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-11-04  drm/msm/gem: Add some _locked() helpers  (Rob Clark, 1 file, -0/+6)
When we cut over to using dma_resv_lock/etc instead of msm_obj->lock, we'll need these for the submit path (where resv->lock is already held).
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-11-04  drm/msm/gem: Move prototypes to msm_gem.h  (Rob Clark, 1 file, -0/+56)
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
2020-11-04  drm/msm/gem: Add obj->lock wrappers  (Rob Clark, 1 file, -0/+28)
This will make it easier to transition over to obj->resv locking for everything that is per-bo locking.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
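A sketch of the kind of wrappers this introduces, assuming a per-object mutex named msm_obj->lock and the existing to_msm_bo() helper; the later "Switch over to obj->resv" commit above swaps the mutex for the object's dma_resv lock behind the same interface:

#include "msm_gem.h"

/* Sketch only: thin wrappers so callers never touch the lock directly. */
static inline void
msm_gem_lock(struct drm_gem_object *obj)
{
        mutex_lock(&to_msm_bo(obj)->lock);
}

static inline void
msm_gem_unlock(struct drm_gem_object *obj)
{
        mutex_unlock(&to_msm_bo(obj)->lock);
}

static inline bool
msm_gem_is_locked(struct drm_gem_object *obj)
{
        return mutex_is_locked(&to_msm_bo(obj)->lock);
}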
2020-09-22  drm/msm: Fix premature purging of BO  (Akhil P Oommen, 1 file, -1/+3)
In the case where we have a back-to-back submission that shares the same BO, this BO will be prematurely moved to the inactive_list while retiring the first submit. But it will still be part of the second submit, which is being processed by the GPU. Now, if the shrinker happens to be triggered at this point, it will result in a premature purging of this BO. To fix this, we need to refcount the BO while doing submit and retire. Then, it should be moved to the inactive list when this refcount becomes 0.
Signed-off-by: Akhil P Oommen <akhilpo@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@chromium.org>