author     Chris Wilson <chris@chris-wilson.co.uk>    2018-03-07 17:13:03 +0000
committer  Rodrigo Vivi <rodrigo.vivi@intel.com>      2018-03-12 11:24:44 -0700
commit     82813ba9b4b31cd445a2ec1a1404de6e78f32b35 (patch)
tree       52c2bf5134ef004c41dda0b13f8a422fb42dffba /drivers
parent     e8cd7143e269ab58344ff10d9729da144e17fed2 (diff)
download   linux-82813ba9b4b31cd445a2ec1a1404de6e78f32b35.tar.bz2
drm/i915: Only prune fences after wait-for-all
Currently, we only allow ourselves to prune the fences so long as all the
waits completed (i.e. all the fences we checked were signaled) and the
reservation snapshot did not change across the wait. However, if we only
waited for a subset of the reservation object, e.g. waiting only for the
last writer to complete rather than all readers as well, then we would
erroneously conclude that we could prune the fences: although all of our
waits were successful, they did not cover the totality of the reservation
object.

v2: We only need to check the shared fences, by construction (all of the
shared fences will be later than the exclusive fence, if any).

Fixes: e54ca9774777 ("drm/i915: Remove completed fences after a wait")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180307171303.29466-1-chris@chris-wilson.co.uk
(cherry picked from commit fa73055b8442c97b3ba7cd0aa57cd2ad32124201)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
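To make the rule concrete, here is a minimal, self-contained C sketch of the
pruning decision the patch enforces. The struct and function names below are
illustrative only, not the kernel's API: pruning the cached fences is only
legitimate after a wait-for-all, i.e. when every shared fence in the snapshot
was waited on and found signaled; a wait on the exclusive (last-writer) fence
alone says nothing about the readers.

	#include <stdbool.h>
	#include <stddef.h>

	/* Hypothetical toy types; the kernel uses dma_fence/reservation_object. */
	struct fence {
		bool signaled;
	};

	struct wait_snapshot {
		struct fence **shared;		/* all readers in the snapshot */
		size_t shared_count;
		struct fence *excl;		/* last writer, may be NULL */
	};

	/*
	 * Wait-for-all path: pruning is justified only when every shared fence
	 * was checked and found signaled; by construction (see the patch's new
	 * comment) the exclusive fence is older, so it is implied signaled too.
	 */
	static bool may_prune_after_wait_all(const struct wait_snapshot *s)
	{
		for (size_t i = 0; i < s->shared_count; i++)
			if (!s->shared[i]->signaled)
				return false;
		return s->shared_count > 0;
	}

	/*
	 * Writer-only path: even if the exclusive fence signaled, none of the
	 * shared fences were inspected, so pruning would be premature. This is
	 * exactly the case the patch stops treating as prunable.
	 */
	static bool may_prune_after_wait_excl(const struct wait_snapshot *s)
	{
		(void)s;
		return false;
	}

	int main(void)
	{
		struct fence writer = { .signaled = true };
		struct fence reader = { .signaled = false };
		struct fence *shared[] = { &reader };
		struct wait_snapshot snap = { shared, 1, &writer };

		/* Reader still busy: neither path may prune, so this returns 0. */
		return may_prune_after_wait_all(&snap) ||
		       may_prune_after_wait_excl(&snap);
	}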
Diffstat (limited to 'drivers')
-rw-r--r--  drivers/gpu/drm/i915/i915_gem.c  16
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 66ee9d888d16..6ff5d655c202 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -434,20 +434,28 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
 			dma_fence_put(shared[i]);
 		kfree(shared);
 
+		/*
+		 * If both shared fences and an exclusive fence exist,
+		 * then by construction the shared fences must be later
+		 * than the exclusive fence. If we successfully wait for
+		 * all the shared fences, we know that the exclusive fence
+		 * must all be signaled. If all the shared fences are
+		 * signaled, we can prune the array and recover the
+		 * floating references on the fences/requests.
+		 */
 		prune_fences = count && timeout >= 0;
 	} else {
 		excl = reservation_object_get_excl_rcu(resv);
 	}
 
-	if (excl && timeout >= 0) {
 		timeout = i915_gem_object_wait_fence(excl, flags, timeout,
 						     rps_client);
-		prune_fences = timeout >= 0;
-	}
+	if (excl && timeout >= 0)
 
 	dma_fence_put(excl);
 
-	/* Oportunistically prune the fences iff we know they have *all* been
+	/*
+	 * Opportunistically prune the fences iff we know they have *all* been
 	 * signaled and that the reservation object has not been changed (i.e.
 	 * no new fences have been added).
 	 */
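The v2 note and the comment added by the hunk rest on an ordering invariant:
every shared fence in a reservation object is added after the exclusive fence,
so "all shared fences signaled" implies the exclusive fence has signaled as
well. A hypothetical, self-contained toy model of that reasoning (names and
types are illustrative, not kernel code):

	#include <assert.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct toy_fence {
		unsigned long seqno;	/* monotonic submission order */
		bool signaled;
	};

	/* In this toy model, fences signal strictly in submission order. */
	static void toy_signal_up_to(struct toy_fence **fences, size_t n,
				     unsigned long seqno)
	{
		for (size_t i = 0; i < n; i++)
			if (fences[i]->seqno <= seqno)
				fences[i]->signaled = true;
	}

	int main(void)
	{
		struct toy_fence excl = { .seqno = 1 };		/* last writer */
		struct toy_fence r0 = { .seqno = 2 };		/* readers added later */
		struct toy_fence r1 = { .seqno = 3 };
		struct toy_fence *all[] = { &excl, &r0, &r1 };

		/* Signal everything up to the newest shared fence. */
		toy_signal_up_to(all, 3, 3);

		/* All shared fences signaled => the older exclusive fence is
		 * signaled too, so pruning the whole array is safe. */
		assert(r0.signaled && r1.signaled);
		assert(excl.signaled);
		return 0;
	}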