path: root/drivers/gpu/drm/i915/selftests
2019-04-17  drm/i915: Verify the engine workarounds stick on application  (Chris Wilson; 1 file, -47/+6)
Read the engine workarounds back using the GPU after loading the initial context state to verify that we are setting them correctly, and bail if it fails. v2: Break out the verification into its own loop Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190417075657.19456-3-chris@chris-wilson.co.uk
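The check described above reduces to a compare-under-mask loop over the workaround list. Below is a hedged sketch of that loop; the struct layout and the read_reg() callback are illustrative assumptions (the actual selftest reads the registers back via the GPU rather than through a plain accessor), and only the masked comparison and the bail-out mirror the commit.

  #include <linux/errno.h>
  #include <linux/printk.h>
  #include <linux/types.h>

  struct wa_entry {
          u32 reg;        /* register offset */
          u32 mask;       /* bits the workaround controls */
          u32 val;        /* expected value under the mask */
  };

  static int verify_wa_list(const struct wa_entry *wa, unsigned int count,
                            u32 (*read_reg)(u32 offset))
  {
          unsigned int i;

          for (i = 0; i < count; i++) {
                  u32 cur = read_reg(wa[i].reg);

                  if ((cur ^ wa[i].val) & wa[i].mask) {
                          pr_err("wa reg %#x lost: want %#x, got %#x (mask %#x)\n",
                                 wa[i].reg, wa[i].val, cur, wa[i].mask);
                          return -EINVAL; /* bail if the workaround did not stick */
                  }
          }

          return 0;
  }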
2019-04-15  drm/i915/selftests: Skip live timeline/suspend tests if wedged  (Chris Wilson; 2 files, -0/+6)
If the driver is wedged, we can not issue the requests to exercise the timelines or the system across suspend, so skip the tests. live_hangcheck is there to fail if we cannot recover. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190413125820.14112-4-chris@chris-wilson.co.uk
2019-04-13  drm/i915: Teach intel_workarounds to use uncore mmio access  (Chris Wilson; 1 file, -2/+3)
Start weaning ourselves off the implicit I915_WRITE macro madness and start using the explicit intel_uncore mmio access. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190412202458.10653-1-chris@chris-wilson.co.uk
2019-04-08  drm/i915/selftests: Mark live_forcewake_ops as unreliable  (Chris Wilson; 1 file, -0/+11)
A couple of machines in the farm show quite frequent errors in the powerwells not being released. Either there is an external agent interfering with the powerwells, or the powerwell doesn't quite behave as we anticipate -- either way, the test is not reliable enough to be enabled by default in CI. It has served its immediate purpose in providing coverage as we made tweaks to forcewake, so keep it available for future testing. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110210 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190407192649.14750-1-chris@chris-wilson.co.uk
2019-04-08  drm/i915: Consolidate the timeline->barrier  (Chris Wilson; 1 file, -1/+0)
The timeline is strictly ordered, so by inserting the timeline->barrier request into the timeline->last_request it naturally provides the same barrier. Consolidate the pair of barriers into one as they serve the same purpose. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190408091728.20207-4-chris@chris-wilson.co.uk
2019-04-05  drm/i915/selftests: Fix plain use of integer 0 as NULL  (Chris Wilson; 1 file, -1/+1)
Quelch a sparse warning: drivers/gpu/drm/i915/gt/selftest_lrc.c:119:54: warning: Using plain integer as NULL pointer Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Matthew Auld <matthew.william.auld@gmail.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190405111430.18495-1-chris@chris-wilson.co.uk
2019-04-05  drm/i915/execlists: Enable coarse preemption boundaries for gen8  (Chris Wilson; 1 file, -0/+180)
When we introduced preemption, we chose to keep it disabled for gen8 as supporting preemption inside GPGPU user batches required various w/a in userspace. Since then, the desire to preempt long queues of requests between batches (e.g. within busywaiting semaphores) has grown. So allow arbitration within the busywaits and between requests, but disable arbitration within user batches so that we can preempt between requests and not risk breaking GPGPU. However, since this preemption is much coarser and doesn't interfere with userspace, we decline to include it amongst the scheduler capabilities. (This is also required for us to skip over the preemption selftests that expect to be able to preempt user batches.) Michal suggested that we could perhaps allow preemption inside gen8 userspace batches if we can satisfy ourselves that the default preemption settings are viable with existing userspace (principally OpenCL which already should carry any known workaround). We could then merge the two code paths back into one, even dropping the artificial has-preemption device feature flag. Testcase: igt/gem_exec_scheduler/semaphore-user References: beecec901790 ("drm/i915/execlists: Preemption!") Fixes: e88619646971 ("drm/i915: Use HW semaphores for inter-engine synchronisation on gen8+") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Michal Winiarski <michal.winiarski@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Michal Winiarski <michal.winiarski@intel.com> #irc Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190329134024.5254-1-chris@chris-wilson.co.uk
2019-04-02  drm/i915: Move intel_engine_mask_t around for use by i915_request_types.h  (Chris Wilson; 2 files, -5/+6)
We want to use intel_engine_mask_t inside i915_request.h, which means extracting it from the general header file mess and placing it inside a types.h. A knock-on effect is that the compiler wants to warn about type-contraction of ALL_ENGINES into intel_engine_mask_t, so prepare for the worst. v2: Use intel_engine_mask_t consistently v3: Move I915_NUM_ENGINES to its natural home at the end of the enum Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190401162641.10963-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2019-03-26  drm/i915: switch intel_uncore_forcewake_for_reg to intel_uncore  (Daniele Ceraolo Spurio; 1 file, -1/+1)
The intel_uncore structure is the owner of FW, so subclass the function to it. While at it, use a local uncore var and switch to the new read/write functions where it makes sense. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190325214940.23632-7-daniele.ceraolospurio@intel.com
2019-03-26  drm/i915: switch uncore mmio funcs to use intel_uncore  (Daniele Ceraolo Spurio; 1 file, -2/+2)
The full read/write ops can now work on the intel_uncore struct. Introduce intel_uncore_read/write functions working on intel_uncore and switch the I915_READ/WRITE macro to internally call those. v2: no change v3: add intel_uncore_read/write functions (Chris), update commit msg Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190325214940.23632-6-daniele.ceraolospurio@intel.com
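Roughly, the shape of the change is that the legacy macros become thin wrappers forwarding to the new helpers, while new code can take the intel_uncore pointer directly. A hedged sketch against the i915 internal headers (the exact macro bodies may differ in detail):

  /* Legacy macros keep working, but now route through intel_uncore. */
  #define I915_READ(reg)       intel_uncore_read(&dev_priv->uncore, (reg))
  #define I915_WRITE(reg, val) intel_uncore_write(&dev_priv->uncore, (reg), (val))

  /* New code can pass the uncore explicitly and skip dev_priv altogether. */
  static void set_bits(struct intel_uncore *uncore, i915_reg_t reg, u32 set)
  {
          u32 val = intel_uncore_read(uncore, reg);

          intel_uncore_write(uncore, reg, val | set);
  }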
2019-03-26  drm/i915: add uncore flags for unclaimed mmio  (Daniele Ceraolo Spurio; 1 file, -7/+8)
Save the HW capabilities to avoid having to jump back to dev_priv every time. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190325214940.23632-4-daniele.ceraolospurio@intel.com
2019-03-26  drm/i915/selftests: Fix an IS_ERR() vs NULL check  (Dan Carpenter; 1 file, -1/+1)
The live_context() function returns error pointers. It never returns NULL. Fixes: 9c1477e83e62 ("drm/i915/selftests: Exercise adding requests to a full GGTT") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190326050843.GA20038@kadam
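The corrected pattern, as a sketch (the surrounding function is illustrative; the key point is that live_context() reports failure with ERR_PTR(), never NULL):

  #include <linux/err.h>

  static int use_live_context(struct drm_i915_private *i915, struct drm_file *file)
  {
          struct i915_gem_context *ctx;

          ctx = live_context(i915, file);
          if (IS_ERR(ctx))        /* was "if (!ctx)", which can never trigger */
                  return PTR_ERR(ctx);

          /* ... exercise ctx ... */
          return 0;
  }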
2019-03-22  drm/i915: Allow contexts to share a single timeline across all engines  (Chris Wilson; 1 file, -1/+1)
Previously, our view has always been to run the engines independently within a context. (Multiple engines happened before we had contexts and timelines, so they always operated independently and that behaviour persisted into contexts.) However, at the user level the context often represents a single timeline (e.g. GL contexts) and userspace must ensure that the individual engines are serialised to present that ordering to the client (or forget about this detail entirely and hope no one notices - a fair ploy if the client can only directly control one engine themselves ;) In the next patch, we will want to construct a set of engines that operate as one, that have a single timeline interwoven between them, to present a single virtual engine to the user. (They submit to the virtual engine, then we decide which engine to execute on.) To that end, we want to be able to create contexts which have a single timeline (fence context) shared between all engines, rather than multiple timelines. v2: Move the specialised timeline ordering to its own function. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190322092325.5883-4-chris@chris-wilson.co.uk
2019-03-22  drm/i915: Create/destroy VM (ppGTT) for use with contexts  (Chris Wilson; 4 files, -58/+190)
In preparation to making the ppGTT binding for a context explicit (to facilitate reusing the same ppGTT between different contexts), allow the user to create and destroy named ppGTT. v2: Replace global barrier for swapping over the ppgtt and tlbs with a local context barrier (Tvrtko) v3: serialise with struct_mutex; it's lazy but required dammit v4: Rewrite igt_ctx_shared_exec to be more different (aimed to be more similar, turned out different!) v5: Fix up test unwind for aliasing-ppgtt (snb) v6: Tighten language for uapi struct drm_i915_gem_vm_control. v7: Patch the context image for runtime ppgtt switching! Testcase: igt/gem_vm_create Testcase: igt/gem_ctx_param/vm Testcase: igt/gem_ctx_clone/vm Testcase: igt/gem_ctx_shared Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190322092325.5883-2-chris@chris-wilson.co.uk
2019-03-22  drm/i915/selftests: Mark up preemption tests for hang detection  (Chris Wilson; 1 file, -3/+35)
Use the igt_live_test framework for detecting whether an unwanted hang occurred during test execution, and report failure if it does. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190321194031.20240-2-chris@chris-wilson.co.uk
2019-03-22  drm/i915/selftests: Calculate maximum ring size for preemption chain  (Chris Wilson; 1 file, -3/+37)
32 is too many for the likes of kbl, and inserting that many requests into the ring requires us to declare the first few hung -- understandably a slow and unexpected process. Instead, measure the size of a single request and use that to estimate the upper bound on the chain length we can use for our test, remembering to flush the previous chain between tests for safety. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: "Yokoyama, Caz" <caz.yokoyama@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190321194031.20240-1-chris@chris-wilson.co.uk
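The estimate amounts to: submit one request into an otherwise idle ring, treat the space it consumed as the per-request footprint, and size the chain so the whole thing fits with slack. A hedged sketch, with ring->size and ring->emit assumed as the relevant bookkeeping fields:

  static unsigned int estimate_chain_length(const struct intel_ring *ring)
  {
          /* One request has just been emitted into an empty ring, so
           * ring->emit approximates the per-request footprint in bytes. */
          unsigned int per_request = ring->emit;
          unsigned int usable;

          if (!per_request)
                  return 0;

          /* Keep slack so later requests never stall waiting for ring space. */
          usable = ring->size - 2 * per_request;

          return usable / per_request;
  }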
2019-03-21  drm/i915: Flush pages on acquisition  (Chris Wilson; 9 files, -33/+20)
When we return pages to the system, we ensure that they are marked as being in the CPU domain since any external access is uncontrolled and we must assume the worst. This means that we need to always flush the pages on acquisition if we need to use them on the GPU, and from the beginning have used set-domain. Set-domain is overkill for the purpose as it is a general synchronisation barrier, but our intent is to only flush the pages being swapped in. If we move that flush into the pages acquisition phase, we know then that when we have obj->mm.pages, they are coherent with the GPU and need only maintain that status without resorting to heavy-handed use of set-domain. The principal knock-on effect for userspace is through mmap-gtt pagefaulting. Our uAPI has always implied that the GTT mmap was async (especially as when any pagefault occurs is unpredictable to userspace) and so userspace had to apply explicit domain control itself (set-domain). However, swapping is transparent to the kernel, and so on first fault we need to acquire the pages and make them coherent for access through the GTT. Our use of set-domain here leaks into the uABI that the first pagefault was synchronous. This is unintentional and, barring a few igt, should go unnoticed; nevertheless we bump the uABI version for mmap-gtt to reflect the change in behaviour. Another implication of the change is that gem_create() is presumed to create an object that is coherent with the CPU and is in the CPU write domain, so a set-domain(CPU) following a gem_create() would be a minor operation that merely checked whether we could allocate all pages for the object. On applying this change, a set-domain(CPU) causes a clflush as we acquire the pages. This will have a small impact on mesa as we move the clflush here on !llc from execbuf time to create, but that should have minimal performance impact as the same clflush exists but is now done early and because of the clflush issue, userspace recycles bo and so should resist allocating fresh objects. Internally, the presumption that objects are created in the CPU write-domain and remain so through writes to obj->mm.mapping is more prevalent than I expected; but easy enough to catch and apply a manual flush. For the future, we should push the page flush from the central set_pages() into the callers so that we can more finely control when it is applied, but for now doing it in one location is easier to validate, at the cost of sometimes flushing when there is no need. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.william.auld@gmail.com> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Antonio Argenziano <antonio.argenziano@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190321161908.8007-1-chris@chris-wilson.co.uk
2019-03-21  drm/i915: Stop storing the context name as the timeline name  (Chris Wilson; 2 files, -10/+5)
The timeline->name is only used for convenience in pretty printing the i915_request.fence->ops->get_timeline_name() and it is just as convenient to pull it from the gem_context directly. The few instances of its use inside GEM_TRACE() has proven more of a nuisance than helpful, so not worth saving imo. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190321140711.11190-4-chris@chris-wilson.co.uk
2019-03-21  drm/i915: Stop storing ctx->user_handle  (Chris Wilson; 1 file, -1/+1)
The user_handle need only be known by userspace for it to lookup the context via the idr; internally we have no use for it. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190321140711.11190-3-chris@chris-wilson.co.uk
2019-03-21  drm/i915: Separate GEM context construction and registration to userspace  (Chris Wilson; 4 files, -9/+24)
In later patches, it became apparent that userspace can see a partially constructed GEM context and begin using it before it was ready, to much hilarity. Close this window of opportunity by lifting the registration of the context with userspace (the insertion of the context into the filp's idr) to the very end of the CONTEXT_CREATE ioctl. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190321140711.11190-1-chris@chris-wilson.co.uk
2019-03-21  drm/i915/selftests: fix NULL vs IS_ERR() check in mock_context_barrier()  (Dan Carpenter; 1 file, -2/+2)
The mock_context() function returns NULL on error, it doesn't return error pointers. Fixes: 85fddf0b0027 ("drm/i915: Introduce a context barrier callback") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190321092451.GK2202@kadam
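This is the mirror image of the live_context() fix above: mock_context() signals failure with NULL rather than an error pointer, so the caller needs a plain NULL test. A sketch with an illustrative surrounding function:

  #include <linux/errno.h>

  static int mock_barrier_setup(struct drm_i915_private *i915)
  {
          struct i915_gem_context *ctx;

          ctx = mock_context(i915, "mock");
          if (!ctx)               /* mock_context() returns NULL, not ERR_PTR() */
                  return -ENOMEM;

          /* ... run the context barrier checks against ctx ... */
          return 0;
  }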
2019-03-20  drm/i915: move regs pointer inside the uncore structure  (Daniele Ceraolo Spurio; 1 file, -1/+1)
This will allow further simplifications in the uncore handling. v2: move register access setup under uncore (Chris) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190319183543.13679-8-daniele.ceraolospurio@intel.com
2019-03-20  drm/i915: make more uncore function work on intel_uncore  (Daniele Ceraolo Spurio; 3 files, -5/+5)
Move the init, fini, prune, suspend, resume functions to work on intel_uncore instead of dev_priv. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190319183543.13679-5-daniele.ceraolospurio@intel.com
2019-03-20  drm/i915: use intel_uncore for all forcewake get/put  (Daniele Ceraolo Spurio; 1 file, -4/+4)
Now that the internal code all works on intel_uncore, flip the external-facing interface. v2: fix GVT. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190319183543.13679-4-daniele.ceraolospurio@intel.com
2019-03-20  drm/i915: use intel_uncore in fw get/put internal paths  (Daniele Ceraolo Spurio; 1 file, -4/+5)
Get/put functions used outside of uncore.c are updated in the next patch for a nicer split. v2: use dev_priv where we still have it (Paulo) Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190319183543.13679-3-daniele.ceraolospurio@intel.com
2019-03-20  drm/i915: Switch to bitmap_zalloc()  (Andy Shevchenko; 1 file, -3/+2)
Switch to bitmap_zalloc() to show clearly what we are allocating. Besides that, it returns a pointer of bitmap type instead of an opaque void *. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190304092908.57382-2-andriy.shevchenko@linux.intel.com
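A small before/after sketch of the conversion (variable names are illustrative):

  #include <linux/bitmap.h>
  #include <linux/slab.h>

  static unsigned long *alloc_mask(unsigned int nbits)
  {
          /*
           * Before: hand-sized allocation returning an opaque void *.
           *   return kzalloc(BITS_TO_LONGS(nbits) * sizeof(unsigned long), GFP_KERNEL);
           * After: the intent (a zeroed bitmap of nbits bits) is explicit and the
           * return type matches the bitmap API.
           */
          return bitmap_zalloc(nbits, GFP_KERNEL);
  }

  static void free_mask(unsigned long *mask)
  {
          bitmap_free(mask);      /* pairs with bitmap_zalloc() */
  }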
2019-03-20  drm/i915/selftests: add test to verify get/put fw domains  (Daniele Ceraolo Spurio; 1 file, -6/+128)
Exercise acquiring and releasing forcewake around register reads. In order to read a register behind a GT powerwell, we need to instruct that powerwell to wake up using a forcewake. When we no longer require the GT powerwell, we tell the GT to release our forcewake. Inside the forcewake, the register read should work but outside it should just return garbage, 0 being the most common garbage. Thus we can detect when we are inside and outside of the forcewake with just a simple register read, and so can verify that the GT powerwell is released when we say so. v2: Picking the right forcewaked register to return 0 outside of forcewake is an art. Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: https://patchwork.freedesktop.org/patch/msgid/20190320080052.27273-1-chris@chris-wilson.co.uk
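The core of the check can be sketched as raw mmio reads (which take no implicit forcewake) bracketed by an explicit forcewake get/put: inside the window the register should read back a real value, outside it we expect garbage, most commonly 0. The specific register and the raw-read expression below are assumptions for illustration.

  #include <linux/errno.h>
  #include <linux/io.h>

  static int check_fw_domain(struct intel_uncore *uncore, i915_reg_t reg)
  {
          u32 awake, asleep;

          intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
          awake = readl(uncore->regs + i915_mmio_reg_offset(reg));
          intel_uncore_forcewake_put(uncore, FORCEWAKE_ALL);

          asleep = readl(uncore->regs + i915_mmio_reg_offset(reg));

          /* A live value inside the wake window, garbage (0) outside it. */
          if (!awake || asleep)
                  return -EINVAL;

          return 0;
  }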
2019-03-20  drm/i915/selftests: Provide stub reset functions  (Chris Wilson; 1 file, -0/+36)
If a test fails, we quite often mark the device as wedged. Provide the stub functions so that we can wedge the mock device, and avoid exploding on test failures. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109981 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190319214233.25498-3-chris@chris-wilson.co.uk
2019-03-19  drm/i915: Hold a reference to the active HW context  (Chris Wilson; 1 file, -1/+6)
For virtual engines, we need to keep the HW context alive while it remains in use. For regular HW contexts, they are created and kept alive until the end of the GEM context. For simplicity, generalise the requirements and keep an active reference to each HW context. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190318212347.30146-2-chris@chris-wilson.co.uk
2019-03-19  drm/i915: Lock the gem_context->active_list while dropping the link  (Chris Wilson; 1 file, -2/+0)
On unpinning the intel_context, we remove it from the active list inside the GEM context. This list is supposed to be guarded by the GEM context mutex, so remember to take it! Fixes: 7e3d9a59410d ("drm/i915: Track active engines within a context") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190318212347.30146-1-chris@chris-wilson.co.uk
2019-03-18  drm/i915: Hold a ref to the ring while retiring  (Chris Wilson; 1 file, -0/+1)
As the final request on a ring may hold the reference to this ring (via retiring the last pinned context), we may find ourselves chasing a dangling pointer on completion of the list. A quick solution is to hold a reference to the ring itself as we retire along it so that we only free it after we stop dereferencing it. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190318095204.9913-4-chris@chris-wilson.co.uk
2019-03-15  drm/i915/gtt: Rename i915_vm_is_48b to i915_vm_is_4lvl  (Chris Wilson; 1 file, -2/+2)
Large ppGTT are differentiated by the requirement to go to four levels to address more than 32b. Given the introduction of more 4 level ppGTT with different sizes of addressable bits, rename i915_vm_is_48b() to better reflect the commonality of using 4 levels. Based on a patch by Bob Paauwe. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Bob Paauwe <bob.j.paauwe@intel.com> Cc: Matthew Auld <matthew.william.auld@gmail.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190314223839.28258-4-chris@chris-wilson.co.uk
2019-03-15  drm/i915: Drop address size from ppgtt_type  (Chris Wilson; 1 file, -1/+1)
With the introduction of the separate addressable bits into the device info, we can remove the conflation of the ppgtt size from the ppgtt type. Based on a patch by Bob Paauwe. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Bob Paauwe <bob.j.paauwe@intel.com> Cc: Matthew Auld <matthew.william.auld@gmail.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190314223839.28258-3-chris@chris-wilson.co.uk
2019-03-15  drm/i915: Record platform specific ppGTT size in intel_device_info  (Chris Wilson; 1 file, -1/+2)
As the maximum addressable bits is determined by platform, record that information in our static chipset tables. This has the advantage of being clearly recorded in our capability dumps for dmesg, debugfs and error states. Based on a patch by Bob Paauwe. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Bob Paauwe <bob.j.paauwe@intel.com> Cc: Matthew Auld <matthew.william.auld@gmail.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190314223839.28258-2-chris@chris-wilson.co.uk
2019-03-14  drm/i915/selftests: Disable preemption while setting up fence-timers  (Chris Wilson; 1 file, -1/+8)
The impossible happened: a future fence expired while we were still initialising. The probable cause is that the test was preempted and we lost our scheduler cpu slice. Disable preemption during this test to rule out preemption as a source of timer disruption. References: https://bugs.freedesktop.org/show_bug.cgi?id=110039 Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190313205944.5768-1-chris@chris-wilson.co.uk
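The shape of the fix: bracket the timing-sensitive setup with preempt_disable()/preempt_enable() so the test cannot lose its cpu slice between arming the timer and taking its reference timestamp. In the sketch below, arm_future_fence() is a hypothetical stand-in for the selftest's own setup; only the preemption bracket reflects the patch.

  #include <linux/errno.h>
  #include <linux/ktime.h>
  #include <linux/preempt.h>

  struct test_fence;                                            /* hypothetical */
  void arm_future_fence(struct test_fence *f, u64 expires_ns);  /* hypothetical */

  static ktime_t setup_fence_timer(struct test_fence *fence, u64 expires_ns)
  {
          ktime_t t0;

          preempt_disable();
          arm_future_fence(fence, expires_ns);
          t0 = ktime_get();       /* sampled before we can be scheduled out */
          preempt_enable();

          return t0;
  }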
2019-03-12  drm/i915/selftests: Improve error detection of reset failure  (Chris Wilson; 1 file, -1/+17)
Use a timedwait to promptly detect if the recovery after reset fails and provide a meaningful debug dump. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190312111146.10662-2-chris@chris-wilson.co.uk
2019-03-09  drm/i915: Introduce a context barrier callback  (Chris Wilson; 1 file, -0/+106)
In the next patch, we will want to update live state within a context. As this state may be in use by the GPU and we haven't been explicitly tracking its activity, we instead attach it to a request that we send down the context, set up with its new state, and on retiring that request we clean up the old state as we then know that it is no longer live. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190309160250.29324-1-chris@chris-wilson.co.uk
2019-03-08  drm/i915: Introduce intel_context.pin_mutex for pin management  (Chris Wilson; 1 file, -1/+1)
Introduce a mutex to start locking the HW contexts independently of struct_mutex, with a view to reducing the coarse struct_mutex. The intel_context.pin_mutex is used to guard the transition to and from being pinned on the gpu, and so is required before starting to build any request. The intel_context will then remain pinned until the request completes, but the mutex can be released immediately upon completion of pinning the context. A slight variant of the above is used by per-context sseu that wants to inspect the pinned status of the context, and requires that it remains stable (either !pinned or pinned) across its operation. By using the pin_mutex to serialise operations while pin_count==0, we can take that pin_mutex to stabilise the boolean pin status. v2: for Tvrtko! * Improved commit message. * Dropped _gpu suffix from gen8_modify_rpcs_gpu. v3: Repair the locking for sseu selftests Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190308132522.21573-7-chris@chris-wilson.co.uk
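A minimal sketch of the locking rule described here, assuming a plain pin_count field alongside intel_context.pin_mutex and a hypothetical do_hw_pin() helper (the real code is more involved): the mutex only guards the unpinned-to-pinned transition, and once the count is raised it can be dropped.

  #include <linux/errno.h>
  #include <linux/mutex.h>

  static int sketch_context_pin(struct intel_context *ce)
  {
          int err = 0;

          if (mutex_lock_interruptible(&ce->pin_mutex))
                  return -EINTR;

          if (ce->pin_count++ == 0) {
                  err = do_hw_pin(ce);    /* hypothetical: the actual HW pinning */
                  if (err)
                          ce->pin_count--;
          }

          /* Pinned status is now stable; no need to keep holding the mutex. */
          mutex_unlock(&ce->pin_mutex);
          return err;
  }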
2019-03-08  drm/i915: Track the pinned kernel contexts on each engine  (Chris Wilson; 1 file, -2/+3)
Each engine acquires a pin on the kernel contexts (normal and preempt) so that the logical state is always available on demand. Keep track of each engine's pin by storing the returned pointer on the engine for quick access. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190308132522.21573-6-chris@chris-wilson.co.uk
2019-03-08  drm/i915: Make context pinning part of intel_context_ops  (Chris Wilson; 1 file, -27/+5)
Push the intel_context pin callback down from intel_engine_cs onto the context itself by virtue of having a central caller for intel_context_pin() being able to lookup the intel_context itself. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190308132522.21573-5-chris@chris-wilson.co.uk
2019-03-08  drm/i915: Move over to intel_context_lookup()  (Chris Wilson; 2 files, -5/+8)
In preparation for an ever growing number of engines and so ever increasing static array of HW contexts within the GEM context, move the array over to an rbtree, allocated upon first use. Unfortunately, this imposes an rbtree lookup at a few frequent callsites, but we should be able to mitigate those by moving over to using the HW context as our primary type and so only incur the lookup on the boundary with the user GEM context and engines. v2: Check for no HW context in guc_stage_desc_init Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190308132522.21573-4-chris@chris-wilson.co.uk
2019-03-08  drm/i915: Store the intel_context_ops in the intel_engine_cs  (Chris Wilson; 1 file, -7/+6)
If we place a pointer to the engine specific intel_context_ops in the engine itself, we can assign the ops pointer on initialising the context, and then rely on it being set. This simplifies the code in later patches. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190308132522.21573-3-chris@chris-wilson.co.uk
2019-03-08  drm/i915: Track active engines within a context  (Chris Wilson; 2 files, -0/+8)
For use in the next patch, if we track which engines have been used by the HW, we can reduce the work required to flush our state off the HW to those engines. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190308132522.21573-1-chris@chris-wilson.co.uk
2019-03-08  drm/i915: Reduce presumption of request ordering for barriers  (Chris Wilson; 3 files, -2/+7)
Currently we assume that we know the order in which requests run and so can determine if we need to reissue a switch-to-kernel-context prior to idling. That assumption does not hold for the future, so instead of tracking which barriers have been used, simply determine if we have ever switched away from the kernel context by using an engine, and before idling ensure that all engines that have been used since the last idle are synchronously switched back to the kernel context for safety (and to allow shrinking memory while idle). v2: Use intel_engine_mask_t and ALL_ENGINES Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190308093657.8640-3-chris@chris-wilson.co.uk
2019-03-08  drm/i915: Do a synchronous switch-to-kernel-context on idling  (Chris Wilson; 1 file, -7/+2)
When the system idles, we switch to the kernel context as a defensive measure (no users are harmed if the kernel context is lost). Currently, we issue a switch to kernel context and then come back later to see if the kernel context is still current and the system is idle. However, if we are no longer privy to the runqueue ordering, then we have to relax our assumptions about the logical state of the GPU and the only way to ensure that the kernel context is currently loaded is by issuing a request to run after all others, and wait for it to complete all while preventing anyone else from issuing their own requests. v2: Pull wedging into switch_to_kernel_context_sync() but only after waiting (though only for the same short delay) for the active context to finish. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190308093657.8640-1-chris@chris-wilson.co.uk
2019-03-08  drm/i915/selftests: Check preemption support on each engine  (Chris Wilson; 1 file, -0/+18)
Check that we have set up preemption on the engine before testing, and instead warn if it is not enabled on supported HW. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190306142517.22558-28-chris@chris-wilson.co.uk
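Roughly what the per-engine guard looks like, as a hedged sketch built from the existing capability helpers (the warning text, the macro choice and the loop body are assumptions):

  static void run_preempt_subtests(struct drm_i915_private *i915)
  {
          struct intel_engine_cs *engine;
          enum intel_engine_id id;

          for_each_engine(engine, i915, id) {
                  if (!intel_engine_has_preemption(engine)) {
                          /* Supported HW but not enabled: warn, don't fail. */
                          if (HAS_LOGICAL_RING_PREEMPTION(i915))
                                  pr_warn("%s: preemption supported but not enabled\n",
                                          engine->name);
                          continue;
                  }

                  /* ... run the preemption subtests on this engine ... */
          }
  }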
2019-03-07  drm/i915/selftests: Improve switch-to-kernel-context checking  (Chris Wilson; 1 file, -45/+35)
We can reduce the switch-to-kernel-context selftest to operate as a loop and so trivially test another state transition (that of idle->busy). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190307211947.6954-1-chris@chris-wilson.co.uk
2019-03-06  drm/i915/selftests: Upgrade printing test/subtest name to pr_info  (Michał Winiarski; 1 file, -2/+2)
We're using pr_debug for things that we don't really want to see in the CI log, but we may find useful during test development. Let's upgrade the test name printer - we do want to see those in CI log. Signed-off-by: Michał Winiarski <michal.winiarski@intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190305144717.10000-1-michal.winiarski@intel.com
2019-03-06  drm/i915/selftests: Fix MI_STORE_DWORD_IMM alignment  (Chris Wilson; 1 file, -1/+1)
MI_STORE_DWORD_IMM wants to write into a dword-aligned (4B) address, we mistakenly cleared bit2 and not bits 0 and 1. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Matthew Auld <matthew.auld@intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190306082447.21563-1-chris@chris-wilson.co.uk
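The arithmetic of the fix: dword (4B) alignment means the low two bits of the address must be clear, so the mask is ~3 (clearing bits 0 and 1), not ~4 (which clears only bit 2). A self-contained illustration:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t addr = 0x1007; /* deliberately misaligned example address */

          /* Buggy: ~4 clears only bit 2, leaving bits 0-1 set -> 0x1003. */
          printf("buggy:   %#llx\n", (unsigned long long)(addr & ~(uint64_t)4));

          /* Fixed: ~3 clears bits 0-1, giving the 4B alignment that
           * MI_STORE_DWORD_IMM requires -> 0x1004. */
          printf("correct: %#llx\n", (unsigned long long)(addr & ~(uint64_t)3));

          return 0;
  }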
2019-03-05  drm/i915: Store the BIT(engine->id) as the engine's mask  (Chris Wilson; 11 files, -33/+34)
In the next patch, we are introducing a broad virtual engine to encompass multiple physical engines, losing the 1:1 nature of BIT(engine->id). To reflect the broader set of engines implied by the virtual instance, let's store the full bitmask. v2: Use intel_engine_mask_t (s/ring_mask/engine_mask/) v3: Tvrtko voted for moah churn so teach everyone to not mention ring and use $class$instance throughout. v4: Comment upon the disparity in bspec for using VCS1,VCS2 in gen8 and VCS[0-4] in later gen. We opt to keep the code consistent and use 0-index naming throughout. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20190305180332.30900-1-chris@chris-wilson.co.uk
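The gist of the change, sketched with the engine->mask field the commit introduces (the loop body and flush_engine() are hypothetical): the bit is stored once at engine init, callers test against the stored mask instead of recomputing BIT(engine->id), and a future virtual engine can simply OR several physical masks together.

  static void flush_engine(struct intel_engine_cs *engine);      /* hypothetical */

  static void init_engine_mask(struct intel_engine_cs *engine)
  {
          engine->mask = BIT(engine->id); /* stored once, read everywhere */
  }

  static void flush_active(struct drm_i915_private *i915, intel_engine_mask_t active)
  {
          struct intel_engine_cs *engine;
          enum intel_engine_id id;

          for_each_engine(engine, i915, id) {
                  if (active & engine->mask)      /* no BIT(engine->id) recompute */
                          flush_engine(engine);
          }
  }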