- 24 March 2019, 1 commit
-
-
Submitted by Tvrtko Ursulin

[ Upstream commit ca22f32a6296cbfa29de56328c8505560a18cfa8 ]

Legacy behaviour was to allow non-page-aligned mmap requests, as does the linux mmap(2) implementation by virtue of automatically rounding up for the caller.

To avoid breaking legacy userspace, relax the newly introduced fix.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: 5c4604e757ba ("drm/i915: Prevent a race during I915_GEM_MMAP ioctl with WC set")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Cc: Adam Zabrocki <adamza@microsoft.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: <stable@vger.kernel.org> # v4.0+
Cc: Akash Goel <akash.goel@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190305110409.28633-1-tvrtko.ursulin@linux.intel.com
(cherry picked from commit a90e1948efb648f567444f87f3c19b2a0787affd)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
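A sketch of the relaxed check, assuming the __vma_matches() helper introduced by the fix being relaxed; comparing against PAGE_ALIGN(size) mirrors mmap(2)'s own rounding, so non-page-aligned legacy requests match again. The exact shape in the tree may differ:

    static bool
    __vma_matches(struct vm_area_struct *vma, struct file *filp,
                  unsigned long addr, unsigned long size)
    {
            if (vma->vm_file != filp)
                    return false;

            /* mmap(2) rounds the length up for the caller, so accept
             * any size that lands on the same page-aligned extent */
            return vma->vm_start == addr &&
                   (vma->vm_end - vma->vm_start) == PAGE_ALIGN(size);
    }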
-
- 20 February 2019, 1 commit
-
-
Submitted by Joonas Lahtinen

commit 2e7bd10e05afb866b5fb13eda25095c35d7a27cc upstream.

Make sure the underlying VMA in the process address space is the same as it was during vm_mmap to avoid applying WC to the wrong VMA.

A more long-term solution would be to have a vm_mmap_locked variant in linux/mmap.h for when the caller wants to hold mmap_sem for an extended duration.

v2:
- Refactor the compare function

Fixes: 1816f923 ("drm/i915: Support creation of unbound wc user mappings for objects")
Reported-by: Adam Zabrocki <adamza@microsoft.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: <stable@vger.kernel.org> # v4.0+
Cc: Akash Goel <akash.goel@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Adam Zabrocki <adamza@microsoft.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20190207085454.10598-1-joonas.lahtinen@linux.intel.com
(cherry picked from commit 5c4604e757ba9b193b09768d75a7d2105a5b883f)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
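A minimal sketch of the race avoidance described above, assuming the compare helper mentioned in v2 (names such as __vma_matches() are taken from the commit description, not verified against this tree): re-find the VMA under mmap_sem and only apply the write-combining protection if it is still the mapping vm_mmap created.

    if (down_write_killable(&mm->mmap_sem))
            return -EINTR;

    vma = find_vma(mm, addr);
    if (vma && __vma_matches(vma, obj->base.filp, addr, args->size))
            vma->vm_page_prot =
                    pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
    else
            addr = -ENOMEM;     /* the VMA changed under us; bail out */

    up_write(&mm->mmap_sem);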
-
- 21 November 2018, 1 commit
-
-
Submitted by Chris Wilson

commit ab0d6a14 upstream.

Handle integer overflow when computing the sub-page length for shmem-backed pread/pwrite.

Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181012140228.29783-1-chris@chris-wilson.co.uk
(cherry picked from commit a5e856a5)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
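An illustrative way (with hypothetical variable names, not the exact diff) to clamp a copy to the remainder of a page without the overflow: compare against the page remainder rather than computing offset + remain, which can wrap for large user-supplied lengths.

    u64 remain = args->size;
    unsigned int offset = offset_in_page(args->offset);

    /* offset < PAGE_SIZE, so the subtraction cannot underflow, and
     * no intermediate sum that could overflow is ever formed */
    unsigned int length = min_t(u64, remain, PAGE_SIZE - offset);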
-
- 18 July 2018, 2 commits
-
-
Submitted by Chris Wilson

If the driver is wedged, we skip idling the GPU. However, we may still have a few requests not yet retired following the wedging (since they will be waiting for a background worker trying to acquire struct_mutex). As we hold struct_mutex, always do a quick request retirement in order to flush the wedged path.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107257
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180717084121.28185-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Our i915g (early gen3, the oldest machine we have in the farm) is still reporting occasional incoherency when performing the following operations:

 1) write through GGTT (indirect write into memory)
 2) write through either CPU or WC (direct write into memory)
 3) read from GGTT (indirect read)

Instead of reporting the value from (2), the read from GGTT reports the earlier value written via the GGTT. We have made sure that the writes are flushed from the CPU (commit 3a32497f ("drm/i915/selftests: Provide full mb() around clflush") and commit add00e6d ("drm/i915: Flush the WCB following a WC write")), but still see the error, just less frequently. The only remaining cache that might be affected here is a chipset cache, so flush that as well.

Testcase: igt/drv_selftest/live_coherency #gdg
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180717092655.28417-1-chris@chris-wilson.co.uk
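A sketch of the extra flush, assuming the i915_gem_chipset_flush() helper of this era (on gen < 6 it ends up in intel_gtt_chipset_flush()); the placement is illustrative:

    /* after the direct CPU/WC write, also flush the chipset cache
     * before reading the data back through the GGTT */
    i915_gem_chipset_flush(dev_priv);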
-
- 13 July 2018, 2 commits
-
-
Submitted by Chris Wilson
If the user created a read-only object, they should not be allowed to circumvent the write protection using the pwrite ioctl.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180712185315.3288-5-chris@chris-wilson.co.uk
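A minimal sketch of the check, assuming a readonly predicate on the object (the helper name follows this series' naming and may differ in this tree):

    /* refuse pwrite into an object the user created as read-only */
    if (i915_gem_object_is_readonly(obj)) {
            ret = -EINVAL;
            goto err;
    }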
-
Submitted by Chris Wilson
If the user has created a read-only object, they should not be allowed to circumvent the write protection by using a GGTT mmapping. Deny it.

Also most machines do not support read-only GGTT PTEs, so again we have to reject attempted writes. Fortunately, this is known a priori, so we can at least reject in the call to create the mmap (with a sanity check in the fault handler).

v2: Check the vma->vm_flags during mmap() to allow readonly access.
v3: Remove VM_MAYWRITE to curtail mprotect()

Testcase: igt/gem_userptr_blits/readonly_mmap*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com> #v1
Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180712185315.3288-4-chris@chris-wilson.co.uk
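A sketch combining the v2 and v3 points above (the helper name matches the sibling pwrite patch and is assumed rather than verified): allow read-only mappings, reject writable ones, and clear VM_MAYWRITE so mprotect() cannot upgrade the mapping later.

    if (i915_gem_object_is_readonly(obj)) {
            if (vma->vm_flags & VM_WRITE)
                    return -EINVAL;     /* no writable mapping at all */

            /* curtail mprotect(PROT_WRITE) on the read-only mapping */
            vma->vm_flags &= ~VM_MAYWRITE;
    }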
-
- 12 July 2018, 1 commit
-
-
Submitted by Michał Winiarski

Let's reorder things so that we can do onion teardown rather than a double goto.

References: b96f6ebf ("drm/i915: Correctly handle error path in i915_gem_init_hw")
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180712124810.25241-1-michal.winiarski@intel.com
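The onion pattern in general form (all names hypothetical): each failure point unwinds exactly the steps that already succeeded, in reverse order, instead of funnelling multiple failures through shared labels with repeated gotos.

    err = init_first(i915);
    if (err)
            return err;

    err = init_second(i915);
    if (err)
            goto err_first;

    err = init_third(i915);
    if (err)
            goto err_second;

    return 0;

    err_second:
            fini_second(i915);
    err_first:
            fini_first(i915);
            return err;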
-
- 10 July 2018, 2 commits
-
-
Submitted by Chris Wilson
On unwinding following a critical failure inside GEM init, we also need to be sure to flush the workers before unloading the module.

Testcase: igt/drv_module_reload/basic-reload-inject
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180710094421.16223-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
In the next patch, we will make a fairly minor change to flush outstanding resets before suspend. In order to keep churn to a minimum in that functional patch, we fix up the comments and coding style now.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709130208.11730-7-chris@chris-wilson.co.uk
-
- 09 July 2018, 2 commits
-
-
Submitted by Chris Wilson

With a broken GPU we expect it to fail during the initial GPU setup, where we do a couple of context switches to record the defaults. This is a task that takes a few milliseconds even on the slowest of devices, but we may have to wait 60s for hangcheck to give in and declare the machine inoperable. This is a case where any gpu hang is unacceptable, both from a timeliness and a practical standpoint.

We can therefore set a timeout on our wait-for-idle that is shorter than the hangcheck (which may take up to 60s to declare the driver wedged) and so detect the broken GPU much more quickly during driver load (and so prevent stalling userspace for ages).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709122044.7028-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

Usually we have no idea about the upper bound we need to wait to catch up with userspace when idling the device, but in a few situations we know the system was idle beforehand and can provide a short timeout in order to very quickly catch a failure, long before hangcheck kicks in.

In the following patches, we will use the timeout to curtail two overly long waits, where we know we can expect the GPU to complete within a reasonable time or declare it broken.

In particular, with a broken GPU we expect it to fail during the initial GPU setup, where we do a couple of context switches to record the defaults. This is a task that takes a few milliseconds even on the slowest of devices, but we may have to wait 60s for hangcheck to give in and declare the machine inoperable. This is a case where any gpu hang is unacceptable, both from a timeliness and a practical standpoint.

The other improvement is that in selftests, we do not need to arm an independent timer to inject a wedge, as we can just limit the timeout on the wait directly.

v2: Include the timeout parameter in the trace.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709122044.7028-1-chris@chris-wilson.co.uk
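A hedged sketch of how a caller might use the new parameter; the flag name, exact signature and unwind label are assumptions based on the description, not the verified diff:

    /* bound the idle wait so a dead GPU fails the load in ~seconds
     * instead of waiting out the full 60s hangcheck */
    err = i915_gem_wait_for_idle(i915, I915_WAIT_LOCKED, HZ);
    if (err)
            goto err_init_hw;   /* hypothetical unwind label */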
-
- 07 July 2018, 2 commits
-
-
Submitted by Chris Wilson
In the next patch, we will want a third distinct class of timeline that may overlap with the current pair of client and engine timeline classes. Rather than use the ad hoc markup of SINGLE_DEPTH_NESTING, initialise the different timeline classes with an explicit subclass.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706210710.16251-1-chris@chris-wilson.co.uk
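A sketch of the idea using lockdep's subclass API (the enum names are illustrative): tag each timeline's lock with its class once at init time, rather than annotating every nested acquisition with SINGLE_DEPTH_NESTING.

    enum {
            TIMELINE_CLIENT = 0,    /* default subclass */
            TIMELINE_ENGINE,
            TIMELINE_VIRTUAL,       /* the forthcoming third class */
    };

    lockdep_set_subclass(&timeline->lock, TIMELINE_ENGINE);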
-
Submitted by Chris Wilson

In the next patch, we will want to start skipping requests on failing to complete their payloads. So export the utility function currently used to make requests inoperable following a failed gpu reset.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706103947.15919-2-chris@chris-wilson.co.uk
-
- 06 July 2018, 1 commit
-
-
Submitted by Chris Wilson
If we have just completed a WC write, we must ensure that the WCB (Write Combining Buffer) is flushed out to main memory before we can expect to see the results. This is especially important when mixing WC with GTT as the physical paths are different and cachelines are not naturally flushed.

Testcase: igt/drv_selftests/live_coherency #gdg
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706115402.18547-1-chris@chris-wilson.co.uk
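Illustrative only (the mapping name is hypothetical): a write barrier after the WC stores forces the CPU's write-combining buffer out to memory before any subsequent GTT read can observe stale data.

    memcpy(wc_vaddr, src, len);     /* stores into a WC mapping */
    wmb();                          /* drain the write-combining buffer */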
-
- 05 July 2018, 1 commit
-
-
Submitted by Gustavo A. R. Silva

In preparation for enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through.

Addresses-Coverity-ID: 141432
Addresses-Coverity-ID: 141433
Addresses-Coverity-ID: 141434
Addresses-Coverity-ID: 141435
Addresses-Coverity-ID: 141436
Addresses-Coverity-ID: 1357360
Addresses-Coverity-ID: 1357403
Addresses-Coverity-ID: 1357433
Addresses-Coverity-ID: 1392622
Addresses-Coverity-ID: 1415273
Addresses-Coverity-ID: 1435752
Addresses-Coverity-ID: 1441500
Addresses-Coverity-ID: 1454596
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628223541.GA17665@embeddedor.com
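A minimal example of the markup (kernels of this era used a comment that the compiler recognises; later kernels switched to the fallthrough; pseudo-keyword). The state names are hypothetical:

    switch (state) {
    case STATE_PREPARE:
            prepare();
            /* fall through */      /* silences -Wimplicit-fallthrough */
    case STATE_RUN:
            run();
            break;
    default:
            break;
    }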
-
- 03 July 2018, 2 commits
-
-
Submitted by Chris Wilson
If the whole object is already pinned by HW for use as scanout, we will fail to move it to the mappable region and so must resort to using a partial VMA covering the whole object.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104513
Fixes: aa136d9d ("drm/i915: Convert partial ggtt vma to full ggtt if it spans the entire object")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180630090509.469-1-chris@chris-wilson.co.uk
(cherry picked from commit 7e7367d3)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Submitted by Chris Wilson
If the whole object is already pinned by HW for use as scanout, we will fail to move it to the mappable region and so must resort to using a partial VMA covering the whole object.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=104513
Fixes: aa136d9d ("drm/i915: Convert partial ggtt vma to full ggtt if it spans the entire object")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180630090509.469-1-chris@chris-wilson.co.uk
-
- 29 June 2018, 1 commit
-
-
Submitted by Michal Wajdeczko

We're fetching the GuC/HuC firmwares directly from the uc level during the init_early stage, but this breaks guc/huc struct isolation and also the strict SW-only initialization rule for init_early.

Move fw fetching to the init phase and do it separately per guc/huc struct.

v2: don't forget to move wopcm_init - Michele
v3: fetch in init_misc phase - Michal

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com> #2
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180628141522.62788-2-michal.wajdeczko@intel.com
-
- 28 June 2018, 1 commit
-
-
Submitted by Chris Wilson

In the next^W forthcoming patch, we will start to defer retiring the request from the engine list if it is still active on the submission backend. To preserve the semantics that after wait-for-idle completes the system is idle and fully retired, we therefore need to wait for the backends to idle before calling i915_retire_requests().

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180627115334.16282-1-chris@chris-wilson.co.uk
-
- 19 June 2018, 1 commit
-
-
Submitted by Mika Kuoppala

If a client is smart or lucky enough to create a new context after each hang, our context banning mechanism will never catch up, and as a result it will be saved from client banning. This can result in a never-ending streak of gpu hangs caused by a bad or malicious client, preventing access by other legit gpu clients.

Fix this by always incrementing the per-client ban score if it hangs in short succession, regardless of context ban scoring. The exception is non-bannable contexts; they remain detached from the client ban scoring mechanism.

v2: xchg timestamp, tidyup (Chris)
v3: comment, bannable & banned together (Chris)

Fixes: b083a087 ("drm/i915: Add per client max context ban limit")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180615104429.31477-1-mika.kuoppala@linux.intel.com
(cherry picked from commit 14921f3c)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
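A sketch of the accounting described above; the constant and field names follow the v2/v3 notes (xchg of a timestamp) but are assumptions, not the verified diff:

    unsigned long prev_hang = xchg(&file_priv->hang_timestamp, jiffies);
    unsigned int score = 0;

    if (i915_gem_context_is_banned(ctx))
            score = I915_CLIENT_SCORE_CONTEXT_BAN;

    /* hangs in short succession raise the client score regardless
     * of per-context ban scoring */
    if (time_before(jiffies, prev_hang + I915_CLIENT_FAST_HANG_JIFFIES))
            score += I915_CLIENT_SCORE_HANG_FAST;

    if (score)
            atomic_add(score, &file_priv->ban_score);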
-
- 18 June 2018, 2 commits
-
-
Submitted by Chris Wilson
Since we trigger 10,000s of hangs and resets during selftesting, we emit many, many thousands of lines of useless debug messages. Reduce the frequency by only logging a change in state of a guilty context.

Fixes: 14921f3c ("drm/i915: Fix context ban and hang accounting for client")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180618073135.10849-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
commit b2209e62 ("drm/i915/execlists: Reset the CSB head tracking on reset/sanitization") and commit 1288786b ("drm/i915: Move GEM sanitize from resume_early to resume") show the conflicting requirements on the code. We must reset the GPU before trashing live state on a fast resume (hibernation debug, or error paths), but we must only reset our state tracking iff the GPU is reset (or power cycled). This is tricky if we are disabling GPU reset to simulate broken hardware; we reset our state tracking but the GPU is left intact and recovers from its stale state.

v2: Again without the assertion for forcewake, no longer required since commit b3ee09a4 ("drm/i915/ringbuffer: Fix context restore upon reset") as the contexts are reset from the CS ensuring everything is powered up.

Fixes: b2209e62 ("drm/i915/execlists: Reset the CSB head tracking on reset/sanitization")
Fixes: 1288786b ("drm/i915: Move GEM sanitize from resume_early to resume")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180616202534.18767-1-chris@chris-wilson.co.uk
-
- 16 June 2018, 1 commit
-
-
Submitted by Chris Wilson
As we want to be able to call i915_reset_engine and co from a softirq or timer context, we need to be irqsafe at all times. So we have to forgo the simple spin_lock_irq for the full spin_lock_irqsave.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180615093137.14270-1-chris@chris-wilson.co.uk
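The shape of the conversion (the particular lock shown is illustrative): saving and restoring the interrupt state makes the same critical section safe from process, softirq and timer context alike.

    unsigned long flags;

    spin_lock_irqsave(&engine->timeline.lock, flags);
    /* ... manipulate per-engine state ... */
    spin_unlock_irqrestore(&engine->timeline.lock, flags);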
-
- 15 June 2018, 1 commit
-
-
Submitted by Mika Kuoppala

If a client is smart or lucky enough to create a new context after each hang, our context banning mechanism will never catch up, and as a result it will be saved from client banning. This can result in a never-ending streak of gpu hangs caused by a bad or malicious client, preventing access by other legit gpu clients.

Fix this by always incrementing the per-client ban score if it hangs in short succession, regardless of context ban scoring. The exception is non-bannable contexts; they remain detached from the client ban scoring mechanism.

v2: xchg timestamp, tidyup (Chris)
v3: comment, bannable & banned together (Chris)

Fixes: b083a087 ("drm/i915: Add per client max context ban limit")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180615104429.31477-1-mika.kuoppala@linux.intel.com
-
- 14 June 2018, 1 commit
-
-
Submitted by Chris Wilson

For symmetry, simplicity and ensuring the request is always truly idle upon its completion, always emit the closing flush prior to emitting the request breadcrumb. Previously, we would only emit the flush if we had started a user batch, but this just leaves all the other paths open to speculation (do they affect the GPU caches or not?)

With mm switching, a key requirement is that the GPU is flushed and invalidated beforehand, so for absolute safety, we want that closing flush to be mandatory.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180612105135.4459-1-chris@chris-wilson.co.uk
-
- 11 June 2018, 2 commits
-
-
Submitted by Chris Wilson

As i915_gem_object_phys_attach() wants to play dirty and mess around with obj->mm.pages itself (replacing the shmemfs with a DMA allocation), refactor the gubbins into i915_gem_object_unset_pages() so that we don't have to duplicate all the secrets.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180611075532.26534-1-chris@chris-wilson.co.uk
Link: https://patchwork.freedesktop.org/patch/msgid/152871104647.1718.8796913290418060204@jlahtine-desk.ger.corp.intel.com
-
Submitted by Chris Wilson

Due to a silent conflict (silent because we are trying to fix the CI test that is meant to exercise these failures!) between commit 51e645b6 ("drm/i915: Mark the GPU as wedged without error on fault injection") and commit 8571a05a ("drm/i915: Use GEM suspend when aborting initialisation"), we failed to actually squash the error message after injecting the load failure.

Rearrange the code to export i915_load_failure() for better logging of real errors (and quiet logging of injected errors).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180609111058.2660-1-chris@chris-wilson.co.uk
-
- 08 June 2018, 1 commit
-
-
Submitted by Chris Wilson

If we have been instructed (by CI) to inject a fault to load the module with a wedged GPU, do so quietly lest we upset CI.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180607134558.31150-1-chris@chris-wilson.co.uk
-
- 07 June 2018, 2 commits
-
-
Submitted by Chris Wilson
In preparation for vm_fault_t becoming a distinct type, convert the fault handler (i915_gem_fault()) over to the new interface.

Based on a patch by Souptick Joarder.

References: 1c8f4220 ("mm: change return type to vm_fault_t")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180606214520.20220-1-chris@chris-wilson.co.uk
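A hedged sketch of the converted handler's shape (the helper and the exact error mapping are illustrative, not the real diff): the function returns the dedicated vm_fault_t type and translates internal errnos into VM_FAULT_* codes.

    static vm_fault_t i915_gem_fault(struct vm_fault *vmf)
    {
            int err = handle_gtt_fault(vmf);    /* hypothetical helper */

            switch (err) {
            case 0:
                    return VM_FAULT_NOPAGE;
            case -ENOMEM:
                    return VM_FAULT_OOM;
            default:
                    return VM_FAULT_SIGBUS;
            }
    }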
-
Submitted by Chris Wilson

As part of our GEM initialisation now, we send a request to the hardware in order to record the initial GPU state. This, coupled with deferred idle workers, makes aborting on error tricky. We already have the mechanism in place to wait on the GPU and cancel all the deferred workers for suspend, so let's reuse it during the error teardown. It is already used in places for later init error handling, but doing so at this point is slightly ugly due to the mutex dance (it's ok, the module load is still single threaded).

Testcase: igt/drv_module_reload/basic-reload-inject
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180606145441.4460-1-chris@chris-wilson.co.uk
-
- 06 June 2018, 1 commit
-
-
Submitted by Chris Wilson
In the near future, I want to subclass gen6_hw_ppgtt as it contains a few specialised members and I wish to add more. To avoid the ugliness of using ppgtt->base.base, rename the i915_hw_ppgtt base member (i915_address_space) as vm, which is our common shorthand for an i915_address_space local.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180605153758.18422-1-chris@chris-wilson.co.uk
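The effect of the rename in sketch form: the embedded address space becomes ->vm, so a subclass dereferences ppgtt->vm rather than the doubled ppgtt->base.base.

    struct i915_hw_ppgtt {
            struct i915_address_space vm;   /* was ->base */
            /* ... */
    };

    struct gen6_hw_ppgtt {
            struct i915_hw_ppgtt base;

            /* specialised gen6 members to follow */
    };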
-
- 05 June 2018, 3 commits
-
-
Submitted by Chris Wilson

Since the kernel provides SZ_1M, use it in preference to 1 << 20.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180605135746.8020-1-chris@chris-wilson.co.uk
-
Submitted by Michal Wajdeczko

In gem_init_hw() we are calling uc_init_hw(), but in case of an error later in the function we missed calling the matching uc_fini_hw().

v2: pulled out from the series

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180605122443.23776-1-michal.wajdeczko@intel.com
-
Submitted by Changbin Du

This adds a new vGPU cap info bit, VGT_CAPS_HUGE_GTT, which is used to detect whether the host supports shadowing of huge gtt pages. If the host does support it, remove the page-size restrictions for vGPU.

Signed-off-by: Changbin Du <changbin.du@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1525770425-5373-1-git-send-email-changbin.du@intel.com
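A sketch of how such a cap bit is typically consumed; the bit position and the helper name are assumptions for illustration:

    #define VGT_CAPS_HUGE_GTT       BIT(2)  /* hypothetical bit index */

    static inline bool
    intel_vgpu_has_huge_gtt(struct drm_i915_private *dev_priv)
    {
            return dev_priv->vgpu.caps & VGT_CAPS_HUGE_GTT;
    }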
-
- 04 June 2018, 1 commit
-
-
Submitted by Michal Wajdeczko
We should keep i915_gem_init/fini functions together for easier tracking of their symmetry.

v2: rebased, pulled out from the series

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180604090032.20840-1-michal.wajdeczko@intel.com
-
- 02 June 2018, 2 commits
-
-
Submitted by Chris Wilson
Let's not take any chances by using a shortcut to mark the objects as in the CPU domain upon freezing (all pages will be written to disk and so on restore all objects will start from the CPU domain). Currently, we simply mark the objects as being in the CPU domain, bypassing the flushes. Let's call the full domain transfer function so that we have less special case code (and symmetry with the suspend path) even though it will be mostly redundant.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180601144125.18026-2-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson

As we have already suspended the device, this should be a no-op except for marking that all writes are indeed complete. The downside is that we then have to walk all the lists of objects for what should be a no-op (in some cases there will be mmio reads to ensure the GGTT writes are indeed flushed, and clflushes to ensure that cpu writes are in memory). It seems prudent and the safer course for us to ensure all writes are flushed to memory before suspend.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180601144125.18026-1-chris@chris-wilson.co.uk
-
- 01 June 2018, 2 commits
-
-
Submitted by Chris Wilson
Now that we always switch to the kernel context upon idling, we can make that assertion.

References: 4dfacb0b ("drm/i915: Switch to kernel context before idling at runtime")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180531224057.6036-1-chris@chris-wilson.co.uk
-
Submitted by Chris Wilson
During testing we encounter a conflict between SUSPEND_TEST_DEVICES and disabling reset (gem_eio/suspend). This results in the device continuing on without being reset, but since it has gone through HW sanitization to account for the suspend/resume cycle, we have to assume the device has been reset to its defaults. A simple way around this is to skip the sanitize phase for SUSPEND_TEST_DEVICES by moving it to suspend-late.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180531082246.9763-4-chris@chris-wilson.co.uk
-