- 08 Nov 2014, 4 commits
-
-
Submitted by Ander Conselvan de Oliveira
So that it can be used by the flip code to wait for rendering without holding any locks.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson
Always require PIN_GLOBAL when we want a mappable offset (PIN_MAPPABLE). This causes the pin to fixup the global binding in cases where the vma was already bound (and, due to the preceding bug, we considered it to be already mappable).
References: https://bugs.freedesktop.org/show_bug.cgi?id=85671
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Add WARN_ON to check that PIN_MAP implies PIN_GLOBAL as discussed on irc.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
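As a hedged illustration of the flag rule described above (the PIN_* values here are made-up bits, not the driver's actual definitions), the fixup plus the WARN-style invariant boils down to:

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative flag bits only; the real PIN_* flags live in i915_drv.h. */
#define PIN_MAPPABLE (1u << 0)
#define PIN_GLOBAL   (1u << 1)

/* A mappable (aperture) offset only exists in the global GTT, so asking
 * for PIN_MAPPABLE must also force a global binding. */
static unsigned int sanitize_pin_flags(unsigned int flags)
{
	if (flags & PIN_MAPPABLE)
		flags |= PIN_GLOBAL;

	/* stand-in for the WARN_ON added in the commit */
	assert(!(flags & PIN_MAPPABLE) || (flags & PIN_GLOBAL));
	return flags;
}

int main(void)
{
	printf("0x%x\n", sanitize_pin_flags(PIN_MAPPABLE)); /* prints 0x3 */
	return 0;
}
```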
-
Submitted by Chris Wilson
We use the obj->map_and_fenceable hint for when we already have a valid mapping of this object in the aperture. This hint can only apply to the GGTT and not to the aliasing-ppGTT. One user of the hint is execbuffer relocation, which began to fail when it tried to follow the hint and perform the relocation through the non-existent GGTT mapping.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=85671
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
An earlier commit (c8725f3d: Do not call retire_requests from wait_for_rendering) removed the use of the ring parameter within wait_rendering__tail() but did not remove the parameter itself. As the plan is to remove obj->ring, which is where this parameter comes from, it is simpler to just remove the parameter completely than to update it with a new source.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
CC: Chris Wilson <chris@chris-wilson.co.uk>
CC: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 04 Nov 2014, 1 commit
-
-
Submitted by Tvrtko Ursulin
If these flags are on the object level it will be more difficult to allow for multiple VMAs per object.
v2: Simplification and cleanup after code review comments (Chris Wilson).
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Oct 2014, 4 commits
-
-
Submitted by Daniel Vetter
Not having checks for this isn't good. I've checked igt and libdrm and they both already clear the flags properly, so we're lucky and should be able to sneak this ABI clarification in.
Testcase: igt/gem_wait/invalid-flags
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=85280
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
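A minimal sketch of the ABI rule being added, using an illustrative struct rather than the real drm_i915_gem_wait layout: any non-zero bit in the flags field is rejected so those bits stay available for future extensions.

```c
#include <errno.h>

/* Illustrative ioctl argument; the real layout is in include/uapi/drm/i915_drm.h. */
struct wait_args {
	unsigned int handle;
	unsigned int flags;      /* no flags defined yet: must be zero */
	long long    timeout_ns;
};

static int wait_ioctl(const struct wait_args *args)
{
	if (args->flags != 0)
		return -EINVAL;  /* reserved bits set by userspace: reject */

	/* ... perform the actual wait ... */
	return 0;
}
```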
-
Submitted by Daniel Vetter
Too many new drm driver writers seem to look at i915 for inspiration, but we have two ways to do mmap, so discourage readers from the old, ugly version. In a new driver we'd just expose two mmap offsets per object, one for the gtt map and the other for the cpu map.
v2: Make it clear that i915 does cpu mmaps this way for past cluelessness^W^W historical reasons. Asked for by Jani.
Cc: "Cheng, Yao" <yao.cheng@intel.com>
Cc: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
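For readers coming from other drivers, the offset-based style the text recommends looks roughly like the sketch below from userspace: ask the kernel for a fake mmap offset for the object, then mmap() the drm fd at that offset. This assumes libdrm's <drm/i915_drm.h> definitions and omits error handling.

```c
#include <stdint.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/i915_drm.h>

/* Map a GEM object through the GTT: fetch its fake mmap offset, then mmap
 * the drm fd at that offset. */
static void *gtt_map_object(int drm_fd, uint32_t handle, size_t size)
{
	struct drm_i915_gem_mmap_gtt arg = { .handle = handle };

	if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_MMAP_GTT, &arg))
		return MAP_FAILED;

	/* arg.offset is only meaningful as an offset into the drm fd */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, (off_t)arg.offset);
}
```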
-
Submitted by Chris Wilson
If we are not able to free anything (the shrinker leaves nothing on the global object lists), do not log anything. This is useful when other subsystems are being stress-tested for their oom behaviour and i915.ko is shouting into the logs about doing nothing.
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson
The shrinker reports the number of pages freed, but we try to log the number of bytes, which leads to some nonsense values being reported as freed during oom.
Reported-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
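The unit mix-up is easiest to see with numbers; here is a tiny standalone sketch (values invented) of the wrong and the corrected report:

```c
#include <stdio.h>

#define PAGE_SHIFT 12   /* 4 KiB pages, as on x86 */

int main(void)
{
	unsigned long freed = 1024;   /* what the shrinker returns: pages */

	printf("buggy : freed %lu bytes\n", freed);               /* "1024 bytes"    */
	printf("fixed : freed %lu bytes\n", freed << PAGE_SHIFT); /* 4194304 bytes   */
	return 0;
}
```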
-
- 03 Oct 2014, 1 commit
-
-
Submitted by Chris Wilson
We can use the same logic to walk the different bound/unbound lists during shrinking (as the unbound list is a degenerate case of the bound list), slightly compacting the code.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Sep 2014, 1 commit
-
-
Submitted by Damien Lespiau
v2: Rebased on top of the i915_gpu_error.c extraction.
Reviewed-by: Thomas Wood <thomas.wood@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 Sep 2014, 1 commit
-
-
Submitted by Daniel Vetter
I shouldn't ask everyone to do this and then fail myself ... This extracts all the frontbuffer tracking functions into intel_frontbuffer.c, adds a DOC overview section and also adds the missing kerneldoc for i915_gem_track_fb, pulling it into the same section for convenience.
v2: Don't forget about the header files.
v3: Oops, might check compilation next time around. To make my life easier drop the increase_pllclock from set_base_atomic since really, it doesn't matter if you see your Oops or kgdb with a tiny bit of lag.
v4: Try to better explain how to actually use this, requested by Paulo on irc.
v5: Explain invalidate/flush a bit clearer.
v6: s/business/busyness/
Acked-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Vandana Kannan <vandana.kannan@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
- 19 Sep 2014, 4 commits
-
-
Submitted by Chris Wilson
The data structure it was supposed to be sanity checking has long gone.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson
If we believe that the device can cross cache domains in its prefetcher (i.e. we allow neighbouring pages in different domains), we don't supply a color_adjust callback. Use the presence of this callback to better determine when we should be verifying that the GTT space we just used is valid.
v2: Remove the superfluous struct drm_device function param as well.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Also adjust the comment per irc discussion with Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
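A sketch of the heuristic, with stand-in types rather than the driver's drm_mm / i915_address_space: only address spaces that must keep nodes of different cache domains apart install a color_adjust callback, so its presence is what gates the extra post-bind verification.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-in structures; the real ones are struct drm_mm / i915_address_space. */
struct toy_mm {
	void (*color_adjust)(unsigned long color,
			     unsigned long *start, unsigned long *end);
};

struct toy_address_space {
	struct toy_mm mm;
};

/* Verify the freshly bound node only when neighbouring-domain rules apply. */
static bool needs_gtt_space_verification(const struct toy_address_space *vm)
{
	return vm->mm.color_adjust != NULL;
}
```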
-
Submitted by Chris Wilson
Before we process the final unbind on an object and move it to the unbound list, it is semantically cleaner if there are no more active references to the object. (An active reference would imply that it was still being accessed by the GPU after it became inaccessible.) The caveat is that all callsites must be prepared for the object to disappear during the unbind - i.e. they must hold their own reference.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson
Due to the lazy retirement semantics, even though we have unbound an object, it may still hold onto an active reference. So in the debug code, play safe.
v2: Export i915_gem_shrink() rather than opencoding it.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 08 Sep 2014, 1 commit
-
-
Submitted by Daniel Vetter
In commit 1f83fee0
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Thu Nov 15 17:17:22 2012 +0100
    drm/i915: clear up wedged transitions
I've accidentally inverted the EIO/wedged handling in the fault handler: we want to return the EIO as a SIGBUS only if it's not because of the gpu having died, to prevent userspace from unduly dying. In my defence the comment right above is completely misleading, so fix both.
v2: Drop the WARN_ON, it's not actually a bug to e.g. receive an -EIO when swap-in fails.
v3: Don't remove too much ... oops.
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
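The corrected policy, sketched with the wedged state passed in as a plain flag rather than the driver's real check: an -EIO that merely reflects a dead GPU is swallowed and the fault retried, while any other -EIO, e.g. a failed swap-in, still raises SIGBUS.

```c
#include <linux/errno.h>
#include <linux/mm.h>

/* Illustrative helper; the real fault handler queries the driver's
 * terminal-wedge state instead of taking a bool. */
static int fault_error_to_vmf(bool gpu_terminally_wedged, int ret)
{
	switch (ret) {
	case -EIO:
		/* A genuine I/O error must be reported to userspace ... */
		if (!gpu_terminally_wedged)
			return VM_FAULT_SIGBUS;
		/* ... but a wedged GPU should not kill the process. */
		/* fall through */
	case -EAGAIN:
	case -ERESTARTSYS:
	case -EINTR:
		return VM_FAULT_NOPAGE;  /* retry the fault */
	default:
		return VM_FAULT_SIGBUS;
	}
}
```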
-
- 03 Sep 2014, 3 commits
-
-
Submitted by Ville Syrjälä
gen2/3 platforms have a boatload of rings we're not using. On my 830 the BIOS/hw can leave some of those "active" after resume, which will prevent c3 entry. The ring is apparently considered active whenever head != tail, even if the ring is disabled. Disable and clear all such unused ringbuffers on init/resume.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Thomas Daniel
These two functions make no sense in a Logical Ring Context & Execlists world.
v2: We got rid of lrc_enabled and centralized everything in the sanitized i915.enable_execlists instead.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
v3: Rebased. Corrected a typo in the comment for i915_switch_context and added a comment that it should not be called in execlist mode. Added WARN_ON if i915_switch_context is called in execlist mode. Moved the check for execlist mode out of i915_switch_context and into callers. Added a comment in context_reset explaining why nothing is done in execlist mode.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
[danvet: Simplify the patch subject so I can understand it.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by McAulay, Alistair
This patch is to address Daniel's concerns over different code paths during reset: http://lists.freedesktop.org/archives/intel-gfx/2014-June/047758.html
"The reason for aiming as hard as possible to use the exact same code for driver load, gpu reset and runtime pm/system resume is that we've simply seen too many bugs due to slight variations and unintended omissions."
Tested using igt drv_hangman.
V2: Cleaner way of preventing check_wedge returning -EAGAIN
V3: Clean the last_context during reset, to ensure do_switch() does the MI_SET_CONTEXT. As per review.
Signed-off-by: McAulay, Alistair <alistair.mcaulay@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Rebase over ctx->ppgtt rework and extend the comment in check_wedge a bit.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 Aug 2014, 1 commit
-
-
Submitted by Oscar Mateo
If we reset a ring after a hang, we have to make sure that we clear out all queued Execlists requests.
v2: The ring is, at this point, already being correctly re-programmed for Execlists, and the hangcheck counters cleared.
v3: Daniel suggests dropping the "if (execlists)" because the Execlists queue should be empty in legacy mode (which is true, if we do the INIT_LIST_HEAD).
v4: Do the pending intel_runtime_pm_put
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
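A minimal sketch of the "leave nothing queued behind after a reset" idea, using generic kernel list helpers; the struct and field names are illustrative, not the driver's execlist queue types.

```c
#include <linux/list.h>
#include <linux/slab.h>

/* Illustrative queued submission; the real driver tracks contexts, tails, etc. */
struct queued_submission {
	struct list_head link;
};

static void clear_submission_queue(struct list_head *queue)
{
	struct queued_submission *sub, *next;

	list_for_each_entry_safe(sub, next, queue, link) {
		list_del(&sub->link);
		kfree(sub);
	}

	/* leave a sane, empty list behind so legacy paths see nothing queued */
	INIT_LIST_HEAD(queue);
}
```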
-
- 15 Aug 2014, 1 commit
-
-
Submitted by Oscar Mateo
On a previous iteration of this patch, I created an Execlists version of __i915_add_request and abstracted it away as a vfunc. Daniel Vetter wondered then why that was needed: "with the clean split in command submission I expect every function to know wether it'll submit to an lrc (everything in intel_lrc.c) or wether it'll submit to a legacy ring (existing code), so I don't see a need for an add_request vfunc." The honest, hairy truth is that this patch is the glue keeping the whole logical ring puzzle together:
- i915_add_request is used by intel_ring_idle, which in turn is used by i915_gpu_idle, which in turn is used in several places inside the eviction and gtt codes.
- Also, it is used by i915_gem_check_olr, which is littered all over i915_gem.c
- ...
If I were to duplicate all the code that directly or indirectly uses __i915_add_request, I'd end up creating a separate driver. To show the differences between the existing legacy version and the new Execlists one, this time I have special-cased __i915_add_request instead of adding an add_request vfunc. I hope this helps to untangle this Gordian knot.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Adjust to ringbuf->FIXME_lrc_ctx per the discussion with Thomas Daniel.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 13 Aug 2014, 5 commits
-
-
Submitted by Daniel Vetter
Currently we abuse the aliasing ppgtt to set up the ppgtt support in general, which is a bit backwards since with full ppgtt we don't ever need the aliasing ppgtt. So untangle this and separate the ppgtt init from the aliasing ppgtt. While at it, drag it out of the context enabling (which just does a switch to the default context). Note that we still have the differentiation between synchronous and asynchronous ppgtt setup, but that will soon vanish. So also correctly wire up the return value handling to be prepared for when ->switch_mm drops the synchronous parameter and could start to fail.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter
A subsequent patch will no longer initialize the aliasing ppgtt if we have full ppgtt enabled, since we simply don't need that any more. Unfortunately a few places check for the aliasing ppgtt instead of checking for ppgtt in general. Fix them up. One special case is the gtt offset and size macros, which have some code to remap the aliasing ppgtt to the global gtt. The aliasing ppgtt is _not_ a logical address space, so passing that in as the vm is plain and simple a bug. So just WARN about it and carry on - we have a graceful fall-through anyway if we can't find the vma.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter
We already need this just as a safety check in case the preallocation reservation dance fails. But we definitely need this to be able to move the aliasing ppgtt setup back out of the context code to this place, where it belongs.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter
Stuff in headers really ought to have this.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter
This essentially unbreaks non-ppgtt operation, where we'd scribble over random memory. While at it give the vm_to_ppgtt function a proper prefix and make it a bit more paranoid.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 12 Aug 2014, 2 commits
-
-
Submitted by Daniel Vetter
So when reviewing Michel's patch I've noticed a few things and cleaned them up:
- The early checks in ppgtt_release are now redundant: the inactive list should always be empty now, so we can ditch these checks. Even for the aliasing ppgtt (though that's a different confusion), since we tear that down after all the objects are gone.
- The ppgtt handling functions are splattered all over. Consolidate them in i915_gem_gtt.c, give them OCD prefixes and add wrappers for get/put.
- There was a bit of confusion in ppgtt_release about whether it cares about the active or inactive list. It should care about them both, so augment the WARNINGs to check for both.
There's still create_vm_for_ctx left to do, but that is blocked on the removal of ppgtt->ctx. Once that's done we can rename it to i915_ppgtt_create and move it to its siblings for handling ppgtts.
v2: Move the ppgtt checks into the inline get/put functions as suggested by Chris.
v3: Inline the now redundant ppgtt local variable.
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Michel Thierry
VMAs should take a reference on the address space they use. Now, when the fd is closed, it will release the ref that the context was holding, but the address space will still be referenced by any vmas that are still active. ppgtt_release() should then only be called when the last thing referencing it releases the ref, and it can just call the base cleanup and free the ppgtt. Note that with this we will extend the lifetime of ppgtts which contain shared objects. But all the non-shared objects will get removed as soon as they drop off the active list, and for the shared ones the shrinker can eventually reap them. Since we currently can't evict ppgtt pagetables either, I don't think that temporary leak is important.
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
[danvet: Add note about potential ppgtt leak with this approach.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
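The lifetime rule can be sketched with an ordinary kref, where both the context (fd) and every VMA hold a reference and release() only runs on the last put; names here are illustrative rather than the i915 ones.

```c
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

/* Illustrative ppgtt with a refcount; the context and each VMA take a ref. */
struct toy_ppgtt {
	struct kref ref;
	/* ... page directories / tables ... */
};

static void toy_ppgtt_release(struct kref *kref)
{
	struct toy_ppgtt *ppgtt = container_of(kref, struct toy_ppgtt, ref);

	/* base cleanup, then free: nothing else is left referencing it */
	kfree(ppgtt);
}

static inline void toy_ppgtt_get(struct toy_ppgtt *ppgtt)
{
	kref_get(&ppgtt->ref);
}

static inline void toy_ppgtt_put(struct toy_ppgtt *ppgtt)
{
	if (ppgtt)
		kref_put(&ppgtt->ref, toy_ppgtt_release);
}
```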
-
- 11 Aug 2014, 6 commits
-
-
Submitted by Oscar Mateo
Execlists are indeed a brave new world with respect to workload submission to the GPU. In previous versions of this series, I have tried to impact the legacy ringbuffer submission path as little as possible (mostly, passing the context around and using the correct ringbuffer when I needed one), but Daniel is afraid (probably with reason) that these changes and, especially, future ones, will end up breaking older gens. This commit and some others coming next will try to limit the damage by creating an alternative path for workload submission. The first step is here: laying out a new ring init/fini.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Oscar Mateo
As suggested by Daniel Vetter. The idea, in subsequent patches, is to provide an alternative to these vfuncs for the Execlists submission mechanism.
v2: Split into two and reordered to illustrate our intentions, instead of showing it off. Also, removed the add_request vfunc and added the stop_ring one.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet:
- Make checkpatch happy.
- Be grumpy about the excessive vtable.
- Ditch gt->is_ring_initialized.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Oscar Mateo
GEN8 brings an expansion of the HW contexts: "Logical Ring Contexts". These expanded contexts enable a number of new abilities, especially "Execlists". The macro is defined to off until we have enough in place for it to work.
v2: Rename "advanced contexts" to the more correct "logical ring contexts".
v3: Add a module parameter to enable execlists. Execlists are relatively new, so it'd be wise to be able to switch back to ring submission to debug the subtle problems that will inevitably arise.
v4: Add an intel_enable_execlists function.
v5: Sanitize early, as suggested by Daniel. Remove lrc_enabled.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> (v3)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v2, v4 & v5)
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
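A hedged sketch of the v3 + v5 idea (the parameter name and helper are illustrative, not the exact i915.enable_execlists plumbing): expose a tri-state module parameter and fold it into a plain yes/no once, early, so the rest of the driver only ever reads the sanitized value.

```c
#include <linux/module.h>
#include <linux/moduleparam.h>

static int enable_execlists = -1;
module_param(enable_execlists, int, 0400);
MODULE_PARM_DESC(enable_execlists,
		 "Use Execlists submission (-1 = auto, 0 = off, 1 = on)");

/* Called once at driver load; afterwards only the returned value is used. */
static int sanitize_enable_execlists(bool hw_has_logical_ring_contexts)
{
	if (enable_execlists == 0)
		return 0;
	if (!hw_has_logical_ring_contexts)
		return 0;
	if (enable_execlists > 0)
		return 1;

	return 0;	/* auto: keep it off until the code is complete */
}
```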
-
Submitted by Chris Wilson
This migrates the fence tracking onto the existing seqno infrastructure so that the later conversion to tracking via requests is simplified.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson
Move the decision on whether we need to have a mappable object during execbuffer to the fore and then reuse that decision by propagating the flag through to reservation. As a corollary, before doing the actual relocation through the GTT, we can make sure that we do have a GTT mapping through which to operate. Note that the key to making this work is to ditch the obj->map_and_fenceable unbind optimization - with full ppgtt it doesn't make a lot of sense any more anyway.
v2: Revamp and resend to ease future patches.
v3: Refresh patch rationale
References: https://bugs.freedesktop.org/show_bug.cgi?id=81094
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Explain why obj->map_and_fenceable is key and split out the secure batch fix.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson
If an object is not bound into the global GTT, then it cannot be accessed via the GTT. This restores the original code that was muddled by ppGTT. In the process, we remove a WARN that had long outlived its usefulness and was simply being coded around instead.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 07 Aug 2014, 1 commit
-
-
Submitted by Deepak S
We might be leaving the GPU frequency (and thus Vnn) high during suspend. Flushing the delayed work queue should take care of this.
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Jul 2014, 1 commit
-
-
Submitted by Thomas Gleixner
Use ktime_get_raw_ns() and get rid of the back-and-forth timespec conversions.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: John Stultz <john.stultz@linaro.org>
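The before/after, as a small sketch of what "back-and-forth timespec conversions" means in practice (getrawmonotonic() was the era-appropriate interface and has since been superseded by timespec64 variants):

```c
#include <linux/ktime.h>
#include <linux/time.h>

/* Old pattern: fill a timespec only to flatten it back into nanoseconds. */
static u64 raw_ns_old(void)
{
	struct timespec ts;

	getrawmonotonic(&ts);
	return timespec_to_ns(&ts);
}

/* New pattern: one call, no intermediate timespec. */
static u64 raw_ns_new(void)
{
	return ktime_get_raw_ns();
}
```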
-
- 23 Jul 2014, 3 commits
-
-
Submitted by Chris Wilson
An object can only have an active gtt mapping if it is currently bound into the global gtt. Therefore we can simply walk the list of all bound objects and check the flag upon those for an active gtt mapping.
From commit 48018a57
Author: Paulo Zanoni <paulo.r.zanoni@intel.com>
Date: Fri Dec 13 15:22:31 2013 -0200
    drm/i915: release the GTT mmaps when going into D3
Also note that the WARN is inappropriate for this function, as GPU activity is orthogonal to GTT mmap status. Rather it is the caller that relies upon this condition, and so it should assert that the GPU is idle itself.
References: https://bugs.freedesktop.org/show_bug.cgi?id=80081
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Tested-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: cherry-pick from -next to -fixes.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Armin Reese
When using an IOMMU, GEM objects are mapped by their DMA address, as the physical address is unknown. This depends on the underlying IOMMU driver to map and unmap the physical pages properly, as defined in intel_iommu.c. The current code will tell the IOMMU to unmap the GEM BO's pages on the destruction of the first VMA that "maps" that BO. This is clearly wrong, as there may be other VMAs "mapping" that BO (using flink). The scanout is one such example. The patch fixes this issue by only unmapping the DMA maps when there are no more VMAs mapping that object. This is equivalent to when an object is considered unbound, as can be seen by the code. On the first VMA that again becomes bound, we will remap. An alternate solution would be to move the dma mapping to object creation and destruction. I am not sure if this is considered an unfriendly thing to do. Some notes to backporters trying to backport full PPGTT: The bug can never be hit without enabling the IOMMU. The existing code will also do the right thing when the object is shared via dmabuf. The failure should be demonstrable with flink. In cases when not using intel_iommu_strict it is likely (likely, as defined by: off the top of my head) on current workloads to *not* hit this bug, since we often tear down all VMAs for an object shared across multiple VMs and finish access to that object before the first dma unmapping. intel_iommu_strict with flinked buffers is likely to hit this issue.
Signed-off-by: Armin Reese <armin.c.reese@intel.com>
[danvet: Add the excellent commit message provided by Ben.]
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
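The rule the fix establishes, sketched under stand-in names (the real driver uses its own object/VMA lists and sg tables): the DMA/IOMMU mapping belongs to the object, so it is only torn down once the object no longer has any VMA mapping it.

```c
#include <linux/dma-mapping.h>
#include <linux/list.h>
#include <linux/scatterlist.h>

/* Illustrative object; each VMA links itself into vma_list while bound. */
struct toy_obj {
	struct list_head vma_list;
	struct device *dev;
	struct sg_table *pages;
};

static void toy_vma_unbind(struct toy_obj *obj, struct list_head *vma_link)
{
	list_del(vma_link);

	/* Only the *last* VMA going away makes the object truly unbound;
	 * that is the point where the IOMMU mapping may be released. */
	if (list_empty(&obj->vma_list))
		dma_unmap_sg(obj->dev, obj->pages->sgl,
			     obj->pages->nents, DMA_BIDIRECTIONAL);
}
```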
-
Submitted by Jesse Barnes
Now that we use the runtime IRQ enable/disable functions in our suspend path, we can simply check the pm._irqs_disabled flag everywhere. So rename it to catch the users, and add an inline for it to make the checks clear everywhere.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
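The kind of small inline helper the commit describes, with illustrative struct names rather than drm_i915_private, so call sites read as a question instead of a raw flag dereference:

```c
#include <stdbool.h>

/* Stand-in for the driver's runtime-PM bookkeeping. */
struct toy_runtime_pm {
	bool irqs_disabled;
};

struct toy_i915_private {
	struct toy_runtime_pm pm;
};

static inline bool intel_irqs_disabled_sketch(const struct toy_i915_private *dev_priv)
{
	return dev_priv->pm.irqs_disabled;
}
```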
-