- 11 January 2017, 5 commits
-
-
Committed by Chris Wilson
It has been some time since i915_gem_engine_cleanup was only called from the module unload path, and now it is only called when the GPU is wedged. Mika complained that the name is confusing, especially in light of the existence of i915_gem_cleanup_engines(). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170110172246.27297-5-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Similarly to a normal reset, after we mark the GPU as wedged (completely fubar and no more requests can be executed), set the error status on all the in-flight requests. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170110172246.27297-4-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Let userspace know if its request was resubmitted due to it being executed at the time of a global reset. In this case, the reset was for a guilty request on another engine, and this request was an innocent victim that will be re-executed upon restarting. However, since it was running at the time of the reset, we cannot guarantee that it suffered no ill effects from the reset (e.g. some context state may be lost, or some self-modifying fragment shaders will be restarted from their final state rather than their initial state). To let userspace know that it may have been corrupted, set a special value on the fence->error: -EAGAIN. If the request does hang on resubmission, the error will be overwritten with -EIO. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170110172246.27297-3-chris@chris-wilson.co.uk
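As a rough illustration of what this means for userspace, the sketch below (hypothetical test code, not taken from any i915 test) queries the fence status of a sync_file fd — for example an out-fence returned by execbuf — through the generic SYNC_IOC_FILE_INFO ioctl and tells the two error codes apart:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/sync_file.h>

    static void report_fence_status(int sync_fd)
    {
            struct sync_file_info info;

            memset(&info, 0, sizeof(info));
            if (ioctl(sync_fd, SYNC_IOC_FILE_INFO, &info) < 0)
                    return;

            if (info.status == -EAGAIN)
                    printf("innocent victim of a reset; results may be corrupted\n");
            else if (info.status == -EIO)
                    printf("request hung and was cancelled\n");
            else if (info.status == 1)
                    printf("request completed successfully\n");
            else
                    printf("request still active\n");
    }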
-
Committed by Chris Wilson
The struct dma_fence carries a status field exposed to userspace by sync_file. This is inspected after the fence is signaled and can convey whether or not the request completed successfully, or in our case if we detected a hang during the request (signaled via -EIO in SYNC_IOC_FILE_INFO). v2: Mark all cancelled requests as failed. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170110172246.27297-2-chris@chris-wilson.co.uk
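Kernel-side, the mechanism boils down to recording an error on the fence before signalling it; a minimal sketch using the generic dma_fence_set_error() helper (illustrative, not the literal i915 code):

    #include <linux/dma-fence.h>

    /* Mark a request's fence as failed before completing it; sync_file then
     * reports the error as a negative status to userspace. */
    static void cancel_request_fence(struct dma_fence *fence)
    {
            dma_fence_set_error(fence, -EIO);   /* must precede the signal */
            dma_fence_signal(fence);
    }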
-
Committed by Chris Wilson
Always reset the requests of the guilty context, including the hung request that we tell the hardware to skip. This should help if the reprogram fails entirely, but more importantly makes the guilty path more uniform (and simplifies the subsequent patch to tweak the cancelled requests). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170110172246.27297-1-chris@chris-wilson.co.uk
-
- 10 January 2017, 6 commits
-
-
Committed by Chris Wilson
The VMA is later clipped against the vm_area_struct before insertion of the faulting PTE so we are free to create the partial view as we desire. If we use the object as the extents rather than the area, this partial view can then be used for other areas. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170110095633.6612-2-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
In order to reuse the partial view for selftesting, extract the common function for computing the view. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170110095633.6612-1-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Rename i915_gem_get_ggtt_size() and i915_gem_get_ggtt_alignment() to i915_gem_fence_size() and i915_gem_fence_alignment() respectively to better match usage. Similarly move the pair of functions into i915_gem_tiling.c next to the fence restrictions. Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170109161613.11881-6-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
Committed by Chris Wilson
The fence size/alignment is a combination of the vma size plus object tiling parameters. Those parameters are rarely changed, making the fence size/alignment roughly constant for the lifetime of the VMA. We can simplify subsequent calculations by precalculating the size/alignment required for a GGTT vma, taking fencing into account (with an update if we do change the tiling or stride). Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170109161613.11881-4-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Ensure the view occupies the full tile row so that reads/writes into the VMA do not escape (via fenced detiling) into neighbouring objects - we will pad the object with scratch pages to satisfy the fence. This applies the lazy-tiling we employed on gen2/3 to gen4+. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170109161613.11881-2-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Computing the tile row size of a tiled object (for use with fence registers) is repeated, so extract it to a common helper. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/20170109161613.11881-1-chris@chris-wilson.co.uk Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
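For reference, such a helper boils down to the sketch below (the function name is illustrative, not necessarily the one added by the patch); on gen4+ an X tile is 512 bytes by 8 rows and a Y tile is 128 bytes by 32 rows, so one full tile row spans stride * tile_height bytes:

    #include <linux/types.h>

    static u32 tile_row_size(bool y_major, u32 stride)
    {
            /* gen4+ tile geometry: X = 512B x 8 rows, Y = 128B x 32 rows. */
            unsigned int tile_height = y_major ? 32 : 8;

            return stride * tile_height;
    }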
-
- 07 January 2017, 4 commits
-
-
Committed by Konrad Rzeszutek Wilk
So they can figure out the optimal number of pages that can be contiguously stitched together without fear of the bounce buffer. We also expose a mechanism for sub-users of the SWIOTLB API, such as Xen-SWIOTLB, to set the max segment value. And lastly, if swiotlb=force is set (which mandates we bounce buffer everything), we set max_segment so at least we can bounce buffer one 4K page instead of a giant 512KB one for which we may not have space. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reported-and-Tested-by: Juergen Gross <jgross@suse.com>
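A hedged sketch of how a scatterlist-building driver might consume the new helper (the wrapper name is illustrative):

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/swiotlb.h>

    /* Largest segment we should ask the DMA layer to map in one chunk. */
    static unsigned int driver_max_segment_size(void)
    {
            unsigned int size = swiotlb_max_segment();

            if (!size)  /* swiotlb not in use: effectively unlimited */
                    size = rounddown(UINT_MAX, PAGE_SIZE);

            return size;
    }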
-
Committed by Chris Wilson
As we now use a deferred free queue for objects, simply retiring the active objects is not enough to immediately free them and recover their mmap space - we must now also drain the freed object list. Fixes: fbbd37b3 ("drm/i915: Move object release to a freelist + worker") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: <drm-intel-fixes@lists.freedesktop.org> Link: http://patchwork.freedesktop.org/patch/msgid/20170106152240.5793-3-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
Committed by Chris Wilson
Since commit fe115628 ("drm/i915: Implement pwrite without struct-mutex") the low-level pwrite calls are now called without the protection of struct_mutex, but pwrite_phys was still asserting that it held the struct_mutex and later tried to drop and relock it. Fixes: fe115628 ("drm/i915: Implement pwrite without struct-mutex") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: <drm-intel-fixes@lists.freedesktop.org> Link: http://patchwork.freedesktop.org/patch/msgid/20170106152240.5793-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
Committed by Chris Wilson
The kernel context (dev_priv->kernel_context) is unique in that it is not associated with any user filp - it is the only one with ctx->file_priv == NULL. This is a simpler test than comparing it against dev_priv->kernel_context, which involves some pointer dancing. In checking that this is true, we notice that the gvt context is allocating itself an i915_hw_ppgtt it doesn't use and not flagging that its file_priv should be invalid. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170106152013.24684-5-chris@chris-wilson.co.uk
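The simpler test amounts to the sketch below (the helper name is illustrative; it assumes the driver-internal struct i915_gem_context from i915_drv.h):

    #include "i915_drv.h"   /* driver-internal; provides struct i915_gem_context */

    /* Only the kernel context has no owning file. */
    static bool context_is_kernel(const struct i915_gem_context *ctx)
    {
            return !ctx->file_priv;
    }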
-
- 06 January 2017, 1 commit
-
-
Committed by Chris Wilson
If we skip the requests before banning, the interface is inconsistent: execbuf will still queue valid requests while the requests already queued are being cancelled. If we only cancel the pending requests once we stop accepting new requests, the user interface is more consistent. Reported-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Fixes: 821ed7df ("drm/i915: Update reset path to fix incomplete requests") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Mika Kuoppala <mika.kuoppala@intel.com> Cc: <stable@vger.kernel.org> # v4.9+ Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170105170059.344-1-chris@chris-wilson.co.uk
-
- 05 January 2017, 2 commits
-
-
Committed by Chris Wilson
In order to defeat some circular dependencies between headers to allow use of e.g. range_overflows() in a header, move the simple independent macros into their own header. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20170105153023.30575-4-chris@chris-wilson.co.uk
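As an example of the kind of simple, dependency-free macro being moved, here is a hedged sketch of range_overflows() (the real i915 definition may differ in detail); the subtraction order sidesteps arithmetic overflow when checking start + size against max:

    /* True if [start, start + size) does not fit below max. */
    #define range_overflows(start, size, max) ({ \
            typeof(start) start__ = (start); \
            typeof(size) size__ = (size); \
            typeof(max) max__ = (max); \
            (void)(&start__ == &size__); /* type-compatibility checks */ \
            (void)(&start__ == &max__); \
            start__ >= max__ || size__ > max__ - start__; \
    })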
-
Committed by Chris Wilson
The fence registers are clobbered by a GPU reset. If there is concurrent user access to a fenced region via a GTT mmapping, the access will not be fenced during the reset (until we restore the fences afterwards). In order to prevent invalid access during the reset, before we clobber the fences we must first invalidate the GTT mmappings. Access to the mmap will then be forced to fault in the page, and in handling the fault, i915_gem_fault() will take the struct_mutex and wait upon the reset to complete. v2: Fix up commentary. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99274 Testcase: igt/gem_mmap_gtt/hang Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Link: http://patchwork.freedesktop.org/patch/msgid/20170104145110.1486-1-chris@chris-wilson.co.uk Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
- 03 January 2017, 3 commits
-
-
Committed by Chris Wilson
As the fence may be signaled concurrently from an interrupt on another device, it is possible for the list of requests on the timeline to be modified as we walk it. Take both (the context's timeline and the global timeline) locks to prevent such modifications. Fixes: 80b204bc ("drm/i915: Enable multiple timelines") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: <drm-intel-fixes@lists.freedesktop.org> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161223145804.6605-10-chris@chris-wilson.co.uk (cherry picked from commit 00c25e3f) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Chris Wilson
As trimming the sg table is merely an optimisation that gracefully fails if we cannot allocate a new table, we do not need to report the failure either. Fixes: 0c40ce13 ("drm/i915: Trim the object sg table") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161223145804.6605-4-chris@chris-wilson.co.uk (cherry picked from commit 8bfc478f) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Chris Wilson
When we teardown the backing storage for the phys object, we copy from the coherent contiguous block back to the shmemfs object, clflushing as we go. Trying to clflush the invalid sg beforehand just oopses and would be redundant (due to it already being coherent, and clflushed afterwards). Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: <drm-intel-fixes@lists.freedesktop.org> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161223145804.6605-3-chris@chris-wilson.co.uk (cherry picked from commit e5facdf9) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 31 December 2016, 1 commit
-
-
Committed by Chris Wilson
The existing kerneldoc was outdated, so time for a refresh. v2: Use single line kdoc, mention functions for manipulation. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/20161231112012.29263-3-chris@chris-wilson.co.uk
-
- 24 December 2016, 5 commits
-
-
Committed by Chris Wilson
As the fence may be signaled concurrently from an interrupt on another device, it is possible for the list of requests on the timeline to be modified as we walk it. Take both (the context's timeline and the global timeline) locks to prevent such modifications. Fixes: 80b204bc ("drm/i915: Enable multiple timelines") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Cc: <drm-intel-fixes@lists.freedesktop.org> Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161223145804.6605-10-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
As trimming the sg table is merely an optimisation that gracefully fails if we cannot allocate a new table, we do not need to report the failure either. Fixes: 0c40ce13 ("drm/i915: Trim the object sg table") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161223145804.6605-4-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
When we teardown the backing storage for the phys object, we copy from the coherent contiguous block back to the shmemfs object, clflushing as we go. Trying to clflush the invalid sg beforehand just oopses and would be redundant (due to it already being coherent, and clflushed afterwards). Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: <drm-intel-fixes@lists.freedesktop.org> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161223145804.6605-3-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
The idle work handler is self-arming - if it detects that it needs to run again it will queue itself from its work handler. Take greater care when trying to drain the idle work, and double check that it is flushed. The free worker has a similar issue where it is armed by an RCU task which may be running concurrently with us. This should hopefully help with the sporadic WARN_ON(dev_priv->gt.awake) from i915_gem_suspend. v2: Reuse drain_freed_objects. v3: Don't try to flush the freed objects from the shrinker, as it may be underneath the struct_mutex already. v4: do while and comment upon the excess rcu_barrier in drain_freed_objects Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161223145804.6605-2-chris@chris-wilson.co.uk
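A hedged sketch of the drain loop described above (member names such as mm.free_list and mm.free_work are illustrative guesses at the driver-internal layout, not verified against the patch):

    static void drain_freed_objects(struct drm_i915_private *i915)
    {
            do {
                    /* Wait for RCU-deferred frees to land on the worklist
                     * (occasionally an excess barrier, but cheap and safe)... */
                    rcu_barrier();
                    /* ...then run the worker that actually releases them. */
                    flush_work(&i915->mm.free_work);
            } while (!llist_empty(&i915->mm.free_list));
    }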
-
Committed by Chris Wilson
Since commit db6c2b41 ("drm/i915: Store the vma in an rbtree under the object") the vma are once again sorted into GGTT first, then ppGTT, so that the typical case of walking the GGTT vma can stop as soon as we find a non-GGTT vma. Apply that optimisation. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161223145804.6605-1-chris@chris-wilson.co.uk
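The optimisation is essentially the early break sketched below (list member names reflect the i915 of that era and are meant as illustration, not as the exact patched call sites):

    static void for_each_ggtt_vma_example(struct drm_i915_gem_object *obj)
    {
            struct i915_vma *vma;

            list_for_each_entry(vma, &obj->vma_list, obj_link) {
                    if (!i915_vma_is_ggtt(vma))
                            break;  /* GGTT vma sort first; the rest are ppGTT */

                    /* ... operate on the GGTT-bound vma ... */
            }
    }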
-
- 20 December 2016, 4 commits
-
-
Committed by Chris Wilson
If we at first do not succeed with attempting to remap our physical pages using a coalesced scattergather list, try again with one scattergather entry per page. This should help with swiotlb as it uses a limited buffer size and only searches for contiguous chunks within its buffer aligned up to the next boundary - i.e. we may prematurely cause a failure as we are unable to utilize the unused space between large chunks and trigger an error such as: i915 0000:00:02.0: swiotlb buffer is full (sz: 1630208 bytes) Reported-by: Juergen Gross <jgross@suse.com> Tested-by: Juergen Gross <jgross@suse.com> Fixes: 871dfbd6 ("drm/i915: Allow compaction upto SWIOTLB max segment size") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: <drm-intel-fixes@lists.freedesktop.org> Link: http://patchwork.freedesktop.org/patch/msgid/20161219124346.550-1-chris@chris-wilson.co.uk Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> (cherry picked from commit d766ef53) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Chris Wilson
In commit a4f5ea64 ("drm/i915: Refactor object page API"), I reordered the object->pages teardown to be more friendly wrt a separate obj->mm.lock. However, I overlooked the phys object and left it with a dangling use-after-free of its phys_handle. Move the allocation of the phys handle to get_pages and its release to put_pages to prevent the invalid access and to improve symmetry. v2: Add commentary about always aligning to page size. Testcase: igt/drv_selftest/objects Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Fixes: a4f5ea64 ("drm/i915: Refactor object page API") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161207133411.8028-1-chris@chris-wilson.co.uk (cherry picked from commit dbb4351b) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Chris Wilson
In commit 0c40ce13 ("drm/i915: Trim the object sg table"), we expect to copy exactly orig_st->nents across and allocate the table thusly. The copy loop should therefore end with the new_sg being NULL. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161219124346.550-2-chris@chris-wilson.co.uk Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
Committed by Chris Wilson
If we at first do not succeed with attempting to remap our physical pages using a coalesced scattergather list, try again with one scattergather entry per page. This should help with swiotlb as it uses a limited buffer size and only searches for contiguous chunks within its buffer aligned up to the next boundary - i.e. we may prematurely cause a failure as we are unable to utilize the unused space between large chunks and trigger an error such as: i915 0000:00:02.0: swiotlb buffer is full (sz: 1630208 bytes) Reported-by: Juergen Gross <jgross@suse.com> Tested-by: Juergen Gross <jgross@suse.com> Fixes: 871dfbd6 ("drm/i915: Allow compaction upto SWIOTLB max segment size") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Imre Deak <imre.deak@intel.com> Cc: <drm-intel-fixes@lists.freedesktop.org> Link: http://patchwork.freedesktop.org/patch/msgid/20161219124346.550-1-chris@chris-wilson.co.uk Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
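A hedged sketch of the retry shape (build_sg_table() is a hypothetical helper standing in for the driver's scatterlist construction; this is not the literal i915 code):

    #include <linux/dma-mapping.h>
    #include <linux/err.h>
    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    static struct sg_table *map_object_pages(struct device *dev,
                                             struct drm_i915_gem_object *obj,
                                             unsigned int max_segment)
    {
            struct sg_table *st;

            for (;;) {
                    st = build_sg_table(obj, max_segment);  /* hypothetical */
                    if (IS_ERR(st))
                            return st;

                    if (dma_map_sg(dev, st->sgl, st->nents, DMA_BIDIRECTIONAL))
                            return st;      /* mapped successfully */

                    sg_free_table(st);
                    kfree(st);

                    if (max_segment == PAGE_SIZE)   /* already one entry per page */
                            return ERR_PTR(-ENOMEM);

                    max_segment = PAGE_SIZE;        /* retry fully split */
            }
    }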
-
- 19 December 2016, 1 commit
-
-
Committed by Chris Wilson
The requests conversion introduced a nasty bug where we could generate a new request in the middle of constructing a request if we needed to idle the system in order to evict space for a context. The request to idle would be executed (and waited upon) before the current one, creating minor havoc in the seqno accounting, as we will consider the current request to already be completed (prior to deferred seqno assignment) but ring->last_retired_head would have been updated and still could allow us to overwrite the current request before execution. We also employed two different mechanisms to track the active context until it was switched out. The legacy method allowed for waiting upon an active context (it could forcibly evict any vma, including contexts'), but the execlists method took a step backwards by pinning the vma for the entire active lifespan of the context (the only way to evict was to idle the entire GPU, not individual contexts). However, to circumvent the tricky issue of locking (i.e. we cannot take struct_mutex at the time of i915_gem_request_submit(), where we would want to move the previous context onto the active tracker and unpin it), we take the execlists approach and keep the contexts pinned until retirement. The benefit of the execlists approach, more important for execlists than legacy, was the reduction in work in pinning the context for each request - as the context was kept pinned until idle, it could short circuit the pinning for all active contexts. We introduce new engine vfuncs to pin and unpin the context respectively. The context is pinned at the start of the request, and only unpinned when the following request is retired (this ensures that the context is idle and coherent in main memory before we unpin it). We move the engine->last_context tracking into the retirement itself (rather than during request submission) in order to allow the submission to be reordered or unwound without undue difficulty. And finally an ulterior motive for unifying context handling was to prepare for mock requests. v2: Rename to last_retired_context, split out legacy_context tracking for MI_SET_CONTEXT. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161218153724.8439-3-chris@chris-wilson.co.uk
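The retirement side of the scheme reduces to something like the sketch below; last_retired_context comes from the commit's own v2 note, while the context_unpin vfunc signature is an illustrative guess rather than the exact one added:

    static void retire_request_context(struct drm_i915_gem_request *rq)
    {
            struct intel_engine_cs *engine = rq->engine;

            /*
             * The previous context is only unpinned once a later request has
             * retired, so it is guaranteed idle and coherent in memory here.
             */
            if (engine->last_retired_context)
                    engine->context_unpin(engine, engine->last_retired_context);
            engine->last_retired_context = rq->ctx;
    }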
-
- 17 December 2016, 1 commit
-
-
Committed by Matthew Auld
Convert some of the obvious hand-rolled range overflow sanity checks to our shiny new range_overflows macro. Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161213203222.32564-4-matthew.auld@intel.com Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
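A hedged before/after illustration of such a conversion (argument names are made up for the example, not taken from a specific call site):

    /* Hand-rolled, easy to get subtly wrong w.r.t. overflow: */
    static int check_range_old(u64 offset, u64 size, u64 max)
    {
            if (offset >= max || size > max - offset)
                    return -EINVAL;
            return 0;
    }

    /* Converted to the shared macro: */
    static int check_range_new(u64 offset, u64 size, u64 max)
    {
            if (range_overflows(offset, size, max))
                    return -EINVAL;
            return 0;
    }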
-
- 15 December 2016, 1 commit
-
-
Committed by Jan Kara
Every single user of vmf->virtual_address typed that entry to unsigned long before doing anything with it, so the type of virtual_address does not really provide us any additional safety. Just use masked vmf->address, which already has the appropriate type. Link: http://lkml.kernel.org/r/1479460644-25076-3-git-send-email-jack@suse.cz Signed-off-by: Jan Kara <jack@suse.cz> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
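A hedged sketch of a typical fault-handler hunk after this change (loosely modelled on a GEM fault handler, not any specific driver's converted code):

    #include <linux/mm.h>

    /* Page offset of the fault within the mapping, from the typed address. */
    static pgoff_t fault_page_offset(struct vm_area_struct *area,
                                     struct vm_fault *vmf)
    {
            return ((vmf->address & PAGE_MASK) - area->vm_start) >> PAGE_SHIFT;
    }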
-
- 12 December 2016, 1 commit
-
-
Committed by Chris Wilson
In commit a4f5ea64 ("drm/i915: Refactor object page API"), I reordered the object->pages teardown to be more friendly wrt a separate obj->mm.lock. However, I overlooked the phys object and left it with a dangling use-after-free of its phys_handle. Move the allocation of the phys handle to get_pages and its release to put_pages to prevent the invalid access and to improve symmetry. v2: Add commentary about always aligning to page size. Testcase: igt/drv_selftest/objects Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Fixes: a4f5ea64 ("drm/i915: Refactor object page API") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161207133411.8028-1-chris@chris-wilson.co.uk
-
- 08 December 2016, 1 commit
-
-
Committed by Jani Nikula
Pineview deserves to use its own platform enum (which was already added, unused, previously). IS_G33() no longer matches Pineview, and gets replaced by IS_G33() || IS_PINEVIEW() or equivalent. Pineview is no longer an outlier among platform definitions. Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1481143689-19672-1-git-send-email-jani.nikula@intel.com
-
- 07 December 2016, 1 commit
-
-
Committed by Jani Nikula
Add more consistency to our naming. Pineview remains the outlier. Keep using code names for gen5+. v2: rebased. Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1481105584-23033-1-git-send-email-jani.nikula@intel.com
-
- 06 December 2016, 1 commit
-
-
Committed by Chris Wilson
We need to distinguish between full i915_vma structs and simple drm_mm_nodes when considering eviction (i.e. we must be careful not to treat a mere drm_mm_node as a much larger i915_vma, causing memory corruption, if we are lucky). To do this, color these not-a-vma with -1 (I915_COLOR_UNEVICTABLE). v2...v200: New name for -1. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161205142941.21965-1-chris@chris-wilson.co.uk
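A hedged sketch of colouring a bare drm_mm node so the eviction code can recognise it (the reserve helper is illustrative; I915_COLOR_UNEVICTABLE is the -1 described above):

    #include <drm/drm_mm.h>
    #include <linux/string.h>

    #define I915_COLOR_UNEVICTABLE (-1)     /* not an i915_vma: never evict */

    static int reserve_unevictable_node(struct drm_mm *mm, struct drm_mm_node *node,
                                        u64 start, u64 size)
    {
            memset(node, 0, sizeof(*node));
            node->start = start;
            node->size = size;
            node->color = I915_COLOR_UNEVICTABLE;

            return drm_mm_reserve_node(mm, node);
    }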
-
- 05 December 2016, 1 commit
-
-
Committed by Chris Wilson
Since the submit/execute split in commit d55ac5bf ("drm/i915: Defer transfer onto execution timeline to actual hw submission") the global seqno advance was deferred until the submit_request callback. After wedging the GPU, we were installing a nop_submit_request handler (to avoid waking up the dead hw) but I had missed converting this over to the new scheme. Under the new scheme, we have to explicitly call i915_gem_request_submit() from the submit_request handler to mark the request as on the hardware. If we don't, the request is always pending, and any waiter will continue to wait indefinitely and hangcheck will not be able to resolve the lockup. References: https://bugs.freedesktop.org/show_bug.cgi?id=98748 Testcase: igt/gem_eio/in-flight Fixes: d55ac5bf ("drm/i915: Defer transfer onto execution timeline to actual hw submission") Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161122144121.7379-3-chris@chris-wilson.co.uk (cherry picked from commit 3dcf93f7) Signed-off-by: Jani Nikula <jani.nikula@intel.com>
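The shape of the fix is roughly the sketch below (close to, but not guaranteed to be byte-for-byte, the referenced patch): even the wedged no-op submit path must mark the request as submitted and then report it complete, since the dead hardware never will:

    static void nop_submit_request(struct drm_i915_gem_request *request)
    {
            /* Transfer the request onto the execution timeline... */
            i915_gem_request_submit(request);
            /* ...and immediately advance the global seqno past it. */
            intel_engine_init_global_seqno(request->engine, request->global_seqno);
    }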
-
- 02 December 2016, 1 commit
-
-
Committed by Tvrtko Ursulin
Simplifies the code to pass the right parameter in. v2: Commit message. (Joonas Lahtinen) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-