- 07 Oct 2017, 1 commit

Committed by Matthew Auld
Move the setting/clearing of vma->pages to a vm operation. Doing so neatens things up a little, but more importantly gives us a sane place to also set/clear vma->page_sizes, which we introduce later in preparation for supporting huge pages.

v2: remove redundant vma->pages check
v3: GEM_BUG_ON(vma->pages) following i915_vma_remove

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-8-matthew.auld@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-7-chris@chris-wilson.co.uk
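A minimal sketch of the shape this change takes; the struct and function names below are illustrative assumptions, not the exact i915 code:

```c
struct vma;

/* Per-address-space callbacks: one sane place to populate/release
 * vma->pages, and later vma->page_sizes alongside them. */
struct vm_pages_ops {
	int  (*set_pages)(struct vma *vma);
	void (*clear_pages)(struct vma *vma);
};

struct vma {
	const struct vm_pages_ops *ops;
	void *pages;             /* backing page list; NULL when cleared */
	unsigned int page_sizes; /* tracked together with pages */
};

static int vma_bind_pages(struct vma *vma)
{
	return vma->ops->set_pages(vma); /* instead of open-coding per caller */
}

static void vma_unbind_pages(struct vma *vma)
{
	vma->ops->clear_pages(vma);
	/* the v3 GEM_BUG_ON(vma->pages) check: pages must be NULL here */
}
```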

- 29 Sep 2017, 1 commit

Committed by Chris Wilson
Take advantage of optimised memset64() instead of open coding it to prefill the GTT pages.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20170926095353.11036-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
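memset64() stores a 64-bit pattern n times, so a page table full of scratch PTEs can be prefilled in one call. A minimal model; the open-coded loop below stands in for the kernel's optimised implementation:

```c
#include <stdint.h>
#include <stddef.h>

/* What the kernel's memset64() does, open-coded for illustration;
 * arch implementations use wider/faster stores. */
static void memset64_model(uint64_t *s, uint64_t v, size_t n)
{
	while (n--)
		*s++ = v;
}

enum { GEN8_PTES = 512 }; /* a 4KiB page of 8-byte PTEs */

static void prefill_pt(uint64_t pt[GEN8_PTES], uint64_t scratch_pte)
{
	memset64_model(pt, scratch_pte, GEN8_PTES);
}
```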

- 22 Sep 2017, 1 commit

Committed by Michal Wajdeczko
Our global struct with params is named exactly the same way as the new preferred name for the drm_i915_private function parameter. To avoid such name reuse, let's use a different name for the global.

v5: pure rename
v6: fix

Credits-to: Coccinelle

@@
identifier n;
@@
(
- i915.n
+ i915_modparams.n
)

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Ville Syrjala <ville.syrjala@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170919193846.38060-1-michal.wajdeczko@intel.com

- 18 Sep 2017, 1 commit

Committed by Zhi Wang
The cache attribute of the required entry has to be the same as the existing value. Only after this requirement is met should the further comparison be performed. After this fix, the refined test case can pass.

v2:
- Refine the title and comments. (Rodrigo)

Fixes: 4395890a ("drm/i915: Introduce private PAT management")
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1505741794-10593-1-git-send-email-zhi.a.wang@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

- 14 Sep 2017, 3 commits

Committed by Zhi Wang
Remove the "INDEX" suffix from the PPAT macros, as they are actually bits, not indexes.

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1505392783-4084-2-git-send-email-zhi.a.wang@intel.com

Committed by Zhi Wang
The private PAT management is to support PPAT entry manipulation. Two APIs are introduced for dynamically managing PPAT entries: intel_ppat_get and intel_ppat_put.

intel_ppat_get will search for an existing PPAT entry which perfectly matches the required PPAT value. If there is none, it will try to allocate a new entry if there is any available PPAT index, or return a partially matched PPAT entry if there are no available PPAT indexes.

intel_ppat_put will put back the PPAT entry which comes from intel_ppat_get. If it's dynamically allocated, the reference count will be decreased. If the reference count drops to zero, the PPAT index is freed again.

Besides, another two callbacks are introduced to support the private PAT management framework. One is ppat->update_hw(), which writes the PPAT configuration in ppat->entries into HW. The other is ppat->match(), which returns a score to show how well two PPAT values match with each other.

v17: Refine the comparison of scores on BDW. (Joonas)
v16: Fix a bug in the PPAT match function of BDW. (Joonas)
v15: Refine some code flow. (Joonas)
v12: Fix a problem "not returning the entry of best score". (Zhenyu)
v7: Keep all the register writes unchanged in this patch. (Joonas)
v6:
- Address all comments from Chris: http://www.spinics.net/lists/intel-gfx/msg136850.html
- Address all comments from Joonas: http://www.spinics.net/lists/intel-gfx/msg136845.html
v5: Add checks and warnings for those platforms which don't have PPAT.
v3:
- Introduce a dirty bitmap for the PPAT registers. (Chris)
- Change the name of the pointer "dev_priv" to "i915". (Chris)
- intel_ppat_{get, put} returns/takes a const intel_ppat_entry *. (Chris)
v2: API re-design. (Chris)

Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> #v7
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
[Joonas: Use BIT() in the enum in bdw_private_pat_match]
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1505392783-4084-1-git-send-email-zhi.a.wang@intel.com
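A compact, self-contained model of the described get/put flow. Hedged: the real i915 implementation differs in locking, types, and scoring; everything below is illustrative:

```c
#include <stdint.h>
#include <stddef.h>

#define PPAT_ENTRIES 8

struct ppat_entry {
	uint8_t value;
	unsigned int refcount; /* 0 means the index is free */
};

struct ppat {
	struct ppat_entry entries[PPAT_ENTRIES];
	unsigned int (*match)(uint8_t a, uint8_t b); /* higher = closer */
	void (*update_hw)(struct ppat *ppat);        /* flush entries to HW */
};

static struct ppat_entry *ppat_get(struct ppat *p, uint8_t value)
{
	struct ppat_entry *free_e = NULL, *best = NULL;
	unsigned int best_score = 0;

	for (int i = 0; i < PPAT_ENTRIES; i++) {
		struct ppat_entry *e = &p->entries[i];

		if (!e->refcount) {              /* remember a free index */
			if (!free_e)
				free_e = e;
		} else if (e->value == value) {  /* perfect match wins */
			e->refcount++;
			return e;
		} else {                         /* track best partial match */
			unsigned int s = p->match(e->value, value);
			if (s >= best_score) {
				best_score = s;
				best = e;
			}
		}
	}

	if (free_e) {                            /* allocate a new entry */
		free_e->value = value;
		free_e->refcount = 1;
		p->update_hw(p);
		return free_e;
	}
	best->refcount++;                        /* no index left: settle */
	return best;
}

static void ppat_put(struct ppat_entry *e)
{
	--e->refcount; /* at zero, the index is free for reuse again */
}
```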

Committed by Michal Hocko
GFP_TEMPORARY was introduced by commit e12ba74d ("Group short-lived and reclaimable kernel allocations") along with __GFP_RECLAIMABLE. Its primary motivation was to allow users to tell that an allocation is short lived and so the allocator can try to place such allocations close together and prevent long term fragmentation. As much as this sounds like a reasonable semantic, it becomes much less clear when to use the high-level GFP_TEMPORARY allocation flag. How long is temporary? Can the context holding that memory sleep? Can it take locks? It seems there is no good answer to those questions.

The current implementation of GFP_TEMPORARY is basically GFP_KERNEL | __GFP_RECLAIMABLE, which in itself is tricky because basically none of the existing callers provide a way to reclaim the allocated memory. So this is rather misleading and hard to evaluate for any benefits.

I have checked some random users and none of them has added the flag with a specific justification. I suspect most of them just copied from other existing users and others just thought it might be a good idea to use without any measuring. This suggests that GFP_TEMPORARY just motivates cargo cult usage without any reasoning.

I believe that our gfp flags are quite complex already and especially those with high-level semantics should be clearly defined to prevent confusion and abuse. Therefore I propose dropping GFP_TEMPORARY and replacing all existing users with plain GFP_KERNEL. Please note that SLAB users with shrinkers will still get the __GFP_RECLAIMABLE heuristic and so they will be placed properly for memory fragmentation prevention.

I can see reasons we might want some gfp flag to reflect short-term allocations, but I propose starting from a clear semantic definition and only then adding users with proper justification.

This was brought up before LSF this year by Matthew [1] and it turned out that GFP_TEMPORARY really doesn't have a clear semantic. It seems to be a heuristic without any measured advantage for most (if not all) of its current users. The follow-up discussion revealed that opinions on what might be a temporary allocation differ a lot between developers. So rather than trying to tweak existing users into a semantic which they haven't expected, I propose to simply remove the flag and start from scratch if we really need a semantic for short-term allocations.

[1] http://lkml.kernel.org/r/20170118054945.GD18349@bombadil.infradead.org

[akpm@linux-foundation.org: fix typo]
[akpm@linux-foundation.org: coding-style fixes]
[sfr@canb.auug.org.au: drm/i915: fix up]
Link: http://lkml.kernel.org/r/20170816144703.378d4f4d@canb.auug.org.au
Link: http://lkml.kernel.org/r/20170728091904.14627-1-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Neil Brown <neilb@suse.de>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
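The conversion itself is mechanical. A hedged illustration at a hypothetical call site (the helper name is made up; the flags are real):

```c
#include <linux/slab.h>

/* Hypothetical call site. Before the cleanup this read:
 *     return kmalloc(len, GFP_TEMPORARY);
 * GFP_TEMPORARY was GFP_KERNEL | __GFP_RECLAIMABLE, but nothing here
 * provides a way to reclaim the buffer, so the hint was meaningless. */
static void *alloc_scratch_buffer(size_t len)
{
	return kmalloc(len, GFP_KERNEL);
}
```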

- 12 Sep 2017, 1 commit

Committed by Zhi Wang
Factor out setup_private_pat() in preparation for the following patches.

Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ben Widawsky <benjamin.widawsky@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1505202148-22959-1-git-send-email-zhi.a.wang@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

- 09 Sep 2017, 1 commit

Committed by Chris Wilson
If we know that we will completely fill a pagetable (i.e. we are inserting a complete set of 512 pages), we can skip prefilling that PT with scratch entries. If we have to abort the insertion prior to writing the real entries, we will tear down the pagetable and remove it from the page directory (so that we will restart the allocation next time). We could do similar tricks for the PD and PDP, but the likelihood of a single insertion covering the entire 512 entries diminishes, as do the cycle savings. The savings are even greater (relatively) when we are preallocating page tables for huge pages, as then we never need to fill the page table.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170908181622.17791-1-chris@chris-wilson.co.uk
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
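The core test is tiny. A hedged sketch (the PTE count per table is real; the helper and its shape are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

enum { GEN8_PTES = 512 }; /* PTEs per page table */

/* Scrub a freshly allocated PT with scratch entries only when the
 * pending insertion leaves gaps; a full 512-entry insertion will
 * overwrite every slot anyway. */
static void init_pt(uint64_t pt[GEN8_PTES], uint64_t scratch_pte,
		    size_t entries_to_insert)
{
	if (entries_to_insert < GEN8_PTES) {
		for (size_t i = 0; i < GEN8_PTES; i++)
			pt[i] = scratch_pte;
	}
}
```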

- 07 Sep 2017, 1 commit

Committed by Chris Wilson
shrink_slab() allows us to report back the number of objects we successfully scanned (out of the target shrinkctl->nr_to_scan). As we report the number of pages owned by each GEM object as a separate item to the shrinker, we cannot precisely control the number of shrinker objects we scan on each pass; and indeed may free more than requested. If we fail to tell the shrinker about the number of objects we process, it will continue to hold a grudge against us, as any objects left unscanned are added to the next reclaim -- and so we will keep on "unfairly" shrinking our own slab in comparison to other slabs.

Link: http://lkml.kernel.org/r/20170822135325.9191-2-chris@chris-wilson.co.uk
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Shaohua Li <shli@fb.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
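A hedged sketch of a scan callback that reports what it actually did. have_next_object(), object_pages(), and try_free_object() are hypothetical stand-ins for the driver's bookkeeping; shrink_control::nr_scanned is the field this series relies on:

```c
#include <linux/shrinker.h>

struct gem_object;
struct gem_object *have_next_object(void);              /* hypothetical */
unsigned long object_pages(struct gem_object *obj);     /* hypothetical */
unsigned long try_free_object(struct gem_object *obj);  /* hypothetical */

static unsigned long
gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
	unsigned long freed = 0, scanned = 0;
	struct gem_object *obj;

	while (scanned < sc->nr_to_scan && (obj = have_next_object())) {
		scanned += object_pages(obj); /* counted in pages, not objects */
		freed += try_free_object(obj);
	}

	/* Report the real amount scanned; otherwise the core assumes
	 * nr_to_scan objects are still pending and carries the deficit
	 * into the next reclaim pass, unfairly pressuring this slab. */
	sc->nr_scanned = scanned;

	return freed;
}
```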

- 06 Sep 2017, 1 commit

Committed by Zhi Wang
Add back the GEN8_PPAT_WB cache attributes in cnl_setup_private_ppat(), which were missed on CNL.

Fixes: 4e34935f ("drm/i915/cnl: Setup PAT Index.")
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1504208177-27784-1-git-send-email-zhi.a.wang@intel.com
(cherry picked from commit 6e31cdcf)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

- 02 Sep 2017, 1 commit

Committed by Zhi Wang
Add back the GEN8_PPAT_WB cache attributes in cnl_setup_private_ppat(), which were missed on CNL.

Fixes: 4e34935f ("drm/i915/cnl: Setup PAT Index.")
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1504208177-27784-1-git-send-email-zhi.a.wang@intel.com

- 01 Sep 2017, 1 commit

Committed by Rodrigo Vivi
The driver's CPU access to GTT is via the GTTMMADR BAR. The current HW implementation of that BAR only supports writes of <= DW (and maybe QW) size, not the 16/32/64B writes that could occur with WC and/or SSE/AVX moves. GTTMMADR must be marked uncacheable (UC). Accesses to GTTMMADR (GTT) must be 64 bits or less (i.e. 1 GTT entry).

v2: Get clarification on the reasons; the spec is being updated to reflect it now.

Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Suggested-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170829230907.21363-1-rodrigo.vivi@intel.com
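In code terms the constraint means PTE updates go through individual 8-byte MMIO stores, never wide or write-combined copies. A hedged sketch (writeq() is the real kernel accessor; the surrounding function is illustrative):

```c
#include <linux/io.h>
#include <linux/types.h>

/* One GTT entry per store: writeq() issues an 8-byte UC write, which
 * is the widest access GTTMMADR guarantees to handle. */
static void gtt_write_ptes(u64 __iomem *gtt, const u64 *ptes, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		writeq(ptes[i], &gtt[i]);
}
```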

- 23 Aug 2017, 1 commit

Committed by Chris Wilson
We use WC pages for coherent writes into the ppGTT on !llc architectures. However, creating a WC page requires a stop_machine(), i.e. it is very slow. To compensate we currently keep a per-vm cache of recently freed pages, but we still see the slow startup of new contexts. We can amortize that cost slightly by allocating WC pages in small batches (PAGEVEC_SIZE == 14), and since creating a WC page implies a stop_machine(), there is no penalty for keeping that stash global.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170822173828.5932-1-chris@chris-wilson.co.uk
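A hedged sketch of the batching idea, using real x86 kernel helpers (alloc_page(), set_pages_array_wc()); the stash layout and function are illustrative:

```c
#include <linux/gfp.h>
#include <linux/pagevec.h>
#include <linux/errno.h>
#include <asm/set_memory.h>

/* Convert PAGEVEC_SIZE pages to WC in one go: the expensive
 * stop_machine()-backed attribute change is paid once per batch
 * instead of once per page. */
static int refill_wc_stash(struct page *stash[PAGEVEC_SIZE])
{
	int n;

	for (n = 0; n < PAGEVEC_SIZE; n++) {
		stash[n] = alloc_page(GFP_KERNEL);
		if (!stash[n])
			break;
	}
	if (!n)
		return -ENOMEM;

	if (set_pages_array_wc(stash, n)) {
		while (n--)
			__free_page(stash[n]);
		return -ENOMEM;
	}
	return n; /* number of WC pages now available */
}
```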

- 19 Aug 2017, 1 commit

Committed by Rodrigo Vivi
Let's inherit workarounds from previous platforms that, according to the wa_database and BSpec, are still valid for Cannonlake.

v2: Add missed workarounds.
v3: Rebase
v4: Remove a bad chunk that was added to rc6 disable. (Ander) Also remove A0 W/a that are not needed anymore.
v5: Rebase on top of CFL.
v6: Remove empty gen9_init_perctx_bb and gen9_init_indirectctx_bb since they don't carry any gen10-related W/a. (by Oscar) Also remove an A0-exclusive workaround.
v7: Remove more A0-exclusive workarounds. As pointed out by Oscar, many workarounds were changed to be A0 only, so let's remove them.

Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Oscar Mateo <oscar.mateo@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170815231651.975-1-rodrigo.vivi@intel.com

- 16 Aug 2017, 1 commit

Committed by Rodrigo Vivi
Different from previous platforms, on CNL+ there are separate registers for separate indexes.

v2: Remove comments regarding uncertainty around the table.
v3: Remove extra line. (by Ben)

Cc: Clint Taylor <clinton.a.taylor@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170815232539.3562-1-rodrigo.vivi@intel.com

- 15 Aug 2017, 2 commits

Committed by Tina Zhang
Enable the guest i915 full ppgtt functionality when the host can provide this capability. vgt_caps is introduced to the guest i915 driver to get the vgpu capabilities from the device model. VGT_CAPS_FULL_PPGTT is one of the capability types, used to let the guest i915 driver know that guest i915 full ppgtt is supported by the device model.

Notice that the minor version of pvinfo isn't bumped for this vgt_caps introduction, because older guests would be broken by simply increasing the pvinfo version. Although the pvinfo minor version doesn't increase, compatibility won't be blocked; it is ensured by checking the value of the caps field in pvinfo. Zero means no full ppgtt support and BIT(2) means this feature is provided.

Changes since v1:
- Use u32 instead of uint32_t. (Joonas)
- Move the VGT_CAPS_FULL_PPGTT introduction to this patch and use #define instead of enum. (Joonas)
- Rewrite the vgpu full ppgtt capability checking logic. (Joonas)
- Some coding style refinements. (Joonas)

Changes since v2:
- Divide the whole patch set into two separate patch series, with one patch on the i915 side to check the guest i915 full ppgtt capability and enable it when this capability is supported by the device model, and the other on the gvt side which fixes the blocking issue and enables the device model to provide the capability to the guest. This patch focuses on the guest i915 side. (Joonas)
- Change the title from "introduce vgt_caps to pvinfo" to "Enable guest i915 full ppgtt functionality". (Tina)

Changes since v3:
- Add some comments about pvinfo caps and version. (Joonas)

Changes since v4:
- Tested by Tina Zhang.

Changes since v5:
- Add a limitation about supporting 32bit full ppgtt.

Changes since v6:
- Change the fallback to 48bit full ppgtt if i915.ppgtt_enable=2. (Zhenyu)

Changes in v9:
- Remove the fixme comment since there is no plan for 32bit full ppgtt support. (Zhenyu)
- Reorder the patch set to fix a compile issue with git-bisect. (Zhenyu)
- Add a log message when forcing guest 48bit full ppgtt. (Zhenyu)

v10:
- Update against Joonas's has_full_ppgtt and has_full_48bit_ppgtt disconnect change. (Zhenyu)

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> # in v2
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
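The negotiation described above boils down to one bit test. A hedged sketch: the macro and its BIT(2) value follow the commit's description, while the struct and helper are illustrative:

```c
#include <linux/types.h>
#include <linux/bits.h>

#define VGT_CAPS_FULL_PPGTT BIT(2) /* per the commit: BIT(2) == supported */

struct vgpu_caps {
	u32 caps; /* read from the PVINFO page exposed by the device model */
};

/* Zero caps means an older device model: no full ppgtt. The pvinfo
 * minor version is deliberately not bumped, so this bit is the whole
 * compatibility contract. */
static bool vgpu_has_full_ppgtt(const struct vgpu_caps *vgpu)
{
	return vgpu->caps & VGT_CAPS_FULL_PPGTT;
}
```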

Committed by Joonas Lahtinen
Configurations like virtualized environments may support only 48 bit ppGTT without supporting 32 bit ppGTT. Support this by disconnecting the relationship of the two feature bits.

Cc: Tina Zhang <tina.zhang@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Zhi Wang <zhi.a.wang@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>

- 07 Jul 2017, 1 commit

Committed by Chuanxiao Dong
The ppgtt should be obtained directly from i915_address_space *vm instead of vma->vm.

v2:
- Add one more fix for bxt. (Chris)

Fixes: 4a234c5f ("drm/i915: pass the vma to insert_entries")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=101713
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com> v1
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1499421059-18262-1-git-send-email-chuanxiao.dong@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
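The pattern of the fix, sketched. container_of() is the real mechanism behind this kind of downcast; the stub types are trimmed placeholders, not the i915 definitions:

```c
#include <linux/kernel.h> /* container_of */

struct i915_address_space_stub {
	unsigned long total;
};

struct hw_ppgtt_stub {
	struct i915_address_space_stub base;
	void *pml4; /* page directory pointers etc. */
};

/* Downcast from the address space actually passed into the vm op... */
static struct hw_ppgtt_stub *
vm_to_ppgtt(struct i915_address_space_stub *vm)
{
	return container_of(vm, struct hw_ppgtt_stub, base);
}

/* ...rather than from vma->vm, which can name a different address
 * space when an aliasing ppgtt is in play. */
```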

- 22 Jun 2017, 1 commit

Committed by Matthew Auld
The vma already contains most of the information we need for insertion. But also in preparation for supporting huge gtt pages, it would be useful to know the details of the vma, such that we can easily determine the page sizes we are allowed to use when inserting into the 48b PPGTT. This is especially true for 64K, where we can't just arbitrarily use it, since we require aligning/padding the vm space to 2M, which sometimes we can't enforce in the upper levels.

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170622095836.6800-1-matthew.auld@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

- 17 Jun 2017, 1 commit

Committed by Rodrigo Vivi
Coffee Lake inherits most of the Kabylake production workarounds.

v2: Fix a typo in the commit message and remove WaDisableKillLogic and GEN9_DISABLE_OCL_OOB_SUPPRESS_LOGIC, since as Mika pointed out they shouldn't be here for cfl according to BSpec.

Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1497653398-15722-1-git-send-email-rodrigo.vivi@intel.com

- 16 Jun 2017, 1 commit

Committed by Chris Wilson
When choosing a slot for an execbuffer, we ideally want to use the same address as last time (so that we don't have to rebind it) and the same address as expected by the user (so that we don't have to fix up any relocations pointing to it). We therefore first try to bind at the incoming execbuffer->offset from the user, or at the currently bound offset; that should hopefully achieve the goal of avoiding the rebind cost and the relocation penalty. However, if the object is not currently bound there, we don't want to arbitrarily unbind an object in our chosen position, and so we choose to rebind/relocate the incoming object instead. After we report the new position back to the user, on the next pass the relocations should have settled down.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtien@linux.intel.com>

- 07 Jun 2017, 2 commits

Committed by Chris Wilson
Commit 7c3f86b6 ("drm/i915: Invalidate the guc ggtt TLB upon insertion") added the restoration of the invalidation routine after the GuC was disabled, but missed that the GuC was unconditionally disabled when not used. This then overwrites the invalidate routine for the older chipsets, causing havoc and breaking resume as the most obvious victim. We place the guard inside i915_ggtt_disable_guc() to be backport friendly (the bug was introduced into v4.11), but it would be preferred to be in more control over when this was guarded (i.e. do not try to tear down the data structures before we have enabled them). That should be true with the reorganisation of the guc loaders.

Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Fixes: 7c3f86b6 ("drm/i915: Invalidate the guc ggtt TLB upon insertion")
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Cc: <stable@vger.kernel.org> # v4.11+
Link: http://patchwork.freedesktop.org/patch/msgid/20170531190514.3691-1-chris@chris-wilson.co.uk
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
(cherry picked from commit cb60606d)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

Committed by Jon Bloomfield
BXT has a H/W issue with IOMMU which can lead to system hangs when Aperture accesses are queued within the GAM behind GTT accesses. This patch avoids the condition by wrapping all GTT updates in stop_machine and using a flushing read prior to restarting the machine.

The stop_machine guarantees no new Aperture accesses can begin while the PTE writes are being emitted. The flushing read ensures that any following Aperture accesses cannot begin until the PTE writes have been cleared out of the GAM's fifo.

Only FOLLOWING Aperture accesses need to be separated from in-flight PTE updates. PTE writes may follow tightly behind already in-flight Aperture accesses, so no flushing read is required at the start of a PTE update sequence.

This issue was reproduced by running igt/gem_readwrite and igt/gem_render_copy simultaneously from different processes, each in a tight loop, with INTEL_IOMMU enabled. This patch was originally published as: drm/i915: Serialize GTT Updates on BXT.

[Note: This will cause a performance penalty for some use cases, but avoiding hangs trumps performance hits. This may need to be worked around in Mesa to recover the lost performance.]

v2: Move bxt/iommu detection into a static function
    Remove #ifdef CONFIG_INTEL_IOMMU protection
    Make function names more reflective of purpose
    Move the flushing read into a static function
v3: Tidy up for checkpatch.pl

Testcase: igt/gem_concurrent_blit
Signed-off-by: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: John Harrison <john.C.Harrison@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: stable@vger.kernel.org
Link: http://patchwork.freedesktop.org/patch/msgid/1495641251-30022-1-git-send-email-jon.bloomfield@intel.com
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
(cherry picked from commit 0ef34ad6)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 01 Jun 2017, 2 commits

Committed by Chris Wilson
When we enable the GuC, we enable an alternative mechanism for doing post-GGTT-update invalidation. Likewise, when we disable the GuC, we restore the previous method. Assert that we change between known endpoints, so that we can catch if we accidentally clobber some other gen and if we change the invalidate routine without updating the GuC path.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170601090446.1334-1-chris@chris-wilson.co.uk
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

Committed by Chris Wilson
Commit 7c3f86b6 ("drm/i915: Invalidate the guc ggtt TLB upon insertion") added the restoration of the invalidation routine after the GuC was disabled, but missed that the GuC was unconditionally disabled when not used. This then overwrites the invalidate routine for the older chipsets, causing havoc and breaking resume as the most obvious victim. We place the guard inside i915_ggtt_disable_guc() to be backport friendly (the bug was introduced into v4.11), but it would be preferred to be in more control over when this was guarded (i.e. do not try to tear down the data structures before we have enabled them). That should be true with the reorganisation of the guc loaders.

Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Fixes: 7c3f86b6 ("drm/i915: Invalidate the guc ggtt TLB upon insertion")
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Cc: <stable@vger.kernel.org> # v4.11+
Link: http://patchwork.freedesktop.org/patch/msgid/20170531190514.3691-1-chris@chris-wilson.co.uk
Reviewed-by: Michel Thierry <michel.thierry@intel.com>

- 26 May 2017, 1 commit

Committed by Chris Wilson
We depend on intel_iommu_gfx_mapped for various workarounds, but that is only available under an #ifdef CONFIG_INTEL_IOMMU. Refactor all the cut-and-paste ifdefs to a common routine.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170525121612.2190-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
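The common routine plausibly looks like this; a sketch close to the pattern described, where intel_iommu_gfx_mapped is the real exported variable and the helper name is an assumption:

```c
#include <linux/types.h>
#include <linux/intel-iommu.h>

/* One predicate instead of an #ifdef at every call site. */
static inline bool intel_vtd_active(void)
{
#ifdef CONFIG_INTEL_IOMMU
	if (intel_iommu_gfx_mapped)
		return true;
#endif
	return false;
}
```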

- 25 May 2017, 1 commit

Committed by Jon Bloomfield
BXT has a H/W issue with IOMMU which can lead to system hangs when Aperture accesses are queued within the GAM behind GTT accesses. This patch avoids the condition by wrapping all GTT updates in stop_machine and using a flushing read prior to restarting the machine.

The stop_machine guarantees no new Aperture accesses can begin while the PTE writes are being emitted. The flushing read ensures that any following Aperture accesses cannot begin until the PTE writes have been cleared out of the GAM's fifo.

Only FOLLOWING Aperture accesses need to be separated from in-flight PTE updates. PTE writes may follow tightly behind already in-flight Aperture accesses, so no flushing read is required at the start of a PTE update sequence.

This issue was reproduced by running igt/gem_readwrite and igt/gem_render_copy simultaneously from different processes, each in a tight loop, with INTEL_IOMMU enabled. This patch was originally published as: drm/i915: Serialize GTT Updates on BXT.

v2: Move bxt/iommu detection into a static function
    Remove #ifdef CONFIG_INTEL_IOMMU protection
    Make function names more reflective of purpose
    Move the flushing read into a static function
v3: Tidy up for checkpatch.pl

Testcase: igt/gem_concurrent_blit
Signed-off-by: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: John Harrison <john.C.Harrison@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1495641251-30022-1-git-send-email-jon.bloomfield@intel.com
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
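A hedged sketch of the serialization shape. stop_machine(), writeq(), and readq() are real kernel APIs; the callback, argument struct, and flushing-read target are illustrative:

```c
#include <linux/stop_machine.h>
#include <linux/io.h>
#include <linux/types.h>

struct ggtt_insert_arg {
	u64 __iomem *gtt;  /* PTE slots in GTTMMADR */
	const u64 *ptes;
	unsigned int count;
};

/* Runs with all other CPUs corralled: no Aperture access can be
 * queued in the GAM while the PTEs are emitted. */
static int bxt_vtd_ggtt_insert__cb(void *data)
{
	struct ggtt_insert_arg *arg = data;
	unsigned int i;

	for (i = 0; i < arg->count; i++)
		writeq(arg->ptes[i], &arg->gtt[i]);

	/* Flushing read: posted PTE writes must drain from the GAM
	 * fifo before any FOLLOWING Aperture access may begin. */
	(void)readq(arg->gtt);
	return 0;
}

static void bxt_vtd_ggtt_insert(struct ggtt_insert_arg *arg)
{
	stop_machine(bxt_vtd_ggtt_insert__cb, arg, NULL);
}
```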

- 18 May 2017, 2 commits

Committed by Michal Hocko
Now that the drm_[cm]alloc* helpers are simple one-line wrappers around kvmalloc_array and drm_free_large is just a kvfree alias, we can drop them and replace them with their native forms. This shouldn't introduce any functional change.

Changes since v1:
- fix typo in drivers/gpu//drm/etnaviv/etnaviv_gem.c - noticed by the 0day build robot

Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Michal Hocko <mhocko@suse.com>
[danvet: Fixup vgem which grew another user very recently.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Christian König <christian.koenig@amd.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170517122312.GK18247@dhcp22.suse.cz
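A hedged before/after at a hypothetical call site; kvmalloc_array() and kvfree() are the real kernel APIs:

```c
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/types.h>

/* was: table = drm_malloc_ab(nr, sizeof(u64));
 *      ... drm_free_large(table);                */
static u64 *alloc_entry_table(unsigned int nr)
{
	return kvmalloc_array(nr, sizeof(u64), GFP_KERNEL);
}

static void free_entry_table(u64 *table)
{
	kvfree(table); /* handles both kmalloc'd and vmalloc'd memory */
}
```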

Committed by Matthew Auld
For the aliasing ppgtt we clear the va range up to vma->size, but seem to allocate up to vma->node.size, which is a little inconsistent given that vma->node.size >= vma->size. Not that it really matters all that much, since we preallocate anyway, but for consistency just use vma->size.

Fixes: ff685975 ("drm/i915: Move allocate_va_range to GTT")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170516085514.5853-1-matthew.auld@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
(cherry picked from commit d567232c)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 16 May 2017, 1 commit

Committed by Matthew Auld
For the aliasing ppgtt we clear the va range up to vma->size, but seem to allocate up to vma->node.size, which is a little inconsistent given that vma->node.size >= vma->size. Not that it really matters all that much, since we preallocate anyway, but for consistency just use vma->size.

Fixes: ff685975 ("drm/i915: Move allocate_va_range to GTT")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170516085514.5853-1-matthew.auld@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>

- 15 May 2017, 1 commit

Committed by Matthew Auld
If a vma is already bound to a ppgtt, we incorrectly call allocate_va_range again when doing a PIN_UPDATE, which will result in over-accounting within our paging structures, such that when we do unbind something we don't actually destroy the structures and end up inadvertently recycling them. In reality this probably isn't too bad, but once we start touching PDEs and PDPEs for 64K/2M/1G pages this apparent recycling will manifest into lots of really, really subtle bugs.

v2: Fix the testing of vma->flags for aliasing_ppgtt_bind_vma

Fixes: ff685975 ("drm/i915: Move allocate_va_range to GTT")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170512091423.26085-1-chris@chris-wilson.co.uk
(cherry picked from commit 1f23475c)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 12 May 2017, 1 commit

Committed by Matthew Auld
If a vma is already bound to a ppgtt, we incorrectly call allocate_va_range again when doing a PIN_UPDATE, which will result in over-accounting within our paging structures, such that when we do unbind something we don't actually destroy the structures and end up inadvertently recycling them. In reality this probably isn't too bad, but once we start touching PDEs and PDPEs for 64K/2M/1G pages this apparent recycling will manifest into lots of really, really subtle bugs.

v2: Fix the testing of vma->flags for aliasing_ppgtt_bind_vma

Fixes: ff685975 ("drm/i915: Move allocate_va_range to GTT")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170512091423.26085-1-chris@chris-wilson.co.uk
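The essence of the fix, sketched. I915_VMA_LOCAL_BIND is the real flag name; its value, the stub types, and the helpers are illustrative scaffolding, not the exact i915 function:

```c
#include <linux/types.h>
#include <linux/bits.h>

#define I915_VMA_LOCAL_BIND BIT(1) /* real flag; value illustrative */

struct vm_sketch;

struct vma_sketch {
	unsigned int flags;
	u64 start, size;
	struct vm_sketch *vm;
};

int allocate_va_range(struct vm_sketch *vm, u64 start, u64 size); /* stub */
int do_bind(struct vma_sketch *vma, u32 flags);                   /* stub */

static int aliasing_ppgtt_bind_vma_sketch(struct vma_sketch *vma, u32 flags)
{
	if (!(vma->flags & I915_VMA_LOCAL_BIND)) {
		/* first local bind: account the paging structures once */
		int err = allocate_va_range(vma->vm, vma->start, vma->size);
		if (err)
			return err;
	}
	return do_bind(vma, flags); /* PIN_UPDATE reuses the existing range */
}
```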

- 10 May 2017, 2 commits

Committed by Imre Deak
On GEN8+ (not counting CHV) the calculation can in theory result in an incorrect sign extension with all upper bits set. In practice this is unlikely to happen, since it would require 4GB of stolen memory set aside. For consistency, still prevent the sign extension explicitly everywhere.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1494408113-379-6-git-send-email-imre.deak@intel.com
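The bug class, illustrated with a generic hedged example (not the driver's exact expression; the shift count is illustrative): shifting a signed 32-bit value and then widening it to 64 bits sign-extends.

```c
#include <stdint.h>

/* BAD: the shift is computed as a signed 32-bit int; if it sets bit 31,
 * the intermediate value is negative and widening to 64 bits fills all
 * upper 32 bits with ones. */
static uint64_t stolen_size_bad(int gmch)
{
	return gmch << 25; /* possible sign extension on widening */
}

/* GOOD: force unsigned 64-bit arithmetic before the shift. */
static uint64_t stolen_size_good(unsigned int gmch)
{
	return (uint64_t)gmch << 25;
}
```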

Committed by Imre Deak
Even though an error from these functions isn't fatal, we still want to have a diagnostic message about it.

v2:
- Don't do assignments in if statements. (Jani)

Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1494408113-379-4-git-send-email-imre.deak@intel.com

- 09 May 2017, 1 commit

Committed by Laura Abbott
set_memory_* functions have moved to set_memory.h. Switch to this explicitly.

[akpm@linux-foundation.org: track drivers/gpu/drm/i915/i915_gem_gtt.c linux-next changes]
Link: http://lkml.kernel.org/r/1488920133-27229-8-git-send-email-labbott@redhat.com
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 31 Mar 2017, 1 commit

Committed by Chris Wilson
Since commit 1233e2db ("drm/i915: Move object backing storage manipulation to its own locking"), i915_gem_object_put_pages() and specifically i915_gem_gtt_finish_pages() may be called from outside of the struct_mutex and so we can no longer pass I915_WAIT_LOCKED to i915_gem_wait_for_idle.

Fixes: 1233e2db ("drm/i915: Move object backing storage manipulation to its own locking")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org
Cc: <stable@vger.kernel.org> # v4.10+
Link: http://patchwork.freedesktop.org/patch/msgid/20170330085341.20311-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
(cherry picked from commit 228ec87c)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>

- 30 Mar 2017, 1 commit

Committed by Chris Wilson
Since commit 1233e2db ("drm/i915: Move object backing storage manipulation to its own locking"), i915_gem_object_put_pages() and specifically i915_gem_gtt_finish_pages() may be called from outside of the struct_mutex and so we can no longer pass I915_WAIT_LOCKED to i915_gem_wait_for_idle.

Fixes: 1233e2db ("drm/i915: Move object backing storage manipulation to its own locking")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org
Cc: <stable@vger.kernel.org> # v4.10+
Link: http://patchwork.freedesktop.org/patch/msgid/20170330085341.20311-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

- 03 Mar 2017, 2 commits

Committed by Mika Kuoppala
If we manage to tangle the error paths and get a call to the callbacks, it is better to defensively keep them NULL until object init is finished, so that we get a clean NULL dereference at the call site instead of more cryptic wreckage with partly initialized vm objects.

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1488295691-9404-5-git-send-email-mika.kuoppala@intel.com

Committed by Mika Kuoppala
The term "legacy" is subjective. Use 3lvl and 4lvl where appropriate.

Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1488295691-9404-4-git-send-email-mika.kuoppala@intel.com