- 19 Mar, 2014  1 commit
-
-
Submitted by Chris Wilson

We have reports of heavy screen corruption if we try to use the stolen memory reserved by the BIOS whilst the DMA-Remapper is active. This quirk may only be specific to a few machines or BIOSes, but first let's apply the big hammer and always disable use of stolen memory when DMAR is active.

v2 by Jani: Rebase on -fixes, only look at intel_iommu_gfx_mapped.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68535
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
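
A hedged sketch of the check this describes; intel_iommu_gfx_mapped is the symbol named in the commit, while the surrounding context and message text are illustrative only:

    #ifdef CONFIG_INTEL_IOMMU
        if (intel_iommu_gfx_mapped) {
            DRM_INFO("DMAR active, disabling use of stolen memory\n");
            return 0;   /* treat it as if no usable stolen memory exists */
        }
    #endif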
-
- 03 Mar, 2014  1 commit
-
-
Submitted by Akash Goel

There is a conflict seen when requesting the kernel to reserve the physical space used for the stolen area. This is because some BIOSes wrap the stolen area into the root PCI bus window, but with an off-by-one error. As a workaround we retry the reservation with an offset of 1 instead of 0.

v2: updated commit message & the comment in the source file (Daniel)

Signed-off-by: Akash Goel <akash.goel@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Tested-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
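
A sketch of the retry, under the assumption that the reservation is made with the kernel's devm_request_mem_region() helper; the variable names are placeholders, not the exact i915 code:

    r = devm_request_mem_region(dev->dev, base, stolen_size,
                                "Graphics Stolen Memory");
    if (r == NULL) {
        /* Some BIOSes wrap the stolen area into the root PCI bus
         * window with an off-by-one error; retry shifted by 1. */
        r = devm_request_mem_region(dev->dev, base + 1, stolen_size - 1,
                                    "Graphics Stolen Memory");
        if (r == NULL)
            DRM_ERROR("conflict detected with stolen region\n");
    }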
-
- 28 Jan, 2014  1 commit
-
-
Submitted by Akash Goel

The 'offset' field of the 'scatterlist' structure was wrongly programmed with the offset value from the base of the stolen area, whereas this field indicates the offset from where the data of interest starts within the first PAGE pointed to by the 'scatterlist' structure. As a result, when a new GEM object allocated from the stolen area is mapped into the GTT, it could lead to an overwrite of GTT entries, as the page count calculation goes wrong; see the function 'sg_page_count'.

v2: Modified the commit message. (Chris)

Signed-off-by: Akash Goel <akash.goel@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: stable@vger.kernel.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71908
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=69104
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
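
A sketch of the distinction being corrected, using the standard scatterlist fields; stolen_base and stolen_offset are placeholder names:

    struct scatterlist *sg = st->sgl;

    /* 'offset' means "where the data starts within the first page",
     * not "offset from the stolen base", so it must stay 0 here... */
    sg->offset = 0;
    sg->length = size;

    /* ...while the position inside stolen memory is carried by the
     * (fake) DMA address instead. */
    sg_dma_address(sg) = (dma_addr_t)stolen_base + stolen_offset;
    sg_dma_len(sg) = size;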
-
- 18 Dec, 2013  1 commit
-
-
Submitted by Daniel Vetter

But only when we indeed set up a gtt mapping. We need this since the vma also holds a pages_pin_count, on top of the unconditional pages_pin_count we grab for all stolen objects (to avoid swap-out). This should avoid a pages_pin_count underrun when cleaning up framebuffer objects taken over from the BIOS. Chris mentioned in his review that this bug even predates the vma conversion.

Reported-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 05 Sep, 2013  1 commit
-
-
Submitted by Chris Wilson

Paulo reported that if he set the amount of reserved memory to 0, then we emitted a warning about a conflict before disabling our use of stolen memory. This was introduced with

  commit eaba1b8f
  Author: Chris Wilson <chris@chris-wilson.co.uk>
  Date:   Thu Jul 4 12:28:35 2013 +0100

      drm/i915: Verify that our stolen memory doesn't conflict

and is simply fixed by checking for no reservation first.

Reported-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 04 Sep, 2013  1 commit
-
-
Submitted by Daniel Vetter

In the execbuf code we don't clean up any vmas which ended up not getting bound, for code simplicity. To make sure that we don't end up creating multiple vmas for the same vm, kill the somewhat dangerous vma_create function and inline it into lookup_or_create. This is just a safety measure to prevent surprises in the future. Also update the somewhat confused comment in the execbuf code and clarify what kind of magic is going on with a new one.

v2: Keep the function separate as requested by Chris. But give it a __ prefix for paranoia and move it closer to the rest of the vma code.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 22 Aug, 2013  1 commit
-
-
Submitted by Daniel Vetter

Use the standard inversely ordered goto label stack for everything. Spotted while reviewing a place where we might need to call vma_destroy but failed to do so.

Cc: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
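
The "inversely ordered goto label stack" is the usual kernel C error-unwind pattern; a generic illustration with hypothetical types and helpers, not the actual i915 call sites:

    static int bind_example(void)
    {
        struct example_obj *obj;
        struct example_vma *vma;
        int ret;

        obj = alloc_obj();              /* step 1 */
        if (!obj)
            return -ENOMEM;

        vma = create_vma(obj);          /* step 2 */
        if (IS_ERR(vma)) {
            ret = PTR_ERR(vma);
            goto err_free_obj;
        }

        ret = insert_node(vma);         /* step 3 */
        if (ret)
            goto err_destroy_vma;

        return 0;

    err_destroy_vma:                    /* unwind in exactly reverse order */
        destroy_vma(vma);
    err_free_obj:
        free_obj(obj);
        return ret;
    }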
-
- 10 Aug, 2013  1 commit
-
-
Submitted by Chris Wilson

As a corollary to reviewing the interaction between LLC and our cache domains, the GPU PTE bits are independent of the CPU PAT bits. As such we can set the cache level on stolen memory based on how we wish the GPU to cache accesses to it. So we are free to set the same default cache levels as for a normal bo, i.e. enable LLC caching by default where appropriate.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 08 Aug, 2013  1 commit
-
-
Submitted by Ben Widawsky

Formerly: "drm/i915: Create VMAs (part 5) - move mm_list"

The mm_list is used for the active/inactive LRUs. Since those LRUs are per address space, the link should be per VMA. Because we'll only ever have 1 VMA before this point, it's not incorrect to defer this change until this point in the patch series, and doing it here makes the change much easier to understand.

Shamelessly manipulated out of Daniel: "active/inactive stuff is used by eviction when we run out of address space, so needs to be per-vma and per-address space. Bound/unbound otoh is used by the shrinker which only cares about the amount of memory used and not one bit about in which address space this memory is all used in. Of course to actually kick out an object we need to unbind it from every address space, but for that we have the per-object list of vmas."

v2: only bump GGTT LRU in i915_gem_object_set_to_gtt_domain (Chris)
v3: Moved earlier in the series
v4: Add dropped message from v3

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Frob patch to apply and use vma->node.size directly as discussed with Ben. Also drop a needless BUG_ON before move_to_inactive, the function itself has the same check.]
[danvet 2nd: Rebase on top of the lost "drm/i915: Cleanup more of VMA in destroy", specifically unlink the vma from the mm_list in vma_unbind (to keep it symmetric with bind_to_vm) instead of vma_destroy.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 07 Aug, 2013  2 commits
-
-
Submitted by David Herrmann

i915 is the last user of the weird search+get_block drm_mm API. Convert it to an explicit kmalloc()+insert_node(). This drops the last user of the node-cache in drm_mm. We can remove it now in a follow-up patch.

v2: simplify error path in i915_setup_compression()
v3: simplify error path even more

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
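
The explicit allocation the commit converts to looks roughly like the following; the drm_mm_insert_node() signature has changed across kernel versions, so treat this as a sketch rather than the exact call sites:

    struct drm_mm_node *node;
    int ret;

    node = kzalloc(sizeof(*node), GFP_KERNEL);
    if (!node)
        return -ENOMEM;

    ret = drm_mm_insert_node(&dev_priv->mm.stolen, node, size);
    if (ret) {
        kfree(node);
        return ret;
    }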
-
Submitted by David Herrmann

Add a "best_match" flag similar to the drm_mm_search_*() helpers so we can convert TTM to use them in follow-up patches. We can also inline the non-generic helpers and move them into the header to allow compile-time optimizations. To make calls to drm_mm_{search,insert}_node() more readable, this converts the boolean argument to a flag set. There are pending patches that add additional flags for top-down allocators and more.

v2: use a flag parameter instead of a boolean "best_match"; convert the *_search_free() helpers to also use the flags argument

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 06 Aug, 2013  1 commit
-
-
Submitted by Ben Widawsky

Just some small cleanups, and a rename of vm->ggtt_vm requested by Daniel.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Jul, 2013  1 commit
-
-
Submitted by Chris Wilson

So I made the mistake of missing that the desktop and mobile chipsets have different layouts in their PCI configuration space, and we were incorrectly setting the wrong physical address for stolen memory on mobile chipsets. Since all gen3+ chipsets are actually consistent in the location of the GBSM register in the PCI configuration space on device 2 (the GPU), use it.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Drop cc: stable and fudge conflicts.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
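
Reading the stolen base from the GPU's own PCI configuration space looks roughly like this; the 0x5c offset matches the BSM define used by later kernels (INTEL_BSM), but the snippet is illustrative, not the exact patch:

    u32 bsm;

    /* device 2 (the GPU itself) exposes the Base of Stolen Memory */
    pci_read_config_dword(dev->pdev, 0x5c, &bsm);
    base = bsm & ~((1u << 20) - 1);    /* base is 1 MiB aligned */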
-
- 23 Jul, 2013  1 commit
-
-
Submitted by David Herrmann

drm_gem_object_init() and drm_gem_private_object_init() do exactly the same (except for the shmem alloc), so make the first use the latter to reduce code duplication. Also drop the return code from drm_gem_private_object_init(). It seems unlikely that we will extend it any time soon, so there is no reason to keep it around. This simplifies code paths in drivers, too. Last but not least, fix gma500 to call drm_gem_object_release() before freeing objects that were allocated via drm_gem_private_object_init(). That isn't actually necessary for now, but might be in the future.

Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Acked-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
-
- 19 Jul, 2013  1 commit
-
-
Submitted by Dan Carpenter

i915_gem_vma_create() returns an ERR_PTR() or a valid pointer; it never returns NULL.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
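
For callers the practical consequence is to test with IS_ERR() rather than against NULL; a minimal sketch:

    vma = i915_gem_vma_create(obj, vm);
    if (IS_ERR(vma))        /* failure is an ERR_PTR, never NULL */
        return PTR_ERR(vma);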
-
- 18 Jul, 2013  4 commits
-
-
Submitted by Ben Widawsky

Formerly: "drm/i915: Create VMAs (part 1)"

In a previous patch, the notion of a VM was introduced. A VMA describes an area of part of the VM address space. A VMA is similar to the concept in the linux mm. However, instead of representing regular memory, a VMA is backed by a GEM BO. There may be many VMAs for a given object, one for each VM the object is to be used in. This may occur through flink, dma-buf, or a number of other transient states. Currently the code depends on only 1 VMA per object, for the global GTT (and aliasing PPGTT). The following patches will address this and make the rest of the infrastructure more suited.

v2: s/i915_obj/i915_gem_obj (Chris)
v3: Only move an object to the now global unbound list if there are no more VMAs for the object which are bound into a VM (ie. the list is empty).
v4: killed obj->gtt_space, some reworks due to rebase
v5: Free vma on error path (Imre)
v6: Another missed vma free in i915_gem_object_bind_to_gtt error path (Imre)
    Fixed vma freeing in stolen preallocation (Imre)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
[danvet: Squash in fixup from Ben to not deref a non-existing vma in set_cache_level, reported by Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
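
Conceptually, a VMA ties one GEM object to one address space plus a placement within it; a simplified sketch of the structure, with field names approximating the i915 code of that era (treat them as illustrative):

    struct i915_vma {
        struct drm_mm_node node;            /* placement within the VM */
        struct drm_i915_gem_object *obj;    /* the backing GEM BO */
        struct i915_address_space *vm;      /* the address space it lives in */
        struct list_head vma_link;          /* link in the object's vma list */
    };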
-
Submitted by Ben Widawsky

The odds of this happening are *extremely* unlikely.

Reported-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

Shamelessly manipulated out of Daniel :-) "When moving the lists around explain that the active/inactive stuff is used by eviction when we run out of address space, so needs to be per-vma and per-address space. Bound/unbound otoh is used by the shrinker which only cares about the amount of memory used and not one bit about in which address space this memory is all used in. Of course to actually kick out an object we need to unbind it from every address space, but for that we have the per-object list of vmas."

v2: Leave the bound list as a global one. (Chris, indirectly)
v3: Rebased with no i915_gtt_vm. In most places I added a new *vm local, since it will eventually be replaced by a vm argument. Put the comment back inline, since it no longer makes sense to do otherwise.
v4: Rebased on hangcheck/error state movement

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

Every address space should support object allocation. It therefore makes sense to have the allocator be part of the "superclass" from which GGTT and PPGTT will derive. Since our maximum address space size is only 2GB we're not yet able to avoid doing allocation/eviction, but we'd hope that one day this becomes almost irrelevant.

v2: Rebased

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 09 Jul, 2013  6 commits
-
-
Submitted by Daniel Vetter

v2: Bail out if we hit the WARN_ON to avoid fallout later on. Spotted by Chris Wilson.

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

Sanity check that the memory region found through the Graphics Base of Stolen Memory is reserved and hidden from the rest of the system through the use of the resource API.

v2: "Graphics Stolen Memory" is such a more bodacious name than the lame "i915 stolen", and convert to using devres for automagical cleanup of the resource. (danvet)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Dump proper hexcodes.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

Embedding the node in the obj is more natural in the transition to VMAs, which will also have embedded nodes. This change also helps the transition away from put_block to remove_node. Though it's quite an uncommon occurrence, it's somewhat convenient to not fail at bind time because we cannot allocate the node. Though in practice there are other allocations (like the request structure) which would probably make this point not terribly useful.

Quoting Daniel: Note that the only difference between put_block and remove_node is that the former fills up the preallocation cache. Which we don't need anyway and hence is just wasted space.

v2: Clean up the stolen preallocation code.
    Rebased on the reserve_node patches
    renames ggtt_ stuff to gtt_ stuff
    WARN_ON if the object is already bound (which doesn't mean it's in the bound list, tricky)

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

With the getters in place from the previous patch, this member serves no purpose other than saving one spare pointer chase, which will be killed in the next patch anyway. Moving to VMAs, this member adds unnecessary confusion since an object may exist at different offsets in different VMs.

v2: Properly preserve the stolen offset. This code is a bit hacky but it all goes away when we embed the drm_mm_node and it removes the need for the incorrect patch I submitted previously: "Use gtt_space->start for stolen reservation"

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

With the previous patch we no longer actually create a node, we simply find the correct hole and occupy it. This very well could have been squashed with the last patch, but since I already had David's review, I figured it's easiest to keep it distinct. Also update the users in i915. Conveniently this is the only user of the interface.

CC: David Airlie <airlied@linux.ie>
CC: <dri-devel@lists.freedesktop.org>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Acked-by: David Airlie <airlied@linux.ie>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
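
Occupying a known hole instead of creating a block boils down to pre-filling the node and handing it to drm_mm_reserve_node(); a hedged sketch with illustrative names:

    /* the placement is decided by the caller, not found by a search */
    node->start = stolen_offset;
    node->size = size;

    ret = drm_mm_reserve_node(&dev_priv->mm.stolen, node);
    if (ret)
        DRM_DEBUG_KMS("failed to reserve preallocated stolen range\n");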
-
Submitted by Ben Widawsky

For an upcoming patch where we introduce the i915 VMA, it's ideal to have the drm_mm_node as part of the VMA struct (ie. it's pre-allocated). Part of the conversion to VMAs is to kill off obj->gtt_space. Doing this will break a bunch of code, but amongst them are 2 callers of drm_mm_create_block(), both related to stolen memory. It also allows us to embed the drm_mm_node into the object currently, which provides a nice transition over to the new code.

v2: Reordered to do before ripping out obj->gtt_offset. Some minor cleanups made available because of reordering.
v3: s/continue/break on failed stolen node allocation (David)
    Set obj->gtt_space on failed node allocation (David)
    Only unref stolen (fix double free) on failed create_stolen (David)
    Free node, and NULL it in failed create_stolen (David)
    Add back accidentally removed newline (David)

CC: <dri-devel@lists.freedesktop.org>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Acked-by: David Airlie <airlied@linux.ie>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 06 Jul, 2013  1 commit
-
-
Submitted by Daniel Vetter

A magic -1 is obscure, especially since it's actually passed as an unsigned, so it depends upon the magic sign-extension rules in C. This was added in

  commit 3727d55e
  Author: Jesse Barnes <jbarnes@virtuousgeek.org>
  Date:   Wed May 8 10:45:14 2013 -0700

      drm/i915: allow stolen, pre-allocated objects to avoid GTT allocation v2

Use a proper #define instead. Spotted while reviewing Ben's drm_mm_create_block changes.

v2: Cast the constant to u32 since otherwise we again have a type mismatch. Suggested by Chris Wilson.

Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
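
The replacement is simply a named constant cast to u32; the define below follows the i915 naming of that period, to the best of my recollection:

    /* "no GTT offset requested" marker; the u32 cast avoids the
     * sign-extension surprise described above */
    #define I915_GTT_OFFSET_NONE ((u32)-1)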
-
- 02 Jul, 2013  1 commit
-
-
Submitted by Daniel Vetter

Every other place properly checks whether we've managed to set up the stolen allocator at boot-up, with the exception of the cleanup code. Which results in an ugly

  *ERROR* Memory manager not clean. Delaying takedown

at module unload time since the drm_mm isn't initialized at all.

v2: While at it, check whether the stolen drm_mm is initialized instead of the more obscure stolen_base == 0 check.
v3: Fix up the logic. Also we need to keep the stolen_base check in i915_gem_object_create_stolen_for_preallocated since that can be called before stolen memory is fully set up. Spotted by Chris Wilson.
v4: Readd the conversion in i915_gem_object_create_stolen_for_preallocated; the check is for the dev_priv->mm.gtt_space drm_mm, and the stolen allocator must already be initialized when calling that function (if we indeed have stolen memory).

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=65953
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: lu hua <huax.lu@intel.com> (v3)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
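
The less obscure check mentioned in v2 is the drm_mm helper; a minimal sketch of the cleanup-path guard (the function name follows the surrounding commits, the details are illustrative):

    static void i915_gem_cleanup_stolen(struct drm_device *dev)
    {
        struct drm_i915_private *dev_priv = dev->dev_private;

        /* nothing to tear down if stolen setup never ran */
        if (!drm_mm_initialized(&dev_priv->mm.stolen))
            return;

        drm_mm_takedown(&dev_priv->mm.stolen);
    }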
-
- 01 Jul, 2013  1 commit
-
-
Submitted by Ben Widawsky

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Resolve conflict with Damien's FBC_CHIP_DEFAULT no-fbc reason.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 03 Jun, 2013  2 commits
-
-
Submitted by Ben Widawsky

Since it will be used for the global bound/unbound list with full PPGTT, this helps clarify things for upcoming code rework.

Recommended-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Ben Widawsky

This makes it easier to catch leaks.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 11 May, 2013  2 commits
-
-
Submitted by Jesse Barnes

In some cases, we may not need GTT address space allocated to a stolen object, so allow passing -1 to the preallocation function to indicate as much.

v2: remove BUG_ON(gtt_offset & 4095) now that -1 is allowed (Ville)

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Jesse Barnes

But we need to get the right stolen base and make pre-allocated objects for BIOS stuff so we don't clobber it. If the BIOS hasn't allocated a power context, we allocate one here too, from stolen space as required by the docs.

v2: fix stolen to phys if ladder (Ben)
    keep BIOS reserved space out of the allocator altogether (Ben)
v3: fix mask of stolen base (Ben)
v4: clean up preallocated object on unload (Ben)
    don't zero reg on unload (Jesse)
    fix mask harder (Jesse)
v5: use unref for freeing stolen bits (Chris)
    move alloc/free to intel_pm.c (Chris)
v6: NULL pctx at disable time so error paths work (Ben)
v7: use correct PCI device for config read (Jesse)

Reviewed-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 27 Apr, 2013  1 commit
-
-
Submitted by Chris Wilson

Instead of repeatedly bombarding the user with a request to reboot and increase the stolen size with every fb refresh, just inform them the first time only.

v2: Rearrange the code so the hint to increase the amount of memory stolen by the BIOS is only emitted if we fail to find sufficient stolen memory for FBC.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Fixup formatting code mismatch that gcc spotted.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 28 Mar, 2013  1 commit
-
-
Submitted by Imre Deak

Since for_each_sg_page already supports memory without backing pages, we can revert the corresponding workaround. This reverts commit 5bd4687e.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 27 Mar, 2013  1 commit
-
-
Submitted by Chris Wilson

Wrap a preallocated region of stolen memory within an ordinary GEM object, for example the BIOS framebuffer.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 23 Mar, 2013  1 commit
-
-
Submitted by Imre Deak

This is needed since currently sg_for_each_page assumes that we have a valid page in each sg item. It is only a real problem for CONFIG_SPARSEMEM, where the page is dereferenced; in other cases the iterator works ok with an invalid page pointer. We can remove this workaround once we have fixed sg_page_iter to work on scatterlists without backing pages.

Signed-off-by: Imre Deak <imre.deak@intel.com>
-
- 31 Jan, 2013  1 commit
-
-
Submitted by Ben Widawsky

With the probe call in our dispatch table, we can now cut away the last three remaining members in the intel_gtt shared struct and so remove it completely.

v2: Rebased on top of Daniel's series

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: bikeshed commit message a bit.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 Dec, 2012  1 commit
-
-
Submitted by Daniel Vetter

We need to clean up the overlay first, before taking down the stolen memory allocator. This regression was introduced in

  commit 80405138
  Author: Chris Wilson <chris@chris-wilson.co.uk>
  Date:   Thu Nov 15 11:32:29 2012 +0000

      drm/i915: Allocate overlay registers from stolen memory

v2: Rework the patch a bit as suggested by Chris Wilson:
    - move the overlay teardown up, into the modeset cleanup
    - move the stolen mm takedown into i915_gem_cleanup_stolen

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 01 Dec, 2012  2 commits
-
-
Submitted by Chris Wilson

The primary purpose of this was to debug some use-after-free memory corruption that was causing an OOPS inside drm/i915. As it turned out, the corruption was being caused elsewhere and i915.ko, as a major user of many objects, was being hit hardest. Indeed, as we do frequent the generic kmalloc caches, dedicating one to ourselves (or at least naming one for us depending upon the core) aids debugging our own slab usage.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
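
Naming a cache for ourselves comes down to a dedicated kmem_cache for the GEM object struct; a hedged sketch (the slab field name is an assumption):

    /* one dedicated slab cache for GEM object structs */
    dev_priv->slab = kmem_cache_create("i915_gem_object",
                                       sizeof(struct drm_i915_gem_object),
                                       0, SLAB_HWCACHE_ALIGN, NULL);

    /* allocations then come from it instead of the generic kmalloc caches */
    obj = kmem_cache_zalloc(dev_priv->slab, GFP_KERNEL);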
-
Submitted by Chris Wilson

Allow for the creation of GEM objects backed by stolen memory. As these are not backed by ordinary pages, we create a fake dma mapping and store the address in the scatterlist rather than in obj->pages.

v2: Mark _i915_gem_object_create_stolen() as static, as noticed by Jesse Barnes.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-