- 28 Jun 2011, 1 commit
-
-
By Hugh Dickins

Soon tmpfs will stop supporting ->readpage and read_mapping_page(): once "tmpfs: add shmem_read_mapping_page_gfp" has been applied, this patch can be applied to ease the transition. ttm_tt_swapin() and ttm_tt_swapout() use shmem_read_mapping_page() in place of read_mapping_page(), since their swap_space has been created with shmem_file_setup().

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
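As a point of reference, a hedged sketch of the post-change swap-in loop (ttm_tt field names follow the TTM code of that era; the page-copy step is elided):

    #include <linux/shmem_fs.h>
    #include <linux/err.h>

    static int sketch_tt_swapin(struct ttm_tt *ttm)
    {
            struct file *swap_storage = ttm->swap_storage;
            struct address_space *swap_space =
                    swap_storage->f_path.dentry->d_inode->i_mapping;
            unsigned long i;

            for (i = 0; i < ttm->num_pages; ++i) {
                    /* swap_storage was created with shmem_file_setup(),
                     * so fetch pages with the shmem-aware helper rather
                     * than read_mapping_page(). */
                    struct page *from_page =
                            shmem_read_mapping_page(swap_space, i);
                    if (IS_ERR(from_page))
                            return PTR_ERR(from_page);
                    /* ... copy into ttm->pages[i], mark dirty, put page ... */
            }
            return 0;
    }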
-
- 21 Jun 2011, 1 commit
-
-
By Konrad Rzeszutek Wilk

... and some comments to make it easier to understand.

Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
[v2: Added some more updates from Randy Dunlap]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 25 May 2011, 1 commit
-
-
By Ying Han

Change each shrinker's API by consolidating the existing parameters into a shrink_control struct. This simplifies adding further features without touching every shrinker.

[akpm@linux-foundation.org: fix build]
[akpm@linux-foundation.org: fix warning]
[kosaki.motohiro@jp.fujitsu.com: fix up new shrinker API]
[akpm@linux-foundation.org: fix xfs warning]
[akpm@linux-foundation.org: update gfs2]
Signed-off-by: Ying Han <yinghan@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Acked-by: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
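A hedged sketch of the consolidated API (the my_* names are hypothetical; struct shrink_control of this era carries nr_to_scan and gfp_mask):

    #include <linux/mm.h>

    static void my_free_objects(unsigned long nr, gfp_t gfp_mask);
    static int my_object_count(void);

    /* New form; the old one was
     *   int (*shrink)(struct shrinker *, int nr_to_scan, gfp_t gfp_mask);
     */
    static int my_shrink(struct shrinker *shrink, struct shrink_control *sc)
    {
            if (sc->nr_to_scan)
                    my_free_objects(sc->nr_to_scan, sc->gfp_mask);
            return my_object_count();   /* estimate of remaining objects */
    }

    static struct shrinker my_shrinker = {
            .shrink = my_shrink,
            .seeks  = DEFAULT_SEEKS,
    };
    /* register_shrinker(&my_shrinker); in init code */

Because callers now hand over a struct, later patches can add fields to shrink_control without another tree-wide signature change.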
-
- 13 Apr 2011, 1 commit
-
-
By Dave Airlie

This reverts commit 69a07f0b. We've tracked a number of problems back to it, and Thomas thinks we should redesign this for .40/.41 anyway, so I'm happy to revert it.

Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 10 Apr 2011, 1 commit
-
-
By Paul Bolle

There's no need to pass kref_put() the address of a function (the function name alone will do just fine), nor to cast its unused return value to void.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
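The cleanup in miniature, with a hypothetical release function:

    /* Before: superfluous address-of operator and void cast. */
    (void) kref_put(&bo->kref, &my_bo_release);

    /* After: the function name already decays to a pointer, and an
     * unused return value needs no cast. */
    kref_put(&bo->kref, my_bo_release);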
-
- 05 Apr 2011, 1 commit
-
-
By Jan Engelhardt

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 23 Feb 2011, 3 commits
-
-
By Dave Airlie

This reverts commit 5a893fc2. It causes a use-after-free in the ttm free alloc pages path, when it tries to get the backend ("be") after the backend has been destroyed.

Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Ben Skeggs

Nouveau doesn't have enough information at ttm_backend_func.bind() time to implement things like tiled GART, or to keep a buffer at a constant address in the GPU virtual address space no matter where in physical memory it's placed. To resolve this, nouveau will handle binding of all buffers to the GPU itself from the move_notify() hook. This commit ensures it's called for all buffer moves. Talked to Dave about the impact on radeon, which also uses move_notify; it doesn't look like anything should break there.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Thomas Hellstrom <thomas@shipmail.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
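A hedged sketch of what a driver-side move_notify() hook can now do (the mydrv_* names are illustrative, not nouveau's actual code):

    struct mydrv_bo {
            struct ttm_buffer_object bo;
            /* driver VM state ... */
    };

    static void mydrv_move_notify(struct ttm_buffer_object *bo,
                                  struct ttm_mem_reg *new_mem)
    {
            struct mydrv_bo *mbo = container_of(bo, struct mydrv_bo, bo);

            /* Unlike ttm_backend_func.bind(), this hook sees the full
             * destination placement, so the driver can set up tiled GART
             * mappings or keep the bo at a fixed GPU virtual address. */
            mydrv_vm_bind(mbo, new_mem);
    }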
-
By Konrad Rzeszutek Wilk

This makes the accounting seen with debug_dma_dump_mappings() and CONFIG_DMA_API_DEBUG=y be assigned to the correct device instead of 'fallback'. No functional change - just cosmetic.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 28 Jan 2011, 3 commits
-
-
By Konrad Rzeszutek Wilk

We pass in the array of ttm pages to be populated in the GART/MM of the card (or AGP). The patch titled "ttm: Utilize the DMA API for pages that have TTM_PAGE_FLAG_DMA32 set." uses the DMA API to give those pages proper DMA addresses (for situations where page_to_phys or virt_to_phys do not give us the DMA (bus) address). Since we are using the DMA API on those pages, we should pass the DMA address in to this function so it can be saved in the proper fields (later patches use it).

[v2: Added reviewed-by tag]
Reviewed-by: Thomas Hellstrom <thellstrom@shipmail.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
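A hedged reconstruction of the extended hook; dma_addrs is the addition, and the exact parameter order is reproduced from memory:

    /* ttm_backend_func.populate() as extended by this series: */
    int (*populate)(struct ttm_backend *backend,
                    unsigned long num_pages,
                    struct page **pages,
                    struct page *dummy_read_page,
                    dma_addr_t *dma_addrs);  /* new: per-page bus addresses */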
-
By Konrad Rzeszutek Wilk

For pages that have the TTM_PAGE_FLAG_DMA32 flag set we use the DMA API. We save the bus address in our array, which we use to program the GART (see "radeon/ttm/PCIe: Use dma_addr if TTM has set it." and "nouveau/ttm/PCIe: Use dma_addr if TTM has set it."). The reason for using the DMA API is that under Xen we would otherwise end up programming the GART with the bounce-buffer (SWIOTLB) DMA address instead of the physical DMA address of the TTM page, because alloc_page with GFP_DMA32 does not allocate pages under the 4GB mark when running under the Xen hypervisor. On bare metal this simply means we make the DMA API call earlier, instead of when we program the GART. For details please refer to: https://lkml.org/lkml/2011/1/7/251

[v2: Fixed indentation, revised desc, added Reviewed-by]
Reviewed-by: Thomas Hellstrom <thomas@shipmail.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
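A hedged sketch of the mapping step this describes (dev is the GPU's struct device; names are illustrative):

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    static int sketch_map_dma32_page(struct device *dev, struct page *page,
                                     dma_addr_t *dma_addr)
    {
            /* Under Xen, alloc_page(GFP_DMA32) need not land below 4GB
             * in *machine* memory, so ask the DMA API for the real bus
             * address instead of trusting page_to_phys(). */
            *dma_addr = dma_map_page(dev, page, 0, PAGE_SIZE,
                                     DMA_BIDIRECTIONAL);
            if (dma_mapping_error(dev, *dma_addr))
                    return -ENOMEM;
            return 0;   /* program *dma_addr into the GART, not page_to_phys() */
    }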
-
By Konrad Rzeszutek Wilk

For now this is limited to non-pool constructs.

[v2: Fixed indentation issues, add review-by tag]
Reviewed-by: Thomas Hellstrom <thomas@shipmail.org>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
-
- 24 Dec 2010, 1 commit
-
-
By Tejun Heo

Make ttm_bo::ttm_bo_device_release call cancel_delayed_work_sync() instead of calling cancel_delayed_work() followed by flush_scheduled_work(). This is to prepare for the deprecation and removal of flush_scheduled_work().

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Dave Airlie <airlied@redhat.com>
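The transformation in sketch form (ttm_bo_device::wq is the delayed-delete work item):

    /* Before: cancel, then flush the entire global workqueue. */
    cancel_delayed_work(&bdev->wq);
    flush_scheduled_work();

    /* After: one call that cancels the work item and waits for any
     * running instance, with no dependency on flush_scheduled_work(). */
    cancel_delayed_work_sync(&bdev->wq);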
-
- 16 Dec 2010, 1 commit
-
-
By Ben Skeggs

Drivers using their own implementation of io_mem_reserve/io_mem_free are likely to store the tracking information for the map in mem.mm_node, so it can't be freed while still mapped.

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 22 Nov 2010, 8 commits
-
-
By Thomas Hellstrom

This patch attempts to fix up shortcomings with the current calling sequences:

1) There's a fastpath where no locking occurs and only io_mem_reserve is called to obtain the info needed for mapping. The fastpath is set per memory type manager.
2) If the fastpath is disabled, io_mem_reserve and io_mem_free will be exactly balanced and not called recursively for the same struct ttm_mem_reg.
3) Optionally, the driver can choose to enable a per memory type manager LRU eviction mechanism that, when io_mem_reserve returns -EAGAIN, will attempt to kill user-space mappings of memory in that manager to free up the needed resources.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Rather than having the driver supply the validation sequence, leave that responsibility to TTM. This saves some confusion and a function argument.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Drastically reduce the number of spin lock/unlock operations by performing unreserving and fencing under global locks.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

The bo lock was used only to protect the bo sync object members, and since it is a per-bo lock, fencing a buffer list will see a lot of locks and unlocks. Replace it with a per-device lock that protects the sync object members on *all* bos. Reading and setting these members will always be very quick, so the risk of heavy lock contention is microscopic. Note that waiting for sync objects will always take place outside of this lock. The bo device fence lock will eventually be replaced with a seqlock/RCU mechanism so we can determine that a bo is idle under an RCU read seqlock. However, this change will allow us to batch fencing and unreserving of buffers with a minimal amount of locking.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Add an aid for the driver to detect deadlocks on multi-bo reservations. Update documentation.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Avoid the ttm_bo_unreserve() spinlocks by calling ttm_eu_backoff_reservation_locked() under the lru spinlock.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Dave Airlie

Makes it possible to reserve a list of buffer objects with a single spin lock/unlock if there is no contention. Should improve CPU usage on SMP kernels.

v2: Initialize private list members on reserve and don't call ttm_bo_list_ref_sub() with zero put_count.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 18 Nov 2010, 1 commit
-
-
By Thomas Hellstrom

While a process is suspended waiting for a higher sequence or no sequence to unreserve, a bo may be beaten to the reservation by a process with a lower sequence. In that case the first process should give up trying to reserve and return -EAGAIN. In order for that to happen, we must wake waiting processes when we change sequence, so that they have a chance to detect the new sequence.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
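A hedged sketch of the wakeup added here (field names follow the reservation code of this era):

    /* Publish the new sequence ... */
    bo->val_seq = sequence;
    bo->seq_valid = true;
    /* ... then wake sleepers, so a waiter that has been beaten by a
     * lower sequence can notice and return -EAGAIN. */
    wake_up_all(&bo->event_queue);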
-
- 10 Nov 2010, 1 commit
-
-
By Thomas Hellstrom

Call destroy() on _all_ ttm_bo_init() failures, and make sure that behavior is documented in the function description.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
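What the contract means for callers, sketched against the ttm_bo_init() parameter list of this era (reconstructed from memory; the mydrv_* names are hypothetical):

    ret = ttm_bo_init(bdev, &mbo->bo, size, ttm_bo_type_device,
                      &placement, page_align, 0, false, NULL,
                      acc_size, mydrv_bo_destroy);
    if (ret) {
            /* ttm_bo_init() has already invoked mydrv_bo_destroy() on
             * every failure path; 'mbo' is gone, so only report. */
            return ret;
    }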
-
- 09 Nov 2010, 8 commits
-
-
By Thomas Hellstrom

This check breaks vmwgfx non-root EGL clients and is a remnant from the TTM user-space interface. The test should be done in the driver. Replace the remaining placement test with a BUG_ON(), since triggering it is a driver bug.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

The sync object may disappear as soon as we release the bo::lock, so take a reference on it while we use it. One option would be to call sync_object_flush() before releasing the bo::lock, but that would put an atomic requirement on that function.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
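A hedged sketch of the pattern, using the sync-object hooks found in struct ttm_bo_driver at the time:

    void *sync_obj, *sync_obj_arg;

    spin_lock(&bo->lock);
    sync_obj = driver->sync_obj_ref(bo->sync_obj);
    sync_obj_arg = bo->sync_obj_arg;
    spin_unlock(&bo->lock);

    /* The reference keeps the object alive, so flushing can happen
     * outside the lock without an atomicity requirement. */
    driver->sync_obj_flush(sync_obj, sync_obj_arg);
    driver->sync_obj_unref(&sync_obj);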
-
By Thomas Hellstrom

The driver (for example vmwgfx) may want to silently deal with the error itself.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Since we're doing this outside of a spinlock that would provide the necessary barriers, add an explicit barrier.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Replace them with BUG_ON(). These error messages remained from the time when TTM was initialized from user-space. Nowadays, hitting one of them is really a kernel bug.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Searching for a free block in the range manager may in some situations be a lengthy operation, and we want to avoid holding the global lru lock during that time. Instead, use a per-manager spinlock. This leaves the global lru lock for quick lru list and swap list manipulation only, including the list manipulation associated with reserving buffer objects.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Remove an obsolete comment about mm nodes. Document the new bo range manager interface.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 27 Oct 2010, 1 commit
-
-
By Peter Zijlstra

Keep the current interface but ignore the KM_type and use a stack-based approach. The advantage is that we get rid of crappy code like:

    #define __KM_PTE                        \
            (in_nmi() ? KM_NMI_PTE :        \
             in_irq() ? KM_IRQ_PTE :        \
             KM_PTE0)

and in general can stop worrying about what context we're in and what kmap slots might be appropriate for that. The downside is that FRV kmap_atomic() gets more expensive. For now we use a CPP trick suggested by Andrew:

    #define kmap_atomic(page, args...) __kmap_atomic(page)

to avoid having to touch all kmap_atomic() users in a single patch.

[ not compiled on:
  - mn10300: the arch doesn't actually build with highmem to begin with ]
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
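The call-site change in miniature (KM_USER0 is one of the old slot constants; during the transition the extra arguments are simply ignored):

    #include <linux/highmem.h>

    /* Old: caller names a kmap slot for the current context. */
    void *va = kmap_atomic(page, KM_USER0);
    memcpy(buf, va, len);
    kunmap_atomic(va, KM_USER0);

    /* New: slots are managed stack-wise per CPU; no type argument. */
    void *va2 = kmap_atomic(page);
    memcpy(buf, va2, len);
    kunmap_atomic(va2);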
-
- 21 Oct 2010, 2 commits
-
-
By Thomas Hellstrom

This commit replaces the ttm_bo_cleanup_ref function with two new functions: one for the case where the bo is not yet on the delayed-destroy list, and one for the case where the bo was on the delayed-destroy list, at least at the time of the call. This makes it possible to optimize the two cases somewhat. It also opens up the possibility of directly destroying buffers on the delayed-delete list when they are about to be evicted or swapped out; previously they were only evicted/swapped, and destruction was left to the delayed buffer destruction thread.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Thomas Hellstrom

Release the lru spinlock early.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 19 Oct 2010, 2 commits
-
-
By Jean Delvare

The function ttm_bo_wait_unreserved() can be slightly simplified.

Signed-off-by: Jean Delvare <khali@linux-fr.org>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
By Dave Airlie

We need the unlocked variant for the new codepath introduced to fix the race condition in master recently.

Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 06 Oct 2010, 1 commit
-
-
By Thomas Hellstrom

This fixes a race pointed out by Dave Airlie where we don't take a buffer object about to be destroyed off the LRU lists properly. It also fixes a rare case where a buffer object could be destroyed in the middle of an accelerated eviction. The patch also adds a utility function that can be used to prematurely release GPU memory space usage of an object waiting to be destroyed, for example during eviction or swapout. The above-mentioned commit didn't queue the buffer on the delayed destroy list under some rare circumstances. It also didn't completely honor the remove_all parameter.

Fixes:
https://bugzilla.redhat.com/show_bug.cgi?id=615505
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=591061

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 05 Oct 2010, 2 commits
-
-
By Ben Skeggs

Nouveau will need this on GeForce 8 and up to account for the GPU reordering physical VRAM for some memory types.

Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: Thomas Hellström <thellstrom@vmware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
By Ben Skeggs

Existing core code/drivers call drm_mm_put_block() on ttm_mem_reg.mm_node directly. Future patches will modify TTM behaviour in such a way that ttm_mem_reg.mm_node doesn't necessarily belong to drm_mm.

Reviewed-by: Jerome Glisse <jglisse@redhat.com>
Acked-by: Thomas Hellström <thellstrom@vmware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
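A hedged sketch of the indirection this prepares for (ttm_bo_mem_put() is the utility introduced around this series):

    /* Before: callers assume the node came from drm_mm. */
    drm_mm_put_block(mem->mm_node);
    mem->mm_node = NULL;

    /* After: free through TTM, so mm_node may belong to any allocator. */
    ttm_bo_mem_put(bo, mem);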
-