- 20 November 2012 (1 commit)

By Marcin Slusarz:
It's unused. Signed-off-by: Marcin Slusarz <marcin.slusarz@gmail.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

- 03 October 2012 (1 commit)

By David Howells:
Convert #include "..." to #include <path/...> in kernel system headers. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Dave Jones <davej@redhat.com>

- 24 July 2012 (1 commit)

By Ilija Hadzic:
Patch 649bf3ca completely removed the ttm_backend structure. Remove the lingering declaration and the related (now stale) field in the ttm_tt structure. CC: Jerome Glisse <jglisse at redhat.com> Signed-off-by: Ilija Hadzic <ihadzic at research.bell-labs.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

- 23 May 2012 (1 commit)

By Dave Airlie:
This adds the ability for ttm common code to take an SG table and use it as the backing for a slave TTM object. The drivers can then populate their GTT tables using the SG object. v2: make sure to set up the VM for sg bos as well. Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
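To make the mechanism concrete, here is a minimal sketch of what an SG-backed ("slave") object involves, using simplified stand-in names rather than the actual ttm_bo_api.h definitions: the bo is created with an sg-backed type and carries a pointer to the caller-supplied sg_table that the driver then walks to fill its GTT.

```c
#include <linux/scatterlist.h>

/* Illustrative stand-ins, not the real TTM declarations. */
enum example_bo_type {
	example_bo_type_device,
	example_bo_type_kernel,
	example_bo_type_sg,	/* backed by an externally provided SG table */
};

struct example_sg_bo {
	enum example_bo_type type;
	struct sg_table *sg;	/* supplied by the exporter; TTM neither
				 * allocates nor frees these pages */
};
```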

- 06 January 2012 (1 commit)

By Jerome Glisse:
The ttm_tt rework modified the way we allocate and populate the ttm_tt structure; the AGP side was missing some bits to work properly. Fix those, and fix radeon and nouveau AGP support. Tested on radeon only so far. Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

- 06 December 2011 (8 commits)

By Jerome Glisse:
Provide a helper function to compute the kernel memory size needed for each buffer object. Move all the accounting inside ttm, simplifying drivers and avoiding code duplication across them. v2: fix accounting of ghost objects; one would have thought I would have run into this issue a long time ago, but it seems ghost objects are rare when you have plenty of VRAM ;) Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>

By Jerome Glisse:
Move the dma data to a superset ttm_dma_tt structure which inherits from ttm_tt. This allows drivers that don't use dma functionality to avoid wasting memory for it. V2: rebase on top of the no-memory-accounting changes (where/when is my DeLorean when I need it?) V3: make sure the page list is initialized empty. V4: typo/syntax fixes. Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
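A rough sketch of the layout described above, with simplified stand-in names: the DMA-aware type embeds the base ttm_tt as its first member, so drivers that never touch DMA allocate only the smaller base structure, while DMA-aware code recovers the superset with container_of().

```c
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/types.h>

struct example_ttm_tt {			/* stand-in for struct ttm_tt */
	struct page **pages;
	unsigned long num_pages;
};

struct example_ttm_dma_tt {		/* stand-in for struct ttm_dma_tt */
	struct example_ttm_tt ttm;	/* must stay first for container_of() */
	dma_addr_t *dma_address;	/* one bus address per page */
	struct list_head pages_list;	/* pages owned by the DMA pool */
};

static inline struct example_ttm_dma_tt *
example_to_dma_tt(struct example_ttm_tt *ttm)
{
	return container_of(ttm, struct example_ttm_dma_tt, ttm);
}
```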

By Konrad Rzeszutek Wilk:
In the TTM world the pages for the graphics drivers are kept in three different pools: write-combined, uncached, and cached (write-back). When the pages are used by the graphics driver, the graphics adapter, via its built-in MMU (or AGP), programs these pages in. The programming requires the virtual address (from the graphics adapter's perspective) and the physical address (either system RAM or the memory on the card), which is obtained using the pci_map_* calls (which do the virtual-to-physical, i.e. bus address, translation). During the graphics application's "life" those pages can be shuffled around, swapped out to disk, moved from VRAM to system RAM or vice versa. This all works with the existing TTM pool code - except when we want to use the software IOTLB (SWIOTLB) code to "map" the physical addresses to the graphics adapter MMU. We end up programming the bounce buffer's physical address instead of the TTM pool memory's, and get a non-working driver.

There are two solutions: 1) use the DMA API to allocate pages that are screened by the DMA API, or 2) use the pci_sync_* calls to copy the pages from the bounce buffer and back. This patch fixes the issue by allocating pages using the DMA API. The second option is viable, but it has performance drawbacks and potential correctness issues - think of a write-combined page being bounced (SWIOTLB->TTM): WC is set on the TTM page, and the copy from the SWIOTLB does not make it to the TTM page until the page has been recycled in the pool (and used by another application).

The bounce buffer does not get activated often - only in cases where we have a 32-bit-capable card and we want to use a page that is allocated above the 4GB limit. The bounce buffer offers the solution of copying the contents of that above-4GB page to a location below 4GB and then back when the operation has been completed (or vice versa). This is done by using the 'pci_sync_*' calls. Note: if you look carefully enough at the existing TTM page pool code you will notice the GFP_DMA32 flag is used, which should guarantee that the provided page is under 4GB. That is indeed the case, except it gets ignored in two situations: if the user specifies 'swiotlb=force', which bounces _every_ page; or if the user is running a Xen PV Linux guest (which uses the SWIOTLB, and the underlying PFNs aren't necessarily under 4GB).

To avoid this extra copying, the other option is to allocate the pages using the DMA API, so that there is no need to map the page and perform the expensive 'pci_sync_*' calls. For this, the DMA-API-capable TTM pool requires the 'struct device' to properly call the DMA API. It also has to track the virtual and bus address of each page being handed out, in case it ends up being swapped out or de-allocated, to make sure it is de-allocated using the proper 'struct device'.

Implementation-wise, the code keeps two lists: one attached to the 'struct device' (via the dev->dma_pools list) and a global one to be used when the 'struct device' is unavailable (think shrinker code). The global list can iterate over all of the 'struct device' entries and their associated dma_pools; the list in dev->dma_pools can only iterate that device's own dma_pools. [The original message included an ASCII diagram here showing two pools associated with the device (WC and UC), and the parallel list containing the 'struct dev' and 'struct dma_pool' entries.] The maximum number of dma pools a device can have is six: write-combined, uncached, and cached, plus the DMA32 variants: write-combined dma32, uncached dma32, and cached dma32.

Currently this code only gets activated when any variant of the SWIOTLB IOMMU code is running (Intel without VT-d, AMD without GART, IBM Calgary, and Xen PV with PCI devices).

Tested-by: Michel Dänzer <michel@daenzer.net> [v1: using swiotlb_nr_tbl instead of swiotlb_enabled] [v2: major overhaul - added 'inuse_list' to separate used from in-use pages and reordered the lists to get better performance] [v3: added comments and some logic based on review; added Jerome's tag] [v4: rebase on top of the ttm_tt & ttm_backend merge] [v5: rebase on top of the ttm memory accounting overhaul] [v6: new rebase on top of more memory accounting changes] [v7: rebase on top of the no-memory-accounting changes] [v8: make sure the pages list is initialized empty] [v9: call ttm_mem_global_free_page in unpopulate for accurate accounting] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Jerome Glisse <jglisse@redhat.com> Acked-by: Thomas Hellstrom <thellstrom@vmware.com>
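A sketch of the two-list bookkeeping described above, with names taken from the commit text (dev->dma_pools plus a global list) rather than from the actual ttm_page_alloc_dma implementation:

```c
#include <linux/device.h>
#include <linux/list.h>

/* One pool per caching type (WC, UC, cached, plus the DMA32 variants). */
struct example_dma_pool {
	struct list_head pools;		/* linked into dev->dma_pools      */
	unsigned int type;		/* caching type this pool serves   */
	struct list_head free_list;	/* DMA-API-allocated pages         */
	struct device *dev;		/* device used for dma alloc/free  */
};

/* Parallel global bookkeeping for code that has no struct device at hand
 * (e.g. the shrinker): every (device, pool) pair is reachable from here. */
struct example_device_pools {
	struct list_head pools;		/* linked into a global list       */
	struct device *dev;
	struct example_dma_pool *pool;
};
```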

By Jerome Glisse:
Move page allocation and freeing to driver callbacks and provide ttm helper functions for those. The most intrusive change is that we now only fully populate an object; this simplifies some of the code designed around the page-fault design. V2: rebase on top of the memory accounting overhaul. V3: new rebase on top of more memory accounting changes. V4: rebase on top of the no-memory-accounting changes (where/when is my DeLorean when I need it?) Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
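The driver-side hooks this describes look roughly like the following sketch (signatures paraphrased from the commit text; the ttm page-pool helpers are what a driver would typically call from them):

```c
struct ttm_tt;

/* Illustrative subset of the driver function table, not the full
 * struct ttm_bo_driver. */
struct example_tt_page_ops {
	/* Allocate every page backing @ttm in one go. */
	int  (*ttm_tt_populate)(struct ttm_tt *ttm);
	/* Release every page previously allocated for @ttm. */
	void (*ttm_tt_unpopulate)(struct ttm_tt *ttm);
};
```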

By Jerome Glisse:
A ttm_backend only ever exists together with a ttm_tt, and a ttm_tt is only of interest when bound to a backend. Merge them to avoid code and data duplication. V2: rebase on top of the memory accounting overhaul. V3: rebase on top of more memory accounting changes. V4: rebase on top of the no-memory-accounting changes (where/when is my DeLorean when I need it?) V5: make sure the ttm is unbound before destroying it; change the commit message on a suggestion from Tormod Volden. Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
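After the merge, the bind/unbind/destroy operations that used to live on the separate backend object hang off the ttm_tt itself through a function table roughly like this sketch (paraphrased; see ttm_bo_driver.h for the authoritative form):

```c
struct ttm_tt;
struct ttm_mem_reg;

struct example_backend_func {
	/* Bind the pages of @ttm into the GPU at the placement in @bo_mem. */
	int  (*bind)(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem);
	/* Undo a previous bind. */
	int  (*unbind)(struct ttm_tt *ttm);
	/* Free the ttm_tt; it must already be unbound at this point. */
	void (*destroy)(struct ttm_tt *ttm);
};
```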

By Jerome Glisse:
This field is not used by any of the drivers; just drop it. Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>

By Jerome Glisse:
The split between highmem and lowmem pages was rendered useless by the pool code. Remove it. Note: a further cleanup would change the ttm page allocation helper to actually take an array instead of relying on a list; this could drastically reduce the number of function calls in the common case of allocating a whole buffer. Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>

By Jerome Glisse:
This was never used by any of the drivers; properly using userspace pages for a bo would need more code (vma interaction mostly). Remove this dead code in preparation for the ttm_tt & backend merge. Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>

- 21 June 2011 (1 commit)

By Konrad Rzeszutek Wilk:
… and some comments to make it easier to understand. Acked-by: Randy Dunlap <randy.dunlap@oracle.com> [v2: added some more updates from Randy Dunlap] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

- 05 April 2011 (1 commit)

By Jan Engelhardt:
Signed-off-by: Jan Engelhardt <jengelh@medozas.de> Signed-off-by: Dave Airlie <airlied@redhat.com>

- 31 March 2011 (1 commit)

By Lucas De Marchi:
Fixes generated by 'codespell' and manually reviewed. Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>

- 23 February 2011 (2 commits)

By Dave Airlie:
This reverts commit 5a893fc2. That commit causes a use-after-free in the ttm free/alloc pages path, when it tries to get the be after the be has been destroyed. Signed-off-by: Dave Airlie <airlied@redhat.com>

By Konrad Rzeszutek Wilk:
This makes the accounting seen when using 'debug_dma_dump_mappings()' with CONFIG_DMA_API_DEBUG=y be assigned to the correct device instead of 'fallback'. No functional change - just cosmetic. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

- 28 January 2011 (2 commits)

By Konrad Rzeszutek Wilk:
We pass in the array of ttm pages to be populated in the GART/MM of the card (or AGP). The patch titled "ttm: Utilize the DMA API for pages that have TTM_PAGE_FLAG_DMA32 set." uses the DMA API to give those pages proper DMA addresses (in the situation where page_to_phys or virt_to_phys does not give us the DMA (bus) address). Since we are using the DMA API on those pages, we should pass the DMA address to this function so it can save it in its proper fields (later patches use it). [v2: added reviewed-by tag] Reviewed-by: Thomas Hellstrom <thellstrom@shipmail.org> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Tested-by: Ian Campbell <ian.campbell@citrix.com>
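Reconstructed from the description above (and therefore only a sketch, not a copy of the header), the backend populate hook of this period gains a dma_addr_t array alongside the page array so that DMA-API-allocated pages keep their bus addresses:

```c
#include <linux/mm_types.h>
#include <linux/types.h>

struct ttm_backend;

int example_backend_populate(struct ttm_backend *backend,
			     unsigned long num_pages,
			     struct page **pages,
			     struct page *dummy_read_page,
			     dma_addr_t *dma_addrs);	/* new: bus address per page */
```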

By Konrad Rzeszutek Wilk:
Right now this is limited to non-pool constructs only. [v2: fixed indentation issues, added reviewed-by tag] Reviewed-by: Thomas Hellstrom <thomas@shipmail.org> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Tested-by: Ian Campbell <ian.campbell@citrix.com>

- 22 November 2010 (6 commits)

By Thomas Hellstrom:
This patch attempts to fix up shortcomings with the current calling sequences. 1) There's a fastpath where no locking occurs and only io_mem_reserve is called to obtain the info needed for mapping. The fastpath is set per memory type manager. 2) If the fastpath is disabled, io_mem_reserve and io_mem_free will be exactly balanced and not called recursively for the same struct ttm_mem_reg. 3) Optionally, the driver can choose to enable a per-memory-type-manager LRU eviction mechanism that, when io_mem_reserve returns -EAGAIN, will attempt to kill user-space mappings of memory in that manager to free up the needed resources. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Ben Skeggs <bskeggs@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
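The two callbacks being balanced here have roughly the following shape (paraphrased from the commit text): io_mem_reserve fills in the bus placement for a ttm_mem_reg before it is mapped, io_mem_free releases it, and an -EAGAIN return from io_mem_reserve is what triggers the optional LRU eviction of other mappings in the same memory type manager.

```c
struct ttm_bo_device;
struct ttm_mem_reg;

/* Illustrative subset of the driver function table. */
struct example_io_mem_ops {
	int  (*io_mem_reserve)(struct ttm_bo_device *bdev,
			       struct ttm_mem_reg *mem);
	void (*io_mem_free)(struct ttm_bo_device *bdev,
			    struct ttm_mem_reg *mem);
};
```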

By Thomas Hellstrom:
Rather than having the driver supply the validation sequence, leave that responsibility to TTM. This saves some confusion and a function argument. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

By Thomas Hellstrom:
Drastically reduce the number of spin lock / unlock operations by performing unreserving and fencing under global locks. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Jerome Glisse <j.glisse@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

By Thomas Hellstrom:
The bo lock was used only to protect the bo sync object members, and since it is a per-bo lock, fencing a buffer list incurs a lot of locks and unlocks. Replace it with a per-device lock that protects the sync object members on *all* bos. Reading and setting these members will always be very quick, so the risk of heavy lock contention is microscopic. Note that waiting for sync objects will always take place outside of this lock. The bo device fence lock will eventually be replaced with a seqlock / RCU mechanism so we can determine that a bo is idle under an RCU / read seqlock. However, this change will allow us to batch fencing and unreserving of buffers with a minimal amount of locking. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Jerome Glisse <j.glisse@gmail.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
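As an illustration of the batching this enables, here is a sketch (not driver code) of fencing a whole list of reserved buffers under the single per-device lock. The field names bdev->fence_lock, bo->sync_obj and bo->lru follow the commit description and the TTM structures of this period; real code would also reference-count the fence and release any previous sync object.

```c
#include <drm/ttm/ttm_bo_driver.h>

/* Fence every bo on @list with a single lock/unlock pair. */
static void example_fence_buffer_list(struct ttm_bo_device *bdev,
				      struct list_head *list,
				      void *sync_obj)
{
	struct ttm_buffer_object *bo;

	spin_lock(&bdev->fence_lock);		/* one device-wide lock ...     */
	list_for_each_entry(bo, list, lru)	/* ... covers the sync-object   */
		bo->sync_obj = sync_obj;	/*     members of every bo      */
	spin_unlock(&bdev->fence_lock);
}
```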

By Thomas Hellstrom:
Add an aid for the driver to detect deadlocks on multi-bo reservations. Update documentation. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Jerome Glisse <j.glisse@gmail.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

By Dave Airlie:
Makes it possible to reserve a list of buffer objects with a single spin lock / unlock if there is no contention. Should improve CPU usage on SMP kernels. v2: Initialize private list members on reserve and don't call ttm_bo_list_ref_sub() with zero put_count. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
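The execbuf-util helpers built around this look roughly as follows in use; the names ttm_eu_reserve_buffers() and ttm_eu_fence_buffer_objects() match the TTM execbuf utilities of this period, but treat the snippet as a usage sketch rather than a drop-in driver path.

```c
#include <drm/ttm/ttm_execbuf_util.h>

/* Reserve every buffer on @list (a list of struct ttm_validate_buffer),
 * do the driver's validation work, then fence and unreserve them all. */
static int example_validate_and_fence(struct list_head *list, void *sync_obj)
{
	int ret;

	ret = ttm_eu_reserve_buffers(list);	/* single lock/unlock when uncontended */
	if (ret)
		return ret;

	/* ... validate placements and build the command stream here ... */

	ttm_eu_fence_buffer_objects(list, sync_obj);
	return 0;
}
```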

- 09 November 2010 (1 commit)

By Thomas Hellstrom:
Remove an obsolete comment about mm nodes. Document the new bo range manager interface. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

- 21 October 2010 (1 commit)

By Thomas Hellstrom:
Release the lru spinlock early. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

- 19 October 2010 (1 commit)

By Dave Airlie:
We need the unlocked variant for the new codepath introduced to fix the race condition in master recently. Signed-off-by: Dave Airlie <airlied@redhat.com>

- 05 October 2010 (2 commits)

By Ben Skeggs:
Nouveau will need this on GeForce 8 and up to account for the GPU reordering physical VRAM for some memory types. Reviewed-by: Jerome Glisse <jglisse@redhat.com> Acked-by: Thomas Hellström <thellstrom@vmware.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>

By Ben Skeggs:
Existing core code/drivers call drm_mm_put_block on ttm_mem_reg.mm_node directly. Future patches will modify TTM behaviour in such a way that ttm_mem_reg.mm_node doesn't necessarily belong to drm_mm. Reviewed-by: Jerome Glisse <jglisse@redhat.com> Acked-by: Thomas Hellström <thellstrom@vmware.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>

- 04 August 2010 (1 commit)

By Dave Airlie:
I wrote this for the prime sharing work, but I also noticed other external non-upstream drivers from a large company carrying a similar patch, so I may as well ship it in master. Signed-off-by: Dave Airlie <airlied@redhat.com>

- 07 May 2010 (1 commit)

By Thomas Hellstrom:
It's unused and buggy in its current form, since it can place a bo in the reserved state without removing it from lru lists. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

- 20 April 2010 (2 commits)

By Jerome Glisse:
All TTM drivers have been converted to the new io_mem_reserve/free interface, which allows drivers to choose and return the proper io base and offset to core TTM for ioremapping if necessary. This patch removes what is now dead code. V2: adapt to match the change in the first patch of the patchset. V3: update after the io_mem_reserve/io_mem_free callback balancing. V4: adjust to minor cleanup. V5: remove the needs-ioremap flag. V6: keep the ioremapping facility in TTM. [airlied: squashed driver removals in here also] Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>

By Jerome Glisse:
On fault, the driver is given the opportunity to perform any operation it sees fit in order to place the buffer into a CPU-visible area of memory. This patch doesn't break TTM users; nouveau, vmwgfx and radeon should keep working properly. A future patch will take advantage of this infrastructure and remove the old path from TTM once drivers are converted. V2: return VM_FAULT_NOPAGE if the callback returns -EBUSY or -ERESTARTSYS. V3: balance io_mem_reserve and io_mem_free calls; fault_reserve_notify is responsible for performing any task necessary for the mapping to succeed. V4: minor cleanup; atomic_t -> bool, as the member is protected from concurrent access by the reserve mechanism. V5: the callback is now responsible for iomapping the bo and providing a virtual address; this simplifies TTM and will allow getting rid of TTM_MEMTYPE_FLAG_NEEDS_IOREMAP. V6: use the bus address data to decide whether to ioremap; we don't necessarily need to ioremap in the callback, but still allow the driver to use a static mapping. Signed-off-by: Jerome Glisse <jglisse@redhat.com> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
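The hook described here has roughly this shape (paraphrased from the commit text): the driver is called at fault time to make the bo CPU-visible, and an -EBUSY or -ERESTARTSYS return is turned into VM_FAULT_NOPAGE so the fault is simply retried.

```c
struct ttm_buffer_object;

/* Illustrative subset of the driver function table. */
struct example_fault_ops {
	/* Called with the bo reserved, before TTM maps it for the CPU;
	 * the driver may move the bo or set up its io mapping here. */
	int (*fault_reserve_notify)(struct ttm_buffer_object *bo);
};
```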

- 08 April 2010 (1 commit)

By Jerome Glisse:
There are cases where we want to wait only for the GPU while not waiting for other buffers to be unreserved. This patch splits the no_wait argument all the way down the whole ttm path so that the upper level can decide what to wait on or not. [airlied: squashed these 4 for bisectability reasons.] drm/radeon/kms: update to the split TTM no_wait argument. drm/nouveau: update to the split TTM no_wait argument. drm/vmwgfx: update to the split TTM no_wait argument. [vmwgfx patch: Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>] Signed-off-by: Jerome Glisse <jglisse@redhat.com> Acked-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
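The split shows up in signatures like the following sketch, reconstructed from the description (the exact function list differs across the ttm path): where a single no_wait flag used to exist, callers now say separately whether to block on other reservations and whether to block on the GPU.

```c
#include <linux/types.h>

struct ttm_buffer_object;
struct ttm_placement;

int example_bo_validate(struct ttm_buffer_object *bo,
			struct ttm_placement *placement,
			bool interruptible,
			bool no_wait_reserve,	/* don't block on other reservations */
			bool no_wait_gpu);	/* don't block waiting for the GPU   */
```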

- 15 March 2010 (1 commit)

By Dave Airlie:
Now that the drm core can do this, let's just use it; split the code out so TTM doesn't have to drag all of drmP.h in. Signed-off-by: Dave Airlie <airlied@redhat.com>

- 01 March 2010 (1 commit)

By Randy Dunlap:
Fix function prototype to match its actual usage and implementation. drivers/gpu/drm/ttm/ttm_bo_util.c:341:10: error: symbol 'ttm_io_prot' redeclared with different type (originally declared at include/drm/ttm/ttm_bo_driver.h:911) - incompatible argument 1 (different signedness) Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Cc: David Airlie <airlied@linux.ie> Signed-off-by: Dave Airlie <airlied@redhat.com>
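For reference, the prototype in question reads as follows once argument 1 is made unsigned, as the sparse warning above indicates; the authoritative declaration lives in include/drm/ttm/ttm_bo_driver.h.

```c
#include <linux/mm.h>		/* pgprot_t */
#include <linux/types.h>	/* uint32_t */

pgprot_t ttm_io_prot(uint32_t caching_flags, pgprot_t tmp);
```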

- 14 January 2010 (1 commit)

By Thomas Hellstrom:
This is needed for a bugfix in the vmwgfx driver. Drivers may have GPU bindings on buffers that core TTM is not aware of, and TTM may view those buffers as ordinary system memory buffers. Add a notifier to such drivers when TTM is about to move the buffer contents out to swappable memory. The driver must then release any private GPU bindings on those buffers. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
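The notifier described above is a single driver callback, roughly as follows (name per the commit description; see ttm_bo_driver.h for the exact declaration):

```c
struct ttm_buffer_object;

/* Illustrative subset of the driver function table. */
struct example_swap_ops {
	/* Called just before the buffer contents are moved out to swappable
	 * memory; the driver must drop any private GPU bindings on @bo. */
	void (*swap_notify)(struct ttm_buffer_object *bo);
};
```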

- 10 December 2009 (1 commit)

By Thomas Hellstrom:
Return -ERESTARTSYS instead of -ERESTART when interrupted by a signal. The -ERESTARTSYS is converted to an -EINTR by the kernel signal layer before being returned to user-space. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Jerome Glisse <jglisse@redhat.com> Signed-off-by: Dave Airlie <airlied@redhat.com>