1. 20 Nov 2012 (2 commits)
  2. 03 Oct 2012 (1 commit)
  3. 24 Aug 2012 (1 commit)
  4. 06 Dec 2011 (2 commits)
  5. 28 Oct 2011 (1 commit)
  6. 18 Oct 2011 (1 commit)
  7. 01 Sep 2011 (1 commit)
    • drm/ttm: add a way to bo_wait for either the last read or last write · dfadbbdb
      Authored by Marek Olšák
      Sometimes we want to know whether a buffer is busy and wait for it (bo_wait).
      However, sometimes it is more useful to query whether a buffer is busy being
      either read or written, and to wait until it has stopped being read or written.
      The point of this is to avoid unnecessary waiting: e.g. if the GPU has written
      something to a buffer and is now reading that buffer, and the CPU wants to map
      that buffer for reading, it only needs to wait for the last write. If there was
      no write, no waiting is needed at all.
      
      This, of course, requires user space drivers to send read/write flags
      with each relocation (like the read/write domains we already have in radeon,
      so we can actually use those for something useful now).
      
      Now how this patch works:
      
      The read/write flags should be passed to ttm_validate_buffer. TTM maintains
      separate sync objects for the last read and the last write of each buffer, in
      addition to the sync object for the last use of a buffer. ttm_bo_wait then
      operates on one of these sync objects (a hedged usage sketch follows this entry).
      Signed-off-by: Marek Olšák <maraeo@gmail.com>
      Reviewed-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      dfadbbdb
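      A minimal usage sketch of the idea described above, from the point of view of a
      driver that wants to CPU-map a buffer for reading. The extra "usage" argument and
      the TTM_USAGE_WRITE name are assumptions used for illustration, not identifiers
      verified against the patch; the fence_lock usage follows the locking rule from the
      702adba2 entry below.

          /* Hedged sketch: the usage argument and TTM_USAGE_WRITE are
           * illustrative assumptions, not verified identifiers. The caller
           * is assumed to hold the bo reservation. */
          static int mydrv_wait_for_cpu_read(struct ttm_buffer_object *bo)
          {
                  int ret;

                  spin_lock(&bo->bdev->fence_lock);
                  /* The CPU only wants to read, so waiting for the last write
                   * is enough; a still-pending GPU read can be ignored. */
                  ret = ttm_bo_wait(bo, true, true, false, TTM_USAGE_WRITE);
                  spin_unlock(&bo->bdev->fence_lock);

                  return ret;
          }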
  8. 23 Aug 2011 (1 commit)
  9. 16 Dec 2010 (1 commit)
  10. 22 Nov 2010 (2 commits)
    • drm/ttm: Fix up io_mem_reserve / io_mem_free calling · eba67093
      Authored by Thomas Hellstrom
      This patch attempts to fix up shortcomings with the current calling
      sequences.
      
      1) There's a fastpath where no locking occurs and only io_mem_reserve is
         called to obtain the needed info for mapping. The fastpath is set per
         memory type manager.
      2) If the fastpath is disabled, io_mem_reserve and io_mem_free will be exactly
         balanced and not called recursively for the same struct ttm_mem_reg.
      3) Optionally, the driver can enable a per-memory-type-manager LRU eviction
         mechanism that, when io_mem_reserve returns -EAGAIN, will attempt to kill
         user-space mappings of memory in that manager to free up the needed
         resources (a hedged driver-side sketch follows this entry).
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      eba67093
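      A hedged sketch of a driver-side io_mem_reserve/io_mem_free pair under the calling
      convention described above. The mydrv_* names, the device fields and the helper
      mydrv_vram_is_cpu_visible() are hypothetical; the ttm_mem_reg.bus field names are
      written as recalled from TTM of that era and should be treated as illustrative.

          /* Hedged sketch: mydrv_* names and fields are hypothetical; bus.*
           * field names are illustrative. */
          static int mydrv_io_mem_reserve(struct ttm_bo_device *bdev,
                                          struct ttm_mem_reg *mem)
          {
                  struct mydrv_device *ddev =
                          container_of(bdev, struct mydrv_device, bdev);

                  switch (mem->mem_type) {
                  case TTM_PL_SYSTEM:
                          return 0;       /* system RAM: nothing to reserve */
                  case TTM_PL_VRAM:
                          if (!mydrv_vram_is_cpu_visible(ddev, mem))
                                  /* Let the optional LRU path evict mappings. */
                                  return -EAGAIN;
                          mem->bus.base = ddev->vram_aper_base;
                          mem->bus.offset = mem->start << PAGE_SHIFT;
                          mem->bus.is_iomem = true;
                          return 0;
                  default:
                          return -EINVAL;
                  }
          }

          static void mydrv_io_mem_free(struct ttm_bo_device *bdev,
                                        struct ttm_mem_reg *mem)
          {
                  /* Called exactly once per successful reserve when the
                   * fastpath is disabled; nothing to undo in this sketch. */
          }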
    • drm/ttm/radeon/nouveau: Kill the bo lock in favour of a bo device fence_lock · 702adba2
      Authored by Thomas Hellstrom
      The bo lock was used only to protect the bo sync object members, and since it
      is a per-bo lock, fencing a buffer list will see a lot of locks and unlocks.
      Replace it with a per-device lock that protects the sync object members on
      *all* bos. Reading and setting these members will always be very quick, so
      the risk of heavy lock contention is microscopic. Note that waiting for
      sync objects will always take place outside of this lock.
      
      The bo device fence lock will eventually be replaced with a seqlock /
      rcu mechanism so we can determine that a bo is idle under an rcu read
      lock / read seqlock.
      
      However, this change will allow us to batch fencing and unreserving of
      buffers with a minimal amount of locking (a short locking sketch follows
      this entry).
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      702adba2
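      A minimal sketch of the locking rule described above, assuming fence_lock is a
      spinlock in struct ttm_bo_device as this entry describes; the helper name is
      hypothetical.

          /* Hypothetical helper: sync object members are only touched under
           * the per-device fence_lock, and any actual waiting is done after
           * the lock has been dropped. */
          static bool mydrv_bo_is_idle(struct ttm_buffer_object *bo)
          {
                  struct ttm_bo_device *bdev = bo->bdev;
                  bool idle;

                  spin_lock(&bdev->fence_lock);
                  idle = (bo->sync_obj == NULL);  /* quick read, quick unlock */
                  spin_unlock(&bdev->fence_lock);

                  return idle;
          }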
  11. 27 Oct 2010 (1 commit)
    • mm: stack based kmap_atomic() · 3e4d3af5
      Authored by Peter Zijlstra
      Keep the current interface but ignore the KM_type argument and use a
      stack-based approach.
      
      The advantage is that we get rid of crappy code like:
      
      	#define __KM_PTE			\
      		(in_nmi() ? KM_NMI_PTE : 	\
      		 in_irq() ? KM_IRQ_PTE :	\
      		 KM_PTE0)
      
      and in general can stop worrying about what context we're in and what kmap
      slots might be appropriate for that.
      
      The downside is that FRV kmap_atomic() gets more expensive.
      
      For now we use a CPP trick suggested by Andrew:
      
        #define kmap_atomic(page, args...) __kmap_atomic(page)
      
      to avoid having to touch all kmap_atomic() users in a single patch
      (a before/after sketch follows this entry).
      
      [ not compiled on:
        - mn10300: the arch doesn't actually build with highmem to begin with ]
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Dave Airlie <airlied@linux.ie>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3e4d3af5
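      A small before/after sketch of what the change means for callers; KM_USER0 is one
      of the old per-context slot constants, and the CPP trick above is what lets the old
      two-argument calls keep compiling during the transition.

          #include <linux/highmem.h>
          #include <linux/string.h>

          /* Old style: the caller had to pick a KM_* slot matching its context. */
          static void copy_from_page_old(struct page *page, void *dst)
          {
                  void *src = kmap_atomic(page, KM_USER0);
                  memcpy(dst, src, PAGE_SIZE);
                  kunmap_atomic(src, KM_USER0);
          }

          /* New style: the stack-based implementation ignores the slot, so new
           * code simply drops it. */
          static void copy_from_page_new(struct page *page, void *dst)
          {
                  void *src = kmap_atomic(page);
                  memcpy(dst, src, PAGE_SIZE);
                  kunmap_atomic(src);
          }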
  12. 05 Oct 2010 (2 commits)
  13. 22 Sep 2010 (1 commit)
  14. 07 Jul 2010 (1 commit)
  15. 07 May 2010 (1 commit)
  16. 20 Apr 2010 (2 commits)
    • drm/ttm: remove io_ field from TTM V6 · 0c321c79
      Authored by Jerome Glisse
      All TTM drivers have been converted to the new io_mem_reserve/free
      interface, which allows a driver to choose and return the proper io
      base and offset to core TTM for ioremapping if necessary. This
      patch removes what is now dead code.
      
      V2 adapt to match with change in first patch of the patchset
      V3 update after io_mem_reserve/io_mem_free callback balancing
      V4 adjust to minor cleanup
      V5 remove the needs ioremap flag
      V6 keep the ioremapping facility in TTM
      
      [airlied- squashed driver removals in here also]
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      0c321c79
    • drm/ttm: ttm_fault callback to allow driver to handle bo placement V6 · 82c5da6b
      Authored by Jerome Glisse
      On fault the driver is given the opportunity to perform any operation
      it sees fit in order to place the buffer into a CPU-visible area of
      memory. This patch doesn't break TTM users; nouveau, vmwgfx and radeon
      should keep working properly. A future patch will take advantage of this
      infrastructure and remove the old path from TTM once drivers are
      converted (a hedged callback sketch follows this entry).
      
      V2 return VM_FAULT_NOPAGE if the callback returns -EBUSY or -ERESTARTSYS
      V3 balance io_mem_reserve and io_mem_free calls; fault_reserve_notify
         is responsible for performing any task necessary for the mapping to succeed
      V4 minor cleanup, atomic_t -> bool as the member is protected by the reserve
         mechanism from concurrent access
      V5 the callback is now responsible for iomapping the bo and providing
         a virtual address; this simplifies TTM and will allow us to get rid of
         TTM_MEMTYPE_FLAG_NEEDS_IOREMAP
      V6 use the bus addr data to decide whether ioremapping is needed; we don't
         necessarily need to ioremap in the callback, but we still allow the
         driver to use a static mapping
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      82c5da6b
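      A hedged sketch of the fault-time hook this patch describes. The callback name
      fault_reserve_notify is taken from the V3 note above; the signature, the mydrv_*
      wrapper type and the helpers are assumptions used only for illustration.

          /* Hedged sketch: mydrv_* types and helpers are hypothetical.
           * Returning -EBUSY or -ERESTARTSYS makes the TTM fault handler
           * back off with VM_FAULT_NOPAGE so the fault is retried later. */
          static int mydrv_fault_reserve_notify(struct ttm_buffer_object *bo)
          {
                  struct mydrv_bo *mbo = container_of(bo, struct mydrv_bo, tbo);

                  if (mydrv_bo_is_cpu_visible(mbo))
                          return 0;       /* already mappable, nothing to do */

                  /* Not CPU-visible yet: migrate it into the visible part of
                   * VRAM (or to GTT); may return -EBUSY or -ERESTARTSYS. */
                  return mydrv_bo_move_to_visible(mbo);
          }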
  17. 08 Apr 2010 (1 commit)
  18. 30 Mar 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h, which
      in turn includes gfp.h, making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script was
      used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed a manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make things
         build (like ipr on powerpc/64, which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests in step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of the
      specific arch (a minimal example of the per-file change follows this entry).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
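      A minimal example of the per-file change the sweep produces, using a made-up file
      and function: a user of kmalloc()/kfree() that previously compiled only because
      slab.h arrived implicitly through percpu.h now includes it directly.

          /* Hypothetical file: the only change the sweep makes here is the
           * explicit slab.h include. */
          #include <linux/kernel.h>
          #include <linux/slab.h>         /* added: kmalloc(), kfree() */

          static int *alloc_counter(void)
          {
                  return kmalloc(sizeof(int), GFP_KERNEL);
          }

          static void free_counter(int *counter)
          {
                  kfree(counter);
          }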
  19. 01 Feb 2010 (1 commit)
  20. 07 Dec 2009 (1 commit)
  21. 04 Dec 2009 (1 commit)
  22. 19 Aug 2009 (1 commit)
  23. 04 Aug 2009 (1 commit)
  24. 29 Jul 2009 (2 commits)
  25. 24 Jun 2009 (1 commit)
  26. 15 Jun 2009 (1 commit)
    • drm: Add the TTM GPU memory manager subsystem. · ba4e7d97
      Authored by Thomas Hellstrom
      TTM is a GPU memory manager subsystem designed for use with GPU
      devices with various memory types (on-card VRAM, AGP,
      PCI apertures, etc.). It's essentially a helper library that assists
      the DRM driver in creating and managing persistent buffer objects.
      
      TTM manages placement of data and CPU map setup and teardown on
      data movement. It can also optionally manage synchronization of
      data on a per-buffer-object level.
      
      TTM takes care to provide an always valid virtual user-space address
      to a buffer object which makes user-space sub-allocation of
      big buffer objects feasible.
      
      TTM uses a fine-grained per buffer-object locking scheme, taking
      care to release all relevant locks when waiting for the GPU.
      Although this implies some locking overhead, it's probably a big
      win for devices with multiple command submission mechanisms, since
      the lock contention will be minimal.
      
      TTM can be used with whatever user-space interface the driver
      chooses, including GEM. It's used by the upcoming Radeon KMS DRM driver
      and is also the GPU memory management core of various new experimental
      DRM drivers.
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      ba4e7d97