1. 24 Jul 2015, 1 commit
  2. 23 Sep 2014, 1 commit
  3. 02 Sep 2014, 1 commit
  4. 01 Sep 2014, 1 commit
  5. 08 Jul 2014, 1 commit
  6. 08 Jan 2014, 1 commit
  7. 23 Dec 2013, 1 commit
  8. 18 Dec 2013, 1 commit
  9. 20 Nov 2013, 1 commit
  10. 06 Nov 2013, 2 commits
  11. 25 Jul 2013, 1 commit
    • drm/ttm: convert to unified vma offset manager · 72525b3f
      David Herrmann authored
      Use the new vma-manager infrastructure. This doesn't change any
      implementation details as the vma-offset-manager is nearly copied 1-to-1
      from TTM.
      
      The vm_lock is moved into the offset manager so we can drop it from TTM.
      During lookup, we use the vma locking helpers to take a reference to the
      found object.
      In all other scenarios, locking stays the same as before. We always
      guarantee that drm_vma_offset_remove() is called only during destruction.
      Hence, helpers like drm_vma_node_offset_addr() are always safe as long as
      the node has a valid offset.
      
      This also drops the addr_space_offset member as it is a copy of vm_start
      in vma_node objects. Use the accessor functions instead.
      
      v4:
       - remove vm_lock
       - use drm_vma_offset_lock_lookup() to protect lookup (instead of vm_lock)
      
      Cc: Dave Airlie <airlied@redhat.com>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Cc: Martin Peres <martin.peres@labri.fr>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@gmail.com>
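      A rough userspace model of the lookup pattern described above (all names
      are illustrative stand-ins, not the real drm_vma_*/TTM symbols): the
      offset manager owns the lock that used to be TTM's vm_lock, the lookup
      happens under that lock, and a reference is taken on the found object
      before the lock is dropped.

        #include <pthread.h>
        #include <stddef.h>

        struct object {
            int refcount;               /* illustrative reference count */
            unsigned long start, pages; /* mmap offset range owned by this object */
            struct object *next;
        };

        struct offset_manager {
            pthread_mutex_t lock;   /* was TTM's vm_lock; now lives in the manager */
            struct object *objects; /* stand-in for the manager's tree of vma nodes */
        };

        /* Look up the object backing an mmap offset and take a reference to it
         * while still holding the manager lock, mirroring the lookup helpers
         * (drm_vma_offset_lock_lookup() / _unlock_lookup()) mentioned in v4. */
        static struct object *lookup_and_ref(struct offset_manager *mgr,
                                             unsigned long page_offset)
        {
            struct object *obj, *found = NULL;

            pthread_mutex_lock(&mgr->lock);
            for (obj = mgr->objects; obj; obj = obj->next) {
                if (page_offset >= obj->start &&
                    page_offset < obj->start + obj->pages) {
                    obj->refcount++;        /* reference taken under the lock */
                    found = obj;
                    break;
                }
            }
            pthread_mutex_unlock(&mgr->lock);
            return found;
        }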
  12. 28 Jun 2013, 1 commit
  13. 12 Apr 2013, 1 commit
  14. 08 Feb 2013, 1 commit
    • drm/ttm: fix fence locking in ttm_buffer_object_transfer, 2nd try · ff7c60c5
      Daniel Vetter authored
      This fixes up
      
      commit e8e89622
      Author: Daniel Vetter <daniel.vetter@ffwll.ch>
      Date:   Tue Dec 18 22:25:11 2012 +0100
      
          drm/ttm: fix fence locking in ttm_buffer_object_transfer
      
      which leaves behind a might_sleep in atomic context, since the
      fence_lock spinlock is held over a kmalloc(GFP_KERNEL) call. The fix
      is to revert the above commit and only take the lock where we need it,
      around the call to ->sync_obj_ref.
      
      v2: Fixup things noticed by Maarten Lankhorst:
      - Brown paper bag locking bug.
      - No need for kzalloc if we clear the entire thing on the next line.
      - check for bo->sync_obj (totally unlikely race, but still someone
        else could have snuck in) and clear fbo->sync_obj if it's cleared
        already.
      Reported-by: Dave Airlie <airlied@gmail.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
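      A minimal userspace model of the corrected ordering (names are made up for
      illustration): the allocation that may sleep happens before the spinlock
      is taken, and the lock only covers the sync-object reference, which fixes
      the might_sleep-in-atomic problem described above.

        #include <pthread.h>
        #include <stdlib.h>

        struct buffer {
            pthread_mutex_t *fence_lock; /* device lock protecting sync_obj */
            void *sync_obj;              /* fence the buffer last used, may be NULL */
        };

        static void *sync_obj_ref(void *sync_obj) { return sync_obj; } /* stand-in */

        /* Model of ttm_buffer_object_transfer() after the fix: allocate the
         * ghost object with a sleeping allocator first, then take the lock only
         * around the sync_obj reference. */
        static struct buffer *transfer(struct buffer *bo)
        {
            struct buffer *fbo = malloc(sizeof(*fbo)); /* GFP_KERNEL-style alloc,
                                                        * must not be under the lock */
            if (!fbo)
                return NULL;
            *fbo = *bo; /* "no need for kzalloc if we clear the entire thing" */

            pthread_mutex_lock(bo->fence_lock);
            if (bo->sync_obj)
                fbo->sync_obj = sync_obj_ref(bo->sync_obj);
            else
                fbo->sync_obj = NULL; /* cleared since the copy: the unlikely race */
            pthread_mutex_unlock(bo->fence_lock);

            return fbo;
        }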
  15. 21 Jan 2013, 1 commit
    • ttm: don't destroy old mm_node on memcpy failure · 63054186
      Dave Airlie authored
      When we are using memcpy to move objects around and the memcpy fails,
      either because we lack the memory to populate the destination or because
      the copy does not finish, we must not destroy the mm_node that has been
      copied into old_copy.

      While working on a new KMS driver that uses memcpy, if I over-allocated
      bos up to the memory limits and eviction then failed, the machine would
      oops soon afterwards: the still-active bo contained an already freed
      drm_mm node embedded in it, and freeing it a second time didn't end well.
      Reviewed-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
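      A small model of the error path this commit fixes (names are made up for
      illustration): on a failed copy the function must return without freeing
      the mm_node it shares with the original bo; only a successful move may
      release the old placement.

        #include <stdlib.h>
        #include <string.h>

        struct mm_node { unsigned long start, size; };

        struct mem_reg {
            struct mm_node *mm_node; /* placement owned by the bo */
            void *cpu_addr;
            size_t bytes;
        };

        /* Model of a memcpy-based move: old_copy shares old->mm_node with the
         * bo.  If the copy cannot run, return the error and leave the node
         * alone; freeing it here leaves the still-active bo pointing at freed
         * memory (the oops described above). */
        static int move_memcpy(struct mem_reg *old, struct mem_reg *new_reg)
        {
            struct mem_reg old_copy = *old; /* snapshot, shares mm_node */

            if (!new_reg->cpu_addr)
                return -1;                  /* populate failed: keep old_copy.mm_node */

            memcpy(new_reg->cpu_addr, old_copy.cpu_addr, old_copy.bytes);

            /* Only on success does the old placement get released. */
            free(old->mm_node);
            *old = *new_reg;
            return 0;
        }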
  16. 08 Jan 2013, 1 commit
  17. 10 Dec 2012, 1 commit
    • drm/ttm: remove no_wait_reserve, v3 · 97a875cb
      Maarten Lankhorst authored
      All items on the lru list are always reservable, so this is a stupid
      thing to keep. Not only that, it is used in a way which would
      guarantee deadlocks if it were ever to be set to block on reserve.
      
      This is a lot of churn, but mostly because of the removal of the
      argument which can be nested arbitrarily deeply in many places.
      
      No code changes in this patch other than the removal of the
      no_wait_reserve argument; the previous patch removed its last uses.
      
      v2:
       - Warn if -EBUSY is returned on reservation, all objects on the list
         should be reservable. Adjusted patch slightly due to conflicts.
      v3:
       - Focus on no_wait_reserve removal only.
      Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
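      A tiny model of the v2 behaviour noted above (illustrative names only):
      when walking the lru list, a reservation that comes back -EBUSY is
      unexpected, so it is only warned about rather than ever blocked on.

        #include <stdio.h>

        #define ERR_BUSY (-16)

        struct buffer {
            int reserved;            /* 0 = free, 1 = reserved by someone else */
            struct buffer *lru_next;
        };

        /* Non-blocking reserve: items sitting on the lru should never be
         * reserved by anyone else, so -EBUSY indicates a bug. */
        static int reserve_nowait(struct buffer *bo)
        {
            if (bo->reserved)
                return ERR_BUSY;
            bo->reserved = 1;
            return 0;
        }

        /* Walk the lru, reserve each buffer without any no_wait_reserve flag,
         * and just warn if -EBUSY ever comes back. */
        static struct buffer *grab_first_evictable(struct buffer *lru_head)
        {
            struct buffer *bo;

            for (bo = lru_head; bo; bo = bo->lru_next) {
                if (reserve_nowait(bo) == ERR_BUSY) {
                    fprintf(stderr, "WARN: lru buffer unexpectedly reserved\n");
                    continue;
                }
                return bo; /* reserved; the caller may evict it */
            }
            return NULL;
        }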
  18. 20 Nov 2012, 2 commits
  19. 03 Oct 2012, 1 commit
  20. 24 Aug 2012, 1 commit
  21. 06 Dec 2011, 2 commits
  22. 28 Oct 2011, 1 commit
  23. 18 Oct 2011, 1 commit
  24. 01 Sep 2011, 1 commit
    • drm/ttm: add a way to bo_wait for either the last read or last write · dfadbbdb
      Marek Olšák authored
      Sometimes we want to know whether a buffer is busy and wait for it (bo_wait).
      However, sometimes it would be more useful to be able to query whether
      a buffer is busy and being either read or written, and wait until it's stopped
      being either read or written. The point of this is to be able to avoid
      unnecessary waiting, e.g. if a GPU has written something to a buffer and is now
      reading that buffer, and a CPU wants to map that buffer for read, it needs to
      only wait for the last write. If there were no write, there wouldn't be any
      waiting needed.
      
      This, of course, requires user space drivers to send read/write flags
      with each relocation (like we have read/write domains in radeon, so we
      can actually use those for something useful now).

      Now how this patch works:

      The read/write flags should be passed to ttm_validate_buffer. TTM maintains
      separate sync objects for the last read and the last write of each buffer,
      in addition to the sync object for the last use of a buffer. ttm_bo_wait
      then operates on one of these sync objects.
      Signed-off-by: Marek Olšák <maraeo@gmail.com>
      Reviewed-by: Jerome Glisse <jglisse@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
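      A minimal model of the split wait this commit introduces (names are
      illustrative, not the real TTM fields): a CPU mapping for read only needs
      the last write to finish, while a mapping for write needs every
      outstanding access to finish.

        #include <stdbool.h>
        #include <stddef.h>

        struct fence { bool signaled; };

        static void fence_wait(struct fence *f) { f->signaled = true; } /* stand-in */

        struct buffer {
            struct fence *last_use;   /* sync object of the last access of any kind */
            struct fence *last_write; /* sync object of the last GPU write only */
        };

        /* bo_wait with a read/write hint: pick the sync object that actually
         * matters for the requested CPU access. */
        static void bo_wait(struct buffer *bo, bool cpu_write)
        {
            struct fence *f = cpu_write ? bo->last_use : bo->last_write;

            if (f && !f->signaled)
                fence_wait(f);
            /* If the GPU never wrote the buffer, last_write is NULL and a
             * read-only mapping does not wait at all. */
        }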
  25. 23 Aug 2011, 1 commit
  26. 16 Dec 2010, 1 commit
  27. 22 Nov 2010, 2 commits
    • drm/ttm: Fix up io_mem_reserve / io_mem_free calling · eba67093
      Thomas Hellstrom authored
      This patch attempts to fix up shortcomings with the current calling
      sequences.

      1) There's a fastpath where no locking occurs and only io_mem_reserve is
         called to obtain the info needed for mapping. The fastpath is set per
         memory type manager.
      2) If the fastpath is disabled, io_mem_reserve and io_mem_free will be
         exactly balanced and not called recursively for the same struct
         ttm_mem_reg.
      3) Optionally the driver can choose to enable a per memory type manager
         LRU eviction mechanism that, when io_mem_reserve returns -EAGAIN, will
         attempt to kill user-space mappings of memory in that manager to free
         up the needed resources.
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Reviewed-by: Ben Skeggs <bskeggs@redhat.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
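      A rough model of the balanced calling sequence in point 2 (names are made
      up; the real code keys this off struct ttm_mem_reg): the driver hooks are
      invoked only on the first reserve and the last free of a region, so they
      are never called recursively for the same region, and -EAGAIN is the cue
      for the optional eviction policy in point 3.

        #include <stdbool.h>

        #define ERR_AGAIN (-11)

        struct mem_reg {
            int  io_refcount;  /* keeps reserve/free exactly balanced */
            bool io_reserved;
        };

        static int  driver_io_mem_reserve(struct mem_reg *mem) { (void)mem; return 0; }
        static void driver_io_mem_free(struct mem_reg *mem)    { (void)mem; }

        static int core_io_reserve(struct mem_reg *mem)
        {
            int ret = 0;

            if (mem->io_refcount++ == 0) {       /* first user of this region */
                ret = driver_io_mem_reserve(mem);
                if (ret == ERR_AGAIN) {
                    /* Optional per-manager policy: kill user-space mappings of
                     * other buffers in this memory type and retry. */
                }
                mem->io_reserved = (ret == 0);
            }
            return ret;
        }

        static void core_io_free(struct mem_reg *mem)
        {
            if (--mem->io_refcount == 0 && mem->io_reserved) { /* last user */
                driver_io_mem_free(mem);
                mem->io_reserved = false;
            }
        }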
    • drm/ttm/radeon/nouveau: Kill the bo lock in favour of a bo device fence_lock · 702adba2
      Thomas Hellstrom authored
      The bo lock was used only to protect the bo sync object members, and since
      it is a per-bo lock, fencing a buffer list involves a lot of locks and
      unlocks. Replace it with a per-device lock that protects the sync object
      members of *all* bos. Reading and setting these members will always be
      very quick, so the risk of heavy lock contention is microscopic. Note that
      waiting for sync objects will always take place outside of this lock.

      The bo device fence lock will eventually be replaced with a seqlock /
      rcu mechanism so we can determine that a bo is idle under an
      rcu / read seqlock.

      However, this change will allow us to batch fencing and unreserving of
      buffers with a minimal amount of locking.
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
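      A compact model of the batching this enables (illustrative names only):
      with one device-wide fence_lock, fencing a whole validation list is a
      single lock/unlock pair instead of one per buffer, and waiting on fences
      stays outside the lock.

        #include <pthread.h>
        #include <stddef.h>

        struct fence;   /* opaque sync object, only ever used via pointer here */

        struct buffer {
            struct fence *sync_obj; /* protected by the device fence_lock */
            struct buffer *next;
        };

        struct device {
            pthread_mutex_t fence_lock; /* one lock for the sync members of all bos */
        };

        /* Fence every buffer on a validation list under the single device lock.
         * Each assignment is very quick, so the critical section stays short. */
        static void fence_buffer_list(struct device *dev, struct buffer *list,
                                      struct fence *fence)
        {
            struct buffer *bo;

            pthread_mutex_lock(&dev->fence_lock);
            for (bo = list; bo; bo = bo->next)
                bo->sync_obj = fence;
            pthread_mutex_unlock(&dev->fence_lock);
        }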
  28. 27 Oct 2010, 1 commit
    • mm: stack based kmap_atomic() · 3e4d3af5
      Peter Zijlstra authored
      Keep the current interface but ignore the KM_type and use a stack based
      approach.
      
      The advantage is that we get rid of crappy code like:
      
      	#define __KM_PTE			\
      		(in_nmi() ? KM_NMI_PTE : 	\
      		 in_irq() ? KM_IRQ_PTE :	\
      		 KM_PTE0)
      
      and in general can stop worrying about what context we're in and what kmap
      slots might be appropriate for that.
      
      The downside is that FRV kmap_atomic() gets more expensive.
      
      For now we use a CPP trick suggested by Andrew:
      
        #define kmap_atomic(page, args...) __kmap_atomic(page)
      
      to avoid having to touch all kmap_atomic() users in a single patch.
      
      [ not compiled on:
        - mn10300: the arch doesn't actually build with highmem to begin with ]
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Dave Airlie <airlied@linux.ie>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
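      A toy model of the stack-based idea (user-space, per-thread instead of
      per-CPU, names made up): the caller no longer names a slot such as KM_PTE0
      or KM_IRQ_PTE; the next free slot is simply the top of a small per-context
      stack, which is correct regardless of the context the mapping happens in.

        #include <assert.h>
        #include <stddef.h>

        #define KMAP_SLOTS 16

        /* Per-thread stack of atomic mapping slots (per-CPU in the kernel). */
        static _Thread_local void *kmap_stack[KMAP_SLOTS];
        static _Thread_local int   kmap_depth;

        static void *kmap_atomic_model(void *page)
        {
            assert(kmap_depth < KMAP_SLOTS);
            kmap_stack[kmap_depth++] = page; /* push: claim the next slot */
            return page;                     /* the kernel returns a fixmap address */
        }

        static void kunmap_atomic_model(void *vaddr)
        {
            assert(kmap_depth > 0);
            assert(kmap_stack[kmap_depth - 1] == vaddr); /* must pop in LIFO order */
            kmap_stack[--kmap_depth] = NULL;
        }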
  29. 05 Oct 2010, 2 commits
  30. 22 Sep 2010, 1 commit
  31. 07 Jul 2010, 1 commit
  32. 07 May 2010, 1 commit
  33. 20 Apr 2010, 2 commits
    • drm/ttm: remove io_ field from TTM V6 · 0c321c79
      Jerome Glisse authored
      All TTM drivers have been converted to the new io_mem_reserve/free
      interface, which allows a driver to choose and return the proper io
      base and offset to core TTM for ioremapping if necessary. This
      patch removes what is now dead code.
      
      V2 adapt to match with change in first patch of the patchset
      V3 update after io_mem_reserve/io_mem_free callback balancing
      V4 adjust to minor cleanup
      V5 remove the needs ioremap flag
      V6 keep the ioremapping facility in TTM
      
      [airlied- squashed driver removals in here also]
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/ttm: ttm_fault callback to allow driver to handle bo placement V6 · 82c5da6b
      Jerome Glisse authored
      On fault the driver is given the opportunity to perform any operation
      it sees fit in order to place the buffer into a CPU visible area of
      memory. This patch doesn't break TTM users; nouveau, vmwgfx and radeon
      should keep working properly. A future patch will take advantage of this
      infrastructure and remove the old path from TTM once drivers are
      converted.

      V2 return VM_FAULT_NOPAGE if the callback returns -EBUSY or -ERESTARTSYS
      V3 balance io_mem_reserve and io_mem_free calls; fault_reserve_notify
         is responsible for performing any task necessary for the mapping to succeed
      V4 minor cleanup, atomic_t -> bool as the member is protected from
         concurrent access by the reserve mechanism
      V5 the callback is now responsible for ioremapping the bo and providing
         a virtual address; this simplifies TTM and will allow us to get rid of
         TTM_MEMTYPE_FLAG_NEEDS_IOREMAP
      V6 use the bus addr data to decide whether ioremapping is needed; we don't
         necessarily need to ioremap in the callback, but still allow the driver
         to use a static mapping
      Signed-off-by: Jerome Glisse <jglisse@redhat.com>
      Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
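      A small model of the fault path described above (illustrative names; the
      real hook is the driver's fault_reserve_notify-style callback): the driver
      gets the first chance to make the buffer CPU visible, and transient errors
      are turned into a "no page, retry the fault" result as in V2.

        #include <stdbool.h>

        #define ERR_BUSY       (-16)
        #define ERR_RESTARTSYS (-512)

        enum fault_result { FAULT_OK, FAULT_NOPAGE, FAULT_SIGBUS };

        struct buffer {
            bool cpu_visible; /* protected by the reserve mechanism, hence a plain bool */
        };

        /* Driver hook: move or map the buffer so the CPU can reach it. */
        static int driver_fault_reserve_notify(struct buffer *bo)
        {
            if (!bo->cpu_visible)
                bo->cpu_visible = true; /* e.g. migrate into a CPU-visible region */
            return 0;
        }

        /* Core fault handler: let the driver place the buffer first, then map
         * transient errors to a retry instead of killing the faulting process. */
        static enum fault_result bo_fault(struct buffer *bo)
        {
            int ret = driver_fault_reserve_notify(bo);

            if (ret == ERR_BUSY || ret == ERR_RESTARTSYS)
                return FAULT_NOPAGE;
            if (ret)
                return FAULT_SIGBUS;
            return FAULT_OK;
        }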
  34. 08 Apr 2010, 1 commit