1. 28 October 2010, 11 commits
    • drm/i915: add accounting for mappable objects in gtt v2 · fb7d516a
      Authored by Daniel Vetter
      More precisely: For those that _need_ to be mappable. Also add two
      BUG_ONs in fault and pin to check the consistency of the mappable
      flag.
      
      Changes in v2:
      - Add tracking of gtt mappable space (to notice mappable/unmappable
        balancing issues).
      - Improve the mappable working set tracking by tracking fault and pin
        separately.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
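
      The bookkeeping described here can be pictured with the rough sketch below. This is
      not the actual patch: the structure layout and the names (sketch_gtt_stats,
      mappable_used, fault_mappable, pin_mappable) are assumptions chosen only to illustrate
      counting fault- and pin-induced mappable usage separately.

        /* Illustrative sketch, not the real i915 structures. */
        #include <linux/types.h>
        #include <linux/bug.h>

        struct sketch_gtt_stats {
            size_t gtt_total;           /* total GTT space managed by the driver */
            size_t gtt_mappable_total;  /* CPU-mappable portion of the GTT */
            size_t mappable_used;       /* space used by objects that need to be mappable */
        };

        struct sketch_gem_object {
            size_t size;
            bool is_mappable;           /* bound inside the mappable range */
            bool fault_mappable;        /* needs to be mappable because of a CPU fault */
            bool pin_mappable;          /* needs to be mappable because of a pin request */
        };

        static void sketch_mark_mappable(struct sketch_gtt_stats *stats,
                                         struct sketch_gem_object *obj,
                                         bool because_of_fault)
        {
            /* Count each object once, however often it is faulted or pinned. */
            if (!obj->fault_mappable && !obj->pin_mappable)
                stats->mappable_used += obj->size;

            if (because_of_fault)
                obj->fault_mappable = true;
            else
                obj->pin_mappable = true;

            /* Consistency check in the spirit of the BUG_ONs mentioned above. */
            BUG_ON(!obj->is_mappable);
        }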
    • drm/i915: add mappable to gem_object_bind tracepoint · ec57d260
      Authored by Daniel Vetter
      This way we can make some more educated guesses as to why exactly
      we can't use 2G apertures to their full potential ;)
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
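
      Adding such a flag to a kernel tracepoint looks roughly like the sketch below. The
      event name and field layout are illustrative assumptions, not the driver's actual
      gem_object_bind trace event, and the usual TRACE_SYSTEM header boilerplate around a
      TRACE_EVENT is omitted.

        #include <linux/tracepoint.h>

        /* Sketch of a bind tracepoint that also records whether the binding
         * had to be mappable. */
        TRACE_EVENT(sketch_gem_object_bind,
            TP_PROTO(u32 handle, u32 gtt_offset, bool mappable),
            TP_ARGS(handle, gtt_offset, mappable),

            TP_STRUCT__entry(
                __field(u32, handle)
                __field(u32, gtt_offset)
                __field(bool, mappable)
            ),

            TP_fast_assign(
                __entry->handle = handle;
                __entry->gtt_offset = gtt_offset;
                __entry->mappable = mappable;
            ),

            TP_printk("handle=%u, offset=0x%08x, mappable=%d",
                      __entry->handle, __entry->gtt_offset, __entry->mappable)
        );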
    • drm/i915: use the complete gtt · 53984635
      Authored by Daniel Vetter
      At least the part that's currently enabled by the BIOS.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
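
      Put differently, the allocator now spans the whole BIOS-enabled GTT while the mappable
      limit stays at the aperture size. A minimal sketch under assumed names (sketch_gtt,
      mappable_end), not the driver's actual setup code:

        #include <drm/drm_mm.h>

        /* Sketch only: manage the complete GTT, remembering where the
         * CPU-visible aperture ends. */
        struct sketch_gtt {
            struct drm_mm mm;            /* allocator for GTT address space */
            unsigned long total;         /* full GTT size enabled by the BIOS */
            unsigned long mappable_end;  /* end of the CPU-mappable aperture */
        };

        static void sketch_setup_gtt(struct sketch_gtt *gtt,
                                     unsigned long gtt_size,
                                     unsigned long aperture_size)
        {
            gtt->total = gtt_size;
            gtt->mappable_end = aperture_size;
            /* The allocator covers the complete GTT, not just the aperture. */
            drm_mm_init(&gtt->mm, 0, gtt_size);
        }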
    • drm/i915: unbind unmappable objects on fault/pin · 16e809ac
      Authored by Daniel Vetter
      In i915_gem_object_pin, obviously unbind only if mappable is true.
      
      This is the last part to enable gtt_mappable_end != gtt_size, which
      the next patch will do.
      
      v2: Fences on g33/pineview only work in the mappable part of the
      gtt.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
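
      The fault/pin behaviour described above amounts to roughly the following; every
      sketch_* helper here is a stand-in for illustration, not a real i915 function.

        /* Sketch: if a bound object sits above the mappable limit when the CPU
         * needs it (fault) or a mappable pin is requested, unbind it first so
         * it can be rebound inside the aperture. */
        static int sketch_make_mappable(struct sketch_gem_object *obj)
        {
            int ret;

            if (sketch_is_bound(obj) && !sketch_is_in_mappable_range(obj)) {
                ret = sketch_unbind(obj);
                if (ret)
                    return ret;
            }

            if (!sketch_is_bound(obj))
                return sketch_bind(obj, true /* mappable */);

            return 0;
        }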
    • drm/i915: range-restricted bind_to_gtt · 920afa77
      Authored by Daniel Vetter
      Like before, add a mappable parameter (also to gem_object_pin) and
      set it depending upon the context. Only bos that are brought into
      the gtt by an execbuffer call may be placed in the unmappable part
      of the gtt; everything else (especially pinned objects) needs to go
      into the mappable part of the gtt.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
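
      The placement rule could be expressed as below. The range restriction is shown through
      a hypothetical sketch_gtt_alloc_range() helper rather than the actual drm_mm calls, and
      the structure and field names are assumptions.

        /* Sketch: only execbuffer-originated binds may use the unmappable part
         * of the GTT; all other callers are confined to the aperture. */
        static int sketch_bind_to_gtt(struct sketch_device *dev_priv,
                                      struct sketch_gem_object *obj,
                                      bool mappable)
        {
            unsigned long end = mappable ? dev_priv->gtt_mappable_end
                                         : dev_priv->gtt_total;

            /* Find and reserve GTT space somewhere inside [0, end). */
            return sketch_gtt_alloc_range(dev_priv, obj, 0, end);
        }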
    • drm/i915: range-restricted eviction support · a6e0aa42
      Authored by Daniel Vetter
      Add a mappable parameter to i915_gem_evict_something to distinguish
      the two cases (non-restricted vs. mappable gtt allocations). No
      functional changes because the mappable limit is set to the end of
      the gtt currently.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
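
      The two cases differ only in the range the eviction scan may consider, roughly as
      sketched below with stand-in names; with the mappable limit set to the end of the gtt,
      both branches coincide, which is the "no functional changes" note above.

        /* Sketch: restrict the eviction scan when the caller needs a hole in
         * the mappable range. */
        static int sketch_evict_something(struct sketch_device *dev_priv,
                                          unsigned long size, bool mappable)
        {
            unsigned long scan_end = mappable ? dev_priv->gtt_mappable_end
                                              : dev_priv->gtt_total;

            /* Only objects bound inside [0, scan_end) are eviction candidates. */
            return sketch_scan_and_evict(dev_priv, size, 0, scan_end);
        }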
    • 3cce469c
    • b2223497
    • drm/i915: Move object to GPU domains after dispatching execbuffer · 7e318e18
      Authored by Chris Wilson
      In the event that we fail to dispatch the execbuffer, for example if
      there is insufficient space on the ring, we were leaving the objects in
      an inconsistent state. Notably they were marked as being in the GPU
      write domain, but were not added to the ring or any list. This would
      lead to an inevitable oops:
      
      [ 1010.522940] [drm:i915_gem_do_execbuffer] *ERROR* dispatch failed -16
      [ 1010.523055] BUG: unable to handle kernel NULL pointer dereference at
      0000000000000088
      [ 1010.523097] IP: [<ffffffff8122d006>] i915_gem_flush_ring+0x26/0x140
      [ 1010.523120] PGD 14cf2f067 PUD 14ce04067 PMD 0
      [ 1010.523140] Oops: 0000 [#1] SMP
      [ 1010.523154] last sysfs file: /sys/devices/virtual/vc/vcsa2/uevent
      [ 1010.523173] CPU 0
      [ 1010.523183] Pid: 716, comm: X Not tainted 2.6.36+ #34 LosLunas
      CRB/SandyBridge Platform
      [ 1010.523206] RIP: 0010:[<ffffffff8122d006>]  [<ffffffff8122d006>]
      i915_gem_flush_ring+0x26/0x140
      [ 1010.523233] RSP: 0018:ffff88014bf97cd8  EFLAGS: 00010296
      [ 1010.523249] RAX: ffff88014e2d1808 RBX: 0000000000000000 RCX: 0000000000000000
      [ 1010.523270] RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000000
      [ 1010.523290] RBP: ffff88014e2d1000 R08: 0000000000000002 R09: 00000000400c645f
      [ 1010.523311] R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000002
      [ 1010.523331] R13: ffff88014e29a000 R14: 00000000000000c8 R15: ffffffff8162eb28
      [ 1010.523352] FS:  00007fc62379d700(0000) GS:ffff88001fc00000(0000) knlGS:0000000000000000
      [ 1010.523375] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 1010.523392] CR2: 0000000000000088 CR3: 000000014bf87000 CR4: 00000000000406f0
      [ 1010.523412] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [ 1010.523433] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      [ 1010.523454] Process X (pid: 716, threadinfo ffff88014bf96000, task ffff88014cc1ee40)
      [ 1010.523475] Stack:
      [ 1010.523483]  ffff88014d5199c0 0000000000000200 0000000000000000 ffff88014bcc6400
      [ 1010.523509] <0> 0000000000000000 0000000000000001 ffff88014e29a000 ffff88014bcc6400
      [ 1010.523537] <0> ffffffff8162eb28 ffffffff8122faa8 ffff88014e29a000 ffff88014bcc6400
      [ 1010.523568] Call Trace:
      [ 1010.523578]  [<ffffffff8122faa8>] ?  i915_gem_object_flush_gpu_write_domain+0x48/0x80
      [ 1010.523601]  [<ffffffff8122fb8e>] ?  i915_gem_object_set_to_gtt_domain+0x2e/0xb0
      [ 1010.523623]  [<ffffffff8123113b>] ?  i915_gem_set_domain_ioctl+0xdb/0x1f0
      [ 1010.523644]  [<ffffffff8120a3f1>] ? drm_ioctl+0x3d1/0x460
      [ 1010.523660]  [<ffffffff81231060>] ?  i915_gem_set_domain_ioctl+0x0/0x1f0
      [ 1010.523682]  [<ffffffff81092618>] ?  vma_prio_tree_insert+0x28/0x120
      [ 1010.523701]  [<ffffffff8109f379>] ? vma_link+0x99/0xf0
      [ 1010.523717]  [<ffffffff810a111d>] ? mmap_region+0x1ed/0x4f0
      [ 1010.523734]  [<ffffffff810c306f>] ? do_vfs_ioctl+0x9f/0x580
      [ 1010.523750]  [<ffffffff810c3599>] ? sys_ioctl+0x49/0x80
      [ 1010.523767]  [<ffffffff810022eb>] ?  system_call_fastpath+0x16/0x1b
      [ 1010.523785] Code: 00 00 00 00 00 41 57 89 ce 41 56 41 55 41 54 45 89 c4 55 48 89 fd 53 48 89 d3 44 89 c2 48 89 df 4c 8d b3 c8 00 00 00 48 83 ec 18 <ff> 93 88 00 00 00 48 8b 83 c8 00 00 00 4c 8b bd 30 03 00 00 48
      [ 1010.523946] RIP  [<ffffffff8122d006>] i915_gem_flush_ring+0x26/0x140
      [ 1010.523966]  RSP <ffff88014bf97cd8>
      [ 1010.523977] CR2: 0000000000000088
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
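
      The fix boils down to reordering the work: dispatch first, and only touch object state
      once the dispatch has succeeded. A sketch with stand-in helpers, not the actual
      i915_gem_do_execbuffer code:

        static int sketch_do_execbuffer(struct sketch_ring *ring,
                                        struct sketch_gem_object **objs, int count)
        {
            int i, ret;

            /* If this fails (e.g. -EBUSY because the ring is full), the objects
             * have not been marked busy and remain in a consistent state. */
            ret = sketch_dispatch_execbuffer(ring);
            if (ret)
                return ret;

            /* Only now move the objects into their GPU domains and onto the
             * active/flushing lists. */
            for (i = 0; i < count; i++)
                sketch_move_to_gpu_domains(ring, objs[i]);

            return 0;
        }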
    • drm/i915: Propagate errors from writing to ringbuffer · e1f99ce6
      Authored by Chris Wilson
      Preparing the ringbuffer for adding new commands can fail (a timeout
      whilst waiting for the GPU to catch up and free some space). So check
      for any potential error before overwriting HEAD with new commands, and
      propagate that error back to the user where possible.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
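
      The pattern being introduced is sketched below: reserve ring space first, and bail out
      before touching HEAD if the reservation fails. The sketch_ring_* helpers and SKETCH_*
      command values are stand-ins, not the intel_ringbuffer API.

        /* Sketch: ring_begin may time out waiting for the GPU to free space,
         * so its error must reach the caller before any commands are written. */
        static int sketch_emit_flush(struct sketch_ring *ring)
        {
            int ret;

            ret = sketch_ring_begin(ring, 2);   /* reserve two dwords */
            if (ret)
                return ret;                     /* propagate e.g. -EBUSY */

            sketch_ring_emit(ring, SKETCH_MI_FLUSH);
            sketch_ring_emit(ring, SKETCH_MI_NOOP);
            sketch_ring_advance(ring);

            return 0;
        }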
  2. 27 October 2010, 2 commits
    • drm/i915/ringbuffer: Drop the redundant dev from the vfunc interface · 78501eac
      Authored by Chris Wilson
      The ringbuffer keeps a pointer to the parent device, so we can use that
      instead of passing around the pointer on the stack.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
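
      The interface change looks roughly like the sketch below (illustrative names): the ring
      structure already carries its parent device, so the vfuncs no longer need a separate
      dev argument on the stack.

        /* Before: ring->flush(dev, ring, invalidate, flush);
         * After:  ring->flush(ring, invalidate, flush);        */
        struct drm_device;

        struct sketch_ring {
            struct drm_device *dev;   /* parent device kept by the ring */
            void (*flush)(struct sketch_ring *ring,
                          u32 invalidate_domains, u32 flush_domains);
        };

        static void sketch_flush(struct sketch_ring *ring,
                                 u32 invalidate_domains, u32 flush_domains)
        {
            struct drm_device *dev = ring->dev;  /* recovered from the ring itself */

            /* ... program the flush using dev ... */
            (void)dev;
            (void)invalidate_domains;
            (void)flush_domains;
        }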
    • mm: stack based kmap_atomic() · 3e4d3af5
      Authored by Peter Zijlstra
      Keep the current interface but ignore the KM_type and use a stack based
      approach.
      
      The advantage is that we get rid of crappy code like:
      
      	#define __KM_PTE			\
      		(in_nmi() ? KM_NMI_PTE : 	\
      		 in_irq() ? KM_IRQ_PTE :	\
      		 KM_PTE0)
      
      and in general can stop worrying about what context we're in and what kmap
      slots might be appropriate for that.
      
      The downside is that FRV kmap_atomic() gets more expensive.
      
      For now we use a CPP trick suggested by Andrew:
      
        #define kmap_atomic(page, args...) __kmap_atomic(page)
      
      to avoid having to touch all kmap_atomic() users in a single patch.
      
      [ not compiled on:
        - mn10300: the arch doesn't actually build with highmem to begin with ]
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix up drivers/gpu/drm/i915/intel_overlay.c]
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Dave Airlie <airlied@linux.ie>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
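
      Conceptually, the stack-based scheme replaces the fixed KM_type slots with a small
      per-CPU stack of slots; nested contexts simply push another slot and must pop in
      reverse order. The sketch below is only a userspace-style illustration of that idea,
      not the arch highmem code, and all names are made up.

        #include <assert.h>

        #define SKETCH_KM_SLOTS 16

        /* Per-CPU in the real code; a plain counter is enough to show the idea. */
        static int sketch_kmap_idx;

        static int sketch_kmap_atomic_idx_push(void)
        {
            assert(sketch_kmap_idx < SKETCH_KM_SLOTS);
            return sketch_kmap_idx++;       /* slot to map the page into */
        }

        static void sketch_kmap_atomic_idx_pop(void)
        {
            assert(sketch_kmap_idx > 0);
            sketch_kmap_idx--;              /* unmaps must nest strictly */
        }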
  3. 25 October 2010, 1 commit
  4. 23 October 2010, 1 commit
  5. 22 October 2010, 2 commits
  6. 21 October 2010, 1 commit
  7. 20 October 2010, 3 commits
  8. 19 October 2010, 7 commits
  9. 08 October 2010, 1 commit
    • drm/i915: Wait for pending flips on the GPU · e59f2bac
      Authored by Chris Wilson
      Currently, if a batch buffer refers to an object with a pending flip,
      then we sleep until that pending flip is completed (unpinned and
      signalled). This is so that a flip can be queued and the user can
      continue rendering to the backbuffer oblivious to whether the buffer is
      still pinned as the scanout. (The kernel arbitrates at the last moment,
      stalling the batch until the buffer is unpinned and replaced as the
      front buffer.)
      
      As we only have a queue depth of 1, we can simply wait for the current
      pending flip to complete and continue rendering. We can achieve this
      with a single WAIT_FOR_EVENT command inserted into the ring buffer prior
      to executing the batch, *without* stalling the client.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
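
      The idea reads roughly as below: rather than sleeping in the ioctl, a wait-for-event
      command is emitted into the ring ahead of the batch, so the GPU stalls instead of the
      client. The sketch_* helpers and SKETCH_* command values are stand-ins that mirror the
      WAIT_FOR_EVENT usage described above, not the real i915 defines.

        static int sketch_wait_for_pending_flip(struct sketch_ring *ring, int pipe)
        {
            int ret;

            ret = sketch_ring_begin(ring, 2);
            if (ret)
                return ret;

            /* The GPU parks here until the display flip completes; the client
             * that submitted the batch returns immediately and keeps rendering. */
            sketch_ring_emit(ring, SKETCH_MI_WAIT_FOR_EVENT |
                                   (pipe ? SKETCH_MI_WAIT_FOR_PLANE_B_FLIP
                                         : SKETCH_MI_WAIT_FOR_PLANE_A_FLIP));
            sketch_ring_emit(ring, SKETCH_MI_NOOP);
            sketch_ring_advance(ring);

            return 0;
        }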
  10. 04 October 2010, 1 commit
  11. 03 October 2010, 2 commits
  12. 02 October 2010, 2 commits
  13. 01 October 2010, 4 commits
  14. 30 September 2010, 2 commits