1. 05 Sep 2013 (1 commit)
  2. 04 Sep 2013 (3 commits)
    • i915: Update VGA arbiter support for newer devices · 81b5c7bc
      Committed by Alex Williamson
      This is intended to add VGA arbiter support for Intel HD graphics on
      Core processors.  The old GMCH registers no longer exist, so even
      though it appears that i915 participates in VGA arbitration, it doesn't
      work.  On Intel HD graphics we already attempt to disable VGA regions
      of the device.  This makes registering as a VGA client unnecessary since
      we don't intend to operate differently depending on how many VGA devices
      are present.  We can disable VGA memory regions by clearing the memory
      enable bit in the VGA MSR.  That leaves only VGA IO, so we update the
      VGA arbiter to indicate that we do not participate in VGA memory
      arbitration.  We also add a hook on unload to re-enable memory and
      reinstate VGA memory arbitration.
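      
      A minimal sketch of that mechanism, assuming the standard VGA
      Miscellaneous Output register ports; the example_* names are
      hypothetical, while vga_set_legacy_decoding() and the
      VGA_RSRC_LEGACY_* flags are the real vgaarb interfaces:
      
          #include <linux/pci.h>
          #include <linux/vgaarb.h>
          #include <asm/io.h>
          
          #define EXAMPLE_VGA_MSR_READ   0x3cc  /* Misc Output, read port */
          #define EXAMPLE_VGA_MSR_WRITE  0x3c2  /* Misc Output, write port */
          #define EXAMPLE_VGA_MSR_MEM_EN 0x02   /* bit 1: VGA memory decode */
          
          static void example_disable_vga_mem(struct pci_dev *pdev)
          {
                  u8 msr = inb(EXAMPLE_VGA_MSR_READ);
          
                  /* stop decoding the legacy 0xa0000-0xbffff VGA range */
                  outb(msr & ~EXAMPLE_VGA_MSR_MEM_EN, EXAMPLE_VGA_MSR_WRITE);
          
                  /* tell the arbiter we now only decode legacy VGA IO */
                  vga_set_legacy_decoding(pdev, VGA_RSRC_LEGACY_IO);
          }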
      
      v3: Use explicit LEGACY_IO | LEGACY_MEM when restoring rather than
          LEGACY_MASK, per Ville's comments.
      
      v2: I915_READ/WRITE accessors don't work in i915_disable_vga, use inb/outb
          directly.  Also, on the driver unbind VGA enable path, acquire legacy
          IO to re-enable VGA memory.  Correct comment.
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      [danvet: Add patch changelog. Also squash in a fixup to have a dummy
      static inline for vga_set_legacy_decoding for CONFIG_VGA_ARB=n as
      reported by the 0-day kernel build bot.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      
    • x86: add early quirk for reserving Intel graphics stolen memory v5 · 814c5f1f
      Committed by Jesse Barnes
      Systems with Intel graphics controllers set aside memory exclusively for
      gfx driver use.  This memory is not always marked in the E820 as
      reserved or as RAM, and so is subject to overlap from E820 manipulation
      later in the boot process.  On some systems, MMIO space is allocated on
      top, despite the efforts of the "RAM buffer" approach, which simply
      rounds memory boundaries up to 64M to try to catch space that may decode
      as RAM and so is not suitable for MMIO.
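      
      A hedged sketch of the shape of such an early quirk; the config
      offset and size below are placeholders (the real stolen base and
      size registers vary by generation), while read_pci_config(),
      e820_add_region() and sanitize_e820_map() are existing arch/x86
      helpers:
      
          static void __init example_reserve_gfx_stolen(int num, int slot, int func)
          {
                  u32 base, size;
          
                  /* hypothetical config offset for the stolen memory base */
                  base = read_pci_config(num, slot, func, 0x5c) & 0xfff00000;
                  /* in reality decoded from the GGC graphics control word */
                  size = 32 * 1024 * 1024;
          
                  /* reserve before later boot code reshuffles the map */
                  e820_add_region(base, size, E820_RESERVED);
                  sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
          }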
      
      v2: use read_pci_config for 32 bit reads instead of adding a new one
          (Chris)
          add gen6 stolen size function (Chris)
      v3: use a function pointer (Chris)
          drop gen2 bits (Daniel)
      v4: call e820_sanitize_map after adding the region
      v5: fixup comments (Peter)
          simplify loop (Chris)
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Acked-by: H. Peter Anvin <hpa@zytor.com>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66726
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66844
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: split PCI IDs out into i915_drm.h v4 · a0a18075
      Committed by Jesse Barnes
      For use by userspace (at some point in the future) and other kernel code.
      
      v2: move PCI IDs to uabi (Chris)
          move PCI IDs to drm/ (Dave)
      v3: fixup Quanta detection - needs to come first (Daniel)
      v4: fix up PCI match structure init for easier use by userspace (Chris)
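      
      A hedged sketch of the match-structure initializer style such a
      shared header can expose (0x8086 is Intel's PCI vendor ID; the
      device ID and info pointer below are hypothetical):
      
          #define EXAMPLE_VGA_DEVICE(id, info) {           \
                  .class = PCI_BASE_CLASS_DISPLAY << 16,   \
                  .class_mask = 0xff0000,                  \
                  .vendor = 0x8086,                        \
                  .device = (id),                          \
                  .subvendor = PCI_ANY_ID,                 \
                  .subdevice = PCI_ANY_ID,                 \
                  .driver_data = (unsigned long)(info) }
          
          static const struct pci_device_id example_ids[] = {
                  EXAMPLE_VGA_DEVICE(0x0042, &example_info), /* hypothetical */
                  { }
          };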
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  3. 02 Sep 2013 (1 commit)
  4. 31 Aug 2013 (3 commits)
  5. 30 Aug 2013 (14 commits)
  6. 29 Aug 2013 (2 commits)
    • drm: allow open of dynamic off devices. · 13bb9cc8
      Committed by Dave Airlie
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • gpu/vga_switcheroo: add driver control power feature. (v3) · 0d69704a
      Committed by Dave Airlie
      For Optimus and PowerXpress muxless we really want the GPU
      driver deciding when to power up/down the GPU, not userspace.
      
      This adds the ability for a driver to dynamically power up/down
      the GPU and remove it from switcheroo control; the switcheroo
      also reports the dynamic state to userspace.
      
      It also adds two power domains: one for machines where the power
      switch is controlled outside the GPU D3 state, so that the powerdown
      ordering is done correctly, and a second for the hdmi audio
      device, to make sure it can resume for PCI config space accesses.
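      
      A hedged usage sketch, assuming the helper names this patch
      introduces (vga_switcheroo_set_dynamic_switch() and the
      power-domain setup); everything prefixed example_ is hypothetical
      driver glue:
      
          static struct dev_pm_domain example_pm_domain; /* hypothetical */
          
          static int example_runtime_suspend(struct device *dev)
          {
                  struct pci_dev *pdev = to_pci_dev(dev);
          
                  /* driver-specific state save would happen here */
          
                  /* report that the GPU is now dynamically powered off */
                  vga_switcheroo_set_dynamic_switch(pdev, VGA_SWITCHEROO_OFF);
                  return 0;
          }
          
          static void example_register_domain(struct device *dev)
          {
                  /* route PM through the switcheroo domain so the power
                   * switch is sequenced correctly around GPU D3 */
                  vga_switcheroo_init_domain_pm_ops(dev, &example_pm_domain);
          }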
      
      v1.1: fix build with switcheroo off
      
      v2: add power domain support for radeon and v1 nvidia dsms
      v2.1: fix typo in off case
      
      v3: add audio power domain for hdmi audio + misc audio fixes
      
      v4: use PCI_SLOT macro, drop power reference on hdmi audio resume
      failure also.
      Signed-off-by: Dave Airlie <airlied@redhat.com>
  7. 27 Aug 2013 (1 commit)
    • drm/vma: add access management helpers · 88d7ebe5
      Committed by David Herrmann
      The VMA offset manager uses a device-global address-space. Hence, any
      user can currently map any offset-node they want. They only need to guess
      the right offset. If we wanted per open-file offset spaces, we'd either
      need VM_NONLINEAR mappings or multiple "struct address_space" trees. As
      neither really scales, we implement access management in the VMA
      manager itself.
      
      We use an rb-tree to store open-files for each VMA node. On each mmap
      call, GEM, TTM or the drivers must check whether the current user is
      allowed to map this file.
      
      We add a separate lock for each node as there is no generic lock available
      for the caller to protect the node easily.
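      
      A hedged sketch of how a driver wires these helpers up; the
      drm_vma_node_* calls are the ones this patch adds, while the
      example_* lookup and the surrounding glue are hypothetical:
      
          /* on handle creation: grant the open-file access to the node */
          ret = drm_vma_node_allow(&obj->vma_node, file_priv->filp);
          
          /* on handle teardown: revoke that grant again */
          drm_vma_node_revoke(&obj->vma_node, file_priv->filp);
          
          /* in the driver's mmap path: refuse unknown open-files */
          node = example_lookup_node(dev, vma->vm_pgoff);  /* hypothetical */
          if (!drm_vma_node_is_allowed(node, filp))
                  return -EACCES;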
      
      As we currently don't know whether an object may be used for mmap(), we
      have to do access management for all objects. If it turns out to slow down
      handle creation/deletion significantly, we can optimize it in several
      ways:
       - Most times only a single filp is added per bo so we could use a static
         "struct file *main_filp" which is checked/added/removed first before we
         fall back to the rbtree+drm_vma_offset_file.
         This could be even done lockless with rcu.
       - Let user-space pass a hint whether mmap() should be supported on the
         bo and avoid access-management if not.
       - .. there are probably more ideas once we have benchmarks ..
      
      v2: add drm_vma_node_verify_access() helper
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
  8. 25 Aug 2013 (2 commits)
    • drm/msm: add a3xx gpu support · 7198e6b0
      Committed by Rob Clark
      Add initial support for a3xx 3d core.
      
      So far, with hardware that I've seen to date, we can have:
       + zero, one, or two z180 2d cores
       + a3xx or a2xx 3d core, which share a common CP (the firmware
         for the CP seems to implement some different PM4 packet types
         but the basics of cmdstream submission are the same)
      
      Which means that the eventual complete "class" hierarchy, once
      support for all past and present hw is in place, becomes:
       + msm_gpu
         + adreno_gpu
           + a3xx_gpu
           + a2xx_gpu
         + z180_gpu
      
      This commit splits out the parts that will eventually be common
      between a2xx/a3xx into adreno_gpu, and the parts that are even
      common to z180 into msm_gpu.
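      
      Sketched as the usual kernel "class" pattern via struct embedding
      (fields trimmed to the skeleton, so this is illustrative only):
      
          struct msm_gpu {
                  const struct msm_gpu_funcs *funcs; /* common submission */
                  /* ... */
          };
          
          struct adreno_gpu {
                  struct msm_gpu base;    /* shared CP/PM4 handling */
                  /* ... */
          };
          
          struct a3xx_gpu {
                  struct adreno_gpu base; /* a3xx-specific init/regs */
                  /* ... */
          };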
      
      Note that there is no cmdstream validation required.  All memory access
      from the GPU is via IOMMU/MMU.  So as long as you don't map silly things
      to the GPU, there isn't much damage that the GPU can do.
      Signed-off-by: Rob Clark <robdclark@gmail.com>
    • cope with potentially long ->d_dname() output for shmem/hugetlb · 118b2302
      Committed by Al Viro
      dynamic_dname() is both too much and too little for those - the
      output may be well in excess of the 64 bytes that dynamic_dname()
      assumes to be enough (thanks to ashmem feeding really long names
      to shmem_file_setup()), and vsnprintf() is overkill for those
      guys.
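      
      A hedged sketch of the direction such a fix can take (not the
      committed code): build the name back-to-front straight into the
      caller's buffer instead of routing an arbitrarily long name
      through vsnprintf() and a fixed 64-byte stack buffer:
      
          static char *example_dname(struct dentry *dentry, char *buffer, int buflen)
          {
                  char *end = buffer + buflen;
                  int len = dentry->d_name.len;
          
                  /* room for '/', the name and the trailing NUL */
                  if (len + 2 > buflen)
                          return ERR_PTR(-ENAMETOOLONG);
                  *--end = '\0';
                  end -= len;
                  memcpy(end, dentry->d_name.name, len);
                  *--end = '/';
                  return end;
          }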
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  9. 23 Aug 2013 (4 commits)
  10. 22 Aug 2013 (3 commits)
    • [SCSI] zfcp: fix lock imbalance by reworking request queue locking · d79ff142
      Committed by Martin Peschke
      This patch adds wait_event_interruptible_lock_irq_timeout(), which is a
      straight-forward descendant of wait_event_interruptible_timeout() and
      wait_event_interruptible_lock_irq().
      
      The zfcp driver used to call wait_event_interruptible_timeout()
      in combination with some intricate and error-prone locking. Using
      wait_event_interruptible_lock_irq_timeout() as a replacement
      nicely cleans up that locking.
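      
      A hedged usage sketch of the new helper; queue_has_free_sbals()
      and the q->... fields are hypothetical stand-ins for the zfcp
      specifics:
      
          long ret;
          
          spin_lock_irq(&q->lock);        /* lock must be held on entry */
          ret = wait_event_interruptible_lock_irq_timeout(q->wq,
                          queue_has_free_sbals(q), q->lock, 5 * HZ);
          /* the lock was dropped while sleeping but is held again here
           * on every exit path: ret > 0 condition true, ret == 0 timed
           * out, ret == -ERESTARTSYS interrupted by a signal */
          spin_unlock_irq(&q->lock);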
      
      This rework removes a situation that resulted in a locking imbalance
      in zfcp_qdio_sbal_get():
      
      BUG: workqueue leaked lock or atomic: events/1/0xffffff00/10
          last function: zfcp_fc_wka_port_offline+0x0/0xa0 [zfcp]
      
      It was introduced by commit c2af7545
      "[SCSI] zfcp: Do not wait for SBALs on stopped queue", which had a new
      code path related to ZFCP_STATUS_ADAPTER_QDIOUP that took an early exit
      without a required lock being held. The problem occurred when a
      special, non-SCSI I/O request was being submitted in process context,
      when the adapter's queues had been torn down. In this case the bug
      surfaced when the Fibre Channel port connection for a well-known address
      was closed during a concurrent adapter shut-down procedure, which is a
      rare combination of events.
      
      This patch also fixes these warnings from the sparse tool (make C=1):
      
      drivers/s390/scsi/zfcp_qdio.c:224:12: warning: context imbalance in
       'zfcp_qdio_sbal_check' - wrong count at exit
      drivers/s390/scsi/zfcp_qdio.c:244:5: warning: context imbalance in
       'zfcp_qdio_sbal_get' - unexpected unlock
      
      Last but not least, we get rid of that crappy lock-unlock-lock
      sequence at the beginning of the critical section.
      
      It is okay to call zfcp_erp_adapter_reopen() with req_q_lock held.
      Reported-by: Mikulas Patocka <mpatocka@redhat.com>
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Peschke <mpeschke@linux.vnet.ibm.com>
      Cc: stable@vger.kernel.org #2.6.35+
      Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
      Signed-off-by: James Bottomley <JBottomley@Parallels.com>
    • drm/i915: Use Write-Through caching for the display plane on Iris · 651d794f
      Committed by Chris Wilson
      Haswell GT3e has the unique feature of supporting Write-Through caching
      of objects within the eLLC/LLC. The purpose of this is to enable the display
      plane to remain coherent whilst objects lie resident in the eLLC/LLC - so
      that we, in theory, get the best of both worlds, perfect display and fast
      access.
      
      However, we still need to be careful as the CPU does not see the WT when
      accessing the cache. In particular, this means that we need to flush the
      cache lines after writing to an object through the CPU, and on
      transitioning from a cached state to WT.
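      
      A hedged sketch of that rule; drm_clflush_virt_range() is an
      existing DRM helper, the surrounding names are hypothetical:
      
          static void example_finish_cpu_write(struct drm_i915_gem_object *obj,
                                               void *vaddr, unsigned long len)
          {
                  /* the CPU wrote through its cache without honouring WT,
                   * so flush the dirty lines before the display reads */
                  if (obj->cache_level == I915_CACHE_WT)
                          drm_clflush_virt_range(vaddr, len);
          }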
      
      v2: Actually do the clflush on transition to WT, nagging by Ville.
      v3: Flush the CPU cache after writes into WT objects.
      v4: Rebase onto LLC updates and report WT as "uncached" for
      get_cache_level_ioctl to remain symmetric with set_cache_level_ioctl.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: reserve I915_CACHING_DISPLAY and document cache modes · 35c7ab42
      Committed by Daniel Vetter
      Resolve the catch-22 of igt needing a stable number and patches first
      needing testcases by reserving the interface number up-front.
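      
      The reserved number slots in after the existing uapi values, along
      these lines (a sketch of the defines, values as in the uapi header
      of the period):
      
          #define I915_CACHING_NONE     0
          #define I915_CACHING_CACHED   1
          /* reserved up-front for the write-through display mode */
          #define I915_CACHING_DISPLAY  2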
      
      v2: Improve the spelling a bit.
      
      v3: More spelling fail spotted by Chris.
      Requested-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
  11. 21 Aug 2013 (6 commits)
    • drm/prime: Always add exported buffers to the handle cache · d0b2c533
      Committed by Daniel Vetter
      ... not only when the dma-buf is freshly created. In contrived
      examples someone else could have exported/imported the dma-buf already
      and handed us the gem object with a flink name. If such an object gets
      re-exported as a dma_buf we won't have it in the handle cache already,
      which breaks the guarantee that for dma-buf imports we always hand
      back an existing handle if there is one.
      
      This is exercised by igt/prime_self_import/with_one_bo_two_files
      
      Now if we extend the locked sections just a notch more we can also
      plug the racy buf/handle cache setup in handle_to_fd:
      
      If evil userspace races a concurrent gem close against a prime export
      operation we can end up tearing down the gem handle before the dma buf
      handle cache is set up. When handle_to_fd gets around to adding the
      handle to the cache there will be no one left to clean it up,
      effectively leaking the bo (and the dma-buf, since the handle cache
      holds a ref on the dma-buf):
      
      Thread A			Thread B
      
      handle_to_fd:
      
      lookup gem object from handle
      creates new dma_buf
      
      				gem_close on the same handle
      				obj->dma_buf is set, but file priv buf
      				handle cache has no entry
      
      				obj->handle_count drops to 0
      
      drm_prime_add_buf_handle sets up the handle cache
      
      -> We have a dma-buf reference in the handle cache, but since the
      handle_count of the gem object already dropped to 0 no one will clean
      it up. When closing the drm device fd we'll hit the WARN_ON in
      drm_prime_destroy_file_private.
      
      The important change is to extend the critical section of the
      filp->prime.lock to cover the gem handle lookup. This serializes with
      a concurrent gem handle close.
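      
      A hedged sketch of that extended critical section (error handling
      trimmed; prime.lock and drm_gem_object_lookup() match the DRM
      core of the period):
      
          static int example_handle_to_fd(struct drm_device *dev,
                                          struct drm_file *file_priv, u32 handle)
          {
                  struct drm_gem_object *obj;
          
                  mutex_lock(&file_priv->prime.lock);
                  /* the lookup now happens inside prime.lock, so a
                   * concurrent gem_close of this handle serializes here */
                  obj = drm_gem_object_lookup(dev, file_priv, handle);
                  if (!obj) {
                          mutex_unlock(&file_priv->prime.lock);
                          return -ENOENT;
                  }
                  /* ... export the dma_buf and add it to the handle cache
                   * while still holding the lock ... */
                  mutex_unlock(&file_priv->prime.lock);
                  return 0;
          }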
      
      This leak is exercised by igt/prime_self_import/export-vs-gem_close-race
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/prime: make drm_prime_lookup_buf_handle static · de9564d8
      Committed by Daniel Vetter
      ... and move it to the top of the function to avoid a forward
      declaration.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/prime: Simplify drm_gem_remove_prime_handles · 838cd445
      Committed by Daniel Vetter
      With the reworked semantics and locking of the obj->dma_buf pointer,
      this pointer is always set as long as there's still a gem handle
      around and a dma_buf associated with this gem object.
      
      The per file-priv lookup-cache for dma-buf importing is also
      unified between foreign and native objects.
      
      Hence we don't need to special-case the cleanup any more and can
      simply drop the clause which only runs for foreign objects, i.e.
      with obj->import_attach set.
      
      Note that with this change (actually with the previous one to always
      set up obj->dma_buf even for foreign objects) it is no longer required
      to set obj->import_attach when importing a foreign object. So update
      comments accordingly, too.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/prime: proper locking+refcounting for obj->dma_buf link · 319c933c
      Committed by Daniel Vetter
      The export dma-buf cache is semantically similar to a flink name, so
      it makes sense to treat it the same and remove the name
      (i.e. the dma_buf pointer) and its references when the last gem handle
      disappears.
      
      Again we need to be careful, but double so: Not just could someone
      race and export with a gem close ioctl (so we need to recheck
      obj->handle_count again when assigning the new name), but multiple
      exports can also race against each other. This is prevented by
      holding the dev->object_name_lock across the entire section which
      touches obj->dma_buf.
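      
      A hedged sketch of that ownership rule (get_dma_buf() is the real
      dma-buf refcount helper; the function shape is illustrative):
      
          static void example_install_dma_buf_name(struct drm_device *dev,
                                                   struct drm_gem_object *obj,
                                                   struct dma_buf *dmabuf)
          {
                  mutex_lock(&dev->object_name_lock);
                  /* only (re)install the "name" while gem handles still
                   * exist; otherwise a racing gem_close already reaped it */
                  if (obj->handle_count) {
                          obj->dma_buf = dmabuf;
                          get_dma_buf(obj->dma_buf);
                  }
                  mutex_unlock(&dev->object_name_lock);
          }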
      
      With the new scheme we also need to reinstate the obj->dma_buf link at
      import time (in case the only reference userspace has held in-between
      was through the dma-buf fd and not through any native gem handle). For
      simplicity we don't check whether it's a native object but
      unconditionally set up that link - with the new scheme of removing the
      obj->dma_buf reference when the last handle disappears we can do that.
      
      To make it clear that this is not just for exported buffers anymore,
      also rename it from export_dma_buf to dma_buf.
      
      To make sure that no one can race a fd_to_handle or handle_to_fd with
      gem_close we use the same tricks as in flink of extending the
      dev->object_name_locking critical section. With this change we finally
      have a guaranteed 1:1 relationship (at least for native objects)
      between gem objects and dma-bufs, even accounting for races (which can
      happen since the dma-buf itself holds a reference while in-flight).
      
      This prevents igt/prime_self_import/export-vs-gem_close-race from
      Oopsing the kernel. There is still a leak though since the per-file
      priv dma-buf/handle cache handling is racy. That will be fixed in a
      later patch.
      
      v2: Remove the bogus dma_buf_put from the export_and_register_object
      failure path if we've raced with the handle count dropping to 0.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/gem: completely close gem_open vs. gem_close races · 20228c44
      Committed by Daniel Vetter
      The gem flink name holds a reference onto the object itself, and this
      self-reference would prevent a flink'ed object from ever being
      freed. To break that loop we remove the flink name when the last
      userspace handle disappears, i.e. when obj->handle_count reaches 0.
      
      Now in gem_open we drop the dev->object_name_lock between the flink
      name lookup and actually adding the handle. This means a concurrent
      gem_close of the last handle could result in the flink name getting
      reaped right in between, i.e.
      
      Thread 1		Thread 2
      gem_open		gem_close
      
      flink -> obj lookup
      			handle_count drops to 0
      			remove flink name
      create_handle
      handle_count++
      
      If someone now flinks this object again, we'll get a new flink name.
      
      We can close this race by removing the lock dropping and making the
      entire lookup+handle_create sequence atomic. Unfortunately to still be
      able to share the handle_create logic this requires a
      handle_create_tail function which drops the lock - we can't hold the
      object_name_lock while calling into a driver's ->gem_open callback.
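      
      A hedged sketch of the resulting gem_open shape;
      drm_gem_handle_create_tail() is the new helper, which drops
      dev->object_name_lock itself before calling the driver's
      ->gem_open callback:
      
          static int example_gem_open(struct drm_device *dev,
                                      struct drm_file *file_priv,
                                      u32 name, u32 *handle)
          {
                  struct drm_gem_object *obj;
                  int ret;
          
                  mutex_lock(&dev->object_name_lock);
                  obj = idr_find(&dev->object_name_idr, (int)name);
                  if (!obj) {
                          mutex_unlock(&dev->object_name_lock);
                          return -ENOENT;
                  }
                  drm_gem_object_reference(obj);
          
                  /* lookup and handle creation are now one atomic section;
                   * the tail helper unlocks object_name_lock internally */
                  ret = drm_gem_handle_create_tail(file_priv, obj, handle);
                  drm_gem_object_unreference_unlocked(obj);
                  return ret;
          }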
      
      Note that for flink fixing this race isn't really important, since
      racing gem_open against gem_close is clearly a userspace bug. And no
      matter how the race ends, we won't leak any references.
      
      But with dma-buf where the userspace dma-buf fd itself is refcounted
      this is a valid sequence and hence we should fix it. Therefore this
      patch here is just a warm-up exercise (and for consistency between
      flink buffer sharing and dma-buf buffer sharing with self-imports).
      
      Also note that this extension of the critical section in gem_open
      protected by dev->object_name_lock only works because it's now a
      mutex: A spinlock would conflict with the potential memory allocation
      in idr_preload().
      
      This is exercised by igt/gem_flink_race/flink_name.
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
    • drm/gem: switch dev->object_name_lock to a mutex · cd4f013f
      Committed by Daniel Vetter
      I want to wrap the creation of a dma-buf from a gem object in it,
      so that the obj->export_dma_buf cache can be atomically filled in.
      
      Instead of creating a new mutex just for that variable I've figured
      I can reuse the existing dev->object_name_lock, especially since
      the new semantics will exactly mirror the flink obj->name already
      protected by that lock.
      
      v2: idr_preload/idr_preload_end is now an atomic section, so we need
      to move the mutex locking outside.
      
      [airlied: fix up conflict with patch to make debugfs use lock]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Dave Airlie <airlied@redhat.com>