1. 22 Oct, 2013 (2 commits)
  2. 21 Oct, 2013 (1 commit)
  3. 18 Oct, 2013 (2 commits)
  4. 17 Oct, 2013 (3 commits)
    • drm/i915: Use unsigned long for obj->user_pin_count · aa5f8021
      Committed by Daniel Vetter
      At least on linux sizeof(long) == sizeof(void*) and the thinking
      is that you can grab about as many references as there's memory.
      
      Doesn't really matter, just a bit of OCD since a fixed-size data
      type in a purely in-kernel data structure looks off.
      
      v2: Ville asked for an overflow check since no one prevents userspace
      from incrementing the pin count forever.
      
      v3: s/INT/LONG/, noticed by Chris.
      
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
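
      A minimal sketch of the overflow check v2 asks for, using hypothetical
      names (the real counter lives on the driver's GEM object):

      #include <linux/errno.h>
      #include <linux/kernel.h>

      /* Hypothetical stand-in for the driver's buffer object. */
      struct pinned_obj {
              unsigned long user_pin_count;
      };

      /* Userspace controls how often this runs, so refuse to pin once the
       * counter would wrap rather than silently overflowing. */
      static int obj_user_pin(struct pinned_obj *obj)
      {
              if (obj->user_pin_count == ULONG_MAX)
                      return -EBUSY;
              obj->user_pin_count++;
              return 0;
      }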
    • drm/i915: prevent tiling changes on framebuffer backing storage · 80075d49
      Committed by Daniel Vetter
      Assuming that all framebuffer related metadata is invariant simplifies
      our userspace input data checking. And current userspace always first
      updates the tiling of an object before creating a framebuffer with it.
      
      This allows us to upconvert a check in pin_and_fence to a WARN.
      
      In the future it should also be helpful to know which buffer objects
      are potential scanout targets for e.g. frontbuffer rendering tracking
      and similar things.
      
      Note that SNA shipped for one prerelease with code which will be
      broken through this patch. But users shouldn't notice since it's
      purely an optimization and will transparently fall back to allocating
      a new fb. i-g-t also had offending code (now fixed), but we don't
      really care about breaking the test-suite.
      
      Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Grumpily-reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
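
      A hedged sketch of the invariant this enforces, with illustrative
      names; roughly, a tiling change is rejected once any framebuffer
      wraps the object:

      #include <linux/errno.h>

      /* Illustrative object; the driver tracks something equivalent to a
       * count of framebuffers currently created on top of the buffer. */
      struct scanout_obj {
              unsigned int framebuffer_references;
              unsigned int tiling_mode;
      };

      /* Tiling is treated as framebuffer-invariant metadata: once a
       * framebuffer exists on the object, refuse to change it. */
      static int set_tiling_sketch(struct scanout_obj *obj, unsigned int tiling_mode)
      {
              if (obj->framebuffer_references)
                      return -EBUSY;
              obj->tiling_mode = tiling_mode;
              return 0;
      }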
    • drm/i915: Disable all GEM timers and work on unload · 45c5f202
      Committed by Chris Wilson
      We have two once very similar functions, i915_gpu_idle() and
      i915_gem_idle(). The former is used as the lower level operation to
      flush work on the GPU, whereas the latter is the high level interface to
      flush the GEM bookkeeping in addition to flushing the GPU. As such
      i915_gem_idle() also clears out the request and activity lists and
      cancels the delayed work. This is what we need for unloading the driver;
      unfortunately, we called i915_gpu_idle() instead.
      
      In the process, make sure that when cancelling the delayed work and
      timer, which is synchronous, that we do not hold any locks to prevent a
      deadlock if the work item is already waiting upon the mutex. This
      requires us to push the mutex down from the caller to i915_gem_idle().
      
      v2: s/i915_gem_idle/i915_gem_suspend/
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70334
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Tested-by: xunx.fang@intel.com
      [danvet: Only set ums.suspended for !kms as discussed earlier. Chris
      noticed that this slipped through.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
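
      A hedged sketch of the lock ordering described above, with made-up
      names; the one real point is that the synchronous cancel runs only
      after the mutex is dropped, since the work handler may itself be
      blocked on that mutex:

      #include <linux/mutex.h>
      #include <linux/workqueue.h>

      /* Illustrative private state: a lock taken by both the caller and
       * the delayed work handler, plus the delayed work itself. */
      struct gem_sketch {
              struct mutex lock;
              struct delayed_work retire_work;
      };

      /* Flush under the lock, then drop it *before* the synchronous cancel,
       * because the work handler may already be waiting on the same lock. */
      static int gem_suspend_sketch(struct gem_sketch *g)
      {
              mutex_lock(&g->lock);
              /* ... wait for the GPU and clear out bookkeeping here ... */
              mutex_unlock(&g->lock);

              /* No locks held: safe to wait for the handler to finish. */
              cancel_delayed_work_sync(&g->retire_work);
              return 0;
      }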
  5. 16 Oct, 2013 (10 commits)
  6. 12 Oct, 2013 (1 commit)
    • drm/i915: Kconfig option to disable the legacy fbdev support · 4520f53a
      Committed by Daniel Vetter
      Boots Just Fine (tm)!
      
      The only glitch seems to be that at least on Fedora the boot splash
      gets confused and doesn't display much at all.
      
      And since there's no ugly console flickering anymore in between, the
      flicker while switching between X servers (VT support is still enabled)
      is even more jarring.
      
      Also, I'm unsure whether we now need to somehow kick out vgacon, since
      nothing else gets in the way. But stuff seems to work, so I don't care.
      Everything also still works with VGA_CONSOLE=n.
      
      Also the #ifdef mess needs a bit of a cleanup, follow-up patches will
      do just that.
      
      To keep the Kconfig tidy, extract all the i915 options into their own
      file.
      
      v2:
      - Rebase on top of the preliminary hw support option and the
        intel_drv.h cleanup.
      - Shut up warnings in i915_debugfs.c
      
      v3: Use the right CONFIG variable, spotted by Chon Ming.
      
      Cc: Lee, Chon Ming <chon.ming.lee@intel.com>
      Cc: David Herrmann <dh.herrmann@gmail.com>
      Reviewed-by: Chon Ming Lee <chon.ming.lee@intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
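
      A hedged sketch of the usual compile-out stub pattern, assuming the
      new option is CONFIG_DRM_I915_FBDEV; the function name and stub body
      are illustrative, not the exact header contents:

      struct drm_device;      /* forward declaration for the sketch */

      #ifdef CONFIG_DRM_I915_FBDEV
      int intel_fbdev_init(struct drm_device *dev);
      #else
      /* Legacy fbdev compiled out: callers see a harmless no-op. */
      static inline int intel_fbdev_init(struct drm_device *dev)
      {
              return 0;
      }
      #endif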
  7. 11 Oct, 2013 (1 commit)
  8. 10 Oct, 2013 (2 commits)
  9. 09 Oct, 2013 (2 commits)
  10. 04 Oct, 2013 (4 commits)
    • drm/i915: Simplify PSR debugfs · a031d709
      Committed by Rodrigo Vivi
      for the igt test case.
      
      v2: remove trailing spaces and fix conflicts
      Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
      [danvet:
      - make it compile
      - s/IS_HASWELL/HAS_PSR/]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    • drm/i915: Tweak RPS thresholds to more aggressively downclock · dd75fdc8
      Committed by Chris Wilson
      After applying wait-boost we often find ourselves stuck at higher clocks
      than required. The current threshold value requires the GPU to be
      continuously and completely idle for 313ms before it is dropped by one
      bin. Conversely, we require the GPU to be busy for an average of 90% over
      an 84ms period before we upclock. So the current thresholds almost never
      downclock the GPU, and respond very slowly to sudden demands for more
      power. It is easy to observe that we currently lock into the wrong bin
      and both underperform in benchmarks and consume more power than optimal
      (just by repeating the task and measuring the different results).
      
      An alternative approach, as discussed in the bspec, is to use a
      continuous threshold for upclocking, and an average value for downclocking.
      This is good for quickly detecting and reacting to state changes within a
      frame, however it fails with the common throttling method of waiting
      upon the outstanding frame - at least it is difficult to choose a
      threshold that works well at 15,000fps and at 60fps. So continue to use
      average busy/idle loads to determine frequency change.
      
      v2: Use 3 power zones to keep frequencies low in steady-state mostly
      idle (e.g. scrolling, interactive 2D drawing), and frequencies high
      for demanding games. In between those end-states, we use a
      fast-reclocking algorithm to converge more quickly on the desired bin.
      
      v3: Bug fixes - make sure we reset adj after switching power zones.
      
      v4: Tune - drop the continuous busy thresholds as it prevents us from
      choosing the right frequency for glxgears style swap benchmarks. Instead
      the goal is to be able to find the right clocks irrespective of the
      wait-boost.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Cc: Stéphane Marchesin <stephane.marchesin@gmail.com>
      Cc: Owen Taylor <otaylor@redhat.com>
      Cc: "Meng, Mengmeng" <mengmeng.meng@intel.com>
      Cc: "Zhuang, Lena" <lena.zhuang@intel.com>
      Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
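
      A hedged sketch of average-load reclocking in the spirit described
      above; the 90%/10% thresholds, the step doubling, and the bin
      arithmetic are illustrative, not the values or registers the patch
      actually programs:

      /* Average-load reclocking with an accelerating step so the frequency
       * converges quickly on a steady-state bin. The real code also resets
       * the step when crossing power zones (see v2/v3). */
      struct rps_sketch {
              int cur_freq, min_freq, max_freq;       /* in frequency bins */
              int last_adj;                           /* previous signed step */
      };

      static void rps_eval(struct rps_sketch *rps, unsigned int avg_busy_pct)
      {
              int adj = 0;

              if (avg_busy_pct > 90)          /* sustained demand: step up */
                      adj = rps->last_adj > 0 ? rps->last_adj * 2 : 1;
              else if (avg_busy_pct < 10)     /* mostly idle: step down */
                      adj = rps->last_adj < 0 ? rps->last_adj * 2 : -1;

              rps->last_adj = adj;
              rps->cur_freq += adj;
              if (rps->cur_freq > rps->max_freq)
                      rps->cur_freq = rps->max_freq;
              if (rps->cur_freq < rps->min_freq)
                      rps->cur_freq = rps->min_freq;
      }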
    • drm/i915: Boost RPS frequency for CPU stalls · b29c19b6
      Committed by Chris Wilson
      If we encounter a situation where the CPU blocks waiting for results
      from the GPU, give the GPU a kick to boost its frequency.
      
      This should work to reduce user interface stalls and to quickly promote
      mesa to high frequencies - but the cost is that our requested frequency
      stalls high (as we do not idle for long enough before rc6 to start
      reducing frequencies, nor are we aggressive at down clocking an
      underused GPU). However, this should be mitigated by rc6 itself powering
      off the GPU when idle, and that energy use is dependent upon the workload
      of the GPU in addition to its frequency (e.g. the math or sampler
      functions only consume power when used). Still, this is likely to
      adversely affect light workloads.
      
      In particular, this nearly eliminates the highly noticeable wake-up lag
      in animations from idle. For example, expose or workspace transitions.
      (However, given the situation where we fail to downclock, our requested
      frequency is almost always the maximum, except for Baytrail where we
      manually downclock upon idling. This often masks the latency of
      upclocking after being idle, so animations are typically smooth - at the
      cost of increased power consumption.)
      
      Stéphane raised the concern that this will punish good applications and
      reward bad applications - but due to the nature of how mesa performs its
      client throttling, I believe all mesa applications will be roughly
      equally affected. To address this concern, and to prevent applications
      like compositors from permanently boosting the RPS state, we ratelimit the
      frequency of the wait-boosts each client receives.
      
      Unfortunately, this technique is ineffective with Ironlake - which also
      has dynamic render power states and suffers just as dramatically. For
      Ironlake, the thermal/power headroom is shared with the CPU through
      Intelligent Power Sharing and the intel-ips module. This leaves us with
      no GPU boost frequencies available when coming out of idle, and due to
      hardware limitations we cannot change the arbitration between the CPU and
      GPU quickly enough to be effective.
      
      v2: Limit each client to receiving a single boost for each active period.
          Tested by QA to only marginally increase power, and to demonstrably
          increase throughput in games. No latency measurements yet.
      
      v3: Cater for front-buffer rendering with manual throttling.
      
      v4: Tidy up.
      
      v5: Sadly the compositor needs frequent boosts as it may never idle, but
      due to its picking mechanism (using ReadPixels) may require frequent
      waits. Those waits, along with the waits for the vrefresh swap, conspire
      to keep the GPU at low frequencies despite the interactive latency. To
      overcome this we ditch the one-boost-per-active-period and just ratelimit
      the number of wait-boosts each client can receive.
      Reported-and-tested-by: Paul Neumann <paul104x@yahoo.de>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68716
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Kenneth Graunke <kenneth@whitecape.org>
      Cc: Stéphane Marchesin <stephane.marchesin@gmail.com>
      Cc: Owen Taylor <otaylor@redhat.com>
      Cc: "Meng, Mengmeng" <mengmeng.meng@intel.com>
      Cc: "Zhuang, Lena" <lena.zhuang@intel.com>
      Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      [danvet: No extern for function prototypes in headers.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
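
      A hedged sketch of the v5 ratelimited wait-boost; the interval and
      all names are illustrative:

      #include <linux/jiffies.h>
      #include <linux/types.h>

      /* Illustrative per-client bookkeeping; the driver keeps something
       * equivalent in its per-file state. */
      struct client_boost {
              unsigned long last_boost;       /* jiffies of the last boost */
      };

      /* Called when the CPU is about to block on the GPU: allow a frequency
       * kick, but no more often than min_interval jiffies per client. */
      static bool client_wait_boost(struct client_boost *c,
                                    unsigned long min_interval)
      {
              if (c->last_boost &&
                  time_before(jiffies, c->last_boost + min_interval))
                      return false;           /* ratelimited: boosted recently */

              c->last_boost = jiffies;
              return true;                    /* caller bumps the requested freq */
      }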
    • drm/i915: Fix __wait_seqno to use true infinite timeouts · 094f9a54
      Committed by Chris Wilson
      When we switched to always using a timeout in conjunction with
      wait_seqno, we lost the ability to detect missed interrupts. Since we
      have had issues with interrupts on a number of generations, and they are
      required to be delivered in a timely fashion for a smooth UX, it is
      important that we do log errors found in the wild and prevent the
      display stalling for upwards of 1s every time the seqno interrupt is
      missed.
      
      Rather than continue to fix up the timeouts to work around the interface
      impedance in wait_event_*(), open code the combination of
      wait_event[_interruptible][_timeout], and use the exposed timer to
      poll for seqno should we detect a lost interrupt.
      
      v2: In order to satisfy the debug requirement of logging missed
      interrupts with the real-world requirements of making machines work even
      if interrupts are hosed, we revert to polling after detecting a missed
      interrupt.
      
      v3: Throw in a debugfs interface to simulate broken hw not reporting
      interrupts.
      
      v4: s/EGAIN/EAGAIN/ (Imre)
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Imre Deak <imre.deak@intel.com>
      [danvet: Don't use the struct typedef in new code.]
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
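
      A hedged sketch of the open-coded wait; all names are illustrative,
      but the shape follows the description: sleep on the seqno waitqueue
      with a bounded timeout so a lost interrupt costs one poll interval
      instead of the full wait:

      #include <linux/jiffies.h>
      #include <linux/sched.h>
      #include <linux/sched/signal.h>
      #include <linux/types.h>
      #include <linux/wait.h>

      /* Illustrative ring state: a waitqueue woken from the seqno interrupt
       * and the last seqno the GPU is known to have completed. */
      struct ring_sketch {
              wait_queue_head_t irq_queue;
              u32 last_completed_seqno;
      };

      /* Wrap-safe "has the GPU passed this seqno?" check. */
      static bool seqno_passed(struct ring_sketch *ring, u32 seqno)
      {
              return (s32)(READ_ONCE(ring->last_completed_seqno) - seqno) >= 0;
      }

      static int wait_seqno_sketch(struct ring_sketch *ring, u32 seqno)
      {
              DEFINE_WAIT(wait);
              int ret = 0;

              for (;;) {
                      prepare_to_wait(&ring->irq_queue, &wait, TASK_INTERRUPTIBLE);
                      if (seqno_passed(ring, seqno))
                              break;
                      if (signal_pending(current)) {
                              ret = -ERESTARTSYS;
                              break;
                      }
                      /* Bounded sleep: even if the interrupt is lost we
                       * re-check the seqno after a short poll interval. */
                      io_schedule_timeout(msecs_to_jiffies(10));
              }
              finish_wait(&ring->irq_queue, &wait);
              return ret;
      }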
  11. 01 Oct, 2013 (9 commits)
  12. 21 Sep, 2013 (1 commit)
  13. 20 Sep, 2013 (2 commits)
    • drm/i915: s/HAS_L3_GPU_CACHE/HAS_L3_DPF · 040d2baa
      Committed by Ben Widawsky
      We'd only ever used this define to denote whether or not we have the
      dynamic parity feature (DPF) and never to determine whether or not L3
      exists. Baytrail is a good example of where L3 exists, and not DPF.
      
      This patch provides clarity in the code for future use cases which might
      want to actually query whether or not L3 exists.
      
      v2: Add /* DPF == dynamic parity feature */
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
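
      A small hedged illustration of the distinction being drawn; the flag
      and macro names here are made up, not the driver's:

      #include <linux/types.h>

      /* A part can have an L3 cache without the dynamic parity feature
       * (DPF), as on Baytrail, so the two queries must stay separate. */
      struct dev_info_sketch {
              bool has_l3;
              bool has_l3_dpf;        /* DPF == dynamic parity feature */
      };

      #define SKETCH_HAS_L3(info)     ((info)->has_l3)
      #define SKETCH_HAS_L3_DPF(info) ((info)->has_l3_dpf)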
    • drm/i915: Do remaps for all contexts · 3ccfd19d
      Committed by Ben Widawsky
      On both Ivybridge and Haswell, row remapping information is saved and
      restored with context. This means, we never actually properly supported
      the l3 remapping because our sysfs interface is asynchronous (and not
      tied to any context), and the known faulty HW would be reused by the
      next context to run.
      
      Note that due to the asynchronous nature of the sysfs entry, there is no
      point modifying the registers for the existing context. Instead we set a
      flag for all contexts to load the correct remapping information on the
      next run. Interested clients can use debugfs to determine whether or not
      the row has been remapped.
      
      One could propose at this point that we just do the remapping in the
      kernel. I guess since we have to maintain the sysfs interface anyway,
      I'm not sure how useful it is, and I do like keeping the policy in
      userspace (it wasn't my original decision to make the
      interface the way it is, so I'm not attached).
      
      v2: Force a context switch when we have a remap on the next switch.
      (Ville)
      Don't let userspace use the interface with disabled contexts.
      
      v3: Don't force a context switch, just let it nop.
      Fix improper context slice remap initialization (1<<1 instead of 1<<i), but I
      rewrote it to avoid a second round of confusion.
      Error print moved to the error path. (All Ville)
      Added a comment on why the slice remap initialization happens.
      
      CC: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
      Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
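
      A hedged sketch of the flag-based scheme; the names are illustrative,
      but the structure mirrors the description: the sysfs path marks every
      context, and the remap is emitted the next time each context runs:

      #include <linux/types.h>

      struct ctx_sketch {
              u8 remap_slice;         /* bitmask of slices still to remap */
      };

      /* sysfs write path: no MMIO here, just mark every context so the new
       * remapping information is loaded the next time each one is run. */
      static void mark_slice_for_remap(struct ctx_sketch *ctxs, int nctx, int slice)
      {
              int i;

              for (i = 0; i < nctx; i++)
                      ctxs[i].remap_slice |= 1 << slice;
      }

      /* Context-switch path: emit any pending remaps for the incoming context. */
      static void apply_pending_remaps(struct ctx_sketch *ctx)
      {
              int slice;

              for (slice = 0; ctx->remap_slice; slice++) {
                      if (!(ctx->remap_slice & (1 << slice)))
                              continue;
                      /* write the L3 remapping registers for this slice here */
                      ctx->remap_slice &= ~(1 << slice);
              }
      }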