- 27 March 2013, 2 commits
-
-
Committed by Egbert Eich

This allows us to enable HPD interrupts for individual pins, so that we only receive hotplug events from lines which are connected and working.

v2: Restructured initialization of the const arrays following a suggestion by Chris Wilson <chris@chris-wilson.co.uk>

Signed-off-by: Egbert Eich <eich@suse.de>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
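A minimal sketch of the idea, assuming the per-pin enable-bit tables and setup hook described above (the register bit names are the real GMCH hotplug-enable bits; the table is trimmed and the connector iteration is illustrative):

/* Illustrative sketch: map each HPD pin to its enable bit and OR in only
 * the pins that a registered encoder actually claimed. */
static const u32 hpd_mask_i915[] = {
	[HPD_CRT]    = CRT_HOTPLUG_INT_EN,
	[HPD_SDVO_B] = SDVOB_HOTPLUG_INT_EN,
	[HPD_SDVO_C] = SDVOC_HOTPLUG_INT_EN,
	[HPD_PORT_B] = PORTB_HOTPLUG_INT_EN,
	[HPD_PORT_C] = PORTC_HOTPLUG_INT_EN,
	[HPD_PORT_D] = PORTD_HOTPLUG_INT_EN,
};

static void i915_hpd_irq_setup(struct drm_device *dev)
{
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct intel_encoder *intel_encoder;
	u32 hotplug_en = 0;

	/* Enable only the pins that a connected+working output claimed. */
	list_for_each_entry(intel_encoder,
			    &dev->mode_config.encoder_list, base.head)
		if (intel_encoder->hpd_pin != HPD_NONE)
			hotplug_en |= hpd_mask_i915[intel_encoder->hpd_pin];

	I915_WRITE(PORT_HOTPLUG_EN, hotplug_en);
}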
-
Committed by Egbert Eich

It's basically identical to i915_hpd_irq_setup().

Signed-off-by: Egbert Eich <eich@suse.de>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 23 March 2013, 2 commits
-
-
Committed by Paulo Zanoni

So don't read it when capturing the error state. This solves "unclaimed register" messages on Haswell when we have a GPU hang.

V2: Check for HAS_PCH_SPLIT instead of Gen5+ because VLV still has this register.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

Requested by Daniel.

v2: Fix incorrect num_pipe settings. (Chris)

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 March 2013, 2 commits
-
-
Committed by Chris Wilson

Once we thought we got semaphores working, we disabled kicking the ring if hangcheck fired whilst waiting upon a ring, as it was doing more harm than good:

commit 4e0e90dc
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Wed Dec 14 13:56:58 2011 +0100

    drm/i915: kicking rings stuck on semaphores considered harmful

However, life is never that easy and semaphores are still causing problems whereby the value written by one ring (bcs) is not being propagated to the waiter (rcs). Thus the waiter never wakes up and we declare the GPU hung, which often has unfortunate consequences, even if we successfully reset the GPU. But the GPU is idle, as it has completed the work; it just didn't notify its clients. So we can detect the incomplete wait during hangcheck, probe the target ring to see if it has indeed emitted the breadcrumb seqno following the work, and then, and only then, kick the waiter.

Based on a suggestion by Ben Widawsky.

v2: cross-check wait with ipehr. fix signaller calculation.

References: https://bugs.freedesktop.org/show_bug.cgi?id=54226
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Ben Widawsky <ben@bwidawsk.net>
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
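A hedged sketch of the check described above. The helper names follow the driver's conventions but are assumptions here; the point is that a waiter is only kicked once the ring it waits on has provably passed the expected seqno:

/* Illustrative sketch: decide whether it is safe to kick a ring that
 * hangcheck found stuck on a MI_SEMAPHORE_MBOX wait. */
static bool semaphore_passed(struct intel_ring_buffer *waiter)
{
	u32 wait_seqno;
	struct intel_ring_buffer *signaller;

	/* Decode the wait instruction to find which ring we are waiting on
	 * and for which seqno (assumed helper). */
	signaller = semaphore_waits_for(waiter, &wait_seqno);
	if (signaller == NULL || wait_seqno == 0)
		return false;	/* not actually stuck on a semaphore */

	/* Kick only if the signaller has already emitted the breadcrumb we
	 * are waiting for, i.e. the work really is complete. */
	return i915_seqno_passed(signaller->get_seqno(signaller, false),
				 wait_seqno);
}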
-
Committed by Paulo Zanoni

To avoid this:

[  256.798060] [drm] capturing error event; look for more information in/sys/kernel/debug/dri/0/i915_error_state

Ben Widawsky identified that this regression has been introduced in

commit 2f86f191
Author: Ben Widawsky <ben@bwidawsk.net>
Date: Mon Jan 28 15:32:15 2013 -0800

    drm/i915: Error state should print /sys/kernel/debug
    ...
    [danvet: split up long line.] <----- he did it
    Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Pimp commit message with the regression note. Also, order more brown paper bags, I've run out.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 06 March 2013, 1 commit
-
-
Committed by Paulo Zanoni

From the docs:

"IIR can queue up to two interrupt events. When the IIR is cleared, it will set itself again after one clock if a second event was stored."

"Only the rising edge of the PCH Display interrupt will cause the North Display IIR (DEIIR) PCH Display Interrupt event bit to be set, so all PCH Display Interrupts, including back to back interrupts, must be cleared before a new PCH Display interrupt can cause DEIIR to be set."

The current code works fine because we don't get many interrupts, but if we enable the PCH FIFO underrun interrupts we'll start getting so many interrupts that at some point new PCH interrupts won't cause DEIIR to be set. The initial implementation I tried was to turn the code that checks SDEIIR into a loop, but we can still get interrupts even after the loop is done (and before the irq handler finishes), so we have to either disable the interrupts or mask them. In the end I concluded that just disabling the PCH interrupts is enough; you don't even need the loop, so this is what this patch implements.

I've tested it and it passes the 2 "PCH FIFO underrun interrupt storms" I can reproduce: the "ironlake_crtc_disable" case and the "wrong watermarks" case. In other words, here's how to reproduce the problem fixed by this patch:
1 - Enable PCH FIFO underrun interrupts (SERR_INT on SNB+)
2 - Boot the machine
3 - While booting we'll get tons of PCH FIFO underrun interrupts
4 - Plug a new monitor
5 - Run xrandr, notice it won't detect the new monitor
6 - Read SDEIIR and notice it's not 0 while DEIIR is 0

Q: Can't we just clear DEIIR before SDEIIR?
A: It doesn't work. SDEIIR has to be completely cleared (including the interrupts stored on its back queue) before it can flip DEIIR's bit to 1 again, and even while you're clearing it you'll be getting more and more interrupts.

Q: Why does it work by just disabling+enabling the south interrupts?
A: Because when we re-enable them, if there's something on the SDEIIR register (maybe an interrupt stored on the queue), the re-enabling will make DEIIR's bit flip to 1, and since we'll already have interrupts enabled we'll get another interrupt, then run our irq handler again to process the "back" interrupts.

v2: Even bigger commit message, added code comments.

Note that this fixes missed dp aux irqs which have been reported for 3.9-rc1. This regression has been introduced by switching to irq-driven dp aux transactions with

commit 9ee32fea
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Sat Dec 1 13:53:48 2012 +0100

    drm/i915: irq-drive the dp aux communication

References: http://www.mail-archive.com/intel-gfx@lists.freedesktop.org/msg18588.html
References: https://lkml.org/lkml/2013/2/26/769
Tested-by: Imre Deak <imre.deak@intel.com>
Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Pimp commit message with references for the dp aux irq timeout regression this fixes.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
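A hedged sketch of the handler ordering the message describes. The register names are the real ones; ilk_handle_pch_events() is an illustrative stand-in for the code that reads and clears SDEIIR exactly once:

/* Illustrative sketch: keep the south (PCH) interrupts disabled for the
 * whole handler, so that re-enabling SDEIER at the end re-raises DEIIR
 * if any PCH event queued up in the meantime. */
static irqreturn_t ironlake_irq_handler(int irq, void *arg)
{
	struct drm_device *dev = arg;
	struct drm_i915_private *dev_priv = dev->dev_private;
	u32 de_iir, sde_ier;

	/* Disable the master interrupt before touching IIR. */
	I915_WRITE(DEIER, I915_READ(DEIER) & ~DE_MASTER_IRQ_CONTROL);
	POSTING_READ(DEIER);

	/* Disable south interrupts.  We'll only write to SDEIIR once, so
	 * further PCH events land in its back queue instead of racing the
	 * DEIIR rising-edge detection. */
	sde_ier = I915_READ(SDEIER);
	I915_WRITE(SDEIER, 0);
	POSTING_READ(SDEIER);

	de_iir = I915_READ(DEIIR);
	if (de_iir) {
		if (de_iir & DE_PCH_EVENT)
			ilk_handle_pch_events(dev);	/* clears SDEIIR once */
		I915_WRITE(DEIIR, de_iir);
	}

	/* Re-enable: a queued PCH event now flips DEIIR again and we simply
	 * take another interrupt to process the "back" events. */
	I915_WRITE(DEIER, I915_READ(DEIER) | DE_MASTER_IRQ_CONTROL);
	I915_WRITE(SDEIER, sde_ier);
	POSTING_READ(SDEIER);

	return IRQ_HANDLED;
}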
-
- 05 March 2013, 3 commits
-
-
Committed by Ben Widawsky

On error, this represents the state of the currently running context at the time it was loaded. Unfortunately, since we're hung and can't switch out the context, this may not tell us too much about the most current state of the context, but it does give clues about what has happened since loading.

Thanks to recent doc updates, we have a little more confidence regarding what is actually in this memory, and perhaps it will help us gain more insight into certain bugs. AFAICT, the most interesting info is in the first page. To save space, we only capture the first page. In the future, we might want to dump more.

Sample of the relevant part of error state:

render ring --- HW Context = 0x01b20000
[0000] 00000000 1100105f 00002028 ffff0880
[0010] 0000209c feff4040 000020c0 efdf0080
[0020] 00002178 00000001 0000217c 00145855
[0030] 00002310 00000000 00002314 00000000

v2: Move error collection to the ring error code
    Change format of dump to not confuse intel_error_decode (Chris)
    Put the context error object with the others (Chris)
    Don't search bound_list instead of active_list (Chris)

v3: extract and flatten context recording (daniel)
    checkpatch related fixes for the copypasta in debugfs

v4: bug in v3 (Daniel)
    - if ((ring->id == RCS) && error->ccid)
    + if ((ring->id != RCS) || !error->ccid)

References: https://bugs.freedesktop.org/show_bug.cgi?id=55845
Reviewed-by (v2): Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Bikeshed away the redundant parentheses around ring->id != RCS]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky

v2: Actually use num_pages (Chris)

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 21 February 2013, 2 commits
-
-
Committed by Ville Syrjälä

Caching the PIPESTAT enable bits has been deemed pointless. Just read them from the register itself.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

The indentation is getting way too deep. Pull the vblank interrupt handling out to separate functions.

v2: Keep flip_mask handling in the main irq handler and flatten {i8xx,i915}_handle_vblank() even further.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 February 2013, 3 commits
-
-
Committed by Ville Syrjälä

Use the gen3 logic for handling page flip interrupts on gen4. Unfortunately this kills the stall check, since that looks like it can easily trigger too early. With the current logic the stall check would kick in on the first vblank after the flip has been submitted to the ring. If the CS takes longer than that to process the commands in the ring, the stall check will cause the page flip to be completed too early. That doesn't sound like a very good idea. Something better should be devised if we still need the stall check. For now, mark i915_pageflip_stall_check() as unused.

v2: Fix irq enable_mask and add __always_unused (Chris Wilson)

References: https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/1116587
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

If the interrupt handler were to process a previous vblank interrupt and the following flip pending interrupt at the same time, the page flip would be completed too soon. To eliminate this race, check the live pending flip status from the ISR register before finishing the page flip.

v2: Added a comment explaining the logic (by Chris Wilson)
v3: Fix a typo in the comment

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
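A minimal sketch of that check, assuming a gen2-style vblank helper; the function shape is illustrative and trimmed, flip_pending stands for the plane's "display flip pending" ISR bit:

/* Illustrative sketch: only complete the page flip if it is no longer
 * reported as pending in the live ISR status.  If it still is, this
 * interrupt belongs to the vblank that preceded the flip submission. */
static bool i8xx_handle_vblank(struct drm_device *dev,
			       int pipe, u16 flip_pending)
{
	struct drm_i915_private *dev_priv = dev->dev_private;

	if (!drm_handle_vblank(dev, pipe))
		return false;

	/* The flip-done event can race with the next flip becoming pending:
	 * re-check the live status before finishing anything. */
	if (I915_READ16(ISR) & flip_pending)
		return false;

	intel_finish_page_flip(dev, pipe);
	return true;
}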
-
Committed by Ville Syrjälä

GPU reset will drop all flips that are still in the ring. So after the reset, call update_plane() for all CRTCs to make sure the primary planes are scanning out from the correct buffer. Also finish all pending flips. That means user space will get its page flip events and won't get stuck waiting for them.

v2: Explicitly finish page flips instead of relying on the FLIP_DONE interrupt being generated by the base address update.
v3: Make two loops over crtcs to avoid deadlocks with the crtc mutex

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Fixup long line complaint from checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
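A hedged sketch of the two-pass structure the v3 note describes; the helper name and the exact set of calls are assumptions based on the message:

/* Illustrative sketch: after a GPU reset, first complete pending flips
 * (without holding crtc->mutex, so flip waiters can make progress), then
 * restore the primary plane base addresses under the mutex. */
static void intel_display_handle_reset(struct drm_device *dev)
{
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct drm_crtc *crtc;

	/* Pass 1: the reset threw away queued flips, so deliver the page
	 * flip events userspace is still waiting for. */
	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
		intel_finish_page_flip_plane(dev, to_intel_crtc(crtc)->plane);

	/* Pass 2: re-point the primary planes at the current framebuffer. */
	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
		mutex_lock(&crtc->mutex);
		if (crtc->enabled)
			dev_priv->display.update_plane(crtc, crtc->fb,
						       crtc->x, crtc->y);
		mutex_unlock(&crtc->mutex);
	}
}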
-
- 15 February 2013, 2 commits
-
-
Committed by Paulo Zanoni

So we can remove duplicated code. Note that this function is used not only on IBX, but also on CPT and LPT.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Also bikeshed s/ironlake_enable_pch_hotplug/ibx_enable_hotplug to keep consistent with our ibx-for-pch naming scheme.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

They're physically the same pins and also the same bits; duplicating them only confuses the reader. This also makes it a bit obvious that we have quite some code duplication going on here. Squashing that is left for a larger rework of our hpd handling though.

Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 31 January 2013, 1 commit
-
-
Committed by Ben Widawsky

/sys/kernel/debug has more or less been the standard location of debugfs for several years now. Other parts of DRM already use this location, so we should as well.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Carl Worth <cworth@cworth.org>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: split up long line.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 22 January 2013, 2 commits
-
-
Committed by Daniel Vetter

Damien Lespiau wondered how racy the gpu reset/hang detection code is against concurrent gpu resets/hang detections, or combinations thereof. Luckily the single work item is guaranteed to never run concurrently, so reset handling is already single-threaded. Hence we only have to worry about concurrent hang detections, or a hang detection firing off while we're still processing an older gpu reset request. Due to the new mechanism of setting the reset-in-progress flag and the ordering guaranteed by the schedule_work function, there's nothing to do but add a comment explaining why we're safe.

The only thing I've noticed is that we still try to reset the gpu now, even when it is declared terminally wedged. Add a check for that to avoid continuous warnings about failed resets, in case the hangcheck timer ever gets stuck.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

With the previous patch the state transition handling of the reset code itself is now (hopefully) race free and solid. But that still leaves out everyone else - with the various lock-free wait paths we have there's the possibility that the reset happens between the point where we read the seqno we should wait on and the actual wait. And if __wait_seqno then never sees the RESET_IN_PROGRESS state, we'll happily wait for a seqno which will in all likelihood never signal.

In practice this is not a big problem, since the X server gets constantly interrupted and can then submit more work (hopefully) to unblock everyone else: as soon as a new seqno write lands, all waiters will unblock. But running the i-g-t reset testcase ZZ_hangman can expose this race, especially on slower hw with fewer cpu cores.

Now looking forward to ARB_robustness and friends, that's not the best possible behaviour, hence this patch adds a reset_counter to be able to detect any reset, even if a given thread never observed the in-progress state.

The important part is to correctly order things:
- The write side needs to increment the counter after any seqno gets reset. Hence we need to do that at the end of the reset work, and again wake everyone up. We also need to place a barrier in between any possible seqno changes and the counter increment, since any unlock operations only guarantee that nothing leaks out, but not that a later load operation gets moved ahead.
- On the read side we need to ensure that no reset can sneak in and invalidate the seqno. In all cases we can use the one-sided barrier that unlock operations guarantee (of the lock protecting the respective seqno/ring pair) to ensure correct ordering. Hence it is sufficient to place the atomic read before the mutex/spin_unlock and no additional barriers are required.

The end result of all this is that we need to wake up everyone twice in a reset operation:
- First, before the reset starts, to get any lockholders off the locks, so that the reset can proceed.
- Second, after the reset is completed, to allow waiters to properly and reliably detect the reset condition and bail out.

I admit that this entire reset_counter thing smells a bit like overkill, but I think it's justified since it makes it really explicit what the bail-out condition is. And we need a reset counter anyway to implement ARB_robustness, and imo with finer-grained locking on the horizon this is the most resilient scheme I could think of.

v2: Drop spurious change in the wait_for_error EXIT_COND - we only need to wait until we leave the reset-in-progress wedged state.

v3: Don't play tricks with barriers in the throttle ioctl, the spin_unlock is barrier enough. I've also considered using a little helper to grab the current reset_counter, but then decided that hiding the atomic_read isn't a great idea, since having it explicitly show up in the code is a nice reminder to reviewers to check the memory barriers.

v4: Add a comment to explain why we need to fall through in __wait_seqno in the end variable assignments.

v5: Review from Damien:
- s/smb/smp/ in a comment
- don't increment the reset counter after we've set it to WEDGED. Now we (again) properly wedge the gpu when the reset fails.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
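A hedged sketch of the read-side pattern described above. The wrapper name and the bail-out helper are illustrative; the counter is sampled before the unlock, which provides the required ordering, and compared again when the wait finishes:

/* Illustrative sketch of a reset-aware wait. */
static int wait_for_seqno_reset_aware(struct drm_i915_private *dev_priv,
				      struct intel_ring_buffer *ring,
				      u32 seqno)
{
	unsigned reset_counter;
	int ret;

	/* Sample the reset counter _before_ dropping the lock that protects
	 * the seqno/ring pair; the unlock is barrier enough. */
	reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
	mutex_unlock(&dev_priv->dev->struct_mutex);

	ret = __wait_seqno(ring, seqno, reset_counter, true, NULL);

	mutex_lock(&dev_priv->dev->struct_mutex);
	return ret;
}

/* Inside the wait, the bail-out condition then becomes, roughly: */
static bool reset_happened(struct i915_gpu_error *error,
			   unsigned reset_counter)
{
	return i915_reset_in_progress(error) ||
	       atomic_read(&error->reset_counter) != reset_counter;
}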
-
- 20 January 2013, 3 commits
-
-
Committed by Daniel Vetter

We have two important transitions of the wedged state in the current code:
- 0 -> 1: This means a hang has been detected, and it signals to everyone that they should please get off any locks, so that the reset work item can do its job.
- 1 -> 0: The reset handler has completed.

Now the last transition mixes up two states: "Reset completed and successful" and "Reset failed". To distinguish these two we do some tricks with the reset completion, but I simply could not convince myself that this doesn't race under odd circumstances. Hence split this up, and add a new terminal state indicating that the hw is gone for good. Also add explicit #defines for both states, update comments.

v2: Split out the reset handling bugfix for the throttle ioctl.

v3: s/tmp/wedged/ suggested by Chris Wilson. Also fix up a rebase error which prevented this patch from actually compiling.

v4: To unify the wedged state with the reset counter, keep the reset-in-progress state just as a flag. The terminally-wedged state is now denoted with a big number.

v5: Add a comment to the reset_counter special values explaining that WEDGED & RESET_IN_PROGRESS needs to be true for the code to be correct.

v6: Fix up logic errors introduced with the wedged+reset_counter unification. Since WEDGED implies reset-in-progress (in a way we're terminally stuck in the dead-but-reset-not-completed state), we need to ensure that we check for this everywhere. The specific bug was in wait_for_error, which would simply have timed out.

v7: Extract an inline i915_reset_in_progress helper to make the code more readable. Also annotate the reset-in-progress case with an unlikely, to help the compiler optimize the fastpath. Do the same for the terminally wedged case with i915_terminally_wedged.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
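A hedged sketch of the encoding the v4-v7 notes describe; the exact constant values are assumptions chosen to satisfy the stated invariant that WEDGED & RESET_IN_PROGRESS is true:

/* Illustrative sketch: reset_counter doubles as the wedged state. */
#define I915_RESET_IN_PROGRESS_FLAG	1
#define I915_WEDGED			0xffffffff	/* "a big number" */

static inline bool i915_reset_in_progress(struct i915_gpu_error *error)
{
	/* unlikely() keeps the no-reset fastpath cheap. */
	return unlikely(atomic_read(&error->reset_counter)
			& I915_RESET_IN_PROGRESS_FLAG);
}

static inline bool i915_terminally_wedged(struct i915_gpu_error *error)
{
	return atomic_read(&error->reset_counter) == I915_WEDGED;
}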
-
Committed by Daniel Vetter

And to make Ben Widawsky happier, use the gpu_error instead of the entire device as the argument in some functions. Drop the outdated comment on ->wedged for now; a follow-up patch will change the semantics and add a proper comment again.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

This has been sprinkled all over the place in dev_priv. I think it'd be good to also move all the code into a separate file like i915_gem_error.c, but that's for another patch.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 January 2013, 2 commits
-
-
Committed by Ben Widawsky

The purpose of the gtt structure is to help isolate our gtt specific properties from the rest of the code (in doing so it helps us finish the isolation from the AGP connection).

The following members are pulled out (and renamed):
gtt_start
gtt_total
gtt_mappable_end
gtt_mappable
gtt_base_addr
gsm

The gtt structure will serve as a nice place to put gen specific gtt routines in upcoming patches. As far as what else I feel belongs in this structure: it is meant to encapsulate the GTT's physical properties. This is why I've not added fields which track various drm_mm properties, or things like gtt_mtrr (which is itself a pretty transient field).

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
[Ben modified commit messages]
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
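A hedged sketch of the encapsulation described above; the field names and types are assumptions derived from the renamed dev_priv members listed in the message:

/* Illustrative sketch: the GTT's physical properties in one place. */
struct i915_gtt {
	unsigned long start;		/* was gtt_start */
	size_t total;			/* was gtt_total */
	unsigned long mappable_end;	/* was gtt_mappable_end */
	struct io_mapping *mappable;	/* was gtt_mappable */
	phys_addr_t mappable_base;	/* was gtt_base_addr */
	void __iomem *gsm;		/* mapping of the GTT entries */
};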
-
Committed by Egbert Eich

This variable is only used locally in the irq postinstall functions for ivybridge and ironlake.

Signed-off-by: Egbert Eich <eich@suse.de>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 15 January 2013, 1 commit
-
-
Committed by Chris Wilson

These are useful for investigating hangs involving WAIT_FOR_EVENT.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Apply a droplet of Future-Proof in the if-ladder.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 19 December 2012, 1 commit
-
-
Committed by Ben Widawsky

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 December 2012, 1 commit
-
-
Committed by Daniel Vetter

Now that Chris Wilson demonstrated that the key for stability on early gen 2 is to simply _never_ exchange the physical backing storage of batch buffers, I've tried a stab at a kernel solution. Doesn't look too nefarious imho, now that I don't try to be too clever for my own good any more.

v2: After discussing the various techniques, we've decided to always blit batches on the suspect devices, but allow userspace to opt out of the kernel workaround and assume full responsibility for providing coherent batches. The principal reason is that avoiding the blit does improve performance in a few key microbenchmarks and also in cairo-trace replays.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet:
- Drop the hunk which uses HAS_BROKEN_CS_TLB to implement the ring wrap w/a. Suggested by Chris Wilson.
- Also add the ACTHD check from Chris Wilson for the error state dumping, so that we still catch batches when userspace opts out of the w/a.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 12 December 2012, 1 commit
-
-
Committed by Daniel Vetter

For GMCH platforms we set up the hpd irq registers in the irq postinstall hook. But since we only enable the irq sources we actually need in PORT_HOTPLUG_EN/STATUS, taking dev_priv->hotplug_supported_mask into account, no hpd interrupt sources are enabled since

commit 52d7eced
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Sat Dec 1 21:03:22 2012 +0100

    drm/i915: reorder setup sequence to have irqs for output setup

Wrongly set-up interrupts also lead to broken hw-based load-detection on at least GM45, resulting in ghost VGA/TV-out outputs.

To fix this, delay the hotplug register setup until after all outputs are set up, by moving it into a new dev_priv->display.hpd_irq_setup callback. We might also move the PCH_SPLIT platforms to such a setup eventually.

Another funny part is that we need to delay the fbdev initial config probing until after the hpd regs are set up, for otherwise it'll detect ghost outputs. But we can only enable the hpd interrupt handling itself (and the output polling) _after_ that initial scan, due to massive locking brain-damage in the fbdev setup code. Add a big comment to explain this cute little dragon lair.

v2: Encapsulate all the fbdev handling by wrapping the move call into intel_fbdev_initial_config in intel_fb.c. Requested by Chris Wilson.

v3: Applied bikeshed from Jesse Barnes.

v4: Imre Deak noticed that we also need to call intel_hpd_init after the drm_irqinstall calls in the gpu reset and resume paths - otherwise hotplug will be broken. Also improve the comment a bit about why hpd_init needs to be called before we set up the initial fbdev config.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54943
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> (v3)
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 09 December 2012, 1 commit
-
-
Committed by Tomas Janousek

Commit 9ee32fea unconditionally prevents the CPU from entering idle states until intel_dp_aux_ch completes for the first time, which never happens on my DisplayPort-less intel gfx, causing the CPU to get rather hot.

Signed-off-by: Tomas Janousek <tomi@nomi.cz>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 06 December 2012, 10 commits
-
-
Committed by Chris Wilson

Before queuing the flip, but crucially after attaching the unpin-work to the crtc, we continue to set up the unpin-work. However, should the hardware fire early, we see the connected unpin-work and queue the task. The task then promptly runs and unpins the fb before we finish taking the required references or even pinning it... Havoc.

To close the race, we use the flip-pending atomic to indicate when the flip is finally set up and enqueued. So during the flip-done processing, we can check more accurately whether the flip was expected.

v2: Add the appropriate mb() to ensure that the writes to the page-flip worker are complete prior to marking it active and emitting the MI_FLIP. On the read side, the mb should be enforced by the spinlocks.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
[danvet: Review the barriers a bit, we need a write barrier both before and after updating ->pending. Similarly we need a read barrier in the interrupt handler both before and after reading ->pending. With well-ordered irqs only one barrier in each place should be required, but since this patch explicitly sets out to combat spurious interrupts with its staged activation of the unpin work we need to go full-bore on the barriers, too. Discussed with Chris Wilson on irc and changes acked by him.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
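A hedged sketch of the staged activation described above; the helper names and the value written to ->pending are illustrative, the barrier placement follows the danvet note:

/* Illustrative sketch: ->pending is only advanced, with write barriers
 * on both sides, once the unpin work is fully initialised. */
static void intel_mark_page_flip_active(struct intel_unpin_work *work)
{
	/* Make the fully set-up work item visible before marking it
	 * active for the interrupt handler... */
	smp_wmb();
	atomic_set(&work->pending, 1);
	/* ...and make the marking visible before MI_DISPLAY_FLIP is
	 * emitted to the ring. */
	smp_wmb();
}

static bool page_flip_finished(struct intel_unpin_work *work)
{
	/* Read side, mirrored: ignore a flip-done event that fired before
	 * the flip was fully queued. */
	smp_rmb();
	if (atomic_read(&work->pending) == 0)
		return false;
	smp_rmb();
	return true;
}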
-
Committed by Paulo Zanoni

Having 9500 lines repeated on dmesg does not help me at all.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

At least on the platforms that have a dp aux irq and also have it enabled - vlv/hsw should have one, too, but I don't have a machine to test this on. Judging from the docs there's no dp aux interrupt for gm45. Also, I only have an ivb cpu edp machine, so the dp aux A code for snb/ilk is untested.

For dpcd probing when nothing is connected it slashes about 5ms of cpu time (cpu time is now negligible), which agrees with 3 * 5 * 400 usec timeouts.

A previous version of this patch increased the time required to go through the dp_detect cycle (which includes reading the edid) from around 33 ms to around 40 ms. Experiments indicated that this is purely due to the irq latency - the hw doesn't allow us to queue up dp aux transactions and hence irq latency directly affects throughput. gmbus is much better: there we have an 8 byte buffer, and we get the irq once another 4 bytes can be queued up.

But by using the pm_qos interface to request the lowest possible cpu wake-up latency this slowdown completely disappeared.

Since all our output detection logic is single-threaded with the mode_config mutex right now anyway, I've decided not to play fancy and to just reuse the gmbus wait queue. But this would definitely prep the way to run dp detection on different ports in parallel.

v2: Add a timeout for dp aux transfers when using interrupts - the hw _does_ prevent this with the hw-based 400 usec timeout, but if the irq somehow doesn't arrive we're screwed. Lesson learned while developing this ;-)

v3: While at it also convert the busy-loop to wait_for_atomic, so that we don't run the risk of an infinite loop any more.

v4: Ensure we have the smallest possible irq latency by using the pm_qos interface.

v5: Add a comment to the code to explain why we frob pm_qos. Suggested by Chris Wilson.

v6: Disable dp irq for vlv, that's easier than trying to get at docs and hw.

v7: Squash in a fix for Haswell that Paulo Zanoni tracked down - the dp aux registers aren't at a fixed offset any more, but can be on the PCH while the DP port is on the cpu die.

Reviewed-by: Imre Deak <imre.deak@intel.com> (v6)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
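A hedged sketch of the wait described above, assuming the pm_qos frobbing and the reused gmbus wait queue from the message; the function shape and ch_ctl plumbing are illustrative:

/* Illustrative sketch: irq-driven dp aux completion wait. */
static u32 intel_dp_aux_wait_done(struct drm_i915_private *dev_priv,
				  u32 ch_ctl)
{
	u32 status;

	/* dp aux is very sensitive to irq latency: ask cpuidle for the
	 * lowest possible wake-up latency while the transfer is in flight
	 * (this is the pm_qos frobbing the v5 note refers to). */
	pm_qos_update_request(&dev_priv->pm_qos, 0);

	/* The hw itself times out after 400 usec; the jiffies timeout only
	 * guards against a lost interrupt (v2 note). */
	wait_event_timeout(dev_priv->gmbus_wait_queue,
			   !((status = I915_READ(ch_ctl)) &
			     DP_AUX_CH_CTL_SEND_BUSY),
			   msecs_to_jiffies(10));

	pm_qos_update_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);

	return status;
}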
-
Committed by Daniel Vetter

Doesn't do anything yet other than call dp_aux_irq_handler.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

We need two special things to properly wire this up:
- Add another argument to gmbus_wait_hw_status to pass in the correct interrupt bit in gmbus4.
- Since we can only get an irq for one of the two events we want, hand-roll the wait_event_timeout code so that we wake up every jiffie and can check for NAKs. This way we also subsume gmbus support for platforms without interrupts (or where those are not yet enabled).

The important bit really is to only enable one gmbus interrupt source at the same time - with that piece of lore figured out, this seems to work flawlessly.

Ben Widawsky rightfully complained about the lack of measurements for the claimed benefits (especially since the first version was actually broken and fell back to bit-banging). Previously, reading the 256 byte hdmi EDID took about 72 ms here. With this patch it's down to 33 ms. Given that transferring the 256 bytes over i2c at wire speed takes 20.5 ms alone, the reduction in additional overhead is rather nice.

v2: Chris Wilson wondered whether GMBUS4 might contain some set bits when booting up and hence result in some spurious interrupts. Since we clear GMBUS4 after every wait and we do gmbus transfers really early in the setup sequence to detect displays, the window is small, but still: be paranoid and clear it properly.

v3: Clarify the comment that gmbus irq generation can only support one kind of event, why it bothers us and how we work around that limit.

Cc: Daniel Kurtz <djkurtz@chromium.org>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
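A hedged sketch of the hand-rolled wait described in the second bullet above. The GMBUS register names are the real ones; the surrounding details are trimmed and the exact timeout is an assumption:

/* Illustrative sketch: wait for a GMBUS2 status bit, driven by the single
 * interrupt source passed in via gmbus4_irq_en, while still waking every
 * jiffie so a NAK (which raises no irq here) is noticed promptly. */
static int gmbus_wait_hw_status(struct drm_i915_private *dev_priv,
				u32 gmbus2_status, u32 gmbus4_irq_en)
{
	int i;
	int reg_offset = dev_priv->gpio_mmio_base;
	u32 gmbus2 = 0;
	DEFINE_WAIT(wait);

	/* Only one gmbus interrupt source may be enabled at a time. */
	I915_WRITE(GMBUS4 + reg_offset, gmbus4_irq_en);

	/* Hand-rolled wait_event_timeout: wake at least once per jiffie. */
	for (i = 0; i < msecs_to_jiffies(50) + 1; i++) {
		prepare_to_wait(&dev_priv->gmbus_wait_queue, &wait,
				TASK_UNINTERRUPTIBLE);

		gmbus2 = I915_READ(GMBUS2 + reg_offset);
		if (gmbus2 & (GMBUS_SATOER | gmbus2_status))
			break;

		schedule_timeout(1);
	}
	finish_wait(&dev_priv->gmbus_wait_queue, &wait);

	/* Clear GMBUS4 again to avoid spurious interrupts later (v2 note). */
	I915_WRITE(GMBUS4 + reg_offset, 0);

	if (gmbus2 & GMBUS_SATOER)
		return -ENXIO;
	if (gmbus2 & gmbus2_status)
		return 0;
	return -ETIMEDOUT;
}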
-
Committed by Daniel Vetter

Only enables the interrupt and puts an irq handler into place; doesn't do anything yet. Unfortunately there's no gmbus interrupt support for gen2/3 (save for pnv, but there the irq is marked as "Test mode").

v2: Wire up the irq handler for vlv and gen4 properly.

v3: i915_enable_pipestat expects the mask bit, not the status bits ... and for added hilarity those are rather inconsistently named.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

Otherwise the new&shiny irq-driven gmbus and dp aux code won't work that well. Noticed since the dp aux code doesn't have an automatic fallback with a timeout (since the hw provides for that already).

v2: Simply move drm_irq_install before intel_modeset_gem_init, as suggested by Ben Widawsky.

v3: Now that interrupts are enabled before all connectors are fully set up, we might fall over serving a HPD interrupt while things are still being set up. Instead of jumping through massive hoops and complicating the code with a separate hpd irq enable step, simply block out the hotplug work item from doing anything until things are in place.

v4: Actually, we can enable hotplug processing only after the fbdev is fully set up, since we call down into the fbdev from the hotplug work functions. So stick the hpd enabling right next to the poll helper initialization.

v5: We need to enable irqs before intel_modeset_init, since that function sets up the outputs.

v6: Fix up the cleanup sequence, too.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

... together with all the other irq related resources in intel_irq_init. I've managed to oops in the notify_ring function on my ilk, presumably because of the powerctx setup call to i915_gpu_idle.

Note that this is only a problem with the reordered irq setup sequence for irq-driven gmbus/dp aux.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

This is for legacy legacy stuff, and checking with the leftover pipe from the previous loop is probably not what we want, since pipe == 2 after the loop ... Then we only assign a variable and do nothing with it.

Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

No need to have the exact same code twice.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-