- 20 Feb 2013, 2 commits
-
-
Committed by Ville Syrjälä
If the interrupt handler were to process a previous vblank interrupt and the following flip pending interrupt at the same time, the page flip would be completed too soon. To eliminate this race, check the live pending flip status from the ISR register before finishing the page flip.

v2: Added a comment explaining the logic (by Chris Wilson)
v3: Fix a typo in the comment

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
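A minimal sketch of the check described above, assuming i915-style I915_READ()/intel_finish_page_flip() helpers and a caller that passes in the pipe's flip-pending ISR bit (names and placement are illustrative, not the exact driver code):

    /* Complete the flip only once the hardware no longer reports it pending.
     * If ISR still shows the flip as pending, this vblank belongs to the
     * frame before the flip, so wait for the next one. */
    static bool try_finish_page_flip(struct drm_device *dev, int pipe,
                                     u32 flip_pending)
    {
            struct drm_i915_private *dev_priv = dev->dev_private;

            if (I915_READ(ISR) & flip_pending)
                    return false;   /* too early, flip still queued in hw */

            intel_finish_page_flip(dev, pipe);
            return true;
    }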
-
Committed by Ville Syrjälä
GPU reset will drop all flips that are still in the ring. So after the reset, call update_plane() for all CRTCs to make sure the primary planes are scanning out from the correct buffer. Also finish all pending flips. That means user space will get its page flip events and won't get stuck waiting for them.

v2: Explicitly finish page flips instead of relying on the FLIP_DONE interrupt being generated by the base address update.
v3: Make two loops over the crtcs to avoid deadlocks with the crtc mutex.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Fixup long line complaint from checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
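The two-loop structure from v3 might look roughly like this (a sketch with hypothetical helper names such as intel_finish_page_flip_plane() and display.update_plane() standing in for the driver's actual hooks):

    /* Loop 1: complete pending flips without holding any crtc mutex, so the
     * flip workers can't deadlock against us.  Loop 2: restore the primary
     * plane base address under the per-crtc mutex. */
    static void handle_reset_flips(struct drm_device *dev)
    {
            struct drm_i915_private *dev_priv = dev->dev_private;
            struct drm_crtc *crtc;
            enum pipe pipe;

            for_each_pipe(pipe)
                    intel_finish_page_flip_plane(dev, pipe);

            list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
                    mutex_lock(&crtc->mutex);
                    if (crtc->fb)
                            dev_priv->display.update_plane(crtc, crtc->fb,
                                                           crtc->x, crtc->y);
                    mutex_unlock(&crtc->mutex);
            }
    }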
-
- 15 Feb 2013, 2 commits
-
-
Committed by Paulo Zanoni
So we can remove duplicated code. Note that this function is used not only on IBX, but also on CPT and LPT.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Also bikeshed s/ironlake_enable_pch_hotplug/ibx_enable_hotplug to keep consistent with our ibx for pch naming scheme.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
They're physically the same pins and also the same bits; duplicating them only confuses the reader. This also makes it a bit more obvious that we have quite some code duplication going on here. Squashing that is left for a larger rework of our hpd handling though.

Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 31 Jan 2013, 1 commit
-
-
Committed by Ben Widawsky
/sys/kernel/debug has more or less been the standard location of debugfs for several years now. Other parts of DRM already use this location, so we should as well.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Carl Worth <cworth@cworth.org>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: split up long line.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 22 Jan 2013, 2 commits
-
-
Committed by Daniel Vetter
Damien Lespiau wondered how racy the gpu reset/hang detection code is against concurrent gpu resets/hang detections or combinations thereof. Luckily the single work item is guaranteed to never run concurrently, so reset handling is already single-threaded. Hence we only have to worry about concurrent hang detections, or a hang detection firing off while we're still processing an older gpu reset request. Due to the new mechanism of setting the reset in progress flag and the ordering guaranteed by the schedule_work function there's nothing to do but add a comment explaining why we're safe.

The only thing I've noticed is that we still try to reset the gpu now, even when it is declared terminally wedged. Add a check for that to avoid continuous warnings about failed resets, in case the hangcheck timer ever gets stuck.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
With the previous patch the state transition handling of the reset code itself is now (hopefully) race free and solid. But that still leaves out everyone else - with the various lock-free wait paths we have there's the possibility that the reset happens between the point where we read the seqno we should wait on and the actual wait. And if __wait_seqno then never sees the RESET_IN_PROGRESS state, we'll happily wait for a seqno which will in all likelihood never signal.

In practice this is not a big problem since the X server gets constantly interrupted, and can then submit more work (hopefully) to unblock everyone else: as soon as a new seqno write lands, all waiters will unblock. But running the i-g-t reset testcase ZZ_hangman can expose this race, especially on slower hw with fewer cpu cores.

Now looking forward to ARB_robustness and friends that's not the best possible behaviour, hence this patch adds a reset_counter to be able to detect any reset, even if a given thread never observed the in-progress state.

The important part is to correctly order things:
- The write side needs to increment the counter after any seqno gets reset. Hence we need to do that at the end of the reset work, and again wake everyone up. We also need to place a barrier in between any possible seqno changes and the counter increment, since unlock operations only guarantee that nothing leaks out, but not that a later load operation won't get moved ahead of it.
- On the read side we need to ensure that no reset can sneak in and invalidate the seqno. In all cases we can use the one-sided barrier that unlock operations guarantee (of the lock protecting the respective seqno/ring pair) to ensure correct ordering. Hence it is sufficient to place the atomic read before the mutex/spin_unlock and no additional barriers are required.

The end result of all this is that we need to wake up everyone twice in a reset operation:
- First, before the reset starts, to get any lockholders off the locks, so that the reset can proceed.
- Second, after the reset is completed, to allow waiters to properly and reliably detect the reset condition and bail out.

I admit that this entire reset_counter thing smells a bit like overkill, but I think it's justified since it makes it really explicit what the bail-out condition is. And we need a reset counter anyway to implement ARB_robustness, and imo with finer-grained locking on the horizon this is the most resilient scheme I could think of.

v2: Drop spurious change in the wait_for_error EXIT_COND - we only need to wait until we leave the reset-in-progress wedged state.

v3: Don't play tricks with barriers in the throttle ioctl, the spin_unlock is barrier enough. I've also considered using a little helper to grab the current reset_counter, but then decided that hiding the atomic_read isn't a great idea, since having it explicitly show up in the code is a nice reminder to reviewers to check the memory barriers.

v4: Add a comment to explain why we need to fall through in __wait_seqno in the end variable assignments.

v5: Review from Damien:
- s/smb/smp/ in a comment
- don't increment the reset counter after we've set it to WEDGED. Now we (again) properly wedge the gpu when the reset fails.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
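A condensed sketch of the read-side pattern described above, assuming a gpu_error struct with an atomic reset_counter and a __wait_seqno() that takes the sampled counter (names follow the message, but the code is illustrative):

    unsigned reset_counter;
    int ret;

    /* Sample the reset counter *before* dropping the lock protecting the
     * seqno; the unlock is the one-sided barrier that orders the read. */
    reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
    mutex_unlock(&dev->struct_mutex);

    ret = __wait_seqno(ring, seqno, reset_counter, true, NULL);

    /* Inside the wait loop, every wakeup re-checks for a completed reset: */
    if (reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter))
            return -EAGAIN;         /* a reset happened, caller must restart */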
-
- 20 Jan 2013, 3 commits
-
-
Committed by Daniel Vetter
We have two important transitions of the wedged state in the current code:
- 0 -> 1: This means a hang has been detected, and signals to everyone that they please get off any locks, so that the reset work item can do its job.
- 1 -> 0: The reset handler has completed.

Now the last transition mixes up two states: "Reset completed and successful" and "Reset failed". To distinguish these two we do some tricks with the reset completion, but I simply could not convince myself that this doesn't race under odd circumstances. Hence split this up, and add a new terminal state indicating that the hw is gone for good. Also add explicit #defines for both states, update comments.

v2: Split out the reset handling bugfix for the throttle ioctl.

v3: s/tmp/wedged/ suggested by Chris Wilson. Also fix up a rebase error which prevented this patch from actually compiling.

v4: To unify the wedged state with the reset counter, keep the reset-in-progress state just as a flag. The terminally-wedged state is now denoted with a big number.

v5: Add a comment to the reset_counter special values explaining that WEDGED & RESET_IN_PROGRESS needs to be true for the code to be correct.

v6: Fixup logic errors introduced with the wedged+reset_counter unification. Since WEDGED implies reset-in-progress (in a way we're terminally stuck in the dead-but-reset-not-completed state), we need to ensure that we check for this everywhere. The specific bug was in wait_for_error, which would simply have timed out.

v7: Extract an inline i915_reset_in_progress helper to make the code more readable. Also annotate the reset-in-progress case with an unlikely, to help the compiler optimize the fastpath. Do the same for the terminally wedged case with i915_terminally_wedged.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
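A sketch of the resulting encoding and helpers, using the names from the message (the exact values are illustrative, chosen so that WEDGED & RESET_IN_PROGRESS is true as required by v5):

    /* reset_counter doubles as a state word: the low bit set means a reset
     * is in progress, the all-ones value means terminally wedged. */
    #define I915_RESET_IN_PROGRESS_FLAG     1
    #define I915_WEDGED                     0xffffffff

    static inline bool i915_reset_in_progress(struct i915_gpu_error *error)
    {
            return unlikely(atomic_read(&error->reset_counter)
                            & I915_RESET_IN_PROGRESS_FLAG);
    }

    static inline bool i915_terminally_wedged(struct i915_gpu_error *error)
    {
            return atomic_read(&error->reset_counter) == I915_WEDGED;
    }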
-
Committed by Daniel Vetter
And to make Ben Widawsky happier, use the gpu_error instead of the entire device as the argument in some functions. Drop the outdated comment on ->wedged for now, a follow-up patch will change the semantics and add a proper comment again.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
This has been sprinkled all over the place in dev_priv. I think it'd be good to also move all the code into a separate file like i915_gem_error.c, but that's for another patch.

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 Jan 2013, 2 commits
-
-
Committed by Ben Widawsky
The purpose of the gtt structure is to help isolate our gtt specific properties from the rest of the code (in doing so it helps us finish the isolation from the AGP connection).

The following members are pulled out (and renamed):
gtt_start
gtt_total
gtt_mappable_end
gtt_mappable
gtt_base_addr
gsm

The gtt structure will serve as a nice place to put gen specific gtt routines in upcoming patches. As far as what else I feel belongs in this structure: it is meant to encapsulate the GTT's physical properties. This is why I've not added fields which track various drm_mm properties, or things like gtt_mtrr (which is itself a pretty transient field).

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
[Ben modified commit messages]
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
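What such a structure might look like, with member names derived from the list above (types and field names here are illustrative, not the exact driver definition):

    /* The GTT's physical properties gathered in one place. */
    struct i915_gtt {
            unsigned long start;            /* start offset of the GTT space */
            size_t total;                   /* total size managed by the driver */
            unsigned long mappable_end;     /* end of the CPU-mappable aperture */
            struct io_mapping *mappable;    /* CPU mapping of the aperture */
            phys_addr_t mappable_base;      /* physical base of the aperture */
            void __iomem *gsm;              /* mapping of the GTT page table */
    };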
-
Committed by Egbert Eich
This variable is only used locally in the irq postinstall functions for ivybridge and ironlake.

Signed-off-by: Egbert Eich <eich@suse.de>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 15 Jan 2013, 1 commit
-
-
Committed by Chris Wilson
These are useful for investigating hangs involving WAIT_FOR_EVENT.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Apply a droplet of Future-Proof in the if-ladder.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 19 Dec 2012, 1 commit
-
-
Committed by Ben Widawsky
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 Dec 2012, 1 commit
-
-
Committed by Daniel Vetter
Now that Chris Wilson demonstrated that the key for stability on early gen 2 is to simply _never_ exchange the physical backing storage of batch buffers, I've tried a stab at a kernel solution. Doesn't look too nefarious imho, now that I don't try to be too clever for my own good any more.

v2: After discussing the various techniques, we've decided to always blit batches on the suspect devices, but allow userspace to opt out of the kernel workaround and assume full responsibility for providing coherent batches. The principal reason is that avoiding the blit does improve performance in a few key microbenchmarks and also in cairo-trace replays.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet:
- Drop the hunk which uses HAS_BROKEN_CS_TLB to implement the ring wrap w/a. Suggested by Chris Wilson.
- Also add the ACTHD check from Chris Wilson for the error state dumping, so that we still catch batches when userspace opts out of the w/a.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 12 Dec 2012, 1 commit
-
-
Committed by Daniel Vetter
For GMCH platforms we set up the hpd irq registers in the irq postinstall hook. But since we only enable the irq sources we actually need in PORT_HOTPLUG_EN/STATUS, taking dev_priv->hotplug_supported_mask into account, no hpd interrupt sources are enabled since

commit 52d7eced
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Sat Dec 1 21:03:22 2012 +0100

    drm/i915: reorder setup sequence to have irqs for output setup

Wrongly set-up interrupts also lead to broken hw-based load-detection on at least GM45, resulting in ghost VGA/TV-out outputs. To fix this, delay the hotplug register setup until after all outputs are set up, by moving it into a new dev_priv->display.hpd_irq_setup callback. We might also move the PCH_SPLIT platforms to such a setup eventually.

Another funny part is that we need to delay the fbdev initial config probing until after the hpd regs are set up, for otherwise it'll detect ghost outputs. But we can only enable the hpd interrupt handling itself (and the output polling) _after_ that initial scan, due to massive locking brain-damage in the fbdev setup code. Add a big comment to explain this cute little dragon lair.

v2: Encapsulate all the fbdev handling by wrapping the move call into intel_fbdev_initial_config in intel_fb.c. Requested by Chris Wilson.

v3: Applied bikeshed from Jesse Barnes.

v4: Imre Deak noticed that we also need to call intel_hpd_init after the drm_irq_install calls in the gpu reset and resume paths - otherwise hotplug will be broken. Also improve the comment a bit about why hpd_init needs to be called before we set up the initial fbdev config.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=54943
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> (v3)
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 09 Dec 2012, 1 commit
-
-
Committed by Tomas Janousek
Commit 9ee32fea unconditionally prevents the CPU from entering idle states until intel_dp_aux_ch completes for the first time, which never happens on my DisplayPort-less intel gfx, causing the CPU to get rather hot.

Signed-off-by: Tomas Janousek <tomi@nomi.cz>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
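The fix presumably amounts to scoping the pm_qos request to the aux transfer itself rather than holding it from driver init onwards. A hedged sketch of that pattern using the pm_qos API of that era (dev_priv->pm_qos is assumed to be a pm_qos_request added at init; do_aux_transfer() is a hypothetical stand-in for the actual transfer):

    int ret;

    /* Keep the CPU out of deep idle states only for the duration of the
     * latency-sensitive aux transfer, then drop the constraint again. */
    pm_qos_update_request(&dev_priv->pm_qos, 0);

    ret = do_aux_transfer(intel_dp);    /* hypothetical helper */

    pm_qos_update_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);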
-
- 06 Dec 2012, 10 commits
-
-
Committed by Chris Wilson
Before queuing the flip, but crucially after attaching the unpin-work to the crtc, we continue to set up the unpin-work. However, should the hardware fire early, we see the connected unpin-work and queue the task. The task then promptly runs and unpins the fb before we finish taking the required references or even pinning it... Havoc.

To close the race, we use the flip-pending atomic to indicate when the flip is finally set up and enqueued. So during the flip-done processing, we can check more accurately whether the flip was expected.

v2: Add the appropriate mb() to ensure that the writes to the page-flip worker are complete prior to marking it active and emitting the MI_FLIP. On the read side, the mb should be enforced by the spinlocks.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
[danvet: Review the barriers a bit, we need a write barrier both before and after updating ->pending. Similarly we need a read barrier in the interrupt handler both before and after reading ->pending. With well-ordered irqs only one barrier in each place should be required, but since this patch explicitly sets out to combat spurious interrupts with its staged activation of the unpin work we need to go full-bore on the barriers, too. Discussed with Chris Wilson on irc and changes acked by him.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
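Roughly, the staged activation might look like this (a sketch with illustrative helper names; the driver's actual code and barrier placement differ in detail, as the danvet note above explains):

    /* Arm the unpin work only once it is fully set up, and have the
     * interrupt side re-check the pending flag before trusting it. */
    static void arm_page_flip(struct intel_crtc *crtc,
                              struct intel_unpin_work *work)
    {
            crtc->unpin_work = work;        /* now visible to the irq handler */
            smp_wmb();                      /* finish setup before arming */
            atomic_set(&work->pending, 1);
            smp_wmb();                      /* arm before the MI_FLIP is emitted */
    }

    static void flip_done_irq(struct drm_i915_private *dev_priv,
                              struct intel_crtc *crtc)
    {
            struct intel_unpin_work *work = crtc->unpin_work;

            if (work == NULL || !atomic_read(&work->pending))
                    return;                 /* spurious: flip not armed yet */

            smp_rmb();                      /* pairs with the arming wmb */
            queue_work(dev_priv->wq, &work->work);
    }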
-
Committed by Paulo Zanoni
Having 9500 lines repeated on dmesg does not help me at all.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
At least on the platforms that have a dp aux irq and also have it enabled - vlv/hsw should have one, too. But I don't have a machine to test this on. Judging from docs there's no dp aux interrupt for gm45.

Also, I only have an ivb cpu edp machine, so the dp aux A code for snb/ilk is untested.

For dpcd probing when nothing is connected it slashes about 5ms of cpu time (cpu time is now negligible), which agrees with 3 * 5 400 usec timeouts.

A previous version of this patch increased the time required to go through the dp_detect cycle (which includes reading the edid) from around 33 ms to around 40 ms. Experiments indicated that this is purely due to the irq latency - the hw doesn't allow us to queue up dp aux transactions and hence irq latency directly affects throughput. gmbus is much better, there we have an 8 byte buffer, and we get the irq once another 4 bytes can be queued up. But by using the pm_qos interface to request the lowest possible cpu wake-up latency this slowdown completely disappeared.

Since all our output detection logic is single-threaded with the mode_config mutex right now anyway, I've decided not to play fancy and to just reuse the gmbus wait queue. But this would definitely prep the way to run dp detection on different ports in parallel.

v2: Add a timeout for dp aux transfers when using interrupts - the hw _does_ prevent this with the hw-based 400 usec timeout, but if the irq somehow doesn't arrive we're screwed. Lesson learned while developing this ;-)

v3: While at it also convert the busy-loop to wait_for_atomic, so that we don't run the risk of an infinite loop any more.

v4: Ensure we have the smallest possible irq latency by using the pm_qos interface.

v5: Add a comment to the code to explain why we frob pm_qos. Suggested by Chris Wilson.

v6: Disable dp irq for vlv, that's easier than trying to get at docs and hw.

v7: Squash in a fix for Haswell that Paulo Zanoni tracked down - the dp aux registers aren't at a fixed offset any more, but can be on the PCH while the DP port is on the cpu die.

Reviewed-by: Imre Deak <imre.deak@intel.com> (v6)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
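The irq-driven wait with the v2 timeout might look roughly like this (a hedged sketch modelled on the description above; ch_ctl, gmbus_wait_queue and the 10 ms bound are illustrative, with DP_AUX_CH_CTL_SEND_BUSY standing in for the busy bit of the aux control register):

    /* Sleep until the aux-done irq clears the busy bit, but bound the sleep
     * with a wall-clock timeout in case the interrupt is never delivered.
     * The hardware status register stays the authoritative signal. */
    static u32 wait_for_aux_done(struct drm_i915_private *dev_priv, u32 ch_ctl)
    {
            u32 status;
            bool done;

    #define C (((status = I915_READ(ch_ctl)) & DP_AUX_CH_CTL_SEND_BUSY) == 0)
            done = wait_event_timeout(dev_priv->gmbus_wait_queue, C,
                                      msecs_to_jiffies(10));
            if (!done)
                    DRM_ERROR("dp aux hw did not signal timeout!\n");
    #undef C

            return status;
    }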
-
Committed by Daniel Vetter
Doesn't do anything yet other than call dp_aux_irq_handler.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
We need two special things to properly wire this up:
- Add another argument to gmbus_wait_hw_status to pass in the correct interrupt bit in gmbus4.
- Since we can only get an irq for one of the two events we want, hand-roll the wait_event_timeout code so that we wake up every jiffie and can check for NAKs. This way we also subsume gmbus support for platforms without interrupts (or where those are not yet enabled).

The important bit really is to only enable one gmbus interrupt source at the same time - with that piece of lore figured out, this seems to work flawlessly.

Ben Widawsky rightfully complained about the lack of measurements for the claimed benefits (especially since the first version was actually broken and fell back to bit-banging). Previously reading the 256 byte hdmi EDID took about 72 ms here. With this patch it's down to 33 ms. Given that transferring the 256 bytes over i2c at wire speed takes 20.5 ms alone, the reduction in additional overhead is rather nice.

v2: Chris Wilson wondered whether GMBUS4 might contain some set bits when booting up and hence result in some spurious interrupts. Since we clear GMBUS4 after every wait and we do gmbus transfers really early in the setup sequence to detect displays, the window is small, but still be paranoid and clear it properly.

v3: Clarify the comment that gmbus irq generation can only support one kind of event, why it bothers us and how we work around that limit.

Cc: Daniel Kurtz <djkurtz@chromium.org>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
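A sketch of the hand-rolled wait described above (close in spirit to the real helper but simplified: register offsets are omitted and GMBUS_SATOER is used as the NAK indicator):

    /* Wait for the requested GMBUS2 status, waking every jiffie so a NAK
     * (which generates no interrupt here) is still noticed promptly. */
    static int gmbus_wait_hw_status(struct drm_i915_private *dev_priv,
                                    u32 gmbus2_status, u32 gmbus4_irq_en)
    {
            int i;
            u32 gmbus2 = 0;
            DEFINE_WAIT(wait);

            /* Only ever enable a single interrupt source at a time. */
            I915_WRITE(GMBUS4, gmbus4_irq_en);

            for (i = 0; i < msecs_to_jiffies(50) + 1; i++) {
                    prepare_to_wait(&dev_priv->gmbus_wait_queue, &wait,
                                    TASK_UNINTERRUPTIBLE);
                    gmbus2 = I915_READ(GMBUS2);
                    if (gmbus2 & (GMBUS_SATOER | gmbus2_status))
                            break;
                    schedule_timeout(1);
            }
            finish_wait(&dev_priv->gmbus_wait_queue, &wait);

            I915_WRITE(GMBUS4, 0);          /* clear to avoid spurious irqs (v2) */

            if (gmbus2 & GMBUS_SATOER)
                    return -ENXIO;          /* slave NAKed the transfer */
            return (gmbus2 & gmbus2_status) ? 0 : -ETIMEDOUT;
    }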
-
Committed by Daniel Vetter
Only enables the interrupt and puts an irq handler into place, doesn't do anything yet. Unfortunately there's no gmbus interrupt support for gen2/3 (save for pnv, but there the irq is marked as "Test mode").

v2: Wire up the irq handler for vlv and gen4 properly.

v3: i915_enable_pipestat expects the mask bit, not the status bits ... and for added hilarity those are rather inconsistently named.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
Otherwise the new & shiny irq-driven gmbus and dp aux code won't work that well. Noticed since the dp aux code doesn't have an automatic fallback with a timeout (since the hw provides for that already).

v2: Simply move drm_irq_install before intel_modeset_gem_init, as suggested by Ben Widawsky.

v3: Now that interrupts are enabled before all connectors are fully set up, we might fall over serving a HPD interrupt while things are still being set up. Instead of jumping through massive hoops and complicating the code with a separate hpd irq enable step, simply block the hotplug work item from doing anything until things are in place.

v4: Actually, we can enable hotplug processing only after the fbdev is fully set up, since we call down into the fbdev from the hotplug work functions. So stick the hpd enabling right next to the poll helper initialization.

v5: We need to enable irqs before intel_modeset_init, since that function sets up the outputs.

v6: Fixup cleanup sequence, too.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
... together with all the other irq related resources in intel_irq_init. I've managed to oops in the notify_ring function on my ilk, presumably because of the powerctx setup call to i915_gpu_idle. Note that this is only a problem with the reordered irq setup sequence for irq-driven gmbus/dp aux.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
This is for legacy legacy stuff, and checking with the leftover pipe from the previous loop is probably not what we want. Since pipe == 2 after the loop ... Then we only assign a variable and do nothing with it.

Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
No need to have the exact same code twice.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 04 Dec 2012, 1 commit
-
-
Committed by Daniel Vetter
- __iomem where there is none (I love how we mix these things up).
- Use gfp_t instead of another plain type.
- Unconfuse one place about enum pipe vs enum transcoder - for the pch transcoder we actually use the pipe enum. Fixup the other cases where we assign the pipe to the cpu transcoder with explicit casts.
- Declare the mch_lock properly in a header.

There is still a decent mess in intel_bios.c about __iomem, but heck, this is x86 and we're allowed to do that.

Makes-sparse-happy: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Use a space after the cast consistently and fix up the newly-added cast in i915_irq.c to properly use __iomem.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 01 Dec 2012, 2 commits
-
-
Committed by Chris Wilson
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
We only need to read/write the south interrupt register if the corresponding bit is set in the north master interrupt register. Noticed while reading our interrupt handling code. The same optimization has already been applied on ivb in

commit 0e43406b
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed May 9 21:45:44 2012 +0100

    drm/i915: Simplify interrupt processing for IvyBridge

    We can take advantage that the PCH_IIR is a subordinate register to
    reduce one of the required IIR reads, and that we only need to clear
    interrupts handled to reduce the writes. And by simply tidying the
    code we can reduce the line count and hopefully make it more readable.

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
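The shape of the optimization, as a hedged sketch (DE_PCH_EVENT and SDEIIR are the real bit/register names on these platforms, but the handler name and structure here are illustrative):

    /* Only touch the south (PCH) interrupt registers when the north master
     * IIR actually reports a pending PCH event. */
    static void handle_pch_event(struct drm_device *dev, u32 de_iir)
    {
            struct drm_i915_private *dev_priv = dev->dev_private;
            u32 pch_iir;

            if (!(de_iir & DE_PCH_EVENT))
                    return;                         /* nothing pending south */

            pch_iir = I915_READ(SDEIIR);
            if (pch_iir) {
                    ibx_irq_handler(dev, pch_iir);  /* hypothetical helper */
                    I915_WRITE(SDEIIR, pch_iir);    /* clear what we handled */
            }
    }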
-
- 29 Nov 2012, 1 commit
-
-
Committed by Chris Wilson
Should be useful to know what the driver thought the other ring's seqno was when it last used a semaphore.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 12 Nov 2012, 2 commits
-
-
Committed by Jesse Barnes
This allows the power related code to run independently of the rest of the pipeline, extending the resume and init time improvements into userspace, which would otherwise have been blocked on the struct mutex if we were doing PCU communication.

v2: Also convert the locking for the rps sysfs interface.

Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> (v1)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
Pretty astonishing how far apart these two members landed ... Especially since I've already removed almost 200 lines in between.

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 26 Oct 2012, 2 commits
-
-
Committed by Paulo Zanoni
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Paulo Zanoni
Because the PIPECONF register is actually part of the CPU transcoder, not the CPU pipe. Ideally we would also rename PIPECONF to TRANSCONF to remind people that they should use the transcoder instead of the pipe, but let's keep it like this for now since most Gens still name it PIPECONF.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 18 Oct 2012, 1 commit
-
-
Committed by Daniel Vetter
Somehow this was left out in the refactoring that introduced the pch handlers. Avoids a hotplug_mask special case in the ilk_irq_handler. Noticed while hunting down the pch hotplug bits.

Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 09 Oct 2012, 1 commit
-
-
Committed by Chris Wilson
round_jiffies() aligns the wakeup time to the nearest second in order to batch wakeups and reduce system load, which is useful for unimportant coarse timers like our hangcheck.

v2: round_jiffies_relative() returns the relative jiffie value, whereas we need the absolute value for the timer.

Suggested-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
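A small sketch of re-arming the hangcheck timer this way (field and constant names are illustrative; note the v2 pitfall that round_jiffies_up_relative() returns a relative value, so rounding the absolute expiry is the simpler option):

    /* Align the hangcheck wakeup to a second boundary; it is coarse anyway. */
    static void queue_hangcheck(struct drm_i915_private *dev_priv)
    {
            mod_timer(&dev_priv->hangcheck_timer,
                      round_jiffies_up(jiffies + DRM_I915_HANGCHECK_JIFFIES));
    }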
-
- 04 Oct 2012, 1 commit
-
-
Committed by Daniel Vetter
... since finish_page_flip needs the vblank timestamp generated in drm_handle_vblank. Somehow all the gmch platforms get it right, but all the pch platform irq handlers get it wrong. Hooray for copy & pasting!

Currently this gets papered over by a gross hack in finish_page_flip. A second patch will remove that.

Note that without this, the new timestamp sanity checks in flip_test occasionally get tripped up, hence the cc: stable tag.

Cc: stable@vger.kernel.org
Reviewed-by: mario.kleiner@tuebingen.mpg.de
Tested-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
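The required ordering, sketched for pipe A of a pch-split platform (DE_PIPEA_VBLANK and DE_PLANEA_FLIP_DONE are the real bit names, but this handler is a simplified illustration):

    /* Handle the vblank *before* completing any page flip on the same pipe,
     * so the flip completion event carries a valid vblank timestamp. */
    static void ilk_pipe_a_events(struct drm_device *dev, u32 de_iir)
    {
            if (de_iir & DE_PIPEA_VBLANK)
                    drm_handle_vblank(dev, 0);      /* generates the timestamp */

            if (de_iir & DE_PLANEA_FLIP_DONE) {
                    intel_prepare_page_flip(dev, 0);
                    intel_finish_page_flip_plane(dev, 0);   /* consumes it */
            }
    }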
-
- 03 Oct 2012, 2 commits
-
-
Committed by David Howells
Convert #include "..." to #include <path/...> in drivers/gpu/.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
-
Committed by David Howells
Remove redundant DRM UAPI header #inclusions from drivers/gpu/.

Remove redundant #inclusions of core DRM UAPI headers (drm.h, drm_mode.h and drm_sarea.h). They are now #included via drmP.h and drm_crtc.h via a preceding patch.

Without this patch and the patch that makes the core headers include the UAPI headers, after the UAPI split the DRM C sources cannot find these UAPI headers, because the DRM code relies on specific -I flags to make #include "..." work on headers in include/drm/ - but that does not work after the UAPI split without adding more -I flags.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
-