- 07 May 2014, 1 commit
-
-
By Ville Syrjälä

Ignore the cache bits in PPAT and just set the snoop bit where appropriate. BDW WB is mapped to snooped access, while all other modes are mapped to non-snooped access. The hardware supposedly ignores everything except the snoop bit in the PPAT entries. Additionally the hardware actually enforces snooping for all page table accesses, and thus the snoop bit is ignored for PDEs.

v2: Rebased on top of the bdw resume fix to reload the ppat entries.
v3: Rebase on top of the i915_gem_gtt.h header extraction.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v1)
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Rafael Barbalho <rafael.barbalho@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
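A minimal sketch of the mapping described above, using a hypothetical helper and made-up bit values rather than the real BDW PPAT register layout: only the write-back mode results in a snooped entry, everything else maps to non-snooped, and no other cache bits are encoded since the hardware reportedly ignores them.

```c
#include <stdint.h>

/* Simplified stand-ins; the real driver uses GEN8_PPAT_* bit definitions. */
enum fake_cache_mode { MODE_UC, MODE_WT, MODE_WC, MODE_WB };
#define FAKE_PPAT_SNOOP (1u << 0)

/* Only write-back is mapped to a snooped access; the remaining cache bits
 * are left out entirely since the hardware is said to ignore them. */
static uint8_t fake_ppat_entry(enum fake_cache_mode mode)
{
	return mode == MODE_WB ? FAKE_PPAT_SNOOP : 0;
}
```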
-
- 06 May 2014, 6 commits
-
-
By Ville Syrjälä

We won't be calling intel_enable_primary_plane() or intel_disable_primary_plane() with the primary plane in the wrong state. So remove the useless DISPLAY_PLANE_ENABLE checks.

v2: Convert the checks to WARNs instead (Daniel, Paulo)

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Ville Syrjälä

On ILK when we disable a particular watermark level, we must maintain the actual watermark values for that level for some time (until the next vblank, possibly). Otherwise we risk underruns. In order to achieve that result we must merge the LP1+ watermarks a bit differently, since we must also merge levels that are to be disabled. We must also make sure we don't overflow the fields in the watermark registers in case the calculated watermarks come out too big to fit.

As early as possible we mark all computed watermark levels as disabled if they would exceed the register maximums. We make sure to leave the actual watermarks for such levels zeroed out. Then during merging, we take the maximum values for every level, regardless of whether they're disabled or not. That may seem a bit pointless, since at the moment all the watermark levels we merge should have their values zeroed if the level is already disabled. However, soon we will be dealing with intermediate watermarks that, in addition to the new watermark values, also contain the previous watermark values, and so levels that are disabled may no longer be zeroed out.

v2: Split the patch in two (Paulo)
    Use if() instead of & when merging ->enable (Paulo)

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Fix commit message as noted by Paulo.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
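A minimal sketch of the merge rule described above, with made-up types standing in for the driver's per-level watermark structures: values are always merged with max(), whether or not a level ends up enabled, and the merged level is only marked enabled if every input had it enabled.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the per-level watermark values. */
struct fake_wm_level {
	bool enable;
	uint32_t pri_val, spr_val, cur_val;
};

static uint32_t max_u32(uint32_t a, uint32_t b) { return a > b ? a : b; }

/* Merge one LP1+ level across pipes: values are merged even for disabled
 * levels so that a just-disabled level keeps a safe programmed value, but
 * the merged level is enabled only if all pipes had it enabled. */
static void fake_merge_wm_level(struct fake_wm_level *merged,
				const struct fake_wm_level *in, size_t n)
{
	merged->enable = true;
	merged->pri_val = merged->spr_val = merged->cur_val = 0;

	for (size_t i = 0; i < n; i++) {
		if (!in[i].enable)
			merged->enable = false;

		merged->pri_val = max_u32(merged->pri_val, in[i].pri_val);
		merged->spr_val = max_u32(merged->spr_val, in[i].spr_val);
		merged->cur_val = max_u32(merged->cur_val, in[i].cur_val);
	}
}
```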
-
By Ville Syrjälä

When we calculate the watermarks for a pipe, make sure we leave any level fully zeroed out if it would exceed any of the maximum values that fit in the registers. This will be important later when we also start to use disabled watermark levels during LP1+ merging.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
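A small sketch of that rule (the register limits here are made-up example fields, not the real ILK widths): if any value exceeds its maximum, the whole level is disabled and left zeroed rather than partially clamped.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct fake_wm_level {
	bool enable;
	uint32_t pri_val, spr_val, cur_val;
};

/* Example register limits only; the real maximums depend on the platform
 * and on which LP level is being programmed. */
struct fake_wm_maximums { uint32_t pri, spr, cur; };

static void fake_validate_wm_level(const struct fake_wm_maximums *max,
				   struct fake_wm_level *level)
{
	bool fits = level->pri_val <= max->pri &&
		    level->spr_val <= max->spr &&
		    level->cur_val <= max->cur;

	if (!fits) /* leave the level fully zeroed, not partially clamped */
		memset(level, 0, sizeof(*level));

	level->enable = fits;
}
```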
-
By Ville Syrjälä

Add trace points for observing the atomic pipe update mechanism.

v2: Rebased due to earlier changes
v3: Pass intel_crtc instead of drm_crtc (Daniel)
v4: Pass frame counter from the caller to evaded/end since the caller now always has that ready

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Sourab Gupta <sourabgupta@gmail.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Ville Syrjälä

Move the primary plane enable/disable to occur atomically with the sprite update that caused the primary plane visibility to change. FBC and IPS enable/disable is left to happen well before or after the primary plane change.

v2: Pass intel_crtc instead of drm_crtc (Daniel)

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Sourab Gupta <sourabgupta@gmail.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Ville Syrjälä

Add a mechanism by which we can evade the leading edge of vblank. This guarantees that no two sprite register writes will straddle the start of vblank, which means all the writes will be latched together in one atomic operation.

We do the vblank evade by checking the scanline counter, and if it's too close to the start of vblank (too close has been hardcoded to 100 usec for now), we will wait for the vblank start to pass. In order to eliminate random delays from the rest of the system, we operate with interrupts disabled, except when waiting for the vblank, obviously.

Note that we now go digging through pipe_to_crtc_mapping[] in the vblank interrupt handler, which is a bit dangerous since we set up interrupts before the crtcs. However, in this case, since it's the vblank interrupt, we don't actually unmask it until some piece of code requests it.

v2: preempt_check_resched() calls after local_irq_enable() (Jesse)
    Hook up the vblank irq stuff on BDW as well
v3: Pass intel_crtc instead of drm_crtc (Daniel)
    Warn if crtc.mutex isn't locked (Daniel)
    Add an explicit compiler barrier and document the barriers (Daniel)
    Note the irq vs. modeset setup madness in the commit message (Daniel)
v4: Use prepare_to_wait() & co. directly and eliminate vbl_received
v5: Refactor intel_pipe_handle_vblank() vs. drm_handle_vblank() (Chris)
    Check for min/max scanline <= 0 (Chris)
    Don't call intel_pipe_update_end() if start failed totally (Chris)
    Check that the vblank counters match on both sides of the critical section (Chris)
v6: Fix atomic update for interlaced modes
v7: Reorder code for better readability (Chris)
v8: Drop preempt_check_resched(). It's not available to modules anymore and isn't even needed unless we ourselves cause a wakeup needing reschedule while interrupts are off

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Sourab Gupta <sourabgupta@gmail.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
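A heavily simplified sketch of the evade idea. The helpers read_scanline(), wait_for_vblank_start() and the irq toggles are hypothetical stand-ins for the driver's real scanline read, vblank wait and local interrupt handling; min/max are assumed to be the danger window in scanlines derived by the caller from the ~100 usec budget.

```c
/* Hypothetical platform hooks; the real driver reads the pipe's scanline
 * register and sleeps on the vblank interrupt. */
extern int read_scanline(int pipe);
extern void wait_for_vblank_start(int pipe);
extern void local_irq_off(void);
extern void local_irq_on(void);

/* min/max describe the "danger window" in scanlines just before the start
 * of vblank; if we are inside it, let the vblank pass before writing. */
static void fake_pipe_update_start(int pipe, int min, int max)
{
	if (min <= 0 || max <= 0)
		return;

	local_irq_off();

	int scanline = read_scanline(pipe);
	if (scanline >= min && scanline <= max) {
		/* Too close to vblank start: re-enable interrupts, wait for
		 * vblank, then re-disable for the critical section. */
		local_irq_on();
		wait_for_vblank_start(pipe);
		local_irq_off();
	}
	/* The caller now performs the sprite register writes and then calls
	 * the matching update_end() which re-enables interrupts. */
}
```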
-
- 05 May 2014, 33 commits
-
-
By Ben Widawsky

All the rest of the code to enable this is in my branch. Without my branch, hitting > 32b offsets is impossible. The code has always "supported" 64b, but it's never actually been run or tested. This change doesn't actually fix anything. [1] I am not sure why X won't work yet. I do not get hangs or obvious errors.

There are 3 fixes grouped together here. First is to remove the hardcoded 0 for the upper dword of the relocation. The next fix is to use a 64b value for target_offset. The final fix is to not directly apply target_offset to reloc->delta. reloc->delta is part of the ABI, and so we cannot change it. As it stands, 32b is enough to represent everything we're interested in representing anyway. The main problem is, we cannot add greater than 32b values to it directly.

[1] Almost all of intel-gpu-tools is not yet ready to test 64b relocations. There are a few places that expect 32b values for offsets and these all won't work.

Cc: Rafael Barbalho <rafael.barbalho@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
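A rough sketch of the first fix described above, using simplified stand-in structures rather than the real drm_i915_gem_relocation_entry handling: instead of hardcoding 0 for the upper dword, the 64-bit target offset plus the 32-bit delta is computed in 64 bits and only then split into lower and upper halves.

```c
#include <stdint.h>

/* Simplified stand-in for a relocation entry; delta stays 32-bit because
 * it is part of the userspace ABI and cannot grow. */
struct fake_reloc {
	uint32_t delta;
	uint32_t offset; /* where in the batch to write the address */
};

static void fake_write_reloc(uint32_t *batch, const struct fake_reloc *reloc,
			     uint64_t target_offset)
{
	/* The addition happens in 64 bits; only the result is split. */
	uint64_t value = target_offset + reloc->delta;
	uint32_t idx = reloc->offset / sizeof(uint32_t);

	batch[idx + 0] = (uint32_t)value;         /* lower dword */
	batch[idx + 1] = (uint32_t)(value >> 32); /* upper dword, no longer 0 */
}
```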
-
By Ben Widawsky

Previously, our code only had a 32b offset value for where the batchbuffer starts. With full PPGTT, and a 64b canonical GPU address space, that is an insufficient value. The code to expand is pretty straightforward, and only one platform needs to do anything with the extra bits.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rafael Barbalho <rafael.barbalho@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

SDVO is used both by crtcs using the i9xx_ functions and by those using the ironlake_ functions. For both cases there is nothing between the encoder->mode_set and the encoder->pre_enable calls that touches the hardware. The vlv_ functions are different since they enable the pll before the ->pre_enable hook, but SDVO isn't supported on vlv platforms, so this doesn't matter.

We've also already cleaned up all the sdvo state computation logic; all relevant parts are already in the ->compute_config hook. So we can just get rid of the ->mode_set hook by converting it to a ->pre_enable hook.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

We only set a few bits in the ADPA register, which we then read back in the enable/disable hooks. So we can just move that bit of state computation code to the place where we need it, since setting these bits without enabling the CRT encoder has no effect. The only exceptions are the hotplug bits, since they affect the hotplug detection logic, but we already set those in the ->reset function and then never touch them.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

Currently for the i9xx crtc hooks there's nothing between the call to encoder->mode_set and encoder->pre_enable which touches the hardware. Therefore, since tv is only used on gen3/4, we can just move the hook. Yay for easy cases!

The only other important thing to check is that the new ->pre_enable hook is idempotent wrt the sw state, since now it can be called multiple times (due to DPMS). After a bit of refactoring this is now easy to check: it only reads crtc->config and computes derived state but otherwise leaves it as-is, so we're good.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

The pipe and plane _are_ disabled when we call this. So replace it all with the corresponding assert (as self-documenting code) and rip out all the lore. Checking for a disabled plane would require us to export those macros from intel_display.c, but if the pipe is off the plane isn't working either. So this single check is good enough.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

We only support TV-out on gen3/4 mobile platforms, and i915gm is the only one that matches.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

intel_tv_mode_set is still too big.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

intel_tv_mode_set is just too big.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

Currently for the i9xx crtc hooks there's nothing between the call to encoder->mode_set and encoder->pre_enable which touches the hardware. Therefore, since dvo is only used on gen2, we can just move the hook. Yay for easy cases!

The only other important thing to check is that the new ->pre_enable hook is idempotent wrt the sw state, since now it can be called multiple times (due to DPMS). It only reads crtc->config but otherwise leaves it as-is, so we're good.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

For a bunch of reasons we want to move away from the ->mode_set callbacks: all hw state setup needs to move into the ->enable hooks (so that DPMS can do runtime pm), and all the configuration setup needs to move into the compute_config functions. To start with this, make the encoder->mode_set callback optional.

Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Ville Syrjälä

The BIOS can enable a pipe but leave the primary plane disabled. This conflicts with our current idea of primary_enabled. Read the actual hardware plane state and set primary_enabled appropriately.

We currently assume that primary_enabled is always true when we're about to disable a crtc. That needs to change now, as the plane may not be enabled. So replace the relevant WARNs with early returns in intel_{enable,disable}_primary_hw_plane().

Fixes the following warning:

[ 3.831602] WARNING: CPU: 0 PID: 1112 at linux/drivers/gpu/drm/i915/intel_display.c:1918 intel_disable_primary_hw_plane+0xe4/0xf0 [i915]()

which got introduced here by me:

commit e9e39655c0c30cddc3f8c09a757678a24dd36737
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date:   Mon Apr 28 15:53:25 2014 +0300

    drm/i915: Remove useless checks from primary enable/disable

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
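A minimal sketch of the takeover logic, with a made-up register accessor and bit name standing in for the driver's DSPCNTR read: primary_enabled is seeded from the hardware enable bit rather than assumed true.

```c
#include <stdbool.h>
#include <stdint.h>

#define FAKE_PLANE_ENABLE (1u << 31) /* stand-in for the plane enable bit */

/* Hypothetical register read; the driver reads the plane control register. */
extern uint32_t fake_read_plane_ctl(int plane);

struct fake_crtc {
	int plane;
	bool primary_enabled;
};

/* Called while taking over the BIOS-programmed state: trust the hardware
 * bit instead of assuming the primary plane is on whenever the pipe is. */
static void fake_readout_plane_state(struct fake_crtc *crtc)
{
	crtc->primary_enabled =
		fake_read_plane_ctl(crtc->plane) & FAKE_PLANE_ENABLE;
}
```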
-
By Ben Widawsky

Add_request has always contained both the semaphore mailbox updates as well as the breadcrumb writes. Since the semaphore signal is the one which actually knows about the number of dwords it needs to emit to the ring, we move the ring_begin to that function. This allows us to remove the hideously shared #define.

On a related note, gen8 will use a different number of dwords for semaphores, but not for add request.

v2: Make the number of dwords an explicit part of signalling (via function argument). (Chris)
v3: very slight comment change

Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
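A small sketch of the interface shape described above (all names and sizes here are hypothetical, not the driver's actual hooks): the caller passes the number of breadcrumb dwords it still needs, and the signalling code adds its own per-ring mailbox dwords before doing the single ring_begin.

```c
struct fake_ring;

/* Hypothetical helpers standing in for the ring begin/emit machinery. */
extern int fake_ring_begin(struct fake_ring *ring, unsigned int num_dwords);
extern void fake_emit_mbox_update(struct fake_ring *ring, int other_ring);

#define FAKE_NUM_RINGS 3
#define FAKE_MBOX_UPDATE_DWORDS 4 /* example size of one mailbox update */

/* The signal hook knows how many dwords its mailbox updates need, so it
 * owns the ring_begin and simply reserves extra space for the caller's
 * breadcrumb dwords. */
static int fake_semaphore_signal(struct fake_ring *ring, int ring_id,
				 unsigned int num_breadcrumb_dwords)
{
	unsigned int total = num_breadcrumb_dwords +
		(FAKE_NUM_RINGS - 1) * FAKE_MBOX_UPDATE_DWORDS;
	int ret = fake_ring_begin(ring, total);
	if (ret)
		return ret;

	for (int other = 0; other < FAKE_NUM_RINGS; other++)
		if (other != ring_id)
			fake_emit_mbox_update(ring, other);

	return 0; /* caller now emits its breadcrumb dwords */
}
```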
-
By Ben Widawsky

This abstraction again is in preparation for gen8. Gen8 will bring new semantics for doing this operation. While here, make the writes of MI_NOOPs explicit for non-existent rings. This should have been implicit before.

NOTE: This is going to be removed in a few patches.

Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Ben Widawsky

This will be helpful in abstracting some of the code in preparation for gen8 semaphores.

v2: Move mbox stuff to a separate struct
v3: Rebased over VCS2 work

Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v1)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Imre Deak

During the initial power well enabling on the driver init/resume path we can avoid initializing parts of the HW/SW state that will be initialized anyway by the subsequent init/resume code. For some steps, like HPD initialization, this redundancy is not only an overhead but an actual problem, since they can't be run this early in the overall init sequence. Add a flag marking the init phase and, based on that, skip reinitializing state that is not strictly necessary.

This is also needed by the upcoming HPD init restructuring by Thierry and Daniel.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Chris Wilson

In

commit 691e6415
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Wed Apr 9 09:07:36 2014 +0100

    drm/i915: Always use kref tracking for all contexts.

we populated fake contexts on all platforms. These were identical to the full hardware context tracking structs, except for the ctx->obj used to store the hardware state. However, there remained one place where we assumed that if a context existed, it would have an object associated with it.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77717
Testcase: igt/drv_suspend/debugfs-reader
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Ville Syrjälä

Add a new function intel_get_crtc_scanline() that returns the current scanline counter for the crtc.

v2: Rebase after the vblank timestamp changes. Use the intel_ prefix instead of i915_ as is more customary for display related functions. Include DRM_SCANOUTPOS_INVBL in the return value even w/o adjustments, for a bit of extra consistency.
v3: Change the implementation to be based on DSL on all gens, since that's enough for the needs of atomic updates, and it will avoid complicating the scanout position calculations for the vblank timestamps
v4: Don't break scanline wraparound for interlaced modes

Reviewed-by: Sourab Gupta <sourabgupta@gmail.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
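A rough sketch of the idea, with a made-up register accessor standing in for the DSL read; the fixed +1 offset and the mask are illustrative only, since the driver's actual per-generation adjustments differ. The raw counter is offset and wrapped modulo vtotal so the result stays in [0, vtotal).

```c
#include <stdint.h>

/* Hypothetical accessor; the driver reads the pipe's DSL (display scanline)
 * register. */
extern uint32_t fake_read_dsl(int pipe);

struct fake_mode { int vtotal; };

/* Sketch only: the hardware counter increments at hsync, so a small fixed
 * offset plus a modulo keeps the value in range and makes 0 correspond to
 * the first active line. */
static int fake_get_crtc_scanline(int pipe, const struct fake_mode *mode)
{
	int scanline = (int)(fake_read_dsl(pipe) & 0x1fff);

	return (scanline + 1) % mode->vtotal;
}
```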
-
By Ville Syrjälä

Seems I've been a bit dense with regards to the start of vblank vs. the scanline counter / pixel counter. After staring at the pixel counter on gen4 I came to the conclusion that the start of vblank interrupt and the scanline counter increment happen at the same time. The scanline counter increment is documented to occur at the start of hsync, which means that the start of vblank interrupt must also trigger there. Looking at the pixel counter value when the scanline wraps from vtotal-1 to 0 confirms that, as the pixel counter at that point reads hsync_start. This also clarifies why we need the +1 adjustment to the scanline counter. The counter actually starts counting from vtotal-1 on the first active line.

I also confirmed that the frame start interrupt happens ~1 line after the start of vblank, but the frame start occurs at hblank_start instead. We only use the frame start interrupt on gen2, where the start of vblank interrupt isn't available. The only important thing to note here is that frame start occurs after vblank start, so we don't have to play any additional tricks to fix up the scanline counter.

The other thing to note is the fact that the pixel counter on gen3-4 starts counting from the start of horizontal active on the first active line. That means that when we get the start of vblank interrupt, the pixel counter reads (htotal*(vblank_start-1)+hsync_start). Since we consider vblank to start at (htotal*vblank_start), we need to add a constant (htotal-hsync_start) offset to the pixel counter, or else we risk misdetecting whether we're in vblank or not.

I talked a bit with Art Runyan about these topics, and he confirmed my findings, and that the same rules should hold for platforms which don't have the pixel counter. That's good, since without the pixel counter it's rather difficult to verify the timings to this accuracy.

So the conclusion is that we can throw away all the ISR tricks I added, and just always increment the scanline counter by one.

Reviewed-by: Sourab Gupta <sourabgupta@gmail.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
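A tiny worked check of the offset described above, using made-up example timings rather than any particular mode: at the start-of-vblank interrupt the pixel counter reads htotal*(vblank_start-1) + hsync_start, so adding (htotal - hsync_start) lands exactly on htotal*vblank_start, the position treated as the start of vblank.

```c
#include <assert.h>

int main(void)
{
	/* Made-up example timings, only to illustrate the arithmetic. */
	const int htotal = 2200, hsync_start = 2008, vblank_start = 1080;

	int pixel_counter_at_irq = htotal * (vblank_start - 1) + hsync_start;
	int adjusted = pixel_counter_at_irq + (htotal - hsync_start);

	assert(adjusted == htotal * vblank_start);
	return 0;
}
```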
-
By Ben Widawsky

It seems we need this at least for the current platforms we have, but probably not later. In any event, it shouldn't cause too much harm, as we do the same thing on several other platforms.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Ben Widawsky

The same register exists for querying and programming eDRAM, AKA eLLC. So we can simply use it. For now, use all the same defaults as we had for Haswell, since, like Haswell, I have no further details. I do not actually have a part with eDRAM, so I cannot test this.

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Ben Widawsky

I don't have any insight on what parts can do what. The docs do seem to suggest WT caching works in at least the same manner as it does on Haswell.

The addr = 0 is to shut up GCC:

drivers/gpu/drm/i915/i915_gem_gtt.c:80:7: warning: 'addr' may be used uninitialized in this function [-Wmaybe-uninitialized]

Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Imre Deak

On BDW we don't enable RC6 at the moment, but this isn't reflected in the (sanitized) i915.enable_rc6 option. So make enable_rc6 report correctly that RC6 is disabled, which will also effectively disable RPM on BDW (since RPM depends on RC6).

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77565
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
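A minimal sketch of what "sanitizing" the option means here; the helper name, the platform predicate, and the default value are all assumptions for illustration, not the driver's actual sanitize function. The user-visible value is forced to 0 on platforms where RC6 isn't supported yet, so anything that keys off it, such as runtime PM, sees the truth.

```c
#include <stdbool.h>

/* Hypothetical platform predicate; the driver checks the device info. */
extern bool platform_supports_rc6(void);

/* Returns the sanitized value actually used (and reported) by the driver:
 * a negative module parameter means "per-platform default", and platforms
 * without working RC6 are forced to 0 so dependent features stay off. */
static int fake_sanitize_enable_rc6(int enable_rc6_param)
{
	if (!platform_supports_rc6())
		return 0;

	if (enable_rc6_param >= 0)
		return enable_rc6_param;

	return 1; /* example default: plain RC6 */
}
```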
-
By Ville Syrjälä

assert_plane_enabled() is now triggering during FDI link training because we no longer enable planes that early. This problem got introduced in:

commit a5c4d7bc
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date:   Fri Mar 7 18:32:13 2014 +0200

    drm/i915: Disable/enable planes as the first/last thing during modeset on ILK+

Just drop the assert, since we shouldn't need planes for link training.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Squash in fixup for now unused plane local variable, reported by the 0-day tester.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Jan Moskyto Matejka

This reverts commit 60f2b4af. The same warning has been fixed in e5081a53, and these two commits got merged in 74e99a84de2d0980320612db8015ba606af42114, which caused another warning.

Simply put, the reverted commit cast the pointer difference to unsigned long and the other commit changed the output type from long to ptrdiff_t. The other commit fixes the original warning in a better way, so I'm reverting this commit now.

Signed-off-by: Jan Moskyto Matejka <mq@suse.cz>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Ville Syrjälä

We have a struct_mutex deadlock during driver init on ILK:

[ 54.320273] =============================================
[ 54.320371] [ INFO: possible recursive locking detected ]
[ 54.320471] 3.15.0-rc2-flip_race+ #2 Not tainted
[ 54.320567] ---------------------------------------------
[ 54.320665] modprobe/2178 is trying to acquire lock:
[ 54.320762]  (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa0568b05>] intel_enable_gt_powersave+0xa5/0x9d0 [i915]
[ 54.321111]
[ 54.321111] but task is already holding lock:
[ 54.321250]  (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa05b4c2e>] intel_modeset_init_hw+0x3e/0x60 [i915]
[ 54.321583]
[ 54.321583] other info that might help us debug this:
[ 54.321724]  Possible unsafe locking scenario:
[ 54.321724]
[ 54.321863]        CPU0
[ 54.321954]        ----
[ 54.322046]   lock(&dev->struct_mutex);
[ 54.322221]   lock(&dev->struct_mutex);
[ 54.322397]
[ 54.322397]  *** DEADLOCK ***
[ 54.322397]
[ 54.322638]  May be due to missing lock nesting notation
[ 54.322638]
[ 54.322781] 4 locks held by modprobe/2178:
[ 54.322875]  #0:  (&dev->mutex){......}, at: [<ffffffff813592eb>] __driver_attach+0x5b/0xb0
[ 54.323230]  #1:  (&dev->mutex){......}, at: [<ffffffff813592f9>] __driver_attach+0x69/0xb0
[ 54.323582]  #2:  (drm_global_mutex){+.+.+.}, at: [<ffffffffa04e1e0d>] drm_dev_register+0x2d/0x120 [drm]
[ 54.323945]  #3:  (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa05b4c2e>] intel_modeset_init_hw+0x3e/0x60 [i915]

This regression got introduced in:

commit 586d5270b60dc1f35cc3ca982d403765bad77965
Author: Imre Deak <imre.deak@intel.com>
Date:   Mon Apr 14 20:24:28 2014 +0300

    drm/i915: move getting struct_mutex lower in the callstack during GPU reset

Fix the problem by not taking struct_mutex around intel_enable_gt_powersave() in intel_modeset_init_hw(), since intel_enable_gt_powersave() now grabs the mutex itself.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
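A minimal sketch of the fix's shape, with hypothetical function names mirroring the ones in the splat rather than the actual driver code: once the callee takes struct_mutex itself, the init path must no longer wrap the call in that mutex, or a non-recursive lock self-deadlocks exactly as lockdep reported.

```c
#include <pthread.h>

/* Stand-ins for dev->struct_mutex and the two functions in the splat. */
static pthread_mutex_t struct_mutex = PTHREAD_MUTEX_INITIALIZER;

static void fake_enable_gt_powersave(void)
{
	/* After the refactor this function grabs the mutex itself... */
	pthread_mutex_lock(&struct_mutex);
	/* ... program GT power save state ... */
	pthread_mutex_unlock(&struct_mutex);
}

static void fake_modeset_init_hw(void)
{
	/* ...so the caller must not hold it around the call. Taking it
	 * here, as the old code did, would deadlock on this plain mutex. */
	fake_enable_gt_powersave();

	pthread_mutex_lock(&struct_mutex);
	/* ... other init that still genuinely needs struct_mutex ... */
	pthread_mutex_unlock(&struct_mutex);
}
```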
-
By Imre Deak

In recent dmesg logs reported for unrelated issues I noticed some power domain WARNs caused by the following. The workaround

commit ce352550
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date:   Fri Sep 20 10:14:23 2013 +0300

    drm/i915: Fix unclaimed register access due to delayed VGA memory disable

and the following fixup of it

commit a1485320
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date:   Mon Sep 16 17:38:34 2013 +0300

    drm/i915: Move power well init earlier during driver load

were partially reverted by

commit 7f16e5c1
Merge: 9d1cb914 5e01dc7b
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date:   Mon Nov 4 16:28:47 2013 +0100

    Merge tag 'v3.12' into drm-intel-next

but the power domain put calls were kept on the error path. I think for now we can keep things as-is (not reintroduce the w/a) and just fix the error path, since

- nobody complained about seeing this issue
- according to Ville someone is reworking the VGA arbitration scheme at the moment, and when that's ready we have to rethink this part anyway

So fix this by just removing the put calls from the error path as well.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Daniel Vetter

Ville noticed that we have this nice kerneldoc but it's not integrated anywhere. Fix this asap!

Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Brad Volkin <bradley.d.volkin@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Chris Wilson

A common issue we have is that retiring requests causes recursion through GTT manipulation or page table manipulation, which we can only handle at very specific points. However, to maintain internal consistency (enforced through our sanity checks on write_domain at various points in the GEM object lifecycle), we do need to retire the object prior to marking it with a new write_domain, and also clear the write_domain for the implicit flush following a batch.

Note that this then allows the unbound objects to still be on the active lists, and so care must be taken when removing objects from unbound lists (similar to the caveats we face processing the bound lists).

v2: Fix i915_gem_shrink_all() to handle updated object lifetime rules, by refactoring it to call into __i915_gem_shrink().
v3: Missed an object-retire prior to changing cache domains in i915_gem_object_set_cache_level()
v4: Rebase

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Imre Deak

I've seen latencies of up to 15 msec, so increase the timeout to 20 msec.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Imre Deak

This will be needed by the VLV runtime PM helpers too, so factor it out. Also add a safety check for the case where the previous force-off is still pending, since I'm not sure if the Punit can handle a new setting while the previous one hasn't settled yet.

v2:
- unchanged
v3:
- add a note to the commit message about the safety check (Ville)

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Imre Deak

When enabling runtime PM on VLV, GT power save enabling becomes relatively frequent, so optimize it a bit.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
By Imre Deak

During runtime suspend there can be a last pending rps.work, so make sure it's canceled. Note that in the runtime suspend callback we can't get any RPS interrupts, since it's called only after the GPU goes idle and we set the minimum RPS frequency. The next possibility for an RPS interrupt is only after getting an RPM ref (for example because of a new GPU command) and calling the RPM resume callback.

v2:
- patch introduced in v2 of the patchset
v3:
- Change the order of canceling the rps.work and disabling interrupts to avoid the race between interrupt disabling and the rps.work. Race spotted by Ville.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-