- 20 Nov 2014, 19 commits
-
-
Committed by Zhe Wang

For MMIO registers which are shadowed, force wake is not needed to write to these registers.

v2: Rebase on top of nightly (Damien)
v3: Rebase on top of the "Gen9 multiple-engine forcewake" changes
v4: (Mika, Bob, done by Damien) - Reorder the shadowed registers by popularity

Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Zhe Wang <zhe1.wang@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
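A minimal sketch of the idea, with hypothetical helper names and placeholder register offsets (the actual shadowed-register list and write vfuncs live in intel_uncore.c):

```c
/*
 * Sketch only: check a small table of shadowed register offsets before
 * deciding whether a write needs forcewake.  The offsets, helper names
 * and forcewake get/put calls are illustrative placeholders, not the
 * exact upstream implementation.
 */
static const u32 gen9_shadowed_regs[] = {
	0x2030,		/* e.g. a ring tail register */
	0xa188,		/* e.g. RC video frequency   */
	0xa278,		/* e.g. RPNSWREQ             */
};

static bool is_gen9_shadowed(u32 reg)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(gen9_shadowed_regs); i++)
		if (reg == gen9_shadowed_regs[i])
			return true;

	return false;
}

static void gen9_write32(struct drm_i915_private *dev_priv, u32 reg, u32 val)
{
	bool need_fw = !is_gen9_shadowed(reg);

	/* Shadowed registers can be written without waking the GT. */
	if (need_fw)
		gen9_force_wake_get(dev_priv, reg);

	__raw_i915_write32(dev_priv, reg, val);

	if (need_fw)
		gen9_force_wake_put(dev_priv, reg);
}
```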
-
Committed by Zhe Wang

Enable multi-engine forcewake for Gen9.

v2: (Damien) - Rebase on top of nightly - Move the register range definitions to intel_uncore.c - Whitespace fixes
v3: (Addressing Mika's comment, done by Damien) - Use REG_RANGE() (introduced after the patch was written) - Add a SKL_NEEDS_FORCE_WAKE() macro that gets rid of a useless comparison to FORCEWAKE (reg 0xa18c is not used on SKL)
v4: (Damien) - Use the newly introduced ASSIGN_READ/WRITE_MMIO_VFUNCS() macros

Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Zhe Wang <zhe1.wang@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

Trying to read the status of the power wells right after taking forcewake for the other register reads makes little sense. Most of the time the power wells will still be up due to the recent forcewake. Instead do the power well status read first, and only then read the register needing forcewake. This way the reported power well status can actually reflect what's going on in the system.

Cc: Deepak S <deepak.s@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

Let's just throw in the towel on this one and take the cheap way out. Based on a patch from Chris Wilson, but checking for a different bit. Chris' patch checked for an even bank layout, this one here for a magic bit. Given the evidence we've gathered (not much) both work, I think, but checking for the magic bit might be more accurate. Anyway, works on my gm45 here. For paranoia, restrict to gen4 (and mobile), since we've only ever seen this on gm45 and i965gm. Also add some debugfs output so that we can skip the tiled swapping tests properly in these cases.

v2: Clean up the quirk'ed pin count in free_object to avoid upsetting the WARN_ON. Spotted by Chris.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=28813
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=45092
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Tom O'Rourke

In __gen6_update_ring_freq, use the full range of possible GPU frequencies from max_freq to min_freq. The actual GPU frequency could be outside the range from max_freq_softlimit to min_freq_softlimit due to power/thermal constraints.

Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
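In rough terms the change amounts to walking the hardware limits rather than the soft limits when building the ring/IA frequency table; a hedged sketch, with field names treated as illustrative:

```c
/*
 * Sketch: iterate over every GPU frequency the hardware can reach, not
 * just the currently allowed soft-limit window, when programming the
 * ring frequency table.  Field names are illustrative.
 */
for (gpu_freq = dev_priv->rps.max_freq;
     gpu_freq >= dev_priv->rps.min_freq;
     gpu_freq--) {
	/* ...compute and program the matching ring/IA frequency... */
}
```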
-
Committed by Tom O'Rourke

In gen8_enable_rps, change the initial RPS setting to the min_freq_softlimit (same as gen6_enable_rps).

Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Tom O'Rourke

Set the min_freq_softlimit to max(RPe, 450MHz). Setting a floor can ensure a minimum experience level. The 450MHz value came from a power and performance study of various types of workloads (3D, Media, GPGPU, idle, etc).

v2: rebased

Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
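As a hedged illustration only (the 50 MHz frequency unit and the field names are assumptions, not taken from the patch):

```c
/*
 * Illustrative only: clamp the minimum soft limit to whichever is
 * higher, the efficient frequency RPe or a 450 MHz floor.  Assumes RPS
 * frequencies are stored in 50 MHz units; names are illustrative.
 */
#define GT_FREQUENCY_MULTIPLIER 50	/* MHz per RPS frequency unit (assumed) */

dev_priv->rps.min_freq_softlimit =
	max(dev_priv->rps.efficient_freq,
	    450 / GT_FREQUENCY_MULTIPLIER);
```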
-
Committed by Tom O'Rourke

Added gen6_init_rps_frequencies() to initialize the RPS frequency values. This function replaces parse_rp_state_cap(). In addition to reading RPn, RP0, and RP1 from the RP_STATE_CAP register, the new function reads the efficient frequency (aka RPe) from pcode for Haswell and Broadwell and sets the turbo softlimits. The turbo minimum frequency softlimit is set to RPe for Haswell and Broadwell and to RPn otherwise.

For RPe, the efficiency is based on the frequency/power ratio (MHz/W); this considers GT power and not package power. The efficient frequency is the highest frequency for which the frequency/power ratio is within some threshold of the highest frequency/power ratio. A fixed decrease in frequency results in a smaller decrease in power at frequencies below RPe than at frequencies above RPe.

v2: Following suggestions from Chris Wilson and Daniel Vetter to extend and rename parse_rp_state_cap and to open-code a poorly named function.

Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Remove unused variables.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
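The RPe definition above can be written out explicitly. The following is purely illustrative pseudocode of that definition (not kernel code; power_of() and the threshold are hypothetical, and the driver itself obtains RPe from the pcode mailbox rather than computing it):

```c
/* Hypothetical GT power model in watts at a given frequency (MHz). */
extern double power_of(int freq_mhz);

/*
 * Illustration of the RPe definition: the highest frequency whose MHz/W
 * efficiency is still within `threshold` of the best efficiency seen
 * anywhere in the range.
 */
static int find_rpe(int min_freq, int max_freq)
{
	int f, rpe = min_freq;
	double best_ratio = 0.0;
	const double threshold = 0.9;	/* assumed margin */

	for (f = min_freq; f <= max_freq; f++) {
		double ratio = (double)f / power_of(f);

		if (ratio > best_ratio)
			best_ratio = ratio;
	}

	for (f = min_freq; f <= max_freq; f++)
		if ((double)f / power_of(f) >= threshold * best_ratio)
			rpe = f;	/* keep the highest qualifying frequency */

	return rpe;
}
```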
-
Committed by Daniel Vetter

Found one more! With this we can clear up the ggtt init code a bit, yay!

Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Daniel Vetter

With this all the UMS nonsense around GEM setup/teardown has disappeared, yay!

Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Daniel Vetter

Again, this just complicates the GEM init functions and makes a general mess out of everything. Good riddance!

v2: In my enthusiasm to start removing dri1/ums crud I went overboard a bit and killed parts of hangcheck. Resurrect it.

Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Daniel Vetter

We've killed UMS support by now, so it's time to reap the benefits. This one here is getting in the way of doing some ring init cleanup.

Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Daniel Vetter

KMS always initializes; this was only a valid check when userspace was still in control of the kernel driver.

v2: Comment that we outright reject all dri1/ums params.

Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

Whether we'll reject them or no-op doesn't really matter ...

Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Chris Wilson

With the deprecation of UMS, and by association DRI1, we have a tough choice when updating the ring access routines. We either rewrite the DRI1 routines blindly without testing (so likely to be broken) or take the liberty of declaring them no longer supported and remove them entirely. This takes the latter approach.

v2: Also remove the DRI1 sarea updates

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Fix rebase conflicts.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Thomas Daniel

Same as with the context, pinning to GGTT regardless is harmful (it badly fragments the GGTT and can even exhaust it). Unfortunately, this case is also more complex than the previous one because we need to map and access the ringbuffer in several places along the execbuffer path (and we cannot make do by leaving the default ringbuffer pinned, as before). Also, the context object itself contains a pointer to the ringbuffer address that we have to keep updated if we are going to allow the ringbuffer to move around.

v2: Same as with the context pinning, we cannot really do it during an interrupt. Also, pin the default ringbuffer objects regardless (makes error capture a lot easier).
v3: Rebased. Take a pin reference of the ringbuffer for each item in the execlist request queue because the hardware may still be using the ringbuffer after the MI_USER_INTERRUPT to notify the seqno update is executed. The ringbuffer must remain pinned until the context save is complete. No longer pin and unpin the ringbuffer in populate_lr_context() - this transient address is meaningless and the pinning can cause a sleep while atomic.
v4: Moved ringbuffer pin and unpin into the lr_context_pin functions. Downgraded pinning check BUG_ONs to WARN_ONs.
v5: Reinstated WARN_ONs for unexpected execlist states. Removed unused variable.

Issue: VIZ-4277
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Oscar Mateo

Up until now, we have pinned every logical ring context backing object during creation, and left it pinned until destruction. This made my life easier, but it's a harmful thing to do, because we cause fragmentation of the GGTT (and, eventually, we would run out of space). This patch makes the pinning on-demand: the backing objects of the two contexts that are written to the ELSP are pinned right before submission and unpinned once the hardware is done with them. The only context that is still pinned regardless is the global default one, so that the HWS can still be accessed in the same way (ring->status_page).

v2: In the early version of this patch, we were pinning the context as we put it into the ELSP: on the one hand, this is very efficient because only a maximum of two contexts are pinned at any given time, but on the other hand, we cannot really pin in interrupt time :(
v3: Use a mutex rather than atomic_t to protect the pin count to avoid races. Do not unpin the default context in free_request.
v4: Break out pin and unpin into functions. Fix style problems reported by checkpatch.
v5: Remove unpin_lock as all pinning and unpinning is done with the struct mutex already locked. Add WARN_ONs to make sure this is the case in future.

Issue: VIZ-4277
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
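A hedged sketch of the pin/unpin shape described above; function and field names follow the driver's general naming of that era but should be read as illustrative, not as the literal patch:

```c
/*
 * Sketch only: pin the context backing object the first time its pin
 * count goes above zero, unpin when it drops back to zero, and insist
 * on struct_mutex being held for both paths.
 */
static int intel_lr_context_pin(struct intel_engine_cs *ring,
				struct intel_context *ctx)
{
	struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;
	int ret = 0;

	WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));

	if (ctx->engine[ring->id].pin_count++ == 0)
		ret = i915_gem_obj_ggtt_pin(ctx_obj, GEN8_LR_CONTEXT_ALIGN, 0);

	if (ret)
		ctx->engine[ring->id].pin_count--;

	return ret;
}

static void intel_lr_context_unpin(struct intel_engine_cs *ring,
				   struct intel_context *ctx)
{
	struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;

	WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));

	if (--ctx->engine[ring->id].pin_count == 0)
		i915_gem_object_ggtt_unpin(ctx_obj);
}
```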
-
Committed by Thomas Daniel

No longer create a work item to clean each execlist queue item. Instead, move retired execlist requests to a queue and clean up the items during retire_requests.

v2: Fix legacy ring path broken during overzealous cleanup
v3: Update idle detection to take the execlists queue into account
v4: Grab the execlist lock when checking queue state
v5: Fix leaking requests by freeing them in execlists_retire_requests.

Issue: VIZ-4274
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Reviewed-by: Akash Goel <akash.goels@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

With all the code movement and extraction in intel_pm.c in -next, git is hopelessly confused with

commit 2208d655
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Fri Nov 14 09:25:29 2014 +0100

    drm/i915: drop WaSetupGtModeTdRowDispatch:snb

from -fixes. Worse, even small changes in -next move the conflict context around, so rerere is equally useless. Let's just backmerge and be done with it.

Conflicts:
	drivers/gpu/drm/i915/i915_drv.c
	drivers/gpu/drm/i915/intel_pm.c

Except for git getting lost, no tricky conflicts really.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
- 19 Nov 2014, 10 commits
-
-
Committed by Imre Deak

After the previous patch, RPS disabling no longer depends on the first level interrupts being disabled, so we can move it earlier everywhere. Doing so lets us think about the uninitialization steps afterwards independently of any asynchronous RPS events that can happen atm. It also makes the system/runtime suspend time RPS disabling more uniform. Finally this gets rid of the WARN in intel_suspend_gt_powersave(), which we can hit if a final RPS work item runs after we disabled the first level interrupts.

Testcase: igt/pm_rpm
Reference: https://bugs.freedesktop.org/show_bug.cgi?id=82939
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Imre Deak

When disabling the RPS interrupts there is a tricky dependency between the thread disabling the interrupts, the RPS interrupt handler and the corresponding RPS work. The RPS work can reenable the interrupts, so there is no straightforward order in the disabling thread to (1) make sure that any RPS work is flushed and to (2) disable all RPS interrupts. Currently this is solved by masking the interrupts using two separate mask registers (first level display IMR and PM IMR) and doing the disabling when all first level interrupts are disabled.

This works, but the requirement to run with all first level interrupts disabled is unnecessary, making the suspend / unload time ordering of RPS disabling wrt. other uninitialization steps difficult and error prone. Removing this restriction allows us to disable RPS early during suspend / unload and forget about it for the rest of the sequence. By adding a more explicit method for avoiding the above race, it also becomes easier to prove its correctness. Finally, currently we can hit the WARN in snb_update_pm_irq() when a final RPS work item runs with the first level interrupts already disabled. This won't lead to any problem (due to the separate interrupt masks), but with the change in this and the next patch we can get rid of the WARN, while leaving it in place for other scenarios.

To address the above points, add a new RPS interrupts_enabled flag and use it during RPS disabling to avoid requeuing the RPS work and reenabling the RPS interrupts. Since the interrupt disabling now happens in intel_suspend_gt_powersave(), we will disable RPS interrupts explicitly during suspend (and not just through the first level mask), but there is no problem doing so; it's also more consistent and allows us to unify more of the RPS disabling during suspend and unload time in the next patch.

v2/v3: - rebase on patch "drm/i915: move rps irq disable one level up" in the patchset

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
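As a hedged sketch of the mechanism (field and function names are assumptions based on the description, not the literal patch): the disabling thread clears the flag under the irq lock, then flushes the work, and the work itself bails out once the flag is gone.

```c
/*
 * Sketch only: names are illustrative.  The disable path clears the
 * flag before masking, so neither the interrupt handler nor the RPS
 * work will re-arm anything afterwards.
 */
void gen6_disable_rps_interrupts(struct drm_i915_private *dev_priv)
{
	spin_lock_irq(&dev_priv->irq_lock);
	dev_priv->rps.interrupts_enabled = false;
	spin_unlock_irq(&dev_priv->irq_lock);

	cancel_work_sync(&dev_priv->rps.work);	/* queued work will see the flag */

	/* ...mask and clear the PM interrupt registers here... */
}

static void gen6_pm_rps_work(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
		container_of(work, struct drm_i915_private, rps.work);

	spin_lock_irq(&dev_priv->irq_lock);
	if (!dev_priv->rps.interrupts_enabled) {
		spin_unlock_irq(&dev_priv->irq_lock);
		return;		/* raced with the disable path, do nothing */
	}
	spin_unlock_irq(&dev_priv->irq_lock);

	/* ...normal RPS frequency adjustment... */
}
```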
-
Committed by Imre Deak

Atm we first enable the RPS interrupts and then clear any pending ones. By this we could lose an interrupt arriving after we unmasked it. This may not be a problem as the caller should handle such a race, but logic still calls for the opposite order. Also, we can delay enabling the interrupts until after all the RPS initialization is ready, with the following order:

1. disable left-over RPS (earlier via intel_uncore_sanitize)
2. clear any pending RPS interrupts
3. initialize RPS
4. enable RPS interrupts

This also allows us to do steps 2 and 4 the same way for all platforms, so let's follow this order to simplify things. Make sure any queued interrupts are also cleared.

v2: - rebase on the GEN9 patches where we don't support RPS yet, so we mustn't enable RPS interrupts on it (Paulo)
v3: - avoid enabling RPS interrupts on GEN>9 too (Paulo) - clarify the RPS init sequence in the log message (Chris) - add POSTING_READ to gen6_reset_rps_interrupts() (Paulo) - WARN if any PM_IIR bits are set in gen6_enable_rps_interrupts() (Paulo)

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
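The ordering above, written out as a hedged call sequence (the function names echo those mentioned in the log, but the exact call sites per platform are not shown here):

```c
/* Illustrative only: the init-time ordering described in the commit. */
intel_uncore_sanitize(dev);		/* 1. turn off any left-over RPS state  */
gen6_reset_rps_interrupts(dev);		/* 2. clear pending PM_IIR bits         */
gen6_enable_rps(dev);			/* 3. program RPS itself                */
gen6_enable_rps_interrupts(dev);	/* 4. only now unmask the PM interrupts */
```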
-
Committed by Imre Deak

We disable the RPS interrupts for all platforms at the same spot, so move it one level up in the call stack to simplify things. No functional change.

v2: - rebase on the GEN9 patches where RPS isn't supported yet, so we don't need to disable RPS interrupts on it (Paulo)
v3: - avoid disabling the interrupts on GEN>9 too (Paulo)

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Imre Deak

This extends

commit 132f3f17
Author: Imre Deak <imre.deak@intel.com>
Date: Mon Nov 10 15:34:33 2014 +0200

    drm/i915: WARN if we receive any gen9 rps interrupts

to GEN>9 platforms, as suggested by Paulo.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Matt Roper

When using the universal plane interface, the source rectangle coordinates define the panning offset for the primary plane, which needs to be stored in crtc->{x,y}. The original universal plane code neglected to set these panning offset fields, which was partially remedied in

commit ccc759dc
Author: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Date: Wed Sep 24 14:20:22 2014 -0300

    drm/i915: Merge of visible and !visible paths for primary planes

However, the plane source coordinates are provided in 16.16 fixed point format and the above commit forgot to convert back to integer coordinates before saving the values. When we replace intel_pipe_set_base() with plane->funcs->update_plane() in a future patch, this bug becomes visible via the set_config entrypoint as well as update_plane.

Cc: Gustavo Padovan <gustavo.padovan@collabora.co.uk>
Testcase: igt/kms_plane
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
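The fix boils down to shifting the 16.16 fixed-point source offsets back to whole pixels before storing them; a hedged sketch, with the state field names treated as assumptions:

```c
/*
 * Sketch only: plane source coordinates are 16.16 fixed point, so drop
 * the fractional 16 bits before storing the CRTC panning offsets.
 */
crtc->x = state->src_x >> 16;
crtc->y = state->src_y >> 16;
```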
-
Committed by Jesse Barnes

Just like we do in the HDMI code, set the infoframe flag if we detect that infoframes are enabled.

v2: check for the actual infoframe status as in the HDMI code (Daniel)

Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Tom O'Rourke

In sandybridge_pcode_read and sandybridge_pcode_write, extend the mbox parameter from u8 to u32. On Haswell and Sandybridge, bits 7:0 encode the mailbox command and bits 28:8 are used for address control for specific commands. Based on a suggestion from Ville Syrjälä.

Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
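A hedged illustration of the bit layout described above; the macro names are invented for this sketch and are not the driver's definitions:

```c
/*
 * Illustrative packing of a 32-bit pcode mailbox value: command in
 * bits 7:0, address control in bits 28:8.
 */
#define PCODE_MBOX_CMD_MASK	0xff
#define PCODE_MBOX_ADDR_SHIFT	8
#define PCODE_MBOX_ADDR_MASK	(0x1fffff << PCODE_MBOX_ADDR_SHIFT)

u32 mbox = (cmd & PCODE_MBOX_CMD_MASK) |
	   ((addr << PCODE_MBOX_ADDR_SHIFT) & PCODE_MBOX_ADDR_MASK);

sandybridge_pcode_read(dev_priv, mbox, &val);
```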
-
Committed by Daniel Vetter

For whatever reasons this can happen. For real testcases the test will notice the -EIO and fall over, but we also have some testcases that just read all debugfs files. And that shouldn't cause dmesg spam. So tune it down a bit so that we still have the information for debugging. And change the errno so that real testcases can easily differentiate.

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=84890
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Chris Wilson

With multiple rings, we may continue to render on the blitter whilst executing an infinite shader on the render ring. As we currently rearm the timer with each execbuf, in this scenario the hangcheck will never fire and we will never detect the lockup on the render ring. Instead, only arm the timer once per hangcheck, so that hangcheck runs more frequently.

v2: Rearrange code to avoid triggering a BUG_ON in add_timer from softirq context.

Testcase: igt/gem_reset_stats/defer-hangcheck*
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=86225
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
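A hedged sketch of the idea — don't push the expiry out again if a hangcheck is already queued. The guard shown and the helper names are assumptions based on the description, not the literal patch:

```c
/*
 * Sketch only: arm the hangcheck timer at most once per hangcheck
 * period instead of rearming it on every execbuf.
 */
void i915_queue_hangcheck(struct drm_device *dev)
{
	struct timer_list *timer = &to_i915(dev)->gpu_error.hangcheck_timer;

	if (!i915.enable_hangcheck)
		return;

	/* An already pending timer means a check is queued; leave it be. */
	if (timer_pending(timer))
		return;

	mod_timer(timer,
		  round_jiffies_up(jiffies + DRM_I915_HANGCHECK_JIFFIES));
}
```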
-
- 18 Nov 2014, 11 commits
-
-
Committed by Daniel Vetter

This goes back to

commit 362b8af7
Author: Ben Widawsky <benjamin.widawsky@intel.com>
Date: Thu Jan 30 00:19:38 2014 -0800

    drm/i915: Move per ring error state to ring_error

Spotted while reading error states.

Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Jani Nikula

Indicate that the monitor has been disconnected on disable. The regression was introduced in

commit 5fad84a7
Author: Jani Nikula <jani.nikula@intel.com>
Date: Tue Nov 4 10:30:23 2014 +0200

    drm/i915: rewrite hsw/bdw audio codec enable/disable sequences

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=86424
Cc: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Jesse Barnes

The lack of a break here wasn't for falling through to some other important code, so it made me do a double take. Add a break just to make things a little less confusing.

Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter

kmap never fails.

Spotted-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Matt Roper

When invalid cloning configurations were detected during modeset, we never copied the error code into the return value variable, leading us to return 0 (success) to userspace. This regression was introduced in

commit 50f52756
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Fri Nov 7 13:11:00 2014 -0800

    drm/i915: use compute_config in set_config v4

Testcase: igt/kms_setmode
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=86226
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Damien Lespiau

Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Damien Lespiau

On SKL, DPLL0 is used to derive CDCLK but can also be used to drive an eDP port (as long as we don't want SSC). DPLL0 is special enough not to be handled by the shared DPLL framework (it drives CDCLK and is not supposed to enable the HDMI mode), so we need to compute its configuration separately from the other DPLLs. Note that we don't need to reprogram DPLL0 (which would mean bringing down CDCLK) to support the various eDP 1.3 link rates, as they all share the same VCO (8100).

Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Rodrigo Vivi

Let's document PSR a bit. No functional changes.

v2: Add actual DocBook entry and accept Daniel's improvements.

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Rodrigo Vivi

No functional changes. Just cleaning and reorganizing it.

v2: Rebase it, putting it at the beginning of the PSR rework. This helps to easily blame at least the latest changes.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Rodrigo Vivi

No functional change. Just making it public for use outside intel_dp.c, allowing the PSR functions to be split out.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä

GEN6_GT_THREAD_STATUS_REG doesn't seem to exist on VLV. Reads just give 0x0 no matter what the state of the render and media wells. There was also some hint in the Gunit HAS that the thread status is not needed on VLV, and hence was dropped when bringing stuff over from the IVB design - though not a definite comment about the specific register itself. Also, the w/a itself is no longer listed for VLV in the database. It was there some time ago in the past, but I guess someone figured out the mistake and dropped it. So let's just drop it from the code as well.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-