- 11 Jul 2014, 13 commits
-
-
Submitted by Daniel Vetter

Just filling in names and ids, but not yet officially registering them, so that the hw state cross checker doesn't completely freak out about them. Still, since we already read out and cross check config->shared_dpll, the basics are now there to flesh out the WRPLL shared dpll implementation.

The idea is to roll out all the callbacks step by step and then at the end switch to the shared dpll framework. This way hw and sw changes are clearly separated.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[imre: added const to hsw_ddi_pll_names (Damien)]
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
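A minimal sketch of the name/id table and registration loop this describes, assuming the usual dev_priv->shared_dplls bookkeeping; the struct layout shown is illustrative rather than the exact i915 definition:

    static const char * const hsw_ddi_pll_names[] = {
            "WRPLL 1",
            "WRPLL 2",
    };

    static void hsw_shared_dplls_init(struct drm_i915_private *dev_priv)
    {
            int i;

            dev_priv->num_shared_dpll = ARRAY_SIZE(hsw_ddi_pll_names);
            for (i = 0; i < dev_priv->num_shared_dpll; i++) {
                    /* Fill in names and ids only; callbacks come in later patches. */
                    dev_priv->shared_dplls[i].id = i;
                    dev_priv->shared_dplls[i].name = hsw_ddi_pll_names[i];
            }
    }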
-
Submitted by Daniel Vetter

This way only the dynamic WRPLL selection for hdmi ddi mode is done in intel_ddi_pll_select.

v2: Don't clobber the precomputed values when selecting clocks for hdmi encoders.
v3 (from Paulo): Rebase on top of the s/IS_HASWELL/HAS_DDI/ patch.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Paulo Zanoni <przanoni@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Paulo Zanoni

Don't let it fall into the HAS_PCH_SPLIT() case.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter

To make things a bit more manageable, extract a new function for reading out common ddi port state. This means a bit of duplication between encoders and the core since both look at the same registers, but that doesn't seem worth making a fuss about.

We can also remove the state readout code in intel_ddi_setup_hw_pll_state. That code is only called from the hardware takeover and not the cross check code, and only after the crtc state is reconstructed. So we can rely on an accurate value of crtc->config.ddi_pll_sel already.

Compared to the old code, also trust the hw state more and don't special-case port A - we want to cross-check the actual state, not bake in our own assumptions about how this is all supposed to be linked up.

v2: Make use of the read-out ddi_pll_sel in intel_ddi_clock_get.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[imre: rebased on patchset version w/o pch/crt/fdi refactoring]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter

Just a boring sed job in preparation.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[imre: rebased on patchset version w/o pch/crt/fdi refactoring]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter

Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter

Similar to how the ->crtc_mode_set hook should touch the hardware to enable things, the ->crtc_off hook should disable them in the hardware. Otherwise runtime pm for dpms will not work.

Currently the only thing left in the haswell_crtc_off hook is disabling the ddi plls. We can't move the WRPLL enabling out yet because the current ddi pll sharing code used by the haswell code doesn't separately track active users and overall users. This must be fixed by porting it to the generic shared display pll framework, which is powerful enough.

But the SPLL source is only used by the crt encoder and so can be moved already. We only need to make sure that ddi port E is already off, which hsw_fdi_disable does by calling intel_ddi_post_disable.

With this the code reorg to shuffle hsw fdi/lpt specific code into a new hsw-specific crt encoder type is now finally complete.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[imre: rebased on patchset version w/o pch/crt/fdi refactoring]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter

The call to intel_ddi_pll_enable in haswell_crtc_mode_set is the only function that still touches the hardware state from the crtc mode_set callback on hsw. Since the SPLL isn't ever shared, we can easily move it out into the hsw crt encoder functions.

Temporarily we'll lose a bit of WARN_ON coverage with this, but once the WRPLLs are switched over that will be restored. For the SPLL selection add a WARN in the hsw fdi link training code.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[imre: rebased on patchset version w/o pch/crt/fdi refactoring]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
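As a rough sketch of what the SPLL enable looks like once it lives in the crt encoder's pre_enable path (register and bit names follow the i915 register definitions of this era; the exact frequency/SSC selection is an assumption here):

    /* Enable the SPLL that feeds FDI/LPT for the CRT output. */
    I915_WRITE(SPLL_CTL,
               SPLL_PLL_ENABLE | SPLL_PLL_FREQ_1350MHz | SPLL_PLL_SSC);
    POSTING_READ(SPLL_CTL);
    udelay(20);     /* give the PLL time to lock */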
-
Submitted by Imre Deak

This is needed by an upcoming patch that moves the PCH/CRT PLL disabling into the post_disable hook, after which we want to keep the modeset sequence in its current state. At this point this won't have any effect, since the PCH/CRT post_disable hook is currently a no-op.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Imre Deak

This is needed by an upcoming patch that moves the PCH/CRT PLL enabling into the pre_enable hook, after which we want to keep the modeset sequence in its current state. At this point this won't have any effect, since the PCH/CRT pre_enable hook is currently a no-op.

Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter

Luckily the bit definitions match, but it's still confusing to use one when handling the other. So sprinkle some OCD over the #defines to make them match and use the right version in each place. Maybe we should unify these definitions completely, but that can always be done sometime in the future.

Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter

SPLL would be a reference clock we could potentially share, especially if we want to use the SSC mode. But currently we don't, so let's rip out this complexity for a simpler conversion to the new display pll framework.

Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter

All the other checks also check hw state, so checking our software refcounts for the plls looks a bit odd. Also, this will simplify the conversion over to the shared dpll framework, which itself has massive amounts of checks to make sure that we never leave a display pll enabled when we shouldn't. So after that conversion we should still have good enough coverage of asserts for entering pc8/runtime pm on hsw/bdw.

Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 10 Jul 2014, 5 commits
-
-
Submitted by Matt Roper

Add !mutex_is_locked() checks to intel_pin_and_fence_fb_obj() and intel_unpin_fb_obj() to help catch failures to grab struct_mutex when operating on fb objects.

Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
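A minimal sketch of the check described here; the parameter list is abbreviated and illustrative, only the assert pattern matters:

    int intel_pin_and_fence_fb_obj(struct drm_device *dev,
                                   struct drm_i915_gem_object *obj)
    {
            /* Catch callers that operate on fb objects without struct_mutex. */
            WARN_ON(!mutex_is_locked(&dev->struct_mutex));

            /* ... pin into the GGTT and set up a fence register as before ... */
            return 0;
    }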
-
Submitted by Matt Roper

intel_primary_plane_{setplane,disable} were lacking struct_mutex locking around their GEM operations.

Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reported-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Paulo Zanoni

Otherwise we will print some WARNs when we read registers while the machine is suspended.

Testcase: igt/pm_rpm/debugfs-read
Cc: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
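A sketch of the usual pattern for debugfs entries that read registers; intel_runtime_pm_get()/intel_runtime_pm_put() are the driver's runtime PM reference helpers, while the file name and register shown are purely illustrative:

    static int i915_example_status(struct seq_file *m, void *unused)
    {
            struct drm_info_node *node = m->private;
            struct drm_i915_private *dev_priv = node->minor->dev->dev_private;

            /* Wake the device so the register read doesn't WARN while suspended. */
            intel_runtime_pm_get(dev_priv);
            seq_printf(m, "RPSTAT1: 0x%08x\n", I915_READ(GEN6_RPSTAT1));
            intel_runtime_pm_put(dev_priv);

            return 0;
    }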
-
Submitted by Paulo Zanoni

On HSW, the D_COMP register can be accessed through the mailbox (read and write) or through MMIO on a MCHBAR offset (read only). On BDW, the access should be done through MMIO on another address. To account for all these cases, create hsw_read_dcomp() with the correct implementation for reading, and also fix hsw_write_dcomp() to do the correct thing on BDW.

With this patch, we can now get back from the PC8+ state on BDW. We were previously getting a black screen and lots of dmesg errors. Note that the bug only happens when you actually reach the PC8+ states, not when you merely allow them.

Testcase: igt/pm_rpm/rte
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
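A rough sketch of the read-side split this describes; the register names D_COMP_HSW and D_COMP_BDW stand in for the MCHBAR-mirror offset and the BDW MMIO offset and should be treated as assumptions:

    static uint32_t hsw_read_dcomp(struct drm_i915_private *dev_priv)
    {
            if (IS_HASWELL(dev_priv->dev))
                    return I915_READ(D_COMP_HSW);   /* MCHBAR mirror, read-only */
            else
                    return I915_READ(D_COMP_BDW);   /* BDW: plain MMIO */
    }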
-
Submitted by Paulo Zanoni

That function can be used to write anything to D_COMP, not just disable it, so print a more appropriate message.

Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 09 Jul 2014, 8 commits
-
-
Submitted by Mika Kuoppala

CHV hard hangs on reads of 0x11100.

References: https://bugs.freedesktop.org/show_bug.cgi?id=80893
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Mika Kuoppala

CHV hard hangs on reads of 0x112f4.

References: https://bugs.freedesktop.org/show_bug.cgi?id=80893
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Mika Kuoppala

CHV hard hangs on reads of these registers. As they have not been used since cantiga & ilk, remove the debugfs entry.

References: https://bugs.freedesktop.org/show_bug.cgi?id=80893
Suggested-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Mika Kuoppala

CHV hard hangs on reads of these registers. As they have not been used since cantiga & ilk, remove the debugfs entry.

References: https://bugs.freedesktop.org/show_bug.cgi?id=80893
Suggested-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Matt Roper

This should hopefully simplify the display code slightly and also fixes at least one mistake in intel_pipe_set_base() where to_intel_framebuffer(fb)->obj is referenced during local variable initialization, before 'if (!fb)' gets checked.

Potential uses of this macro were identified via the following Coccinelle patch:

    @@
    expression E;
    @@
    * to_intel_framebuffer(E)->obj

    @@
    expression E;
    identifier I;
    @@
      I = to_intel_framebuffer(E);
      ...
    * I->obj

v2: Rewrite some NULL tests in terms of the obj rather than the fb. Also add a WARN() if trying to pageflip with a disabled primary plane. [Suggested by Chris Wilson]

Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Matt Roper

Add an intel_fb_obj() macro that returns the GEM object associated with a DRM framebuffer. This macro is safe to call on NULL framebuffers (a NULL object pointer will be returned in this case).

Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
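A minimal sketch of what such a macro boils down to; the exact definition in the driver may differ slightly:

    #define intel_fb_obj(x) ((x) ? to_intel_framebuffer(x)->obj : NULL)

    /* Usage: safe even when the primary plane currently has no fb attached. */
    struct drm_i915_gem_object *obj = intel_fb_obj(crtc->primary->fb);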
-
Submitted by Chris Wilson

On g33, the documentation states "HWS_PGA: Format = Bits 28:12 of graphics memory address (bits 31:29 MBZ)." which means that the address of the HWS must be below 256MiB, which is conveniently the mappable aperture. This also appears to be true (but is not documented as such) for gen4 and gen5. To generalise, we force it into the low mappable region for all non-LLC platforms.

If we locate the HWS at the top of the GTT the machine will hard hang during boot (fails on pnv, gm45, ilk and byt, but works on snb, ivb, hsw).

v2: Add comments to explain why we use PIN_MAPPABLE even though we have no intention of mapping the object. (Ville)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
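A sketch of the pinning call this implies during status-page setup; i915_gem_obj_ggtt_pin() and the PIN_MAPPABLE flag are the GGTT pinning interface of this era, and the surrounding error handling is abbreviated:

    /*
     * Pre-LLC parts only decode the low bits of HWS_PGA, so keep the
     * status page low in the GGTT. Reusing PIN_MAPPABLE forces it into
     * the mappable region even though we never intend to map the object.
     */
    ret = i915_gem_obj_ggtt_pin(ring->status_page.obj, 4096, PIN_MAPPABLE);
    if (ret)
            goto err_unref;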
-
Submitted by Deepak S

With RC6 enabled, BYT has a HW issue in determining the right Gfx busyness.

WA for Turbo + RC6: Use SW based Gfx busyness detection to decide on increasing/decreasing the freq. This logic will monitor the C0 counters of the render/media power wells over the EI period and take the necessary action based on these values.

v2: Refactor duplicate code. (Ville)
v3: Reformat the comments. (Ville)
v4: Enable required counters and remove unwanted code. (Ville)
v5: Add frequency change acceleration support and remove kernel-doc style comments. (Ville)
v6: Update comment section and fix w/a comment. (Ville)

Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 08 Jul 2014, 14 commits
-
-
Submitted by Chris Wilson

For whatever reason, MI_DISPLAY_FLIP fails to change the tiling mode on Baytrail, so just use CPU driven mmio flips instead.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=76176
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

We currently see random GPU hangs when using RCS flips with multiple pipes on Ivybridge. Now that we have mmio flips, we can fairly cheaply fall back to using CPU driven flips instead.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77104
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
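A sketch of what these two fallbacks (this patch and the Baytrail one above) amount to as a decision helper; the function name and the exact set of conditions are illustrative, since the driver's real check also considers the object and ring state:

    /* Decide whether to flip via CPU/MMIO instead of a ring command. */
    static bool use_mmio_flip(struct drm_device *dev)
    {
            /* BYT: MI_DISPLAY_FLIP cannot change the tiling mode. */
            if (IS_VALLEYVIEW(dev))
                    return true;
            /* IVB: RCS flips with multiple pipes cause random GPU hangs. */
            if (IS_IVYBRIDGE(dev))
                    return true;
            return false;
    }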
-
Submitted by Oscar Mateo

So that we isolate the legacy ringbuffer submission mechanism, which becomes a good candidate to be abstracted away. This is prep work for Execlists (which will have its own workload submission mechanism). No functional changes.

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Oscar Mateo

Again, it's low-level enough to simply take a ringbuf and nothing else. Trivial change.

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Oscar Mateo

It's simple enough that it doesn't need to know anything about the engine. Trivial change.

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Oscar Mateo

More prep work: with Execlists, we are going to start creating a lot of extra ringbuffers soon, so these functions are handy. No functional changes.

v2: Rename allocate/destroy_ring_buffer to alloc/destroy_ringbuffer_obj because the name is more meaningful and to mirror a similar function in the context world: i915_gem_alloc_context_obj(). Change suggested by Brad Volkin.

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Oscar Mateo

A bit of background on the context elements.

Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Appease checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Oscar Mateo

This is an Execlists preparatory patch, since they make context ID become an overloaded term:

- In the software, it was used to distinguish which context userspace was trying to use.
- In the BSpec, the term is used to describe the 20-bit field the hardware uses to discriminate the contexts that are submitted to the ELSP and to inform the driver about their current status (via Context Switch Interrupts and Context Status Buffers).

Initially, I tried to make the different meanings converge, but it proved impossible:

- The software ctx->id is per-filp, while the hardware one needs to be globally unique.
- Also, we multiplex several backing state objects per intel_context, and all of them need unique HW IDs.
- I tried adding a per-filp ID and then composing the HW context ID as ctx->id + file_priv->id + ring->id, but the fact that the hardware only uses 20 bits means we have to artificially limit the number of filps or contexts the userspace can create.

The ctx->user_handle renaming bits are done with this Cocci patch (plus manual frobbing of the struct declaration):

    @@
    struct intel_context c;
    @@
    - (c).id
    + c.user_handle

    @@
    struct intel_context *c;
    @@
    - (c)->id
    + c->user_handle

Also, while we are at it, s/DEFAULT_CONTEXT_ID/DEFAULT_CONTEXT_HANDLE and change the type to unsigned 32 bits.

v2: s/handle/user_handle and change the type to uint32_t as suggested by Chris Wilson.

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Oscar Mateo

We have already mentioned that Logical Ring Contexts have their own kind of backing objects, but everything will be better explained in the Execlists series. For now, suffice it to say that the current backing object is only ever used with the render ring, so we're making this fact more explicit (which is a good reason on its own).

As for the is_initialized flag, we only use it to signify that the render state has been initialized (a.k.a. golden context, a.k.a. null context). It doesn't mean anything for the other engines, so make that distinction obvious.

Done with the following Coccinelle patch (plus manual frobbing of the struct):

    @@
    struct intel_context c;
    @@
    - (c).obj
    + c.legacy_hw_ctx.rcs_state

    @@
    struct intel_context *c;
    @@
    - (c)->obj
    + c->legacy_hw_ctx.rcs_state

    @@
    struct intel_context c;
    @@
    - (c).is_initialized
    + c.legacy_hw_ctx.initialized

    @@
    struct intel_context *c;
    @@
    - (c)->is_initialized
    + c->legacy_hw_ctx.initialized

This Execlists prep-work patch has been suggested by Chris Wilson and Daniel Vetter separately. Initially, it was two separate patches:
drm/i915: Rename ctx->obj to ctx->rcs_state
drm/i915: Make it obvious that ctx->id is merely a user handle

Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: s/id/is_initialized/ to fix the subject and resolve a conflict in i915_gem_context_reset. Also introduce a new lctx local variable to avoid overly long lines.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Oscar Mateo

This is preparatory work for Execlists: we plan to use it later to allocate our own context objects (since Logical Ring Contexts do not have the same kind of backing objects). No functional changes.

Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Imre Deak

To achieve further power savings during system freeze (aka connected standby, or s0ix) we have to send a PCI_D1 opregion notification. As the information about the state we're entering (system freeze, suspend to ram or suspend to disk) is only available through the ACPI subsystem, make this support depend on the relevant kconfig option. Things will still work if this option isn't set, albeit with less than optimal power saving.

This also fixes a compile breakage when the option is not set, introduced in

commit e5747e3a
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date:   Thu Jun 12 08:35:47 2014 -0700

    drm/i915: send proper opregion notifications on suspend/resume

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Chris Wilson

Make the assumption that media workloads are not as latency sensitive for __wait_seqno, and that upclocking the GPU does not affect the BLT engine. Under that assumption, we only want to forcibly upclock the GPU when we are stalling for results from the render pipeline.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
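A sketch of the shape this takes in the seqno wait path; the helper names follow the existing gen6 RPS boost code of the period, and the exact condition is an assumption:

    /* Only boost the GPU frequency when the stall is on the render ring. */
    if (INTEL_INFO(dev)->gen >= 6 && ring->id == RCS && can_wait_boost(file_priv))
            gen6_rps_boost(dev_priv);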
-
Submitted by Jesse Barnes

With the new checks in place, we can see we're doing things backwards, so fix them up per the spec.

Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Rodrigo Vivi

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-