- 11 July 2014, 7 commits
-
-
Committed by Chris Wilson
Place the RPS counters inside the RPS struct.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
Mostly this patch is one big exercise in deleting code and asserts which are no longer needed. Note that we still abuse the shared dpll framework a bit since we call the enable/disable functions from the crtc mode_set and off hooks. But changing the actual hardware sequence will be done in the next step. Note that despite the massive amount of changes in this patch, the places and order in which the low-level WRPLL code is called are absolutely unchanged.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[imre: rebased on patchset version w/o pch/crt/fdi refactoring]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
Still tacked onto the side, but slowly getting there.
v2: Don't forget the debugfs file.
v3 (from Paulo): Don't forget to check the power domains.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Paulo Zanoni
And get/put it when needed. The special thing about this commit is that it will now return false in ibx_pch_dpll_get_hw_state() in case the power domain is not enabled. This fixes some WARNs we hit when running pm_rpm on SNB.
Testcase: igt/pm_rpm
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=80463
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
The WRPLLs won't use them.
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
Just filling in names and ids, but not yet officially registering them, so that the hw state cross checker doesn't completely freak out about them. Still, since we already read out and cross-check config->shared_dpll, the basics are now there to flesh out the wrpll shared dpll implementation. The idea is to roll out all the callbacks step-by-step and then at the end switch to the shared dpll framework. This way hw and sw changes are clearly separated.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[imre: added const to hsw_ddi_pll_names (Damien)]
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
SPLL would be a reference clock we could potentially share, especially if we want to use the SSC mode. But currently we don't, so let's rip out this complexity for a simpler conversion to the new display pll framework.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 09 July 2014, 1 commit
-
-
Committed by Deepak S
With RC6 enabled, BYT has an HW issue in determining the right Gfx busyness. WA for Turbo + RC6: use SW-based Gfx busyness detection to decide on increasing/decreasing the freq. This logic monitors the C0 counters of the render/media power wells over the EI period and takes the necessary action based on these values.
v2: Refactor duplicate code. (Ville)
v3: Reformat the comments. (Ville)
v4: Enable required counters and remove unwanted code. (Ville)
v5: Added frequency change acceleration support and removed kernel-doc style comments. (Ville)
v6: Updated comment section and fixed the w/a comment. (Ville)
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 08 July 2014, 6 commits
-
-
Committed by Oscar Mateo
A bit of background on the context elements.
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Appease checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Oscar Mateo
This is an Execlists preparatory patch, since they make context ID become an overloaded term:
- In the software, it was used to distinguish which context userspace was trying to use.
- In the BSpec, the term is used to describe the 20-bit field the hardware uses to discriminate the contexts that are submitted to the ELSP and to inform the driver about their current status (via Context Switch Interrupts and Context Status Buffers).
Initially, I tried to make the different meanings converge, but it proved impossible:
- The software ctx->id is per-filp, while the hardware one needs to be globally unique.
- Also, we multiplex several backing state objects per intel_context, and all of them need unique HW IDs.
- I tried adding a per-filp ID and then composing the HW context ID as: ctx->id + file_priv->id + ring->id, but the fact that the hardware only uses 20 bits means we would have to artificially limit the number of filps or contexts the userspace can create.
The ctx->user_handle renaming bits are done with this Cocci patch (plus manual frobbing of the struct declaration):
    @@
    struct intel_context c;
    @@
    - (c).id
    + c.user_handle
    @@
    struct intel_context *c;
    @@
    - (c)->id
    + c->user_handle
Also, while we are at it, s/DEFAULT_CONTEXT_ID/DEFAULT_CONTEXT_HANDLE and change the type to unsigned 32 bits.
v2: s/handle/user_handle and change the type to uint32_t as suggested by Chris Wilson.
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
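To make the 20-bit budget problem above concrete, here is a small standalone C sketch (not driver code; the per-filp and per-ring field widths are assumptions chosen only for illustration):

    /* Illustrative arithmetic: why composing a globally unique HW context ID
     * out of per-filp and per-ring components does not scale inside a 20-bit
     * field. The splits below are hypothetical. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned hw_id_bits = 20;   /* field width quoted in the message */
        const unsigned ring_bits  = 3;    /* e.g. room for up to 8 engines (assumed) */
        const unsigned filp_bits  = 10;   /* hypothetical per-client budget */
        const unsigned ctx_bits   = hw_id_bits - ring_bits - filp_bits;

        printf("bits left for per-filp contexts: %u -> max %u contexts per client\n",
               ctx_bits, 1u << ctx_bits);
        /* Any fixed split artificially caps either the number of clients or the
         * number of contexts per client, which is why the patch keeps the user
         * handle and the HW ID as separate ID spaces. */
        return 0;
    }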
-
Committed by Oscar Mateo
We have already advanced that Logical Ring Contexts have their own kind of backing objects, but everything will be better explained in the Execlists series. For now, suffice it to say that the current backing object is only ever used with the render ring, so we're making this fact more explicit (which is a good reason on its own). As for the is_initialized flag, we only use it to signify that the render state has been initialized (a.k.a. golden context, a.k.a. null context). It doesn't mean anything for the other engines, so make that distinction obvious.
Done with the following Coccinelle patch (plus manual frobbing of the struct):
    @@
    struct intel_context c;
    @@
    - (c).obj
    + c.legacy_hw_ctx.rcs_state
    @@
    struct intel_context *c;
    @@
    - (c)->obj
    + c->legacy_hw_ctx.rcs_state
    @@
    struct intel_context c;
    @@
    - (c).is_initialized
    + c.legacy_hw_ctx.initialized
    @@
    struct intel_context *c;
    @@
    - (c)->is_initialized
    + c->legacy_hw_ctx.initialized
This Execlists prep-work patch has been suggested by Chris Wilson and Daniel Vetter separately. Initially, it was two separate patches:
    drm/i915: Rename ctx->obj to ctx->rcs_state
    drm/i915: Make it obvious that ctx->id is merely a user handle
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: s/id/is_initialized/ to fix the subject and resolve a conflict in i915_gem_context_reset. Also introduce a new lctx local variable to avoid overly long lines.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky
Since the semaphore information is in an object, just dump it, and let the user parse it later.
NOTE: The page used for the semaphores is incoherent with the CPU. No matter what I do, I cannot figure out a way to read anything but 0s. Note that the semaphore waits are indeed working.
v2: Don't print signal and wait (they should be the same). Instead, print sync_seqno (Chris)
v3: Free the semaphore error object (Chris)
v4: Fix semaphore offset calculation during error state collection (Ville)
v5: VCS2 rebase; make semaphore object error capture coding style consistent (Ville); do the proper math for the signal offset (Ville)
v6: Fix small conflicts on rebase and s/ring_buffer/engine_cs (Rodrigo)
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ben Widawsky
Semaphore signalling works similarly to previous GENs with the exception that the per-ring mailboxes no longer exist. Instead you must define your own space somewhere in the GTT. The comments in the code define the layout I've opted for, which should be fairly future proof, i.e. I tried to define offsets in abstract terms (NUM_RINGS, seqno size, etc).
NOTE: If one wanted to move this to the HWSP they could. I've decided one 4k object would be easier to deal with, and provide potential wins with cache locality, but that's all speculative.
v2: Update the macro to not need the other ring's ring->id (Chris); update the comment to use the correct formula (Chris)
v3: Move the macros to ringbuffer.h to prevent churn in the next patch (Ville)
v4: Fixed compilation rebase conflict with
    commit 1ec9e26d
    Author: Daniel Vetter <daniel.vetter@ffwll.ch>
    Date:   Fri Feb 14 14:01:11 2014 +0100
        drm/i915: Consolidate binding parameters into flags
v5: VCS2 rebase; replace hweight_long with hweight32
v6 (Rodrigo):
- Add missed VCS2 gen8 ring signal init
- Fix conflicts on rebase
- Minor fixes in the address table
- Remove WARN_ON
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
[danvet: s/BUG_ON/WARN_ON/]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
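As an illustration of the kind of abstract-offset layout described above, here is a standalone sketch; the table geometry and slot size are assumptions for the example, not the driver's actual macros:

    /* A NUM_RINGS x NUM_RINGS grid of seqno slots inside one GTT page,
     * indexed by (signaller, waiter). Illustrative only. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_RINGS   5u      /* RCS, VCS, BCS, VECS, VCS2 on gen8 */
    #define SEQNO_SIZE  8u      /* assumed size of one mailbox slot, in bytes */

    static uint32_t semaphore_offset(uint32_t signaller, uint32_t waiter)
    {
        assert(signaller < NUM_RINGS && waiter < NUM_RINGS);
        return signaller * NUM_RINGS * SEQNO_SIZE + waiter * SEQNO_SIZE;
    }

    int main(void)
    {
        /* The whole table fits comfortably inside one 4KiB object. */
        printf("table size: %u bytes\n", NUM_RINGS * NUM_RINGS * SEQNO_SIZE);
        printf("offset(VCS -> RCS): %u\n", semaphore_offset(1, 0));
        return 0;
    }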
-
Committed by John Harrison
The 'i915_driver_preclose()' function has a parameter called 'file_priv'. However, this is misleading as the structure it points to is a 'drm_file', not a 'drm_i915_file_private'. It should be named just 'file' to avoid confusion.
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 07 July 2014, 2 commits
-
-
Committed by Dave Airlie
The digital ports from Ironlake and up have the ability to distinguish between long and short HPD pulses. DisplayPort 1.1 usually only uses the short form to request link retraining, so we haven't really needed support for it until now. However, with DP 1.2 MST we need to handle the short irqs on their own, outside the modeset locking that the long HPDs involve. This patch adds the framework to distinguish between short/long to the current code base, to lay the basis for future DP 1.2 MST work. This should mean we get better bisectability in case of regressions due to the new irq handling.
v2: add GM45 support (untested, due to lack of hw)
Signed-off-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Todd Previte <tprevite@gmail.com>
[danvet: Fix conflicts in i915_irq.c with Oscar Mateo's irq handling race fixes and a trivial one in intel_drv.h with the psr code.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Imre Deak
This functionality will also be needed by an upcoming patch, so factor it out. As a bonus this also makes things a bit more uniform across platforms. Note that this also changes the register read-modify-write to a simple write during disabling. This is what we do during enabling anyway, and according to the spec all the relevant bits are reserved-MBZ or reserved with a 0 default value.
v2: unchanged
v3: fix missing cxsr disabling on pineview (Deepak)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 03 July 2014, 2 commits
-
-
Committed by Ben Widawsky
The GEN FBC unit provides the ability to set a low pass on frames it attempts to compress. If a frame achieves less than a certain amount of compressibility (2:1, 4:1), it will not bother. This allows the driver to reduce the size it requests out of stolen memory. Unluckily, a few months ago, Ville actually began using this feature for framebuffers that are 16bpp (not sure why not 8bpp). In those cases, we are already using this mechanism for a different purpose, and so we can only achieve one further level of compression (2:1 -> 4:1). FBC GEN1, i.e. pre-G45, is ignored. The cleverness of the patch is Art's. The bugs are mine.
v2: Update message and include the missing threshold case 3 (spotted by Arthur).
Cc: Art Runyan <arthur.j.runyan@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
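To show what the threshold buys, a small illustrative calculation (the framebuffer size and the fallback order are assumptions, not taken from the driver):

    /* How a compression threshold lets the driver request a smaller
     * compressed-framebuffer allocation out of stolen memory. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t fb_size = 1920ull * 1080 * 4;   /* uncompressed scanout */
        const unsigned thresholds[] = { 1, 2, 4 };     /* 1:1, 2:1, 4:1 */

        for (unsigned i = 0; i < 3; i++)
            printf("threshold %u:1 -> request %llu KiB of stolen memory\n",
                   thresholds[i],
                   (unsigned long long)(fb_size / thresholds[i] / 1024));
        /* With 16bpp framebuffers the 2:1 step is already spoken for, so only
         * the 2:1 -> 4:1 reduction remains, as the message above notes. */
        return 0;
    }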
-
Committed by Ben Widawsky
We are already using the size to determine whether or not to free the object, so there is no functional change there. Almost everything else has changed to static allocations of the drm_mm_node too. Aside from bringing this in line with much of our other code, this makes the error paths slightly simpler, which benefits the look of an upcoming patch.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
-
- 23 June 2014, 1 commit
-
-
Committed by Imre Deak
Jesse noticed that the punit communication needed to query the VLV power well status can cause substantial delays. Since we can query the state frequently, for example during I2C transfers, maintain a cached version of the HW state to get rid of this delay. This fixes at least one reported regression where boot time increased by ~4 seconds due to frequent power well state queries on VLV during an eDP EDID read. This regression was introduced in
    commit bb4932c4
    Author: Imre Deak <imre.deak@intel.com>
    Date:   Mon Apr 14 20:24:33 2014 +0300
        drm/i915: vlv: check port power domain instead of only D0 for eDP VDD on
Reported-by: Jesse Barnes <jesse.barnes@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
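The caching idea itself is simple; a minimal sketch with hypothetical names, making no claim to match the driver's structure or locking:

    #include <stdbool.h>
    #include <stdio.h>

    struct power_well_cache {
        bool enabled;   /* last known HW state */
        bool valid;     /* has the cache been populated? */
    };

    /* stand-in for the slow punit mailbox query */
    static bool slow_punit_query(void) { return true; }

    static bool power_well_enabled(struct power_well_cache *c)
    {
        if (!c->valid) {                 /* pay the punit round-trip only once... */
            c->enabled = slow_punit_query();
            c->valid = true;
        }
        return c->enabled;               /* ...then answer queries from the cache */
    }

    static void power_well_set(struct power_well_cache *c, bool on)
    {
        /* every enable/disable refreshes the cached state */
        c->enabled = on;
        c->valid = true;
    }

    int main(void)
    {
        struct power_well_cache cache = { .valid = false };

        power_well_set(&cache, true);
        /* e.g. many queries during an eDP EDID read over I2C: all served cheaply */
        for (int i = 0; i < 1000; i++)
            (void)power_well_enabled(&cache);
        printf("power well enabled: %d\n", power_well_enabled(&cache));
        return 0;
    }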
-
- 20 June 2014, 3 commits
-
-
Committed by Daniel Vetter
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
So these are the guts of the new beast. This tracks when a frontbuffer gets invalidated (due to frontbuffer rendering) and hence should be constantly scanned out, and when it's flushed again and can be compressed/one-shot-uploaded.
Rules for flushing are simple: the frontbuffer needs one more full upload starting from the next vblank. Which means that the flushing can _only_ be called once the frontbuffer update has been latched.
But this poses a problem for pageflips: we can't just delay the flushing until the pageflip is latched, since that would pose the risk that we override frontbuffer rendering that has been scheduled in between the pageflip ioctl and the actual latching. To handle this, track asynchronous invalidation (and also pageflip) state per-ring and delay any in-between flushing until the rendering has completed. And also cancel any delayed flushing if we get a new invalidation request (whether delayed or not).
Also call intel_mark_fb_busy in all cases to make sure that we keep the screen at the highest refresh rate, on flips, synchronous plane updates and frontbuffer rendering alike.
v2: Lots of improvements. Suggestions from Chris:
- Move invalidate/flush in flush_*_domain and set_to_*_domain.
- Drop the flush in busy_ioctl since it's redundant. Was a leftover from an earlier concept to track flips/delayed flushes.
- Don't forget about the initial modeset enable/final disable. Suggested by Chris.
Track flips accurately, too. Since flips complete independently of rendering we need to track pending flips in a separate mask. Again, if an invalidate happens we need to cancel the eventual flush to avoid races.
v3: Provide correct header declarations for the flip functions. Currently not needed outside of intel_display.c, but part of the proper interface.
v4: Add proper domain management to fbcon so that the fbcon buffer is also tracked correctly.
v5: Fixup locking around the fbcon set_to_gtt_domain call.
v6: More comments from Chris:
- Split out fbcon changes.
- Drop superfluous checks for potential scanout before calling intel_fb functions - we can micro-optimize this later.
- s/intel_fb_/intel_fb_obj_/ to make it clear that this deals in gem objects. We already have precedence for fb_obj in the pin_and_fence functions.
v7: Clarify the semantics of the flip flush handling by renaming things a bit:
- Don't go through a gem object but take the relevant frontbuffer bits directly. These functions center on the plane, the actual object is irrelevant - even a flip to the same object as already active should cause a flush.
- Add a new intel_frontbuffer_flip for synchronous plane updates. It currently just calls intel_frontbuffer_flush since the implementation differs. This way we achieve a clear split between one-shot update events on one side and frontbuffer rendering with potentially a very long delay between the invalidate and flush on the other.
Chris and I also had some discussions about mark_busy and whether it is appropriate to call from flush. But mark_busy is a state which should be derived from the 3 events (invalidate, flush, flip) we now have by the users, like psr does by tracking relevant information in psr.busy_frontbuffer_bits. DRRS (the only real use of mark_busy for frontbuffer) needs to have similar logic. With that, the overall mark_busy in the core could be removed.
v8: When retiring gpu buffers, only flush the frontbuffer bits we actually invalidated in a batch. Just for safety, since before any additional usage/invalidate we should always retire current rendering. Suggested by Chris Wilson.
v9: Actually use intel_frontbuffer_flip in all appropriate places. Spotted by Chris.
v10: Address more comments from Chris:
- Don't call _flip in set_base when the crtc is inactive; avoids redundancy in the modeset case with the initial enabling of all planes.
- Add comments explaining that the initial/final plane enable/disable still has work left to do before it's fully generic.
v11: Only invalidate for gtt/cpu access when writing. Spotted by Chris.
v12: s/_flush/_flip/ in intel_overlay.c per Chris' comment.
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
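The rules above can be condensed into a toy model; this is only a sketch of the bookkeeping described in the message, with simplified names, not the driver's implementation:

    /* invalidate marks frontbuffer bits dirty, a flush may only clear bits
     * once the update has been latched, and a flip delays the flush until
     * the flip completes; a new invalidate cancels a pending delayed flush. */
    #include <stdint.h>
    #include <stdio.h>

    struct fb_tracking {
        uint32_t busy_bits;   /* bits with outstanding frontbuffer rendering */
        uint32_t flip_bits;   /* bits waiting for a pending flip to complete */
    };

    static void fb_invalidate(struct fb_tracking *t, uint32_t bits)
    {
        t->busy_bits |= bits;
        t->flip_bits &= ~bits;          /* cancel any delayed flush for these */
    }

    static void fb_flush(struct fb_tracking *t, uint32_t bits)
    {
        bits &= ~t->flip_bits;          /* flips flush later, on completion */
        t->busy_bits &= ~bits;
    }

    static void fb_flip_prepare(struct fb_tracking *t, uint32_t bits)
    {
        t->flip_bits |= bits;
    }

    static void fb_flip_complete(struct fb_tracking *t, uint32_t bits)
    {
        bits &= t->flip_bits;           /* only flush what is still pending */
        t->flip_bits &= ~bits;
        t->busy_bits &= ~bits;
    }

    int main(void)
    {
        struct fb_tracking t = {0};
        fb_invalidate(&t, 0x1);         /* frontbuffer rendering starts */
        fb_flip_prepare(&t, 0x1);       /* pageflip queued */
        fb_flush(&t, 0x1);              /* ignored: flip still pending */
        fb_flip_complete(&t, 0x1);      /* now the bit is finally cleared */
        printf("busy=0x%x flip=0x%x\n", t.busy_bits, t.flip_bits);
        return 0;
    }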
-
Committed by Daniel Vetter
The downclocking checks a few more things, so it's not that simple to convert. Also, this should get unified with the drrs handling and use that code's locking. Otoh the drrs locking is about as haphazard as no locking, at least at first sight. For easier conversion ditch the upclocking on unload - we'll turn off everything anyway.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 19 June 2014, 2 commits
-
-
Committed by Daniel Vetter
So from just a quick look we seem to have enough information to accurately figure out whether a given gem bo is used as a frontbuffer and where exactly: we have obj->pin_count as a first check with no false negatives and only negligible false positives, and then we can just walk the modeset objects and figure out where exactly a buffer is used as scanout.
Except that we can't, due to locking order: if we already hold dev->struct_mutex we can't acquire any modeset locks, so we could potentially chase freed pointers and other evil stuff.
So we need something else. For that, introduce a new set of bits, obj->frontbuffer_bits, to track where a buffer object is used. That we can then chase without grabbing any modeset locks.
Of course the consumers of this (DRRS, PSR, FBC, ...) still need to be able to do their magic both when called from modeset and from gem code. But that can be easily achieved by adding locks for these specific subsystems which always nest within either kms or gem locking.
This patch just adds the relevant update code to all places. Note that if we ever support multi-planar scanout targets then we need one frontbuffer tracking bit per attachment point that we expose to userspace.
v2:
- Fix more oopsen. Oops.
- WARN if we leak obj->frontbuffer_bits when freeing a gem buffer. Fix the bugs this brought to light.
- s/update_frontbuffer_bits/update_fb_bits/. More consistent with the fb tracking functions (fb for gem object, frontbuffer for raw bits). And the function name was way too long.
v3: Size obj->frontbuffer_bits correctly so that all pipes fit in.
v4: Don't update fb bits in set_base on failure. Noticed by Chris.
v5: s/i915_gem_update_fb_bits/i915_gem_track_fb/. Also remove a few local enum pipe variables which are no longer needed, to keep the function arguments from going over the 80 char limit.
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
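A sketch of what such per-plane frontbuffer bits could look like; the group width and plane slots are assumptions for illustration, not the driver's actual layout:

    /* One group of frontbuffer bits per pipe, so every potential scanout
     * attachment point gets its own bit. */
    #define FRONTBUFFER_BITS_PER_PIPE 4     /* primary, cursor, sprites... (assumed) */

    #define PRIMARY_FRONTBUFFER_BIT(pipe) \
            (1u << ((pipe) * FRONTBUFFER_BITS_PER_PIPE + 0))
    #define CURSOR_FRONTBUFFER_BIT(pipe) \
            (1u << ((pipe) * FRONTBUFFER_BITS_PER_PIPE + 1))
    #define SPRITE_FRONTBUFFER_BIT(pipe, sprite) \
            (1u << ((pipe) * FRONTBUFFER_BITS_PER_PIPE + 2 + (sprite)))

    /* An object's frontbuffer_bits would then be the OR of the bits for every
     * plane currently scanning it out, updated whenever a plane gains or loses
     * the object, and readable without taking any modeset locks. */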
-
Committed by Oscar Mateo
The original commit that introduced it said:
    commit 0009e46c
    Author: Ben Widawsky <ben@bwidawsk.net>
    Date:   Fri Dec 6 14:11:02 2013 -0800
        drm/i915: Track which ring a context ran on
        Previously we dropped the association of a context to a ring. It is
        however very important to know which ring a context ran on (we could
        have reused the other member, but I was nitpicky). This is very
        important when we switch address spaces, which unlike context objects,
        do change per ring. As an example, if we have:
            RCS     BCS
            ctx A   ctx A
            ctx B   ctx B
        Without tracking the last ring B ran on, we wouldn't know to switch
        the address space on BCS in the last row.
But this is not really true, because we are already checking to != from (with "from" being = ring->last_context) and that should be enough to make sure we switch to the right address space. We would have a problem if we switched the context object for every ring (since then we would fail to do it in some situations), but we only switch it for the render ring, so we don't care.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 17 June 2014, 2 commits
-
-
Committed by Sourab Gupta
This patch enables the framework for using MMIO-based flip calls, in contrast with the CS-based flip calls which are being used currently. MMIO-based flip calls can be enabled on architectures where the Render and Blitter engines reside in different power wells. The decision to use MMIO flips can be made based on workloads to give 100% residency for the Media power well.
v2: The MMIO flips now use the interrupt-driven mechanism for issuing the flips when the target seqno is reached. (Incorporating Ville's idea)
v3: Rebasing on latest code. Code restructuring after incorporating Damien's comments.
v4: Addressing Ville's review comments
- general cleanup
- updating only base addr instead of calling update_primary_plane
- extending patch for gen5+ platforms
v5: Addressed Ville's review comments
- Making mmio flip vs cs flip selection based on module parameter
- Adding check for DRIVER_MODESET feature in notify_ring before calling notify mmio flip.
- Other changes mostly in function arguments
v6:
- Having a separate function to check the condition for using mmio flips (Ville)
- Propagating error code from i915_gem_check_olr (Ville)
v7:
- Adding __must_check with i915_gem_check_olr (Chris)
- Renaming mmio_flip_data to mmio_flip (Chris)
- Rebasing on latest nightly
v8:
- Rebasing on latest code
- Squash 3rd patch in series (mmio setbase vs page flip race) with this patch
- Added new tiling mode update in intel_do_mmio_flip (Chris)
v9:
- Check for obj->last_write_seqno being 0 instead of obj->ring being NULL in intel_postpone_flip, as this is a more restrictive condition (Chris)
v10:
- Applied Chris's suggestions for squashing patches 2,3 into this patch. These patches make the selection of CS vs MMIO flip at page flip time, and make the module parameter for using mmio flips tristate, the states being 'force CS flips', 'force mmio flips', 'driver discretion'. Changed the logic for driver discretion (Chris)
v11: Minor code cleanup (better readability, fixing whitespace errors, using lockdep to check mutex locked status in postpone_flip, removal of __must_check in function definition) (Chris)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Sourab Gupta <sourab.gupta@intel.com>
Signed-off-by: Akash Goel <akash.goel@intel.com>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk> # snb, ivb
[danvet: Fix up parameter alignment checkpatch spotted.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
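A condensed sketch of the tristate selection logic described in v10; the parameter values and the discretion heuristic are illustrative assumptions, not the driver's exact policy:

    #include <stdbool.h>

    enum flip_param { FORCE_CS_FLIPS = 0, FORCE_MMIO_FLIPS = 1, DRIVER_DISCRETION = 2 };

    bool use_mmio_flip(enum flip_param param, int gen, bool obj_busy_on_other_ring)
    {
        if (gen < 5)                     /* MMIO flips only wired up for gen5+ */
            return false;
        switch (param) {
        case FORCE_CS_FLIPS:
            return false;
        case FORCE_MMIO_FLIPS:
            return true;
        case DRIVER_DISCRETION:
        default:
            /* e.g. prefer MMIO when the BO is busy on another engine, so the
             * flip does not have to synchronize through the render ring */
            return obj_busy_on_other_ring;
        }
    }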
-
Committed by Akash Goel
This adds support for a write-enable bit in the GTT entry. It is handled via a read-only flag in the GEM buffer object, which is then used to decide how to set the bit when writing the GTT entries. Currently, by default, the batch buffers and ring buffers are marked as read-only.
v2: Moved the pte override code for the read-only bit to 'byt_pte_encode'. (Chris) Fixed the issue of leaving 'gt_old_ro' as unused. (Chris)
v3: Removed the 'gt_old_ro' field, now setting the RO bit only for ring buffers (Daniel).
v4: Added a new 'flags' parameter to all the pte(gen6) encode & insert_entries functions, in lieu of overloading the cache_level enum (Daniel).
v5: Removed the superfluous VLV check & changed the definition location of the PTE_READ_ONLY flag (Imre)
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
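The flags plumbing can be pictured with a short sketch; the bit positions here are placeholders, not the hardware's PTE layout:

    #include <stdint.h>

    #define PTE_READ_ONLY   (1u << 0)       /* GEM-level flag passed down */
    #define HW_PTE_VALID    (1ull << 0)     /* placeholder bit positions */
    #define HW_PTE_RO       (1ull << 1)

    /* The encode path takes a flags argument and sets a read-only bit in the
     * entry when the object is mapped RO (ring buffers, batch buffers). */
    uint64_t pte_encode(uint64_t addr, uint32_t flags)
    {
        uint64_t pte = (addr & ~0xfffull) | HW_PTE_VALID;

        if (flags & PTE_READ_ONLY)
            pte |= HW_PTE_RO;               /* GPU writes to this page are blocked */
        return pte;
    }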
-
- 14 June 2014, 1 commit
-
-
Committed by Rodrigo Vivi
The perfect solution for psr_exit is the hardware tracking the changes and doing the psr exit by itself. This scenario works for HSW and BDW with some environments like Gnome and Wayland. However there are many other scenarios where this isn't true. The main one right now is KDE users on HSW and BDW with PSR on: the user would miss many screen updates. For instance, any key typed could be seen only when the mouse cursor is moved.
So this patch introduces the ability to trigger PSR exit on the kernel side in some common cases. Most of the cases are covered by psr_exit at set_domain. The remaining cases are covered by triggering it at set_domain, busy_ioctl, sw_finish and mark_busy.
The downside here might be reducing the residency time in the cases where this already works very well, like the Gnome environment. But for now let's stay focused on fixing issues so PSR can be used by everybody and we could even get it enabled by default. Later we can add some alternatives to choose the level of PSR efficiency over a boot flag or even over a crtc property.
v2: remove exit from connector_dpms. Daniel pointed out this is the wrong way and also this isn't needed for BDW and HSW anyway.
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Vijay Purushothaman <vijay.a.purushothaman@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 13 June 2014, 3 commits
-
-
Committed by Daniel Vetter
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Imre Deak
Atm, the forcewake refcount will be incorrectly set to zero during system suspend if there is any reference held via the i915_forcewake_user debugfs entry. Fix this by simply not zeroing the sw counters during suspend and restoring the original state using them. Note that the only other places where we zeroed the counters were driver load and unload time, where it was redundant anyway.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78059
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Chris Wilson
Now that we have a release hook into i915_gem_object_free, we can move the explicit call to the internal stolen function and hook it up through the callback instead.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 12 June 2014, 1 commit
-
-
Committed by Jesse Barnes
This allows the system to enter the lowest power mode during system freeze.
v2: delete force wake timer at suspend (Imre)
v3: add GT work suspend function (Imre)
v4: use uncore forcewake reset (Daniel)
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Kristen Carlson Accardi <kristen@linux.intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 11 June 2014, 3 commits
-
-
Committed by Jesse Barnes
Working for real this time. i915_ppgtt_info has all sorts of good stuff in it and X is running nicely on top.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä
These are just single registers, so wasting space for the pipe offsets seems a bit pointless. Just use the _PIPE3() macro instead. Also rewrite the _PIPE3() macro to be more obvious, and protect the arguments properly.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Frob conflict.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
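For reference, the shape of such a three-way macro with properly parenthesized arguments looks roughly like this (a sketch, not copied from i915_reg.h):

    #define _PIPE3(pipe, a, b, c) \
            ((pipe) == 0 ? (a) : (pipe) == 1 ? (b) : (c))

    /* Because every argument is wrapped in parentheses, an expression such as
     * _PIPE3(crtc->pipe, REG_A, REG_B, REG_C) evaluates correctly even when
     * the arguments are themselves arithmetic expressions. */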
-
Committed by Rodrigo Vivi
"Because our driver assumes only one panel is PSR capable, and we already have other PSR information in dev_priv instead of intel_dp. If we ever support multiple PSR panels, we'll have to move struct i915_psr to intel_dp anyway." (by Paulo)
v2: Avoid more than one setup. Removing initialization and trusting allocation. (by Paulo Zanoni)
v3: rebase.
v4: Adding comment.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 05 June 2014, 1 commit
-
-
Committed by Shobhit Kumar
It seems that by default the VBT has a MIPI configuration block as well. The generic driver will always assume MIPI if a MIPI configuration block is found. This causes a problem when there is actually eDP. Fix this by looking into the general definition block, which contains the device configurations. From there we can figure out what the LFP type is and initialize MIPI only if MIPI is found.
v2: Addressed review comments by Damien
- Moved PORT definitions to intel_bios.h and renamed them to DVO_PORT_MIPIA
- Renamed is_mipi to has_mipi and moved the definition as suggested
- Check has_mipi inside parse_mipi and intel_dsi_init instead of outside
v3: Make has_mipi a bitfield as suggested
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: fold in conditions to pack everything neatly below 80 chars.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 27 May 2014, 2 commits
-
-
Committed by Chris Wilson
This is pure evil. Userspace, I'm looking at you SNA, repacks batch buffers on the fly after generation as they are being passed to the kernel for execution. These batches also contain self-referenced relocations as a single buffer encompasses the state commands, kernels, vertices and sampler. During generation the buffers are placed at known offsets within the full batch, and then the relocation deltas (as passed to the kernel) are tweaked as the batch is repacked into a smaller buffer. This means that userspace is passing negative relocation deltas, which subsequently wrap to large values if the batch is at a low address. The GPU hangs when it then tries to use the large value as a base for its address offsets, rather than wrapping back to the real value (as one would hope). As the GPU uses positive offsets from the base, we can treat the relocation address as the minimum address read by the GPU. For the upper bound, we trust that userspace will not read beyond the end of the buffer.
So, how do we fix negative relocations from wrapping? We can either check that every relocation looks valid when we write it, and then position each object such that we prevent the offset wraparound, or we just special-case the self-referential behaviour of SNA and force all batches to be above 256k. Daniel prefers the latter approach.
This fixes a GPU hang when it tries to use an address (relocation + offset) greater than the GTT size. The issue would occur quite easily with full-ppgtt as each fd gets its own VM space, so low offsets would often be handed out. However, with the rearrangement of the low GTT due to capturing the BIOS framebuffer, it is already affecting kernels 3.15 onwards. I think only IVB+ is susceptible to this bug, but the workaround should only kick in rarely, so it seems sensible to always apply it.
v3: Use a bias for batch buffers to prevent small negative delta relocations from wrapping.
v4 from Daniel:
- s/BIAS/BATCH_OFFSET_BIAS/
- Extract eb_vma_misplaced/i915_vma_misplaced since the conditions were growing rather cumbersome.
- Add a comment to eb_get_batch explaining why we do this.
- Apply the batch offset bias everywhere but mention that we've only observed it on gen7 gpus.
- Drop PIN_OFFSET_FIX for now, that slipped in from a feature patch.
v5: Add static to eb_get_batch, spotted by 0-day tester.
Testcase: igt/gem_bad_reloc
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78533
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v3)
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
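The wraparound is easy to demonstrate with a few lines of arithmetic (the offsets and delta below are illustrative values):

    /* A self-referential batch at a low GTT offset plus a negative relocation
     * delta wraps around in 32-bit arithmetic to a huge presumed base address,
     * which is what hangs the GPU. Pinning batches above a bias keeps
     * offset + delta positive. */
    #include <stdint.h>
    #include <stdio.h>

    #define BATCH_OFFSET_BIAS (256 * 1024)

    int main(void)
    {
        uint32_t batch_offset = 0x1000;   /* low address handed out by full-ppgtt */
        int32_t  delta        = -0x2000;  /* negative self-referential relocation */

        uint32_t reloc = batch_offset + (uint32_t)delta;
        printf("without bias: 0x%08x (wrapped past the top of the GTT)\n", reloc);

        batch_offset = BATCH_OFFSET_BIAS + 0x1000;
        reloc = batch_offset + (uint32_t)delta;
        printf("with bias:    0x%08x (still a valid, positive offset)\n", reloc);
        return 0;
    }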
-
Committed by Chris Wilson
A single object may be referenced by multiple registers, fundamentally breaking the static allotment of ids in the current design. When the object is used a second time, the physical address of the first assignment is relinquished and a second one granted. However, the hardware is still reading (and possibly writing) the old physical address, now returned to the system. Eventually hilarity will ensue, but in the short term it just means that cursors are broken when using more than one pipe.
v2: Fix up leak of pci handle when handling an error during attachment, and avoid a double kmap/kunmap. (Ville) Rebase against -fixes.
v3: And fix the error handling added in v2 (Ville)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77351
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 23 May 2014, 3 commits
-
-
Committed by Oscar Mateo
It's barely alive now anyway, so give it the "coup de grâce".
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Oscar Mateo
Up until now, contexts had one (and only one) backing object that was used by the hardware to save/restore render ring contexts (via the MI_SET_CONTEXT command). Other rings did not have or need this, so our i915_hw_context struct had a 1:1 relationship with a real HW context. With Logical Ring Contexts and Execlists, this is not possible anymore: all rings need a backing object, and it cannot be reused. To prepare for that, rename our contexts to the more generic term intel_context.
No functional changes.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Oscar Mateo
In the upcoming patches we plan to break the correlation between engine command streamers (a.k.a. rings) and ringbuffers, so it makes sense to refactor the code and make the change obvious.
No functional changes.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-