- 05 December 2014, 1 commit
-
-
Committed by Daniel Vetter
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 03 December 2014, 14 commits
-
-
Committed by John Harrison
Updated the trace_irq code to use requests instead of seqnos. This includes reference counting the request object to ensure it sticks around when required. Note that getting access to the reference counting functions means moving the inline i915_trace_irq_get() function from intel_ringbuffer.h to i915_drv.h. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> [danvet: Resolve conflict due to shuffled merge order.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by John Harrison
The ring member of the object structure was always updated with the last_read_seqno member. Thus with the conversion to last_read_req, obj->ring is now a direct copy of obj->last_read_req->ring. This makes it somewhat redundant and potentially misleading (especially as there was no comment to explain its purpose). This checkin removes the redundant field. Many uses were simply testing for non-null to see if the object is active on the GPU. Some of these have been converted to check 'obj->active' instead. Others (where the last_read_req is about to be used anyway) have been changed to check obj->last_read_req. The rest simply pull the ring out from the request structure and proceed as before. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by John Harrison
Almost everywhere that called i915_seqno_passed() was really asking 'has the given seqno popped out of the hardware yet?'. Thus it had to query the current hardware seqno and then do a signed delta comparison (which copes with wrapping around zero but not with seqno values more than 2GB apart, although the latter is unlikely!). Now that the majority of seqno instances have been replaced with request structures, it is possible to convert this test to be request based as well. There is now an 'i915_gem_request_completed()' function which takes a request and returns true or false as appropriate. Note that this currently just wraps up the original _passed() test but a later patch in the series will reduce this to simply returning a cached internal value, i.e.: _completed(req) { return req->completed; } This checkin converts almost all _seqno_passed() calls. The only one left is in the semaphore code, which still requires seqnos, not request structures. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> [danvet: Drop hunk touching the trace_irq code since I've dropped the patch which converts that, and resolve resulting conflict.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
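As a rough sketch (field and vfunc names per this era of the driver; details illustrative), the new test just wraps the old signed-delta comparison:

    static inline bool i915_seqno_passed(u32 seq1, u32 seq2)
    {
            /* The signed delta copes with wrap around zero, but not
             * with seqnos more than 2^31 apart.
             */
            return (s32)(seq1 - seq2) >= 0;
    }

    static inline bool
    i915_gem_request_completed(struct drm_i915_gem_request *req)
    {
            u32 seqno = req->ring->get_seqno(req->ring, true);

            return i915_seqno_passed(seqno, req->seqno);
    }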
-
Committed by John Harrison
There is no longer any need to retrieve a seqno value from an i915_add_request() call. The calling code already knows which request structure is being processed (it can only be ring->OLR). And as the request itself is now used in preference to the basic seqno value, the latter is now redundant in this situation. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by John Harrison
Now that all code above is using request structures instead of seqno values, it is possible to convert __wait_seqno() itself. Internally, it is still calling i915_seqno_passed(); this will be updated later in the series. This step is just changing the parameter list and function name. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
With refcounting it looks like you can just drop that refcount, but that's not really the case. So make sure no one forgets. Motivated by the unlocked call in the mmio flip code. Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
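A minimal sketch of the resulting assert, assuming the request is freed through a kref (see the refcounting patch below):

    static inline void
    i915_gem_request_unreference(struct drm_i915_gem_request *req)
    {
            /* Dropping the last reference can free the request, which
             * must not happen behind the back of the rest of GEM.
             */
            WARN_ON(!mutex_is_locked(&req->ring->dev->struct_mutex));
            kref_put(&req->ref, i915_gem_request_free);
    }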
-
Committed by Daniel Vetter
Updated i915_wait_seqno() to take a request structure instead of a seqno value and renamed it accordingly. Internally, it just pulls the seqno out of the request and calls on to __wait_seqno() as before. However, all the code further up the stack is now simplified as it can just pass the request object straight through without having to peek inside. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> [danvet: Squash in hunk from an earlier patch which was rebased wrongly.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by John Harrison
Updated the _check_olr() function to actually take a request object and compare it to the OLR rather than extracting seqnos and comparing those. Note that there is one use case where the request object being processed is no longer available at that point in the call stack. Hence a temporary copy of the original function is still present (but called _check_ols() instead). This will be removed in a subsequent patch. Also, downgraded a BUG_ON to a WARN_ON as apparently the former is frowned upon for shipping code. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by John Harrison
The object structure contains the last read, write and fenced seqno values for use in synchronisation operations. These have now been replaced with their request structure counterparts. Note that to ensure that objects do not end up with dangling pointers, the assignments of last_*_req include reference count updates. Thus a request cannot be freed if an object is still hanging on to it for any reason. v2: Corrected 'last_rendering_' to 'last_read_' in a number of comments that did not get updated when 'last_rendering_seqno' became 'last_read|write_seqno' several millennia ago. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
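The reference-counted assignment this describes amounts to a helper along these lines (a sketch; the new reference is taken before the old one is dropped, so self-assignment stays safe):

    static inline void
    i915_gem_request_assign(struct drm_i915_gem_request **pdst,
                            struct drm_i915_gem_request *src)
    {
            if (src)
                    i915_gem_request_reference(src);
            if (*pdst)
                    i915_gem_request_unreference(*pdst);
            *pdst = src;
    }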
-
Committed by John Harrison
Added helper functions for retrieving the ring and seqno entries from a request structure. This allows the internal workings of the request structure to be hidden from code that is using these. It also allows for useful workarounds/debug code to be added as or when necessary. Note that it is intended that the majority (if not all) uses of the seqno accessor will disappear eventually as code is updated to use the request structure itself rather than working with seqno values. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
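The accessors are presumably along these lines (sketch; NULL-tolerant so callers need not test the request first):

    static inline struct intel_engine_cs *
    i915_gem_request_get_ring(struct drm_i915_gem_request *req)
    {
            return req ? req->ring : NULL;
    }

    static inline u32
    i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
    {
            return req ? req->seqno : 0;
    }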
-
Committed by John Harrison
The plan is to use request structures everywhere that seqno values were previously used. This means saving pointers to structures in places that used to be simple integers. In turn, that means that the target structure now needs much more stringent lifetime tracking. That is, it must not be freed while some other random object still holds a pointer to it. To achieve this tracking, a reference count needs to be added. Whenever a pointer to the structure is saved away, the count must be incremented and the free must only occur when all references have been released. For: VIZ-4377 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
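A sketch of the scheme, assuming the standard kernel kref pattern:

    struct drm_i915_gem_request {
            struct kref ref;        /* lifetime tracking */
            struct intel_engine_cs *ring;
            u32 seqno;
            /* ... */
    };

    static inline void
    i915_gem_request_reference(struct drm_i915_gem_request *req)
    {
            /* Taken whenever a pointer to the request is saved away;
             * the free only happens once all references are dropped.
             */
            kref_get(&req->ref);
    }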
-
Committed by Daniel Vetter
Now unused. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Rodrigo Vivi
This patch is the last in the VLV/CHV PSR series; it finally enables PSR by adding it to HAS_PSR and calling the proper enable and disable functions in the right places. It is still disabled by default, though. v2: Rebase over intel_psr and merge Durgadoss's fixes. v3: Fix typo. Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Durgadoss R <durgadoss.r@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Rodrigo Vivi
The PSR (aka SRD) block is defined in the VBT and is currently being used, mainly (at least) to configure the number of idle_frames required to get back into PSR. Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: Durgadoss R <durgadoss.r@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 21 November 2014, 2 commits
-
-
Committed by Daniel Vetter
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Thomas Hellstrom
It happens on occasion that developers of generic user-space applications abuse the dumb buffer API to get hold of drm buffers that they can both mmap() and use for GPU acceleration, using the assumptions that dumb buffers and buffers available for GPU are a) of the same type and can be arbitrarily type-cast, and b) fully coherent. This patch makes the most widely used drivers warn nicely when that happens; the next step will be to fail. v2: Move drmP.h changes to drm_gem.h. Fix Radeon dumb mmap breakage. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
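The shape of the check is presumably something like this sketch (the obj->dumb flag and the call site are illustrative, not the exact patch):

    /* Set when the object is created through the dumb buffer API... */
    obj->dumb = true;

    /* ...and tested in the driver's GPU submission paths: */
    WARN_ONCE(obj->dumb,
              "GPU use of dumb buffer is illegal (wrong type, not coherent)\n");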
-
- 20 November 2014, 7 commits
-
-
Committed by Daniel Vetter
Let's just throw in the towel on this one and take the cheap way out. Based on a patch from Chris Wilson, but checking for a different bit. Chris' patch checked for even bank layout, this one here for a magic bit. Given the evidence we've gathered (not much) both work I think, but checking for the magic bit might be more accurate. Anyway, works on my gm45 here. For paranoia, restrict to gen4 (and mobile), since we've only ever seen this on gm45 and i965gm. Also add some debugfs output so that we can skip the tiled swapping tests properly in these cases. v2: Clean up the quirk'ed pin count in free_object to avoid upsetting the WARN_ON. Spotted by Chris. Cc: Chris Wilson <chris@chris-wilson.co.uk> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=28813 Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=45092 Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Daniel Vetter
Found one more! With this we can clear up the ggtt init code a bit, yay! Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Daniel Vetter
With this all the ums nonsense around gem setup/teardown has disappeared, yay! Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Daniel Vetter
Again, this just complicates the gem init functions and makes a general mess out of everything. Good riddance! v2: In my enthusiasm to start removing dri1/ums crud I went overboard a bit and killed parts of hangcheck. Resurrect it. Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Chris Wilson <chris@chris-wilson.co.uk> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Daniel Vetter
We've killed ums support by now; it's time to reap the benefits. This one here is getting in the way of doing some ring init cleanup. Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
-
Committed by Chris Wilson
With the deprecation of UMS, and by association DRI1, we have a tough choice when updating the ring access routines. We either rewrite the DRI1 routines blindly without testing (so likely to be broken) or take the liberty of declaring them no longer supported and remove them entirely. This takes the latter approach. v2: Also remove the DRI1 sarea updates. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> [danvet: Fix rebase conflicts.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Oscar Mateo
Up until now, we have pinned every logical ring context backing object during creation, and left it pinned until destruction. This made my life easier, but it's a harmful thing to do, because we cause fragmentation of the GGTT (and, eventually, we would run out of space). This patch makes the pinning on-demand: the backing objects of the two contexts that are written to the ELSP are pinned right before submission and unpinned once the hardware is done with them. The only context that is still pinned regardless is the global default one, so that the HWS can still be accessed in the same way (ring->status_page). v2: In the early version of this patch, we were pinning the context as we put it into the ELSP: on the one hand, this is very efficient because only a maximum of two contexts are pinned at any given time, but on the other hand, we cannot really pin in interrupt time :( v3: Use a mutex rather than atomic_t to protect pin count to avoid races. Do not unpin default context in free_request. v4: Break out pin and unpin into functions. Fix style problems reported by checkpatch. v5: Remove unpin_lock as all pinning and unpinning is done with the struct mutex already locked. Add WARN_ONs to make sure this is the case in future. Issue: VIZ-4277 Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> Signed-off-by: Thomas Daniel <thomas.daniel@intel.com> Reviewed-by: Akash Goel <akash.goels@gmail.com> Reviewed-by: Deepak S <deepak.s@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
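Roughly, the on-demand pin then looks like this (sketch; field and constant names are illustrative):

    static int intel_lr_context_pin(struct intel_engine_cs *ring,
                                    struct intel_context *ctx)
    {
            struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;
            int ret = 0;

            /* All pin/unpin now runs under struct_mutex (never in
             * interrupt time), so a plain counter suffices.
             */
            WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
            if (ctx->engine[ring->id].pin_count++ == 0) {
                    ret = i915_gem_obj_ggtt_pin(ctx_obj,
                                                GEN8_LR_CONTEXT_ALIGN, 0);
                    if (ret)
                            ctx->engine[ring->id].pin_count--;
            }
            return ret;
    }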
-
- 19 November 2014, 2 commits
-
-
Committed by Imre Deak
When disabling the RPS interrupts there is a tricky dependency between the thread disabling the interrupts, the RPS interrupt handler and the corresponding RPS work. The RPS work can reenable the interrupts, so there is no straightforward order in the disabling thread to (1) make sure that any RPS work is flushed and to (2) disable all RPS interrupts. Currently this is solved by masking the interrupts using two separate mask registers (first level display IMR and PM IMR) and doing the disabling when all first level interrupts are disabled. This works, but the requirement to run with all first level interrupts disabled is unnecessary, making the suspend / unload time ordering of RPS disabling wrt. other uninitialization steps difficult and error prone. Removing this restriction allows us to disable RPS early during suspend / unload and forget about it for the rest of the sequence. By adding a more explicit method for avoiding the above race, it also becomes easier to prove its correctness. Finally, currently we can hit the WARN in snb_update_pm_irq() when a final RPS work runs with the first level interrupts already disabled. This won't lead to any problem (due to the separate interrupt masks), but with the change in this and the next patch we can get rid of the WARN, while leaving it in place for other scenarios. To address the above points, add a new RPS interrupts_enabled flag and use this during RPS disabling to avoid requeuing the RPS work and reenabling of the RPS interrupts. Since the interrupt disabling happens now in intel_suspend_gt_powersave(), we will disable RPS interrupts explicitly during suspend (and not just through the first level mask), but there is no problem doing so; it's also more consistent and allows us to unify more of the RPS disabling during suspend and unload time in the next patch. v2/v3: - rebase on patch "drm/i915: move rps irq disable one level up" in the patchset Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
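The disabling thread can then sequence things explicitly; a sketch of the idea:

    void gen6_disable_rps_interrupts(struct drm_device *dev)
    {
            struct drm_i915_private *dev_priv = dev->dev_private;

            /* (1) Tell the irq handler and work not to rearm anything. */
            spin_lock_irq(&dev_priv->irq_lock);
            dev_priv->rps.interrupts_enabled = false;
            spin_unlock_irq(&dev_priv->irq_lock);

            /* (2) Flush any RPS work already queued. */
            cancel_work_sync(&dev_priv->rps.work);

            /* (3) Now mask the PM interrupts for good. */
            gen6_disable_pm_irq(dev_priv, dev_priv->pm_rps_events);
    }

The interrupt handler correspondingly only queues the RPS work while rps.interrupts_enabled is still set.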
-
Committed by Tom O'Rourke
In sandybridge_pcode_read and sandybridge_pcode_write, extend the mbox parameter from u8 to u32. On Haswell and Sandybridge, bits 7:0 encode the mailbox command and bits 28:8 are used for address control for specific commands. Based on a suggestion from Ville Syrjälä. Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
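After the change the prototypes read roughly:

    /* Bits 7:0: mailbox command; bits 28:8: address control for
     * specific commands, hence u32 rather than u8.
     */
    int sandybridge_pcode_read(struct drm_i915_private *dev_priv,
                               u32 mbox, u32 *val);
    int sandybridge_pcode_write(struct drm_i915_private *dev_priv,
                                u32 mbox, u32 val);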
-
- 14 November 2014, 5 commits
-
-
Committed by Satheeshakrishna M
On skylake, DPLL 1, 2 and 3 can be used for DP and HDMI. The shared dpll framework allows us to share those DPLLs among DDIs when possible. The trickiest part is to provide a DPLL state that can be easily compared. DPLL_CTRL1 is shared by all the DPLLs, 6 bits each. The per-DPLL ctrl1 field of the hw state is then normalized to be the same value if 2 DPLLs do indeed have identical values for those 6 bits. v2: Port the code to the shared DPLL infrastructure (Damien) v3: Rebase on top of Ander's clock computation staging work for atomic (Damien) Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> (v2) Signed-off-by: Satheeshakrishna M <satheeshakrishna.m@intel.com> (v1) Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
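A sketch of the normalization in the hw state readout (illustrative; each DPLL owns 6 bits of the shared register, shifted down to bits 5:0 so states compare directly):

    /* dpll_id is the DPLL's slot within DPLL_CTRL1 */
    val = I915_READ(DPLL_CTRL1);
    hw_state->ctrl1 = (val >> (dpll_id * 6)) & 0x3f;

Two DPLLs programmed identically thus end up with byte-identical hw states, which is exactly what the shared DPLL framework compares.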
-
Committed by Satheeshakrishna M
Add structure/enum for the SKL clocking implementation. v2: Addressed Damien's comment - Removed internal structure from this header file v3: Stove this into the generic intel_dpll_id enum and give them the established DPLL_ID_ prefixes. (Daniel) v4: - We'll only try to share DPLL1/2/3, leaving DPLL0 to eDP - Use SKL in the skylake shared DPLL names - Re-add the skl_dpll enum (Damien) v5: Remove SKL_DPLL_NONE (Daniel) v6: Modified as per review comments from Paulo Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Satheeshakrishna M <satheeshakrishna.m@intel.com> (v2) Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> (v4,v5) Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> (v3) Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
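Presumably the resulting id layout is roughly this (a sketch; DPLL0 is deliberately absent from the shared ids, as noted above):

    enum intel_dpll_id {
            DPLL_ID_PRIVATE = -1,   /* non-shared dpll in use */
            /* SKL: DPLL0 is reserved for eDP and not shared */
            DPLL_ID_SKL_DPLL1 = 0,
            DPLL_ID_SKL_DPLL2 = 1,
            DPLL_ID_SKL_DPLL3 = 2,
    };

    enum skl_dpll {
            SKL_DPLL0,
            SKL_DPLL1,
            SKL_DPLL2,
            SKL_DPLL3,
    };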
-
Committed by Jani Nikula
This is not used within the driver, and merely saving/restoring these registers isn't going to do any good anyway. In fact, it's possible it's actively harmful. Any code enabling the feature should handle this completely in the regular platform specific enable/disable backlight functions. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä
On VLV/CHV both pipes A and B have their own backlight control registers. In order to correctly read out the current hardware state at init we need to know which pipe is driving the eDP port. Pass that information down from the eDP init code into the backlight code. To determine the correct pipe we first look at which pipe is currently configured in the port control register; if that looks invalid we look at which pipe's PPS is currently controlling the port, and if that too looks invalid we just assume pipe A. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
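In pseudo-C, the fallback order (the helper names here are hypothetical, used only to show the sequence):

    static enum pipe vlv_dp_get_pipe(struct intel_dp *intel_dp)
    {
            enum pipe pipe;

            /* 1) pipe select field of the DP port control register */
            pipe = vlv_port_ctrl_pipe(intel_dp);    /* hypothetical */
            if (pipe != INVALID_PIPE)
                    return pipe;

            /* 2) which pipe's panel power sequencer owns the port */
            pipe = vlv_pps_pipe(intel_dp);          /* hypothetical */
            if (pipe != INVALID_PIPE)
                    return pipe;

            /* 3) last resort */
            return PIPE_A;
    }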
-
Committed by Chris Wilson
Currently objects for which the hardware needs a contiguous physical address are allocated a shadow backing storage to satisfy the constraint. This shadow buffer is not wired into the normal obj->pages and so the physical object is incoherent with accesses via the GPU, GTT and CPU. By setting up the appropriate scatter-gather table, we can allow userspace to access the physical object via either a GTT mmapping of the GEM bo or by rendering into it. However, keeping the CPU mmap of the shmemfs backing storage coherent with the contiguous shadow is not yet possible. Fortuitously, CPU mmaps of objects requiring physical addresses are not expected to be coherent anyway. This allows the physical constraint of the GEM object to be transparent to userspace, which can efficiently render into or update such objects via the GTT and GPU. v2: Fix leak of pci handle spotted by Ville v3: Remove the now duplicate call to detach_phys_object during free. v4: Wait for rendering before pwrite. As this patch makes it possible to render into the phys object, we should make it correct as well! Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
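The scatter-gather setup is presumably a single contiguous entry, along these lines (sketch):

    st = kmalloc(sizeof(*st), GFP_KERNEL);
    if (st == NULL || sg_alloc_table(st, 1, GFP_KERNEL)) {
            kfree(st);
            return -ENOMEM;
    }

    sg = st->sgl;
    sg->offset = 0;
    sg->length = obj->base.size;            /* whole object, one entry */
    sg_dma_address(sg) = obj->phys_handle->busaddr;
    sg_dma_len(sg) = obj->base.size;

    obj->pages = st;                        /* wired into the normal paths */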
-
- 08 November 2014, 9 commits
-
-
Committed by Daniel Vetter
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Ville Syrjälä
We need the HPLL frequency when calculating cdclk. Currently we read that out from the hardware every single time, which isn't going to fly very well if the device is runtime suspended. So cache the HPLL frequency in dev_priv and use the cached value. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reference: https://bugs.freedesktop.org/show_bug.cgi?id=82939 Reviewed-by: Imre Deak <imre.deak@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
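A sketch of the caching (one hardware read at init time, while the device is known to be awake; the helper name is illustrative):

    static void intel_update_hpll_freq(struct drm_i915_private *dev_priv)
    {
            /* the only remaining hardware read */
            dev_priv->hpll_freq = valleyview_get_vco(dev_priv);
            DRM_DEBUG_DRIVER("HPLL frequency: %d\n", dev_priv->hpll_freq);
    }

The cdclk calculations then read dev_priv->hpll_freq instead of touching the hardware, which stays safe while runtime suspended.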
-
Committed by Ander Conselvan de Oliveira
So that it can be used by the flip code to wait for rendering without holding any locks. Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Zhe Wang
Implement common forcewake functions shared by Gen9 features. v2: Make the forcewake_{get,put} functions static (Mika). Small coding style fix in the function definition (Damien). Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> Signed-off-by: Zhe Wang <zhe1.wang@intel.com> (v1) Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> (v2) Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Damien Lespiau
To correctly flush the new DDB allocation we need to know about the pipe allocation layout inside the DDB in order to sequence the re-allocation so as not to cause a newly allocated pipe to fetch from a space that was previously allocated to another pipe. This patch preserves the per-pipe (start, end) allocation to be used in the flush. Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Damien Lespiau
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Damien Lespiau
Ville suggested that we should use the same semantics as C arrays to reduce the number of those pesky +1/-1 in the allocation code. This patch leaves the debugfs file as is, showing the internal DDB allocation structure, not the values written in the registers. v2: Remove the test on ->end in skl_ddb_entry_size() (Ville) Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
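The resulting entry is presumably along these lines (sketch): 'end' is exclusive, so the size is a plain subtraction and start == end means unused:

    struct skl_ddb_entry {
            uint16_t start, end;    /* end is exclusive, C array style */
    };

    static inline uint16_t skl_ddb_entry_size(const struct skl_ddb_entry *e)
    {
            return e->end - e->start;
    }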
-
Committed by Damien Lespiau
v2: Don't check DDB on pre-SKL platforms. Don't check DDB state on disabled pipes. v3: Squash "Expose skl_ddb_get_hw_state()" Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Pradeep Bhat
This patch implements the watermark algorithm and its necessary functions. Two function pointers skl_update_wm and skl_update_sprite_wm are provided. The skl_update_wm will update the watermarks for the crtc provided as an argument and then check for changes in the DDB allocation for other active pipes, recomputing the watermarks for those pipes and planes as well. Finally it does the register programming for all dirty pipes. The trigger of the watermark double-buffer registers will have to happen once the plane configurations are done by the caller. v2: fixed the divide-by-0 error in the results computation func. Also reworked the PLANE_WM register values computation func to make it more compact. Incorporated all other review comments from Damien. v3: Changed the skl_compute_plane_wm function to now return success or failure. Also the result blocks and lines are computed here instead of in the skl_compute_wm_results function. v4: Adjust skl_ddb_alloc_changed() to the new planes/cursor split (Damien) v5: Reworked the affected functions to implement the new plane/cursor split. v6: Rework the logic that triggers the DDB allocation and WM computation of skl_update_other_pipe_wm() to not depend on non-computed DDB values. Always give a valid cursor_width (at boot it's 0) to keep the invariant that we consider the cursor plane always enabled. Otherwise we end up dividing by 0 in skl_compute_plane_wm() (Damien Lespiau) v7: Spell out allocation; skl_ddb_ functions should have the ddb as first argument; make the skl_ddb_alloc_changed() parameters const (Damien) v8: Rebase on top of the crtc->primary changes v9: Split the staging results structure to not exceed the 1Kb stack allocation in skl_update_wm() v10: Make skl_pipe_pixel_rate() take a pointer to the pipe config. Add a comment about overflow considerations for skl_wm_method1(). Various additions of const. Various use of sizeof(variable) instead of sizeof(type). Various moves of variable definitions to a narrower scope. Zero initialize some stack allocated structures to make sure we don't have garbage in case we don't write all the values (Ville) v11: Remove non-necessary default number of blocks/lines when the plane is disabled (Ville) Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Pradeep Bhat <pradeep.bhat@intel.com> Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
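For flavour, a rough sketch of the latency-based "method 1" mentioned in v10, assuming 512-byte DDB blocks (the real function carries a comment about its u32 overflow headroom):

    static uint32_t skl_wm_method1(uint32_t pixel_rate /* kHz */,
                                   uint8_t bytes_per_pixel,
                                   uint32_t latency /* us */)
    {
            uint32_t wm_intermediate_val;

            if (latency == 0)
                    return UINT_MAX;

            /* bytes fetched during the latency window, in 512B blocks */
            wm_intermediate_val = latency * pixel_rate * bytes_per_pixel / 512;
            return DIV_ROUND_UP(wm_intermediate_val, 1000);
    }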
-