- 08 Jul, 2015 1 commit
-
-
Submitted by Paulo Zanoni
Because it makes more sense there, IMHO. Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 07 Jul, 2015 2 commits
-
-
Submitted by Daniel Vetter
After the register save/restore code is gone there's just one user left and it just obfuscates that one. Remove it. Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Daniel Vetter
Since that's really what we want to test for. Note that removing the gen5 case doesn't change anything: in intel_setup_outputs ILK is handled already in the HAS_PCH_SPLIT case, and the register save/restore code touches registers which simply don't exist anymore at all. v2: Drop UMS parts. v3: Update commit message to reflect that the reg save/restore code is gone (Ville). Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 06 Jul, 2015 5 commits
-
-
Submitted by Paulo Zanoni
Make sure we're not going to have weird races in really weird cases where a lot of different CRTCs are doing rendering and modesets at the same time. With this change and the stolen_lock from the previous patch, we can start removing the struct_mutex locking we have around FBC in the next patches. v2: - Rebase (6 months later) - Also lock debugfs and stolen. v3: - Don't lock a single value read (Chris). - Replace lockdep assertions with WARNs (Daniel). - Improve commit message. - Don't forget intel_pre_plane_update() locking. v4: - Don't remove struct_mutex at intel_pre_plane_update() (Chris). - Add comment regarding locking dependencies (Chris). - Rebase after the stolen code rework. - Rebase again after drm-intel-nightly changes. v5: - Rebase after the new stolen_lock patch. Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v4) Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Paulo Zanoni
Which should protect dev_priv->mm.stolen usage. This will allow us to simplify the relationship between stolen memory, FBC and struct_mutex. v2: - Rebase after the stolen_remove_node() dev_priv patch move. - I realized that after we fixed a few things related to the FBC CFB size checks, we're not reallocating the CFB anymore with FBC enabled, so we can just move all the locking to i915_gem_stolen.c and stop worrying about freezing all the stolen allocations while freeing/reallocating the CFB. This allows us to fix the "Too coarse" observation from Chris. Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
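A minimal sketch of how such a dedicated lock can sit underneath the insert/remove helpers introduced in the entry below; the field and helper names here follow the commit text and are assumptions, not the verbatim kernel code:

```c
/* Illustrative only -- names follow the commit text, not the exact tree. */
static int stolen_insert_node_locked(struct drm_i915_private *dev_priv,
				     struct drm_mm_node *node,
				     u64 size, unsigned int alignment)
{
	int ret;

	mutex_lock(&dev_priv->mm.stolen_lock);
	ret = drm_mm_insert_node(&dev_priv->mm.stolen, node, size,
				 alignment, DRM_MM_SEARCH_DEFAULT);
	mutex_unlock(&dev_priv->mm.stolen_lock);

	return ret;
}

static void stolen_remove_node_locked(struct drm_i915_private *dev_priv,
				      struct drm_mm_node *node)
{
	mutex_lock(&dev_priv->mm.stolen_lock);
	drm_mm_remove_node(node);
	mutex_unlock(&dev_priv->mm.stolen_lock);
}
```

With the lock confined to i915_gem_stolen.c like this, FBC no longer needs struct_mutex just to manage its CFB allocation.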
-
Submitted by Paulo Zanoni
With the abstractions created by the last patch, we can move this code, and the only thing inside intel_fbc.c that knows about dev_priv->mm is the code that reads stolen_base. We also had to move a call to i915_gem_stolen_cleanup_compression() - now called intel_fbc_cleanup_cfb() - outside i915_gem_stolen.c. v2: - Rebase after the remove_node() changes on the previous patch. Requested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Paulo Zanoni
We want to move the FBC code out of i915_gem_stolen.c, but that code directly adds/removes stolen memory nodes. Let's create this abstraction, so i915_gem_stolen.c is still in control of all the stolen memory handling. The abstraction will also allow us to add locking assertions later. v2: - Add dev_priv as remove_node() argument since we'll need it later (Chris). Requested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Abdiel Janulgue
Ensures that the batch buffer is executed by the resource streamer, and lets userspace know whether the Resource Streamer is supported by the kernel. v2: Don't skip 1<<15 for the exec flags (Jani Nikula) v3: Use HAS_RESOURCE_STREAMER macro for execbuf validation (Chris Wilson) (from getparam patch) v2: Update I915_PARAM_HAS_RESOURCE_STREAMER so it's after I915_PARAM_HAS_GPU_RESET. v3: Only advertise RS support for hardware that supports it. v4: Add HAS_RESOURCE_STREAMER() macro (Chris) Testcase: igt/gem_exec_params Cc: Jani Nikula <jani.nikula@intel.com> Cc: Kenneth Graunke <kenneth@whitecape.org> Cc: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com> [danvet: squash in getparam patch since it'd break bisect, suggested by Chris.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
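Assuming the interface lands as a getparam named I915_PARAM_HAS_RESOURCE_STREAMER plus an execbuffer flag in bit 15 (I915_EXEC_RESOURCE_STREAMER), userspace could probe and request the feature roughly like this; a sketch, not code taken from the patch:

```c
#include <xf86drm.h>
#include <i915_drm.h>

/* Query whether the kernel/hardware combination supports the resource streamer. */
static int has_resource_streamer(int fd)
{
	int value = 0;
	struct drm_i915_getparam gp = {
		.param = I915_PARAM_HAS_RESOURCE_STREAMER,
		.value = &value,
	};

	if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
		return 0;

	return value;
}

/* Later, when submitting a batch that uses the RS (flag name assumed): */
/*	execbuf.flags |= I915_EXEC_RESOURCE_STREAMER;	// bit 15 per the commit */
```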
-
- 03 Jul, 2015 1 commit
-
-
Submitted by Daniel Vetter
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 30 Jun, 2015 2 commits
-
-
Submitted by Imre Deak
Although we have a fixed setting for the PLL9 and EBB4 registers, it still makes sense to check them together with the rest of the PLL registers. While at it, also remove a redundant comment about 10 bit clock enabling. Signed-off-by: Imre Deak <imre.deak@intel.com> Reviewed-by: Sonika Jindal <sonika.jindal@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by David Weinehall
This patch adds support for 0.85V VccIO on Skylake Y, separate buffer translation tables for Skylake U, and support for I_boost for the entries that need it. Changes in v2: * Refactored the code a bit to move all DDI signal level setup to intel_ddi.c Issue: VIZ-5677 Signed-off-by: David Weinehall <david.weinehall@linux.intel.com> Reviewed-by: Antti Koskipää <antti.koskipaa@linux.intel.com> [danvet: Apply style polish checkpatch suggested.] Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 29 Jun, 2015 2 commits
-
-
Submitted by Ville Syrjälä
Turns out the VLV/CHV system agent doesn't understand memory latencies, so trying to rely on the PND deadline mechanism is not going to fly, especially when DDR DVFS is enabled. Currently we try to avoid the problems by lying to the system agent about the deadlines and setting the FIFO watermarks to 8 cachelines. This however leads to bad memory self refresh residency. So in order to satisfy everyone we'll just give up on the deadline scheme and program the watermarks old school based on the worst case memory latency. I've modelled this a bit on the ILK+ approach where we compute multiple sets of watermarks for each pipe (PM2, PM5, DDR DVFS) and then merge the appropriate one later with the watermarks from other pipes. There isn't too much to merge actually since each pipe has a totally independent FIFO (well, apart from the mess with the partially shared DSPARB registers), but still decoupling the pipes from each other seems like a good idea. Eventually we'll want to perform the watermark update in two phases around the plane update to avoid underruns due to the single buffered watermark registers. But that's still in limbo for ILK+ too, so I've not gone that far yet for VLV/CHV either. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
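The "old school" scheme boils down to sizing each watermark so the FIFO can cover the worst-case memory latency at the plane's consumption rate. A rough illustration of that arithmetic follows; it is not the actual vlv/chv code, which also clamps against the per-pipe FIFO carved out of DSPARB:

```c
#include <linux/kernel.h>	/* DIV_ROUND_UP */

/*
 * Illustration only: bytes a plane fetches while riding out the worst-case
 * memory latency.  pixel_rate is in kHz, cpp is bytes per pixel,
 * latency is in microseconds.
 */
static unsigned int latency_wm_bytes(unsigned int pixel_rate_khz,
				     unsigned int cpp,
				     unsigned int latency_us)
{
	return DIV_ROUND_UP(pixel_rate_khz * cpp * latency_us, 1000);
}
```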
-
Submitted by Ville Syrjälä
Read out the current watermark settings from the hardware at driver init time. This will allow us to compare the newly calculated values against the current ones and potentially avoid needless WM updates. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 24 Jun, 2015 1 commit
-
-
Submitted by Daniel Vetter
With the new DRRS code it kinda sticks out, and we never managed to get this to work well enough without causing issues. Time to wave goodbye. I've decided to keep the logic for programming the reduced clocks intact, but everything else is gone. If anyone ever wants to resurrect this we need to redo it all anyway on top of the frontbuffer tracking. Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Acked-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 23 Jun, 2015 19 commits
-
-
Submitted by John Harrison
As there is no OLR to check, the check_olr() function is now a no-op and can be removed. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
In _i915_add_request(), the request is associated with a userland client. Specifically, it is linked to the 'file' structure and the current user process is recorded. One problem here is that the current user process is not necessarily the same as when the request was submitted to the driver. This is especially true when the GPU scheduler arrives and decouples driver submission from hardware submission. Note also that it is only in the case where the add request comes from an execbuff call that there is a client to associate. Any other add request call is kernel only so does not need to do it. This patch moves the client association into a separate function. This is then called from the execbuffer code path itself at a sensible time. It also removes the now redundant 'file' pointer from the add request parameter list. An extra cleanup of the client association is also added to the request clean up code for the eventuality where the request is killed after association but before being submitted (e.g. due to an out of memory error somewhere). Once the submission has happened, the request is on the request list and the regular request list removal will clear the association. Note that this still needs to happen at this point in time because the request might be kept floating around much longer (due to someone holding a reference count) and the client should not be worrying about this request after it has been retired. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
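Assuming the split-out helper ends up named along the lines of i915_gem_request_add_to_client(), the new call site in the execbuffer path would look roughly like this sketch (the error label is hypothetical):

```c
/* Sketch of the new call site in the execbuffer path -- not the literal diff. */
ret = i915_gem_request_add_to_client(params->request, file);
if (ret)
	goto err_request;	/* hypothetical label: cleanup drops the association again */
```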
-
Submitted by John Harrison
Converted i915_gem_l3_remap() to take a request structure instead of a ring. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
Now that everything above has been converted to use request structures, it is possible to update the lower level move_to_active() functions to be request based as well. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
Now that all callers of i915_add_request() have a request pointer to hand, it is possible to update the add request function to take a request pointer rather than pulling it out of the OLR. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
Updated the display page flip code to do explicit request creation and submission rather than relying on the OLR and just hoping that the request actually gets submitted at some random point. The sequence is now to create a request, queue the work to the ring, assign the known request to the flip queue work item, then actually submit the work and post the request. Note that every single flip function used to finish with '__intel_ring_advance(ring);'. However, immediately after they return there is now an add request call which will do the advance anyway. Thus the many duplicate advance calls have been removed. v2: Updated commit message with comment about advance removal. v3: The request can now be allocated by the _sync() code earlier on. Thus the page flip path does not necessarily need to allocate a new request, it may be able to re-use one. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
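In rough outline the new flip sequence reads as below; the helper names are approximations based on this patch series, not code copied from it:

```c
/* Approximate flow only; names are placeholders for the per-gen helpers. */
ret = i915_gem_request_alloc(ring, ring->default_context, &request);
if (ret)
	return ret;

ret = queue_flip(ring, crtc, fb, request);	/* emit the per-gen flip commands */
if (ret) {
	i915_gem_request_cancel(request);	/* abandon the reserved request */
	return ret;
}

work->flip_queued_req = i915_gem_request_reference(request);
i915_add_request(request);	/* advances the ring, so no per-flip __intel_ring_advance() */
```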
-
Submitted by John Harrison
The plan is to pass requests around as the basic submission tracking structure rather than rings and contexts. This patch updates the i915_gem_object_sync() code path. v2: Much more complex patch to share a single request between the sync and the page flip. The _sync() function now supports lazy allocation of the request structure. That is, if one is passed in then that will be used; if one is not, then a request will be allocated and passed back out. Note that the _sync() code does not necessarily require a request. Thus one will only be created in certain situations. The reason the lazy allocation must be done within the _sync() code itself is because the decision to need one or not is not really something that code above can second guess (except in the case where one is definitely not required because no ring is passed in). The call chains above _sync() now support passing a request through, with most callers passing in NULL and assuming that no request will be required (because they also pass in NULL for the ring and therefore can't be generating any ring code). The exception is intel_crtc_page_flip() which now supports having a request returned from _sync(). If one is, then that request is shared by the page flip (if the page flip is of a type to need a request). If _sync() does not generate a request but the page flip does need one, then the page flip path will create its own request. v3: Updated comment description to be clearer about 'to_req' parameter (Tomas Elf review request). Rebased onto newer tree that significantly changed the synchronisation code. v4: Updated comments from review feedback (Tomas Elf) For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
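The lazy-allocation contract for the 'to_req' parameter follows a simple pattern; the sketch below is illustrative only, with placeholder helper names rather than the real function body:

```c
/* Illustrative sketch of the to_req contract; helpers are placeholders. */
static int sync_to_ring(struct drm_i915_gem_object *obj,
			struct intel_engine_cs *to,
			struct drm_i915_gem_request **to_req)
{
	int ret;

	if (to == NULL)			/* CPU wait: no ring, so no request is needed */
		return cpu_wait_for_object(obj);

	if (*to_req == NULL) {
		/* Caller brought no request: allocate one and hand it back out. */
		ret = i915_gem_request_alloc(to, to->default_context, to_req);
		if (ret)
			return ret;
	}

	return emit_wait_or_semaphore(obj, *to_req);
}
```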
-
Submitted by John Harrison
Now that the request is guaranteed to specify the context, it is possible to update the context switch code to use requests rather than ring and context pairs. This patch updates i915_switch_context() accordingly. Also removed the warning that the request's context must match the last context switch's context. As the context switch now gets the context object from the request structure, there is no longer any scope for the two to become out of step. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
The final step in removing the OLR from i915_gem_init_hw() is to pass the newly allocated request structure in to each step rather than passing a ring structure. This patch updates both i915_ppgtt_init_ring() and i915_gem_context_enable() to take request pointers. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
Now that a single per-ring loop is being done for all the different initialisation steps in i915_gem_init_hw(), it is possible to add proper request management as well. The last remaining issue is that the context enable call eventually ends up within *_render_state_init() and this does its own private _i915_add_request() call. This patch adds explicit request creation and submission to the top level loop and removes the add_request() from deep within the sub-functions. v2: Updated for removal of batch_obj from add_request call in previous patch. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
The start of day context initialisation code in i915_gem_context_enable() loops over each ring and calls the legacy switch context or the execlist init context code as appropriate. This patch moves the ring looping out of that function into the top level caller i915_gem_init_hw(). This means that a single pass can be made over all rings doing the PPGTT, L3 remap and context initialisation of each ring altogether. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
In order to explicitly track all GPU work (and completely remove the outstanding lazy request), it is necessary to add extra i915_add_request() calls to various places. Some of these do not need the implicit cache flush done as part of the standard batch buffer submission process. This patch adds a flag to _add_request() to specify whether the flush is required or not. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
The plan is to pass requests around as the basic submission tracking structure rather than rings and contexts. This patch updates the execbuffer_move_to_active() code path. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
Rather than just having a local request variable in the execbuff code, the request pointer is now stored in the execbuff params structure. Also added explicit cleanup of the request (plus wiping the OLR to match) in the error case. This means that the execbuff code is no longer dependent upon the OLR keeping track of the request so as to not leak it when things do go wrong. Note that in the success case, the i915_add_request() at the end of the submission function will tidy up the request and clear the OLR. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
The alloc_request() function does not actually return the newly allocated request. Instead, it must be pulled from ring->outstanding_lazy_request. This patch fixes this so that code can create a request and start using it knowing exactly which request it actually owns. v2: Updated for new i915_gem_request_alloc() scheme. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
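A before/after sketch of the ownership change being described; the exact call signatures are assumptions, not lifted from the patch:

```c
/* Before (illustrative): the caller had to fish the request out of the OLR. */
i915_gem_request_alloc(ring, ctx);
request = ring->outstanding_lazy_request;

/* After (illustrative): the caller owns exactly the request it asked for. */
struct drm_i915_gem_request *request;
ret = i915_gem_request_alloc(ring, ctx, &request);
if (ret)
	return ret;
```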
-
Submitted by John Harrison
Shrunk the parameter list of i915_gem_execbuffer_retire_commands() to a single structure, as everything it requires is available in the execbuff_params object. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
The do_execbuf() function takes quite a few parameters. The actual set of parameters is going to change with the conversion to passing requests around. Further, it is due to grow massively with the arrival of the GPU scheduler. This patch simplifies the prototype by passing a parameter structure instead. Changing the parameter set in the future is then simply a matter of adding/removing items to the structure. Note that the structure does not contain absolutely everything that is passed in. This is because the intention is to use this structure more extensively later in this patch series and more especially in the GPU scheduler that is coming soon. The latter requires hanging on to the structure as the final hardware submission can be delayed until long after the execbuf IOCTL has returned to user land. Thus it is unsafe to put anything in the structure that is local to the IOCTL call itself - such as the 'args' parameter. All entries must be copies of data or pointers to structures that are reference counted in some way and guaranteed to exist for the duration of the batch buffer's life. v2: Rebased to newer tree and updated for changes to the command parser. Specifically, a code shuffle has required saving the batch start address in the params structure. For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
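A sketch of the kind of parameter block being described; the field list is indicative only, not the exact i915_execbuffer_params definition:

```c
/* Indicative only; the real structure differs in detail. */
struct i915_execbuffer_params {
	struct drm_device		*dev;
	struct drm_file			*file;
	struct intel_engine_cs		*ring;
	struct intel_context		*ctx;
	struct drm_i915_gem_object	*batch_obj;
	u64				batch_obj_vm_offset;
	u32				args_batch_start_offset;	/* saved for the command parser */
	u32				dispatch_flags;
	struct drm_i915_gem_request	*request;	/* added by the later request patches */
};
```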
-
Submitted by John Harrison
The i915_add_request() function is called to keep track of work that has been written to the ring buffer. It adds epilogue commands to track progress (seqno updates and such), moves the request structure onto the right list and does other such housekeeping tasks. However, the work itself has already been written to the ring and will get executed whether or not the add request call succeeds. So no matter what goes wrong, there isn't a whole lot of point in failing the call. At the moment, this is fine(ish). If the add request does bail early on and not do the housekeeping, the request will still float around in the ring->outstanding_lazy_request field and be picked up next time. It means multiple pieces of work will be tagged as the same request and the driver can't actually wait for the first piece of work until something else has been submitted. But it all sort of hangs together. This patch series is all about removing the OLR and guaranteeing that each piece of work gets its own personal request. That means that there is no more 'hoovering up of forgotten requests'. If the request does not get tracked then it will be leaked. Thus the add request call _must_ not fail. The previous patch should have already ensured that it _will_ not fail by removing the potential for running out of ring space. This patch enforces the rule by actually removing the early exit paths and the return code. Note that if something does manage to fail and the epilogue commands don't get written to the ring, the driver will still hang together. The request will be added to the tracking lists. And as in the old case, any subsequent work will generate a new seqno which will suffice for marking the old one as complete. v2: Improved WARNings (Tomas Elf review request). For: VIZ-5115 Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by John Harrison
It is a bad idea for i915_add_request() to fail. The work will already have been sent to the ring and will be processed, but there will not be any tracking or management of that work. The only way the add request call can fail is if it can't write its epilogue commands to the ring (cache flushing, seqno updates, interrupt signalling). The reasons for that are mostly down to running out of ring buffer space and the problems associated with trying to get some more. This patch prevents that situation from happening in the first place. When a request is created, it marks sufficient space as reserved for the epilogue commands, thus guaranteeing that by the time the epilogue is written there will be plenty of space for it. Note that a ring_begin() call is required to actually reserve the space (and do any potential waiting). However, that is not currently done at request creation time. This is because the ring_begin() code can allocate a request. Hence calling begin() from the request allocation code would lead to infinite recursion! Later patches in this series remove the need for begin() to do the allocate. At that point, it becomes safe for the allocate to call begin() and really reserve the space. Until then, there is a potential for insufficient space to be available at the point of calling i915_add_request(). However, that would only be in the case where the request was created and immediately submitted without ever calling ring_begin() and adding any work to that request, which should never happen. And even if it does, and if that request happens to fall into the tiny window of opportunity for failing due to being out of ring space, does it really matter, given that the request wasn't doing anything in the first place? v2: Updated the 'reserved space too small' warning to include the offending sizes. Added a 'cancel' operation to clean up when a request is abandoned. Added re-initialisation of tracking state after a buffer wrap to keep the sanity checks accurate. v3: Incremented the reserved size to accommodate Ironlake (after finally managing to run on an ILK system). Also fixed missing wrap code in LRC mode. v4: Added extra comment and removed duplicate WARN (feedback from Tomas). For: VIZ-5115 CC: Tomas Elf <tomas.elf@intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
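Conceptually the reservation scheme works as sketched below; the helper names and the reserve size are placeholders, not the exact kernel code:

```c
/* Conceptual sketch only; names and sizes are placeholders. */
#define RESERVED_FOR_ADD_REQUEST	128	/* room for flush + seqno write + interrupt */

/* At request creation: earmark tail space so the epilogue can never run dry. */
static void reserve_epilogue_space(struct intel_ringbuffer *ringbuf)
{
	ringbuf->reserved_size = RESERVED_FOR_ADD_REQUEST;
}

/* Every normal ring_begin() must leave the reserved tail untouched. */
static int begin(struct intel_ringbuffer *ringbuf, int num_dwords)
{
	return wait_for_ring_space(ringbuf,
				   num_dwords * sizeof(u32) + ringbuf->reserved_size);
}

/* i915_add_request() (or a cancel on the error path) finally consumes the reservation. */
static void consume_reservation(struct intel_ringbuffer *ringbuf)
{
	ringbuf->reserved_size = 0;
}
```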
-
- 22 Jun, 2015 2 commits
-
-
Submitted by Jani Nikula
We have enough generic hotplug functions sprinkled all over i915_irq.c to warrant moving them to a file of their own. This should further underline the distinction between generic code in the new file and platform specific hotplug and irq code that remains in i915_irq.c. Add new intel_hpd_init_work to keep the work functions static, and rename get_port_from_pin to intel_hpd_pin_to_port while increasing its visibility, but keep everything else the same. Signed-off-by: Jani Nikula <jani.nikula@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Maarten Lankhorst
The Skylake scalers depend on the cdclk freq, but that frequency can change during a modeset. So when a modeset happens, calculate the new cdclk in the atomic state. With the transitional helpers gone, the cached value can be used in the scaler, and committed after all crtcs are disabled. Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=90874 Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Tested-by(IVB): Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 20 Jun, 2015 1 commit
-
-
Submitted by Daniel Vetter
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 16 Jun, 2015 1 commit
-
-
Submitted by Paulo Zanoni
Because we're currently using FBC_UNSUPPORTED_MODE for two different cases. This commit will also allow us to write the next one without hiding information from the user. Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
- 15 Jun, 2015 3 commits
-
-
Submitted by Chris Wilson
In igt, we want to test handling of GPU hangs, both for recovery purposes and for reporting. However, we don't want to inject a genuine GPU hang onto a machine that cannot recover and so be permanently wedged. Rather than embed heuristics into igt, have the kernel report exactly when it expects the GPU reset to work. This can also be usefully extended in future to indicate different levels of fine-grained resets. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Cc: Daniel Vetter <daniel.vetter@ffwll.ch> Cc: Tim Gore <tim.gore@intel.com> Cc: Tomas Elf <tomas.elf@intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Submitted by Francisco Jerez
Until now the software command checker assumed that commands could read or write at most a single register per packet. This is not necessarily the case: MI_LOAD_REGISTER_IMM expects a variable-length list of offset/value pairs and writes them in sequence. The previous code would only check whether the first entry was valid, effectively allowing userspace to write unrestricted registers of the MMIO space by sending a multi-register write with a legal first register, with potential security implications on Gen6 and 7 hardware. Fix it by extending the drm_i915_cmd_descriptor table to represent multi-register access and making validate_cmd() iterate over all register offsets present in the command packet. Signed-off-by: Francisco Jerez <currojerez@riseup.net> Reviewed-by: Zhigang Gong <zhigang.gong@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
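The fix amounts to walking every offset/value pair in the packet rather than only the first; a simplified illustration of that check (not the actual validate_cmd() code):

```c
/*
 * Simplified illustration, not the actual validate_cmd() code.
 * For an MI_LOAD_REGISTER_IMM-style packet, cmd[0] is the header and the
 * payload is a list of (register offset, value) pairs.
 */
static bool check_register_write_packet(const u32 *cmd, u32 length,
					bool (*offset_allowed)(u32 offset))
{
	u32 i;

	for (i = 1; i < length; i += 2) {	/* step over offset/value pairs */
		u32 offset = cmd[i] & 0x007ffffc;	/* register offset bits */

		if (!offset_allowed(offset))
			return false;		/* reject the whole batch */
	}

	return true;
}
```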
-
Submitted by Francisco Jerez
Until now the software command checker assumed that commands could read or write at most a single register per packet. This is not necessarily the case: MI_LOAD_REGISTER_IMM expects a variable-length list of offset/value pairs and writes them in sequence. The previous code would only check whether the first entry was valid, effectively allowing userspace to write unrestricted registers of the MMIO space by sending a multi-register write with a legal first register, with potential security implications on Gen6 and 7 hardware. Fix it by extending the drm_i915_cmd_descriptor table to represent multi-register access and making validate_cmd() iterate over all register offsets present in the command packet. Signed-off-by: Francisco Jerez <currojerez@riseup.net> Reviewed-by: Zhigang Gong <zhigang.gong@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-