- 12 Apr 2016, 15 commits
-
-
Committed by Ville Syrjälä
Store the extracted panel_type under dev_priv.vbt instead of keeping around a static variable for it.
Cc: Rob Kramer <rob@solution-space.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Ville Syrjälä
VBT can only contain 16 panel entries, indexed with the panel_type. To play it safe we should reject panel_type > 0xf, so that we don't read past the valid data.
v2: Add debug logging (Jani)
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Rob Kramer <rob@solution-space.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com> (v1)
Link: http://patchwork.freedesktop.org/patch/msgid/1460359329-10817-1-git-send-email-ville.syrjala@linux.intel.com
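A minimal sketch of the kind of bounds check described here, using made-up structure and function names rather than the actual i915/VBT code:

```c
#include <stdio.h>

#define MAX_PANEL_ENTRIES 16  /* the table holds at most 16 panel entries */

/* Hypothetical stand-in for a VBT panel table entry. */
struct panel_entry {
    int backlight_level;
};

/* Return the entry for panel_type, or NULL if the index would read past the table. */
static const struct panel_entry *get_panel_entry(const struct panel_entry *table,
                                                 int panel_type)
{
    if (panel_type < 0 || panel_type >= MAX_PANEL_ENTRIES) {
        fprintf(stderr, "bogus panel_type %d, ignoring\n", panel_type);
        return NULL;
    }
    return &table[panel_type];
}

int main(void)
{
    struct panel_entry table[MAX_PANEL_ENTRIES] = { { .backlight_level = 200 } };

    printf("valid index:    %p\n", (const void *)get_panel_entry(table, 0));
    printf("rejected index: %p\n", (const void *)get_panel_entry(table, 0x1f));
    return 0;
}
```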
-
Committed by Ville Syrjälä
There's no real reason the user should care that we're about to fall back to bit-banging, so let's change the message from DRM_INFO to DRM_DEBUG_KMS.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1457366220-29409-5-git-send-email-ville.syrjala@linux.intel.com
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94890
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-
Committed by Ville Syrjälä
When the GMBUS based i2c transfer times out, we try to fall back to bit-banging and retry the operation that way. However, if the bit-banging attempt also fails, we should probably go back to the GMBUS method for the next attempt. Maybe there simply wasn't anyone on the bus at this time.
There's also a bit of a mess going on with the force_bit handling. It's supposed to be a ref count actually, and it is as far as intel_gmbus_force_bit() is concerned. But it's treated as just a flag by the timeout based bit-banging fallback. I suppose that's fine since we should never end up in the timeout fallback case if force_bit was already non-zero. However, now that we want to restore things back to where they were after the bit-banging attempt failed, we're going to have to do things a bit differently to avoid clobbering the force_bit count as set up by intel_gmbus_force_bit(). So let's dedicate the high bit as a flag for the low level timeout based fallback and treat the rest of the bits as a ref count just as before.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1457366220-29409-4-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
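The "high bit as a flag, remaining bits as a ref count" idea can be sketched in plain C; the constants and helper names below are illustrative, not the driver's actual fields:

```c
#include <assert.h>
#include <stdio.h>

/* Illustrative encoding: bit 31 marks the timeout fallback, bits 0-30 hold the ref count. */
#define FORCE_BIT_FALLBACK (1u << 31)
#define FORCE_BIT_REFS     (~FORCE_BIT_FALLBACK)

static unsigned int force_bit;  /* stand-in for the per-bus force_bit word */

static void force_bit_get(void)      { force_bit++; }                       /* explicit user takes a reference */
static void force_bit_put(void)      { assert(force_bit & FORCE_BIT_REFS); force_bit--; }
static void fallback_enable(void)    { force_bit |= FORCE_BIT_FALLBACK; }   /* timeout path sets only the flag */
static void fallback_disable(void)   { force_bit &= ~FORCE_BIT_FALLBACK; }  /* ...and clears it without touching refs */
static int  bit_banging_in_use(void) { return force_bit != 0; }

int main(void)
{
    force_bit_get();        /* explicit user of bit-banging */
    fallback_enable();      /* timeout fallback kicks in */
    fallback_disable();     /* fallback failed, go back to GMBUS for the next attempt */
    printf("still forced by explicit user: %d\n", bit_banging_in_use()); /* 1: ref count preserved */
    force_bit_put();
    printf("forced after last ref dropped: %d\n", bit_banging_in_use()); /* 0 */
    return 0;
}
```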
-
Committed by Ville Syrjälä
Extend the protection of gmbus_mutex around the force_bit RMW in intel_gmbus_force_bit(), in case someone gets the idea of calling it from a separate thread while there's other stuff happening on the same bus.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1457366220-29409-3-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
-
Committed by Chris Wilson
Since we only ever use the drm_i915_private from the stored i915_mm_struct->dev, save some electrons by storing the right backpointer.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-3-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Holding a reference to the containing task_struct is not sufficient to prevent the mm_struct from being reaped under memory pressure. If this happens whilst we are calling get_user_pages(), explosions erupt - sometimes an immediate GPF, sometimes page flag corruption. To prevent the target mm from being reaped as we are reading from it, acquire a reference before we begin.
Testcase: igt/gem_shrink/*userptr
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: stable@vger.kernel.org
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-2-git-send-email-chris@chris-wilson.co.uk
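The general pattern — pin the resource with its own reference before a long-running access, instead of relying on a reference to its owner — can be illustrated in userspace C with C11 atomics. The structure and helpers here are invented for the example and are not the kernel's mm refcounting API:

```c
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a resource that can be reaped under pressure, like an mm_struct. */
struct address_space_like {
    atomic_int refcount;
    int pages;
};

static void space_get(struct address_space_like *s)
{
    atomic_fetch_add(&s->refcount, 1);
}

static void space_put(struct address_space_like *s)
{
    if (atomic_fetch_sub(&s->refcount, 1) == 1)
        free(s);  /* last reference: the resource really goes away here */
}

/* Long-running access: only safe while we hold our own reference. */
static void read_pages(struct address_space_like *s)
{
    printf("reading %d pages\n", s->pages);
}

int main(void)
{
    struct address_space_like *s = malloc(sizeof(*s));
    atomic_init(&s->refcount, 1);   /* owner's reference */
    s->pages = 64;

    space_get(s);    /* take our reference *before* starting the access */
    read_pages(s);   /* the owner dropping its reference can no longer free s under us */
    space_put(s);

    space_put(s);    /* owner finally lets go */
    return 0;
}
```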
-
Committed by Chris Wilson
In order to ensure that all invalidations are completed before the operation returns to userspace (i.e. before the munmap() syscall returns) we need to wait upon the outstanding operations. We are allowed to block inside the invalidate_range_start callback, and as struct_mutex is the inner lock with mmap_sem we can wait upon the struct_mutex without provoking lockdep into warning about a deadlock. However, we don't actually want to wait upon outstanding rendering whilst holding the struct_mutex if we can help it, otherwise we also block other processes from submitting work to the GPU. So first we do a wait without the lock and then, when we reacquire the lock, we double check that everything is ready for removing the invalidated pages.
Finally, to wait upon the outstanding unpinning tasks, we create a private workqueue as a means to conveniently wait upon all at once. The drawback is that this workqueue is per-mm, so any threads concurrently invalidating objects will wait upon each other. The advantage of using the workqueue is that we can wait in parallel for completion of rendering and unpinning of several objects (of particular importance if the process terminates with a whole mm full of objects).
v2: Apply a cup of tea to the changelog.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94699
Testcase: igt/gem_userptr_blits/sync-unmap-cycles
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1459864801-28606-1-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
-
Committed by Daniel Vetter
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-
Committed by Chris Wilson
If we want a contiguous mapping of a single page sized object, we can forgo using vmap() and just use a regular kmap(). Note that this is only suitable if the desired pgprot_t is compatible.
v2: Use is_vmalloc_addr()
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Dave Gordon <david.s.gordon@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460113874-17366-7-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
-
Committed by Chris Wilson
I have instances where I want to use drm_malloc_ab() but with a custom gfp mask. And with those, where I want a temporary allocation, I want to try a high-order kmalloc() before using a vmalloc(). So refactor my usage into drm_malloc_gfp().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: dri-devel@lists.freedesktop.org
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460113874-17366-6-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
When called because we have run out of vmap address space, we only need to recover objects that have vmappings, not all of them.
v2: Start using is_vmalloc_addr()
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460113874-17366-5-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
-
Committed by Chris Wilson
We now have two implementations for vmapping a whole object, one for dma-buf and one for the ringbuffer. If we couple the mapping into the obj->pages lifetime, then we can reuse an obj->mapping for both and at the same time couple it into the shrinker. There is a third vmapping routine in the cmdparser that maps only a range within the object; for the time being that is left alone, but it will eventually use these routines in order to cache the mapping between invocations.
v2: Mark the failable kmalloc() as __GFP_NOWARN (vsyrjala)
v3: Call unpin_vmap from the right dmabuf unmapper
v4: Rename vmap to map as we don't wish to imply the type of mapping involved, just that it contiguously maps the object into kernel space. Add kerneldoc and lockdep annotations
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Dave Gordon <david.s.gordon@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460113874-17366-4-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
-
Committed by Chris Wilson
After we pin the ringbuffer into the GGTT, all error paths need to unpin it again. Move this common step into one block, and make the unable-to-iomap error code consistent (i.e. treat it as out of memory to avoid confusing it with an invalid argument).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460113874-17366-3-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
We only need the struct_mutex to manipulate the pages_pin_count on the object; we do not need to hold our BKL when freeing the exported scatterlist.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460113874-17366-2-git-send-email-chris@chris-wilson.co.uk
-
- 11 Apr 2016, 5 commits
-
-
Committed by Tim Gore
This is to fix a GPU hang seen with mid thread pre-emption and pooled EUs.
v2: Use IS_BXT_REVID instead of IS_BROXTON and INTEL_REVID
v3: And use correct type for register addresses
Signed-off-by: Tim Gore <tim.gore@intel.com>
Reviewed-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1458571049-854-1-git-send-email-tim.gore@intel.com
-
Committed by Dongwon Kim
For BXT, the description of the polarities of PORT_PLL_REF_SEL has been reversed for newer Gen9LP steppings according to the recent update in Bspec. This bit should now be set for "Non-SSC" mode on all Gen9LP starting from the B0 stepping.
v2: Only B0 and newer steppings should be affected by this change.
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=94866
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1458176773-26925-1-git-send-email-dongwon.kim@intel.com
-
Committed by Maarten Lankhorst
Check functions are used by atomic to see if the new state will be allowed. There's also a hw state checker which checks afterwards that the committed state is correct. Rename it to hw state verifier to reduce some confusion.
Suggested-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/56FB8785.8020506@linux.intel.com
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
-
Committed by Maarten Lankhorst
The modeset state verifier no longer has full access to the hardware; instead it should only verify the affected crtc's. Disabled state can be verified immediately after all crtc disables have completed, while each enabled crtc can be verified right after being enabled.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1458741487-23801-3-git-send-email-maarten.lankhorst@linux.intel.com
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
[mlankhorst: check -> verify]
-
Committed by Maarten Lankhorst
This will make it easier to keep the crtc checker when atomic commit is reworked for asynchronous commits. This prevents checking crtc's that were not part of the state. It's safe to verify disabled encoders, connectors and dpll's that are not part of the state, because connection_mutex is held during a modeset.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1458741487-23801-2-git-send-email-maarten.lankhorst@linux.intel.com
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
[mlankhorst: Extend commit message and rename check to verify.]
-
- 09 Apr 2016, 9 commits
-
-
Committed by Chris Wilson
When reading from the HWS page, we use barrier() to prevent the compiler optimising away the read from the volatile (may be updated by the GPU) memory address. This is more suited to READ_ONCE(); make it so.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460195877-20520-5-git-send-email-chris@chris-wilson.co.uk
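For readers unfamiliar with the idiom: READ_ONCE() boils down to a volatile access that stops the compiler from caching or eliding the load. A simplified userspace approximation (the kernel's real macro lives in compiler.h and handles more cases) looks like this:

```c
#include <stdio.h>

/* Simplified userspace approximation of the kernel's READ_ONCE(): force a
 * volatile load so the compiler cannot hoist it out of a loop or reuse a
 * previously loaded value. */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

static unsigned int hws_seqno;  /* stand-in for a status page word updated by another agent */

static int seqno_passed(unsigned int target)
{
    /* Without READ_ONCE() the compiler could legally turn a polling loop on
     * this function into a single load followed by an infinite loop. */
    return READ_ONCE(hws_seqno) >= target;
}

int main(void)
{
    hws_seqno = 42;
    printf("passed 41? %d\n", seqno_passed(41));
    printf("passed 43? %d\n", seqno_passed(43));
    return 0;
}
```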
-
Committed by Chris Wilson
Rather than call a function to compute the matching cachelines and clflush them, just call the clflush *instruction* directly. We also know that we can use the unpatched plain clflush rather than the clflushopt alternative.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460195877-20520-4-git-send-email-chris@chris-wilson.co.uk
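As a rough x86-only illustration (not the driver's actual code), issuing clflush for a single address instead of walking a range helper can be done with the SSE2 intrinsic:

```c
/* x86-only sketch: flush the cacheline containing one address directly.
 * Build with a compiler that supports SSE2 intrinsics (gcc/clang on x86-64). */
#include <emmintrin.h>
#include <stdio.h>

static unsigned int status_word;

static unsigned int read_uncached(const unsigned int *p)
{
    _mm_clflush(p);   /* evict the line so the next load comes from memory */
    _mm_mfence();     /* order the flush against the following load */
    return *(const volatile unsigned int *)p;
}

int main(void)
{
    status_word = 0xdeadbeef;
    printf("0x%x\n", read_uncached(&status_word));
    return 0;
}
```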
-
Committed by Chris Wilson
Only declare a missed interrupt if we find that the GPU is idle with waiters and a hangcheck interval has passed in which no new user interrupts have been raised.
v2: Clear the stuck interrupt marker between successful batches
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460195877-20520-3-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
In order to simplify future patches, extract the lazy_coherency optimisation out of the engine->get_seqno() vfunc into its own callback.
v2: Rename the barrier to engine->irq_seqno_barrier to try and better reflect that the barrier is only required after the user interrupt before reading the seqno (to ensure that the seqno update lands in time as we do not have strict seqno-irq ordering on all platforms).
Reviewed-by: Dave Gordon <david.s.gordon@intel.com> [#v2]
v3: Comments for hangcheck paranoia. Mika wanted to keep the extra barrier inside the hangcheck, just in case. I can argue that it doesn't provide a barrier against anything, but the side-effects of applying the barrier may prevent a false declaration of a hung GPU.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Dave Gordon <david.s.gordon@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460195877-20520-2-git-send-email-chris@chris-wilson.co.uk
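The shape of the change — pulling an optional pre-read barrier out of the accessor into its own per-engine hook — can be sketched with a small C vtable. All names here are illustrative, not the i915 structures:

```c
#include <stddef.h>
#include <stdio.h>

/* Illustrative per-engine vtable: the seqno read is unconditional, while the
 * coherency barrier is an optional hook that only some platforms provide. */
struct engine {
    unsigned int hw_seqno;                         /* stand-in for the status page entry */
    unsigned int (*get_seqno)(struct engine *e);
    void (*irq_seqno_barrier)(struct engine *e);   /* may be NULL */
};

static unsigned int plain_get_seqno(struct engine *e) { return e->hw_seqno; }
static void posting_read_barrier(struct engine *e)    { (void)e; /* platform-specific flush */ }

/* Callers that just raced with the user interrupt apply the barrier first. */
static unsigned int seqno_after_interrupt(struct engine *e)
{
    if (e->irq_seqno_barrier)
        e->irq_seqno_barrier(e);
    return e->get_seqno(e);
}

int main(void)
{
    struct engine rcs = { 7, plain_get_seqno, posting_read_barrier };
    struct engine bcs = { 9, plain_get_seqno, NULL };
    printf("rcs %u, bcs %u\n", seqno_after_interrupt(&rcs), seqno_after_interrupt(&bcs));
    return 0;
}
```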
-
Committed by Chris Wilson
In order to ensure seqno/irq coherency, we currently read a ring register. The mmio transaction following the interrupt delays the inspection of the seqno long enough for the MI_STORE_DWORD_IMM to update the CPU cache. However, it is only the memory timing that is important for the purposes of the delay; we do not need nor desire the extra forcewake.
v3: Update commentary
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com> [v2]
Link: http://patchwork.freedesktop.org/patch/msgid/1460195877-20520-1-git-send-email-chris@chris-wilson.co.uk
-
Committed by Akash Goel
Currently, for the case where there is enough space at the end of the ring buffer to accommodate only the base request, the wraparound is done immediately and as a result the base request gets added at the start of the ring buffer. But there may not be enough free space at the beginning to accommodate the base request, as before the wraparound the wait was effectively done for reserved_size free space from the start of the ring buffer. In such a case there is a potential ring buffer overflow: the instructions at the head of the ring (ACTHD) can get overwritten. Since the base request can fit in the remaining space, there is no need to wrap around immediately; the wraparound will happen later anyway, when the reserved part starts getting used.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1457688402-10411-1-git-send-email-akash.goel@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
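A toy model of the decision being fixed — only wrap when the request itself does not fit in the space left before the end of the ring — with made-up sizes and field names, not the actual ringbuffer code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy ring buffer: 'size' bytes total, 'tail' is the next write offset,
 * 'reserved' is space kept aside for request completion. */
struct ring {
    unsigned int size, tail, reserved;
};

/* Decide whether a request of 'bytes' forces an immediate wraparound.
 * Only wrap if the request itself cannot fit before the end of the ring;
 * the reserved part will trigger its own wrap later if it needs to. */
static bool must_wrap_now(const struct ring *r, unsigned int bytes)
{
    unsigned int remain = r->size - r->tail;
    return bytes > remain;
}

int main(void)
{
    struct ring r = { .size = 4096, .tail = 4000, .reserved = 160 };

    /* A 64-byte base request fits in the 96 bytes left, so no immediate wrap,
     * even though base + reserved (224 bytes) would not fit. */
    printf("wrap for 64-byte request?  %d\n", must_wrap_now(&r, 64));
    printf("wrap for 128-byte request? %d\n", must_wrap_now(&r, 128));
    return 0;
}
```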
-
Committed by Wolfram Sang
Make sure we avoid a division-by-zero OOPS in case clock-frequency is set too low in DT. Add missing '\n' while we are here.
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Acked-by: Axel Lin <axel.lin@ingics.com>
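The guard itself is simple. A hedged sketch of the pattern only — the real driver derives its timing parameters differently from the DT clock-frequency property; this just shows the shape of the check before dividing:

```c
#include <errno.h>
#include <stdio.h>

/* Validate the DT-provided frequency before using it as a divisor. */
static int compute_period_ns(unsigned int clock_frequency, unsigned int *period_ns)
{
    if (clock_frequency == 0) {
        fprintf(stderr, "bogus clock-frequency, refusing to divide by zero\n");
        return -EINVAL;
    }
    *period_ns = 1000000000u / clock_frequency;
    return 0;
}

int main(void)
{
    unsigned int period;
    if (compute_period_ns(100000, &period) == 0)
        printf("period: %u ns\n", period);
    (void)compute_period_ns(0, &period);  /* rejected instead of crashing */
    return 0;
}
```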
-
Committed by Wolfram Sang
This reverts commit 34cf2acd. 'ret' is not set when bailing out. Also, there is a better place to check for 0.
Reported-by: Axel Lin <axel.lin@ingics.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
-
Committed by Jason Wang
After commit f84bb1ea ("net: fix IFF_NO_QUEUE for drivers using alloc_netdev"), the default qdisc was changed to noqueue because tuntap does not set tx_queue_len during .setup(). This patch restores the default qdisc by setting tx_queue_len in tun_setup().
Fixes: f84bb1ea ("net: fix IFF_NO_QUEUE for drivers using alloc_netdev")
Cc: Phil Sutter <phil@nwl.cc>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 Apr 2016, 11 commits
-
-
Committed by Chris Wilson
Having fixed the tracking of the engine's last_submitted_seqno, we can now rely on it for detecting when the engine is idle (and not have to touch the requests pointer).
Testcase: igt/gem_exec_whisper
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460010558-10705-9-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
-
Committed by Chris Wilson
Seal the request and mark it as pending execution before we submit it to hardware. We assume that the actual submission cannot fail (that guarantee is provided by preallocating space in the request for the submission). As we may inspect this state without holding any locks during hangcheck, we should apply a barrier to ensure that we do not see a more recent value in the HWS than we are tracking.
Based on a patch by Mika Kuoppala.
Suggested-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460010558-10705-8-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
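The underlying ordering requirement — fill in the request's tracking state before publishing it, and pair the publish with a barrier so lockless readers never see the "submitted" flag ahead of the data — maps onto release/acquire semantics. A self-contained C11 illustration with invented names (not the kernel's barrier primitives):

```c
#include <stdatomic.h>
#include <stdio.h>

/* Toy "request": the tracking fields must be visible to lockless readers
 * before the submitted flag is. */
struct request {
    unsigned int seqno;
    atomic_bool submitted;
};

static void submit_request(struct request *rq, unsigned int seqno)
{
    rq->seqno = seqno;
    /* Release store: everything written above is ordered before the flag. */
    atomic_store_explicit(&rq->submitted, true, memory_order_release);
}

/* Hangcheck-style lockless reader. */
static int request_pending(struct request *rq, unsigned int *seqno)
{
    /* Acquire load pairs with the release store above. */
    if (!atomic_load_explicit(&rq->submitted, memory_order_acquire))
        return 0;
    *seqno = rq->seqno;  /* guaranteed to see the value written before the flag */
    return 1;
}

int main(void)
{
    struct request rq = { .seqno = 0 };
    unsigned int seqno;

    atomic_init(&rq.submitted, false);
    printf("pending before submit: %d\n", request_pending(&rq, &seqno));
    submit_request(&rq, 1234);
    if (request_pending(&rq, &seqno))
        printf("pending seqno %u\n", seqno);
    return 0;
}
```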
-
Committed by Chris Wilson
When we change the current seqno, we also need to remember to reset the last_submitted_seqno for the engine.
Testcase: igt/gem_exec_whisper
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460010558-10705-7-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
An oversight is that when we wrap the seqno, we need to reset the hw semaphore counters to 0. We did this for gen6 and gen7 and forgot to do so for the new implementation required for gen8 (legacy).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460010558-10705-6-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
We reuse the same calculation in two macros, and I want to add a third user. Time to refactor.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460010558-10705-5-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
Since we are setting engine-local values that are tied to the hardware, move it out of i915_gem_init_seqno() into the intel_ring_init_seqno() backend, next to where the other hw semaphore registers are written.
v2: Make the explanatory comment about always resetting the semaphores to 0 irrespective of the value of the reset seqno.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460010558-10705-4-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
We only use drm_i915_private within the function, so delete the unneeded drm_device local.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460010558-10705-3-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
After the GPU reset, when we discard all of the incomplete requests, mark the GPU as having advanced to the last_submitted_seqno (as having completed the requests and being ready for fresh work). The impact of this is negligible, as all the requests will be considered completed by this point; it just brings the HWS into line with expectations for external viewers.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460010558-10705-2-git-send-email-chris@chris-wilson.co.uk
-
Committed by Chris Wilson
It's useful to look at the last seqno submitted on a particular engine and compare it against the HWS value to check for irregularities.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1460010558-10705-1-git-send-email-chris@chris-wilson.co.uk
-
Committed by Yong Li
The current implementation only uses the first byte in val; the second byte is always 0. Change it to use cpu_to_le16 to write the two bytes into the register.
Cc: stable@vger.kernel.org
Signed-off-by: Yong Li <sdliyong@gmail.com>
Reviewed-by: Phil Reid <preid@electromag.com.au>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
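The same conversion exists in userspace as htole16() (glibc). A small sketch of why it matters for a 16-bit register value; the buffer and function names are invented for illustration:

```c
/* Demonstrates writing a 16-bit value as an explicit little-endian pair of
 * bytes, which is what cpu_to_le16() guarantees regardless of host byte order. */
#define _DEFAULT_SOURCE
#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint8_t reg_buf[2];  /* stand-in for the 2-byte device register */

static void write_reg16(uint16_t val)
{
    uint16_t le = htole16(val);        /* byte order fixed to little-endian */
    memcpy(reg_buf, &le, sizeof(le));  /* both bytes reach the register */
}

int main(void)
{
    write_reg16(0x12ab);
    /* On any host this prints "ab 12": low byte first, high byte preserved
     * instead of being dropped as in the bug described above. */
    printf("%02x %02x\n", reg_buf[0], reg_buf[1]);
    return 0;
}
```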
-
Committed by Guenter Roeck
Since commit ff2b1359 ("gpio: make the gpiochip a real device"), attempts to add a gpio chip prior to gpiolib initialization cause the system to crash. This happens because gpio_bus_type has not been registered yet. Defer creating gpio devices until after gpiolib has been initialized to fix the problem.
Cc: Greg Ungerer <gerg@uclinux.org>
Cc: Alexandre Courbot <gnurou@gmail.com>
Fixes: ff2b1359 ("gpio: make the gpiochip a real device")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
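One common shape for this kind of fix — queue early registrations on a pending list and flush it once the core has initialized — sketched in plain C with invented names (the real gpiolib code is more involved):

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_PENDING 8  /* toy limit; no overflow handling for brevity */

struct chip { const char *label; };

static bool bus_initialized;             /* set once the "bus" core is up */
static struct chip *pending[MAX_PENDING];
static int npending;

static void device_create_for(struct chip *c)  /* requires the bus to exist */
{
    printf("device created for %s\n", c->label);
}

/* Callers may register chips before the core initializes; defer those. */
static void chip_register(struct chip *c)
{
    if (!bus_initialized) {
        pending[npending++] = c;
        return;
    }
    device_create_for(c);
}

/* Core init: bring up the bus, then flush everything that arrived early. */
static void core_init(void)
{
    bus_initialized = true;
    for (int i = 0; i < npending; i++)
        device_create_for(pending[i]);
    npending = 0;
}

int main(void)
{
    struct chip early = { "early-chip" }, late = { "late-chip" };

    chip_register(&early);  /* queued, no crash: the bus isn't there yet */
    core_init();            /* early-chip's device is created here */
    chip_register(&late);   /* created immediately */
    return 0;
}
```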
-