1. 12 April 2017, 1 commit
  2. 11 April 2017, 3 commits
    • drm/udl: Fix unaligned memory access in udl_render_hline · 0c45b36f
      Committed by Jonathan Neuschäfer
      On SPARC, the udl driver filled my kernel log with these messages:
      
      [186668.910612] Kernel unaligned access at TPC[76609c] udl_render_hline+0x13c/0x3a0
      
      Use put_unaligned_be16 to avoid them. On x86 this results in the same
      code, but on SPARC the compiler emits two single-byte stores.
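      A minimal sketch of the pattern the fix relies on (assumed for
      illustration; this is not the actual udl_render_hline code, and the
      helper name is made up):

      #include <asm/unaligned.h>

      /* Illustrative helper: write one big-endian 16-bit pixel to a
       * destination pointer that may be misaligned. */
      static void write_pixel_be16(u8 *dest, u16 pixel)
      {
              /* A plain "*(__be16 *)dest = cpu_to_be16(pixel);" can trap on
               * SPARC when dest is odd; put_unaligned_be16() is safe on every
               * architecture and still compiles to a single store on x86. */
              put_unaligned_be16(pixel, dest);
      }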
      Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
      Acked-by: David Airlie <airlied@linux.ie>
      Signed-off-by: Sean Paul <seanpaul@chromium.org>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170407200229.20642-1-j.neuschaefer@gmx.net
    • drm/i915: Don't call synchronize_rcu_expedited under struct_mutex · c053b5a5
      Committed by Joonas Lahtinen
      Only call synchronize_rcu_expedited after unlocking struct_mutex to
      avoid deadlock because the workqueues depend on struct_mutex.
      
      From the original patch by Andrea:
      
      synchronize_rcu/synchronize_sched/synchronize_rcu_expedited() will
      hang until its own workqueues are run. The i915 gem workqueues will
      wait on the struct_mutex to be released. So we cannot wait for a
      quiescent state using those rcu primitives while holding the
      struct_mutex or it creates a circular lock dependency resulting in
      kernel hangs (which is reproducible but goes undetected by lockdep).
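      A minimal sketch of the resulting ordering (the reclaim helper and the
      exact call site are illustrative stand-ins, not the i915 code itself):

      /* Illustrative only: do the mutex-protected reclaim first, then wait
       * for the RCU grace period after struct_mutex has been dropped. */
      static unsigned long shrink_then_wait(struct drm_i915_private *i915)
      {
              unsigned long freed;

              mutex_lock(&i915->drm.struct_mutex);
              freed = example_reclaim(i915);  /* hypothetical reclaim step */
              mutex_unlock(&i915->drm.struct_mutex);

              /* Waiting here is safe: the workqueues that complete the grace
               * period may themselves block on struct_mutex, so waiting while
               * still holding it could deadlock, as in the traces below. */
              synchronize_rcu_expedited();

              return freed;
      }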
      
      kswapd0         D    0   700      2 0x00000000
      Call Trace:
      ? __schedule+0x1a5/0x660
      ? schedule+0x36/0x80
      ? _synchronize_rcu_expedited.constprop.65+0x2ef/0x300
      ? wake_up_bit+0x20/0x20
      ? rcu_stall_kick_kthreads.part.54+0xc0/0xc0
      ? rcu_exp_wait_wake+0x530/0x530
      ? i915_gem_shrink+0x34b/0x4b0
      ? i915_gem_shrinker_scan+0x7c/0x90
      ? i915_gem_shrinker_scan+0x7c/0x90
      ? shrink_slab.part.61.constprop.72+0x1c1/0x3a0
      ? shrink_zone+0x154/0x160
      ? kswapd+0x40a/0x720
      ? kthread+0xf4/0x130
      ? try_to_free_pages+0x450/0x450
      ? kthread_create_on_node+0x40/0x40
      ? ret_from_fork+0x23/0x30
      plasmashell     D    0  4657   4614 0x00000000
      Call Trace:
      ? __schedule+0x1a5/0x660
      ? schedule+0x36/0x80
      ? schedule_preempt_disabled+0xe/0x10
      ? __mutex_lock.isra.4+0x1c9/0x790
      ? i915_gem_close_object+0x26/0xc0
      ? i915_gem_close_object+0x26/0xc0
      ? drm_gem_object_release_handle+0x48/0x90
      ? drm_gem_handle_delete+0x50/0x80
      ? drm_ioctl+0x1fa/0x420
      ? drm_gem_handle_create+0x40/0x40
      ? pipe_write+0x391/0x410
      ? __vfs_write+0xc6/0x120
      ? do_vfs_ioctl+0x8b/0x5d0
      ? SyS_ioctl+0x3b/0x70
      ? entry_SYSCALL_64_fastpath+0x13/0x94
      kworker/0:0     D    0 29186      2 0x00000000
      Workqueue: events __i915_gem_free_work
      Call Trace:
      ? __schedule+0x1a5/0x660
      ? schedule+0x36/0x80
      ? schedule_preempt_disabled+0xe/0x10
      ? __mutex_lock.isra.4+0x1c9/0x790
      ? del_timer_sync+0x44/0x50
      ? update_curr+0x57/0x110
      ? __i915_gem_free_objects+0x31/0x300
      ? __i915_gem_free_objects+0x31/0x300
      ? __i915_gem_free_work+0x2d/0x40
      ? process_one_work+0x13a/0x3b0
      ? worker_thread+0x4a/0x460
      ? kthread+0xf4/0x130
      ? process_one_work+0x3b0/0x3b0
      ? kthread_create_on_node+0x40/0x40
      ? ret_from_fork+0x23/0x30
      
      Fixes: 3d3d18f0 ("drm/i915: Avoid rcu_barrier() from reclaim paths (shrinker)")
      Reported-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: Jani Nikula <jani.nikula@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      (cherry picked from commit 8f612d05)
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
    • drm/i915: Suspend GuC prior to GPU Reset during GEM suspend · 63987bfe
      Committed by Sagar Arun Kamble
      i915 is currently doing a full GPU reset at the end of
      i915_gem_suspend() followed by GuC suspend in i915_drm_suspend(). This
      GPU reset clobbers the GuC, causing the suspend request to then fail,
      leaving the GuC in an undefined state. We need to tell the GuC to
      suspend before we do the direct intel_gpu_reset().
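      A minimal sketch of the intended ordering (both helpers are illustrative
      stand-ins for the real GuC suspend request and the direct GPU reset, not
      the actual i915 functions):

      /* Illustrative only: ask the GuC to suspend while it is still
       * functional, and only then issue the GPU reset that clobbers it. */
      static void gem_suspend_order(struct drm_i915_private *i915)
      {
              guc_suspend(i915);      /* stand-in: GuC suspend request */
              gpu_reset(i915);        /* stand-in: direct intel_gpu_reset() */
      }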
      
      v2: Commit message update. (Chris, Daniele)
      
      Fixes: 1c777c5d ("drm/i915/hsw: Fix GPU hang during resume from S3-devices state")
      Cc: Jeff McGee <jeff.mcgee@intel.com>
      Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Imre Deak <imre.deak@intel.com>
      Cc: Mika Kuoppala <mika.kuoppala@intel.com>
      Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/1491387710-20553-1-git-send-email-sagar.a.kamble@intel.com
      Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
      Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      (cherry picked from commit fd089233)
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
  3. 06 April 2017, 1 commit
  4. 04 April 2017, 7 commits
  5. 01 April 2017, 1 commit
  6. 31 March 2017, 5 commits
  7. 30 March 2017, 10 commits
  8. 29 March 2017, 5 commits
  9. 28 March 2017, 1 commit
  10. 27 March 2017, 1 commit
  11. 23 March 2017, 1 commit
  12. 22 March 2017, 1 commit
    • drm/i915/gvt: Use force single submit flag to distinguish gvt request from i915 request · bc2d4b62
      Committed by Changbin Du
      My previous commit ab9da627906a ("drm/i915: make context status
      notifier head be per engine") relied on scheduler->current_workload[x]
      to distinguish a GVT special request from an i915 request. But this is
      not always true, because there is no synchronization between
      workload_thread and the lrc irq handler:
      
          lrc irq handler               workload_thread
               ----                          ----
        pick i915 requests;
                                      intel_vgpu_submit_execlist();
                                      current_workload[x] = xxx;
        shadow_context_status_change();
      
      In that case current_workload[x] is not NULL, yet the current request
      actually belongs to i915 itself. So instead we check the context flag
      CONTEXT_FORCE_SINGLE_SUBMISSION: only GVT requests set this flag, and
      they always set it.
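      A minimal sketch of the check (the struct layouts and helper below are
      illustrative; only the CONTEXT_FORCE_SINGLE_SUBMISSION flag comes from
      the real driver):

      struct example_context { unsigned long flags; };
      struct example_request { struct example_context *ctx; };

      /* Illustrative only: classify a request by its context flag rather
       * than by racy scheduler state. */
      static bool request_is_from_gvt(const struct example_request *req)
      {
              /* Every request built by the GVT workload path runs in a context
               * with CONTEXT_FORCE_SINGLE_SUBMISSION set; host i915 contexts
               * never set it, so the flag alone is a reliable test. */
              return test_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &req->ctx->flags);
      }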
      
      v2: Reverse the order of multi-condition 'if' statement.
      
      Fixes: ab9da6279 ("drm/i915: make context status notifier head be per engine")
      Signed-off-by: Changbin Du <changbin.du@intel.com>
      Reviewed-by: Yulei Zhang <yulei.zhang@intel.com>
      Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
  13. 21 March 2017, 3 commits
    • drm/i915: make context status notifier head be per engine · 590379ae
      Committed by Changbin Du
      GVTg has introduced the context status notifier to schedule the GVTg
      workload. At that time, the notifier was bound to the GVTg context
      only, so GVTg was not aware of host workloads.

      Now we are going to improve GVTg's guest workload scheduler policy and
      add GuC emulation support for new Gen graphics. Both of these features
      require notification for all contexts running on the hardware (but
      they will not alter host workloads), so some changes are made here.

      The change is simple:
        1. Move the context status notifier head from i915_gem_context to
           intel_engine_cs, so there is a notifier head per engine instead
           of per context. The execlist driver still calls the notifier for
           each context sched-in/out event on the current engine.
        2. On the GVTg side, bind a notifier_block for each physical engine
           during GVTg initialization. GVTg can then see all context status
           events.

      In this patch GVTg does nothing for host context events; a handler
      will be added there later. In any case, the notifier callback is a
      no-op if there is no active vGPU.

      Since intel_gvt_init() is called at an early initialization stage and
      requires the status notifier head to already be initialized, it is
      initialized in intel_engine_setup(), as sketched below.
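      A minimal sketch of the structural change (the struct layouts and
      helpers are simplified stand-ins, not the real i915 definitions; only
      the notifier API is the standard kernel one):

      #include <linux/notifier.h>

      /* Illustrative only: the notifier head now lives in the per-engine
       * structure rather than in the per-context one. */
      struct example_engine {
              struct atomic_notifier_head context_status_notifier;
      };

      static void example_engine_setup(struct example_engine *engine)
      {
              /* done once per engine, early enough for intel_gvt_init() */
              ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
      }

      static void example_gvt_bind(struct example_engine *engine,
                                   struct notifier_block *nb)
      {
              /* one notifier_block registered per physical engine at GVTg init */
              atomic_notifier_chain_register(&engine->context_status_notifier, nb);
      }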
      
      v2: remove a redundant newline. (chris)
      
      Fixes: 3c7ba635 ("drm/i915: Introduce execlist context status change notification")
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100232
      Signed-off-by: Changbin Du <changbin.du@intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Zhi Wang <zhi.a.wang@intel.com>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170313024711.28591-1-changbin.du@intel.com
      Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      (cherry picked from commit 3fc03069)
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170321144720.17020-1-chris@chris-wilson.co.uk
    • drm/i915: Avoid rcu_barrier() from reclaim paths (shrinker) · 3d3d18f0
      Committed by Chris Wilson
      rcu_barrier() takes the cpu_hotplug mutex, which itself is not
      reclaim-safe, so rcu_barrier() is illegal from inside the shrinker.
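      A minimal sketch of the substitution made here (the shrink helper and
      device type are illustrative stand-ins for the i915 shrinker path; the
      follow-up commit above then also moves the expedited wait out from
      under struct_mutex):

      struct example_device;
      unsigned long example_shrink(struct example_device *dev);  /* hypothetical */

      /* Illustrative only: wait for an expedited RCU grace period instead of
       * calling rcu_barrier(), which takes cpu_hotplug.lock via
       * get_online_cpus() and is therefore unsafe from reclaim. */
      static unsigned long example_shrink_all(struct example_device *dev)
      {
              unsigned long freed = example_shrink(dev);

              synchronize_rcu_expedited();    /* instead of rcu_barrier() */

              return freed;
      }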
      
      [  309.661373] =========================================================
      [  309.661376] [ INFO: possible irq lock inversion dependency detected ]
      [  309.661380] 4.11.0-rc1-CI-CI_DRM_2333+ #1 Tainted: G        W
      [  309.661383] ---------------------------------------------------------
      [  309.661386] gem_exec_gttfil/6435 just changed the state of lock:
      [  309.661389]  (rcu_preempt_state.barrier_mutex){+.+.-.}, at: [<ffffffff81100731>] _rcu_barrier+0x31/0x160
      [  309.661399] but this lock took another, RECLAIM_FS-unsafe lock in the past:
      [  309.661402]  (cpu_hotplug.lock){+.+.+.}
      [  309.661404]
      
                     and interrupts could create inverse lock ordering between them.
      
      [  309.661410]
                     other info that might help us debug this:
      [  309.661414]  Possible interrupt unsafe locking scenario:
      
      [  309.661417]        CPU0                    CPU1
      [  309.661419]        ----                    ----
      [  309.661421]   lock(cpu_hotplug.lock);
      [  309.661425]                                local_irq_disable();
      [  309.661432]                                lock(rcu_preempt_state.barrier_mutex);
      [  309.661441]                                lock(cpu_hotplug.lock);
      [  309.661446]   <Interrupt>
      [  309.661448]     lock(rcu_preempt_state.barrier_mutex);
      [  309.661453]
                      *** DEADLOCK ***
      
      [  309.661460] 4 locks held by gem_exec_gttfil/6435:
      [  309.661464]  #0:  (sb_writers#10){.+.+.+}, at: [<ffffffff8120d83d>] vfs_write+0x17d/0x1f0
      [  309.661475]  #1:  (debugfs_srcu){......}, at: [<ffffffff81320491>] debugfs_use_file_start+0x41/0xa0
      [  309.661486]  #2:  (&attr->mutex){+.+.+.}, at: [<ffffffff8123a3e7>] simple_attr_write+0x37/0xe0
      [  309.661495]  #3:  (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa0091b4a>] i915_drop_caches_set+0x3a/0x150 [i915]
      [  309.661540]
                     the shortest dependencies between 2nd lock and 1st lock:
      [  309.661547]  -> (cpu_hotplug.lock){+.+.+.} ops: 829 {
      [  309.661553]     HARDIRQ-ON-W at:
      [  309.661560]                       __lock_acquire+0x5e5/0x1b50
      [  309.661565]                       lock_acquire+0xc9/0x220
      [  309.661572]                       __mutex_lock+0x6e/0x990
      [  309.661576]                       mutex_lock_nested+0x16/0x20
      [  309.661583]                       get_online_cpus+0x61/0x80
      [  309.661590]                       kmem_cache_create+0x25/0x1d0
      [  309.661596]                       debug_objects_mem_init+0x30/0x249
      [  309.661602]                       start_kernel+0x341/0x3fe
      [  309.661607]                       x86_64_start_reservations+0x2a/0x2c
      [  309.661612]                       x86_64_start_kernel+0x173/0x186
      [  309.661619]                       verify_cpu+0x0/0xfc
      [  309.661622]     SOFTIRQ-ON-W at:
      [  309.661627]                       __lock_acquire+0x611/0x1b50
      [  309.661632]                       lock_acquire+0xc9/0x220
      [  309.661636]                       __mutex_lock+0x6e/0x990
      [  309.661641]                       mutex_lock_nested+0x16/0x20
      [  309.661646]                       get_online_cpus+0x61/0x80
      [  309.661650]                       kmem_cache_create+0x25/0x1d0
      [  309.661655]                       debug_objects_mem_init+0x30/0x249
      [  309.661660]                       start_kernel+0x341/0x3fe
      [  309.661664]                       x86_64_start_reservations+0x2a/0x2c
      [  309.661669]                       x86_64_start_kernel+0x173/0x186
      [  309.661674]                       verify_cpu+0x0/0xfc
      [  309.661677]     RECLAIM_FS-ON-W at:
      [  309.661682]                          mark_held_locks+0x6f/0xa0
      [  309.661687]                          lockdep_trace_alloc+0xb3/0x100
      [  309.661693]                          kmem_cache_alloc_trace+0x31/0x2e0
      [  309.661699]                          __smpboot_create_thread.part.1+0x27/0xe0
      [  309.661704]                          smpboot_create_threads+0x61/0x90
      [  309.661709]                          cpuhp_invoke_callback+0x9c/0x8a0
      [  309.661713]                          cpuhp_up_callbacks+0x31/0xb0
      [  309.661718]                          _cpu_up+0x7a/0xc0
      [  309.661723]                          do_cpu_up+0x5f/0x80
      [  309.661727]                          cpu_up+0xe/0x10
      [  309.661734]                          smp_init+0x71/0xb3
      [  309.661738]                          kernel_init_freeable+0x94/0x19e
      [  309.661743]                          kernel_init+0x9/0xf0
      [  309.661748]                          ret_from_fork+0x2e/0x40
      [  309.661752]     INITIAL USE at:
      [  309.661757]                      __lock_acquire+0x234/0x1b50
      [  309.661761]                      lock_acquire+0xc9/0x220
      [  309.661766]                      __mutex_lock+0x6e/0x990
      [  309.661771]                      mutex_lock_nested+0x16/0x20
      [  309.661775]                      get_online_cpus+0x61/0x80
      [  309.661780]                      __cpuhp_setup_state+0x44/0x170
      [  309.661785]                      page_alloc_init+0x23/0x3a
      [  309.661790]                      start_kernel+0x124/0x3fe
      [  309.661794]                      x86_64_start_reservations+0x2a/0x2c
      [  309.661799]                      x86_64_start_kernel+0x173/0x186
      [  309.661804]                      verify_cpu+0x0/0xfc
      [  309.661807]   }
      [  309.661813]   ... key      at: [<ffffffff81e37690>] cpu_hotplug+0xb0/0x100
      [  309.661817]   ... acquired at:
      [  309.661821]    lock_acquire+0xc9/0x220
      [  309.661825]    __mutex_lock+0x6e/0x990
      [  309.661829]    mutex_lock_nested+0x16/0x20
      [  309.661833]    get_online_cpus+0x61/0x80
      [  309.661837]    _rcu_barrier+0x9f/0x160
      [  309.661841]    rcu_barrier+0x10/0x20
      [  309.661847]    netdev_run_todo+0x5f/0x310
      [  309.661852]    rtnl_unlock+0x9/0x10
      [  309.661856]    default_device_exit_batch+0x133/0x150
      [  309.661862]    ops_exit_list.isra.0+0x4d/0x60
      [  309.661866]    cleanup_net+0x1d8/0x2c0
      [  309.661872]    process_one_work+0x1f4/0x6d0
      [  309.661876]    worker_thread+0x49/0x4a0
      [  309.661881]    kthread+0x107/0x140
      [  309.661884]    ret_from_fork+0x2e/0x40
      
      [  309.661890] -> (rcu_preempt_state.barrier_mutex){+.+.-.} ops: 179 {
      [  309.661896]    HARDIRQ-ON-W at:
      [  309.661901]                     __lock_acquire+0x5e5/0x1b50
      [  309.661905]                     lock_acquire+0xc9/0x220
      [  309.661910]                     __mutex_lock+0x6e/0x990
      [  309.661914]                     mutex_lock_nested+0x16/0x20
      [  309.661919]                     _rcu_barrier+0x31/0x160
      [  309.661923]                     rcu_barrier+0x10/0x20
      [  309.661928]                     netdev_run_todo+0x5f/0x310
      [  309.661932]                     rtnl_unlock+0x9/0x10
      [  309.661936]                     default_device_exit_batch+0x133/0x150
      [  309.661941]                     ops_exit_list.isra.0+0x4d/0x60
      [  309.661946]                     cleanup_net+0x1d8/0x2c0
      [  309.661951]                     process_one_work+0x1f4/0x6d0
      [  309.661955]                     worker_thread+0x49/0x4a0
      [  309.661960]                     kthread+0x107/0x140
      [  309.661964]                     ret_from_fork+0x2e/0x40
      [  309.661968]    SOFTIRQ-ON-W at:
      [  309.661972]                     __lock_acquire+0x611/0x1b50
      [  309.661977]                     lock_acquire+0xc9/0x220
      [  309.661981]                     __mutex_lock+0x6e/0x990
      [  309.661986]                     mutex_lock_nested+0x16/0x20
      [  309.661990]                     _rcu_barrier+0x31/0x160
      [  309.661995]                     rcu_barrier+0x10/0x20
      [  309.661999]                     netdev_run_todo+0x5f/0x310
      [  309.662003]                     rtnl_unlock+0x9/0x10
      [  309.662008]                     default_device_exit_batch+0x133/0x150
      [  309.662013]                     ops_exit_list.isra.0+0x4d/0x60
      [  309.662017]                     cleanup_net+0x1d8/0x2c0
      [  309.662022]                     process_one_work+0x1f4/0x6d0
      [  309.662027]                     worker_thread+0x49/0x4a0
      [  309.662031]                     kthread+0x107/0x140
      [  309.662035]                     ret_from_fork+0x2e/0x40
      [  309.662039]    IN-RECLAIM_FS-W at:
      [  309.662043]                        __lock_acquire+0x638/0x1b50
      [  309.662048]                        lock_acquire+0xc9/0x220
      [  309.662053]                        __mutex_lock+0x6e/0x990
      [  309.662058]                        mutex_lock_nested+0x16/0x20
      [  309.662062]                        _rcu_barrier+0x31/0x160
      [  309.662067]                        rcu_barrier+0x10/0x20
      [  309.662089]                        i915_gem_shrink_all+0x33/0x40 [i915]
      [  309.662109]                        i915_drop_caches_set+0x141/0x150 [i915]
      [  309.662114]                        simple_attr_write+0xc7/0xe0
      [  309.662119]                        full_proxy_write+0x4f/0x70
      [  309.662124]                        __vfs_write+0x23/0x120
      [  309.662128]                        vfs_write+0xc6/0x1f0
      [  309.662133]                        SyS_write+0x44/0xb0
      [  309.662138]                        entry_SYSCALL_64_fastpath+0x1c/0xb1
      [  309.662142]    INITIAL USE at:
      [  309.662147]                    __lock_acquire+0x234/0x1b50
      [  309.662151]                    lock_acquire+0xc9/0x220
      [  309.662156]                    __mutex_lock+0x6e/0x990
      [  309.662160]                    mutex_lock_nested+0x16/0x20
      [  309.662165]                    _rcu_barrier+0x31/0x160
      [  309.662169]                    rcu_barrier+0x10/0x20
      [  309.662174]                    netdev_run_todo+0x5f/0x310
      [  309.662178]                    rtnl_unlock+0x9/0x10
      [  309.662183]                    default_device_exit_batch+0x133/0x150
      [  309.662188]                    ops_exit_list.isra.0+0x4d/0x60
      [  309.662192]                    cleanup_net+0x1d8/0x2c0
      [  309.662197]                    process_one_work+0x1f4/0x6d0
      [  309.662202]                    worker_thread+0x49/0x4a0
      [  309.662206]                    kthread+0x107/0x140
      [  309.662210]                    ret_from_fork+0x2e/0x40
      [  309.662214]  }
      [  309.662220]  ... key      at: [<ffffffff81e4e1c8>] rcu_preempt_state+0x508/0x780
      [  309.662225]  ... acquired at:
      [  309.662229]    check_usage_forwards+0x12b/0x130
      [  309.662233]    mark_lock+0x360/0x6f0
      [  309.662237]    __lock_acquire+0x638/0x1b50
      [  309.662241]    lock_acquire+0xc9/0x220
      [  309.662245]    __mutex_lock+0x6e/0x990
      [  309.662249]    mutex_lock_nested+0x16/0x20
      [  309.662253]    _rcu_barrier+0x31/0x160
      [  309.662257]    rcu_barrier+0x10/0x20
      [  309.662279]    i915_gem_shrink_all+0x33/0x40 [i915]
      [  309.662298]    i915_drop_caches_set+0x141/0x150 [i915]
      [  309.662303]    simple_attr_write+0xc7/0xe0
      [  309.662307]    full_proxy_write+0x4f/0x70
      [  309.662311]    __vfs_write+0x23/0x120
      [  309.662315]    vfs_write+0xc6/0x1f0
      [  309.662319]    SyS_write+0x44/0xb0
      [  309.662323]    entry_SYSCALL_64_fastpath+0x1c/0xb1
      
      [  309.662329]
                     stack backtrace:
      [  309.662335] CPU: 1 PID: 6435 Comm: gem_exec_gttfil Tainted: G        W       4.11.0-rc1-CI-CI_DRM_2333+ #1
      [  309.662342] Hardware name: Hewlett-Packard HP Compaq 8100 Elite SFF PC/304Ah, BIOS 786H1 v01.13 07/14/2011
      [  309.662348] Call Trace:
      [  309.662354]  dump_stack+0x67/0x92
      [  309.662359]  print_irq_inversion_bug.part.19+0x1a4/0x1b0
      [  309.662365]  check_usage_forwards+0x12b/0x130
      [  309.662369]  mark_lock+0x360/0x6f0
      [  309.662374]  ? print_shortest_lock_dependencies+0x1a0/0x1a0
      [  309.662379]  __lock_acquire+0x638/0x1b50
      [  309.662383]  ? __mutex_unlock_slowpath+0x3e/0x2e0
      [  309.662388]  ? trace_hardirqs_on+0xd/0x10
      [  309.662392]  ? _rcu_barrier+0x31/0x160
      [  309.662396]  lock_acquire+0xc9/0x220
      [  309.662400]  ? _rcu_barrier+0x31/0x160
      [  309.662404]  ? _rcu_barrier+0x31/0x160
      [  309.662409]  __mutex_lock+0x6e/0x990
      [  309.662412]  ? _rcu_barrier+0x31/0x160
      [  309.662416]  ? _rcu_barrier+0x31/0x160
      [  309.662421]  ? synchronize_rcu_expedited+0x35/0xb0
      [  309.662426]  ? _raw_spin_unlock_irqrestore+0x52/0x60
      [  309.662434]  mutex_lock_nested+0x16/0x20
      [  309.662438]  _rcu_barrier+0x31/0x160
      [  309.662442]  rcu_barrier+0x10/0x20
      [  309.662464]  i915_gem_shrink_all+0x33/0x40 [i915]
      [  309.662484]  i915_drop_caches_set+0x141/0x150 [i915]
      [  309.662489]  simple_attr_write+0xc7/0xe0
      [  309.662494]  full_proxy_write+0x4f/0x70
      [  309.662498]  __vfs_write+0x23/0x120
      [  309.662503]  ? rcu_read_lock_sched_held+0x75/0x80
      [  309.662507]  ? rcu_sync_lockdep_assert+0x2a/0x50
      [  309.662512]  ? __sb_start_write+0x102/0x210
      [  309.662516]  ? vfs_write+0x17d/0x1f0
      [  309.662520]  vfs_write+0xc6/0x1f0
      [  309.662524]  ? trace_hardirqs_on_caller+0xe7/0x200
      [  309.662529]  SyS_write+0x44/0xb0
      [  309.662533]  entry_SYSCALL_64_fastpath+0x1c/0xb1
      [  309.662537] RIP: 0033:0x7f507eac24a0
      [  309.662541] RSP: 002b:00007fffda8720e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
      [  309.662548] RAX: ffffffffffffffda RBX: ffffffff81482bd3 RCX: 00007f507eac24a0
      [  309.662552] RDX: 0000000000000005 RSI: 00007fffda8720f0 RDI: 0000000000000005
      [  309.662557] RBP: ffffc9000048bf88 R08: 0000000000000000 R09: 000000000000002c
      [  309.662561] R10: 0000000000000014 R11: 0000000000000246 R12: 00007fffda872230
      [  309.662566] R13: 00007fffda872228 R14: 0000000000000201 R15: 00007fffda8720f0
      [  309.662572]  ? __this_cpu_preempt_check+0x13/0x20
      
      Fixes: 0eafec6d ("drm/i915: Enable lockless lookup of request tracking via RCU")
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100192
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: <stable@vger.kernel.org> # v4.9+
      Link: http://patchwork.freedesktop.org/patch/msgid/20170314115019.18127-1-chris@chris-wilson.co.uk
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      (cherry picked from commit bd784b7c)
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Link: http://patchwork.freedesktop.org/patch/msgid/20170321144531.12344-1-chris@chris-wilson.co.uk
    • drm/exynos/dsi: make te-gpios optional · 22e098da
      Committed by Andrzej Hajda
      DSI forwards te-gpios interrupts to the display controller, but when
      the display controller operates in HW-TRIGGER mode this interrupt is
      not necessary, so making the te-gpios property optional lets us avoid
      generating those unneeded interrupts. In addition, if the device node
      of a command-mode panel did not provide the te-gpios property, the
      panel driver failed to probe, which was a critical issue.

      With this patch we not only get rid of 60 interrupt callbacks per
      second but also fix that critical issue.
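      A minimal sketch of the idea using the generic optional-GPIO consumer
      API (assumed for illustration; this is not the exynos_drm_dsi code
      itself and the helper name is made up):

      #include <linux/device.h>
      #include <linux/err.h>
      #include <linux/gpio/consumer.h>

      /* Illustrative only: treat the "te" GPIO as optional so probing still
       * succeeds when the device tree omits the te-gpios property. */
      static int example_dsi_parse_te(struct device *dev)
      {
              struct gpio_desc *te;

              te = devm_gpiod_get_optional(dev, "te", GPIOD_IN);
              if (IS_ERR(te))
                      return PTR_ERR(te);     /* a real error, not just "absent" */
              if (!te)
                      return 0;               /* no TE GPIO, e.g. HW-TRIGGER mode */

              /* ...request the TE interrupt only when the GPIO is present... */
              return 0;
      }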
      Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
      Signed-off-by: Inki Dae <inki.dae@samsung.com>