- 01 Nov 2017, 1 commit
-
-
By Chris Wilson
We will want to break this down to give detailed per-engine warnings as to why we still think we are active as we attempt to park the engines. For the first step, just move the warning verbatim from the idle-worker to intel_engines_park().

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171027110617.31745-3-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
-
- 27 Oct 2017, 2 commits
-
-
By Chris Wilson
Kasan spotted:

[IGT] gem_tiled_pread_pwrite: exiting, ret=0
==================================================================
BUG: KASAN: use-after-free in __i915_gem_object_reset_page_iter+0x15c/0x170 [i915]
Read of size 8 at addr ffff8801359da310 by task kworker/3:2/182
CPU: 3 PID: 182 Comm: kworker/3:2 Tainted: G U 4.14.0-rc6-CI-Custom_3340+ #1
Hardware name: Intel Corp. Geminilake/GLK RVP1 DDR4 (05), BIOS GELKRVPA.X64.0062.B30.1708222146 08/22/2017
Workqueue: events __i915_gem_free_work [i915]
Call Trace:
 dump_stack+0x68/0xa0 print_address_description+0x78/0x290 ? __i915_gem_object_reset_page_iter+0x15c/0x170 [i915] kasan_report+0x23d/0x350 __asan_report_load8_noabort+0x19/0x20 __i915_gem_object_reset_page_iter+0x15c/0x170 [i915] ? i915_gem_object_truncate+0x100/0x100 [i915] ? lock_acquire+0x380/0x380 __i915_gem_object_put_pages+0x30d/0x530 [i915] __i915_gem_free_objects+0x551/0xbd0 [i915] ? lock_acquire+0x13e/0x380 __i915_gem_free_work+0x4e/0x70 [i915] process_one_work+0x6f6/0x1590 ? pwq_dec_nr_in_flight+0x2b0/0x2b0 worker_thread+0xe6/0xe90 ? pci_mmcfg_check_reserved+0x110/0x110 kthread+0x309/0x410 ? process_one_work+0x1590/0x1590 ? kthread_create_on_node+0xb0/0xb0 ret_from_fork+0x27/0x40
Allocated by task 1801:
 save_stack_trace+0x1b/0x20 kasan_kmalloc+0xee/0x190 kasan_slab_alloc+0x12/0x20 kmem_cache_alloc+0xdc/0x2e0 radix_tree_node_alloc.constprop.12+0x48/0x330 __radix_tree_create+0x274/0x480 __radix_tree_insert+0xa2/0x610 i915_gem_object_get_sg+0x224/0x670 [i915] i915_gem_object_get_page+0xb5/0x1c0 [i915] i915_gem_pread_ioctl+0x822/0xf60 [i915] drm_ioctl_kernel+0x13f/0x1c0 drm_ioctl+0x6cf/0x980 do_vfs_ioctl+0x184/0xf30 SyS_ioctl+0x41/0x70 entry_SYSCALL_64_fastpath+0x1c/0xb1
Freed by task 37:
 save_stack_trace+0x1b/0x20 kasan_slab_free+0xaf/0x190 kmem_cache_free+0xbf/0x340 radix_tree_node_rcu_free+0x79/0x90 rcu_process_callbacks+0x46d/0xf40 __do_softirq+0x21c/0x8d3
The buggy address belongs to the object at ffff8801359da0f0 which belongs to the cache radix_tree_node of size 576
The buggy address is located 544 bytes inside of 576-byte region [ffff8801359da0f0, ffff8801359da330)
The buggy address belongs to the page:
 page:ffffea0004d67600 count:1 mapcount:0 mapping: (null) index:0x0 compound_mapcount: 0
 flags: 0x8000000000008100(slab|head)
 raw: 8000000000008100 0000000000000000 0000000000000000 0000000100110011
 raw: ffffea0004b52920 ffffea0004b38020 ffff88015b416a80 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
 ffff8801359da200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8801359da280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8801359da300: fb fb fb fb fb fb fc fc fc fc fc fc fc fc fc fc
                                     ^
 ffff8801359da380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff8801359da400: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================
Disabling lock debugging due to kernel taint

which looks like the slab containing the radixtree iter was freed as we traversed the tree; taking the rcu read lock across the loop should prevent that (deferring all the frees until the end).

Reported-by: Tomi Sarvela <tomi.p.sarvela@intel.com>
Fixes: 96d77634 ("drm/i915: Use a radixtree for random access to the object's backing storage")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171026130032.10677-1-chris@chris-wilson.co.uk
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
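A minimal sketch of the fix described above — holding the RCU read lock across the whole radix tree walk so that node frees are deferred until the traversal finishes (the helper name and the use of a bare radix_tree_root are illustrative, not copied from the patch):

```c
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>

static void reset_page_iter(struct radix_tree_root *radix)
{
	struct radix_tree_iter iter;
	void **slot;

	rcu_read_lock();	/* nodes freed via RCU stay valid while we walk */
	radix_tree_for_each_slot(slot, radix, &iter, 0)
		radix_tree_delete(radix, iter.index);
	rcu_read_unlock();	/* only now may the freed nodes be reclaimed */
}
```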
-
By Michał Winiarski
Pretty similar to what we have on execlists. We're reusing most of the GEM code, however, due to GuC quirks we need a couple of extra bits. Preemption is implemented as a GuC action, and actions can be pretty slow. Because of that, we're using a mutex to serialize them. Since we're requesting preemption from the tasklet, the task of creating a workitem and wrapping it in a GuC action is delegated to a worker. To distinguish that preemption has finished, we're using an additional piece of the HWSP, and since we're not getting context switch interrupts, we're also adding a user interrupt. The fact that our special preempt context has completed unfortunately doesn't mean that we're ready to submit new work. We also need to wait for GuC to finish its own processing.

v2: Don't compile out the wait for GuC, handle workqueue flush on reset, no need for ordered workqueue, put on a reviewer hat when looking at my own patches (Chris)
    Move struct work around in intel_guc, move user interrupt outside of conditional (Michał)
    Keep ring around rather than chase through intel_context
v3: Extract WA for flushing ggtt writes to a helper (Chris)
    Keep work_struct in intel_guc rather than engine (Michał)
    Use ordered workqueue for inject_preempt worker to avoid GuC quirks.
v4: Drop now unused INTEL_GUC_PREEMPT_OPTION_IMMEDIATE (Daniele)
    Drop stray newlines, use container_of for intel_guc in worker, check for presence of workqueue when flushing it, rather than enable_guc_submission modparam, reorder preempt postprocessing (Chris)
v5: Make wq NULL after destroying it
v6: Swap struct guc_preempt_work members (Michał)

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jeff McGee <jeff.mcgee@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171026133558.19580-1-michal.winiarski@intel.com
-
- 26 Oct 2017, 2 commits
-
-
By Michał Winiarski
Now that we're handling request resubmission the same way as regular submission (from the tasklet), we can move GuC initialization earlier, before restarting the engines. This way, we're no longer in a state of flux during engine restart - we're already in the user-requested submission mode.

Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171025172519.10670-5-chris@chris-wilson.co.uk
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Chris Wilson
In the next patch, we will want to install a callback when the engines (GT as a whole) become idle and similarly when they first become busy. To enable that callback, first rename intel_engines_mark_idle() to intel_engines_park() and provide the companion intel_engines_unpark().

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171025143943.7661-2-chris@chris-wilson.co.uk
-
- 24 Oct 2017, 2 commits
-
-
By Chris Wilson
When we park the engine (upon idling), we kill the irq tasklet. However, to be sure that it is not restarted by a spurious interrupt after doing so, flush the interrupt handler before parking. As we only park the engines when we believe the system is idle, there should not be any interrupts to disturb us; so flushing the final in-flight interrupt should be sufficient. (However, we are still dependent on the HW behaving in an orderly and timely fashion, which we shall endeavour to improve upon later.)

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171023213237.26536-2-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
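A rough sketch of the resulting ordering when parking; the tasklet field name is an assumption about that era's engine struct, not quoted from the patch:

```c
#include <linux/interrupt.h>

static void park_engine_irq(struct intel_engine_cs *engine, unsigned int irq)
{
	/* Flush any interrupt handler still running so it cannot reschedule
	 * the tasklet behind our back, then kill the tasklet for good. */
	synchronize_irq(irq);
	tasklet_kill(&engine->execlists.irq_tasklet); /* field name assumed */
}
```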
-
By Chris Wilson
In the idle worker we drop the prolonged GT wakeref used to cover such essentials as interrupt delivery. (When a CS interrupt arrives, we also assert that the GT is awake.) However, it turns out that 10ms is not long enough to be assured that the last CS interrupt has been delivered, so bump that to 200ms, and move the entirety of that wait to before we take the struct_mutex to avoid blocking. As this is now a potentially long wait, restore the earlier behaviour of bailing out early when a new request arrives.

v2: Break out the repeated check for new requests into its own little helper to try and improve the self-commentary.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171023213237.26536-1-chris@chris-wilson.co.uk
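A sketch of the little helper mentioned in v2, assuming the idle worker tracks outstanding work via `gt.active_requests` and `gt.idle_work` (field names assumed from that era's code, not quoted from the patch):

```c
static bool new_requests_since_last_retire(struct drm_i915_private *i915)
{
	/* Bail out of the idle worker early if new work arrived while we
	 * waited for the final CS interrupt without holding struct_mutex. */
	return READ_ONCE(i915->gt.active_requests) ||
	       work_pending(&i915->gt.idle_work.work);
}
```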
-
- 19 Oct 2017, 1 commit
-
-
By Chris Wilson
If the device is in runtime suspend, resuming takes time and reduces our powersaving. If this was for a small write into an object, that resume will take longer than any savings in using the indirect GGTT access to avoid the cpu cache.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20171019063733.31620-1-chris@chris-wilson.co.uk
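An illustrative sketch of the idea, assuming a helper along the lines of intel_runtime_pm_get_if_in_use() is used so a runtime-suspended device is never woken just for a small GGTT write (the surrounding structure is an assumption):

```c
static int try_ggtt_write_path(struct drm_i915_private *i915)
{
	/* Only take the indirect GGTT path if the device is already awake;
	 * a runtime resume would cost more than the cache avoidance saves. */
	if (!intel_runtime_pm_get_if_in_use(i915))
		return -ENODEV;	/* caller falls back to the CPU path */

	/* ... perform the GGTT write ... */

	intel_runtime_pm_put(i915);
	return 0;
}
```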
-
- 17 Oct 2017, 7 commits
-
-
By Chris Wilson
When pwriting into shmemfs, the fast path pagecache_write does not notice when it is writing beyond the end of the truncated shmemfs inode. Report -EFAULT directly when we try to use pwrite into the !I915_MADV_WILLNEED object.

Fixes: 7c55e2c5 ("drm/i915: Use pagecache write to prepopulate shmemfs from pwrite-ioctl")
Testcase: igt/gem_madvise/dontneed-before-pwrite
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171016202732.25459-1-chris@chris-wilson.co.uk
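A sketch of the check this implies; the helper name and its placement are illustrative assumptions:

```c
static int reject_pwrite_into_discarded(struct drm_i915_gem_object *obj)
{
	/* The backing store was truncated by madvise(DONTNEED); writing via
	 * the pagecache fast path would silently go past the inode's end. */
	if (obj->mm.madv != I915_MADV_WILLNEED)
		return -EFAULT;

	return 0;
}
```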
-
By Chris Wilson
If we fail to recover the HW state upon resume (i.e. our attempt to clear the wedged bit and reset during i915_gem_sanitize() fails), then skip the HW restart inside i915_gem_init_hw(). We will ultimately do the HW restart when successfully unwedging and resetting the HW later, but attempting to restore a wedged device upon resume is risky as the HW is in an unknown state.

v2: Suppress the error message when detecting the already wedged HW.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103240
Testcase: igt/gem_eio/in-flight-suspend
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171015143725.27764-1-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
-
By Chris Wilson
We free objects in bulk after they wait for their RCU grace period. Currently, we take struct_mutex and unbind all the objects. This can lead to a long lock duration during which time those objects have their pages unfreeable (i.e. the shrinker is prevented from reaping those pages). If we only process a single object under the struct_mutex and then free the pages, the number of objects locked away from the shrinker is minimal and we allow regular clients better access to struct_mutex if they need it.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171013202621.7276-9-chris@chris-wilson.co.uk
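Roughly, the free worker then takes struct_mutex once per object instead of once for the whole batch, as in the sketch below (the `freed` llist member on the object is an assumption about that era's struct layout):

```c
static void free_objects_one_by_one(struct drm_i915_private *i915,
				    struct llist_node *freed)
{
	struct drm_i915_gem_object *obj, *on;

	llist_for_each_entry_safe(obj, on, freed, freed) {
		mutex_lock(&i915->drm.struct_mutex);
		/* unbind the vma of this single object */
		mutex_unlock(&i915->drm.struct_mutex);

		/* release the pages outside the lock, so only one object at
		 * a time is hidden from the shrinker */
	}
}
```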
-
By Chris Wilson
Inspired by Tvrtko's critique of the reaping of the stale contexts before allocating a new one, also limit the freed object reaping to the oldest stale object before allocating a fresh object. Unlike contexts, objects may have radically different sizes of backing storage, but similar to contexts, while we want to prevent starvation due to excessive freed lists, we also do not want to delay fresh allocations for too long. Only freeing the oldest on the freed object list before each allocation is a reasonable compromise.

v2: Only a single consumer of llist_del_first() is allowed (although multiple llist_add are still allowed in parallel). Unlike i915_gem_context, i915_gem_flush_free_objects() is itself not serialized and so we need to add our own spinlock. Otherwise KASAN eventually spots a use-after-free for the race on *first->next.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> #v1
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171013202621.7276-8-chris@chris-wilson.co.uk
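A sketch of the single-consumer rule from v2: llist_add() callers may race freely, but llist_del_first() must be serialised, here with a dedicated spinlock (the lock and helper names are assumptions):

```c
static void flush_oldest_freed(struct drm_i915_private *i915)
{
	struct llist_node *node;

	spin_lock(&i915->mm.free_lock);		/* lock name assumed */
	node = llist_del_first(&i915->mm.free_list);
	spin_unlock(&i915->mm.free_lock);

	if (node)
		free_one_object(node);		/* hypothetical helper */
}
```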
-
By Chris Wilson
Remove the struct_mutex requirement around dev_priv->mm.bound_list and dev_priv->mm.unbound_list by giving it its own spinlock. This reduces one more requirement for struct_mutex and in the process gives us slightly more accurate unbound_list tracking, which should improve the shrinker - but the drawback is that we drop the retirement before counting so i915_gem_object_is_active() may be stale and lead us to underestimate the number of objects that may be shrunk (see commit bed50aea ("drm/i915/shrinker: Flush active on objects before counting")).

v2: Crosslink the spinlock to the lists it protects, and btw this changes s/obj->global_link/obj->mm.link/
v3: Fix decoupling of old links in i915_gem_object_attach_phys()
v3.1: Fix the fix, only unlink if it was linked
v3.2: Use a local for to_i915(obj->base.dev)->mm.obj_lock

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171016114037.5556-1-chris@chris-wilson.co.uk
-
By Chris Wilson
In the next patch, we want to extend use of the global pin counter for semi-permanent pinning of context/ring objects. Given that we plan to extend the usage to encompass a disparate set of objects, we want a name that reflects both and should entail less confusion.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171013202621.7276-2-chris@chris-wilson.co.uk
-
By Chris Wilson
Since we occasionally stuff an error pointer into obj->mm.pages for a semi-permanent or even permanent failure, we have to be more careful and not just test against NULL when deciding if the object has a complete set of its concurrent pages.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171013202621.7276-1-chris@chris-wilson.co.uk
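The test this implies, sketched as a small helper (the exact helper name is an assumption):

```c
static inline bool object_has_pages(const struct drm_i915_gem_object *obj)
{
	/* obj->mm.pages may be NULL (never populated) or an ERR_PTR()
	 * recording a semi-permanent failure, so a plain NULL check is
	 * not enough to decide whether the pages are present. */
	return !IS_ERR_OR_NULL(READ_ONCE(obj->mm.pages));
}
```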
-
- 14 Oct 2017, 1 commit
-
-
By Chris Wilson
Since the removal of the stop_machine(), it is allowed and expected for the nop_submit_request() and nop_complete_submit_request() to run in parallel to the i915_gem_set_wedged() processing. As such we can no longer assert that i915_gem_set_wedged() has completed inside the stop_machine prior to the individual nop_submit_request execution.

Fixes: af7a8ffa ("drm/i915: Use rcu instead of stop_machine in set_wedged")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171012204019.3557-1-chris@chris-wilson.co.uk
Reviewed-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
-
- 11 Oct 2017, 2 commits
-
-
By Daniel Vetter
stop_machine is not really a locking primitive we should use, except when the hw folks tell us the hw is broken and that's the only way to work around it. This patch tries to address the locking abuse of stop_machine() from

commit 20e4933c
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Tue Nov 22 14:41:21 2016 +0000

    drm/i915: Stop the machine as we install the wedged submit_request handler

Chris said parts of the reasons for going with stop_machine() was that it's no overhead for the fast-path. But these callbacks use irqsave spinlocks and do a bunch of MMIO, and rcu_read_lock is _real_ fast.

To stay as close as possible to the stop_machine semantics we first update all the submit function pointers to the nop handler, then call synchronize_rcu() to make sure no new requests can be submitted. This should give us exactly the huge barrier we want.

I pondered whether we should annotate engine->submit_request as __rcu and use rcu_assign_pointer and rcu_dereference on it. But the reason behind those is to make sure the compiler/cpu barriers are there for when you have an actual data structure you point at, to make sure all the writes are seen correctly on the read side. But we just have a function pointer, and .text isn't changed, so no need for these barriers and hence no need for annotations.

Unfortunately there's a complication with the call to intel_engine_init_global_seqno:

- Without stop_machine we must hold the corresponding spinlock.
- Without stop_machine we must ensure that all requests are marked as having failed with dma_fence_set_error() before we call it.

That means we need to split the nop request submission into two phases, both synchronized with rcu:

1. Only stop submitting the requests to hw and mark them as failed.
2. After all pending requests in the scheduler/ring are suitably marked up as failed and we can force complete them all, also force complete by calling intel_engine_init_global_seqno().

This should fix the following lockdep splat:

======================================================
WARNING: possible circular locking dependency detected
4.14.0-rc3-CI-CI_DRM_3179+ #1 Tainted: G U
------------------------------------------------------
kworker/3:4/562 is trying to acquire lock:
 (cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff8113d4bc>] stop_machine+0x1c/0x40
but task is already holding lock:
 (&dev->struct_mutex){+.+.}, at: [<ffffffffa0136588>] i915_reset_device+0x1e8/0x260 [i915]
which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #6 (&dev->struct_mutex){+.+.}:
 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_interruptible_nested+0x1b/0x20 i915_mutex_lock_interruptible+0x51/0x130 [i915] i915_gem_fault+0x209/0x650 [i915] __do_fault+0x1e/0x80 __handle_mm_fault+0xa08/0xed0 handle_mm_fault+0x156/0x300 __do_page_fault+0x2c5/0x570 do_page_fault+0x28/0x250 page_fault+0x22/0x30

-> #5 (&mm->mmap_sem){++++}:
 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __might_fault+0x68/0x90 _copy_to_user+0x23/0x70 filldir+0xa5/0x120 dcache_readdir+0xf9/0x170 iterate_dir+0x69/0x1a0 SyS_getdents+0xa5/0x140 entry_SYSCALL_64_fastpath+0x1c/0xb1

-> #4 (&sb->s_type->i_mutex_key#5){++++}:
 down_write+0x3b/0x70 handle_create+0xcb/0x1e0 devtmpfsd+0x139/0x180 kthread+0x152/0x190 ret_from_fork+0x27/0x40

-> #3 ((complete)&req.done){+.+.}:
 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 wait_for_common+0x58/0x210 wait_for_completion+0x1d/0x20 devtmpfs_create_node+0x13d/0x160 device_add+0x5eb/0x620 device_create_groups_vargs+0xe0/0xf0 device_create+0x3a/0x40 msr_device_create+0x2b/0x40 cpuhp_invoke_callback+0xc9/0xbf0 cpuhp_thread_fun+0x17b/0x240 smpboot_thread_fn+0x18a/0x280 kthread+0x152/0x190 ret_from_fork+0x27/0x40

-> #2 (cpuhp_state-up){+.+.}:
 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpuhp_issue_call+0x133/0x1c0 __cpuhp_setup_state_cpuslocked+0x139/0x2a0 __cpuhp_setup_state+0x46/0x60 page_writeback_init+0x43/0x67 pagecache_init+0x3d/0x42 start_kernel+0x3a8/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb

-> #1 (cpuhp_state_mutex){+.+.}:
 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 __mutex_lock+0x86/0x9b0 mutex_lock_nested+0x1b/0x20 __cpuhp_setup_state_cpuslocked+0x53/0x2a0 __cpuhp_setup_state+0x46/0x60 page_alloc_init+0x28/0x30 start_kernel+0x145/0x3fc x86_64_start_reservations+0x2a/0x2c x86_64_start_kernel+0x6d/0x70 verify_cpu+0x0/0xfb

-> #0 (cpu_hotplug_lock.rw_sem){++++}:
 check_prev_add+0x430/0x840 __lock_acquire+0x1420/0x15e0 lock_acquire+0xb0/0x200 cpus_read_lock+0x3d/0xb0 stop_machine+0x1c/0x40 i915_gem_set_wedged+0x1a/0x20 [i915] i915_reset+0xb9/0x230 [i915] i915_reset_device+0x1f6/0x260 [i915] i915_handle_error+0x2d8/0x430 [i915] hangcheck_declare_hang+0xd3/0xf0 [i915] i915_hangcheck_elapsed+0x262/0x2d0 [i915] process_one_work+0x233/0x660 worker_thread+0x4e/0x3b0 kthread+0x152/0x190 ret_from_fork+0x27/0x40

other info that might help us debug this:

Chain exists of: cpu_hotplug_lock.rw_sem --> &mm->mmap_sem --> &dev->struct_mutex

Possible unsafe locking scenario:

      CPU0                    CPU1
      ----                    ----
 lock(&dev->struct_mutex);
                              lock(&mm->mmap_sem);
                              lock(&dev->struct_mutex);
 lock(cpu_hotplug_lock.rw_sem);

*** DEADLOCK ***

3 locks held by kworker/3:4/562:
 #0: ("events_long"){+.+.}, at: [<ffffffff8109c64a>] process_one_work+0x1aa/0x660
 #1: ((&(&i915->gpu_error.hangcheck_work)->work)){+.+.}, at: [<ffffffff8109c64a>] process_one_work+0x1aa/0x660
 #2: (&dev->struct_mutex){+.+.}, at: [<ffffffffa0136588>] i915_reset_device+0x1e8/0x260 [i915]

stack backtrace:
CPU: 3 PID: 562 Comm: kworker/3:4 Tainted: G U 4.14.0-rc3-CI-CI_DRM_3179+ #1
Hardware name: /NUC7i5BNB, BIOS BNKBL357.86A.0048.2017.0704.1415 07/04/2017
Workqueue: events_long i915_hangcheck_elapsed [i915]
Call Trace:
 dump_stack+0x68/0x9f print_circular_bug+0x235/0x3c0 ? lockdep_init_map_crosslock+0x20/0x20 check_prev_add+0x430/0x840 ? irq_work_queue+0x86/0xe0 ? wake_up_klogd+0x53/0x70 __lock_acquire+0x1420/0x15e0 ? __lock_acquire+0x1420/0x15e0 ? lockdep_init_map_crosslock+0x20/0x20 lock_acquire+0xb0/0x200 ? stop_machine+0x1c/0x40 ? i915_gem_object_truncate+0x50/0x50 [i915] cpus_read_lock+0x3d/0xb0 ? stop_machine+0x1c/0x40 stop_machine+0x1c/0x40 i915_gem_set_wedged+0x1a/0x20 [i915] i915_reset+0xb9/0x230 [i915] i915_reset_device+0x1f6/0x260 [i915] ? gen8_gt_irq_ack+0x170/0x170 [i915] ? work_on_cpu_safe+0x60/0x60 i915_handle_error+0x2d8/0x430 [i915] ? vsnprintf+0xd1/0x4b0 ? scnprintf+0x3a/0x70 hangcheck_declare_hang+0xd3/0xf0 [i915] ? intel_runtime_pm_put+0x56/0xa0 [i915] i915_hangcheck_elapsed+0x262/0x2d0 [i915] process_one_work+0x233/0x660 worker_thread+0x4e/0x3b0 kthread+0x152/0x190 ? process_one_work+0x660/0x660 ? kthread_create_on_node+0x40/0x40 ret_from_fork+0x27/0x40

Setting dangerous option reset - tainting kernel
i915 0000:00:02.0: Resetting chip after gpu hang
Setting dangerous option reset - tainting kernel
i915 0000:00:02.0: Resetting chip after gpu hang

v2: Have 1 global synchronize_rcu() barrier across all engines, and improve commit message.
v3: We need to protect the seqno update with the timeline spinlock (in set_wedged) to avoid racing with other updates of the seqno, like we already do in nop_submit_request (Chris).
v4: Use two-phase sequence to plug the race Chris spotted where we can complete requests before they're marked up with -EIO.
v5: Review from Chris:
    - simplify nop_submit_request.
    - Add comment to rcu_read_lock section.
    - Align comments with the new style.
v6: Remove unused variable to appease CI.

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=102886
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103096
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Marta Lofstedt <marta.lofstedt@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171011091019.1425-1-daniel.vetter@ffwll.ch
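A condensed sketch of the two-phase sequence described above (locking, the -EIO marking and the per-engine seqno update are elided; the nop handlers are assumed to be defined elsewhere, as in the patch description):

```c
static void set_wedged_two_phase(struct drm_i915_private *i915)
{
	struct intel_engine_cs *engine;
	enum intel_engine_id id;

	/* Phase 1: stop feeding the hardware; new submissions are only
	 * marked as failed with dma_fence_set_error(-EIO). */
	for_each_engine(engine, i915, id)
		engine->submit_request = nop_submit_request;
	synchronize_rcu();	/* no one still runs the old submit_request */

	/* Phase 2: now that everything pending is marked up as failed,
	 * switch to the handler that also force-completes requests. */
	for_each_engine(engine, i915, id)
		engine->submit_request = nop_complete_submit_request;
	synchronize_rcu();
}
```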
-
By Sagar Arun Kamble
Prepared substructure rps for RPS related state. autoenable_work is used for RC6 too, hence it is defined outside the rps structure. As we do this, many functions are refactored to use struct intel_rps *rps to access rps related members. Hence renamed intel_rps_client pointer variables to rps_client in various functions.

v2: Rebase.
v3: s/pm/gt_pm (Chris)
    Refactored access to rps structure by declaring struct intel_rps * in many functions.

Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Radoslaw Szwichtenberg <radoslaw.szwichtenberg@intel.com> #1
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/1507360055-19948-9-git-send-email-sagar.a.kamble@intel.com
Acked-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171010213010.7415-8-chris@chris-wilson.co.uk
-
- 10 Oct 2017, 6 commits
-
-
By Matthew Auld
It's a little unclear what the sg_mask actually is, so prefer the more meaningful name of sg_page_sizes.

Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171009110024.29114-1-matthew.auld@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
By Chris Wilson
Currently, we reject attempting to pin a large bo into the mappable aperture, but only after trying to create the vma. Under debug kernels, repeatedly creating and freeing that vma for an oversized bo consumes one-third of the runtime for pwrite/pread tests as it is spent on kmalloc/kfree tracking. If we move the rejection to before creating that vma, we lose some accuracy of checking against the fence_size as opposed to object size, though the fence can never be smaller than the object. Note that the vma creation itself will reject an attempt to create a vma larger than the GTT so we can remove one redundant test.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-7-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
By Chris Wilson
Both pread/pwrite GTT paths provide a fast fallback in case we cannot map the whole object at a time. Currently, we use the fallback for very large objects and for active objects that would require remapping, but we can also add active fault mappable objects to the list that we want to avoid evicting. The rationale is that such fault mappable objects are in active use and evicting them requires tearing down the CPU PTE and forcing a page fault on the next access; more costly, and more interfering with other processes, than our per-page GTT fallback.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-6-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
By Chris Wilson
We don't wish to refault the entire object (other vma) when unbinding one partial vma. To do this, track which vma have been faulted into the user's address space.

v2: Use a local vma_offset to tidy up a multiline unmap_mapping_range().

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-3-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
-
By Chris Wilson
Following the pattern now used for obj->mm.pages, use just pin_fence and unpin_fence to control access to the fence registers. I.e. instead of calling get_fence(); pin_fence(), we now just need to call pin_fence(). This will make it easier to reduce the locking requirements around fence registers.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171009084401.29090-2-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
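Typical usage under the combined interface, as a sketch (assuming the vma-level entry points this rename introduces):

```c
static int access_through_fence(struct i915_vma *vma)
{
	int err;

	err = i915_vma_pin_fence(vma);	/* reserve and pin in one call */
	if (err)
		return err;

	/* ... access the object through the detiling fence ... */

	i915_vma_unpin_fence(vma);
	return 0;
}
```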
-
By Chris Wilson
Resetting the engine requires us to hold the forcewake wakeref to prevent RC6 trying to happen in the middle of the reset sequence. The consequence of an unwanted RC6 event in the middle is that random state is then saved to the powercontext and restored later, which may overwrite the mmio state we need to preserve (e.g. PD_DIR_BASE in the legacy ringbuffer reset_ring_common()). This was noticed in the live_hangcheck selftests when Haswell would sporadically fail to restart during igt_reset_queue().

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171009110301.21705-3-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
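In outline, the reset path then brackets the whole sequence with a forcewake reference, along these lines (the reset helper named here is hypothetical):

```c
static int reset_engine_awake(struct drm_i915_private *i915,
			      struct intel_engine_cs *engine)
{
	int ret;

	/* Hold forcewake so RC6 cannot kick in mid-reset and save random
	 * state (e.g. PD_DIR_BASE) into the power context. */
	intel_uncore_forcewake_get(i915, FORCEWAKE_ALL);
	ret = do_engine_reset(engine);		/* hypothetical helper */
	intel_uncore_forcewake_put(i915, FORCEWAKE_ALL);

	return ret;
}
```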
-
- 07 Oct 2017, 7 commits
-
-
By Matthew Auld
Currently gvt gtt handling doesn't support huge page entries, so disable for now.

v2: remove useless 48b PPGTT check

Suggested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-20-matthew.auld@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-19-chris@chris-wilson.co.uk
-
By Matthew Auld
v2: mock test page support configurations and add MI_STORE_DWORD test
v3: run all mockable huge page tests on all platforms via the mock_device
v4: add pin_update regression test
    various improvements suggested by Chris
v5: fix issues reported by kbuild test single sg spanning multiple page sizes don't explode when running the live-tests through the appgtt
v6: lots of improvements from Chris
v7: run on each engine for igt_write_huge
    add simple tmpfs fallback test
v8: size_t is bad
    don't break the i386 build

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-18-matthew.auld@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-17-chris@chris-wilson.co.uk
-
By Matthew Auld
In preparation for supporting huge gtt pages for the ppgtt, we introduce page size members for gem objects. We fill in the page sizes by scanning the sg table.

v2: pass the sg_mask to set_pages
v3: calculate the sg_mask inline with populating the sg_table where possible, and pass to set_pages along with the pages.
v4: bunch of improvements from Joonas
v5: fix num_pages blunder
    introduce i915_sg_page_sizes helper
v6: prefer GEM_BUG_ON(sizes == 0)

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-7-matthew.auld@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-6-chris@chris-wilson.co.uk
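The page-size scan mentioned in v5 boils down to OR-ing together the segment lengths of the sg table, roughly as sketched below (not the literal patch code):

```c
#include <linux/scatterlist.h>

static unsigned int sg_page_sizes_sketch(struct scatterlist *sg)
{
	unsigned int page_sizes = 0;

	/* Each segment length is page aligned; OR-ing the lengths yields a
	 * mask whose set bits cover every page size used by the object. */
	for (; sg; sg = sg_next(sg))
		page_sizes |= sg->length;

	return page_sizes;
}
```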
-
By Matthew Auld
Each backend is now responsible for calling __i915_gem_object_set_pages upon successfully gathering its backing storage. This eliminates the inconsistency between the async and sync paths, which stands out even more when we start throwing around an sg_mask in a later patch.

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-6-matthew.auld@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-5-chris@chris-wilson.co.uk
-
By Matthew Auld
Not a fully blown gemfs, just our very own tmpfs kernel mount. Doing so moves us away from the shmemfs shm_mnt, and gives us the much needed flexibility to do things like set our own mount options, namely huge=, which should allow us to enable the use of transparent-huge-pages for our shmem backed objects.

v2: various improvements suggested by Joonas
v3: move gemfs instance to i915.mm and simplify now that we have file_setup_with_mnt
v4: fallback to tmpfs shm_mnt upon failure to setup gemfs
v5: make tmpfs fallback kinder
v5: better gemfs failure message
    flags variable

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Hugh Dickins <hughd@google.com>
Cc: linux-mm@kvack.org
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006145041.21673-3-matthew.auld@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006221833.32439-2-chris@chris-wilson.co.uk
-
By Chris Wilson
If two nop's (requests in-flight following a wedged device) complete at the same time, the global_seqno value written to the HWSP is undefined as the two threads are not serialized.

v2: Use irqsafe spinlock. We expect the callback may be called from inside another irq spinlock, so we can't unconditionally restore irqs.

Fixes: ce1135c7 ("drm/i915: Complete requests in nop_submit_request")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006115617.18432-1-chris@chris-wilson.co.uk
-
By Chris Wilson
If a worker requeues itself, it may switch to a different kworker pool, which flush_work() considers as complete. To be strict, we then need to keep flushing the work until it is no longer pending.

References: https://bugs.freedesktop.org/show_bug.cgi?id=102456
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20171006104038.22337-1-chris@chris-wilson.co.uk
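The strict flush then becomes a loop, roughly:

```c
#include <linux/workqueue.h>

static void drain_work(struct work_struct *work)
{
	/* flush_work() only waits for the pool the work was last seen on;
	 * a self-requeueing worker may have hopped pools, so keep going
	 * until the work item is genuinely idle. */
	do {
		flush_work(work);
	} while (work_pending(work));
}
```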
-
- 29 Sep 2017, 1 commit
-
-
By Sagar Arun Kamble
i915_gem_restore_fences is a GEM resumption task, hence it is moved to i915_gem_resume from i915_restore_state.

Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1506661116-12106-1-git-send-email-sagar.a.kamble@intel.com
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
-
- 25 Sep 2017, 1 commit
-
-
By Mika Kuoppala
Engine's execlist related items have been increasing to a point where a separate struct is warranted. Carve execlist specific items to a dedicated struct to add clarity.

v2: add kerneldoc and fix whitespace (Joonas, Chris)
v3: csb_mmio changes, rebase
v4: s/\b(el|execlist)\b/execlists/ (Joonas)

Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com> (v3)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v3)
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170922124307.10914-1-mika.kuoppala@intel.com
-
- 22 Sep 2017, 1 commit
-
-
By Michal Wajdeczko
Our global struct with params is named exactly the same way as the new preferred name for the drm_i915_private function parameter. To avoid such name reuse, let's use a different name for the global.

v5: pure rename
v6: fix

Credits-to: Coccinelle

@@
identifier n;
@@
(
- i915.n
+ i915_modparams.n
)

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Ville Syrjala <ville.syrjala@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170919193846.38060-1-michal.wajdeczko@intel.com
-
- 18 Sep 2017, 1 commit
-
-
By Chris Wilson
When wedging the hw, we want to mark all in-flight requests as -EIO. This is made slightly more complex by execlists who store the ready but not yet submitted-to-hw requests on a private queue (an rbtree priolist). Call into execlists to cancel not only the ELSP tracking for the submitted requests, but also the queue of unsubmitted requests.

v2: Move the majority of engine_set_wedged to the backends (both legacy ringbuffer and execlists handling their own lists).

Reported-by: Michał Winiarski <michal.winiarski@intel.com>
Testcase: igt/gem_eio/in-flight-contexts
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170915173100.26470-1-chris@chris-wilson.co.uk
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
-
- 14 Sep 2017, 1 commit
-
-
By Michal Hocko
GFP_TEMPORARY was introduced by commit e12ba74d ("Group short-lived and reclaimable kernel allocations") along with __GFP_RECLAIMABLE. Its primary motivation was to allow users to tell that an allocation is short lived and so the allocator can try to place such allocations close together and prevent long term fragmentation. As much as this sounds like a reasonable semantic it becomes much less clear when to use the highlevel GFP_TEMPORARY allocation flag. How long is temporary? Can the context holding that memory sleep? Can it take locks? It seems there is no good answer for those questions.

The current implementation of GFP_TEMPORARY is basically GFP_KERNEL | __GFP_RECLAIMABLE which in itself is tricky because basically none of the existing callers provide a way to reclaim the allocated memory. So this is rather misleading and hard to evaluate for any benefits.

I have checked some random users and none of them has added the flag with a specific justification. I suspect most of them just copied from other existing users and others just thought it might be a good idea to use without any measuring. This suggests that GFP_TEMPORARY just motivates for cargo cult usage without any reasoning.

I believe that our gfp flags are quite complex already and especially those with highlevel semantic should be clearly defined to prevent confusion and abuse. Therefore I propose dropping GFP_TEMPORARY and replacing all existing users to simply use GFP_KERNEL. Please note that SLAB users with shrinkers will still get the __GFP_RECLAIMABLE heuristic and so they will be placed properly for memory fragmentation prevention.

I can see reasons we might want some gfp flag to reflect short-term allocations but I propose starting from a clear semantic definition and only then add users with proper justification.

This was brought up before LSF this year by Matthew [1] and it turned out that GFP_TEMPORARY really doesn't have a clear semantic. It seems to be a heuristic without any measured advantage for most (if not all) its current users. The follow up discussion has revealed that opinions on what might be a temporary allocation differ a lot between developers. So rather than trying to tweak existing users into a semantic which they haven't expected I propose to simply remove the flag and start from scratch if we really need a semantic for short term allocations.

[1] http://lkml.kernel.org/r/20170118054945.GD18349@bombadil.infradead.org

[akpm@linux-foundation.org: fix typo]
[akpm@linux-foundation.org: coding-style fixes]
[sfr@canb.auug.org.au: drm/i915: fix up]
Link: http://lkml.kernel.org/r/20170816144703.378d4f4d@canb.auug.org.au
Link: http://lkml.kernel.org/r/20170728091904.14627-1-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Neil Brown <neilb@suse.de>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
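The conversion itself is mechanical; every former GFP_TEMPORARY user simply passes GFP_KERNEL, e.g.:

```c
#include <linux/slab.h>

static void *alloc_scratch(size_t size)
{
	/* was: return kmalloc(size, GFP_TEMPORARY); */
	return kmalloc(size, GFP_KERNEL);
}
```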
-
- 08 Sep 2017, 2 commits
-
-
By Chris Wilson
We also see the delayed GTT write issue on i915g/i915gm, so let's presume that it is a universal problem for all !llc machines, and that we just haven't yet noticed on g33, gen4 and gen5 machines.

v2: Use a register that exists on all platforms

Testcase: igt/gem_mmap_gtt/coherency # i915gm
References: https://bugs.freedesktop.org/show_bug.cgi?id=102577
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170907184520.5032-1-chris@chris-wilson.co.uk
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
-
By Ville Syrjälä
drm_pci_alloc() refuses to cooperate if the passed alignment exceeds the object size. So round up the obj size to the next power of two as well to make this actually work. Obviously things work just fine as long as the size was a power of two to begin with. However kms_cursor_crc doesn't always use power of two sizes so we hit a failure when we try to allocate the phys memory.

Testcase: igt/kms_cursor_crc
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170907143203.13055-1-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
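A sketch of the resulting allocation call (the surrounding helper is an assumption, not the patch itself):

```c
#include <linux/log2.h>

static drm_dma_handle_t *alloc_phys_handle(struct drm_device *dev,
					    size_t obj_size)
{
	/* drm_pci_alloc() rejects align > size, so round the size up to the
	 * next power of two and use it for both arguments. */
	size_t size = roundup_pow_of_two(obj_size);

	return drm_pci_alloc(dev, size, size);
}
```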
-
- 07 Sep 2017, 2 commits
-
-
By Tvrtko Ursulin
With the addition of __sg_alloc_table_from_pages we can control the maximum coalescing size and eliminate a separate path for allocating backing store here. Similar to 871dfbd6 ("drm/i915: Allow compaction upto SWIOTLB max segment size") this enables more compact sg lists to be created and so has a beneficial effect on workloads with many and/or large objects of this class.

v2:
 * Rename helper to i915_sg_segment_size and fix swiotlb override.
 * Commit message update.
v3:
 * Actually include the swiotlb override fix.
v4:
 * Regroup parameters a bit. (Chris Wilson)
v5:
 * Rebase for swiotlb_max_segment.
 * Add DMA map failure handling as in abb0deac ("drm/i915: Fallback to single PAGE_SIZE segments for DMA remapping").
v6: Handle swiotlb_max_segment() returning 1. (Joonas Lahtinen)
v7: Rebase.
v8: Commit spelling fix.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20170803091417.23677-1-tvrtko.ursulin@linux.intel.com
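In outline, the allocation path then looks like the sketch below; the __sg_alloc_table_from_pages() signature and the SWIOTLB fallback shown here are written from memory and should be treated as assumptions:

```c
static int alloc_table_sketch(struct sg_table *st, struct page **pages,
			      unsigned int n_pages)
{
	unsigned int max_segment = swiotlb_max_segment();

	if (!max_segment)
		max_segment = rounddown(UINT_MAX, PAGE_SIZE);

	/* Coalesce contiguous pages, but never build a segment larger than
	 * the DMA layer (SWIOTLB) can bounce. */
	return __sg_alloc_table_from_pages(st, pages, n_pages, 0,
					   (unsigned long)n_pages << PAGE_SHIFT,
					   max_segment, GFP_KERNEL);
}
```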
-
By Chris Wilson
shrink_slab() allows us to report back the number of objects we successfully scanned (out of the target shrinkctl->nr_to_scan). As we report the number of pages owned by each GEM object as a separate item to the shrinker, we cannot precisely control the number of shrinker objects we scan on each pass; and indeed may free more than requested. If we fail to tell the shrinker about the number of objects we process, it will continue to hold a grudge against us as any objects left unscanned are added to the next reclaim -- and so we will keep on "unfairly" shrinking our own slab in comparison to other slabs.

Link: http://lkml.kernel.org/r/20170822135325.9191-2-chris@chris-wilson.co.uk
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Shaohua Li <shli@fb.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
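In shrinker terms, the scan callback then reports its progress back through shrink_control, along the lines of the sketch below (the shrink helper is hypothetical; the field usage follows the description above):

```c
static unsigned long i915_scan_sketch(struct shrinker *shrinker,
				      struct shrink_control *sc)
{
	unsigned long scanned = 0, freed;

	freed = shrink_some_objects(sc->nr_to_scan, &scanned); /* hypothetical */

	/* Tell the core how much was actually scanned so leftover objects
	 * are not unfairly carried over into the next reclaim pass. */
	sc->nr_scanned += scanned;

	return freed;
}
```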
-