1. 11 Nov 2015 (2 commits)
  2. 10 Nov 2015 (1 commit)
    • drm/vc4: Add dependency on HAVE_DMA_ATTRS, and select DRM_GEM_CMA_HELPER · 2565df91
      Committed by Guenter Roeck
      Avoid the following build errors, seen with m68k:allmodconfig and other
      architectures which do not support HAVE_DMA_ATTRS.
      
      ERROR: "drm_gem_cma_create" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_prime_mmap" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_prime_get_sg_table" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_vm_ops" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_mmap" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_prime_vunmap" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_prime_import_sg_table" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_free_object" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_prime_vmap" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_dumb_map_offset" [drivers/gpu/drm/vc4/vc4.ko] undefined!
      ERROR: "drm_gem_cma_create" [drivers/gpu/drm/drm_kms_helper.ko] undefined!
      ERROR: "drm_gem_cma_describe" [drivers/gpu/drm/drm_kms_helper.ko] undefined!
      ERROR: "drm_gem_cma_free_object" [drivers/gpu/drm/drm_kms_helper.ko] undefined!
      Acked-by: Eric Anholt <eric@anholt.net>
      Fixes: c8b75bca ("drm/vc4: Add KMS support for Raspberry Pi.")
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      2565df91
  3. 07 Nov 2015 (2 commits)
    • Merge tag 'drm-intel-next-fixes-2015-11-06' of git://anongit.freedesktop.org/drm-intel into drm-next · 816d2206
      Committed by Dave Airlie
      Merge tag 'drm-intel-next-fixes-2015-11-06' of git://anongit.freedesktop.org/drm-intel into drm-next
      
      Here's a handful of i915 fixes for drm-next/v4.4. Imre's commit alone
      should address the remaining warnings galore you experienced on
      Skylake. Almost all of the rest are also fixes against user or QA
      reported bugs, with references.
      
      * tag 'drm-intel-next-fixes-2015-11-06' of git://anongit.freedesktop.org/drm-intel:
        drm/i915/skl: disable display side power well support for now
        drm/i915: Extend DSL readout fix to BDW and SKL.
        drm/i915: Do graphics device reset under forcewake
        drm/i915: Skip fence installation for objects with rotated views (v4)
        drm/i915: add quirk to enable backlight on Dell Chromebook 11 (2015)
        drm/i915/skl: Prevent unclaimed register writes on skylake.
        drm/i915: disable CPU PWM also on LPT/SPT backlight disable
        drm/i915: Fix maxfifo watermark calc on vlv cursor planes
        drm/i915: add hotplug activation period to hotplug update mask
      816d2206
    • Merge branch 'vmwgfx-next' of git://people.freedesktop.org/~thomash/linux into drm-next · d0baf921
      Committed by Dave Airlie
      One fix for a regression in 4.3, and one irq locking rework.
      
      * 'vmwgfx-next' of git://people.freedesktop.org/~thomash/linux:
        drm/vmwgfx: Relax irq locking somewhat
        drm/vmwgfx: Properly flush cursor updates and page-flips
      d0baf921
  4. 06 Nov 2015 (4 commits)
  5. 05 Nov 2015 (13 commits)
  6. 04 Nov 2015 (7 commits)
    • drm/i915: Fix locking around GuC firmware load · bf248ca1
      Committed by Daniel Stone
      The GuC firmware load requires struct_mutex to create a GEM object,
      but this collides badly with request_firmware. Move struct_mutex
      locking down into the loader itself, so we don't hold it across the
      entire load process, including request_firmware.
      
      [   20.451400] ======================================================
      [   20.451420] [ INFO: possible circular locking dependency detected ]
      [   20.451441] 4.3.0-rc5+ #1 Tainted: G        W
      [   20.451457] -------------------------------------------------------
      [   20.451477] plymouthd/371 is trying to acquire lock:
      [   20.451494]  (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa0093c62>]
      drm_gem_mmap+0x112/0x290 [drm]
      [   20.451538]
                     but task is already holding lock:
      [   20.451557]  (&mm->mmap_sem){++++++}, at: [<ffffffff811fd9ac>]
      vm_mmap_pgoff+0x8c/0xf0
      [   20.451591]
                     which lock already depends on the new lock.
      
      [   20.451617]
                     the existing dependency chain (in reverse order) is:
      [   20.451640]
                     -> #3 (&mm->mmap_sem){++++++}:
      [   20.451661]        [<ffffffff8110644e>] lock_acquire+0xce/0x1c0
      [   20.451683]        [<ffffffff8120ec9a>] __might_fault+0x7a/0xa0
      [   20.451705]        [<ffffffff8127e34e>] filldir+0x9e/0x130
      [   20.451726]        [<ffffffff81295b86>] dcache_readdir+0x186/0x230
      [   20.451748]        [<ffffffff8127e117>] iterate_dir+0x97/0x130
      [   20.451769]        [<ffffffff8127e66a>] SyS_getdents+0x9a/0x130
      [   20.451790]        [<ffffffff8184f2f2>] entry_SYSCALL_64_fastpath+0x12/0x76
      [   20.451829]
                     -> #2 (&sb->s_type->i_mutex_key#2){+.+.+.}:
      [   20.451852]        [<ffffffff8110644e>] lock_acquire+0xce/0x1c0
      [   20.451872]        [<ffffffff8184b516>] mutex_lock_nested+0x86/0x400
      [   20.451893]        [<ffffffff81277790>] walk_component+0x1d0/0x2a0
      [   20.451914]        [<ffffffff812779f0>] link_path_walk+0x190/0x5a0
      [   20.451935]        [<ffffffff8127803b>] path_openat+0xab/0x1260
      [   20.451955]        [<ffffffff8127a651>] do_filp_open+0x91/0x100
      [   20.451975]        [<ffffffff81267e67>] file_open_name+0xf7/0x150
      [   20.451995]        [<ffffffff81267ef3>] filp_open+0x33/0x60
      [   20.452014]        [<ffffffff8157e1e7>] _request_firmware+0x277/0x880
      [   20.452038]        [<ffffffff8157e9e4>] request_firmware_work_func+0x34/0x80
      [   20.452060]        [<ffffffff810c7020>] process_one_work+0x230/0x680
      [   20.452082]        [<ffffffff810c74be>] worker_thread+0x4e/0x450
      [   20.452102]        [<ffffffff810ce511>] kthread+0x101/0x120
      [   20.452121]        [<ffffffff8184f66f>] ret_from_fork+0x3f/0x70
      [   20.452140]
                     -> #1 (umhelper_sem){++++.+}:
      [   20.452159]        [<ffffffff8110644e>] lock_acquire+0xce/0x1c0
      [   20.452178]        [<ffffffff8184c5c1>] down_read+0x51/0xa0
      [   20.452197]        [<ffffffff810c203b>]
      usermodehelper_read_trylock+0x5b/0x130
      [   20.452221]        [<ffffffff8157e147>] _request_firmware+0x1d7/0x880
      [   20.452242]        [<ffffffff8157e821>] request_firmware+0x31/0x50
      [   20.452262]        [<ffffffffa01b54a4>]
      intel_guc_ucode_init+0xf4/0x400 [i915]
      [   20.452305]        [<ffffffffa0213913>] i915_driver_load+0xd63/0x16e0 [i915]
      [   20.452343]        [<ffffffffa00987d9>] drm_dev_register+0xa9/0xc0 [drm]
      [   20.452369]        [<ffffffffa009ae3d>] drm_get_pci_dev+0x8d/0x1e0 [drm]
      [   20.452396]        [<ffffffffa01521e4>] i915_pci_probe+0x34/0x50 [i915]
      [   20.452421]        [<ffffffff81464675>] local_pci_probe+0x45/0xa0
      [   20.452443]        [<ffffffff81465a6d>] pci_device_probe+0xfd/0x140
      [   20.452464]        [<ffffffff8156a2e4>] driver_probe_device+0x224/0x480
      [   20.452486]        [<ffffffff8156a5c8>] __driver_attach+0x88/0x90
      [   20.452505]        [<ffffffff81567cf3>] bus_for_each_dev+0x73/0xc0
      [   20.452526]        [<ffffffff81569a7e>] driver_attach+0x1e/0x20
      [   20.452546]        [<ffffffff815695ae>] bus_add_driver+0x1ee/0x280
      [   20.452566]        [<ffffffff8156b100>] driver_register+0x60/0xe0
      [   20.453197]        [<ffffffff81464050>] __pci_register_driver+0x60/0x70
      [   20.453845]        [<ffffffffa009b070>] drm_pci_init+0xe0/0x110 [drm]
      [   20.454497]        [<ffffffffa027f092>] 0xffffffffa027f092
      [   20.455156]        [<ffffffff81002123>] do_one_initcall+0xb3/0x200
      [   20.455796]        [<ffffffff811d8c01>] do_init_module+0x5f/0x1e7
      [   20.456434]        [<ffffffff8114c4e6>] load_module+0x2126/0x27d0
      [   20.457071]        [<ffffffff8114cdf9>] SyS_finit_module+0xb9/0xf0
      [   20.457738]        [<ffffffff8184f2f2>] entry_SYSCALL_64_fastpath+0x12/0x76
      [   20.458370]
                     -> #0 (&dev->struct_mutex){+.+.+.}:
      [   20.459773]        [<ffffffff8110584f>] __lock_acquire+0x191f/0x1ba0
      [   20.460451]        [<ffffffff8110644e>] lock_acquire+0xce/0x1c0
      [   20.461074]        [<ffffffffa0093c88>] drm_gem_mmap+0x138/0x290 [drm]
      [   20.461693]        [<ffffffff8121a5ec>] mmap_region+0x3ec/0x670
      [   20.462298]        [<ffffffff8121abb2>] do_mmap+0x342/0x420
      [   20.462901]        [<ffffffff811fd9d2>] vm_mmap_pgoff+0xb2/0xf0
      [   20.463532]        [<ffffffff81218f62>] SyS_mmap_pgoff+0x1f2/0x290
      [   20.464118]        [<ffffffff8102187b>] SyS_mmap+0x1b/0x30
      [   20.464702]        [<ffffffff8184f2f2>] entry_SYSCALL_64_fastpath+0x12/0x76
      [   20.465289]
                     other info that might help us debug this:
      
      [   20.467179] Chain exists of:
                       &dev->struct_mutex --> &sb->s_type->i_mutex_key#2 -->
      &mm->mmap_sem
      
      [   20.468928]  Possible unsafe locking scenario:
      
      [   20.470161]        CPU0                    CPU1
      [   20.470745]        ----                    ----
      [   20.471325]   lock(&mm->mmap_sem);
      [   20.471902]                                lock(&sb->s_type->i_mutex_key#2);
      [   20.472538]                                lock(&mm->mmap_sem);
      [   20.473118]   lock(&dev->struct_mutex);
      [   20.473704]
                      *** DEADLOCK ***
      Signed-off-by: Daniel Stone <daniels@collabora.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      bf248ca1
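
      The fix above is about lock scope rather than new behaviour. A minimal
      sketch of the resulting shape, assuming hypothetical helper names
      (example_load_guc_firmware, example_fw_to_gem_object) and an
      illustrative firmware path rather than the real i915 symbols:

      #include <drm/drmP.h>
      #include <linux/firmware.h>
      #include <linux/mutex.h>

      /* Hypothetical helper: wraps the firmware blob in a GEM object. */
      int example_fw_to_gem_object(struct drm_device *dev,
                                   const struct firmware *fw);

      int example_load_guc_firmware(struct drm_device *dev)
      {
              const struct firmware *fw;
              int ret;

              /*
               * request_firmware() can sleep and call back into the VFS and
               * usermode helpers, so it must not run under struct_mutex;
               * holding struct_mutex across it is what produced the lockdep
               * chain shown above.
               */
              ret = request_firmware(&fw, "i915/guc_fw.bin", dev->dev);
              if (ret)
                      return ret;

              /* Only the GEM object creation itself needs struct_mutex. */
              mutex_lock(&dev->struct_mutex);
              ret = example_fw_to_gem_object(dev, fw);
              mutex_unlock(&dev->struct_mutex);

              release_firmware(fw);
              return ret;
      }

      The general rule the splat enforces: anything that can wait on
      userspace (firmware loading, usermode helpers) should not run under
      locks that the mmap/GEM fault paths also take.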
    • drm/amdgpu: update Fiji's Golden setting · a7ca8ef9
      Committed by Flora Cui
      Change-Id: Ic3f3bfce4767cc05d04f6eb24e22a0f3e7ceacaa
      Signed-off-by: Flora Cui <Flora.Cui@amd.com>
      Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
      a7ca8ef9
    • drm/amdgpu: update Fiji's rev id · b6bc28ff
      Committed by Flora Cui
      Change-Id: I0018e2b72feb771683c57960ba3ce942bec5d3ab
      Signed-off-by: Flora Cui <Flora.Cui@amd.com>
      Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
      b6bc28ff
    • drm/amdgpu: extract common code in vi_common_early_init · a3d08fa5
      Committed by Flora Cui
      Change-Id: I9ed25353c559e27bc1b1d5b50f977b0ff03de87f
      Signed-off-by: Flora Cui <Flora.Cui@amd.com>
      Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
      a3d08fa5
    • drm/amd/scheduler: don't oops on failure to load · 32544d02
      Committed by Dave Airlie
      In two places amdgpu tries to tear down something it hasn't
      initialised when failing. This is what happens when you
      enable experimental support on topaz, which then fails in
      ring init.
      
      This patch allows it to fail cleanly.
      
      agd: Split out from the original patch since the
      scheduler is driver independent.
      Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Cc: stable@vger.kernel.org
      32544d02
    • drm/amdgpu: don't oops on failure to load (v2) · fe295b27
      Committed by Dave Airlie
      In two places amdgpu tries to tear down something it hasn't
      initialised when failing. This is what happens when you
      enable experimental support on topaz, which then fails in
      ring init.
      
      This patch allows it to fail cleanly.
      
      v2 (agd): split out scheduler change into a separate patch
      Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Cc: stable@vger.kernel.org
      fe295b27
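
      Both "don't oops on failure to load" entries above describe the same
      defensive pattern: the teardown path must tolerate state that was never
      initialised because an earlier step (here, ring init with experimental
      topaz support) failed. A minimal sketch of that pattern, using
      hypothetical names rather than the real amdgpu/scheduler symbols:

      #include <linux/slab.h>

      /* Hypothetical stand-ins for the scheduler/ring state being torn down. */
      struct example_sched {
              void *rq;
      };

      struct example_ring {
              struct example_sched *sched;    /* NULL if init never completed */
              bool ready;
      };

      static void example_ring_fini(struct example_ring *ring)
      {
              /*
               * If ring init failed early, sched was never allocated; bail
               * out instead of dereferencing it so the probe fails cleanly.
               */
              if (!ring->sched)
                      return;

              kfree(ring->sched);
              ring->sched = NULL;
              ring->ready = false;
      }

      The Cc: stable tag on both entries indicates the guards are wanted in
      released kernels as well, not just in drm-next.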
    • df7989fe
  7. 03 Nov 2015 (11 commits)