1. 23 June 2021, 2 commits
  2. 28 April 2021, 1 commit
  3. 08 April 2021, 2 commits
  4. 07 April 2021, 1 commit
  5. 02 April 2021, 3 commits
  6. 18 March 2021, 1 commit
  7. 27 February 2021, 1 commit
    • drm/msm: Fix speed-bin support not to access outside valid memory · 7bf168c8
      Committed by Douglas Anderson
      When running the latest kernel on an sc7180 with KASAN I got this
      splat:
        BUG: KASAN: slab-out-of-bounds in a6xx_gpu_init+0x618/0x644
        Read of size 4 at addr ffffff8088f36100 by task kworker/7:1/58
        CPU: 7 PID: 58 Comm: kworker/7:1 Not tainted 5.11.0+ #3
        Hardware name: Google Lazor (rev1 - 2) with LTE (DT)
        Workqueue: events deferred_probe_work_func
        Call trace:
         dump_backtrace+0x0/0x3a8
         show_stack+0x24/0x30
         dump_stack+0x174/0x1e0
         print_address_description+0x70/0x2e4
         kasan_report+0x178/0x1bc
         __asan_report_load4_noabort+0x44/0x50
         a6xx_gpu_init+0x618/0x644
         adreno_bind+0x26c/0x438
      
      This is because the speed bin is defined like this:
        gpu_speed_bin: gpu_speed_bin@1d2 {
          reg = <0x1d2 0x2>;
          bits = <5 8>;
        };
      
      As you can see, the "length" is 2 bytes. That means the nvmem
      subsystem allocates only 2 bytes. The GPU code, however, was casting
      the pointer allocated by nvmem to a (u32 *) and dereferencing it.
      That's not so good.
      
      Let's fix this to just use the nvmem_cell_read_u16() accessor function
      which simplifies things and also gets rid of the splat.
      
      Let's also put an explicit conversion from little endian in place just
      to make things clear. The nvmem subsystem today is assuming little
      endian and this makes it clear. Specifically, the way the above sc7180
      cell is interpreted:
      
      NVMEM:
       +--------+--------+--------+--------+--------+
       | ...... | 0x1d3  | 0x1d2  | ...... | 0x000  |
       +--------+--------+--------+--------+--------+
                    ^       ^
                   msb     lsb
      
      You can see that the least significant data is at the lower address
      which is little endian.
      
      NOTE: someone who is truly paying attention might wonder about me
      picking the "u16" version of this accessor instead of the "u8" (since
      the value is 8 bits big) or the u32 version (just for fun). At the
      moment you need to pick the accessor that exactly matches the length
      the cell was specified as in the device tree. Hopefully future
      patches to the nvmem subsystem will fix this.
      
      Fixes: fe7952c6 ("drm/msm: Add speed-bin support to a618 gpu")
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Signed-off-by: Rob Clark <robdclark@chromium.org>
  8. 24 February 2021, 2 commits
  9. 02 February 2021, 1 commit
  10. 01 February 2021, 14 commits
  11. 08 January 2021, 2 commits
  12. 06 December 2020, 1 commit
  13. 30 November 2020, 3 commits
    • drm/msm/a6xx: Add support for using system cache on MMU500 based targets · 3d247123
      Committed by Jordan Crouse
      GPU targets with an MMU-500 attached have a slightly different process for
      enabling system cache. Use the compatible string on the IOMMU phandle
      to see if an MMU-500 is attached and modify the programming sequence
      accordingly.
      Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
      Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
      Signed-off-by: Rob Clark <robdclark@chromium.org>
    • drm/msm/a6xx: Add support for using system cache(LLC) · 474dadb8
      Committed by Sharat Masetty
      The last level system cache can be partitioned into 32 different
      slices, of which the GPU has two slices preallocated. One slice is
      used for caching GPU buffers and the other for caching the GPU SMMU
      pagetables. This patch talks to the core system cache driver to
      acquire the slice handles, configure the SCIDs for those slices, and
      activate and deactivate the slices upon GPU power collapse and
      restore.
      
      Some support from the IOMMU driver is also needed to set the right
      TCR attributes so the system cache can be used. The GPU can then
      override a few cacheability parameters, which it uses to change
      write-allocate to write-no-allocate, since the GPU hardware does not
      benefit much from it.
      
      DOMAIN_ATTR_IO_PGTABLE_CFG is another domain-level attribute used by
      the IOMMU driver for pagetable configuration; initially it is used to
      set a quirk that applies the right attributes for caching the
      hardware pagetables in the system cache.
      Signed-off-by: Sharat Masetty <smasetty@codeaurora.org>
      [saiprakash.ranjan: fix to set attr before device attach to iommu and rebase]
      Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
      Signed-off-by: Rob Clark <robdclark@chromium.org>
    • drm/msm/adreno/a6xx_gpu_state: Make some local functions static · 692bdf97
      Committed by Lee Jones
      Fixes the following W=1 kernel build warning(s):
      
       drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:83:7: warning: no previous prototype for ‘state_kcalloc’ [-Wmissing-prototypes]
       drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:95:7: warning: no previous prototype for ‘state_kmemdup’ [-Wmissing-prototypes]
       drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c:947:6: warning: no previous prototype for ‘a6xx_gpu_state_destroy’ [-Wmissing-prototypes]
      
      Cc: Rob Clark <robdclark@gmail.com>
      Cc: Sean Paul <sean@poorly.run>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: linux-arm-msm@vger.kernel.org
      Cc: dri-devel@lists.freedesktop.org
      Cc: freedreno@lists.freedesktop.org
      Signed-off-by: Lee Jones <lee.jones@linaro.org>
      Signed-off-by: Rob Clark <robdclark@chromium.org>
  14. 24 November 2020, 1 commit
  15. 11 November 2020, 2 commits
    • drm/msm/a5xx: Clear shadow on suspend · 5771de5d
      Committed by Rob Clark
      Similar to the previous patch, clear shadow on suspend to avoid timeouts
      waiting for ringbuffer space.
      
      Fixes: 8907afb4 ("drm/msm: Allow a5xx to mark the RPTR shadow as privileged")
      Signed-off-by: Rob Clark <robdclark@chromium.org>
    • drm/msm/a6xx: Clear shadow on suspend · e8b0b994
      Committed by Rob Clark
      Clear the shadow rptr on suspend.  Otherwise, when we resume, we can
      have a stale value until CP_WHERE_AM_I executes.  If we suspend near
      the ringbuffer wraparound point, this can lead to a chicken/egg
      situation where we are waiting for ringbuffer space to write the
      CP_WHERE_AM_I (or CP_INIT) packet, because we mistakenly believe that
      the ringbuffer is full (due to stale rptr value in the shadow).
      
      Fixes errors like:
      
        [drm:adreno_wait_ring [msm]] *ERROR* timeout waiting for space in ringbuffer 0
      
      in the resume path.
      
      Fixes: d3a569fc ("drm/msm: a6xx: Use WHERE_AM_I for eligible targets")
      Signed-off-by: Rob Clark <robdclark@chromium.org>
  16. 05 November 2020, 3 commits