1. 20 Mar, 2021 (2 commits)
  2. 19 Nov, 2020 (1 commit)
  3. 09 Nov, 2020 (1 commit)
  4. 06 Nov, 2020 (1 commit)
  5. 05 Nov, 2020 (1 commit)
  6. 25 Sep, 2020 (1 commit)
  7. 10 Sep, 2020 (1 commit)
    •
      drm: etnaviv: fix common struct sg_table related issues · 182354a5
      Authored by Marek Szyprowski
      The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
      returns the number of entries created in the DMA address space.
      However, the subsequent calls to dma_sync_sg_for_{device,cpu}() and
      dma_unmap_sg() must be made with the original number of entries
      passed to dma_map_sg().
      
      struct sg_table is a common structure for describing a non-contiguous
      memory buffer, widely used in the DRM and graphics subsystems. It
      consists of a scatterlist with memory pages and DMA addresses (sgl entry),
      as well as the number of scatterlist entries: CPU pages (orig_nents entry)
      and DMA mapped pages (nents entry).
      
      It turned out that it was a common mistake to misuse the nents and
      orig_nents entries, calling DMA-mapping functions with the wrong number of
      entries or ignoring the number of mapped entries returned by the
      dma_map_sg() function.
      
      To avoid such issues, let's use the common dma-mapping wrappers operating
      directly on the struct sg_table objects and use scatterlist page
      iterators where possible. This almost always hides references to the
      nents and orig_nents entries, making the code robust, easier to follow,
      and copy/paste safe.
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Lucas Stach <l.stach@pengutronix.de>
      182354a5
  8. 09 Sep, 2020 (1 commit)
  9. 18 Jun, 2020 (1 commit)
  10. 10 Jun, 2020 (1 commit)
  11. 20 May, 2020 (1 commit)
  12. 21 Mar, 2020 (1 commit)
    •
      drm/etnaviv: request pages from DMA32 zone when needed · b72af445
      Authored by Lucas Stach
      Some Vivante GPUs are found in systems whose interconnects are restricted
      to 32 address bits, but which may have system memory mapped above the 4GB
      mark. As this region isn't accessible to the GPU via DMA, any GPU memory
      allocated in the upper part needs to go through SWIOTLB bounce buffering.
      This kills performance if it happens too often, and may also overrun the
      available bounce buffer space, as GPU buffers may stay mapped for a long
      time.
      
      Avoid bounce buffering by checking the addressing restrictions. If the
      GPU is unable to access memory above the 4GB mark, request our SHM buffers
      to be located in the DMA32 zone.
      Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
      b72af445
  13. 19 Dec, 2019 (1 commit)
    •
      drm/etnaviv: avoid deprecated timespec · 38c4a4cf
      Authored by Arnd Bergmann
      struct timespec is being removed from the kernel because it often leads
      to code that is not y2038-safe.
      
      In the etnaviv driver, monotonic timestamps are used, which do not suffer
      from overflow, but the usage of timespec here gets in the way of removing
      the interface completely.
      
      Pass down the user-supplied 64-bit value here rather than converting
      it to an intermediate timespec to avoid the conversion.
      
      The conversion is transparent for all regular CLOCK_MONOTONIC values,
      but is a small change in behavior for excessively large values: the
      existing code would treat e.g. tv_sec=0x100000000 the same as tv_sec=0
      and not block, while the new code would block for up to 2^31
      seconds. The new behavior is more logical here, but if it causes problems,
      the truncation can be put back.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      38c4a4cf
  14. 15 Aug, 2019 (5 commits)
    •
      drm/etnaviv: implement softpin · 088880dd
      Authored by Lucas Stach
      With softpin we allow userspace to take control over the GPU virtual
      address space. The new capability is reflected by a bump of the minor DRM
      version. There are a few restrictions for userspace to take into
      account:
      
      1. The kernel reserves a bit of the address space to implement zero page
      faulting and mapping of the kernel internal ring buffer. Userspace can
      query the kernel for the first usable GPU VM address via
      ETNAVIV_PARAM_SOFTPIN_START_ADDR.
      
      2. We only allow softpin on GPUs that implement proper process
      separation via PPAS. If softpin is not available, the softpin start
      address will be set to ~0.
      
      3. Softpin is all or nothing. A submit using softpin must not use any
      address fixups via relocs.
      Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
      Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
      Reviewed-by: Guido Günther <agx@sigxcpu.org>
      088880dd
    •
      drm/etnaviv: allow to request specific virtual address for gem mapping · 17eae23b
      Authored by Lucas Stach
      Allow the mapping code to request a specific virtual address for the gem
      mapping. If the virtual address is zero we fall back to the old mode of
      allocating a virtual address for the mapping.
      Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
      Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
      Reviewed-by: Guido Günther <agx@sigxcpu.org>
      17eae23b
    •
      drm/etnaviv: implement per-process address spaces on MMUv2 · 17e4660a
      Authored by Lucas Stach
      This builds on top of the MMU contexts introduced earlier. Instead of having
      one context per GPU core, each GPU client receives its own context.
      
      On MMUv1 this still means a single shared pagetable set is used by all
      clients, but on MMUv2 there is now a distinct set of pagetables for each
      client. As the command fetch is also translated via the MMU on MMUv2 the
      kernel command ringbuffer is mapped into each of the client pagetables.
      
      As the MMU context switch is a bit of a heavy operation, due to the needed
      cache and TLB flushing, this patch implements a lazy way of switching the
      MMU context. The kernel does not have its own MMU context, but reuses the
      last client context for all of its operations. This has some visible impact,
      as the GPU can now only be started once a client has submitted some work and
      the client MMU context has been assigned. Also, the MMU context has a different
      lifetime than the general client context, as the GPU might still execute the
      kernel command buffer in the context of a client even after the client has
      completed all GPU work and has been terminated. Only when the GPU is runtime
      suspended or switches to another client's MMU context is the old context
      freed up.
      Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
      Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
      Reviewed-by: Guido Günther <agx@sigxcpu.org>
      17e4660a
    •
      drm/etnaviv: provide MMU context to etnaviv_gem_mapping_get · e6364d70
      Authored by Lucas Stach
      In preparation for having a context per process, etnaviv_gem_mapping_get
      should not use the current GPU context, but needs to be told which
      context to use.
      Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
      Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
      Reviewed-by: Guido Günther <agx@sigxcpu.org>
      e6364d70
    •
      drm/etnaviv: rework MMU handling · 27b67278
      Authored by Lucas Stach
      This reworks the MMU handling to make it possible to have multiple MMU contexts.
      A context is basically one instance of GPU page tables. Currently we have one
      set of page tables per GPU, which isn't all that clever, as it has the
      following two consequences:
      
      1. All GPU clients (aka processes) are sharing the same pagetables, which means
      there is no isolation between clients, but only between GPU assigned memory
      spaces and the rest of the system. Better than nothing, but also not great.
      
      2. Clients operating on the same set of buffers with different etnaviv GPU
      cores, e.g. a workload using both the 2D and 3D GPU, need to map the used
      buffers into the pagetable sets of each used GPU.
      
      This patch reworks all the MMU handling to introduce the abstraction of the
      MMU context. A context can be shared across different GPU cores, as long as
      they have compatible MMU implementations, which is the case for all systems
      with Vivante GPUs seen in the wild.
      
      As MMUv1 is not able to change pagetables on the fly without a
      "stop the world" operation, which stops the GPU, changes the pagetables via
      CPU interaction, and restarts the GPU, the implementation introduces a shared
      context on MMUv1, which is returned whenever there is a request for a new
      context.
      
      This patch assigns an MMU context to each GPU, so on MMUv2 systems there is
      still one set of pagetables per GPU, but due to the shared context MMUv1
      systems see a change in behavior as now a single pagetable set is used
      across all GPU cores.
      Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
      Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
      Reviewed-by: Guido Günther <agx@sigxcpu.org>
      27b67278
  15. 13 Aug, 2019 (1 commit)
  16. 09 Aug, 2019 (1 commit)
  17. 03 Aug, 2019 (2 commits)
  18. 26 Jun, 2019 (1 commit)
  19. 17 Apr, 2019 (1 commit)
  20. 19 Feb, 2019 (1 commit)
  21. 12 Dec, 2018 (1 commit)
  22. 06 Aug, 2018 (1 commit)
  23. 18 May, 2018 (1 commit)
  24. 03 Jan, 2018 (5 commits)
  25. 16 Nov, 2017 (1 commit)
  26. 10 Oct, 2017 (1 commit)
  27. 13 Sep, 2017 (1 commit)
  28. 15 Aug, 2017 (1 commit)
  29. 08 Aug, 2017 (1 commit)
  30. 03 Jul, 2017 (1 commit)
    •
      drm/etnaviv: populate GEM objects on cpu_prep · 8cc47b3e
      Authored by Lucas Stach
      CPU prep is the point where we can reasonably return an error to userspace
      when something goes wrong while populating the object. If we leave the
      object unpopulated at this point, the allocation will happen in the
      fault handler when userspace accesses the object through the mmap space,
      where we have no option other than to OOM the system.
      Signed-off-by: Lucas Stach <dev@lynxeye.de>
      8cc47b3e