1. 27 Oct, 2020 · 7 commits
  2. 17 Oct, 2020 · 1 commit
  3. 02 Oct, 2020 · 5 commits
  4. 30 Sep, 2020 · 5 commits
  5. 25 Sep, 2020 · 1 commit
  6. 23 Sep, 2020 · 1 commit
  7. 19 Sep, 2020 · 5 commits
  8. 18 Sep, 2020 · 7 commits
  9. 11 Sep, 2020 · 1 commit
      RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks() · a665aca8
      Committed by Jason Gunthorpe
      ib_umem_num_pages() should only be used by things working with the SGL in
      CPU pages directly.
      
      Drivers building DMA lists should use the new ib_umem_num_dma_blocks(),
      which returns the number of blocks rdma_umem_for_each_block() will return.
      
      Making this general for DMA drivers requires a different implementation.
      Computing the DMA block count based on umem->address only works if the
      requested page size is < PAGE_SIZE and/or the IOVA == umem->address.
      
      Instead the number of DMA pages should be computed in the IOVA address
      space, not umem->address. Thus the IOVA has to be stored inside the umem
      so it can be used for these calculations.
      
      For now set it to umem->address by default and fix it up if
      ib_umem_find_best_pgsz() was called. This allows drivers to be converted
      to ib_umem_num_dma_blocks() safely.
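
      The IOVA-based counting described above can be sketched as follows. This is
      a minimal standalone illustration, not the kernel's code: align_up()/align_down()
      stand in for the kernel's ALIGN()/ALIGN_DOWN() macros, and num_dma_blocks()
      is a hypothetical helper mirroring what ib_umem_num_dma_blocks() computes.

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdint.h>

      /* Round to a power-of-two boundary, in the spirit of the kernel's
       * ALIGN()/ALIGN_DOWN() macros. pgsz must be a power of two. */
      static uint64_t align_up(uint64_t v, uint64_t pgsz)
      {
              return (v + pgsz - 1) & ~(pgsz - 1);
      }

      static uint64_t align_down(uint64_t v, uint64_t pgsz)
      {
              return v & ~(pgsz - 1);
      }

      /* Number of pgsz-sized DMA blocks covering [iova, iova + length).
       * The count is taken in the IOVA address space, not from a CPU
       * virtual address, which is the point of the change above. */
      static size_t num_dma_blocks(uint64_t iova, uint64_t length, uint64_t pgsz)
      {
              return (size_t)((align_up(iova + length, pgsz) -
                               align_down(iova, pgsz)) / pgsz);
      }

      int main(void)
      {
              /* 0x3000 bytes starting at IOVA 0x11000 fit in one 64KiB block. */
              assert(num_dma_blocks(0x11000, 0x3000, 0x10000) == 1);
              /* The same length straddling a 64KiB boundary needs two blocks. */
              assert(num_dma_blocks(0xF000, 0x3000, 0x10000) == 2);
              /* With 4KiB blocks and an aligned IOVA: exactly length/pgsz. */
              assert(num_dma_blocks(0x10000, 0x3000, 0x1000) == 3);
              return 0;
      }
      ```

      Note how the same length can need a different block count depending only on
      where the IOVA falls relative to block boundaries; a count derived from
      umem->address would get this wrong whenever the IOVA differs from it.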
      
      Link: https://lore.kernel.org/r/6-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
      Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
  10. 10 Sep, 2020 · 7 commits