1. 28 Jan 2021, 17 commits
  2. 27 Jan 2021, 5 commits
  3. 26 Jan 2021, 11 commits
  4. 22 Jan 2021, 4 commits
  5. 21 Jan 2021, 1 commit
  6. 14 Jan 2021, 1 commit
  7. 12 Jan 2021, 1 commit
      habanalabs: prevent soft lockup during unmap · 9488307a
      Committed by Oded Gabbay
      When using a deep learning framework such as TensorFlow or PyTorch,
      there can be tens of thousands of host memory mappings. When the user
      frees all of those mappings at the same time, unmapping and unpinning
      them can take a long time, which may trigger a soft lockup bug.
      
      To prevent this, we need to free the core to do other things during
      the unmapping process. For now, we chose to do so every 32K
      unmappings (each unmap covers a single 4K page).
      Signed-off-by: Oded Gabbay <ogabbay@kernel.org>
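      The throttling technique the commit message describes can be sketched
      in user space. This is a hypothetical analog, not the driver's actual
      code: it substitutes POSIX sched_yield() for the kernel's
      cond_resched(), and the function name, page count, and interval are
      illustrative.

      ```c
      #include <assert.h>
      #include <sched.h>
      #include <stdio.h>

      /* Illustrative interval: yield once per 32K unmappings, matching the
       * commit's "every 32K unmappings" (each unmap is a single 4K page). */
      #define YIELD_INTERVAL 0x8000UL

      /* Hypothetical analog of the unmap loop. Returns how many times the
       * CPU was yielded, so the throttling cadence can be checked. */
      static unsigned long unmap_pages(unsigned long npages)
      {
          unsigned long yields = 0;

          for (unsigned long i = 0; i < npages; i++) {
              /* ... unmap and unpin page i here ... */
              if ((i & (YIELD_INTERVAL - 1)) == 0) {
                  sched_yield();  /* kernel code would use cond_resched() */
                  yields++;
              }
          }
          return yields;
      }

      int main(void)
      {
          /* 100000 pages -> yields at i = 0, 32768, 65536, 98304 */
          assert(unmap_pages(100000) == 4);
          printf("done\n");
          return 0;
      }
      ```

      The power-of-two interval lets the check compile to a cheap bitmask
      test, so the common (non-yielding) iterations pay almost nothing.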