Committed by Chris Wilson
When the caller maps their dmabuf and we return an sg_table, the caller
doesn't expect the pages beneath that sg_table to vanish on a whim (i.e.
under mempressure). The contract is that the pages are pinned for the
duration of the mapping (from dma_buf_map_attachment() to
dma_buf_unmap_attachment()). To comply, we need to introduce our own
vgem_object.pages_pin_count and elevate it across the mapping. However,
the drm_prime interface we use calls drv->prime_pin on dma_buf_attach
and drv->prime_unpin on dma_buf_detach, which, while it does cover the
mapping, is much broader than desired -- but it will do for now.

v2: also hold the pin across prime_vmap/vunmap

Reported-by: Tomi Sarvela <tomi.p.sarvela@intel.com>
Testcase: igt/gem_concurrent_blit/*swap*vgem*
Fixes: 5ba6c9ff ("drm/vgem: Fix mmaping")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tomi Sarvela <tomi.p.sarvela@intel.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Sean Paul <seanpaul@chromium.org>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: <stable@vger.kernel.org> # needs a backport
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20170622134617.17912-1-chris@chris-wilson.co.uk
71bb23c7
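
For illustration, the following is a minimal sketch of the pin-count scheme
described above: a per-object pages_pin_count that is raised when the dmabuf
is attached and dropped when it is detached, so the backing pages cannot be
reclaimed while a mapping exists. This is not the exact upstream patch; the
struct layout and the vgem_pin_pages()/vgem_unpin_pages()/to_vgem_bo()
helper names are assumptions made for this sketch.

	/*
	 * Sketch only -- names and layout are illustrative, not the
	 * verbatim upstream change.
	 */
	struct drm_vgem_gem_object {
		struct drm_gem_object base;
		struct page **pages;
		unsigned int pages_pin_count;
		struct mutex pages_lock;
	};

	static struct page **vgem_pin_pages(struct drm_vgem_gem_object *bo)
	{
		mutex_lock(&bo->pages_lock);
		if (bo->pages_pin_count++ == 0) {
			struct page **pages;

			/* First pin: populate and pin the shmem backing store. */
			pages = drm_gem_get_pages(&bo->base);
			if (IS_ERR(pages)) {
				bo->pages_pin_count--;
				mutex_unlock(&bo->pages_lock);
				return pages;
			}
			bo->pages = pages;
		}
		mutex_unlock(&bo->pages_lock);

		return bo->pages;
	}

	static void vgem_unpin_pages(struct drm_vgem_gem_object *bo)
	{
		mutex_lock(&bo->pages_lock);
		if (--bo->pages_pin_count == 0) {
			/* Last unpin: release the pages back to shmem. */
			drm_gem_put_pages(&bo->base, bo->pages, true, true);
			bo->pages = NULL;
		}
		mutex_unlock(&bo->pages_lock);
	}

	/* drm_prime hooks: pin on dma_buf_attach, unpin on dma_buf_detach. */
	static int vgem_prime_pin(struct drm_gem_object *obj)
	{
		struct page **pages = vgem_pin_pages(to_vgem_bo(obj));

		return IS_ERR(pages) ? PTR_ERR(pages) : 0;
	}

	static void vgem_prime_unpin(struct drm_gem_object *obj)
	{
		vgem_unpin_pages(to_vgem_bo(obj));
	}

As noted in the message, hooking prime_pin/prime_unpin pins the pages for the
whole attach/detach window rather than just map/unmap, which is broader than
strictly required but satisfies the dma-buf contract.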