dma-direct: don't over-decrypt memory
stable inclusion
from stable-v5.10.124
commit 73bc8a5e8e3a902d8cc9b2f42505f647ca48fac2
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5L6E7
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=73bc8a5e8e3a902d8cc9b2f42505f647ca48fac2

--------------------------------

commit 4a37f3dd upstream.

The original x86 sev_alloc() only called set_memory_decrypted() on
memory returned by alloc_pages_node(), so the page order calculation
fell out of that logic. However, the common dma-direct code has several
potential allocators, not all of which are guaranteed to round up the
underlying allocation to a power-of-two size, so carrying over that
calculation for the encryption/decryption size was a mistake. Fix it by
rounding to a *number* of pages, rather than an order.

Until recently there was an even worse interaction with DMA_DIRECT_REMAP
where we could have ended up decrypting part of the next adjacent
vmalloc area, only averted by no architecture actually supporting both
configs at once. Don't ask how I found that one out...

Fixes: c10f07aa ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David Rientjes <rientjes@google.com>
[ backport the functional change without all the prior refactoring ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
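To illustrate the arithmetic the fix changes, here is a minimal user-space C sketch. The PAGE_SHIFT value and the get_order() reimplementation are assumptions for illustration only (the kernel provides its own); the point is that for a 5-page allocation, the order-based calculation rounds up to 8 pages while page-count rounding yields exactly 5, so the old code would decrypt 3 pages it did not own.

```c
#include <stdio.h>

#define PAGE_SHIFT 12                 /* assumed 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* User-space stand-in for the kernel's get_order(): the smallest
 * order such that (1 << order) pages covers size. */
static int get_order(unsigned long size)
{
	int order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	unsigned long size = 5 * PAGE_SIZE;	/* a 20 KiB allocation */

	/* Buggy: rounds the decrypted region up to a power-of-two
	 * page count, which only matches allocators that round the
	 * same way. */
	unsigned long order_pages = 1UL << get_order(size);

	/* Fixed: rounds up to the exact number of pages backing
	 * the allocation. */
	unsigned long exact_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;

	printf("pages decrypted, order-based: %lu\n", order_pages); /* 8 */
	printf("pages decrypted, page-count:  %lu\n", exact_pages); /* 5 */
	return 0;
}
```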