From 4fc701ead77ede96df3e8b3de13fdf2b1326ee5b Mon Sep 17 00:00:00 2001
From: Haggai Eran
Date: Tue, 6 Jan 2015 13:56:02 +0200
Subject: [PATCH] IB/core: Properly handle registration of on-demand paging
 MRs after dereg

When the last on-demand paging MR is released the notifier count is
left non-zero so that concurrent page faults will have to abort. If a
new MR is then registered, the counter is reset. However, the decision
to put the new MR on the list waiting for the notifier count to reach
zero is made before the counter is reset. An invalidation or another MR
registration can release the MR to handle page faults, but without such
an event the MR can wait forever.

The patch fixes this issue by adding a check of whether the MR is the
first on-demand paging MR when deciding whether it is ready to handle
page faults. If it is the first MR, we know that there are no mmu
notifiers running in parallel to the registration.

Fixes: 882214e2b128 ("IB/core: Implement support for MMU notifiers regarding on demand paging regions")
Signed-off-by: Haggai Eran
Signed-off-by: Shachar Raindel
Signed-off-by: Roland Dreier
---
 drivers/infiniband/core/umem_odp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 6095872549e7..8b8cc6fa0ab0 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -294,7 +294,8 @@ int ib_umem_odp_get(struct ib_ucontext *context, struct ib_umem *umem)
 	if (likely(ib_umem_start(umem) != ib_umem_end(umem)))
 		rbt_ib_umem_insert(&umem->odp_data->interval_tree,
 				   &context->umem_tree);
-	if (likely(!atomic_read(&context->notifier_count)))
+	if (likely(!atomic_read(&context->notifier_count)) ||
+	    context->odp_mrs_count == 1)
 		umem->odp_data->mn_counters_active = true;
 	else
 		list_add(&umem->odp_data->no_private_counters,
--
GitLab
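
The decision being changed here can be modeled outside the kernel. Below
is a minimal, self-contained C sketch of the logic, not the kernel
implementation: the plain int counters and the helper name
mr_ready_for_faults are illustrative assumptions (in the kernel,
notifier_count is an atomic_t and the check is open-coded in
ib_umem_odp_get()).

#include <stdbool.h>
#include <stdio.h>

struct ctx {
	int notifier_count; /* running mmu notifiers (atomic_t in the kernel) */
	int odp_mrs_count;  /* registered ODP MRs; incremented before the check */
};

/*
 * A newly registered MR may serve page faults right away if no notifier
 * is running, or if it is the very first ODP MR: odp_mrs_count has
 * already been incremented for it, so a value of 1 proves no other ODP
 * MR exists and therefore no notifier can be running against one.
 */
static bool mr_ready_for_faults(const struct ctx *c)
{
	return c->notifier_count == 0 || c->odp_mrs_count == 1;
}

int main(void)
{
	/* Stale state after the last MR was released mid-invalidation: */
	struct ctx c = { .notifier_count = 1, .odp_mrs_count = 0 };

	c.odp_mrs_count++; /* register a new, first MR */
	printf("first MR ready: %s\n",
	       mr_ready_for_faults(&c) ? "yes" : "no");
	return 0;
}

Without the odp_mrs_count == 1 test, the stale notifier_count of 1 would
park this first MR on the no_private_counters list with no further event
to release it, which is exactly the indefinite wait the patch describes.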