- 29 Oct 2019, 30 commits
-
-
Committed by Jason Gunthorpe
invalidate_range() also obtains the umem_mutex which is being held at this point, so if this path were ever called it would deadlock. Thus conclude the debugging never triggers and rework it into a simple WARN_ON and leave things as they are. While here add a note to explain how we could possibly get inconsistent page pointers.
Link: https://lore.kernel.org/r/20191009160934.3143-16-jgg@ziepe.ca
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
For creation, as soon as the umem_odp is created the notifier can be called, however the underlying MR may not have been setup yet. This would cause problems if mlx5_ib_invalidate_range() runs. There is some confusing/unlocked/racy code that might be trying to solve this, but without locks it isn't going to work right. Instead trivially solve the problem by short-circuiting the invalidation if there are not yet any DMA mapped pages. By definition there is nothing to invalidate in this case. The create code will have the umem fully setup before anything is DMA mapped, and npages is fully locked by the umem_mutex.

For destroy, invalidate the entire MR at the HW to stop DMA, then DMA unmap the pages before destroying the MR. This drives npages to zero and prevents similar racing with invalidate while the MR is undergoing destruction. Arguably it would be better if the umem was created after the MR and destroyed before, but that would require a big rework of the MR code.
Fixes: 6aec21f6 ("IB/mlx5: Page faults handling infrastructure")
Link: https://lore.kernel.org/r/20191009160934.3143-15-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
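The short-circuit itself is small. A hedged sketch of where the check lands is below; the function and field names (mlx5_ib_invalidate_range, umem_odp->umem_mutex, umem_odp->npages) match the upstream structures, but the body is illustrative rather than the actual diff:

```c
/* Hedged sketch: skip invalidation when nothing has been DMA mapped yet. */
void mlx5_ib_invalidate_range(struct ib_umem_odp *umem_odp,
			      unsigned long start, unsigned long end)
{
	mutex_lock(&umem_odp->umem_mutex);

	/*
	 * Either no pages have been DMA mapped yet (MR still being
	 * created) or they were already unmapped (MR being destroyed):
	 * nothing to invalidate.  npages is protected by umem_mutex.
	 */
	if (!umem_odp->npages)
		goto out;

	/* ... walk [start, end) and zap the mapped MTT entries ... */

out:
	mutex_unlock(&umem_odp->umem_mutex);
}
```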
-
Committed by Jason Gunthorpe
These mkeys are entirely internal and are never used by the HW for page fault. They should also never be used by userspace for prefetch. Simplify & optimize things by not including them in the xarray. Since the prefetch path can now never see a child mkey there is no need for the second synchronize_srcu() during imr destroy.
Link: https://lore.kernel.org/r/20191009160934.3143-14-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
Use SRCU in a sensible way by removing all MRs in the implicit tree from the two xarrays (the update operation), then a synchronize, followed by a normal single-threaded teardown. This is only a little unusual from the normal pattern as there can still be some work pending in the unbound wq that may also require a workqueue flush. This is tracked with a single atomic, consolidating the redundant existing atomics and wait queue. For understandability, the entire ODP implicit create/destroy flow now largely exists in a single pair of functions within odp.c, with a few support functions for tearing down an unused child.
Link: https://lore.kernel.org/r/20191009160934.3143-13-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
Now that the locking is simplified, combine pagefault_implicit_mr() with implicit_mr_get_data() so that we sweep over the idx range only once, and do the single xlt update at the end, after the child umems are setup. This avoids double iteration/xa_loads plus the sketchy failure path if the xa_load() fails.
Link: https://lore.kernel.org/r/20191009160934.3143-12-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
Now that the child MRs are stored in an xarray we can rely on the SRCU lock to protect the xa_load and use xa_cmpxchg on the slow allocation path to resolve races with a concurrent page fault. This reduces the scope of the critical section of umem_mutex for implicit MRs to only cover mlx5_ib_update_xlt, and avoids taking a lock at all if the child MR is already in the xarray. This makes it consistent with the normal ODP MR critical section for umem_lock, and the locking approach used for destroying an unused implicit child MR. The MLX5_IB_UPD_XLT_ATOMIC is no longer needed in implicit_get_child_mr() since it is no longer called with any locks.
Link: https://lore.kernel.org/r/20191009160934.3143-11-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
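A hedged sketch of the resulting fast/slow path is shown here. The xarray field name follows the mlx5 driver, but the helper functions and the simplified error handling are illustrative only, assuming the caller already holds the SRCU read lock:

```c
/* Hedged sketch of the xa_load()/xa_cmpxchg() pattern described above. */
static struct mlx5_ib_mr *get_implicit_child_mr(struct mlx5_ib_mr *imr,
						unsigned long idx)
{
	struct mlx5_ib_mr *mr, *old;

	/* Fast path: no lock beyond the already-held SRCU read side. */
	mr = xa_load(&imr->implicit_children, idx);
	if (mr)
		return mr;

	/* Slow path: allocate with no locks held (illustrative helper). */
	mr = implicit_get_child_mr(imr, idx);
	if (IS_ERR(mr))
		return mr;

	/* Resolve a race with a concurrent page fault on the same index. */
	old = xa_cmpxchg(&imr->implicit_children, idx, NULL, mr, GFP_KERNEL);
	if (old) {
		free_implicit_child_mr(mr);	/* we lost the race */
		return xa_is_err(old) ? ERR_PTR(xa_err(old)) : old;
	}
	return mr;
}
```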
-
Committed by Jason Gunthorpe
Currently the child leaves are stored in the shared interval tree and every lookup for a child must be done under the interval tree rwsem. This is further complicated by dropping the rwsem during iteration (ie the odp_lookup(), odp_next() pattern), which requires a very tricky and difficult-to-understand locking scheme with SRCU. Instead reserve the interval tree for the exclusive use of the mmu notifier related code in umem_odp.c and give each implicit MR an xarray containing all the child MRs. Since the size of each child is 1GB of VA, a 1 level xarray will index 64G of VA, and a 2 level will index 2TB, making xarray a much better data structure choice than an interval tree. The locking properties of xarray will be used in the next patches to rework the implicit ODP locking scheme into something simpler. At this point, the xarray is locked by the implicit MR's umem_mutex, and reads can also be protected by the odp_srcu.
Link: https://lore.kernel.org/r/20191009160934.3143-10-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
The single routine has a very confusing scheme to advance to the next child MR when working on an implicit parent. This scheme can only be used when working with an implicit parent and must not be triggered when working on a normal MR. Re-arrange things by directly putting all the single-MR stuff into one function and calling it in a loop for the implicit case. Simplify some of the error handling in the new pagefault_real_mr() to remove unneeded gotos.
Link: https://lore.kernel.org/r/20191009160934.3143-9-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
Instead of rewriting all the IOVAs to 0 as things progress down the tree, make the IOVA of the children equal to their placement in the tree. This makes things easier to understand by keeping mmkey.iova == HW configuration.
Link: https://lore.kernel.org/r/20191009160934.3143-8-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
This makes the routines easier to understand, particularly with respect to the locking requirements of the entire sequence. The implicit_mr_alloc() had a lot of ifs specializing it to each of the callers, and only a very small amount of code was actually shared. Following patches will cause the flow in the two functions to diverge further.
Link: https://lore.kernel.org/r/20191009160934.3143-7-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
This function is intended to loop across each MTT chunk in the implicit parent that intersects the range [io_virt, io_virt+bcnt). But it has a confusing construction, so:
- Consistently use imr and odp_imr to refer to the implicit parent to avoid confusion with the normal mr and odp of the child
- Directly compute the inclusive start/end indexes by shifting. This is clearer to understand the intent and avoids any errors from unaligned values of addr
- Iterate directly over the range of MTT indexes, do not make a loop out of goto
- Follow 'success oriented flow', with goto error unwind
- Directly calculate the range of idx's that need update_xlt
- Ensure that any leaf MR added to the interval tree always results in an update to the XLT
Link: https://lore.kernel.org/r/20191009160934.3143-6-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
No users are left, delete it.
Link: https://lore.kernel.org/r/20191009160934.3143-5-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
There is a per-device xarray storing mkeys that is used to store every mkey in the system. However, this xarray is now only read by ODP for certain ODP-designated MRs (ODP, implicit ODP, MW, DEVX_INDIRECT). Create an xarray only for use by ODP, that only contains ODP related MKeys. This xarray is protected by SRCU and all erases are protected by a synchronize. This improves performance:
- All MRs in the odp_mkeys xarray are ODP MRs, so some tests for is_odp() can be deleted. The xarray will also consume fewer nodes.
- Normal MRs are never mixed with ODP MRs in an SRCU data structure, so the performance-sucking synchronize_srcu() on every MR destruction is not needed.
- No smp_load_acquire(live) and xa_load() double barrier on read
Due to the SRCU locking scheme care must be taken with the placement of the xa_store(). Once it completes the MR is immediately visible to other threads and only through a xa_erase() & synchronize_srcu() cycle could it be destroyed.
Link: https://lore.kernel.org/r/20191009160934.3143-4-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
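The publish/lookup/retire cycle described above looks roughly like the sketch below. The field names (odp_mkeys, odp_srcu, mmkey) mirror the mlx5 driver loosely and mlx5_base_mkey() is assumed to mask off the mkey's variant bits; the helper functions themselves are illustrative, not the upstream code:

```c
/* Publish: the MR must be fully initialized before this point. */
static int publish_odp_mkey(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
{
	return xa_err(xa_store(&dev->odp_mkeys,
			       mlx5_base_mkey(mr->mmkey.key),
			       &mr->mmkey, GFP_KERNEL));
}

/* Reader (e.g. page fault): caller must srcu_read_unlock() when done. */
static struct mlx5_core_mkey *lookup_odp_mkey(struct mlx5_ib_dev *dev,
					      u32 key, int *srcu_idx)
{
	*srcu_idx = srcu_read_lock(&dev->odp_srcu);
	return xa_load(&dev->odp_mkeys, mlx5_base_mkey(key));
}

/* Retire: after this returns, no SRCU reader can still see the mkey. */
static void retire_odp_mkey(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
{
	xa_erase(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key));
	synchronize_srcu(&dev->odp_srcu);
}
```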
-
Committed by Jason Gunthorpe
The locking model for signature is completely different than ODP, do not share the same xarray that relies on SRCU locking to support ODP. Simply store the active mlx5_core_sig_ctx's in an xarray when signature MRs are created and rely on trivial xarray locking to serialize everything. The overhead of storing only a handful of SIG related MRs is going to be much less than an xarray full of every mkey.
Link: https://lore.kernel.org/r/20191009160934.3143-3-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Jason Gunthorpe
When working with SRCU protected xarrays the xarray itself should be the SRCU 'update' point. Instead prefetch is using live as the SRCU update point and this prevents switching the locking design to use the xarray instead. To solve this the prefetch must only read from the xarray once, and hold on to the actual MR pointer for the duration of the async operation. Incrementing num_pending_prefetch delays destruction of the MR, so it is suitable. Prefetch calls directly to pagefault_mr using the MR pointer and only does a single xarray lookup. All the testing of whether a MR is prefetchable or not is now done only in the prefetch code and removed from the pagefault critical path.
Link: https://lore.kernel.org/r/20191009160934.3143-2-jgg@ziepe.ca
Reviewed-by: Artemy Kovalyov <artemyko@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Bryan Tan
This change allows the RDMA stack to use physical resource numbers if they are passed up from the device. This is accomplished by separating the concept of the QP number from the QP handle. Previously, the two were the same, as the QP number was exposed to the guest and also used to reference a virtual QP in the device backend. With physical resource numbers exposed, the QP number given to the guest is the number assigned from the physical HCA's QP, while the QP handle is still the internal handle used to reference a virtual QP. Regardless of whether the device is exposing physical ids, the driver will still try to pick up the QP handle from the backend if possible. The MR keys exposed to the guest will also be the MR keys created by the physical HCA, instead of virtual MR keys. The distinction between handle and keys is already present for MRs so there is no need to do anything special here. A new version of the create QP response has been added to the device API to pass up the QP number and handle. The driver will report these to userspace in the udata response if userspace supports it, or will not create the queue pair if it does not. The destroy QP code was also refactored so it can be reused if the copy to userspace fails.
Link: https://lore.kernel.org/r/20191028181444.19448-1-aditr@vmware.com
Reviewed-by: Jorgen Hansen <jhansen@vmware.com>
Signed-off-by: Adit Ranadive <aditr@vmware.com>
Signed-off-by: Bryan Tan <bryantan@vmware.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Bart Van Assche
The dma_set_max_seg_size() call in setup_dma_device() does not have any effect since device->dev.dma_parms is NULL. Fix this by initializing device->dev.dma_parms first.
Link: https://lore.kernel.org/r/20191025225830.257535-5-bvanassche@acm.org
Fixes: d10bcf94 ("RDMA/umem: Combine contiguous PAGE_SIZE regions in SGEs")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
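The reason the call was a no-op: dma_set_max_seg_size() only writes through dev->dma_parms and returns -EIO when that pointer is NULL. A minimal, hedged sketch of the required ordering is below; the structure and function names prefixed with my_ are invented for the example, only the core DMA API is real:

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/sizes.h>

struct my_ib_dev {
	struct device dev;
	struct device_dma_parameters dma_parms;	/* backing store for dev.dma_parms */
};

static void my_setup_dma_device(struct my_ib_dev *ibdev)
{
	/*
	 * dma_set_max_seg_size() silently fails (returns -EIO) while
	 * dev.dma_parms is NULL, so wire up the parms structure first.
	 */
	ibdev->dev.dma_parms = &ibdev->dma_parms;
	dma_set_max_seg_size(&ibdev->dev, SZ_2G);
}
```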
-
Committed by Bart Van Assche
Increase the DMA max_segment_size parameter from 64 KB to 2 GB.
Link: https://lore.kernel.org/r/20191025225830.257535-4-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Bart Van Assche
Increase the DMA max_segment_size parameter from 64 KB to 2 GB.
Link: https://lore.kernel.org/r/20191025225830.257535-3-bvanassche@acm.org
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Bernard Metzler
Do not release the qp state lock if it was not previously acquired.
Fixes: cf049bb3 ("RDMA/siw: Fix SQ/RQ drain logic")
Link: https://lore.kernel.org/r/20191025142903.20625-1-bmt@zurich.ibm.com
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Bernard Metzler <bmt@zurich.ibm.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Potnuri Bharat Teja
Query speed/width from the corresponding netdev.
Link: https://lore.kernel.org/r/1572001022-4533-1-git-send-email-bharat@chelsio.com
Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Michal Kalderon
User QP pbls weren't freed properly. MR pbls weren't freed properly.
Fixes: e0290cce ("qedr: Add support for memory registeration verbs")
Link: https://lore.kernel.org/r/20191027200451.28187-5-michal.kalderon@marvell.com
Signed-off-by: Ariel Elior <ariel.elior@marvell.com>
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Michal Kalderon
Re-design the iWARP CM related objects' reference counting and synchronization methods so that operations are synchronized correctly and the memory allocated for "ep" is properly released. Also make sure QP memory is not released before the ep is finished accessing it. Whereas the QP object is created/destroyed by external operations, the ep is created/destroyed by internal operations and represents the TCP connection associated with the QP.

QP destruction flow:
- needs to wait for ep establishment to complete (either successfully or with error)
- needs to wait for ep disconnect to be fully posted to avoid a race condition of disconnect being called after reset
- both of the operations above don't always happen, so we use atomic flags to indicate whether the qp destruction flow needs to wait for these completions or not; if the destroy is called before these operations began, the flows will check the flags and not execute them (connect / disconnect)

We use a completion structure for waiting for the completions mentioned above. The QP refcnt was converted to a kref object. The EP has a kref added to it to handle the additional worker thread accessing it.
Memory Leaks - https://www.spinics.net/lists/linux-rdma/msg83762.html
Concurrency not managed correctly - https://www.spinics.net/lists/linux-rdma/msg67949.html
Fixes: de0089e6 ("RDMA/qedr: Add iWARP connection management qp related callbacks")
Link: https://lore.kernel.org/r/20191027200451.28187-4-michal.kalderon@marvell.com
Reported-by: Chuck Lever <chuck.lever@oracle.com>
Reported-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Ariel Elior <ariel.elior@marvell.com>
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
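A hedged sketch of the kref + completion + flag pattern the commit describes is shown below. All structure, flag, and function names here (demo_*) are invented for illustration; the real qedr objects carry much more state:

```c
#include <linux/kref.h>
#include <linux/completion.h>
#include <linux/bitops.h>
#include <linux/slab.h>

struct demo_qp {
	unsigned long cm_flags;		/* which waits the destroy path needs */
	struct completion cm_done;	/* signalled when connect/disconnect finishes */
};

struct demo_ep {
	struct kref refcnt;
	struct demo_qp *qp;
};

#define DEMO_WAIT_CONNECT	0
#define DEMO_WAIT_DISCONNECT	1

static void demo_free_ep(struct kref *ref)
{
	kfree(container_of(ref, struct demo_ep, refcnt));
}

/* A CM worker thread pins the ep for as long as it touches it (and its QP). */
static void demo_cm_worker(struct demo_ep *ep)
{
	kref_get(&ep->refcnt);
	/* ... process the TCP connection event, may dereference ep->qp ... */
	kref_put(&ep->refcnt, demo_free_ep);
}

/* Destroy path: only wait for flows that actually started. */
static void demo_destroy_qp(struct demo_qp *qp)
{
	if (test_bit(DEMO_WAIT_CONNECT, &qp->cm_flags))
		wait_for_completion(&qp->cm_done);
	if (test_bit(DEMO_WAIT_DISCONNECT, &qp->cm_flags))
		wait_for_completion(&qp->cm_done);
	/* Only now is it safe to release the QP memory the ep was using. */
}
```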
-
Committed by Michal Kalderon
The qpids xarray isn't accessed from IRQ context and therefore there is no need to use the xa_XXX_irq version of the APIs. Remove the _irq.
Fixes: b6014f9e ("qedr: Convert qpidr to XArray")
Link: https://lore.kernel.org/r/20191027200451.28187-3-michal.kalderon@marvell.com
Signed-off-by: Ariel Elior <ariel.elior@marvell.com>
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Michal Kalderon
There was a missing initialization for the srqs xarray. The srqs xarray can also be accessed from IRQ context when searching for an element and uses the xa_XXX_irq APIs, therefore it should be initialized with IRQ flags.
Fixes: 9fd15987 ("qedr: Convert srqidr to XArray")
Link: https://lore.kernel.org/r/20191027200451.28187-2-michal.kalderon@marvell.com
Signed-off-by: Ariel Elior <ariel.elior@marvell.com>
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
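The rule behind both of the qedr xarray patches above: the initialization flags have to match how the xarray is accessed. A hedged, self-contained illustration (demo_ names are invented) follows:

```c
#include <linux/xarray.h>

struct demo_dev {
	struct xarray srqs;	/* looked up from an interrupt handler */
	struct xarray qps;	/* only touched from process context */
};

static void demo_init_tables(struct demo_dev *dev)
{
	/* Accessed with xa_*_irq() helpers, so take the IRQ-safe lock class. */
	xa_init_flags(&dev->srqs, XA_FLAGS_LOCK_IRQ);

	/* Never touched from IRQ context: plain xa_init()/xa_*() is enough. */
	xa_init(&dev->qps);
}
```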
-
Committed by Colin Ian King
Currently, the error return path when the call to function dev->dfx->query_cqc_info fails will leak object 'context'. Fix this by making the error return path via 'err' return error codes rather than -EMSGSIZE, setting ret appropriately for all error return paths, and, for the memory leak, now returning via 'err' rather than just returning without freeing context.
Link: https://lore.kernel.org/r/20191024131034.19989-1-colin.king@canonical.com
Addresses-Coverity: ("Resource leak")
Fixes: e1c9a0dc ("RDMA/hns: Dump detailed driver-specific CQ")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Yangyang Li
The qpc/cqc timer entry size needs one page, but currently it is hard-coded to 4096, which is not appropriate in 64K page scenarios. So it should be changed to PAGE_SIZE.
Fixes: 0e40dc2f ("RDMA/hns: Add timer allocation support for hip08")
Link: https://lore.kernel.org/r/1571908917-16220-3-git-send-email-liweihang@hisilicon.com
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Lijun Ou
SRQ's page size configuration of BA and buffer should depend on the current PAGE_SHIFT, or it can't work in the 64K page scenario.
Fixes: c7bcb134 ("RDMA/hns: Add SRQ support for hip08 kernel mode")
Link: https://lore.kernel.org/r/1571908917-16220-2-git-send-email-liweihang@hisilicon.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Bart Van Assche
Unlike the iSCSI target driver, for the SRP target driver it is sufficient if a single TPG can be associated with each RDMA port name. However, users started associating multiple TPGs with RDMA port names. Support this by converting the single TPG in struct srpt_port_id into a list. This patch fixes the following list corruption issue:

list_add corruption. prev->next should be next (ffffffffc0a080c0), but was ffffa08a994ce6f0. (prev=ffffa08a994ce6f0).
WARNING: CPU: 2 PID: 2597 at lib/list_debug.c:28 __list_add_valid+0x6a/0x70
CPU: 2 PID: 2597 Comm: targetcli Not tainted 5.4.0-rc1.3bfa3c9602a7 #1
RIP: 0010:__list_add_valid+0x6a/0x70
Call Trace:
 core_tpg_register+0x116/0x200 [target_core_mod]
 srpt_make_tpg+0x3f/0x60 [ib_srpt]
 target_fabric_make_tpg+0x41/0x290 [target_core_mod]
 configfs_mkdir+0x158/0x3e0
 vfs_mkdir+0x108/0x1a0
 do_mkdirat+0x77/0xe0
 do_syscall_64+0x55/0x1d0
 entry_SYSCALL_64_after_hwframe+0x44/0xa9

Link: https://lore.kernel.org/r/20191023204106.23326-1-bvanassche@acm.org
Reported-by: Honggang LI <honli@redhat.com>
Fixes: a42d985b ("ib_srpt: Initial SRP Target merge for v3.3-rc1")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: Honggang Li <honli@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Leon Romanovsky
HNS redefined a define that is already available in bits.h and didn't use it; we can safely delete it.
Link: https://lore.kernel.org/r/20191023054239.31648-1-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 28 Oct 2019, 4 commits
-
-
Committed by Jason Gunthorpe
The "ucmd->log_sq_bb_count" variable is a user controlled variable in the 0-255 range. If we shift by more than the number of bits in an int then it's undefined behavior (it shift wraps), and potentially the int could become negative.
Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver")
Link: https://lore.kernel.org/r/20190608092514.GC28890@mwanda
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com>
-
Committed by Leon Romanovsky
Add Mellanox to the list of copyright holders and replace the copyright boilerplate with the relevant SPDX tag.
Link: https://lore.kernel.org/r/20191020071559.9743-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
Committed by Leon Romanovsky
There is a specific 'defined' keyword to check whether a define exists or not; let's use it instead of the open-coded variant.
Link: https://lore.kernel.org/r/20191020071559.9743-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
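A small illustration of the difference; the macro name is made up for the example and this is not the exact hunk from the patch:

```c
/* Before: relies on an undefined macro silently evaluating to 0 */
#if CONFIG_DEMO_FEATURE
/* ... feature-specific code ... */
#endif

/* After: ask the preprocessor explicitly whether the macro is defined */
#if defined(CONFIG_DEMO_FEATURE)
/* ... feature-specific code ... */
#endif
```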
-
Committed by Leon Romanovsky
Function cm_is_active_peer is not used, delete it.
Link: https://lore.kernel.org/r/20191020071559.9743-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
-
- 26 Oct 2019, 4 commits
-
-
Committed by Mike Christie
nbd requires socket families to support the shutdown method so the nbd recv workqueue can be woken up from its sock_recvmsg call. If the socket does not support the callout we will leave recv works running or get hangs later when the device or module is removed. This adds a check during socket connection/reconnection to make sure the socket being passed in supports the needed callout.
Reported-by: syzbot+24c12fa8d218ed26011a@syzkaller.appspotmail.com
Fixes: e9e006f5 ("nbd: fix max number of supported devs")
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Mike Christie <mchristi@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
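A hedged sketch of the check: socket families expose their operations through sock->ops, so the driver can simply refuse sockets whose family does not implement ->shutdown. The demo_ function is illustrative, not the exact nbd hunk:

```c
#include <linux/net.h>
#include <linux/printk.h>
#include <linux/errno.h>

static int demo_check_sock(struct socket *sock)
{
	/* nbd relies on shutdown() to kick recv_work out of sock_recvmsg(). */
	if (!sock->ops || !sock->ops->shutdown) {
		pr_err("Unsupported socket: shutdown callout must be supported.\n");
		return -EINVAL;
	}
	return 0;
}
```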
-
Committed by Mark Brown
This driver is using regulator_get_optional() to handle all the supplies that it handles, and only ever enables and disables all supplies en masse, without ever doing any other configuration of the device to handle missing power. These are clear signs that the API is being misused - it should only be used for supplies that may be physically absent from the system, and in those cases the hardware usually needs different configuration if the supply is missing. Instead use normal regulator_get(); if the supply is not described in DT then the framework will substitute a dummy regulator, so no special handling is needed by the consumer driver. In the case of the PHY regulator, the handling in the driver is a hack to deal with integrated PHYs; the supplies are only optional in the sense that there's some confusion in the code about where they're bound to. From a code point of view they function exactly as normal supplies, so they can be treated as such. It'd probably be better to model this by instantiating a PHY object for integrated PHYs.
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
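The consumer-side difference is small: with plain devm_regulator_get(), a supply that is absent from DT comes back as a dummy regulator, so no -ENODEV special-casing is needed. A hedged sketch, with an invented supply name ("vqmmc") and helper:

```c
#include <linux/device.h>
#include <linux/err.h>
#include <linux/regulator/consumer.h>

static int demo_get_supply(struct device *dev, struct regulator **supply)
{
	/*
	 * regulator_get_optional() would force the driver to cope with
	 * ERR_PTR(-ENODEV) and skip enable/disable for missing supplies.
	 * Plain devm_regulator_get() returns a dummy regulator instead,
	 * so the rest of the driver can treat it like any other supply.
	 */
	*supply = devm_regulator_get(dev, "vqmmc");
	if (IS_ERR(*supply))
		return PTR_ERR(*supply);

	return regulator_enable(*supply);
}
```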
-
Committed by Josef Bacik
We hit the following warning in production:

print_req_error: I/O error, dev nbd0, sector 7213934408 flags 80700
------------[ cut here ]------------
refcount_t: underflow; use-after-free.
WARNING: CPU: 25 PID: 32407 at lib/refcount.c:190 refcount_sub_and_test_checked+0x53/0x60
Workqueue: knbd-recv recv_work [nbd]
RIP: 0010:refcount_sub_and_test_checked+0x53/0x60
Call Trace:
 blk_mq_free_request+0xb7/0xf0
 blk_mq_complete_request+0x62/0xf0
 recv_work+0x29/0xa1 [nbd]
 process_one_work+0x1f5/0x3f0
 worker_thread+0x2d/0x3d0
 ? rescuer_thread+0x340/0x340
 kthread+0x111/0x130
 ? kthread_create_on_node+0x60/0x60
 ret_from_fork+0x1f/0x30
---[ end trace b079c3c67f98bb7c ]---

This was preceded by us timing out everything and shutting down the sockets for the device. The problem is we had a request in the queue at the same time, so we completed the request twice. This can actually happen in a lot of cases: we fail to get a ref on our config, we only have one connection and just error out the command, etc. Fix this by checking cmd->status in nbd_read_stat. We only change this under the cmd->lock, so we are safe to check this here and see if we've already error'ed this command out, which would indicate that we've completed it as well.
Reviewed-by: Mike Christie <mchristi@redhat.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
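A hedged sketch of the double-completion guard described above; the demo_ structure stands in for the real nbd_cmd and the helper is illustrative, not the actual nbd_read_stat() hunk:

```c
#include <linux/blk_types.h>
#include <linux/mutex.h>
#include <linux/err.h>

struct demo_cmd {
	struct mutex lock;
	blk_status_t status;
};

static struct demo_cmd *demo_handle_reply(struct demo_cmd *cmd, bool reply_error)
{
	mutex_lock(&cmd->lock);
	if (cmd->status != BLK_STS_OK) {
		/* Already errored out (e.g. by the timeout path): don't complete twice. */
		mutex_unlock(&cmd->lock);
		return ERR_PTR(-ENOENT);
	}
	if (reply_error)
		cmd->status = BLK_STS_IOERR;
	mutex_unlock(&cmd->lock);
	return cmd;	/* caller completes the request exactly once */
}
```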
-
Committed by Josef Bacik
We already do this for the most part, except in timeout and clear_req. For the timeout case we take the lock after we grab a ref on the config, but that isn't really necessary because we're safe to touch the cmd at this point, so just move the order around. For the clear_req case this is initiated by the user, so again it is safe.
Reviewed-by: Mike Christie <mchristi@redhat.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 25 Oct 2019, 2 commits
-
-
Committed by Alan Mikhak
Modify plic_init() to skip .dts interrupt contexts other than the supervisor external interrupt. The .dts entry for plic may specify multiple interrupt contexts. For example, it may assign two entries IRQ_M_EXT and IRQ_S_EXT, in that order, to the same interrupt controller. This patch modifies plic_init() to skip the IRQ_M_EXT context since IRQ_S_EXT is currently the only supported context. If IRQ_M_EXT is not skipped, plic_init() will report "handler already present for context" when it comes across the IRQ_S_EXT context in the next iteration of its loop. Without this patch, .dts would have to be edited to replace the value of IRQ_M_EXT with -1 for it to be skipped.
Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Walmsley <paul.walmsley@sifive.com> # arch/riscv
Link: https://lkml.kernel.org/r/1571933503-21504-1-git-send-email-alan.mikhak@sifive.com
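A hedged sketch of the context filtering: while iterating the interrupts-extended contexts, anything whose hwirq is not the supervisor external interrupt is skipped. The function wrapper is invented for illustration; of_irq_parse_one() and IRQ_S_EXT are real, but this is not the exact plic_init() diff:

```c
#include <linux/of.h>
#include <linux/of_irq.h>
/* IRQ_S_EXT comes from the RISC-V asm/irq.h headers of this era. */

static void demo_plic_scan_contexts(struct device_node *node, int nr_contexts)
{
	int i;

	for (i = 0; i < nr_contexts; i++) {
		struct of_phandle_args parent;

		if (of_irq_parse_one(node, i, &parent))
			continue;

		/*
		 * Skip contexts other than the supervisor external
		 * interrupt (e.g. IRQ_M_EXT): only IRQ_S_EXT is handled.
		 */
		if (parent.args[0] != IRQ_S_EXT)
			continue;

		/* ... set up the per-hart handler for this context ... */
	}
}
```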
-
Committed by Alain Volmat
Remove the following warning:

drivers/i2c/busses/i2c-stm32f7.c:315: warning: cannot understand function prototype: 'struct stm32f7_i2c_spec i2c_specs[] ='

Replace a comment starting with /** by simply /* to avoid having it interpreted as a kernel-doc comment.
Fixes: aeb068c5 ("i2c: i2c-stm32f7: add driver")
Signed-off-by: Alain Volmat <alain.volmat@st.com>
Reviewed-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
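For illustration, a comment above a data table should use a plain block comment; the double-star opener is reserved for kernel-doc, which then expects a parsable prototype and emits the warning quoted above. The structure and array here are invented stand-ins for the real stm32f7 timing table:

```c
#include <linux/types.h>

/* Illustrative stand-in for the real timing-spec structure. */
struct demo_i2c_spec {
	u32 rate;
};

/*
 * Plain comment: kernel-doc ignores it.  Opening this block with the
 * double-star kernel-doc marker would make scripts/kernel-doc try to
 * parse the array below as a documented prototype and warn.
 */
static const struct demo_i2c_spec demo_i2c_specs[] = {
	{ .rate = 100000 },
	{ .rate = 400000 },
};
```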
-