nwfilter: resolve deadlock between VM ops and filter update
This is from a bug report and conversation on IRC, where Soren reported that
while a filter update is occurring on one or more VMs (because a rule was
edited, for example), a deadlock can occur when a VM referencing a filter is
started.

The problem is caused by the two locking sequences

    qemu_driver, qemu_domain, filter      # for the VM start operation
    filter, qemu_driver, qemu_domain      # for the filter update operation

that obviously don't take the locks in the same order. The problem is the
2nd lock sequence: here the qemu_driver lock is being grabbed in
qemu_driver:qemudVMFilterRebuild().

The following solution is based on the idea of re-arranging the 2nd sequence
of locks as

    qemu_driver, filter, qemu_driver, qemu_domain

and making the qemu driver recursively lockable so that the second
qemu_driver lock can be taken. This leads to the net locking sequence

    qemu_driver, filter, qemu_domain

where the 2nd qemu_driver lock has been (logically) eliminated.

The 2nd part of the idea is that the lock sequences (filter, qemu_domain)
and (qemu_domain, filter) become interchangeable if all code paths that lock
the filter AND the qemu_domain take a preceding qemu_driver lock that blocks
their concurrent execution.

So, the following code paths exist towards qemu_driver:qemudVMFilterRebuild(),
where we now want to put a qemu_driver lock in front of the filter lock:

-> nwfilterUndefine()   [ locks the filter ]
   -> virNWFilterTestUnassignDef()
      -> virNWFilterTriggerVMFilterRebuild()
         -> qemudVMFilterRebuild()

-> nwfilterDefine()
   -> virNWFilterPoolAssignDef()   [ locks the filter ]
      -> virNWFilterTriggerVMFilterRebuild()
         -> qemudVMFilterRebuild()

-> nwfilterDriverReload()
   -> virNWFilterPoolLoadAllConfigs()
      -> virNWFilterPoolObjLoad()
         -> virNWFilterPoolAssignDef()   [ locks the filter ]
            -> virNWFilterTriggerVMFilterRebuild()
               -> qemudVMFilterRebuild()

-> nwfilterDriverStartup()
   -> virNWFilterPoolLoadAllConfigs()
      -> virNWFilterPoolObjLoad()
         -> virNWFilterPoolAssignDef()   [ locks the filter ]
            -> virNWFilterTriggerVMFilterRebuild()
               -> qemudVMFilterRebuild()

QEMU is not the only driver using the nwfilter driver; the UML driver calls
into it as well. In the paths above, qemudVMFilterRebuild() can therefore be
exchanged with umlVMFilterRebuild(), and the qemu_driver lock becomes a
uml_driver lock. Further, since UML and QEMU domains can be running on the
same machine, triggering a rebuild of a filter can touch both types of
drivers and their domains.

In the patch below I am therefore extending each nwfilter callback driver
with functions for locking and unlocking the (VM) driver (UML, QEMU) and
introducing new functions for locking all registered callback drivers and
for unlocking them (sketched below). I am then distributing the
lock-all-cbdrivers/unlock-all-cbdrivers calls into the above call paths.

The last call path, starting with nwfilterDriverStartup(), is problematic
since the nwfilter driver is initialized before the QEMU and UML drivers
are, so a lock in that path would mean attempting to lock a NULL pointer --
but virNWFilterTriggerVMFilterRebuild() never ends up being called on that
path, so we never lock either the qemu_driver or the uml_driver there.
Therefore, only the first 3 paths now receive calls to lock and unlock all
callback drivers.

Now that the locks are distributed where they matter, I can remove the
qemu_driver and uml_driver locks from qemudVMFilterRebuild() and
umlVMFilterRebuild() and no longer need the recursive locks.

For now I want to put this out as an RFC patch.
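To make the callback-driver extension concrete, here is a minimal,
self-contained sketch of what it could look like. The identifiers follow
libvirt naming conventions but are illustrative assumptions, not
necessarily the names used in the actual patch:

/* Illustrative sketch only -- identifiers are assumed, not taken
 * verbatim from the patch. */

typedef void (*virNWFilterVoidCall)(void);

typedef struct _virNWFilterCallbackDriver {
    const char *name;
    virNWFilterVoidCall vmDriverLock;     /* locks the VM driver, e.g. qemu_driver */
    virNWFilterVoidCall vmDriverUnlock;   /* unlocks it again */
} virNWFilterCallbackDriver;

#define MAX_CALLBACK_DRIVER 10
static virNWFilterCallbackDriver *callbackDrvArray[MAX_CALLBACK_DRIVER];
static int nCallbackDriver;

void virNWFilterRegisterCallbackDriver(virNWFilterCallbackDriver *cbd)
{
    if (nCallbackDriver < MAX_CALLBACK_DRIVER)
        callbackDrvArray[nCallbackDriver++] = cbd;
}

/* Take every registered VM driver lock (QEMU, UML, ...) BEFORE the filter
 * lock is grabbed, giving the net sequence qemu_driver, filter, qemu_domain. */
void virNWFilterCallbackDriversLock(void)
{
    int i;
    for (i = 0; i < nCallbackDriver; i++)
        callbackDrvArray[i]->vmDriverLock();
}

void virNWFilterCallbackDriversUnlock(void)
{
    int i;
    for (i = nCallbackDriver - 1; i >= 0; i--)
        callbackDrvArray[i]->vmDriverUnlock();
}

Each VM driver would register its lock/unlock functions at startup, and a
path such as nwfilterUndefine() would call virNWFilterCallbackDriversLock()
before taking the filter lock, yielding the net ordering described above.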
I have tested this by 'stretching' the critical section after the
define/undefine functions lock the filter, so that I can (easily) execute
another VM operation (suspend, start) concurrently. That test code is in
this patch; you can de-activate it if you want. It seems to work fine, and
operations are blocked while the update is being done. A stand-alone
illustration of this test approach follows below. I still also want to
verify the other assumption above, namely that locking the filter and the
qemu_domain is always preceded by a qemu_driver lock.
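The following small demo program (all names invented for the demo, not
taken from the patch) shows the idea: one thread holds the 'filter lock'
for a few seconds, stretching the critical section, while a second thread
performing the 'VM operation' blocks on the same lock and then proceeds
instead of deadlocking:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t filter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *filter_update(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&filter_lock);
    printf("filter update: holding filter lock (stretched)\n");
    sleep(3);                         /* stretch the critical section */
    pthread_mutex_unlock(&filter_lock);
    printf("filter update: released filter lock\n");
    return NULL;
}

static void *vm_start(void *arg)
{
    (void)arg;
    sleep(1);                         /* let the update grab the lock first */
    printf("vm start: waiting for filter lock...\n");
    pthread_mutex_lock(&filter_lock); /* blocks, but does not deadlock */
    printf("vm start: got filter lock, proceeding\n");
    pthread_mutex_unlock(&filter_lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, filter_update, NULL);
    pthread_create(&t2, NULL, vm_start, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}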