usb: typec: tcpm: Fix AB BA lock inversion between tcpm code and the alt-mode drivers
When we receive a PD data packet which ends up being for the alt-mode driver we have the following lock order:

1. tcpm_pd_rx_handler takes the tcpm-port lock
2. We call into the alt-mode driver which takes the alt-mode's lock

And when the alt-mode driver initiates communication we have the following lock order:

3. The alt-mode driver takes the alt-mode's lock
4. The alt-mode driver calls tcpm_altmode_enter which takes the tcpm-port lock

This is a classic AB BA lock inversion issue.

With the refactoring of tcpm_handle_vdm_request() done before this patch, we no longer rely on, or need to make changes to, the tcpm-port data by the time we make call 2. from above. All data to be passed to the alt-mode driver sits on our stack at this point, and thus does not need locking.

So after the refactoring we can simply fix this by releasing the tcpm-port lock before calling into the alt-mode driver.

This fixes the following lockdep warning:

[ 191.454238] ======================================================
[ 191.454240] WARNING: possible circular locking dependency detected
[ 191.454244] 5.8.0-rc5+ #1 Not tainted
[ 191.454246] ------------------------------------------------------
[ 191.454248] kworker/u8:5/794 is trying to acquire lock:
[ 191.454251] ffff9bac8e30d4a8 (&dp->lock){+.+.}-{3:3}, at: dp_altmode_vdm+0x30/0xf0 [typec_displayport]
[ 191.454263] but task is already holding lock:
[ 191.454264] ffff9bac9dc240a0 (&port->lock#2){+.+.}-{3:3}, at: tcpm_pd_rx_handler+0x43/0x12c0 [tcpm]
[ 191.454273] which lock already depends on the new lock.
[ 191.454275] the existing dependency chain (in reverse order) is:
[ 191.454277] -> #1 (&port->lock#2){+.+.}-{3:3}:
[ 191.454286]        __mutex_lock+0x7b/0x820
[ 191.454290]        tcpm_altmode_enter+0x23/0x90 [tcpm]
[ 191.454293]        dp_altmode_work+0xca/0xe0 [typec_displayport]
[ 191.454299]        process_one_work+0x23f/0x570
[ 191.454302]        worker_thread+0x55/0x3c0
[ 191.454305]        kthread+0x138/0x160
[ 191.454309]        ret_from_fork+0x22/0x30
[ 191.454311] -> #0 (&dp->lock){+.+.}-{3:3}:
[ 191.454317]        __lock_acquire+0x1241/0x2090
[ 191.454320]        lock_acquire+0xa4/0x3d0
[ 191.454323]        __mutex_lock+0x7b/0x820
[ 191.454326]        dp_altmode_vdm+0x30/0xf0 [typec_displayport]
[ 191.454330]        tcpm_pd_rx_handler+0x11ae/0x12c0 [tcpm]
[ 191.454333]        process_one_work+0x23f/0x570
[ 191.454336]        worker_thread+0x55/0x3c0
[ 191.454338]        kthread+0x138/0x160
[ 191.454341]        ret_from_fork+0x22/0x30
[ 191.454343] other info that might help us debug this:
[ 191.454345]  Possible unsafe locking scenario:
[ 191.454347]        CPU0                    CPU1
[ 191.454348]        ----                    ----
[ 191.454350]   lock(&port->lock#2);
[ 191.454353]                                lock(&dp->lock);
[ 191.454355]                                lock(&port->lock#2);
[ 191.454357]   lock(&dp->lock);
[ 191.454360]  *** DEADLOCK ***

Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://lore.kernel.org/r/20200724174702.61754-5-hdegoede@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
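
For illustration only, here is a minimal userspace sketch of the locking pattern described above, using pthread mutexes rather than the real tcpm/typec_displayport code. The names port_lock, altmode_lock, rx_handler and handle_vdm are hypothetical stand-ins, not the actual kernel symbols; the point is that the message is copied to the stack under the port lock, the port lock is dropped, and only then is the alt-mode side (which takes its own lock) called.

/*
 * Sketch of the "copy to stack, drop port lock, then call out" pattern.
 * Not the actual kernel patch; names are illustrative.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t port_lock = PTHREAD_MUTEX_INITIALIZER;    /* plays the role of the tcpm port lock */
static pthread_mutex_t altmode_lock = PTHREAD_MUTEX_INITIALIZER; /* plays the role of dp->lock           */

static uint32_t rx_payload[7];  /* pretend PD message payload, protected by port_lock */
static int rx_cnt;

/* Alt-mode side: takes only its own lock, like dp_altmode_vdm(). */
static void handle_vdm(const uint32_t *vdo, int cnt)
{
	pthread_mutex_lock(&altmode_lock);
	printf("alt-mode handled %d VDOs, header 0x%08x\n",
	       cnt, cnt ? (unsigned int)vdo[0] : 0u);
	pthread_mutex_unlock(&altmode_lock);
}

/* Port side: mirrors the fixed rx-handler flow. */
static void rx_handler(void)
{
	uint32_t vdo[7];
	int cnt;

	pthread_mutex_lock(&port_lock);
	/* Everything the alt-mode driver needs is copied to the stack... */
	cnt = rx_cnt;
	memcpy(vdo, rx_payload, sizeof(vdo));
	/*
	 * ...so the port lock can be released before calling out, which
	 * breaks the port_lock -> altmode_lock vs altmode_lock -> port_lock
	 * AB BA cycle.
	 */
	pthread_mutex_unlock(&port_lock);

	handle_vdm(vdo, cnt);
}

int main(void)
{
	rx_payload[0] = 0xff008141;	/* arbitrary example VDM header */
	rx_cnt = 1;
	rx_handler();
	return 0;
}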