- 27 December 2019, 40 commits
-
Submitted by Huazhong Tan

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

This patch adds more information for reset DFX. It also cleans up the reset info: reset_fail_cnt is moved into struct hclge_rst_stats, and some print formats are adjusted.

Feature or Bugfix: Bugfix
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: shenjian <shenjian15@huawei.com>
Reviewed-by: linyunsheng <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Huazhong Tan

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Since hclge_get_reset_level() may return HNAE3_NONE_RESET, hdev->reset_level must not be assigned the return value unconditionally in hclge_reset(); otherwise, the later use of hdev->reset_level in hclge_reset_event() goes wrong.

Fixes: 012fcb52f67c ("net: hns3: activate reset timer when calling reset_event")
Feature or Bugfix: Bugfix
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: linyunsheng <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
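For illustration, a minimal user-space sketch of the guard described above: the cached level is only overwritten when the helper reports an actual pending reset. The enum values and helper names are stand-ins, not the driver's identifiers.

#include <stdio.h>

/* Illustrative stand-ins for the driver's reset levels (names are assumptions). */
enum reset_level { RESET_NONE = 0, RESET_FUNC, RESET_GLOBAL };

static enum reset_level cached_level = RESET_FUNC;

/* Pretend the hardware currently reports nothing pending. */
static enum reset_level get_pending_reset_level(void)
{
    return RESET_NONE;
}

int main(void)
{
    enum reset_level level = get_pending_reset_level();

    /* The fix: never overwrite the cached level with "none", so a later
     * reset event still sees a meaningful value. */
    if (level != RESET_NONE)
        cached_level = level;

    printf("cached level stays %d\n", cached_level);
    return 0;
}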
-
Submitted by Linus Walleij

mainline inclusion
from mainline-v5.3-rc6
commit 48057ed1840f
category: bugfix
bugzilla: 20768
CVE: NA

-------------------------------------------------

The new API for registering a gpio_irq_chip along with a gpio_chip has a different semantic ordering than the old API, which added the irqchip explicitly after registering the gpio_chip. Move the calls that add the gpio_irq_chip *last* in the function, so that the hooks setting up OF, ACPI and machine gpio_chips are called *before* we try to register the interrupts, preserving the older semantic order.

This cropped up in the PL061 driver, which used to work fine with no special ACPI quirks but started to misbehave using the new API.

Fixes: e0d89728 ("gpio: Implement tighter IRQ chip integration")
Cc: Thierry Reding <treding@nvidia.com>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Reported-by: Wei Xu <xuwei5@hisilicon.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Reported-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/20190820080527.11796-1-linus.walleij@linaro.org
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Geert Uytterhoeven

mainline inclusion
from mainline-v5.3-rc1
commit f99d479b
category: bugfix
bugzilla: 14225
CVE: NA

-------------------------------------------------

A new field init_valid_mask was added to struct gpio_chip, but it was not documented.

Fixes: f8ec92a9 ("gpiolib: Add init_valid_mask exported function")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20190701142650.25122-1-geert+renesas@glider.be
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Chris Packham

mainline inclusion
from mainline-v5.3-rc3
commit d95da993383c
category: bugfix
bugzilla: 14225
CVE: NA

-------------------------------------------------

desc->flags may already have values set by of_gpiochip_add(), so make sure that this isn't undone when setting the initial direction.

Cc: stable@vger.kernel.org
Fixes: 3edfb7bd76bd1cba ("gpiolib: Show correct direction from the beginning")
Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Link: https://lore.kernel.org/r/20190707203558.10993-1-chris.packham@alliedtelesis.co.nz
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
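The class of bug fixed here is overwriting a flags word that another init stage has already partially populated. A small user-space sketch of the difference between assigning and OR-ing the new bit (the flag names are invented for the example, not gpiolib's):

#include <stdio.h>

#define FLAG_ACTIVE_LOW  (1u << 0)  /* hypothetical bit set by an earlier stage */
#define FLAG_IS_OUT      (1u << 1)  /* hypothetical direction bit set during init */

int main(void)
{
    unsigned int flags = FLAG_ACTIVE_LOW;   /* already set before direction init */

    /* Buggy pattern: plain assignment silently drops FLAG_ACTIVE_LOW. */
    unsigned int buggy = FLAG_IS_OUT;

    /* Fixed pattern: OR in the new bit so earlier state survives. */
    flags |= FLAG_IS_OUT;

    printf("buggy=0x%x fixed=0x%x\n", buggy, flags);
    return 0;
}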
-
Submitted by Ricardo Ribalda Delgado

mainline inclusion
from mainline-v4.20-rc1
commit 767cd17a5cc5
category: bugfix
bugzilla: 14225
CVE: NA

-------------------------------------------------

gpio_hog depends on the gdev field being initialized. This patch fixes an oops during initialization of TI's AM335x-ICEv2.

Fixes: 3edfb7bd76bd1cba ("gpiolib: Show correct direction from the beginning")
Tested-by: Vignesh R <vigneshr@ti.com>
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Geert Uytterhoeven

mainline inclusion
from mainline-v5.1-rc7
commit 357798909164
category: bugfix
bugzilla: 14225
CVE: NA

-------------------------------------------------

The err_remove_chip block is too coarse, and may perform cleanup that must not be done. E.g. if of_gpiochip_add() fails, of_gpiochip_remove() is still called, causing:

OF: ERROR: Bad of_node_put() on /soc/gpio@e6050000
CPU: 1 PID: 20 Comm: kworker/1:1 Not tainted 5.1.0-rc2-koelsch+ #407
Hardware name: Generic R-Car Gen2 (Flattened Device Tree)
Workqueue: events deferred_probe_work_func
[<c020ec74>] (unwind_backtrace) from [<c020ae58>] (show_stack+0x10/0x14)
[<c020ae58>] (show_stack) from [<c07c1224>] (dump_stack+0x7c/0x9c)
[<c07c1224>] (dump_stack) from [<c07c5a80>] (kobject_put+0x94/0xbc)
[<c07c5a80>] (kobject_put) from [<c0470420>] (gpiochip_add_data_with_key+0x8d8/0xa3c)
[<c0470420>] (gpiochip_add_data_with_key) from [<c0473738>] (gpio_rcar_probe+0x1d4/0x314)
[<c0473738>] (gpio_rcar_probe) from [<c052fca8>] (platform_drv_probe+0x48/0x94)

and later, if a GPIO consumer tries to use a GPIO from a failed controller:

WARNING: CPU: 0 PID: 1 at lib/refcount.c:156 kobject_get+0x38/0x4c
refcount_t: increment on 0; use-after-free.
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.1.0-rc2-koelsch+ #407
Hardware name: Generic R-Car Gen2 (Flattened Device Tree)
[<c020ec74>] (unwind_backtrace) from [<c020ae58>] (show_stack+0x10/0x14)
[<c020ae58>] (show_stack) from [<c07c1224>] (dump_stack+0x7c/0x9c)
[<c07c1224>] (dump_stack) from [<c0221580>] (__warn+0xd0/0xec)
[<c0221580>] (__warn) from [<c02215e0>] (warn_slowpath_fmt+0x44/0x6c)
[<c02215e0>] (warn_slowpath_fmt) from [<c07c58fc>] (kobject_get+0x38/0x4c)
[<c07c58fc>] (kobject_get) from [<c068b3ec>] (of_node_get+0x14/0x1c)
[<c068b3ec>] (of_node_get) from [<c0686f24>] (of_find_node_by_phandle+0xc0/0xf0)
[<c0686f24>] (of_find_node_by_phandle) from [<c0686fbc>] (of_phandle_iterator_next+0x68/0x154)
[<c0686fbc>] (of_phandle_iterator_next) from [<c0687fe4>] (__of_parse_phandle_with_args+0x40/0xd0)
[<c0687fe4>] (__of_parse_phandle_with_args) from [<c0688204>] (of_parse_phandle_with_args_map+0x100/0x3ac)
[<c0688204>] (of_parse_phandle_with_args_map) from [<c0471240>] (of_get_named_gpiod_flags+0x38/0x380)
[<c0471240>] (of_get_named_gpiod_flags) from [<c046f864>] (gpiod_get_from_of_node+0x24/0xd8)
[<c046f864>] (gpiod_get_from_of_node) from [<c0470aa4>] (devm_fwnode_get_index_gpiod_from_child+0xa0/0x144)
[<c0470aa4>] (devm_fwnode_get_index_gpiod_from_child) from [<c05f425c>] (gpio_keys_probe+0x418/0x7bc)
[<c05f425c>] (gpio_keys_probe) from [<c052fca8>] (platform_drv_probe+0x48/0x94)

Fix this by splitting the cleanup block, and adding a missing call to gpiochip_irqchip_remove().

Fixes: 28355f81 ("gpio: defer probe if pinctrl cannot be found")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
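The underlying pattern is fine-grained error unwinding: each label only undoes the steps that actually completed, so a failure in step C never runs C's teardown. A self-contained user-space sketch of that pattern (all names are illustrative, this is not the gpiolib code):

#include <stdio.h>
#include <stdlib.h>

static int setup_a(void) { puts("A up");    return 0; }
static int setup_b(void) { puts("B up");    return 0; }
static int setup_c(void) { puts("C fails"); return -1; }
static void undo_a(void) { puts("A down"); }
static void undo_b(void) { puts("B down"); }

static int probe(void)
{
    int err;

    err = setup_a();
    if (err)
        return err;

    err = setup_b();
    if (err)
        goto err_undo_a;

    err = setup_c();
    if (err)
        goto err_undo_b;   /* do NOT undo C: it never completed */

    return 0;

err_undo_b:
    undo_b();
err_undo_a:
    undo_a();
    return err;
}

int main(void)
{
    return probe() ? EXIT_FAILURE : EXIT_SUCCESS;
}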
-
Submitted by Ricardo Ribalda Delgado

mainline inclusion
from mainline-v4.20-rc1
commit 3edfb7bd76bd1cba6b917736943dffd799deed8a
category: bugfix
bugzilla: 14225
CVE: NA

-------------------------------------------------

The current code assumes that the direction is input if the direction_input function is set. This might not be the case on GPIOs with programmable direction.

Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Tested-by: Jeffrey Hugo <jhugo@codeaurora.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Ricardo Ribalda Delgado

mainline inclusion
from mainline-v4.20-rc1
commit 6f0ec09afe278ea2a5d63f31b754db6655c69078
category: bugfix
bugzilla: 14225
CVE: NA

-------------------------------------------------

The current code produces an XPU violation if get_direction is called just after the initialization.

Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Acked-by: Timur Tabi <timur@kernel.org>
Tested-by: Jeffrey Hugo <jhugo@codeaurora.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Ricardo Ribalda Delgado

mainline inclusion
from mainline-v4.20-rc1
commit f8ec92a9
category: bugfix
bugzilla: 14225
CVE: NA

-------------------------------------------------

Add a function that allows initializing the valid_mask from gpiochip_add_data. This prevents race conditions during gpiochip initialization. If the function is not exported, then the old behaviour is respected, that is, all gpios are set as valid.

Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Tested-by: Jeffrey Hugo <jhugo@codeaurora.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Muchun Song

mainline inclusion
from mainline-v5.3-rc4
commit ac43432cb1f5
category: bugfix
bugzilla: 20443
CVE: NA

-------------------------------------------------

There is a race condition between removing a glue directory and adding a new device under the glue dir. It can be reproduced in the following test:

CPU1:                                         CPU2:

device_add()
  get_device_parent()
    class_dir_create_and_add()
      kobject_add_internal()
        create_dir()    // create glue_dir

                                              device_add()
                                                get_device_parent()
                                                  kobject_get() // get glue_dir

device_del()
  cleanup_glue_dir()
    kobject_del(glue_dir)

                                              kobject_add()
                                                kobject_add_internal()
                                                  create_dir() // in glue_dir
                                                    sysfs_create_dir_ns()
                                                      kernfs_create_dir_ns(sd)

      sysfs_remove_dir() // glue_dir->sd=NULL
      sysfs_put()        // free glue_dir->sd

                                                      // sd is freed
                                                      kernfs_new_node(sd)
                                                        kernfs_get(glue_dir)
                                                        kernfs_add_one()
                                                        kernfs_put()

Before CPU1 removes the last child device under the glue dir, if CPU2 adds a new device under the glue dir, the glue_dir kobject reference count will be increased to 2 via kobject_get() in get_device_parent(). And CPU2 has called kernfs_create_dir_ns(), but not kernfs_new_node() yet. Meanwhile, CPU1 calls sysfs_remove_dir() and sysfs_put(). This results in glue_dir->sd being freed and its reference count dropping to 0. Then the kernfs_get(glue_dir) call on CPU2 will trigger a warning in kernfs_get() and increase the reference count to 1. Because glue_dir->sd was freed by CPU1, the next call to kernfs_add_one() by CPU2 will fail (this is also a use-after-free) and call kernfs_put() to decrease the reference count. Because the reference count is decremented to 0, it will also call kmem_cache_free() to free glue_dir->sd again. This results in a double free.

In order to avoid this, we should also make sure that the kernfs_node for glue_dir is released on CPU1 only when the refcount for the glue_dir kobj is 1, to fix this race.

The following calltrace is captured in kernel 4.14 with the following patch applied:

commit 726e4109 ("drivers: core: Remove glue dirs from sysfs earlier")

--------------------------------------------------------------------------
[  3.633703] WARNING: CPU: 4 PID: 513 at .../fs/kernfs/dir.c:494
             Here is WARN_ON(!atomic_read(&kn->count)) in kernfs_get().
....
[  3.633986] Call trace:
[  3.633991]  kernfs_create_dir_ns+0xa8/0xb0
[  3.633994]  sysfs_create_dir_ns+0x54/0xe8
[  3.634001]  kobject_add_internal+0x22c/0x3f0
[  3.634005]  kobject_add+0xe4/0x118
[  3.634011]  device_add+0x200/0x870
[  3.634017]  _request_firmware+0x958/0xc38
[  3.634020]  request_firmware_into_buf+0x4c/0x70
....
[  3.634064] kernel BUG at .../mm/slub.c:294!
             Here is BUG_ON(object == fp) in set_freepointer().
....
[  3.634346] Call trace:
[  3.634351]  kmem_cache_free+0x504/0x6b8
[  3.634355]  kernfs_put+0x14c/0x1d8
[  3.634359]  kernfs_create_dir_ns+0x88/0xb0
[  3.634362]  sysfs_create_dir_ns+0x54/0xe8
[  3.634366]  kobject_add_internal+0x22c/0x3f0
[  3.634370]  kobject_add+0xe4/0x118
[  3.634374]  device_add+0x200/0x870
[  3.634378]  _request_firmware+0x958/0xc38
[  3.634381]  request_firmware_into_buf+0x4c/0x70
--------------------------------------------------------------------------

Fixes: 726e4109 ("drivers: core: Remove glue dirs from sysfs earlier")
Signed-off-by: Muchun Song <smuchun@gmail.com>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Signed-off-by: Prateek Sood <prsood@codeaurora.org>
Link: https://lore.kernel.org/r/20190727032122.24639-1-smuchun@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Yangyang Li

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Because OFED does not guarantee the execution order of hns_roce_vma_close() and hns_roce_dealloc_ucontext(), the driver needs to increase the reference count of the "context" so that the "context" is not released when hns_roce_dealloc_ucontext() runs first; it is only released once the vma that depends on it has been completely released.

Feature or Bugfix: Bugfix
Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Reviewed-by: wangxi <wangxi11@huawei.com>
Reviewed-by: oulijun <oulijun@huawei.com>
Reviewed-by: liuyixian <liuyixian@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Hao Fang

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Feature or Bugfix: Bugfix
Signed-off-by: Hao Fang <fanghao11@huawei.com>
Reviewed-by: wangzhou <wangzhou1@hisilicon.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by tanshukun

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

The design goal of the managed resource API (the devm_ helpers) is to avoid calling unmap, free, etc. by hand. It also avoids concurrency problems when resetting.

Feature or Bugfix: Bugfix
Signed-off-by: tanshukun (A) <tanshukun1@huawei.com>
Reviewed-by: wangzhou <wangzhou1@hisilicon.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
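To illustrate the idea behind managed resources, here is a tiny user-space analogue: every allocation registers a release action on the "device", and tearing the device down runs the actions in reverse order, so the driver code never calls free by hand. This is purely illustrative and is not the kernel devm_ API.

#include <stdio.h>
#include <stdlib.h>

#define MAX_ACTIONS 8

struct device {
    void (*release[MAX_ACTIONS])(void *);
    void *data[MAX_ACTIONS];
    int nr;
};

static void *devm_alloc(struct device *dev, size_t size)
{
    void *p;

    if (dev->nr == MAX_ACTIONS)
        return NULL;
    p = calloc(1, size);
    if (!p)
        return NULL;
    dev->release[dev->nr] = free;    /* remembered for teardown */
    dev->data[dev->nr++] = p;
    return p;
}

static void device_teardown(struct device *dev)
{
    while (dev->nr--)
        dev->release[dev->nr](dev->data[dev->nr]);   /* reverse order */
}

int main(void)
{
    struct device dev = { .nr = 0 };
    char *buf = devm_alloc(&dev, 64);

    snprintf(buf, 64, "managed buffer");
    printf("%s\n", buf);

    device_teardown(&dev);   /* buf freed here, no explicit free() in the "driver" */
    return 0;
}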
-
Submitted by Guangbin Huang

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

This patch deletes an unnecessary blank line and adds {} to some else-if branches whose first if branch has {}, as a cleanup.

Feature or Bugfix: Bugfix
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Reviewed-by: shenjian <shenjian15@huawei.com>
Reviewed-by: linyunsheng <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Jian Shen

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

In the original code, the VF index is used incorrectly in hclge_set_vlan_tx_offload_cfg() and hclge_set_vlan_rx_offload_cfg(). When the VF id is greater than 8, for example 9, it sets the same bit as VF id 1. This patch fixes it by using vport->vport_id % HCLGE_VF_NUM_PER_CMD / HCLGE_VF_NUM_PER_BYTE as the array index, instead of vport->vport_id / HCLGE_VF_NUM_PER_CMD.

Fixes: 052ece6d ("net: hns3: add ethtool related offload command")
Feature or Bugfix: Bugfix
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Reviewed-by: linyunsheng <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
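A worked example of the index arithmetic. The constant values below (HCLGE_VF_NUM_PER_CMD = 64, HCLGE_VF_NUM_PER_BYTE = 8) are assumptions for the illustration, taken from my reading of the driver headers rather than from this patch.

#include <stdio.h>

#define HCLGE_VF_NUM_PER_CMD  64
#define HCLGE_VF_NUM_PER_BYTE 8

static void show(unsigned int vport_id)
{
    unsigned int old_index = vport_id / HCLGE_VF_NUM_PER_CMD;                         /* buggy */
    unsigned int new_index = vport_id % HCLGE_VF_NUM_PER_CMD / HCLGE_VF_NUM_PER_BYTE; /* fixed */
    unsigned int bit       = vport_id % HCLGE_VF_NUM_PER_BYTE;

    printf("vport %2u: old byte %u, new byte %u, bit %u\n",
           vport_id, old_index, new_index, bit);
}

int main(void)
{
    /* With the old formula both VF 1 and VF 9 land in byte 0, bit 1 and so
     * toggle the same bitmap bit; the fixed formula separates them. */
    show(1);
    show(9);
    return 0;
}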
-
Submitted by Guangbin Huang

driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

The printed data pfc_en and pfc_map need to be displayed in hexadecimal notation, and the printed string needs to end with "\n"; this patch fixes both.

Feature or Bugfix: Bugfix
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Reviewed-by: lipeng <lipeng321@huawei.com>
Reviewed-by: linyunsheng <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Greg Kroah-Hartman

Merge 178 patches from the 4.19.68 and 4.19.69 stable branches (187 total), excluding 9 patches that were already merged:

46f9a1b ALSA: usb-audio: Fix a stack buffer overflow bug in check_input_term
58b9f19 ALSA: usb-audio: Fix an OOB bug in parse_audio_mixer_unit
a1cd2f7 dm: disable DISCARD if the underlying storage no longer supports it
d61d8ea bonding: Add vlan tx offload to hw_enc_features
cf13e30 userfaultfd_release: always remove uffd flags and clear vm_userfaultfd_ctx
8114012 dm btree: fix order of block initialization in btree_split_beneath
53e73d1 dm space map metadata: fix missing store of apply_bops() return value
11f85d4 xfs: fix missing ILOCK unlock when xfs_setattr_nonsize fails due to EDQUOT
17c2b7a xfs: don't trip over uninitialized buffer on extent read of corrupted inode

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by David Howells

[ Upstream commit 68553f1a ]

Fix rxrpc_unuse_local() to handle a NULL local pointer, as it can be called on an unbound socket on which rx->local is not yet set. The following reproducer (includes omitted):

  int main(void)
  {
      socket(AF_RXRPC, SOCK_DGRAM, AF_INET);
      return 0;
  }

causes the following oops to occur:

  BUG: kernel NULL pointer dereference, address: 0000000000000010
  ...
  RIP: 0010:rxrpc_unuse_local+0x8/0x1b
  ...
  Call Trace:
   rxrpc_release+0x2b5/0x338
   __sock_release+0x37/0xa1
   sock_close+0x14/0x17
   __fput+0x115/0x1e9
   task_work_run+0x72/0x98
   do_exit+0x51b/0xa7a
   ? __context_tracking_exit+0x4e/0x10e
   do_group_exit+0xab/0xab
   __x64_sys_exit_group+0x14/0x17
   do_syscall_64+0x89/0x1d4
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

Reported-by: syzbot+20dee719a2e090427b5f@syzkaller.appspotmail.com
Fixes: 730c5fd4 ("rxrpc: Fix local endpoint refcounting")
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Jeffrey Altman <jaltman@auristor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by David Howells

[ Upstream commit b00df840 ]

When a local endpoint (struct rxrpc_local) ceases to be in use by any AF_RXRPC sockets, it starts the process of being destroyed, but this doesn't cause it to be removed from the namespace endpoint list immediately, as tearing it down isn't trivial and can't be done in softirq context, so it gets deferred.

If a new socket comes along that wants to bind to the same endpoint, a new rxrpc_local object will be allocated and rxrpc_lookup_local() will use list_replace() to substitute the new one for the old.

Then, when the dying object gets to rxrpc_local_destroyer(), it is removed unconditionally from whatever list it is on by calling list_del_init().

However, list_replace() doesn't reset the pointers in the replaced list_head, and so the list_del_init() will likely corrupt the local endpoints list.

Fix this by using list_replace_init() instead.

Fixes: 730c5fd4 ("rxrpc: Fix local endpoint refcounting")
Reported-by: syzbot+193e29e9387ea5837f1d@syzkaller.appspotmail.com
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
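A standalone user-space sketch of why list_replace() is not enough here: it rewires the neighbours to point at the new entry but leaves the old entry's own pointers aimed into the list, so a later list_del_init() on the old entry corrupts the list, while list_replace_init() reinitialises the old entry and makes the later deletion harmless. The helpers mirror include/linux/list.h but this is an illustration, not the kernel code.

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *l) { l->next = l; l->prev = l; }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
    new->prev = head->prev;
    new->next = head;
    head->prev->next = new;
    head->prev = new;
}

static void list_replace(struct list_head *old, struct list_head *new)
{
    new->next = old->next;
    new->next->prev = new;
    new->prev = old->prev;
    new->prev->next = new;
}

static void list_replace_init(struct list_head *old, struct list_head *new)
{
    list_replace(old, new);
    INIT_LIST_HEAD(old);
}

static void list_del_init(struct list_head *entry)
{
    entry->prev->next = entry->next;
    entry->next->prev = entry->prev;
    INIT_LIST_HEAD(entry);
}

static int list_empty(const struct list_head *head) { return head->next == head; }

int main(void)
{
    struct list_head head, old, new;

    /* Buggy sequence: replace, then delete the dying entry later. */
    INIT_LIST_HEAD(&head);
    list_add_tail(&old, &head);
    list_replace(&old, &new);
    list_del_init(&old);              /* old still points at head/new: unlinks new! */
    printf("after list_replace:      list %s\n",
           list_empty(&head) ? "corrupted (empty)" : "intact");

    /* Fixed sequence. */
    INIT_LIST_HEAD(&head);
    list_add_tail(&old, &head);
    list_replace_init(&old, &new);
    list_del_init(&old);              /* now a harmless self-unlink */
    printf("after list_replace_init: list %s\n",
           list_empty(&head) ? "corrupted (empty)" : "intact");
    return 0;
}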
-
Submitted by David Howells

commit 06d9532f upstream.

rxrpc_queue_local() attempts to queue the local endpoint it is given and then, if successful, prints a trace line. The trace line includes the current usage count - but we're not allowed to look at the local endpoint at this point as we passed our ref on it to the workqueue. Fix this by reading the usage count before queuing the work item.

Also fix the reading of local->debug_id for trace lines, which must be done with the same consideration as reading the usage count.

Fixes: 09d2bf59 ("rxrpc: Add a tracepoint to track rxrpc_local refcounting")
Reported-by: syzbot+78e71c5bab4f76a6a719@syzkaller.appspotmail.com
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by David Howells

commit 730c5fd4 upstream.

The object lifetime management on the rxrpc_local struct is broken in that the rxrpc_local_processor() function is expected to clean up and remove an object - but it may get requeued by packets coming in on the backing UDP socket once it starts running. This may result in the assertion in rxrpc_local_rcu() firing because the memory has been scheduled for RCU destruction whilst still queued:

  rxrpc: Assertion failed
  ------------[ cut here ]------------
  kernel BUG at net/rxrpc/local_object.c:468!

Note that if the processor comes around before the RCU free function, it will just do nothing because ->dead is true.

Fix this by adding a separate refcount to count active users of the endpoint, which causes the endpoint to be destroyed when it reaches 0. The original refcount can then be used to refcount objects through the work processor and cause the memory to be rcu freed when that reaches 0.

Fixes: 4f95dd78 ("rxrpc: Rework local endpoint management")
Reported-by: syzbot+1e0edc4b8b7494c28450@syzkaller.appspotmail.com
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
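A user-space sketch of the two-counter scheme described above: "active" counts live users and triggers teardown when it drops to zero, while "ref" pins the memory until the last holder (for example a queued work item) is gone. Structure and names are illustrative only, not the rxrpc code.

#include <stdio.h>
#include <stdlib.h>

struct endpoint {
    int active;   /* users of the endpoint: sockets bound to it           */
    int ref;      /* holders of the memory: users, queued work, and so on */
    int dead;
};

static struct endpoint *ep_new(void)
{
    struct endpoint *ep = calloc(1, sizeof(*ep));

    ep->active = 1;
    ep->ref = 1;
    return ep;
}

static void ep_put(struct endpoint *ep)
{
    if (--ep->ref == 0) {
        printf("memory freed\n");
        free(ep);
    }
}

static void ep_unuse(struct endpoint *ep)
{
    if (--ep->active == 0) {
        ep->dead = 1;               /* tear down: unlink, close socket, ... */
        printf("endpoint destroyed\n");
        ep_put(ep);                 /* drop the use-count's reference */
    }
}

int main(void)
{
    struct endpoint *ep = ep_new();

    ep->ref++;                      /* e.g. a queued work item still holds it */
    ep_unuse(ep);                   /* last user goes away: destroy ...       */
    printf("dead=%d, memory still valid\n", ep->dead);
    ep_put(ep);                     /* ... but memory lives until the last ref */
    return 0;
}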
-
Submitted by Alastair D'Silva

The upstream commit 22e9c88d ("powerpc/64: reuse PPC32 static inline flush_dcache_range()") has a similar effect, but since it is a rewrite of the assembler to C, it is too invasive for stable. This patch is a minimal fix that addresses the issue in assembler. This patch applies cleanly to v5.2, v4.19 and v4.14.

When calling flush_(inval_)dcache_range with a size >4GB, we were masking off the upper 32 bits, so we would incorrectly flush a range smaller than intended. This patch replaces the 32-bit shifts with 64-bit ones, so that the full size is accounted for.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
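A small arithmetic demo of the truncation effect in C: if the byte count is narrowed to 32 bits before the shift into cache lines, a 5 GiB request flushes only a fraction of the range. The 128-byte cache line size is an assumption for the example.

#include <stdio.h>
#include <stdint.h>

#define L1_CACHE_SHIFT 7   /* 128-byte cache lines, assumed for this example */

int main(void)
{
    uint64_t size = 5ULL * 1024 * 1024 * 1024;   /* 5 GiB flush request */

    /* Buggy: the size is truncated to 32 bits before computing the line
     * count, mirroring the effect of 32-bit shift instructions. */
    uint32_t truncated = (uint32_t)size;
    uint64_t lines_buggy = truncated >> L1_CACHE_SHIFT;

    /* Fixed: keep the full 64-bit value throughout. */
    uint64_t lines_fixed = size >> L1_CACHE_SHIFT;

    printf("buggy flushes %llu lines, fixed flushes %llu lines\n",
           (unsigned long long)lines_buggy, (unsigned long long)lines_fixed);
    return 0;
}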
-
Submitted by Dan Carpenter

[ Upstream commit e0702d90 ]

This function is supposed to return error pointers so that it matches the dmz_get_rnd_zone_for_reclaim() function. The current code could lead to a NULL dereference in dmz_do_reclaim().

Fixes: b234c6d7 ("dm zoned: improve error handling in reclaim")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Dmitry Fomichev <dmitry.fomichev@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
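The general pattern here is returning an encoded error pointer instead of NULL so that every caller takes the same IS_ERR() path. A user-space re-creation of the kernel's ERR_PTR helpers, for illustration only (the lookup function and its error code are invented for the example):

#include <stdio.h>
#include <errno.h>

/* Minimal user-space copies of the ERR_PTR helpers. */
#define MAX_ERRNO 4095
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* A lookup that reports "nothing available" as an error pointer, not NULL. */
static void *get_zone_for_reclaim(int have_candidate)
{
    static int zone = 42;                 /* stand-in for a real zone object */
    return have_candidate ? (void *)&zone : ERR_PTR(-EBUSY);
}

int main(void)
{
    void *z = get_zone_for_reclaim(0);

    if (IS_ERR(z))
        printf("reclaim skipped: %ld\n", PTR_ERR(z));
    else
        printf("got zone %d\n", *(int *)z);
    return 0;
}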
-
Submitted by Darrick J. Wong

commit 710d707d upstream.

During testing of xfs/141 on a V4 filesystem, I observed some inconsistent behavior with regard to resources that are held (i.e. remain locked) across a defer roll. The transaction roll always gives the defer roll function a new transaction, even if committing the old transaction fails. However, the defer roll function only rejoins the held resources if the transaction commit succeeded. This means that callers of defer roll have to figure out whether the held resources are attached to the transaction being passed back.

Worse yet, if the defer roll was part of a defer finish call, we have a third possibility: the defer finish could pass back a dirty transaction with dirty held resources and an error code.

The only sane way to handle all of these scenarios is to require that the code that held the resource either cancel the transaction before unlocking and releasing the resources, or use functions that detach resources from a transaction properly (e.g. xfs_trans_brelse) if they need to drop the reference before committing or cancelling the transaction.

In order to make this so, change the defer roll code to join held resources to the new transaction unconditionally and fix all the bhold callers to release the held buffers correctly.

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
[mcgrof: fixes kz#204223 ]
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Allison Henderson

commit 068f985a upstream.

This patch adds xfs_attr_remove_args. These sub-routines remove the attributes specified in @args. We will use this later for setting parent pointers as a deferred attribute operation.

Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Allison Henderson

commit 2f3cd809 upstream.

This patch adds xfs_attr_set_args and xfs_bmap_set_attrforkoff. These sub-routines set the attributes specified in @args. We will use this later for setting parent pointers as a deferred attribute operation.

[dgc: remove attr fork init code from xfs_attr_set_args().]
[dgc: xfs_attr_try_sf_addname() NULLs args.trans after commit.]
[dgc: correct sf add error handling.]

Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Allison Henderson

commit 4c74a56b upstream.

This patch adds a subroutine xfs_attr_try_sf_addname used by xfs_attr_set. This subroutine will attempt to add the attribute name specified in args in shortform, as well as perform the error handling previously done in xfs_attr_set.

This patch helps to pre-simplify xfs_attr_set for reviewing purposes and reduce indentation. A new function will be added in the next patch.

[dgc: moved commit to helper function, too.]

Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Allison Henderson

commit e2421f0b upstream.

This patch moves fs/xfs/xfs_attr.h to fs/xfs/libxfs/xfs_attr.h since xfs_attr.c is in libxfs. We will need these later in xfsprogs.

Signed-off-by: Allison Henderson <allison.henderson@oracle.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Henry Burns

commit 701d6785 upstream.

In zs_destroy_pool() we call flush_work(&pool->free_work). However, we have no guarantee that migration isn't happening in the background at that time.

Since migration can't directly free pages, it relies on free_work being scheduled to free the pages. But there's nothing preventing an in-progress migration from queuing the work *after* zs_unregister_migration() has called flush_work(). Which would mean pages still pointing at the inode when we free it.

Since we know at destroy time all objects should be free, no new migrations can come in (since zs_page_isolate() fails for fully-free zspages). This means it is sufficient to track a "# isolated zspages" count by class, and have the destroy logic ensure all such pages have drained before proceeding. Keeping that state under the class spinlock keeps the logic straightforward.

In this case a memory leak could lead to an eventual crash if compaction hits the leaked page. This crash would only occur if people are changing their zswap backend at runtime (which eventually starts destruction).

Link: http://lkml.kernel.org/r/20190809181751.219326-2-henryburns@google.com
Fixes: 48b4800a ("zsmalloc: page migration support")
Signed-off-by: Henry Burns <henryburns@google.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Henry Burns <henrywolfeburns@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Jonathan Adams <jwadams@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Henry Burns

commit 1a87aa03 upstream.

In zs_page_migrate() we call putback_zspage() after we have finished migrating all pages in this zspage. However, the return value is ignored. If a zs_free() races in between zs_page_isolate() and zs_page_migrate(), freeing the last object in the zspage, putback_zspage() will leave the page in ZS_EMPTY for potentially an unbounded amount of time.

To fix this, we need to do the same thing as zs_page_putback() does: schedule free_work to occur.

To avoid duplicated code, move the sequence to a new putback_zspage_deferred() function which both zs_page_migrate() and zs_page_putback() call.

Link: http://lkml.kernel.org/r/20190809181751.219326-1-henryburns@google.com
Fixes: 48b4800a ("zsmalloc: page migration support")
Signed-off-by: Henry Burns <henryburns@google.com>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Henry Burns <henrywolfeburns@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Jonathan Adams <jwadams@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Vlastimil Babka

commit f7da677b upstream.

The THP splitting path is missing the split_page_owner() call that split_page() has. As a result, split THP pages are wrongly reported in the page_owner file as order-9 pages. Furthermore, when the former head page is freed, the remaining former tail pages are not listed in the page_owner file at all. This patch fixes that by adding the split_page_owner() call into __split_huge_page().

Link: http://lkml.kernel.org/r/20190820131828.22684-2-vbabka@suse.cz
Fixes: a9627bc5 ("mm/page_owner: introduce split_page_owner and replace manual handling")
Reported-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Michael Kelley

commit d0ff14fd upstream.

If alloc_descs() fails before irq_sysfs_init() has run, free_desc() in the cleanup path will call kobject_del() even though the kobject has not been added with kobject_add().

Fix this by making the call to kobject_del() conditional on whether irq_sysfs_init() has run.

This problem surfaced because commit aa30f47c ("kobject: Add support for default attribute groups to kobj_type") makes kobject_del() stricter about pairing with kobject_add(). If the pairing is incorrect, a WARNING and backtrace occur in sysfs_remove_group() because there is no parent.

[ tglx: Add a comment to the code and make it work with CONFIG_SYSFS=n ]

Fixes: ecb3f394 ("genirq: Expose interrupt information through sysfs")
Signed-off-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/1564703564-4116-1-git-send-email-mikelley@microsoft.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
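The rule the fix enforces is that a "del" must only run when the matching "add" actually happened. A user-space sketch of that pairing with a simple flag (types and names are invented for the example, not the genirq code):

#include <stdio.h>
#include <stdbool.h>

struct desc {
    bool added;             /* set only once the add actually succeeded */
};

static void desc_add(struct desc *d, bool subsystem_ready)
{
    if (!subsystem_ready)
        return;             /* early boot: nothing registered yet */
    d->added = true;
    printf("kobject added\n");
}

static void desc_free(struct desc *d)
{
    if (d->added) {         /* the fix: del only pairs with a successful add */
        printf("kobject deleted\n");
        d->added = false;
    }
    printf("descriptor freed\n");
}

int main(void)
{
    struct desc d = { .added = false };

    desc_add(&d, false);    /* add skipped because sysfs is not up yet */
    desc_free(&d);          /* must not take the del path */
    return 0;
}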
-
Submitted by Dmitry Fomichev

commit 75d66ffb upstream.

dm-zoned is observed to lock up or livelock in case of hardware failure or some misconfiguration of the backing zoned device.

This patch adds a new dm-zoned target function that checks the status of the backing device. If the request queue of the backing device is found to be in dying state or the SCSI backing device enters offline state, the health check code sets a dm-zoned target flag prompting all further incoming I/O to be rejected. In order to detect backing device failures timely, this new function is called in the request mapping path, at the beginning of every reclaim run and before performing any metadata I/O.

The proper way out of this situation is to do

  dmsetup remove <dm-zoned target>

and recreate the target when the problem with the backing device is resolved.

Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Dmitry Fomichev

commit d7428c50 upstream.

Some errors are ignored in the I/O path during queueing chunks for processing by chunk works. Since at least these errors are transient in nature, it should be possible to retry the failed incoming commands.

The fix: errors that can happen while queueing chunks are carried upwards to the main mapping function, which now returns DM_MAPIO_REQUEUE for any incoming requests that can not be properly queued.

Error logging/debug messages are added where needed.

Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Dmitry Fomichev

commit b234c6d7 upstream.

There are several places in the reclaim code where errors are not propagated to the main function, dmz_reclaim(). This function is responsible for unlocking zones that might still be locked at the end of any failed reclaim iteration. As a result, some device zones may be left permanently locked for reclaim, degrading the target's capability to reclaim zones.

This patch fixes these issues as follows -

Make sure that dmz_reclaim_buf(), dmz_reclaim_seq_data() and dmz_reclaim_rnd_data() return error codes to the caller. The dmz_reclaim() function is renamed to dmz_do_reclaim() to avoid clashing with "struct dmz_reclaim" and is modified to return the error to the caller. dmz_get_zone_for_reclaim() now returns an error instead of a NULL pointer, and the reclaim code checks for that error. Error logging/debug messages are added where necessary.

Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Mikulas Patocka

commit 1cfd5d33 upstream.

If the sector number is too high, dm_table_find_target() should return a pointer to a zeroed dm_target structure (the caller should test it with dm_target_is_valid). However, for some table sizes, the code in dm_table_find_target() that performs the btree lookup will access out-of-bound memory structures.

Fix this bug by testing the sector number at the beginning of dm_table_find_target(). Also, add an "inline" keyword to the function dm_table_get_size() because this is a hot path.

Fixes: 512875bd ("dm: table detect io beyond device")
Cc: stable@vger.kernel.org
Reported-by: Zhang Tao <kontais@zoho.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
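A minimal user-space sketch of the shape of the fix: validate the sector against the table size up front and hand back a zeroed sentinel, instead of letting an out-of-range value drive the lookup. The table layout and sizes are invented for the example.

#include <stdio.h>
#include <string.h>

struct target { unsigned long begin, len; };

static struct target targets[] = { {0, 100}, {100, 100}, {200, 50} };
static const unsigned long table_size = 250;   /* total sectors covered */

/* Returns a zeroed sentinel (len == 0) for out-of-range sectors instead of
 * walking the lookup structure with a bogus index. */
static struct target find_target(unsigned long sector)
{
    struct target none;

    memset(&none, 0, sizeof(none));
    if (sector >= table_size)              /* the added bounds check */
        return none;

    for (size_t i = 0; i < sizeof(targets) / sizeof(targets[0]); i++)
        if (sector >= targets[i].begin &&
            sector < targets[i].begin + targets[i].len)
            return targets[i];
    return none;
}

int main(void)
{
    struct target t = find_target(300);    /* beyond the table */

    printf("valid=%d\n", t.len != 0);
    return 0;
}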
-
Submitted by Wenwen Wang

commit dc1a3e8e upstream.

If rs_prepare_reshape() fails, no cleanup is executed, leading to a leak of the raid_set structure allocated at the beginning of raid_ctr(). To fix this issue, go to the label 'bad' if the error occurs.

Fixes: 11e47232 ("dm raid: stop keeping raid set frozen altogether")
Cc: stable@vger.kernel.org
Signed-off-by: Wenwen Wang <wenwen@cs.uga.edu>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Mikulas Patocka

commit 5729b6e5 upstream.

Fix a crash that was introduced by the commit 724376a0. The crash is reported here: https://gitlab.com/cryptsetup/cryptsetup/issues/468

When reading from the integrity device, the function dm_integrity_map_continue calls find_journal_node to find out if the location to read is present in the journal. Then, it calculates how many sectors are consecutively stored in the journal. Then, it locks the range with add_new_range and wait_and_add_new_range.

The problem is that during wait_and_add_new_range, we hold no locks (we don't hold ic->endio_wait.lock and we don't hold a range lock), so the journal may change arbitrarily while wait_and_add_new_range sleeps.

The code then goes to __journal_read_write and hits BUG_ON(journal_entry_get_sector(je) != logical_sector) because the journal has changed.

In order to fix this bug, we need to re-check the journal location after wait_and_add_new_range. We restrict the length to one block in order to not complicate the code too much.

Fixes: 724376a0 ("dm integrity: implement fair range locks")
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Submitted by Dmitry Fomichev

commit d1fef414 upstream.

This patch fixes a problem in dm-kcopyd that may leave jobs in the complete queue indefinitely in the event of backing storage failure.

This behavior has been observed while running a 100% write file fio workload against an XFS volume created on top of a dm-zoned target device. If the underlying storage of dm-zoned goes to offline state under I/O, kcopyd sometimes never issues the end copy callback and dm-zoned reclaim work hangs indefinitely waiting for that completion.

This behavior was traced down to the error handling code in the process_jobs() function that places the failed job into the complete_jobs queue, but doesn't wake up the job handler. In case of backing device failure, all outstanding jobs may end up going to the complete_jobs queue via this code path and then stay there forever because there are no more successful I/O jobs to wake up the job handler.

This patch adds a wake() call to always wake up the kcopyd job wait queue for all I/O jobs that fail before dm_io() gets called for that job. The patch also sets the write error status in all sub jobs that are failed because their master job has failed.

Fixes: b73c67c2 ("dm kcopyd: add sequential write feature")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Fomichev <dmitry.fomichev@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-