- 31 May 2019, 40 commits
-
Committed by Jiri Kosina
[ Upstream commit a65c88e16f32aa9ef2e8caa68ea5c29bd5eb0ff0 ]

In-NMI warnings have been added to vmalloc_fault() via:

  ebc8827f ("x86: Barf when vmalloc and kmemcheck faults happen in NMI")

back in the time when our NMI entry code could not cope with nested NMIs. These days, it's perfectly fine to take a fault in NMI context and we don't have to care about the fact that IRET from the fault handler might cause NMI nesting. This warning has already been removed from 32-bit implementation of vmalloc_fault() in:

  6863ea0c ("x86/mm: Remove in_nmi() warning from vmalloc_fault()")

but the 64-bit version was omitted. Remove the bogus warning also from 64-bit implementation of vmalloc_fault().

Reported-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 6863ea0c ("x86/mm: Remove in_nmi() warning from vmalloc_fault()")
Link: http://lkml.kernel.org/r/nycvar.YFH.7.76.1904240902280.9803@cbobk.fhfr.pm
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Sebastian Andrzej Siewior
[ Upstream commit d4645d30b50d1691c26ff0f8fa4e718b08f8d3bb ]

The test robot reported a wrong assignment of a per-CPU variable which it detected by using sparse and sent a report. The assignment itself is correct. The annotation for sparse was wrong and hence the report. The first pointer is a "normal" pointer and points to the per-CPU memory area. That means that the __percpu annotation has to be moved. Move the __percpu annotation to the pointer which points to the per-CPU area. This change affects only the sparse tool (and is ignored by the compiler).

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: f97f8f06 ("smpboot: Provide infrastructure for percpu hotplug threads")
Link: http://lkml.kernel.org/r/20190424085253.12178-1-bigeasy@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
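As a rough illustration (not the actual smpboot hunk; the struct and field names below are made up), the sparse-visible difference between the two annotation placements looks like this:

    /* Hypothetical example of where __percpu belongs. */
    #include <linux/percpu.h>
    #include <linux/sched.h>

    struct ht_config {
            /* Wrong: annotates the outer pointer, i.e. claims the pointer
             * value itself lives in the per-CPU address space. */
            struct task_struct __percpu **store_wrong;

            /* Right: a normal pointer whose target is a per-CPU slot, which
             * is what the code actually dereferences per CPU. */
            struct task_struct * __percpu *store;
    };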
-
Committed by Kees Cook
[ Upstream commit 392bef709659abea614abfe53cf228e7a59876a4 ]

When building x86 with Clang LTO and CFI, CFI jump regions are automatically added to the end of the .text section late in linking. As a result, the _etext position was being labelled before the appended jump regions, causing confusion about where the boundaries of the executable region actually are in the running kernel, and broke at least the fault injection code. This moves the _etext mark to outside (and immediately after) the .text area, as is already the case on other architectures (e.g. arm64, arm).

Reported-and-tested-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20190423183827.GA4012@beast
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Farhan Ali
[ Upstream commit b49bdc8602b7c9c7a977758bee4125683f73e59f ] When releasing the vfio-ccw mdev, we currently do not release any existing channel program and its pinned pages. This can lead to the following warning: [1038876.561565] WARNING: CPU: 2 PID: 144727 at drivers/vfio/vfio_iommu_type1.c:1494 vfio_sanity_check_pfn_list+0x40/0x70 [vfio_iommu_type1] .... 1038876.561921] Call Trace: [1038876.561935] ([<00000009897fb870>] 0x9897fb870) [1038876.561949] [<000003ff8013bf62>] vfio_iommu_type1_detach_group+0xda/0x2f0 [vfio_iommu_type1] [1038876.561965] [<000003ff8007b634>] __vfio_group_unset_container+0x64/0x190 [vfio] [1038876.561978] [<000003ff8007b87e>] vfio_group_put_external_user+0x26/0x38 [vfio] [1038876.562024] [<000003ff806fc608>] kvm_vfio_group_put_external_user+0x40/0x60 [kvm] [1038876.562045] [<000003ff806fcb9e>] kvm_vfio_destroy+0x5e/0xd0 [kvm] [1038876.562065] [<000003ff806f63fc>] kvm_put_kvm+0x2a4/0x3d0 [kvm] [1038876.562083] [<000003ff806f655e>] kvm_vm_release+0x36/0x48 [kvm] [1038876.562098] [<00000000003c2dc4>] __fput+0x144/0x228 [1038876.562113] [<000000000016ee82>] task_work_run+0x8a/0xd8 [1038876.562125] [<000000000014c7a8>] do_exit+0x5d8/0xd90 [1038876.562140] [<000000000014d084>] do_group_exit+0xc4/0xc8 [1038876.562155] [<000000000015c046>] get_signal+0x9ae/0xa68 [1038876.562169] [<0000000000108d66>] do_signal+0x66/0x768 [1038876.562185] [<0000000000b9e37e>] system_call+0x1ea/0x2d8 [1038876.562195] 2 locks held by qemu-system-s39/144727: [1038876.562205] #0: 00000000537abaf9 (&container->group_lock){++++}, at: __vfio_group_unset_container+0x3c/0x190 [vfio] [1038876.562230] #1: 00000000670008b5 (&iommu->lock){+.+.}, at: vfio_iommu_type1_detach_group+0x36/0x2f0 [vfio_iommu_type1] [1038876.562250] Last Breaking-Event-Address: [1038876.562262] [<000003ff8013aa24>] vfio_sanity_check_pfn_list+0x3c/0x70 [vfio_iommu_type1] [1038876.562272] irq event stamp: 4236481 [1038876.562287] hardirqs last enabled at (4236489): [<00000000001cee7a>] console_unlock+0x6d2/0x740 [1038876.562299] hardirqs last disabled at (4236496): [<00000000001ce87e>] console_unlock+0xd6/0x740 [1038876.562311] softirqs last enabled at (4234162): [<0000000000b9fa1e>] __do_softirq+0x556/0x598 [1038876.562325] softirqs last disabled at (4234153): [<000000000014e4cc>] irq_exit+0xac/0x108 [1038876.562337] ---[ end trace 6c96d467b1c3ca06 ]--- Similarly we do not free the channel program when we are removing the vfio-ccw device. Let's fix this by resetting the device and freeing the channel program and pinned pages in the release path. For the remove path we can just quiesce the device, since in the remove path the mediated device is going away for good and so we don't need to do a full reset. Signed-off-by: NFarhan Ali <alifm@linux.ibm.com> Message-Id: <ae9f20dc8873f2027f7b3c5d2aaa0bdfe06850b8.1554756534.git.alifm@linux.ibm.com> Acked-by: NEric Farman <farman@linux.ibm.com> Signed-off-by: NCornelia Huck <cohuck@redhat.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Farhan Ali
[ Upstream commit cea5dde42a83b5f0a039da672f8686455936b8d8 ] Currently we call flush_workqueue while holding the subchannel spinlock. But flush_workqueue function can go to sleep, so do not call the function while holding the spinlock. Fixes the following bug: [ 285.203430] BUG: scheduling while atomic: bash/14193/0x00000002 [ 285.203434] INFO: lockdep is turned off. .... [ 285.203485] Preemption disabled at: [ 285.203488] [<000003ff80243e5c>] vfio_ccw_sch_quiesce+0xbc/0x120 [vfio_ccw] [ 285.203496] CPU: 7 PID: 14193 Comm: bash Tainted: G W .... [ 285.203504] Call Trace: [ 285.203510] ([<0000000000113772>] show_stack+0x82/0xd0) [ 285.203514] [<0000000000b7a102>] dump_stack+0x92/0xd0 [ 285.203518] [<000000000017b8be>] __schedule_bug+0xde/0xf8 [ 285.203524] [<0000000000b95b5a>] __schedule+0x7a/0xc38 [ 285.203528] [<0000000000b9678a>] schedule+0x72/0xb0 [ 285.203533] [<0000000000b9bfbc>] schedule_timeout+0x34/0x528 [ 285.203538] [<0000000000b97608>] wait_for_common+0x118/0x1b0 [ 285.203544] [<0000000000166d6a>] flush_workqueue+0x182/0x548 [ 285.203550] [<000003ff80243e6e>] vfio_ccw_sch_quiesce+0xce/0x120 [vfio_ccw] [ 285.203556] [<000003ff80245278>] vfio_ccw_mdev_reset+0x38/0x70 [vfio_ccw] [ 285.203562] [<000003ff802458b0>] vfio_ccw_mdev_remove+0x40/0x78 [vfio_ccw] [ 285.203567] [<000003ff801a499c>] mdev_device_remove_ops+0x3c/0x80 [mdev] [ 285.203573] [<000003ff801a4d5c>] mdev_device_remove+0xc4/0x130 [mdev] [ 285.203578] [<000003ff801a5074>] remove_store+0x6c/0xa8 [mdev] [ 285.203582] [<000000000046f494>] kernfs_fop_write+0x14c/0x1f8 [ 285.203588] [<00000000003c1530>] __vfs_write+0x38/0x1a8 [ 285.203593] [<00000000003c187c>] vfs_write+0xb4/0x198 [ 285.203597] [<00000000003c1af2>] ksys_write+0x5a/0xb0 [ 285.203601] [<0000000000b9e270>] system_call+0xdc/0x2d8 Signed-off-by: NFarhan Ali <alifm@linux.ibm.com> Reviewed-by: NEric Farman <farman@linux.ibm.com> Reviewed-by: NPierre Morel <pmorel@linux.ibm.com> Message-Id: <626bab8bb2958ae132452e1ddaf1b20882ad5a9d.1554756534.git.alifm@linux.ibm.com> Signed-off-by: NCornelia Huck <cohuck@redhat.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Parav Pandit
[ Upstream commit 5d7ed2f27bbd482fd29e6b2e204b1a1ee8a0b268 ] When two netdev have same link local addresses (such as vlan and non vlan), two rdma cm listen id should be able to bind to following different addresses. listener-1: addr=lla, scope_id=A, port=X listener-2: addr=lla, scope_id=B, port=X However while comparing the addresses only addr and port are considered, due to which 2nd listener fails to listen. In below example of two listeners, 2nd listener is failing with address in use error. $ rping -sv -a fe80::268a:7ff:feb3:d113%ens2f1 -p 4545& $ rping -sv -a fe80::268a:7ff:feb3:d113%ens2f1.200 -p 4545 rdma_bind_addr: Address already in use To overcome this, consider the scope_ids as well which forms the accurate IPv6 link local address. Signed-off-by: NParav Pandit <parav@mellanox.com> Reviewed-by: NDaniel Jurgens <danielj@mellanox.com> Signed-off-by: NLeon Romanovsky <leonro@mellanox.com> Signed-off-by: NJason Gunthorpe <jgg@mellanox.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
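A minimal sketch of the comparison the fix calls for, assuming a hypothetical helper (this is not the rdma_cm code itself): link-local addresses only conflict when the scope id matches as well.

    #include <linux/in6.h>
    #include <net/ipv6.h>

    /* Hypothetical helper: two IPv6 listen addresses only clash if address,
     * port and (for link-local addresses) scope id all match. */
    static bool listen_addrs_conflict(const struct sockaddr_in6 *a,
                                      const struct sockaddr_in6 *b)
    {
            if (a->sin6_port != b->sin6_port)
                    return false;
            if (!ipv6_addr_equal(&a->sin6_addr, &b->sin6_addr))
                    return false;
            /* fe80::/10 addresses are only unique per interface, so the
             * scope id (interface index) must be part of the comparison. */
            if (ipv6_addr_type(&a->sin6_addr) & IPV6_ADDR_LINKLOCAL)
                    return a->sin6_scope_id == b->sin6_scope_id;
            return true;
    }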
-
Committed by Arnd Bergmann
[ Upstream commit 78d4eb8ad9e1d413449d1b7a060f50b6efa81ebd ] clang has identified a code path in which it thinks a variable may be unused: drivers/md/bcache/alloc.c:333:4: error: variable 'bucket' is used uninitialized whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized] fifo_pop(&ca->free_inc, bucket); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/md/bcache/util.h:219:27: note: expanded from macro 'fifo_pop' #define fifo_pop(fifo, i) fifo_pop_front(fifo, (i)) ^~~~~~~~~~~~~~~~~~~~~~~~~ drivers/md/bcache/util.h:189:6: note: expanded from macro 'fifo_pop_front' if (_r) { \ ^~ drivers/md/bcache/alloc.c:343:46: note: uninitialized use occurs here allocator_wait(ca, bch_allocator_push(ca, bucket)); ^~~~~~ drivers/md/bcache/alloc.c:287:7: note: expanded from macro 'allocator_wait' if (cond) \ ^~~~ drivers/md/bcache/alloc.c:333:4: note: remove the 'if' if its condition is always true fifo_pop(&ca->free_inc, bucket); ^ drivers/md/bcache/util.h:219:27: note: expanded from macro 'fifo_pop' #define fifo_pop(fifo, i) fifo_pop_front(fifo, (i)) ^ drivers/md/bcache/util.h:189:2: note: expanded from macro 'fifo_pop_front' if (_r) { \ ^ drivers/md/bcache/alloc.c:331:15: note: initialize the variable 'bucket' to silence this warning long bucket; ^ This cannot happen in practice because we only enter the loop if there is at least one element in the list. Slightly rearranging the code makes this clearer to both the reader and the compiler, which avoids the warning. Signed-off-by: NArnd Bergmann <arnd@arndb.de> Reviewed-by: NNathan Chancellor <natechancellor@gmail.com> Signed-off-by: NColy Li <colyli@suse.de> Signed-off-by: NJens Axboe <axboe@kernel.dk> Signed-off-by: NSasha Levin <sashal@kernel.org>
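The general shape of such a rearrangement, sketched from the macro excerpts above (this is not the actual bcache diff), assuming fifo_pop() evaluates to whether an element was removed:

    /* Before: clang cannot prove 'bucket' is written, because the write
     * happens inside fifo_pop()'s internal 'if (_r)' branch. */
    while (!fifo_empty(&ca->free_inc)) {
            long bucket;

            fifo_pop(&ca->free_inc, bucket);
            allocator_wait(ca, bch_allocator_push(ca, bucket));
    }

    /* After: using the pop result as the loop condition makes it obvious
     * that 'bucket' is assigned before it is used. */
    long bucket;

    while (fifo_pop(&ca->free_inc, bucket))
            allocator_wait(ca, bch_allocator_push(ca, bucket));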
-
Committed by Coly Li
[ Upstream commit ce3e4cfb59cb382f8e5ce359238aa580d4ae7778 ]

Currently run_cache_set() has no return value, so if there is a failure in bch_journal_replay(), the caller of run_cache_set() has no idea about it and just continues to execute the code following run_cache_set(). The internal failure is triggered inside bch_journal_replay() and handled in an async way. This behavior is inefficient: while the failure is being handled inside bch_journal_replay(), the cache register code is still running to start the cache set. Registering and unregistering code running at the same time may introduce rare race conditions and makes the code harder to understand. This patch adds a return value to run_cache_set() and returns -EIO if bch_journal_replay() fails. The caller of run_cache_set() can then detect such a failure and stop the registering code flow immediately inside register_cache_set(). If journal replay fails, run_cache_set() can report the error immediately to register_cache_set(). This patch makes the failure handling for bch_journal_replay() synchronous, easier to understand and debug, and avoids a potential race condition between register and unregister running at the same time.

Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
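A heavily abridged sketch of the control flow this patch describes (the real functions do much more; only the error propagation is shown, and the bodies are placeholders):

    /* Sketch: run_cache_set() now reports replay failures to its caller. */
    static int run_cache_set(struct cache_set *c)
    {
            LIST_HEAD(journal);

            /* ... read the journal entries into 'journal' ... */
            if (bch_journal_replay(c, &journal))
                    return -EIO;    /* caller sees the failure synchronously */
            /* ... remaining start-up steps ... */
            return 0;
    }

    /* register_cache_set() can then abort registration immediately. */
    if (run_cache_set(c) < 0)
            goto err;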
-
Committed by Tang Junhui
[ Upstream commit 631207314d88e9091be02fbdd1fdadb1ae2ed79a ]

journal replay failed with messages:

  Sep 10 19:10:43 ceph kernel: bcache: error on bb379a64-e44e-4812-b91d-a5599871a3b1: bcache: journal entries 2057493-2057567 missing! (replaying 2057493-20766016), disabling caching

The reason is that in journal_reclaim(), when discard is enabled, we send the discard command and reclaim those journal buckets whose seq is older than last_seq_now, but before we write a journal with last_seq_now, the machine is restarted. The journal with last_seq_now is therefore not written to the journal bucket, and the last_seq_wrote in the newest journal is older than the last_seq_now we expect it to be, so when doing replay, the journals from last_seq_wrote to last_seq_now are missing. It's hard to write a journal immediately after journal_reclaim(), and it is harmless if those missed journals were caused by discarding, since they have already been written to the btree nodes. So, if the missing seqs start from the beginning journal, we treat it as normal, only print a message to show the missing journals, and point out that this may be caused by discarding. Patch v2 adds a judgement condition to ignore the missed journals only when discard is enabled, as Coly suggested. (Coly Li: rebase the patch with other changes in bch_journal_replay())

Signed-off-by: Tang Junhui <tang.junhui.linux@gmail.com>
Tested-by: Dennis Schridde <devurandom@gmx.net>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Coly Li
[ Upstream commit 68d10e6979a3b59e3cd2e90bfcafed79c4cf180a ]

When a failure happens inside bch_journal_replay(), calling cache_set_err_on() and handling the failure in an async way is not a good idea, because after bch_journal_replay() returns, the registering code will continue to execute the following steps while the unregistering code triggered by cache_set_err_on() is running at the same time. First, it is unnecessary to handle the failure and unregister the cache set in an async way; second, there might be a potential race condition between running the register and unregister code for the same cache set. So in this patch, if a failure happens in bch_journal_replay(), we don't call cache_set_err_on(); we just print the same error message to the kernel message buffer and return -EIO immediately to the caller. The caller can then detect such a failure and handle it in a synchronized way.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Shenghui Wang
bcache: avoid potential memleak of list of journal_replay(s) in the CACHE_SYNC branch of run_cache_set

[ Upstream commit 95f18c9d1310730d075499a75aaf13bcd60405a7 ]

In the CACHE_SYNC branch of run_cache_set(), LIST_HEAD(journal) is used to collect journal_replay(s) and is filled by bch_journal_read(). If all goes well, bch_journal_replay() will release the list of journal_replay(s) at the end of the branch. If something goes wrong, the code flow will jump to the label "err:" and leave the list unreleased. This patch releases the list of journal_replay(s) in the case an error is detected.

v1 -> v2:
* Move the release code to the location after label 'err:' to simplify the change.

Signed-off-by: Shenghui Wang <shhuiw@foxmail.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Corentin Labbe
[ Upstream commit f87391558acf816b48f325a493d81d45dec40da0 ]

When nbytes < 4, end is wrongly set to a negative value which, due to the unsigned type, is then interpreted as a large value, leading to a deadlock in the following code. This patch fixes the problem.

Fixes: 6298e948 ("crypto: sunxi-ss - Add Allwinner Security System crypto accelerator")
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
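A sketch of this bug class, not the driver's actual code: subtracting from an unsigned length wraps around when the length is smaller than the subtrahend, so the short case must be handled before the subtraction.

    #include <linux/types.h>

    /* Illustration only: with unsigned arithmetic, nbytes < 4 makes 'end'
     * wrap to a huge value and the loop effectively never terminates. */
    static void walk_broken(const u8 *buf, unsigned int nbytes)
    {
            unsigned int end = nbytes - 4;  /* underflows when nbytes < 4 */
            unsigned int i;

            for (i = 0; i < end; i += 4)
                    ;                       /* process buf[i..i+3] */
    }

    /* Fixed shape: bail out (or take the tail path) before subtracting. */
    static void walk_fixed(const u8 *buf, unsigned int nbytes)
    {
            unsigned int i;

            if (nbytes < 4)
                    return;                 /* leave the remainder to the tail path */
            for (i = 0; i + 4 <= nbytes; i += 4)
                    ;                       /* process buf[i..i+3] */
    }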
-
Committed by Sagi Grimberg
[ Upstream commit 1007709d7d06fab09bf2d007657575958676282b ] If we timeout the admin startup sequence we might not yet have an I/O tagset allocated which causes the teardown sequence to crash. Make nvme_tcp_teardown_io_queues safe by not iterating inflight tags if the tagset wasn't allocated. Fixes: 4c174e636674 ("nvme-rdma: fix timeout handler") Signed-off-by: NSagi Grimberg <sagi@grimberg.me> Signed-off-by: NChristoph Hellwig <hch@lst.de> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Sagi Grimberg
[ Upstream commit 01fa017484ad98fccdeaab32db0077c574b6bd6f ]

If our target exposed a namespace with a block size that is greater than PAGE_SIZE, set 0 capacity on the namespace as we do not support it. This issue was encountered when the nvmet namespace was backed by a tempfile.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
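A sketch of the guard being described, with the capacity calculation abstracted away; the parameter names are illustrative, only the lba_shift convention and set_capacity() follow the driver.

    #include <linux/genhd.h>

    /* Sketch: refuse to expose a namespace whose logical block size exceeds
     * the page size by reporting zero capacity instead. */
    static void update_ns_capacity(struct gendisk *disk,
                                   unsigned int lba_shift, sector_t capacity)
    {
            if (lba_shift > PAGE_SHIFT)
                    capacity = 0;   /* block size > PAGE_SIZE is unsupported */

            set_capacity(disk, capacity);
    }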
-
Committed by Kangjie Lu
[ Upstream commit 0ed2a005347400500a39ea7c7318f1fea57fb3ca ]

In case create_singlethread_workqueue fails, the fix frees the hardware and returns NULL to avoid a NULL pointer dereference.

Signed-off-by: Kangjie Lu <kjlu@umn.edu>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
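A sketch of that error-handling shape; the type and function names are illustrative, not the driver's actual symbols.

    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct hw_priv {
            struct workqueue_struct *wq;
    };

    static struct hw_priv *hw_attach(void)
    {
            struct hw_priv *hw = kzalloc(sizeof(*hw), GFP_KERNEL);

            if (!hw)
                    return NULL;

            hw->wq = create_singlethread_workqueue("hw_wq");
            if (!hw->wq) {
                    kfree(hw);      /* free the hardware state on failure */
                    return NULL;    /* caller must check for NULL */
            }
            return hw;
    }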
-
Committed by Aditya Pakki
[ Upstream commit d5414c2355b20ea8201156d2e874265f1cb0d775 ] kmalloc can fail in rsi_register_rates_channels but memcpy still attempts to write to channels. The patch replaces these calls with kmemdup and passes the error upstream. Signed-off-by: NAditya Pakki <pakki001@umn.edu> Signed-off-by: NKalle Valo <kvalo@codeaurora.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
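A sketch of the kmemdup() pattern the patch describes; the surrounding structure is illustrative, not the rsi driver's actual types.

    #include <linux/slab.h>

    struct example_priv {
            void *channels;
    };

    /* Before: kmalloc() + memcpy(), where a failed allocation is never
     * checked and memcpy() writes through a NULL pointer.
     * After: kmemdup() allocates and copies in one step, and the error is
     * passed back to the caller. */
    static int register_rates_channels(struct example_priv *priv,
                                       const void *tmpl, size_t len)
    {
            void *copy = kmemdup(tmpl, len, GFP_KERNEL);

            if (!copy)
                    return -ENOMEM;
            priv->channels = copy;
            return 0;
    }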
-
Committed by Dan Carpenter
[ Upstream commit b4c35c17227fe437ded17ce683a6927845f8c4a4 ] The "rate_index" is only used as an index into the phist_data->rx_rate[] array in the mwifiex_hist_data_set() function. That array has MWIFIEX_MAX_AC_RX_RATES (74) elements and it's used to generate some debugfs information. The "rate_index" variable comes from the network skb->data[] and it is a u8 so it's in the 0-255 range. We need to cap it to prevent an array overflow. Fixes: cbf6e055 ("mwifiex: add rx histogram statistics support") Signed-off-by: NDan Carpenter <dan.carpenter@oracle.com> Signed-off-by: NKalle Valo <kvalo@codeaurora.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
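A sketch of the bounds check described above; the array size macro comes from the commit text, while the surrounding function shape is illustrative.

    #include <linux/types.h>

    #define MWIFIEX_MAX_AC_RX_RATES 74

    static void hist_data_set(u32 rx_rate[MWIFIEX_MAX_AC_RX_RATES],
                              u8 rate_index)
    {
            /* rate_index originates from skb->data[], i.e. 0..255, so it
             * must be clamped before indexing the 74-entry histogram. */
            if (rate_index >= MWIFIEX_MAX_AC_RX_RATES)
                    rate_index = MWIFIEX_MAX_AC_RX_RATES - 1;

            rx_rate[rate_index]++;
    }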
-
Committed by Daniel Baluta
[ Upstream commit ddb351145a967ee791a0fb0156852ec2fcb746ba ] is_slave_mode defaults to false because sai structure that contains it is kzalloc'ed. Anyhow, if we decide to set the following configuration SAI slave -> SAI master, is_slave_mode will remain set on true although SAI being master it should be set to false. Fix this by updating is_slave_mode for each call of fsl_sai_set_dai_fmt. Signed-off-by: NDaniel Baluta <daniel.baluta@nxp.com> Acked-by: NNicolin Chen <nicoleotsuka@gmail.com> Signed-off-by: NMark Brown <broonie@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Kangjie Lu
[ Upstream commit 06d5d6b7f9948a89543e1160ef852d57892c750d ] In case platform_device_alloc fails, the fix returns an error code to avoid the NULL pointer dereference. Signed-off-by: NKangjie Lu <kjlu@umn.edu> Signed-off-by: NSrinivas Kandagatla <srinivas.kandagatla@linaro.org> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
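A sketch of that fix's shape; the function and device names are illustrative.

    #include <linux/platform_device.h>

    static int register_child_device(void)
    {
            struct platform_device *pdev;
            int ret;

            pdev = platform_device_alloc("example-dev", -1);
            if (!pdev)
                    return -ENOMEM; /* bail out instead of dereferencing NULL */

            ret = platform_device_add(pdev);
            if (ret)
                    platform_device_put(pdev);
            return ret;
    }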
-
Committed by Daniel T. Lee
[ Upstream commit 32e621e55496a0009f44fe4914cd4a23cade4984 ] Currently, building bpf samples will cause the following error. ./tools/lib/bpf/bpf.h:132:27: error: 'UINT32_MAX' undeclared here (not in a function) .. #define BPF_LOG_BUF_SIZE (UINT32_MAX >> 8) /* verifier maximum in kernels <= 5.1 */ ^ ./samples/bpf/bpf_load.h:31:25: note: in expansion of macro 'BPF_LOG_BUF_SIZE' extern char bpf_log_buf[BPF_LOG_BUF_SIZE]; ^~~~~~~~~~~~~~~~ Due to commit 4519efa6f8ea ("libbpf: fix BPF_LOG_BUF_SIZE off-by-one error") hard-coded size of BPF_LOG_BUF_SIZE has been replaced with UINT32_MAX which is defined in <stdint.h> header. Even with this change, bpf selftests are running fine since these are built with clang and it includes header(-idirafter) from clang/6.0.0/include. (it has <stdint.h>) clang -I. -I./include/uapi -I../../../include/uapi -idirafter /usr/local/include -idirafter /usr/include \ -idirafter /usr/lib/llvm-6.0/lib/clang/6.0.0/include -idirafter /usr/include/x86_64-linux-gnu \ -Wno-compare-distinct-pointer-types -O2 -target bpf -emit-llvm -c progs/test_sysctl_prog.c -o - | \ llc -march=bpf -mcpu=generic -filetype=obj -o /linux/tools/testing/selftests/bpf/test_sysctl_prog.o But bpf samples are compiled with GCC, and it only searches and includes headers declared at the target file. As '#include <stdint.h>' hasn't been declared in tools/lib/bpf/bpf.h, it causes build failure of bpf samples. gcc -Wp,-MD,./samples/bpf/.sockex3_user.o.d -Wall -Wmissing-prototypes -Wstrict-prototypes \ -O2 -fomit-frame-pointer -std=gnu89 -I./usr/include -I./tools/lib/ -I./tools/testing/selftests/bpf/ \ -I./tools/ lib/ -I./tools/include -I./tools/perf -c -o ./samples/bpf/sockex3_user.o ./samples/bpf/sockex3_user.c; This commit add declaration of '#include <stdint.h>' to tools/lib/bpf/bpf.h to fix this problem. Signed-off-by: NDaniel T. Lee <danieltimlee@gmail.com> Acked-by: NYonghong Song <yhs@fb.com> Signed-off-by: NDaniel Borkmann <daniel@iogearbox.net> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Sergey Matyukevich
[ Upstream commit 5dc8cdce1d722c733f8c7af14c5fb595cfedbfa8 ] FullMAC STAs have no way to update bss channel after CSA channel switch completion. As a result, user-space tools may provide inconsistent channel info. For instance, consider the following two commands: $ sudo iw dev wlan0 link $ sudo iw dev wlan0 info The latter command gets channel info from the hardware, so most probably its output will be correct. However the former command gets channel info from scan cache, so its output will contain outdated channel info. In fact, current bss channel info will not be updated until the next [re-]connect. Note that mac80211 STAs have a workaround for this, but it requires access to internal cfg80211 data, see ieee80211_chswitch_work: /* XXX: shouldn't really modify cfg80211-owned data! */ ifmgd->associated->channel = sdata->csa_chandef.chan; This patch suggests to convert mac80211 workaround into cfg80211 behavior and to update current bss channel in cfg80211_ch_switch_notify. Signed-off-by: NSergey Matyukevich <sergey.matyukevich.os@quantenna.com> Signed-off-by: NJohannes Berg <johannes.berg@intel.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Sugar Zhang
[ Upstream commit 2da254cc7908105a60a6bb219d18e8dced03dcb9 ] This patch kill instructs the DMAC to immediately terminate execution of a thread. and then clear the interrupt status, at last, stop generating interrupts for DMA_SEV. to guarantee the next dma start is clean. otherwise, one interrupt maybe leave to next start and make some mistake. we can reporduce the problem as follows: DMASEV: modify the event-interrupt resource, and if the INTEN sets function as interrupt, the DMAC will set irq<event_num> HIGH to generate interrupt. write INTCLR to clear interrupt. DMA EXECUTING INSTRUCTS DMA TERMINATE | | | | ... _stop | | | spin_lock_irqsave DMASEV | | | | mask INTEN | | | DMAKILL | | | spin_unlock_irqrestore in above case, a interrupt was left, and if we unmask INTEN, the DMAC will set irq<event_num> HIGH to generate interrupt. to fix this, do as follows: DMA EXECUTING INSTRUCTS DMA TERMINATE | | | | ... _stop | | | spin_lock_irqsave DMASEV | | | | DMAKILL | | | clear INTCLR | mask INTEN | | | spin_unlock_irqrestore Signed-off-by: NSugar Zhang <sugar.zhang@rock-chips.com> Signed-off-by: NVinod Koul <vkoul@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Arnd Bergmann
[ Upstream commit 46b83629dede262315aa82179d105581f11763b6 ] clang produces a harmless warning for each use for the qeth_adp_supported macro: drivers/s390/net/qeth_l2_main.c:559:31: warning: implicit conversion from enumeration type 'enum qeth_ipa_setadp_cmd' to different enumeration type 'enum qeth_ipa_funcs' [-Wenum-conversion] if (qeth_adp_supported(card, IPA_SETADP_SET_PROMISC_MODE)) ~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/s390/net/qeth_core.h:179:41: note: expanded from macro 'qeth_adp_supported' qeth_is_ipa_supported(&c->options.adp, f) ~~~~~~~~~~~~~~~~~~~~~ ^ Add a version of this macro that uses the correct types, and remove the unused qeth_adp_enabled() macro that has the same problem. Reviewed-by: NNathan Chancellor <natechancellor@gmail.com> Signed-off-by: NArnd Bergmann <arnd@arndb.de> Signed-off-by: NJulian Wiedmann <jwi@linux.ibm.com> Signed-off-by: NDavid S. Miller <davem@davemloft.net> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Mariusz Bialonczyk
[ Upstream commit 62909da8aca048ecf9fbd7e484e5100608f40a63 ]

From the DS2408 datasheet [1]: "Resume Command function checks the status of the RC flag and, if it is set, directly transfers control to the control functions, similar to a Skip ROM command. The only way to set the RC flag is through successfully executing the Match ROM, Search ROM, Conditional Search ROM, or Overdrive-Match ROM command"

The function currently works perfectly fine in a multidrop bus, but when we have only a single slave connected, then only a Skip ROM is used and Match ROM is not called at all. This leads to problems, e.g. with a single DS2408 connected, as the Resume Command is not working properly and the device responds with failing results after the Resume Command. This commit fixes this by using a Skip ROM instead in those cases. The bandwidth / performance advantage is exactly the same.

Refs:
[1] https://datasheets.maximintegrated.com/en/ds/DS2408.pdf

Signed-off-by: Mariusz Bialonczyk <manio@skyboo.net>
Reviewed-by: Jean-Francois Dagenais <jeff.dagenais@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Nicholas Piggin
[ Upstream commit 9b019acb72e4b5741d88e8936d6f200ed44b66b2 ]

The NOHZ idle balancer runs on the lowest idle CPU. This can interfere with isolated CPUs, so confine it to HK_FLAG_MISC housekeeping CPUs. HK_FLAG_SCHED is not used for this because it is not set anywhere at the moment. This could be folded into HK_FLAG_SCHED once that option is fixed. The problem was observed with increased jitter on an application running on CPU0, caused by NOHZ idle load balancing being run on CPU1 (an SMT sibling).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190412042613.28930-1-npiggin@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Philipp Rudo
[ Upstream commit 729829d775c9a5217abc784b2f16087d79c4eec8 ]

To register data for the next kernel (command line, oldmem_base, etc.) the current kernel needs to find the ELF segment that contains head.S. This is currently done by checking for 'phdr->p_paddr == 0'. This works fine for the current kernel build, but in theory the first few pages could be skipped. Make the detection more robust by checking whether the entry point lies within the segment.

Signed-off-by: Philipp Rudo <prudo@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
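A minimal sketch of the more robust check, assuming a hypothetical helper name: pick the segment whose physical address range covers the entry point rather than the segment loaded at physical address 0.

    #include <linux/elf.h>
    #include <linux/types.h>

    /* Sketch: does this program header contain the kernel entry point? */
    static bool phdr_contains_entry(const Elf64_Phdr *phdr, u64 entry)
    {
            return entry >= phdr->p_paddr &&
                   entry <  phdr->p_paddr + phdr->p_memsz;
    }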
-
Committed by Manish Rangankar
[ Upstream commit f848bfd8e167210a29374e8a678892bed591684f ] Sometimes during connection recovery when there is a failure to resolve ARP, and offload connection was not issued, driver tries to flush pending offload connection work which was not queued up. kernel: WARNING: CPU: 19 PID: 10110 at kernel/workqueue.c:3030 __flush_work.isra.34+0x19c/0x1b0 kernel: CPU: 19 PID: 10110 Comm: iscsid Tainted: G W 5.1.0-rc4 #11 kernel: Hardware name: Dell Inc. PowerEdge R730/0599V5, BIOS 2.9.1 12/04/2018 kernel: RIP: 0010:__flush_work.isra.34+0x19c/0x1b0 kernel: Code: 8b fb 66 0f 1f 44 00 00 31 c0 eb ab 48 89 ef c6 07 00 0f 1f 40 00 fb 66 0f 1f 44 00 00 31 c0 eb 96 e8 08 16 fe ff 0f 0b eb 8d <0f> 0b 31 c0 eb 87 0f 1f 40 00 66 2e 0f 1 f 84 00 00 00 00 00 0f 1f kernel: RSP: 0018:ffffa6b4054dba68 EFLAGS: 00010246 kernel: RAX: 0000000000000000 RBX: ffff91df21c36fc0 RCX: 0000000000000000 kernel: RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff91df21c36fc0 kernel: RBP: ffff91df21c36ef0 R08: 0000000000000000 R09: 0000000000000000 kernel: R10: 0000000000000038 R11: ffffa6b4054dbd60 R12: ffffffffc05e72c0 kernel: R13: ffff91db10280820 R14: 0000000000000048 R15: 0000000000000000 kernel: FS: 00007f5d83cc1740(0000) GS:ffff91df2f840000(0000) knlGS:0000000000000000 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 kernel: CR2: 0000000001cc5000 CR3: 0000000465450002 CR4: 00000000001606e0 kernel: Call Trace: kernel: ? try_to_del_timer_sync+0x4d/0x80 kernel: qedi_ep_disconnect+0x3b/0x410 [qedi] kernel: ? 0xffffffffc083c000 kernel: ? klist_iter_exit+0x14/0x20 kernel: ? class_find_device+0x93/0xf0 kernel: iscsi_if_ep_disconnect.isra.18+0x58/0x70 [scsi_transport_iscsi] kernel: iscsi_if_recv_msg+0x10e2/0x1510 [scsi_transport_iscsi] kernel: ? copyout+0x22/0x30 kernel: ? _copy_to_iter+0xa0/0x430 kernel: ? _cond_resched+0x15/0x30 kernel: ? __kmalloc_node_track_caller+0x1f9/0x270 kernel: iscsi_if_rx+0xa5/0x1e0 [scsi_transport_iscsi] kernel: netlink_unicast+0x17f/0x230 kernel: netlink_sendmsg+0x2d2/0x3d0 kernel: sock_sendmsg+0x36/0x50 kernel: ___sys_sendmsg+0x280/0x2a0 kernel: ? timerqueue_add+0x54/0x80 kernel: ? enqueue_hrtimer+0x38/0x90 kernel: ? hrtimer_start_range_ns+0x19f/0x2c0 kernel: __sys_sendmsg+0x58/0xa0 kernel: do_syscall_64+0x5b/0x180 kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9 Signed-off-by: NManish Rangankar <mrangankar@marvell.com> Signed-off-by: NMartin K. Petersen <martin.petersen@oracle.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Fabien Dessenne
[ Upstream commit cf612c5949aca2bd81a1e28688957c8149ea2693 ] Manage the -EPROBE_DEFER error case for the wake IRQ. Signed-off-by: NFabien Dessenne <fabien.dessenne@st.com> Acked-by: NAmelie Delaunay <amelie.delaunay@st.com> Signed-off-by: NAlexandre Belloni <alexandre.belloni@bootlin.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
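A sketch of propagating -EPROBE_DEFER from the wake IRQ lookup at probe time; the function name is illustrative and the real driver does more setup around this.

    #include <linux/platform_device.h>

    static int example_rtc_probe(struct platform_device *pdev)
    {
            int wakeirq = platform_get_irq(pdev, 0);

            if (wakeirq == -EPROBE_DEFER)
                    return -EPROBE_DEFER;   /* let the driver core retry later */
            if (wakeirq < 0)
                    return wakeirq;

            /* ... request the IRQ and register the RTC ... */
            return 0;
    }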
-
Committed by Sven Van Asbroeck
[ Upstream commit f22b1ba15ee5785aa028384ebf77dd39e8e47b70 ] The device's remove() attempts to shut down the delayed_work scheduled on the kernel-global workqueue by calling flush_scheduled_work(). Unfortunately, flush_scheduled_work() does not prevent the delayed_work from re-scheduling itself. The delayed_work might run after the device has been removed, and touch the already de-allocated info structure. This is a potential use-after-free. Fix by calling cancel_delayed_work_sync() during remove(): this ensures that the delayed work is properly cancelled, is no longer running, and is not able to re-schedule itself. This issue was detected with the help of Coccinelle. Signed-off-by: NSven Van Asbroeck <TheSven73@gmail.com> Signed-off-by: NAlexandre Belloni <alexandre.belloni@bootlin.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
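A sketch contrasting the two calls in a remove() path; 'example_info' stands in for the driver's private data.

    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct example_info {
            struct delayed_work work;
    };

    static void example_remove(struct example_info *info)
    {
            /* flush_scheduled_work() only waits for currently queued work;
             * the handler could still re-arm itself and run after 'info' is
             * freed. cancel_delayed_work_sync() waits for a running handler
             * AND prevents it from being re-queued. */
            cancel_delayed_work_sync(&info->work);
            kfree(info);
    }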
-
Committed by Johannes Berg
[ Upstream commit 30f24eabab8cd801064c5c37589d803cb4341929 ] If for some reason the device gives us an RX interrupt before we're ready for it, perhaps during device power-on with misconfigured IRQ causes mapping or so, we can crash trying to access the queues. Prevent that by checking that we actually have RXQs and that they were properly allocated. Signed-off-by: NJohannes Berg <johannes.berg@intel.com> Signed-off-by: NLuca Coelho <luciano.coelho@intel.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Qu Wenruo
[ Upstream commit 7ac1e464c4d473b517bb784f30d40da1f842482e ] When we failed to find a root key in btrfs_update_root(), we just panic. That's definitely not cool, fix it by outputting an unique error message, aborting current transaction and return -EUCLEAN. This should not normally happen as the root has been used by the callers in some way. Reviewed-by: NFilipe Manana <fdmanana@suse.com> Reviewed-by: NJohannes Thumshirn <jthumshirn@suse.de> Signed-off-by: NQu Wenruo <wqu@suse.com> Reviewed-by: NDavid Sterba <dsterba@suse.com> Signed-off-by: NDavid Sterba <dsterba@suse.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Josef Bacik
[ Upstream commit ff612ba7849964b1898fd3ccd1f56941129c6aab ] We've been seeing the following sporadically throughout our fleet panic: kernel BUG at fs/btrfs/relocation.c:4584! netversion: 5.0-0 Backtrace: #0 [ffffc90003adb880] machine_kexec at ffffffff81041da8 #1 [ffffc90003adb8c8] __crash_kexec at ffffffff8110396c #2 [ffffc90003adb988] crash_kexec at ffffffff811048ad #3 [ffffc90003adb9a0] oops_end at ffffffff8101c19a #4 [ffffc90003adb9c0] do_trap at ffffffff81019114 #5 [ffffc90003adba00] do_error_trap at ffffffff810195d0 #6 [ffffc90003adbab0] invalid_op at ffffffff81a00a9b [exception RIP: btrfs_reloc_cow_block+692] RIP: ffffffff8143b614 RSP: ffffc90003adbb68 RFLAGS: 00010246 RAX: fffffffffffffff7 RBX: ffff8806b9c32000 RCX: ffff8806aad00690 RDX: ffff880850b295e0 RSI: ffff8806b9c32000 RDI: ffff88084f205bd0 RBP: ffff880849415000 R8: ffffc90003adbbe0 R9: ffff88085ac90000 R10: ffff8805f7369140 R11: 0000000000000000 R12: ffff880850b295e0 R13: ffff88084f205bd0 R14: 0000000000000000 R15: 0000000000000000 ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018 #7 [ffffc90003adbbb0] __btrfs_cow_block at ffffffff813bf1cd #8 [ffffc90003adbc28] btrfs_cow_block at ffffffff813bf4b3 #9 [ffffc90003adbc78] btrfs_search_slot at ffffffff813c2e6c The way relocation moves data extents is by creating a reloc inode and preallocating extents in this inode and then copying the data into these preallocated extents. Once we've done this for all of our extents, we'll write out these dirty pages, which marks the extent written, and goes into btrfs_reloc_cow_block(). From here we get our current reloc_control, which _should_ match the reloc_control for the current block group we're relocating. However if we get an ENOSPC in this path at some point we'll bail out, never initiating writeback on this inode. Not a huge deal, unless we happen to be doing relocation on a different block group, and this block group is now rc->stage == UPDATE_DATA_PTRS. This trips the BUG_ON() in btrfs_reloc_cow_block(), because we expect to be done modifying the data inode. We are in fact done modifying the metadata for the data inode we're currently using, but not the one from the failed block group, and thus we BUG_ON(). (This happens when writeback finishes for extents from the previous group, when we are at btrfs_finish_ordered_io() which updates the data reloc tree (inode item, drops/adds extent items, etc).) Fix this by writing out the reloc data inode always, and then breaking out of the loop after that point to keep from tripping this BUG_ON() later. Signed-off-by: NJosef Bacik <josef@toxicpanda.com> Reviewed-by: NFilipe Manana <fdmanana@suse.com> [ add note from Filipe ] Signed-off-by: NDavid Sterba <dsterba@suse.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Robbie Ko
[ Upstream commit 39ad317315887c2cb9a4347a93a8859326ddf136 ] When doing fallocate, we first add the range to the reserve_list and then reserve the quota. If quota reservation fails, we'll release all reserved parts of reserve_list. However, cur_offset is not updated to indicate that this range is already been inserted into the list. Therefore, the same range is freed twice. Once at list_for_each_entry loop, and once at the end of the function. This will result in WARN_ON on bytes_may_use when we free the remaining space. At the end, under the 'out' label we have a call to: btrfs_free_reserved_data_space(inode, data_reserved, alloc_start, alloc_end - cur_offset); The start offset, third argument, should be cur_offset. Everything from alloc_start to cur_offset was freed by the list_for_each_entry_safe_loop. Fixes: 18513091 ("btrfs: update btrfs_space_info's bytes_may_use timely") Reviewed-by: NFilipe Manana <fdmanana@suse.com> Signed-off-by: NRobbie Ko <robbieko@synology.com> Signed-off-by: NDavid Sterba <dsterba@suse.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Nadav Amit
[ Upstream commit f2c65fb3221adc6b73b0549fc7ba892022db9797 ] When modules and BPF filters are loaded, there is a time window in which some memory is both writable and executable. An attacker that has already found another vulnerability (e.g., a dangling pointer) might be able to exploit this behavior to overwrite kernel code. Prevent having writable executable PTEs in this stage. In addition, avoiding having W+X mappings can also slightly simplify the patching of modules code on initialization (e.g., by alternatives and static-key), as would be done in the next patch. This was actually the main motivation for this patch. To avoid having W+X mappings, set them initially as RW (NX) and after they are set as RO set them as X as well. Setting them as executable is done as a separate step to avoid one core in which the old PTE is cached (hence writable), and another which sees the updated PTE (executable), which would break the W^X protection. Suggested-by: NThomas Gleixner <tglx@linutronix.de> Suggested-by: NAndy Lutomirski <luto@amacapital.net> Signed-off-by: NNadav Amit <namit@vmware.com> Signed-off-by: NRick Edgecombe <rick.p.edgecombe@intel.com> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Cc: <akpm@linux-foundation.org> Cc: <ard.biesheuvel@linaro.org> Cc: <deneen.t.dock@intel.com> Cc: <kernel-hardening@lists.openwall.com> Cc: <kristen@linux.intel.com> Cc: <linux_dti@icloud.com> Cc: <will.deacon@arm.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jessica Yu <jeyu@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Rik van Riel <riel@surriel.com> Link: https://lkml.kernel.org/r/20190426001143.4983-12-namit@vmware.comSigned-off-by: NIngo Molnar <mingo@kernel.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Bart Van Assche
[ Upstream commit 300ec7415c1fed5c73660f50c8e14a67e236dc0a ] Since fc_remote_port_delete() must be called with interrupts enabled, do not disable interrupts when calling that function. Remove the lockin calls from around the put_sess() call. This is safe because the function that is called when the final reference is dropped, qlt_unreg_sess(), grabs the proper locks. This patch avoids that lockdep reports the following: WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected kworker/2:1/62 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire: 0000000009e679b3 (&(&k->k_lock)->rlock){+.+.}, at: klist_next+0x43/0x1d0 and this task is already holding: 00000000a033b71c (&(&ha->tgt.sess_lock)->rlock){-...}, at: qla24xx_delete_sess_fn+0x55/0xf0 [qla2xxx_scst] which would create a new lock dependency: (&(&ha->tgt.sess_lock)->rlock){-...} -> (&(&k->k_lock)->rlock){+.+.} but this new dependency connects a HARDIRQ-irq-safe lock: (&(&ha->tgt.sess_lock)->rlock){-...} ... which became HARDIRQ-irq-safe at: lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 qla24xx_report_id_acquisition+0xa69/0xe30 [qla2xxx_scst] qla24xx_process_response_queue+0x69e/0x1270 [qla2xxx_scst] qla24xx_msix_rsp_q+0x79/0xf0 [qla2xxx_scst] __handle_irq_event_percpu+0x79/0x3c0 handle_irq_event_percpu+0x70/0xf0 handle_irq_event+0x5a/0x8b handle_edge_irq+0x12c/0x310 handle_irq+0x192/0x20a do_IRQ+0x73/0x160 ret_from_intr+0x0/0x1d default_idle+0x23/0x1f0 arch_cpu_idle+0x15/0x20 default_idle_call+0x35/0x40 do_idle+0x2bb/0x2e0 cpu_startup_entry+0x1d/0x20 start_secondary+0x2a8/0x320 secondary_startup_64+0xa4/0xb0 to a HARDIRQ-irq-unsafe lock: (&(&k->k_lock)->rlock){+.+.} ... which became HARDIRQ-irq-unsafe at: ... lock_acquire+0xe3/0x200 _raw_spin_lock+0x32/0x50 klist_add_tail+0x33/0xb0 device_add+0x7e1/0xb50 device_create_groups_vargs+0x11c/0x150 device_create_with_groups+0x89/0xb0 vtconsole_class_init+0xb2/0x124 do_one_initcall+0xc5/0x3ce kernel_init_freeable+0x295/0x32e kernel_init+0x11/0x11b ret_from_fork+0x3a/0x50 other info that might help us debug this: Possible interrupt unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&(&k->k_lock)->rlock); local_irq_disable(); lock(&(&ha->tgt.sess_lock)->rlock); lock(&(&k->k_lock)->rlock); <Interrupt> lock(&(&ha->tgt.sess_lock)->rlock); *** DEADLOCK *** 3 locks held by kworker/2:1/62: #0: 00000000a4319c16 ((wq_completion)"qla2xxx_wq"){+.+.}, at: process_one_work+0x437/0xa80 #1: 00000000ffa34c42 ((work_completion)(&sess->del_work)){+.+.}, at: process_one_work+0x437/0xa80 #2: 00000000a033b71c (&(&ha->tgt.sess_lock)->rlock){-...}, at: qla24xx_delete_sess_fn+0x55/0xf0 [qla2xxx_scst] the dependencies between HARDIRQ-irq-safe lock and the holding lock: -> (&(&ha->tgt.sess_lock)->rlock){-...} ops: 8 { IN-HARDIRQ-W at: lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 qla24xx_report_id_acquisition+0xa69/0xe30 [qla2xxx_scst] qla24xx_process_response_queue+0x69e/0x1270 [qla2xxx_scst] qla24xx_msix_rsp_q+0x79/0xf0 [qla2xxx_scst] __handle_irq_event_percpu+0x79/0x3c0 handle_irq_event_percpu+0x70/0xf0 handle_irq_event+0x5a/0x8b handle_edge_irq+0x12c/0x310 handle_irq+0x192/0x20a do_IRQ+0x73/0x160 ret_from_intr+0x0/0x1d default_idle+0x23/0x1f0 arch_cpu_idle+0x15/0x20 default_idle_call+0x35/0x40 do_idle+0x2bb/0x2e0 cpu_startup_entry+0x1d/0x20 start_secondary+0x2a8/0x320 secondary_startup_64+0xa4/0xb0 INITIAL USE at: lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 qla24xx_report_id_acquisition+0xa69/0xe30 [qla2xxx_scst] qla24xx_process_response_queue+0x69e/0x1270 [qla2xxx_scst] 
qla24xx_msix_rsp_q+0x79/0xf0 [qla2xxx_scst] __handle_irq_event_percpu+0x79/0x3c0 handle_irq_event_percpu+0x70/0xf0 handle_irq_event+0x5a/0x8b handle_edge_irq+0x12c/0x310 handle_irq+0x192/0x20a do_IRQ+0x73/0x160 ret_from_intr+0x0/0x1d default_idle+0x23/0x1f0 arch_cpu_idle+0x15/0x20 default_idle_call+0x35/0x40 do_idle+0x2bb/0x2e0 cpu_startup_entry+0x1d/0x20 start_secondary+0x2a8/0x320 secondary_startup_64+0xa4/0xb0 } ... key at: [<ffffffffa0c0d080>] __key.85462+0x0/0xfffffffffff7df80 [qla2xxx_scst] ... acquired at: lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 klist_next+0x43/0x1d0 device_for_each_child+0x96/0x110 scsi_target_block+0x3c/0x40 [scsi_mod] fc_remote_port_delete+0xe7/0x1c0 [scsi_transport_fc] qla2x00_mark_device_lost+0xa0b/0xa30 [qla2xxx_scst] qlt_unreg_sess+0x1c6/0x380 [qla2xxx_scst] qla24xx_delete_sess_fn+0xe6/0xf0 [qla2xxx_scst] process_one_work+0x511/0xa80 worker_thread+0x67/0x5b0 kthread+0x1d2/0x1f0 ret_from_fork+0x3a/0x50 the dependencies between the lock to be acquired and HARDIRQ-irq-unsafe lock: -> (&(&k->k_lock)->rlock){+.+.} ops: 13831 { HARDIRQ-ON-W at: lock_acquire+0xe3/0x200 _raw_spin_lock+0x32/0x50 klist_add_tail+0x33/0xb0 device_add+0x7e1/0xb50 device_create_groups_vargs+0x11c/0x150 device_create_with_groups+0x89/0xb0 vtconsole_class_init+0xb2/0x124 do_one_initcall+0xc5/0x3ce kernel_init_freeable+0x295/0x32e kernel_init+0x11/0x11b ret_from_fork+0x3a/0x50 SOFTIRQ-ON-W at: lock_acquire+0xe3/0x200 _raw_spin_lock+0x32/0x50 klist_add_tail+0x33/0xb0 device_add+0x7e1/0xb50 device_create_groups_vargs+0x11c/0x150 device_create_with_groups+0x89/0xb0 vtconsole_class_init+0xb2/0x124 do_one_initcall+0xc5/0x3ce kernel_init_freeable+0x295/0x32e kernel_init+0x11/0x11b ret_from_fork+0x3a/0x50 INITIAL USE at: lock_acquire+0xe3/0x200 _raw_spin_lock+0x32/0x50 klist_add_tail+0x33/0xb0 device_add+0x7e1/0xb50 device_create_groups_vargs+0x11c/0x150 device_create_with_groups+0x89/0xb0 vtconsole_class_init+0xb2/0x124 do_one_initcall+0xc5/0x3ce kernel_init_freeable+0x295/0x32e kernel_init+0x11/0x11b ret_from_fork+0x3a/0x50 } ... key at: [<ffffffff83ed8780>] __key.15491+0x0/0x40 ... acquired at: lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 klist_next+0x43/0x1d0 device_for_each_child+0x96/0x110 scsi_target_block+0x3c/0x40 [scsi_mod] fc_remote_port_delete+0xe7/0x1c0 [scsi_transport_fc] qla2x00_mark_device_lost+0xa0b/0xa30 [qla2xxx_scst] qlt_unreg_sess+0x1c6/0x380 [qla2xxx_scst] qla24xx_delete_sess_fn+0xe6/0xf0 [qla2xxx_scst] process_one_work+0x511/0xa80 worker_thread+0x67/0x5b0 kthread+0x1d2/0x1f0 ret_from_fork+0x3a/0x50 stack backtrace: CPU: 2 PID: 62 Comm: kworker/2:1 Tainted: G O 5.0.7-dbg+ #8 Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 Workqueue: qla2xxx_wq qla24xx_delete_sess_fn [qla2xxx_scst] Call Trace: dump_stack+0x86/0xca check_usage.cold.52+0x473/0x563 __lock_acquire+0x11c0/0x23e0 lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 klist_next+0x43/0x1d0 device_for_each_child+0x96/0x110 scsi_target_block+0x3c/0x40 [scsi_mod] fc_remote_port_delete+0xe7/0x1c0 [scsi_transport_fc] qla2x00_mark_device_lost+0xa0b/0xa30 [qla2xxx_scst] qlt_unreg_sess+0x1c6/0x380 [qla2xxx_scst] qla24xx_delete_sess_fn+0xe6/0xf0 [qla2xxx_scst] process_one_work+0x511/0xa80 worker_thread+0x67/0x5b0 kthread+0x1d2/0x1f0 ret_from_fork+0x3a/0x50 Cc: Himanshu Madhani <hmadhani@marvell.com> Cc: Giridhar Malavali <gmalavali@marvell.com> Signed-off-by: NBart Van Assche <bvanassche@acm.org> Acked-by: NHimanshu Madhani <hmadhani@marvell.com> Signed-off-by: NMartin K. 
Petersen <martin.petersen@oracle.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Bart Van Assche
[ Upstream commit d4023db71108375e4194e92730ba0d32d7f07813 ] This patch avoids that lockdep reports the following warning: ===================================================== WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected 5.1.0-rc1-dbg+ #11 Tainted: G W ----------------------------------------------------- rmdir/1478 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire: 00000000e7ac4607 (&(&k->k_lock)->rlock){+.+.}, at: klist_next+0x43/0x1d0 and this task is already holding: 00000000cf0baf5e (&(&ha->tgt.sess_lock)->rlock){-...}, at: tcm_qla2xxx_close_session+0x57/0xb0 [tcm_qla2xxx] which would create a new lock dependency: (&(&ha->tgt.sess_lock)->rlock){-...} -> (&(&k->k_lock)->rlock){+.+.} but this new dependency connects a HARDIRQ-irq-safe lock: (&(&ha->tgt.sess_lock)->rlock){-...} ... which became HARDIRQ-irq-safe at: lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 qla2x00_fcport_event_handler+0x1f3d/0x22b0 [qla2xxx] qla2x00_async_login_sp_done+0x1dc/0x1f0 [qla2xxx] qla24xx_process_response_queue+0xa37/0x10e0 [qla2xxx] qla24xx_msix_rsp_q+0x79/0xf0 [qla2xxx] __handle_irq_event_percpu+0x79/0x3c0 handle_irq_event_percpu+0x70/0xf0 handle_irq_event+0x5a/0x8b handle_edge_irq+0x12c/0x310 handle_irq+0x192/0x20a do_IRQ+0x73/0x160 ret_from_intr+0x0/0x1d default_idle+0x23/0x1f0 arch_cpu_idle+0x15/0x20 default_idle_call+0x35/0x40 do_idle+0x2bb/0x2e0 cpu_startup_entry+0x1d/0x20 start_secondary+0x24d/0x2d0 secondary_startup_64+0xa4/0xb0 to a HARDIRQ-irq-unsafe lock: (&(&k->k_lock)->rlock){+.+.} ... which became HARDIRQ-irq-unsafe at: ... lock_acquire+0xe3/0x200 _raw_spin_lock+0x32/0x50 klist_add_tail+0x33/0xb0 device_add+0x7f4/0xb60 device_create_groups_vargs+0x11c/0x150 device_create_with_groups+0x89/0xb0 vtconsole_class_init+0xb2/0x124 do_one_initcall+0xc5/0x3ce kernel_init_freeable+0x295/0x32e kernel_init+0x11/0x11b ret_from_fork+0x3a/0x50 other info that might help us debug this: Possible interrupt unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&(&k->k_lock)->rlock); local_irq_disable(); lock(&(&ha->tgt.sess_lock)->rlock); lock(&(&k->k_lock)->rlock); <Interrupt> lock(&(&ha->tgt.sess_lock)->rlock); *** DEADLOCK *** 4 locks held by rmdir/1478: #0: 000000002c7f1ba4 (sb_writers#10){.+.+}, at: mnt_want_write+0x32/0x70 #1: 00000000c85eb147 (&default_group_class[depth - 1]#2/1){+.+.}, at: do_rmdir+0x217/0x2d0 #2: 000000002b164d6f (&sb->s_type->i_mutex_key#13){++++}, at: vfs_rmdir+0x7e/0x1d0 #3: 00000000cf0baf5e (&(&ha->tgt.sess_lock)->rlock){-...}, at: tcm_qla2xxx_close_session+0x57/0xb0 [tcm_qla2xxx] the dependencies between HARDIRQ-irq-safe lock and the holding lock: -> (&(&ha->tgt.sess_lock)->rlock){-...} ops: 127 { IN-HARDIRQ-W at: lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 qla2x00_fcport_event_handler+0x1f3d/0x22b0 [qla2xxx] qla2x00_async_login_sp_done+0x1dc/0x1f0 [qla2xxx] qla24xx_process_response_queue+0xa37/0x10e0 [qla2xxx] qla24xx_msix_rsp_q+0x79/0xf0 [qla2xxx] __handle_irq_event_percpu+0x79/0x3c0 handle_irq_event_percpu+0x70/0xf0 handle_irq_event+0x5a/0x8b handle_edge_irq+0x12c/0x310 handle_irq+0x192/0x20a do_IRQ+0x73/0x160 ret_from_intr+0x0/0x1d default_idle+0x23/0x1f0 arch_cpu_idle+0x15/0x20 default_idle_call+0x35/0x40 do_idle+0x2bb/0x2e0 cpu_startup_entry+0x1d/0x20 start_secondary+0x24d/0x2d0 secondary_startup_64+0xa4/0xb0 INITIAL USE at: lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 qla2x00_loop_resync+0xb3d/0x2690 [qla2xxx] qla2x00_do_dpc+0xcee/0xf30 [qla2xxx] kthread+0x1d2/0x1f0 ret_from_fork+0x3a/0x50 } ... 
key at: [<ffffffffa125f700>] __key.62804+0x0/0xfffffffffff7e900 [qla2xxx] ... acquired at: __lock_acquire+0x11ed/0x1b60 lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 klist_next+0x43/0x1d0 device_for_each_child+0x96/0x110 scsi_target_block+0x3c/0x40 [scsi_mod] fc_remote_port_delete+0xe7/0x1c0 [scsi_transport_fc] qla2x00_mark_device_lost+0x4d3/0x500 [qla2xxx] qlt_unreg_sess+0x104/0x2c0 [qla2xxx] tcm_qla2xxx_close_session+0xa2/0xb0 [tcm_qla2xxx] target_shutdown_sessions+0x17b/0x190 [target_core_mod] core_tpg_del_initiator_node_acl+0xf3/0x1f0 [target_core_mod] target_fabric_nacl_base_release+0x25/0x30 [target_core_mod] config_item_release+0x9f/0x120 [configfs] config_item_put+0x29/0x2b [configfs] configfs_rmdir+0x3d2/0x520 [configfs] vfs_rmdir+0xb3/0x1d0 do_rmdir+0x25c/0x2d0 __x64_sys_rmdir+0x24/0x30 do_syscall_64+0x77/0x220 entry_SYSCALL_64_after_hwframe+0x49/0xbe the dependencies between the lock to be acquired and HARDIRQ-irq-unsafe lock: -> (&(&k->k_lock)->rlock){+.+.} ops: 14568 { HARDIRQ-ON-W at: lock_acquire+0xe3/0x200 _raw_spin_lock+0x32/0x50 klist_add_tail+0x33/0xb0 device_add+0x7f4/0xb60 device_create_groups_vargs+0x11c/0x150 device_create_with_groups+0x89/0xb0 vtconsole_class_init+0xb2/0x124 do_one_initcall+0xc5/0x3ce kernel_init_freeable+0x295/0x32e kernel_init+0x11/0x11b ret_from_fork+0x3a/0x50 SOFTIRQ-ON-W at: lock_acquire+0xe3/0x200 _raw_spin_lock+0x32/0x50 klist_add_tail+0x33/0xb0 device_add+0x7f4/0xb60 device_create_groups_vargs+0x11c/0x150 device_create_with_groups+0x89/0xb0 vtconsole_class_init+0xb2/0x124 do_one_initcall+0xc5/0x3ce kernel_init_freeable+0x295/0x32e kernel_init+0x11/0x11b ret_from_fork+0x3a/0x50 INITIAL USE at: lock_acquire+0xe3/0x200 _raw_spin_lock+0x32/0x50 klist_add_tail+0x33/0xb0 device_add+0x7f4/0xb60 device_create_groups_vargs+0x11c/0x150 device_create_with_groups+0x89/0xb0 vtconsole_class_init+0xb2/0x124 do_one_initcall+0xc5/0x3ce kernel_init_freeable+0x295/0x32e kernel_init+0x11/0x11b ret_from_fork+0x3a/0x50 } ... key at: [<ffffffff83f3d900>] __key.15805+0x0/0x40 ... 
acquired at: __lock_acquire+0x11ed/0x1b60 lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 klist_next+0x43/0x1d0 device_for_each_child+0x96/0x110 scsi_target_block+0x3c/0x40 [scsi_mod] fc_remote_port_delete+0xe7/0x1c0 [scsi_transport_fc] qla2x00_mark_device_lost+0x4d3/0x500 [qla2xxx] qlt_unreg_sess+0x104/0x2c0 [qla2xxx] tcm_qla2xxx_close_session+0xa2/0xb0 [tcm_qla2xxx] target_shutdown_sessions+0x17b/0x190 [target_core_mod] core_tpg_del_initiator_node_acl+0xf3/0x1f0 [target_core_mod] target_fabric_nacl_base_release+0x25/0x30 [target_core_mod] config_item_release+0x9f/0x120 [configfs] config_item_put+0x29/0x2b [configfs] configfs_rmdir+0x3d2/0x520 [configfs] vfs_rmdir+0xb3/0x1d0 do_rmdir+0x25c/0x2d0 __x64_sys_rmdir+0x24/0x30 do_syscall_64+0x77/0x220 entry_SYSCALL_64_after_hwframe+0x49/0xbe stack backtrace: CPU: 7 PID: 1478 Comm: rmdir Tainted: G W 5.1.0-rc1-dbg+ #11 Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 Call Trace: dump_stack+0x86/0xca check_usage.cold.59+0x473/0x563 check_prev_add.constprop.43+0x1f1/0x1170 __lock_acquire+0x11ed/0x1b60 lock_acquire+0xe3/0x200 _raw_spin_lock_irqsave+0x3d/0x60 klist_next+0x43/0x1d0 device_for_each_child+0x96/0x110 scsi_target_block+0x3c/0x40 [scsi_mod] fc_remote_port_delete+0xe7/0x1c0 [scsi_transport_fc] qla2x00_mark_device_lost+0x4d3/0x500 [qla2xxx] qlt_unreg_sess+0x104/0x2c0 [qla2xxx] tcm_qla2xxx_close_session+0xa2/0xb0 [tcm_qla2xxx] target_shutdown_sessions+0x17b/0x190 [target_core_mod] core_tpg_del_initiator_node_acl+0xf3/0x1f0 [target_core_mod] target_fabric_nacl_base_release+0x25/0x30 [target_core_mod] config_item_release+0x9f/0x120 [configfs] config_item_put+0x29/0x2b [configfs] configfs_rmdir+0x3d2/0x520 [configfs] vfs_rmdir+0xb3/0x1d0 do_rmdir+0x25c/0x2d0 __x64_sys_rmdir+0x24/0x30 do_syscall_64+0x77/0x220 entry_SYSCALL_64_after_hwframe+0x49/0xbe Cc: Himanshu Madhani <hmadhani@marvell.com> Cc: Giridhar Malavali <gmalavali@marvell.com> Signed-off-by: NBart Van Assche <bvanassche@acm.org> Acked-by: NHimanshu Madhani <hmadhani@marvell.com> Signed-off-by: NMartin K. Petersen <martin.petersen@oracle.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Bart Van Assche
[ Upstream commit e209783d66bca04b5fce4429e59338517ffc1a0b ] Implementations of the .write_pending() callback functions must guarantee that an appropriate LIO core callback function will be called immediately or at a later time. Make sure that this guarantee is met for aborted SCSI commands. [mkp: typo] Cc: Himanshu Madhani <hmadhani@marvell.com> Cc: Giridhar Malavali <gmalavali@marvell.com> Fixes: 694833ee ("scsi: tcm_qla2xxx: Do not allow aborted cmd to advance.") # v4.13. Fixes: a07100e0 ("qla2xxx: Fix TMR ABORT interaction issue between qla2xxx and TCM") # v4.5. Signed-off-by: NBart Van Assche <bvanassche@acm.org> Acked-by: NHimanshu Madhani <hmadhani@marvell.com> Signed-off-by: NMartin K. Petersen <martin.petersen@oracle.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Bart Van Assche
[ Upstream commit 24afabdbd0b3553963a2bbf465895492b14d1107 ] Make sure that the allocated interrupts are freed if allocating memory for the msix_entries array fails. Cc: Himanshu Madhani <hmadhani@marvell.com> Cc: Giridhar Malavali <gmalavali@marvell.com> Signed-off-by: NBart Van Assche <bvanassche@acm.org> Acked-by: NHimanshu Madhani <hmadhani@marvell.com> Signed-off-by: NMartin K. Petersen <martin.petersen@oracle.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Committed by Viresh Kumar
[ Upstream commit 9a4f26cc98d81b67ecc23b890c28e2df324e29f3 ]

Currently the error return path from kobject_init_and_add() is not followed by a call to kobject_put() - which means we are leaking the kobject. Fix it by adding a call to kobject_put() in the error path of kobject_init_and_add().

Signed-off-by: Tobin C. Harding <tobin@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tobin C. Harding <tobin@kernel.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Link: http://lkml.kernel.org/r/20190430001144.24890-1-tobin@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
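A sketch of the documented error-handling rule for kobject_init_and_add(): once init has run, the kobject must be released with kobject_put() even when the add step fails, otherwise its memory leaks. The wrapper function and name below are illustrative.

    #include <linux/kobject.h>

    static int add_example_kobject(struct kobject *kobj,
                                   struct kobj_type *ktype,
                                   struct kobject *parent)
    {
            int ret = kobject_init_and_add(kobj, ktype, parent, "%s", "example");

            if (ret)
                    kobject_put(kobj);  /* drops the reference taken by init */
            return ret;
    }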
-
Committed by Nicholas Piggin
[ Upstream commit 7ae3f6e130e8dc6188b59e3b4ebc2f16e9c8d053 ] Using a jiffies timer creates a dependency on the tick_do_timer_cpu incrementing jiffies. If that CPU has locked up and jiffies is not incrementing, the watchdog heartbeat timer for all CPUs stops and creates false positives and confusing warnings on local CPUs, and also causes the SMP detector to stop, so the root cause is never detected. Fix this by using hrtimer based timers for the watchdog heartbeat, like the generic kernel hardlockup detector. Cc: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Reported-by: NRavikumar Bangoria <ravi.bangoria@in.ibm.com> Signed-off-by: NNicholas Piggin <npiggin@gmail.com> Tested-by: NRavi Bangoria <ravi.bangoria@linux.ibm.com> Reported-by: NRavi Bangoria <ravi.bangoria@linux.ibm.com> Reviewed-by: NGautham R. Shenoy <ego@linux.vnet.ibm.com> Signed-off-by: NMichael Ellerman <mpe@ellerman.id.au> Signed-off-by: NSasha Levin <sashal@kernel.org>
-