- 26 July 2019, 10 commits
-
-
Submitted by Wen Gong

[ Upstream commit 49ed34b835e231aa941257394716bc689bc98d9f ]

For some SDIO chips, the peer id reported for an MPDU with error status is 65535, and test_bit() then reads past the peer's peer_ids bitmap; with KASAN enabled this is reported as an out-of-bounds access.

The reason is that while a station is disconnecting, the firmware does not delete the peer info because the station is not yet fully disconnected. Meanwhile some APs keep sending data packets to the station, the hardware receives them and passes them to the firmware, and the firmware's logic reports a peer id of 65535 for MPDUs with error status.

Add a check against the size of the peer's peer_ids bitmap to avoid the out-of-bounds access.

Call trace with KASAN:

  dump_backtrace+0x0/0x2ec
  show_stack+0x20/0x2c
  __dump_stack+0x20/0x28
  dump_stack+0xc8/0xec
  print_address_description+0x74/0x240
  kasan_report+0x250/0x26c
  __asan_report_load8_noabort+0x20/0x2c
  ath10k_peer_find_by_id+0x180/0x1e4 [ath10k_core]
  ath10k_htt_t2h_msg_handler+0x100c/0x2fd4 [ath10k_core]
  ath10k_htt_htc_t2h_msg_handler+0x20/0x34 [ath10k_core]
  ath10k_sdio_irq_handler+0xcc8/0x1678 [ath10k_sdio]
  process_sdio_pending_irqs+0xec/0x370
  sdio_run_irqs+0x68/0xe4
  sdio_irq_work+0x1c/0x28
  process_one_work+0x3d8/0x8b0
  worker_thread+0x508/0x7cc
  kthread+0x24c/0x264
  ret_from_fork+0x10/0x18

Tested with QCA6174 SDIO with firmware WLAN.RMH.4.4.1-00007-QCARMSWP-1.

Signed-off-by: Wen Gong <wgong@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
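For illustration, a minimal C sketch of the kind of guard described above. The constant, struct layout and helper name are assumptions made for the example, not the upstream diff; only the test_bit()-on-peer_ids pattern and the peer id of 65535 come from the commit message.

    #include <linux/bitops.h>
    #include <linux/types.h>

    #define EXAMPLE_MAX_PEER_IDS 512    /* assumed bitmap size, not the driver's */

    struct example_peer {
        unsigned long peer_ids[BITS_TO_LONGS(EXAMPLE_MAX_PEER_IDS)];
    };

    static bool example_peer_owns_id(const struct example_peer *peer, u16 peer_id)
    {
        /* Firmware can report peer_id == 65535 for MPDUs with error status;
         * reject anything beyond the bitmap instead of letting test_bit()
         * read out of bounds. */
        if (peer_id >= EXAMPLE_MAX_PEER_IDS)
            return false;

        return test_bit(peer_id, peer->peer_ids);
    }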
-
Submitted by Dan Carpenter

[ Upstream commit 5d6751eaff672ea77642e74e92e6c0ac7f9709ab ]

The "ev->traffic_class" and "reply->ac" variables come from the network and are used as an offset into the wmi->stream_exist_for_ac[] array. Those variables are u8, so they can be 0-255, but the stream_exist_for_ac[] array only has WMM_NUM_AC (4) elements. We need to add a couple of bounds checks to prevent array overflows.

I also modified one existing check from "if (traffic_class > 3) {" to "if (traffic_class >= WMM_NUM_AC) {" just to make them all consistent.

Fixes: bdcd8170 ("Add ath6kl cleaned up driver")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
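A minimal C sketch of this kind of bounds check; WMM_NUM_AC and the array name come from the commit message, while the surrounding struct and function are made up for the example.

    #include <linux/errno.h>
    #include <linux/types.h>

    #define WMM_NUM_AC 4

    struct example_wmi {
        u8 stream_exist_for_ac[WMM_NUM_AC];
    };

    static int example_set_stream(struct example_wmi *wmi, u8 traffic_class)
    {
        /* traffic_class comes from the network and can be 0-255; reject
         * anything outside the 4-entry array before indexing it. */
        if (traffic_class >= WMM_NUM_AC)
            return -EINVAL;

        wmi->stream_exist_for_ac[traffic_class] = 1;
        return 0;
    }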
-
Submitted by Tim Schumacher

[ Upstream commit 2f90c7e5d09437a4d8d5546feaae9f1cf48cfbe1 ]

Right now, if an error is encountered during the SREV register read (i.e. an EIO in ath9k_regread()), that error code gets passed all the way to __ath9k_hw_init(), where it is only visible as part of the "Chip rev not supported" message:

  ath9k_htc 1-1.4:1.0: ath9k_htc: HTC initialized with 33 credits
  ath: phy2: Mac Chip Rev 0x0f.3 is not supported by this driver
  ath: phy2: Unable to initialize hardware; initialization status: -95
  ath: phy2: Unable to initialize hardware; initialization status: -95
  ath9k_htc: Failed to initialize the device

Check for -EIO explicitly in ath9k_hw_read_revisions() and return a boolean based on the success of the operation. Check for that in __ath9k_hw_init() and abort with a more debugging-friendly message if reading the revisions wasn't successful:

  ath9k_htc 1-1.4:1.0: ath9k_htc: HTC initialized with 33 credits
  ath: phy2: Failed to read SREV register
  ath: phy2: Could not read hardware revision
  ath: phy2: Unable to initialize hardware; initialization status: -95
  ath: phy2: Unable to initialize hardware; initialization status: -95
  ath9k_htc: Failed to initialize the device

This helps when debugging by directly showing the first point of failure, and it could prevent possible errors if a 0x0f.3 revision is ever supported.

Signed-off-by: Tim Schumacher <timschumi@gmx.de>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Surabhi Vishnoi

[ Upstream commit 97354f2c432788e3163134df6bb144f4b6289d87 ]

Currently mac80211 does not support a probe response template for mesh points. When WMI_SERVICE_BEACON_OFFLOAD is enabled, the host driver tries to configure the probe response template for mesh, but it fails because the interface type is not NL80211_IFTYPE_AP but NL80211_IFTYPE_MESH_POINT.

To avoid this failure, skip sending the probe response template to firmware for mesh points.

Tested HW: WCN3990/QCA6174/QCA9984

Signed-off-by: Surabhi Vishnoi <svishnoi@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
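The skip boils down to an interface-type check; a hedged sketch with an invented helper name (only NL80211_IFTYPE_MESH_POINT comes from the message):

    #include <net/cfg80211.h>

    static bool example_skip_probe_resp_template(enum nl80211_iftype type)
    {
        /* mac80211 provides no probe response template for mesh points,
         * so there is nothing to offload to firmware in that case. */
        return type == NL80211_IFTYPE_MESH_POINT;
    }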
-
Submitted by Gustavo A. R. Silva

[ Upstream commit bfabdd6997323adbedccb13a3fed1967fb8cf8f5 ]

Notice that *rc* can evaluate to up to 5; from include/linux/netdevice.h:

  enum gro_result {
          GRO_MERGED,
          GRO_MERGED_FREE,
          GRO_HELD,
          GRO_NORMAL,
          GRO_DROP,
          GRO_CONSUMED,
  };
  typedef enum gro_result gro_result_t;

In case *rc* evaluates to 5, we end up having an out-of-bounds read at drivers/net/wireless/ath/wil6210/txrx.c:821:

  wil_dbg_txrx(wil, "Rx complete %d bytes => %s\n", len, gro_res_str[rc]);

Fix this by adding element "GRO_CONSUMED" to array gro_res_str.

Addresses-Coverity-ID: 1444666 ("Out-of-bounds read")
Fixes: 194b482b ("wil6210: Debug print GRO Rx result")
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Reviewed-by: Maya Erez <merez@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
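The shape of the fix, sketched with designated initializers so that every gro_result_t value has a string; the actual wil6210 array may well be a plain positional initializer.

    #include <linux/netdevice.h>

    static const char * const gro_res_str[] = {
        [GRO_MERGED]      = "GRO_MERGED",
        [GRO_MERGED_FREE] = "GRO_MERGED_FREE",
        [GRO_HELD]        = "GRO_HELD",
        [GRO_NORMAL]      = "GRO_NORMAL",
        [GRO_DROP]        = "GRO_DROP",
        [GRO_CONSUMED]    = "GRO_CONSUMED",  /* the previously missing entry */
    };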
-
Submitted by Sven Van Asbroeck

[ Upstream commit 2b8066c3deb9140fdf258417a51479b2aeaa7622 ]

If probe() fails anywhere beyond the point where sdma_get_firmware() is called, a kernel oops may occur.

Problematic sequence of events:

  1. probe() calls sdma_get_firmware(), which schedules the firmware callback
     to run when firmware becomes available, using the sdma instance structure
     as the context
  2. probe() encounters an error, which deallocates the sdma instance structure
  3. firmware becomes available, the firmware callback is called with the
     deallocated sdma instance structure
  4. use after free - kernel oops!

Solution: only attempt to load firmware when we're certain that probe() will succeed. This guarantees that the firmware callback's context will remain valid.

Note that the remove() path is unaffected by this issue: the firmware loader increments the driver module's use count, ensuring that the module cannot be unloaded while the firmware callback is pending or running.

Signed-off-by: Sven Van Asbroeck <TheSven73@gmail.com>
Reviewed-by: Robin Gong <yibin.gong@nxp.com>
[vkoul: fixed braces for if condition]
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Maurizio Lombardi

[ Upstream commit 5dd6c49339126c2c8df2179041373222362d6e49 ]

If the CHAP_A value is not supported, the chap_server_open() function should free the auth_protocol pointer and set it to NULL, otherwise we leave a dangling pointer around.

  [ 66.010905] Unsupported CHAP_A value
  [ 66.011660] Security negotiation failed.
  [ 66.012443] iSCSI Login negotiation failed.
  [ 68.413924] general protection fault: 0000 [#1] SMP PTI
  [ 68.414962] CPU: 0 PID: 1562 Comm: targetcli Kdump: loaded Not tainted 4.18.0-80.el8.x86_64 #1
  [ 68.416589] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
  [ 68.417677] RIP: 0010:__kmalloc_track_caller+0xc2/0x210

Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Reviewed-by: Chris Leech <cleech@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
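A minimal sketch of the cleanup pattern; the struct and function names are invented, and only the free-and-NULL of auth_protocol reflects the description above.

    #include <linux/printk.h>
    #include <linux/slab.h>

    struct example_conn {
        void *auth_protocol;
    };

    static void example_reject_chap_a(struct example_conn *conn)
    {
        pr_err("Unsupported CHAP_A value\n");
        /* Free the half-initialized state and clear the pointer so no
         * dangling reference survives the failed negotiation. */
        kfree(conn->auth_protocol);
        conn->auth_protocol = NULL;
    }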
-
Submitted by Nathan Chancellor

[ Upstream commit aa69fb62bea15126e744af2e02acc0d6cf3ed4da ]

After r363059 and r363928 in LLVM, a build using ld.lld as the linker with CONFIG_RANDOMIZE_BASE enabled fails like so:

  ld.lld: error: relocation R_AARCH64_ABS32 cannot be used against symbol __efistub_stext_offset; recompile with -fPIC

Fangrui and Peter figured out that ld.lld is incorrectly considering __efistub_stext_offset as a relative symbol because of the order in which symbols are evaluated. _text is treated as an absolute symbol and stext is a relative symbol, making __efistub_stext_offset a relative symbol. Adding ABSOLUTE will force ld.lld to evaluate this expression in the right context and does not change ld.bfd's behavior.

ld.lld will need to be fixed, but the developers do not see a quick or simple fix without some research (see the linked issue for further explanation). Add this simple workaround so that ld.lld can continue to link kernels.

Link: https://github.com/ClangBuiltLinux/linux/issues/561
Link: https://github.com/llvm/llvm-project/commit/025a815d75d2356f2944136269aa5874721ec236
Link: https://github.com/llvm/llvm-project/commit/249fde85832c33f8b06c6b4ac65d1c4b96d23b83
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Debugged-by: Fangrui Song <maskray@google.com>
Debugged-by: Peter Smith <peter.smith@linaro.org>
Suggested-by: Fangrui Song <maskray@google.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
[will: add comment]
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Kevin Darbyshire-Bryant

[ Upstream commit 1196364f21ffe5d1e6d83cafd6a2edb89404a3ae ]

calc_vmlinuz_load_addr.c requires SZ_64K to be defined for alignment purposes. It included "../../../../include/linux/sizes.h" to define that size; however, "sizes.h" tries to include <linux/const.h>, which assumes Linux system headers. These may not exist, e.g. the following error was encountered when building Linux for OpenWrt under macOS:

  In file included from arch/mips/boot/compressed/calc_vmlinuz_load_addr.c:16:
  arch/mips/boot/compressed/../../../../include/linux/sizes.h:11:10: fatal error: 'linux/const.h' file not found
  ^~~~~~~~~~

Change the Makefile to force building against the local Linux headers instead of system headers. Also change the eye-watering relative reference in the include file spec.

Thanks to Jo-Philipp Wich & Petr Štetiar for assistance in tracking this down & fixing it.

Suggested-by: Jo-Philipp Wich <jo@mein.io>
Signed-off-by: Petr Štetiar <ynezz@true.cz>
Signed-off-by: Kevin Darbyshire-Bryant <ldir@darbyshire-bryant.me.uk>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Stefan Hellermann

[ Upstream commit db13a5ba2732755cf13320f3987b77cf2a71e790 ]

While trying to get the UART working with parity, I found that setting even parity actually enabled odd parity instead. Fix the register settings to match the datasheet of the AR9331.

A similar patch was created by 8devices, but not sent upstream:
https://github.com/8devices/openwrt-8devices/commit/77c5586ade3bb72cda010afad3f209ed0c98ea7c

Signed-off-by: Stefan Hellermann <stefan@the2masters.de>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 21 July 2019, 30 commits
-
-
Submitted by Greg Kroah-Hartman
-
Submitted by Jiri Slaby

[ Upstream commit 1cbec37b3f9cff074a67bef4fc34b30a09958a0a ]

common_spurious is currently ENDed erroneously: common_interrupt is used in its ENDPROC. Fix this mistake.

Found by my asm macros rewrite patchset.

Fixes: f8a8fe61fec8 ("x86/irq: Seperate unused system vectors from spurious entry again")
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190709063402.19847-1-jslaby@suse.cz
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Dave Airlie

commit 6ecac85eadb9d4065b9038fa3d3c66d49038e14b upstream.

This should help with some of the lifetime issues, and move us away from load/unload.

Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190405031715.5959-4-airlied@gmail.com
Signed-off-by: Ross Zwisler <zwisler@google.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Thomas Zimmermann

commit ac3b35f11a06964f5fe7f6ea9a190a28a7994704 upstream.

This patch unifies the naming of DRM functions for reference counting of struct drm_device. The resulting code is more aligned with the rest of the Linux kernel interfaces.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20180926120212.25359-1-tzimmermann@suse.de
Signed-off-by: Ross Zwisler <zwisler@google.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Dave Airlie

commit fd96e0dba19c53c2d66f2a398716bb74df8ca85e upstream.

This just makes it easier to later embed drm into udl.

Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190405031715.5959-3-airlied@gmail.com
Signed-off-by: Ross Zwisler <zwisler@google.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Mark Zhang

commit 7151449fe7fa5962c6153355f9779d6be99e8e97 upstream.

If the client has not provided the mask base register, then do not write into the mask register.

Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Jinyoung Park <jinyoungp@nvidia.com>
Signed-off-by: Venkat Reddy Talla <vreddytalla@nvidia.com>
Signed-off-by: Mark Zhang <markz@nvidia.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Haren Myneni

commit e52d484d9869eb291140545746ccbe5ffc7c9306 upstream.

The system gets a checkstop if the RxFIFO overruns with more requests than the maximum possible number of CRBs allowed in the FIFO at the same time. The maximum number of requests per window is controlled by the window credits, so derive the maximum number of CRBs from the FIFO size and set that as the receive window credits.

Fixes: b0d6c9ba ("crypto/nx: Add P9 NX support for 842 compression engine")
CC: stable@vger.kernel.org # v4.14+
Signed-off-by: Haren Myneni <haren@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Submitted by Christophe Leroy

commit 58cdbc6d2263beb36954408522762bbe73169306 upstream.

On SEC1, hashing produces a wrong result when performed in several steps and the input data SG list has more than one element. This was detected with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS:

  [ 44.185947] alg: hash: md5-talitos test failed (wrong result) on test vector 6, cfg="random: may_sleep use_finup src_divs=[<reimport>25.88%@+8063, <flush>24.19%@+9588, 28.63%@+16333, <reimport>4.60%@+6756, 16.70%@+16281] dst_divs=[71.61%@alignmask+16361, 14.36%@+7756, 14.3%@+"
  [ 44.325122] alg: hash: sha1-talitos test failed (wrong result) on test vector 3, cfg="random: inplace use_final src_divs=[<flush,nosimd>16.56%@+16378, <reimport>52.0%@+16329, 21.42%@alignmask+16380, 10.2%@alignmask+16380] iv_offset=39"
  [ 44.493500] alg: hash: sha224-talitos test failed (wrong result) on test vector 4, cfg="random: use_final nosimd src_divs=[<reimport>52.27%@+7401, <reimport>17.34%@+16285, <flush>17.71%@+26, 12.68%@+10644] iv_offset=43"
  [ 44.673262] alg: hash: sha256-talitos test failed (wrong result) on test vector 4, cfg="random: may_sleep use_finup src_divs=[<reimport>60.6%@+12790, 17.86%@+1329, <reimport>12.64%@alignmask+16300, 8.29%@+15, 0.40%@+13506, <reimport>0.51%@+16322, <reimport>0.24%@+16339] dst_divs"

This is due to two issues:
- There is an overlap between the buffer used for copying the input data (SEC1 doesn't do scatter/gather) and the chained descriptor.
- The data copy is wrong when the previous hash left less than one blocksize of data to hash, implying that the previous block has to be completed with a few bytes from the new request.

Fix it by:
- Moving the second descriptor after the buffer, as moving the buffer after the descriptor would make it more complex for other cipher operations (AEAD, ABLKCIPHER).
- Skipping the bytes taken from the new request to complete the previous one by moving the SG list forward.

Fixes: 37b5e889 ("crypto: talitos - chain in buffered data for ahash on SEC1")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Christophe Leroy

commit d44769e4ccb636e8238adbc151f25467a536711b upstream.

Move struct talitos_edesc into talitos.h so that it can be used from any place in talitos.c. It will be required for the next patch ("crypto: talitos - fix hash on SEC1").

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Julian Wiedmann

commit ac6639cd3db607d386616487902b4cc1850a7be5 upstream.

The current code sets the dsci to 0x00000080, which doesn't make any sense, as the indicator area is located in the _left-most_ byte.

Worse: if the dsci is the _shared_ indicator, this potentially clears the indication of activity for a _different_ device. tiqdio_thinint_handler() will then have no reason to call that device's IRQ handler, and the device ends up stalling.

Fixes: d0c9d4a8 ("[S390] qdio: set correct bit in dsci")
Cc: <stable@vger.kernel.org>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Julian Wiedmann

commit e54e4785cb5cb4896cf4285964aeef2125612fb2 upstream.

When tiqdio_remove_input_queues() removes a queue from the tiq_list as part of qdio_shutdown(), it doesn't re-initialize the queue's list entry, so the prev/next pointers go stale.

If a subsequent qdio_establish() fails while sending the ESTABLISH cmd, it calls qdio_shutdown() again in QDIO_IRQ_STATE_ERR state, and tiqdio_remove_input_queues() will attempt to remove the queue entry a second time. This dereferences the stale pointers, and bad things ensue.

Fix this by re-initializing the list entry after removing it from the list. For good practice, also initialize the list entry when the queue is first allocated, and remove the quirky checks that papered over this omission.

Note that prior to commit e5218134 ("s390/qdio: fix access to uninitialized qdio_q fields"), these checks were bogus anyway. setup_queues_misc() clears the whole queue struct, and thus needs to re-init the prev/next pointers as well.

Fixes: 779e6e1c ("[S390] qdio: new qdio driver.")
Cc: <stable@vger.kernel.org>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
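A generic sketch of the re-initialization pattern described here, using list_del_init() so that a second removal attempt is harmless; the names are illustrative, not the qdio code.

    #include <linux/list.h>

    struct example_queue {
        struct list_head entry;
    };

    static void example_queue_init(struct example_queue *q)
    {
        /* Give the entry a valid (empty) state right at allocation time. */
        INIT_LIST_HEAD(&q->entry);
    }

    static void example_queue_remove(struct example_queue *q)
    {
        /* list_del_init() leaves the entry pointing at itself, so calling
         * this again from a second shutdown path dereferences valid
         * pointers instead of stale ones. */
        list_del_init(&q->entry);
    }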
-
Submitted by Heiko Carstens

commit 4f18d869ffd056c7858f3d617c71345cf19be008 upstream.

The stfle inline assembly returns the number of double words written (condition code 0), or the number of double words it would have written (condition code 3) if the memory array it got as a parameter had been large enough. The current stfle implementation assumes that the array is always large enough and clears those parts of the array that have not been written to with a subsequent memset call.

If, however, the array is not large enough, memset will get a negative length parameter, which means that memset clears memory until it gets an exception and the kernel crashes.

To fix this, simply limit the maximum length. Also move the inline assembly to an extra function to avoid clobbering of register 0, which might happen because of the added min_t invocation together with code instrumentation.

The bug was introduced with commit 14375bc4 ("[S390] cleanup facility list handling") but was rather harmless, since it would only write to a rather large array. It became a potential problem with commit 3ab121ab ("[S390] kernel: Add z/VM LGR detection"). Since then it writes to an array with only four double words, while some machines already deliver three double words. As soon as machines have a facility bit within the fifth double word, a crash on IPL would happen.

Fixes: 14375bc4 ("[S390] cleanup facility list handling")
Cc: <stable@vger.kernel.org> # v2.6.37+
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
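A hedged C sketch of the clamping idea, assuming the facility list is handled as an array of double words; names and types are illustrative, not the s390 implementation.

    #include <linux/kernel.h>
    #include <linux/string.h>
    #include <linux/types.h>

    static void example_trim_facility_list(u64 *list, int array_dw, int reported_dw)
    {
        /* stfle may report more double words than the caller's array holds
         * (condition code 3); clamp before computing the tail to clear,
         * otherwise the memset length goes negative and wraps to a huge
         * size_t. */
        int written = min_t(int, reported_dw, array_dw);

        memset(list + written, 0, (array_dw - written) * sizeof(*list));
    }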
-
Submitted by Arnd Bergmann

commit fd5de2721ea7d16e2b16c4049ac49f229551b290 upstream.

As kernelci.org reports, this function is not used in vdk_hs38_defconfig:

  arch/arc/kernel/unwind.c:188:14: warning: 'unw_hdr_alloc' defined but not used [-Wunused-function]

Fixes: bc79c9a7 ("ARC: dw2 unwind: Reinstante unwinding out of modules")
Link: https://kernelci.org/build/id/5d1cae3f59b514300340c132/logs/
Cc: stable@vger.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Thomas Gleixner

commit f8a8fe61fec8006575699559ead88b0b833d5cad upstream

Quite some time ago the interrupt entry stubs for unused vectors in the system vector range got removed and directly mapped to the spurious interrupt vector entry point. Sounds reasonable, but it's subtly broken. The spurious interrupt vector entry point pushes vector number 0xFF on the stack, which makes the whole logic in __smp_spurious_interrupt() pointless.

As a consequence any spurious interrupt which comes from a vector != 0xFF is treated as a real spurious interrupt (vector 0xFF) and not acknowledged. That subsequently stalls all interrupt vectors of equal and lower priority, which brings the system to a grinding halt.

This can happen because even on 64-bit the system vector space is not guaranteed to be fully populated. A full compile-time handling of the unused vectors is not possible because quite some of them are conditionally populated at runtime.

Bring the entry stubs back, which wastes 160 bytes if all stubs are unused, but gains the proper handling back. There is no point in selectively sparing some of the stubs which are known at compile time, as the required code in the IDT management would be way larger and more convoluted.

Do not route the spurious entries through common_interrupt and do_IRQ() as the original code did. Route them to smp_spurious_interrupt(), which evaluates the vector number and acts accordingly now that the real vector numbers are handed in.

Fix up the pr_warn so the actual spurious vector (0xff) is clearly distinguished from the other vectors, and also note for the vectored case whether it was pending in the ISR or not.

  "Spurious APIC interrupt (vector 0xFF) on CPU#0, should never happen."
  "Spurious interrupt vector 0xed on CPU#1. Acked."
  "Spurious interrupt vector 0xee on CPU#1. Not pending!."

Fixes: 2414e021 ("x86: Avoid building unused IRQ entry stubs")
Reported-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Jan Beulich <jbeulich@suse.com>
Link: https://lkml.kernel.org/r/20190628111440.550568228@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Thomas Gleixner

commit b7107a67f0d125459fe41f86e8079afd1a5e0b15 upstream

Since the rework of the vector management, warnings about spurious interrupts have been reported. Robert provided some more information and did an initial analysis. The following situation leads to these warnings:

  CPU 0                        CPU 1
                               IO_APIC interrupt is raised
                               sent to CPU1
                               Unable to handle immediately
                               (interrupts off, deep idle delay)
  mask()
  ...
  free()
    shutdown()
    synchronize_irq()
    clear_vector()
                               do_IRQ()
                                 -> vector is clear

Before the rework the vector entries of legacy interrupts were statically assigned and occupied precious vector space while most of them were unused. Due to that, the above situation was handled silently because the vector was handled and the core handler of the assigned interrupt descriptor noticed that it is shut down and returned.

While this has usually been observed with legacy interrupts, the situation is not limited to them. Any other interrupt source, e.g. MSI, can cause the same issue.

After adding proper synchronization for level-triggered interrupts, this can only happen for edge-triggered interrupts, where the IO-APIC obviously cannot provide information about interrupts in flight.

While the spurious warning is actually harmless in this case, it worries users and driver developers. Handle it gracefully by marking the vector entry as VECTOR_SHUTDOWN instead of VECTOR_UNUSED when the vector is freed up.

If the above late handling happens, the spurious detector will not complain and switches the entry to VECTOR_UNUSED. Any subsequent spurious interrupt on that line will trigger the spurious warning as before.

Fixes: 464d1230 ("x86/vector: Switch IOAPIC to global reservation mode")
Reported-by: Robert Hodaszi <Robert.Hodaszi@digi.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Robert Hodaszi <Robert.Hodaszi@digi.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Link: https://lkml.kernel.org/r/20190628111440.459647741@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Thomas Gleixner

commit dfe0cf8b51b07e56ded571e3de0a4a9382517231 upstream

When an interrupt is shut down in free_irq() there might be an in-flight interrupt pending in the IO-APIC remote IRR which is not yet serviced. That means the interrupt has been sent to the target CPU's local APIC, but the target CPU is in a state which delays servicing.

So free_irq() would proceed to free resources and to clear the vector, because synchronize_hardirq() does not see an interrupt handler in progress.

That can trigger a spurious interrupt warning, which is harmless and just confuses users, but it can also leave the remote IRR in a stale state, because once the handler is invoked the interrupt resources might already be freed and therefore acknowledgement is not possible anymore.

Implement the irq_get_irqchip_state() callback for the IO-APIC irq chip. The callback is invoked from free_irq() via __synchronize_hardirq(). Check the remote IRR bit of the interrupt and return 'in flight' if it is set and the interrupt is configured in level mode. For edge mode the remote IRR has no meaning.

As this is only meaningful for level-triggered interrupts, this won't cure the potential spurious interrupt warning for edge-triggered interrupts, but the edge-trigger case does not result in stale hardware state. This has to be addressed at the vector/interrupt entry level separately.

Fixes: 464d1230 ("x86/vector: Switch IOAPIC to global reservation mode")
Reported-by: Robert Hodaszi <Robert.Hodaszi@digi.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Link: https://lkml.kernel.org/r/20190628111440.370295517@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Thomas Gleixner

commit 62e0468650c30f0298822c580f382b16328119f6 upstream

free_irq() ensures that no hardware interrupt handler is executing on a different CPU before actually releasing resources and deactivating the interrupt completely in a domain hierarchy. But that does not catch the case where the interrupt is in flight at the hardware level but not yet serviced by the target CPU. That creates an interesting race condition:

  CPU 0                        CPU 1
                               IRQ CHIP interrupt is raised
                               sent to CPU1
                               Unable to handle immediately
                               (interrupts off, deep idle delay)
  mask()
  ...
  free()
    shutdown()
    synchronize_irq()
    release_resources()
                               do_IRQ()
                                 -> resources are not available

That might be harmless and just trigger a spurious interrupt warning, but some interrupt chips might get into a wedged state.

Utilize the existing irq_get_irqchip_state() callback for the synchronization in free_irq().

synchronize_hardirq() is not using this mechanism as it might actually deadlock under certain conditions, e.g. when called with interrupts disabled and the target CPU is the one on which the synchronization is invoked. synchronize_irq() uses it because that function cannot be called from non-preemptible contexts as it might sleep.

No functional change intended, and according to Marc the existing GIC implementations where the driver supports the callback should be able to cope with that core change. Famous last words.

Fixes: 464d1230 ("x86/vector: Switch IOAPIC to global reservation mode")
Reported-by: Robert Hodaszi <Robert.Hodaszi@digi.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Link: https://lkml.kernel.org/r/20190628111440.279463375@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Thomas Gleixner

commit 1d21f2af8571c6a6a44e7c1911780614847b0253 upstream

The function might sleep, so it cannot be called from interrupt context. Not even with care.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Link: https://lkml.kernel.org/r/20190628111440.189241552@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Thomas Gleixner

commit 4001d8e8762f57d418b66e4e668601791900a1dd upstream

When interrupts are shut down, they are immediately deactivated in the irqdomain hierarchy. While this looks obviously correct, there is a subtle issue: there might be an interrupt in flight when free_irq() invokes the shutdown. This is properly handled at the irq descriptor / primary handler level, but the deactivation might completely disable resources which are required to acknowledge the interrupt.

Split the shutdown code and deactivate the interrupt after synchronization in free_irq(). Fix up all other usage sites where this is not an issue to invoke the combined shutdown_and_deactivate() function instead.

This still might be an issue if the servicing of an interrupt in flight is delayed on a remote CPU beyond the invocation of synchronize_irq(), but that cannot be handled at that level and needs to be handled in the synchronize_irq() context.

Fixes: f8264e34 ("irqdomain: Introduce new interfaces to support hierarchy irqdomains")
Reported-by: Robert Hodaszi <Robert.Hodaszi@digi.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Link: https://lkml.kernel.org/r/20190628111440.098196390@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Vinod Koul

[ Upstream commit 8f9fab480c7a87b10bb5440b5555f370272a5d59 ]

DIV_ROUND_UP_ULL adds the two arguments and then invokes DIV_ROUND_DOWN_ULL. But on a 32-bit system the addition of two 32-bit values can overflow. DIV_ROUND_DOWN_ULL does it correctly and stashes the addition into an unsigned long long, so cast the result to unsigned long long here to avoid the overflow condition.

[akpm@linux-foundation.org: DIV_ROUND_UP_ULL must be an rval]

Link: http://lkml.kernel.org/r/20190625100518.30753-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
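A standalone C demonstration of the 32-bit wrap-around (not the kernel macro itself); widening the addend before the rounding addition is what avoids it.

    #include <stdint.h>
    #include <stdio.h>

    /* Rounding-up division: the "bad" variant adds in 32-bit arithmetic and
     * can wrap, the "fixed" variant widens to 64 bits first. */
    #define DIV_ROUND_UP_BAD(ll, d)   (((ll) + (d) - 1) / (d))
    #define DIV_ROUND_UP_FIXED(ll, d) (((unsigned long long)(ll) + (d) - 1) / (d))

    int main(void)
    {
        uint32_t ll = 0xFFFFFFF0u, d = 0x100u;

        /* Prints 0 (wrapped) vs. 16777216 (correct). */
        printf("bad:   %llu\n", (unsigned long long)DIV_ROUND_UP_BAD(ll, d));
        printf("fixed: %llu\n", DIV_ROUND_UP_FIXED(ll, d));
        return 0;
    }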
-
Submitted by Nicolas Boichat

[ Upstream commit 9d957a959bc8c3dfe37572ac8e99affb5a885965 ]

During suspend/resume, mtk_eint_mask may be called while wake_mask is active. For example, this happens if a wake source with an active interrupt handler wakes the system: irq/pm.c:irq_pm_check_wakeup would disable the interrupt so that it can be handled later on in the resume flow. However, this may happen before mtk_eint_do_resume is called: in that case, wake_mask is loaded and cur_mask is restored from an older copy, re-enabling the interrupt and causing an interrupt storm (especially for level interrupts).

Step by step, for a line that has both wake and interrupt enabled:

  1. cur_mask[irq] = 1; wake_mask[irq] = 1; EINT_EN[irq] = 1
     (interrupt enabled at hardware level)
  2. System suspends, then resumes due to that line (at this stage
     EINT_EN == wake_mask)
  3. irq_pm_check_wakeup is called and disables the interrupt
     => EINT_EN[irq] = 0, but we still have cur_mask[irq] = 1
  4. mtk_eint_do_resume is called and restores EINT_EN = cur_mask, so it
     re-enables EINT_EN[irq] = 1 => interrupt storm, as the driver is not
     yet ready to handle the interrupt

This patch fixes the issue in step 3 by recording all mask/unmask changes in cur_mask. This also avoids the need to read the current mask in eint_do_suspend, so the mtk_eint_chip_read_mask function can be removed. The interrupt will be re-enabled properly later on, sometimes after mtk_eint_do_resume, when the driver is ready to handle it.

Fixes: 58a5e1b6 ("pinctrl: mediatek: Implement wake handler and suspend resume")
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
Acked-by: Sean Wang <sean.wang@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Eiichi Tsukata

[ Upstream commit 33d4a5a7a5b4d02915d765064b2319e90a11cbde ]

Setting an invalid value to /sys/devices/system/cpu/cpuX/hotplug/fail can control the `struct cpuhp_step *sp` address, resulting in the following global-out-of-bounds read.

Reproducer:

  # echo -2 > /sys/devices/system/cpu/cpu0/hotplug/fail

KASAN report:

  BUG: KASAN: global-out-of-bounds in write_cpuhp_fail+0x2cd/0x2e0
  Read of size 8 at addr ffffffff89734438 by task bash/1941

  CPU: 0 PID: 1941 Comm: bash Not tainted 5.2.0-rc6+ #31
  Call Trace:
    write_cpuhp_fail+0x2cd/0x2e0
    dev_attr_store+0x58/0x80
    sysfs_kf_write+0x13d/0x1a0
    kernfs_fop_write+0x2bc/0x460
    vfs_write+0x1e1/0x560
    ksys_write+0x126/0x250
    do_syscall_64+0xc1/0x390
    entry_SYSCALL_64_after_hwframe+0x49/0xbe
  RIP: 0033:0x7f05e4f4c970

  The buggy address belongs to the variable:
    cpu_hotplug_lock+0x98/0xa0

  Memory state around the buggy address:
    ffffffff89734300: fa fa fa fa 00 00 00 00 00 00 00 00 00 00 00 00
    ffffffff89734380: fa fa fa fa 00 00 00 00 00 00 00 00 00 00 00 00
   >ffffffff89734400: 00 00 00 00 fa fa fa fa 00 00 00 00 fa fa fa fa
                                              ^
    ffffffff89734480: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    ffffffff89734500: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Add a sanity check for the value written from user space.

Fixes: 1db49484 ("smp/hotplug: Hotplug state fail injection")
Signed-off-by: Eiichi Tsukata <devel@etsukata.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: peterz@infradead.org
Link: https://lkml.kernel.org/r/20190627024732.31672-1-devel@etsukata.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
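A hedged sketch of such a sanity check; the function shape and the use of CPUHP_OFFLINE/CPUHP_ONLINE as bounds are assumptions based on the description above, not necessarily the exact upstream diff.

    #include <linux/cpuhotplug.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>

    static int example_parse_fail_state(const char *buf, int *fail_out)
    {
        int fail, ret;

        ret = kstrtoint(buf, 10, &fail);
        if (ret)
            return ret;

        /* Reject values outside the valid hotplug state range so the value
         * can never be used to index the state table out of bounds. */
        if (fail < CPUHP_OFFLINE || fail > CPUHP_ONLINE)
            return -EINVAL;

        *fail_out = fail;
        return 0;
    }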
-
Submitted by Nicolas Boichat

[ Upstream commit 35594bc7cecf3a78504b590e350570e8f4d7779e ]

Before suspending, mtk-eint sets the interrupt mask to the one in wake_mask. However, some of these interrupts may not have a corresponding interrupt handler, or the interrupt may be disabled. On resume, the eint irq handler triggers nevertheless, and irq/pm.c:irq_pm_check_wakeup is called, which tries to call irq_disable. However, if the interrupt is not enabled (irqd_irq_disabled(&desc->irq_data) is true), the call does nothing and the interrupt is left enabled in the eint driver. Especially for level-sensitive interrupts, this leads to an interrupt storm on resume.

If we detect that an interrupt is only in wake_mask, but not in cur_mask, we can just mask it out immediately (as mtk_eint_resume would do anyway at a later stage in the resume sequence, when restoring cur_mask).

Fixes: bf22ff45 ("genirq: Avoid unnecessary low level irq function calls")
Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
Acked-by: Sean Wang <sean.wang@kernel.org>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
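The "only in wake_mask, not in cur_mask" set is plain bitmap arithmetic; a small self-contained sketch, with the driver-specific masking of each resulting line left out.

    #include <linux/bitops.h>

    /* Compute which lines are armed purely as wake sources; the caller
     * would mask exactly those immediately on resume. */
    static void example_wake_only_lines(const unsigned long *wake_mask,
                                        const unsigned long *cur_mask,
                                        unsigned long *to_mask,
                                        unsigned int nbits)
    {
        unsigned int i;

        for (i = 0; i < BITS_TO_LONGS(nbits); i++)
            to_mask[i] = wake_mask[i] & ~cur_mask[i];
    }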
-
Submitted by Kai-Heng Feng

[ Upstream commit 0a95fc733da375de0688d0f1fd3a2869a1c1d499 ]

There's a new ALPS touchpad/pointstick combo device that requires MT_CLS_WIN_8_DUAL to make its pointstick work as a mouse. The device can be found on the HP ZBook 17 G5.

Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Oleksandr Natalenko

[ Upstream commit dcf768b0ac868630e7bdb6f2f1c9fe72788012fa ]

I've spotted another Chicony PixArt mouse in the wild which requires the HID_QUIRK_ALWAYS_POLL quirk, otherwise it disconnects each minute. The USB ID of this device is 0x04f2:0x0939.

We've introduced quirks like this for other models before, so let's add this mouse too.

Link: https://github.com/sriemer/fix-linux-mouse#usb-mouse-disconnectsreconnects-every-minute-on-linux
Signed-off-by: Oleksandr Natalenko <oleksandr@redhat.com>
Acked-by: Sebastian Parschauer <s.parschauer@gmx.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Kirill A. Shutemov

[ Upstream commit c1887159eb48ba40e775584cfb2a443962cf1a05 ]

__startup_64() uses fixup_pointer() to access global variables in a position-independent fashion. Access to next_early_pgt was wrapped into the helper, but one instance in the 5-level paging branch was missed.

GCC generates a R_X86_64_PC32 PC-relative relocation for the access, which doesn't trigger the issue, but Clang emits a R_X86_64_32S, which leads to an invalid memory access and a system reboot.

Fixes: 187e91fe ("x86/boot/64/clang: Use fixup_pointer() to access 'next_early_pgt'")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Alexander Potapenko <glider@google.com>
Link: https://lkml.kernel.org/r/20190620112422.29264-1-kirill.shutemov@linux.intel.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Kirill A. Shutemov

[ Upstream commit 81c7ed296dcd02bc0b4488246d040e03e633737a ]

A kernel which boots in 5-level paging mode crashes in a small percentage of cases if KASLR is enabled.

This issue was tracked down to the case where the kernel image unpacks in a way that it crosses a 1G boundary. The crash is caused by an overrun of the PMD page table in __startup_64() and corruption of the P4D page table allocated next to it. This particular issue is not visible with 4-level paging as P4D page tables are not used.

But the P4D and the PUD calculations have similar problems.

The PMD index calculation is wrong due to operator precedence, which fails to confine the PMDs in the PMD array on wrap around.

The P4D calculation for 5-level paging and the PUD calculation calculate the first index correctly, but then blindly increment it, which causes the same issue when a kernel image is located across a 512G boundary and, for 5-level paging, across a 46T boundary.

This wrap-around mishandling was introduced when these parts moved from assembly to C. Restore the correct behaviour.

Fixes: c88d7150 ("x86/boot/64: Rewrite startup_64() in C")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20190620112345.28833-1-kirill.shutemov@linux.intel.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
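A standalone illustration of the precedence pitfall: '%' binds tighter than '+', so the sum is only confined to the table when the addition is parenthesized. The constants are illustrative, not the kernel's page-table code.

    #include <stdio.h>

    #define EXAMPLE_PTRS_PER_PMD 512

    int main(void)
    {
        unsigned long base = 510, i = 5;

        unsigned long wrong = i + base % EXAMPLE_PTRS_PER_PMD;   /* 515: runs past the table */
        unsigned long right = (i + base) % EXAMPLE_PTRS_PER_PMD; /*   3: wraps around        */

        printf("wrong=%lu right=%lu\n", wrong, right);
        return 0;
    }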
-
Submitted by Milan Broz

[ Upstream commit 2eba4e640b2c4161e31ae20090a53ee02a518657 ]

DM verity should also use DMERR_LIMIT to limit repeated data block corruption messages.

Signed-off-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Jerome Marchand

[ Upstream commit a0651926553cfe7992166432e418987760882652 ]

For the first call to realloc_argv() in dm_split_args(), old_argv is NULL and size is zero. Then memcpy is called with the NULL old_argv as the source argument and a zero size argument. AFAIK, this is undefined behavior and generates the following warning when compiled with UBSAN on ppc64le:

  In file included from ./arch/powerpc/include/asm/paca.h:19,
                   from ./arch/powerpc/include/asm/current.h:16,
                   from ./include/linux/sched.h:12,
                   from ./include/linux/kthread.h:6,
                   from drivers/md/dm-core.h:12,
                   from drivers/md/dm-table.c:8:
  In function 'memcpy',
      inlined from 'realloc_argv' at drivers/md/dm-table.c:565:3,
      inlined from 'dm_split_args' at drivers/md/dm-table.c:588:9:
  ./include/linux/string.h:345:9: error: argument 2 null where non-null expected [-Werror=nonnull]
     return __builtin_memcpy(p, q, size);
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
  drivers/md/dm-table.c: In function 'dm_split_args':
  ./include/linux/string.h:345:9: note: in a call to built-in function '__builtin_memcpy'

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
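A userspace sketch of the guarded pattern described above (the kernel function allocates with kernel allocators and GFP flags instead):

    #include <stdlib.h>
    #include <string.h>

    static char **example_realloc_argv(unsigned int *size, char **old_argv)
    {
        unsigned int new_size = *size ? *size * 2 : 8;
        char **argv = malloc(new_size * sizeof(*argv));

        if (!argv)
            return NULL;

        /* Only copy when there is a source: on the first call old_argv is
         * NULL and *size is 0, and passing NULL to memcpy() is undefined
         * behavior even when the length is zero. */
        if (old_argv)
            memcpy(argv, old_argv, *size * sizeof(*argv));

        *size = new_size;
        free(old_argv);
        return argv;
    }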
-
Submitted by Phil Reid

[ Upstream commit 6dbc6e6f58556369bf999cd7d9793586f1b0e4b4 ]

Currently, probing of the mcp23s08 results in the error message "detected irqchip that is shared with multiple gpiochips: please fix the driver".

This is due to the following:

Call to mcp23s08_irqchip_setup(), with call hierarchy:

  mcp23s08_irqchip_setup()
    gpiochip_irqchip_add_nested()
      gpiochip_irqchip_add_key()
        gpiochip_set_irq_hooks()

Call to devm_gpiochip_add_data(), with call hierarchy:

  devm_gpiochip_add_data()
    gpiochip_add_data_with_key()
      gpiochip_add_irqchip()
        gpiochip_set_irq_hooks()

gpiochip_add_irqchip() returns immediately if there isn't an irqchip, but we added an irqchip in the previous mcp23s08_irqchip_setup() call, so it calls gpiochip_set_irq_hooks() a second time.

Fix this by moving the call to devm_gpiochip_add_data() before the call to mcp23s08_irqchip_setup().

Fixes: 02e389e6 ("pinctrl: mcp23s08: fix irq setup order")
Suggested-by: Marco Felsch <m.felsch@pengutronix.de>
Signed-off-by: Phil Reid <preid@electromag.com.au>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-