- 13 February 2019, 30 commits
-
Committed by Naftali Goldstein
[ Upstream commit 5c2dbebb446539eb9640bf59a02756d6e7f1fc53 ] If the association supports HE, HT/VHT rates will never be used for Tx, so there is no need to set the sgi-per-channel-width-support bits; don't set them in this case.
Fixes: 110b32f0 ("iwlwifi: mvm: rs: add basic implementation of the new RS API handlers")
Signed-off-by: Naftali Goldstein <naftali.goldstein@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Ioana Ciornei
[ Upstream commit 5500598abbfb5b46201b9768bd9ea873a5eeaece ] fsl_mc_portal_allocate() can fail when the requested MC portals have not yet been probed by the fsl_mc_allocator. In this situation, the driver should defer the probe.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
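A minimal sketch of the probe-deferral pattern described above, assuming the fsl_mc_portal_allocate() signature from include/linux/fsl/mc.h; the function name and the choice to defer on any error (rather than only on a specific one) are illustrative, not the driver's exact code:

```c
static int dpaa2_probe_sketch(struct fsl_mc_device *mc_dev)
{
	struct fsl_mc_io *mc_io;
	int err;

	err = fsl_mc_portal_allocate(mc_dev, 0, &mc_io);
	if (err) {
		/* Portals not yet probed by the fsl_mc_allocator: ask the
		 * driver core to retry this probe later.
		 */
		return -EPROBE_DEFER;
	}

	/* ... normal probe continues with mc_io ... */
	return 0;
}
```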
-
Committed by Paul Burton
[ Upstream commit 5ec17af7ead09701e23d2065e16db6ce4e137289 ] The Intel EG20T Platform Controller Hub used on the MIPS Boston development board supports prefetching memory to optimize DMA transfers. Unfortunately, for unknown reasons, this doesn't work well with some MIPS CPUs such as the P6600, particularly when using an I/O Coherence Unit (IOCU) to provide cache-coherent DMA. In these systems it is common for DMA data to be lost, resulting in broken access to EG20T devices such as the MMC or SATA controllers. Support for a DT property to configure the prefetching was added a while back by commit 549ce8f1 ("misc: pch_phub: Read prefetch value from device tree if passed"), but we never added the DT snippet to make use of it. Add that now in order to disable the prefetching & fix DMA on the affected systems.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/21068/
Cc: linux-mips@linux-mips.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Miroslav Lichvar
[ Upstream commit 83d0bdc7390b890905634186baaa294475cd6a06 ] If a gettime64 call fails, return the error and avoid copying data back to user space.
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
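The shape of the fix, as a hedged sketch rather than the actual PTP ioctl code (the real handler deals with a larger offset structure; the helper below is simplified to a single timestamp):

```c
static long ptp_gettime_to_user_sketch(struct ptp_clock_info *info,
				       struct timespec64 __user *uts)
{
	struct timespec64 ts;
	int err;

	err = info->gettime64(info, &ts);
	if (err)
		return err;	/* don't copy stale data back on failure */

	return copy_to_user(uts, &ts, sizeof(ts)) ? -EFAULT : 0;
}
```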
-
Committed by Andy Duan
[ Upstream commit 397bd9211fe014b347ca8f95a8f4e1017bac1aeb ] The current driver only sets the parity enable bit and never clears it when the user changes the termios. The fix clears the parity enable bit when the PARENB flag is not set in termios->c_cflag.
Cc: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Andy Duan <fugang.duan@nxp.com>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
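A hedged sketch of the set_termios logic being described; PAR_EN and PAR_ODD are placeholder names for the controller's parity-enable and odd-parity register bits, and the surrounding register read/write is omitted:

```c
	if (termios->c_cflag & PARENB) {
		ucr2 |= PAR_EN;
		if (termios->c_cflag & PARODD)
			ucr2 |= PAR_ODD;
		else
			ucr2 &= ~PAR_ODD;
	} else {
		/* Previously the enable bit was left untouched here. */
		ucr2 &= ~(PAR_EN | PAR_ODD);
	}
```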
-
Committed by Boris Brezillon
[ Upstream commit 0560054da5673b25d56bea6c57c8d069673af73b ] For the YUV conversion to work properly, ->x_scaling[1] should never be set to VC4_SCALING_NONE, but vc4_get_scaling_mode() might return VC4_SCALING_NONE if the horizontal scaling ratio exactly matches the horizontal subsampling factor. Add a test to turn VC4_SCALING_NONE into VC4_SCALING_PPF when that happens. The old ->x_scaling[0] adjustment is dropped, as I couldn't find any mention of this constraint in the spec and it has proven to be unnecessary (I tested various multi-planar YUV formats with scaling disabled, and all of them worked fine without this adjustment).
Fixes: fc04023f ("drm/vc4: Add support for YUV planes.")
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Link: https://patchwork.freedesktop.org/patch/msgid/20181109102633.32603-1-boris.brezillon@bootlin.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
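The added test is conceptually a two-liner; this is a hedged sketch, with the assumption that the check only needs to run for multi-planar (YUV) framebuffers:

```c
	if (num_planes > 1) {
		/* YUV conversion needs horizontal scaling enabled on the
		 * chroma plane even when the ratio works out to "none".
		 */
		if (vc4_state->x_scaling[1] == VC4_SCALING_NONE)
			vc4_state->x_scaling[1] = VC4_SCALING_PPF;
	}
```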
-
Committed by Eric Biggers
[ Upstream commit 0a6a40c2a8c184a2fb467efacfb1cd338d719e0b ] In the "aes-fixed-time" AES implementation, disable interrupts while accessing the S-box, in order to make cache-timing attacks more difficult. Previously it was possible for the CPU to be interrupted while the S-box was loaded into L1 cache, potentially evicting the cachelines and causing later table lookups to be time-variant. In tests I did on x86 and ARM, this doesn't affect performance significantly. Responsiveness is potentially a concern, but interrupts are only disabled for a single AES block. Note that even after this change, the implementation still isn't necessarily guaranteed to be constant-time; see https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion of the many difficulties involved in writing truly constant-time AES software. But it's valuable to make such attacks more difficult.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
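The core idea, as a hedged illustration rather than the crypto code itself; aes_encrypt_one_block() is a placeholder for the table-driven block function:

```c
static void aes_encrypt_irqoff_sketch(const struct crypto_aes_ctx *ctx,
				      u8 *dst, const u8 *src)
{
	unsigned long flags;

	/* Keep the S-box hot in L1 for the whole block: no interrupt can
	 * run in between and evict the freshly loaded cachelines.
	 */
	local_irq_save(flags);
	aes_encrypt_one_block(ctx, dst, src);	/* placeholder */
	local_irq_restore(flags);
}
```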
-
Committed by Frank Rowand
[ Upstream commit 5b3f5c408d8cc59b87e47f1ab9803dbd006e4a91 ] The previous commit, "of: overlay: add missing of_node_get() in __of_attach_node_sysfs", added a missing of_node_get() to __of_attach_node_sysfs(). This results in a refcount imbalance for nodes attached with dlpar_attach_node(). The calling sequence from dlpar_attach_node() to __of_attach_node_sysfs() is:
  dlpar_attach_node()
    of_attach_node()
      __of_attach_node_sysfs()
For a more detailed description of the node refcount, see commit 68baf692 ("powerpc/pseries: Fix of_node_put() underflow during DLPAR remove").
Tested-by: Alan Tull <atull@kernel.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Frank Rowand <frank.rowand@sony.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Colin Ian King
[ Upstream commit 53bb565fc5439f2c8c57a786feea5946804aa3e9 ] In the expression "word1 << 16", word1 starts as u16 but is promoted to a signed int and then sign-extended to resource_size_t, which is probably not what was intended. Cast to resource_size_t to avoid the sign extension. This fixes an issue identical to the one addressed by commit 0b2d7076 ("x86/PCI: Fix Broadcom CNB20LE unintended sign extension") back in 2014. Detected by CoverityScan, CID#138749, 138750 ("Unintended sign extension")
Fixes: 3f6ea84a ("PCI: read memory ranges out of Broadcom CNB20LE host bridge")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Bjorn Helgaas <helgaas@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
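A standalone demo of the promotion-then-sign-extension effect described above (uint64_t stands in for resource_size_t; this is not the driver code). With the top bit of word1 set, the promoted shift produces a negative int, so widening it smears ones across the upper 32 bits unless the operand is cast first:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t word1 = 0x8000, word2 = 0x0000;

	/* word1 is promoted to (signed) int before the shift, so the
	 * result is negative and gets sign-extended when widened.
	 * (Shifting into the sign bit is also formally undefined.)
	 */
	uint64_t bad  = (word1 << 16) | word2;
	uint64_t good = ((uint64_t)word1 << 16) | word2;

	printf("bad  = 0x%016llx\n", (unsigned long long)bad);  /* 0xffffffff80000000 */
	printf("good = 0x%016llx\n", (unsigned long long)good); /* 0x0000000080000000 */
	return 0;
}
```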
-
Committed by Bob Peterson
[ Upstream commit 216f0efd19b9cc32207934fd1b87a45f2c4c593e ] Before this patch, recovery would cause all callbacks to be delayed and put on a queue, and afterward they were all queued to the callback work queue. This patch does the same thing, but occasionally takes a break after 25 of them so it won't swamp the CPU at the expense of other RT processes such as corosync.
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: David Teigland <teigland@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
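A hedged sketch of the batching idea, not the dlm code verbatim: requeue the delayed callbacks in groups of 25 and yield between groups. The cb_item structure and helper names are invented for the illustration.

```c
#define CB_BATCH_SIZE 25

struct cb_item {
	struct list_head list;
	struct work_struct work;
};

static void requeue_delayed_callbacks(struct workqueue_struct *wq,
				      struct list_head *delayed)
{
	struct cb_item *cb, *tmp;
	int count = 0;

	list_for_each_entry_safe(cb, tmp, delayed, list) {
		list_del_init(&cb->list);
		queue_work(wq, &cb->work);
		if (++count >= CB_BATCH_SIZE) {
			count = 0;
			cond_resched();	/* let other tasks (e.g. corosync) run */
		}
	}
}
```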
-
Committed by Yi Wang
[ Upstream commit 46fda5b5067a391912cf73bf3d32c26b6a22ad09 ] Smatch reports the following warnings:
  drivers/clk/imgtec/clk-boston.c:76 clk_boston_setup() warn: possible memory leak of 'onecell'
  drivers/clk/imgtec/clk-boston.c:83 clk_boston_setup() warn: possible memory leak of 'onecell'
  drivers/clk/imgtec/clk-boston.c:90 clk_boston_setup() warn: possible memory leak of 'onecell'
'onecell' is allocated in clk_boston_setup() but is not freed before leaving through the error handling paths.
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
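A hedged sketch of the fix's shape (names and clock details simplified from clk_boston_setup()): route every error exit through a label that frees the allocation instead of returning and leaking it.

```c
static void boston_clk_setup_sketch(struct device_node *np)
{
	struct clk_hw_onecell_data *onecell;
	struct clk_hw *hw;

	onecell = kzalloc(struct_size(onecell, hws, 1), GFP_KERNEL);
	if (!onecell)
		return;
	onecell->num = 1;

	hw = clk_hw_register_fixed_rate(NULL, "input", NULL, 0, 100000000);
	if (IS_ERR(hw))
		goto err_free;		/* was: plain return, leaking 'onecell' */
	onecell->hws[0] = hw;

	if (of_clk_add_hw_provider(np, of_clk_hw_onecell_get, onecell))
		goto err_unregister;
	return;

err_unregister:
	clk_hw_unregister_fixed_rate(hw);
err_free:
	kfree(onecell);
}
```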
-
Committed by Yufen Wang
[ Upstream commit 82c08c3e7f171aa7f579b231d0abbc1d62e91974 ] Consider the case where panic() is called on two different CPUs at the same time. For example:
  CPU 0:
    panic()
      __crash_kexec
        machine_crash_shutdown
          crash_smp_send_stop
        machine_kexec
          BUG_ON(num_online_cpus() > 1);
  CPU 1:
    panic()
      local_irq_disable
      panic_smp_self_stop
If CPU 1 calls panic_smp_self_stop() before crash_smp_send_stop(), kdump fails: CPU 1 can't receive the IPI and will always remain online. To fix this problem, this patch splits out panic_smp_self_stop() and adds set_cpu_online(smp_processor_id(), false).
Signed-off-by: Yufen Wang <wangyufen@huawei.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by James Smart
[ Upstream commit 30e196cacefdd9a38c857caed23cefc9621bc5c1 ] After a LOGO in response to an ABTS timeout, a PLOGI wasn't issued to re-establish the login. An nlp_type check in the LOGO completion handler failed to restart discovery for NVME targets. Revise the nlp_type check to cover NVME as well as SCSI. While reviewing the LOGO handling, a few other issues were seen and addressed:
- Better lock synchronization around ndlp data types.
- When the ABTS times out, unregister the RPI before sending the LOGO so that all local exchange contexts are cleared and nothing received while awaiting LOGO/PLOGI handling will be accepted.
- LOGO handling optimized to wait only R_A_TOV for a response; it doesn't need to be retried on timeout. If there wasn't a response, a PLOGI will be sent, so an implicit logout applies as well when the other port sees it. If there is a response, any kind of response is considered "good" and the XRI is quarantined for an exchange qualifier window.
- PLOGI is issued as soon as the LOGO state is resolved.
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Suganath Prabu
[ Upstream commit dc730212e8a378763cb182b889f90c8101331332 ] Call sas_remove_host() before removing the target devices in the driver's .remove() callback function (i.e. during driver unload), so that the driver can allow SYNC CACHE, START STOP UNIT and similar commands (which are issued from the SML) to reach the target drives during unload. Once sas_remove_host() is called before removing the target drives, the driver only needs to clean up the resources allocated for the target devices; there is no need to call the sas_port_delete_phy() and sas_port_delete() APIs, as they are called internally from sas_remove_host().
Signed-off-by: Suganath Prabu <suganath-prabu.subramani@broadcom.com>
Reviewed-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by James Smart
[ Upstream commit b114d9009d386276bfc3352289fc235781ae3353 ] When LCBs were rejected while beaconing was already in progress, the Reason Code Explanation was not being set. It should have been set to "command in progress".
Signed-off-by: Dick Kennedy <dick.kennedy@broadcom.com>
Signed-off-by: James Smart <jsmart2021@gmail.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Lorenzo Bianconi
[ Upstream commit 3831a2a0010c72e3956020cbf1057a1701a2e469 ] In order to properly support dynack in ad-hoc mode running wpa_supplicant, take authentication frames into account for 'late ack' detection. This patch has been tested on devices mounted on offshore high-voltage stations connected through a ~24 km link.
Reported-by: Koen Vandeputte <koen.vandeputte@ncentric.com>
Tested-by: Koen Vandeputte <koen.vandeputte@ncentric.com>
Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Brian Norris
[ Upstream commit 2bd345cd2bfc0bd44528896313c0b45f087bdf67 ] Commit 2ea9f12c ("ath10k: add new cipher suite support") added a new n_cipher_suites HW param with a fallback value and a warning log. Commit 03a72288 ("ath10k: wmi: add hw params entry for wcn3990") later added WCN3990 HW entries, but it missed n_cipher_suites. Rather than seeing this warning on every boot:
  ath10k_snoc 18800000.wifi: invalid hw_params.n_cipher_suites 0
let's provide the appropriate value.
Cc: Rakesh Pillai <pillair@qti.qualcomm.com>
Cc: Govind Singh <govinds@qti.qualcomm.com>
Signed-off-by: Brian Norris <briannorris@chromium.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Lior David
[ Upstream commit 664497400c89a4d40aee51bcf48bbd2e4dc71104 ] A successful call to wil_tx_ring takes an skb reference, so it will only be freed in wil_tx_complete. Consume the skb in wil_find_tx_bcast_2 to prevent a memory leak.
Signed-off-by: Lior David <liord@codeaurora.org>
Signed-off-by: Maya Erez <merez@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Alexei Avshalom Lazar
[ Upstream commit d083b2e2b7db5cca1791643d036e6597af27f49b ] With the current reset flow, Talyn sometimes gets stuck, causing PCIe enumeration to fail. Fix this by removing some reset flow operations that are not relevant for Talyn. Setting bit 15 in RGF_HP_CTRL is WBE specific and is not in use for all wil6210 devices. For Sparrow, BIT_HPAL_PERST_FROM_PAD and BIT_CAR_PERST_RST were set as a WA for a HW issue.
Signed-off-by: Alexei Avshalom Lazar <ailizaro@codeaurora.org>
Signed-off-by: Maya Erez <merez@codeaurora.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Nickhu
[ Upstream commit 4c3d6174e0e17599549f636ec48ddf78627a17fe ] When the ftrace and frame pointer kernel config options are both selected, the resulting compiler options are incompatible. Error message:
  nds32le-linux-gcc: error: -pg and -fomit-frame-pointer are incompatible
Signed-off-by: Nickhu <nickhu@andestech.com>
Signed-off-by: Zong Li <zong@andestech.com>
Acked-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Greentime Hu <greentime@andestech.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Steve Longerbeam
[ Upstream commit 819bec35c8c9706185498c9222bd244e0781ad35 ] Prevent a possible race between parallel threads in ipu_image_convert_run() and ipu_image_convert_unprepare(). This involves setting ctx->aborting to true unconditionally so that no new job runs can be queued during unprepare, and holding the ctx->aborting flag until the context is freed. Note that the "normal" ipu_image_convert_abort() case (i.e. not during context unprepare) should clear the ctx->aborting flag after aborting any active run and clearing the context's pending queue, because it should be possible to continue to use the conversion context and queue more runs after an abort.
Signed-off-by: Steve Longerbeam <slongerbeam@gmail.com>
Tested-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Long Li
[ Upstream commit b82592199032bf7c778f861b936287e37ebc9f62 ] If the number of NUMA nodes exceeds the number of MSI/MSI-X interrupts which are allocated for a device, the interrupt affinity spreading code fails to spread them across all nodes. The reason is that the spreading code starts from node 0 and continues up to the number of interrupts requested for allocation. This leaves the nodes past the last interrupt unused and results in interrupt concentration on the first nodes, which violates the assumption of the block layer that all nodes are covered evenly. As a consequence, the NUMA nodes above the number of interrupts are all assigned to hardware queue 0 and therefore NUMA node 0, which results in bad performance and has CPU hotplug implications, because queue 0 gets shut down when the last CPU of node 0 is offlined. Go over all NUMA nodes and assign them round-robin to all requested interrupts to solve this. [ tglx: Massaged changelog ]
Signed-off-by: Long Li <longli@microsoft.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Link: https://lkml.kernel.org/r/20181102180248.13583-1-longli@linuxonhyperv.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
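A standalone illustration of the round-robin spreading described above (not the kernel/irq/affinity.c code): with more nodes than vectors, walk every node and wrap the vector index, so no node is left unassigned and the leftover nodes are not all piled onto vector 0.

```c
#include <stdio.h>

int main(void)
{
	int nr_nodes = 8, nr_vecs = 3;

	/* node 0 -> vec 0, node 1 -> vec 1, node 2 -> vec 2,
	 * node 3 -> vec 0, ... every node gets some vector.
	 */
	for (int node = 0, vec = 0; node < nr_nodes; node++) {
		printf("NUMA node %d -> interrupt vector %d\n", node, vec);
		vec = (vec + 1) % nr_vecs;
	}
	return 0;
}
```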
-
Committed by Jernej Skrabec
[ Upstream commit c96d62215fb540e2ae61de44cb7caf4db50958e3 ] It turns out that the TCON TOP registers in the H6 SoC have non-zero reset values. This may cause issues if bits are not changed during configuration. To prevent that, initialize the registers to 0.
Signed-off-by: Jernej Skrabec <jernej.skrabec@siol.net>
Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181104182705.18047-24-jernej.skrabec@siol.net
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Muchun Song
[ Upstream commit 18534df419041e6c1f4b41af56ee7d41f757815c ] gpiod_request_commit() copies the pointer to the label passed as an argument only to use it later. But there's a chance the caller could immediately free the passed string (e.g., a local variable). This could trigger a use-after-free when we later use the GPIO label (e.g., in gpiochip_unlock_as_irq() or gpiochip_is_requested()). To be on the safe side, duplicate the string with kstrdup_const() so that if an unaware user passes the address of a stack-allocated buffer, we won't end up with an arbitrary label. Also fix gpiod_set_consumer_name().
Signed-off-by: Muchun Song <smuchun@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
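A hedged sketch of the idea: take an owned copy of the caller's string instead of storing the caller's pointer. kstrdup_const()/kfree_const() avoid an actual allocation when the source lives in .rodata; the desc->label field mirrors the GPIO descriptor, but the surrounding code is simplified.

```c
	const char *copy;

	copy = kstrdup_const(label ? label : "?", GFP_KERNEL);
	if (!copy)
		return -ENOMEM;
	desc->label = copy;

	/* ... and on release: kfree_const(desc->label); */
```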
-
Committed by Arnd Bergmann
[ Upstream commit 1539c7f23f256120f89f8b9ec53160790bce9ed2 ] Randconfig testing revealed a very old bug, with gcc-8:
  sound/soc/intel/atom/sst/sst_loader.c: In function 'sst_load_fw':
  sound/soc/intel/atom/sst/sst_loader.c:357:5: error: 'fw' may be used uninitialized in this function [-Werror=maybe-uninitialized]
    if (fw == NULL) {
  sound/soc/intel/atom/sst/sst_loader.c:354:25: note: 'fw' was declared here
    const struct firmware *fw;
We must check the return code of request_firmware() before we look at the pointer result, which may be uninitialized when the function fails.
Fixes: 9012c954 ("ASoC: Intel: mrfld - Add DSP load and management")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
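A hedged sketch of the pattern (not the sst_loader.c code): inspect the request_firmware() return code first; 'fw' is only meaningful when the call succeeds.

```c
static int load_fw_sketch(struct device *dev, const char *name)
{
	const struct firmware *fw;
	int ret;

	ret = request_firmware(&fw, name, dev);
	if (ret) {
		dev_err(dev, "request_firmware(%s) failed: %d\n", name, ret);
		return ret;	/* 'fw' is uninitialized here, don't touch it */
	}

	dev_info(dev, "loaded %s, %zu bytes\n", name, fw->size);
	release_firmware(fw);
	return 0;
}
```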
-
Committed by Lukas Wunner
[ Upstream commit 3c7b30f704b6f5e53eed6bf89cf2c8d1b38b02c0 ] The BCM2835 pinctrl driver acquires a spinlock in its ->irq_enable, ->irq_disable and ->irq_set_type callbacks. Spinlocks become sleeping locks with CONFIG_PREEMPT_RT_FULL=y, therefore invocation of one of the callbacks in atomic context may cause a hard lockup if at least two GPIO pins in the same bank are used as interrupts. The issue doesn't occur with just a single interrupt pin per bank because the lock is never contended. I'm experiencing such lockups with GPIO 8 and 28 used as level-triggered interrupts, i.e. with ->irq_disable being invoked on reception of every IRQ. The critical section protected by the spinlock is very small (one bitop and one RMW of an MMIO register), hence converting to a raw spinlock seems a better trade-off than converting the driver to threaded IRQ handling (which would increase the latency of handling an interrupt).
Cc: Mathias Duckeck <m.duckeck@kunbus.de>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Acked-by: Julia Cartwright <julia@ni.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
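A hedged illustration of the conversion (driver structure simplified): a raw_spinlock_t stays a real spinning lock even under PREEMPT_RT, so it remains safe to take from irq_chip callbacks that run in atomic context.

```c
static DEFINE_RAW_SPINLOCK(bank_lock);	/* was: DEFINE_SPINLOCK(bank_lock) */

static void example_irq_disable(struct irq_data *data)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&bank_lock, flags);
	/* tiny critical section: clear one enable bit in an MMIO register */
	raw_spin_unlock_irqrestore(&bank_lock, flags);
}
```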
-
Committed by Deepak Sharma
[ Upstream commit d5c04dff24870ef07ce6453a3f4e1ffd9cf88d27 ] Modify vgem_init to take the platform dev as parent in drm_dev_init. This makes the drm device available at "/sys/devices/platform/vgem" on x86 Chromebooks. v2: rebase, address checkpatch typo and line over 80 characters
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Deepak Sharma <deepak.sharma@amd.com>
Reviewed-by: Sean Paul <seanpaul@chromium.org>
Signed-off-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20181023163550.15211-1-emil.l.velikov@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Committed by Slawomir Stepien
[ Upstream commit 0559ef7fde67bc6c83c6eb6329dbd6649528263e ] Inside __ad7280_read32(), the spi_sync_transfer() call can fail with a negative error code. This change ensures that the error is passed up the call stack so it can be handled.
Signed-off-by: Slawomir Stepien <sst@poczta.fm>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
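A hedged sketch of the error propagation (the transfer setup and byte handling of the real __ad7280_read32() are omitted): hand spi_sync_transfer()'s return code back to the caller instead of discarding it.

```c
static int ad7280_read32_sketch(struct spi_device *spi, __be32 *rx)
{
	struct spi_transfer t = {
		.rx_buf = rx,
		.len = sizeof(*rx),
	};
	int ret;

	ret = spi_sync_transfer(spi, &t, 1);
	if (ret)
		return ret;	/* propagate the failure up the call stack */

	return 0;
}
```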
-
Committed by Gustavo A. R. Silva
[ Upstream commit a37805098900a6e73a55b3a43b7d3bcd987bb3f4 ] idx can be indirectly controlled by user space, hence leading to a potential exploitation of the Spectre variant 1 vulnerability. This issue was detected with the help of Smatch:
  drivers/gpu/drm/drm_bufs.c:1420 drm_legacy_freebufs() warn: potential spectre issue 'dma->buflist' [r] (local cap)
Fix this by sanitizing idx before using it to index dma->buflist. Notice that given that speculation windows are large, the policy is to kill the speculation on the first load and not worry if it can be completed with a dependent load/store [1].
[1] https://marc.info/?l=linux-kernel&m=152449131114778&w=2
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20181016095549.GA23586@embeddedor.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
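A hedged sketch of the Spectre-v1 hardening (the helper is invented; the real change sits inline in drm_legacy_freebufs()): clamp the user-controlled index with array_index_nospec() from <linux/nospec.h> right after the bounds check, so it cannot be speculated out of range.

```c
static struct drm_buf *lookup_buf(struct drm_device_dma *dma, int idx)
{
	if (idx < 0 || idx >= dma->buf_count)
		return NULL;

	/* Kill speculation past the bounds check on the first load. */
	idx = array_index_nospec(idx, dma->buf_count);
	return dma->buflist[idx];
}
```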
-
Committed by Alexey Brodkin
commit a66d972465d15b1d89281258805eb8b47d66bd36 upstream. Initially we bumped into a problem with 32-bit aligned atomic64_t on ARC, see [1]. Then, during a quite lengthy discussion [2], Peter Z. mentioned ARCH_KMALLOC_MINALIGN, which IMHO makes perfect sense: if an allocation is done by plain kmalloc(), the obtained buffer will be ARCH_KMALLOC_MINALIGN aligned, so why should a buffer obtained via devm_kmalloc() have any other alignment? This way we at least get the same behavior for both types of allocation.
[1] http://lists.infradead.org/pipermail/linux-snps-arc/2018-July/004009.html
[2] http://lists.infradead.org/pipermail/linux-snps-arc/2018-July/004036.html
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: David Laight <David.Laight@ACULAB.COM>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Greg KH <greg@kroah.com>
Cc: <stable@vger.kernel.org> # 4.8+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 07 February 2019, 10 commits
-
Committed by Greg Kroah-Hartman
-
Committed by Paulo Alcantara
commit 28eb24ff75c5ac130eb326b3b4d0dcecfc0f427d upstream. In case a hostname resolves to a different IP address (e.g. long-running mounts), make sure to resolve it every time prior to calling generic_ip_connect() in reconnect.
Suggested-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Paulo Alcantara <palcantara@suse.de>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Alexei Naberezhnov
commit 483cbbeddd5fe2c80fd4141ff0748fa06c4ff146 upstream. This fixes the case where md array assembly fails because the raid cache recovery is unable to allocate a stripe, despite attempts to replay stripes and increase the cache size. This happens because stripes released by r5c_recovery_replay_stripes and raid5_set_cache_size don't become available for allocation immediately: released stripes are first placed on the conf->released_stripes list and require the md thread to merge them onto conf->inactive_list before they can be allocated. The patch allows the final allocation attempt during cache recovery to wait for new stripes to become available for allocation.
Cc: linux-raid@vger.kernel.org
Cc: Shaohua Li <shli@kernel.org>
Cc: linux-stable <stable@vger.kernel.org> # 4.10+
Fixes: b4c625c6 ("md/r5cache: r5cache recovery: part 1")
Signed-off-by: Alexei Naberezhnov <anaberezhnov@fb.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Frank Rowand
commit 8814dc46bd9e347d4de55ec5bf8f16ea54470499 upstream. When allocating a new node, add_changeset_node() was duplicating the properties from the respective node in the overlay instead of allocating a node with no properties. When this patch is applied, the errors reported by the devicetree unittest from patch "of: overlay: add tests to validate kfrees from overlay removal" will no longer occur. These error messages are of the form "OF: ERROR: ..." and the unittest results will change from:
  ### dt-test ### end of unittest - 203 passed, 7 failed
to
  ### dt-test ### end of unittest - 210 passed, 0 failed
Tested-by: Alan Tull <atull@kernel.org>
Signed-off-by: Frank Rowand <frank.rowand@sony.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Frank Rowand
commit 6b4955ba7bc05e40c8c41071cc121bc26ca65277 upstream. The changeset entry 'update property' was used for new properties in an overlay instead of 'add property'. The decision of whether to use 'update property' was based on whether the property already exists in the subtree where the node is being spliced into. At the top level of creating a changeset describing the overlay, the target node is in the live devicetree, so checking whether the property exists in the target node returns the correct result. As soon as the changeset creation algorithm recurses into a new node, the target is no longer in the live devicetree but in the detached overlay tree, thus all properties are incorrectly found to already exist in the target. This fix will expose another devicetree bug that will be fixed in the following patch in the series. When this patch is applied, the errors reported by the devicetree unittest will change, and the unittest results will change from:
  ### dt-test ### end of unittest - 210 passed, 0 failed
to
  ### dt-test ### end of unittest - 203 passed, 7 failed
Tested-by: Alan Tull <atull@kernel.org>
Signed-off-by: Frank Rowand <frank.rowand@sony.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Frank Rowand
commit 5b2c2f5a0ea3a43e0dee78059e34c7cb54136dcc upstream. There is a matching of_node_put() in __of_detach_node_sysfs(). Remove the misleading comment from the function header comment of of_detach_node(). This patch may result in memory leaks from code that calls the dynamic node add and delete functions directly instead of using changesets. This commit should cause powerpc systems that dynamically allocate a node and later deallocate it to leak memory when the node is deallocated. The next commit will fix the leak.
Tested-by: Alan Tull <atull@kernel.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Signed-off-by: Frank Rowand <frank.rowand@sony.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Frank Rowand
commit 144552c786925314c1e7cb8f91a71dae1aca8798 upstream. Add checks for:
- an attempted kfree due to the refcount reaching zero before the overlay is removed
- properties still linked to an overlay node when the node is removed
- a node refcount greater than one during node removal in a changeset destroy, if the node was created by the changeset
After applying this patch, several validation warnings will be reported from the devicetree unittest during boot due to pre-existing devicetree bugs. The warnings will be similar to:
  OF: ERROR: of_node_release(), unexpected properties in /testcase-data/overlay-node/test-bus/test-unittest11
  OF: ERROR: memory leak, expected refcount 1 instead of 2, of_node_get()/of_node_put() unbalanced - destroy cset entry: attach overlay node /testcase-data-2/substation@100/hvac-medium-2
Tested-by: Alan Tull <atull@kernel.org>
Signed-off-by: Frank Rowand <frank.rowand@sony.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Rob Herring
commit a613b26a50136ae90ab13943afe90bcbd34adb44 upstream. In preparation to remove the node name pointer from struct device_node, convert printf users to use the %pOFn format specifier.
Reviewed-by: Frank Rowand <frank.rowand@sony.com>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: netdev@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
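A minimal before/after of the conversion (the message text is made up; %pOFn takes a struct device_node * and prints its name):

```c
	/* old: relies on the node name pointer directly */
	pr_warn("unsupported mode for node %s\n", np->name);

	/* new: let the printf core fetch the name via %pOFn */
	pr_warn("unsupported mode for node %pOFn\n", np);
```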
-
Committed by David Hildenbrand
commit e0a352fabce61f730341d119fbedf71ffdb8663f upstream. We had a race in the old balloon compaction code before b1123ea6 ("mm: balloon: use general non-lru movable page feature") refactored it, which became visible after backporting 195a8c43 ("virtio-balloon: deflate via a page list") without the refactoring. The bug existed from commit d6d86c0a ("mm/balloon_compaction: redesign ballooned pages management") until b1123ea6 ("mm: balloon: use general non-lru movable page feature"). d6d86c0a ("mm/balloon_compaction: redesign ballooned pages management") was backported to 3.12, so the broken kernels are stable kernels [3.12 - 4.7]. There was a subtle race between dropping the page lock of the newpage in __unmap_and_move() and checking for __is_movable_balloon_page(newpage). Just after dropping this page lock, virtio-balloon could go ahead and deflate the newpage, effectively dequeueing it and clearing PageBalloon, in turn making __is_movable_balloon_page(newpage) fail. This resulted in dropping the reference of the newpage via putback_lru_page(newpage) instead of put_page(newpage), leading to page->lru getting modified and a !LRU page ending up in the LRU lists. With 195a8c43 ("virtio-balloon: deflate via a page list") backported, one would suddenly get corrupted lists in release_pages_balloon():
  WARNING: CPU: 13 PID: 6586 at lib/list_debug.c:59 __list_del_entry+0xa1/0xd0
  list_del corruption. prev->next should be ffffe253961090a0, but was dead000000000100
Nowadays this race is no longer possible, but it is hidden behind very ugly handling of __ClearPageMovable() and __PageMovable(). __ClearPageMovable() will not make __PageMovable() fail, only PageMovable(). So the new check (__PageMovable(newpage)) will still hold even after newpage was dequeued by virtio-balloon. If anybody were ever to change that special handling, the BUG would be introduced again. So instead, make it explicit and use the information of the original isolated page before migration. This patch can be backported fairly easily to stable kernels (in contrast to the refactoring).
Link: http://lkml.kernel.org/r/20190129233217.10747-1-david@redhat.com
Fixes: d6d86c0a ("mm/balloon_compaction: redesign ballooned pages management")
Signed-off-by: David Hildenbrand <david@redhat.com>
Reported-by: Vratislav Bendel <vbendel@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Rafael Aquini <aquini@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Vratislav Bendel <vbendel@redhat.com>
Cc: Rafael Aquini <aquini@redhat.com>
Cc: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: <stable@vger.kernel.org> [3.12 - 4.7]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Committed by Naoya Horiguchi
commit 6376360ecbe525a9c17b3d081dfd88ba3e4ed65b upstream. Currently memory_failure() is racy against a process's exiting, which results in a kernel crash by null pointer dereference. The root cause is that memory_failure() uses force_sig() to forcibly kill asynchronous (meaning not in the current context) processes. As discussed in the thread https://lkml.org/lkml/2010/6/8/236 years ago for OOM fixes, this is not the right thing to do. OOM solves this issue by using do_send_sig_info() as done in commit d2d39309 ("signal: oom_kill_task: use SEND_SIG_FORCED instead of force_sig()"), so this patch suggests doing the same for hwpoison. do_send_sig_info() properly accesses siglock with lock_task_sighand(), so it is free from the reported race. I confirmed that the reported bug reproduces with some delay inserted in kill_procs(), and that it never reproduces with this patch. Note that memory_failure() can send another type of signal using force_sig_mceerr(), and the reported race shouldn't happen on it because force_sig_mceerr() is called only for synchronous processes (i.e. BUS_MCEERR_AR happens only when some process accesses the corrupted memory).
Link: http://lkml.kernel.org/r/20190116093046.GA29835@hori1.linux.bs1.fc.nec.co.jp
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reported-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-