- 25 June 2019, 18 commits
-
-
Submitted by Mathias Nyman
commit ddd57980a0fde30f7b5d14b888a2cc84d01610e8 upstream.

USB 3.2 capability in a host can be detected from the xHCI Supported Protocol Capability major and minor revision fields. If major is 0x3 and minor 0x20 then the host is USB 3.2 capable. For USB 3.2 capable hosts set the root hub lane count to 2.

The Major Revision and Minor Revision fields contain a BCD version number: the value of the Major Revision field is JJh and the value of the Minor Revision field is MNh for version JJ.M.N, where JJ = major revision number, M = minor version number, N = sub-minor version number. For example, version 3.1 is represented with a value of 0310h.

Also fix the extra whitespace printed out when announcing regular SuperSpeed hosts.

Cc: <stable@vger.kernel.org> # v4.18+
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
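As a quick illustration of the BCD check described above, here is a minimal, compilable sketch; the function and variable names are invented for the example and are not the xhci driver's actual identifiers.

    #include <stdio.h>

    /* The Supported Protocol Capability revision fields are BCD:
     * version JJ.M.N is encoded as major = 0xJJ, minor = 0xMN,
     * so USB 3.2 is major 0x03 / minor 0x20 and USB 3.1 is 0x03 / 0x10.
     */
    static unsigned int root_hub_lane_count(unsigned int major, unsigned int minor)
    {
            if (major == 0x03 && minor >= 0x20)
                    return 2;   /* USB 3.2 host: two lanes */
            return 1;           /* USB 3.0 / 3.1 host: one lane */
    }

    int main(void)
    {
            printf("3.1 host -> %u lane(s)\n", root_hub_lane_count(0x03, 0x10));
            printf("3.2 host -> %u lane(s)\n", root_hub_lane_count(0x03, 0x20));
            return 0;
    }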
-
Submitted by Peter Chen
commit c19dffc0a9511a7d7493ec21019aefd97e9a111b upstream.

An endpoint conflict occurs when the USB controller is working in device mode during an isochronous communication. When endpointA is an isochronous IN endpoint, and the host sends an IN token to endpointA on another device, then the OUT transaction may be missed regardless of the OUT endpoint number. Generally, this occurs when the device is connected to the host through a hub and other devices are connected to the same hub.

The affected OUT endpoint can be a control, bulk, isochronous, or interrupt endpoint. After the OUT endpoint is primed, if an IN token to the same endpoint number on another device is received, then the OUT endpoint may be unprimed (this cannot be detected by software), which causes the endpoint to no longer respond to the host OUT token, and thus no corresponding interrupt occurs.

There is no good workaround for this issue; the only thing software can do is number isochronous IN endpoints starting from the highest endpoint, since we have observed that most devices number their endpoints starting from the lowest.

Cc: <stable@vger.kernel.org> # v3.14+
Cc: Fabio Estevam <festevam@gmail.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Cc: Jun Li <jun.li@nxp.com>
Signed-off-by: Peter Chen <peter.chen@nxp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Stanley Chu
commit 24e2e7a19f7e4b83d0d5189040d997bce3596473 upstream.

UFS runtime suspend can be triggered after pm_runtime_enable() is invoked in ufshcd_pltfrm_init(). However, if the first runtime suspend is triggered before the ufs_hba structure is bound to the ufs device structure via platform_set_drvdata(), then UFS runtime suspend will never be triggered again, because dev->power.runtime_error was set during that first attempt and never gets a chance to be cleared.

To be more clear: dev->power.runtime_error is set if hba is NULL in ufshcd_runtime_suspend(), which returns -EINVAL to rpm_callback(), where dev->power.runtime_error is set to -EINVAL. In this case, any future rpm_suspend() for the UFS device fails because rpm_check_suspend_allowed() fails due to the non-zero dev->power.runtime_error.

To resolve this issue, make sure the first UFS runtime suspend gets a valid "hba" in ufshcd_runtime_suspend(): enable UFS runtime PM only after hba is successfully bound to the UFS device structure.

Fixes: 62694735 ([SCSI] ufs: Add runtime PM support for UFS host controller driver)
Cc: stable@vger.kernel.org
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
Reviewed-by: Avri Altman <avri.altman@wdc.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
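A rough sketch of the resulting probe-time ordering, assuming a simplified probe function (ufshcd_init_sketch() is a placeholder, not the real init path); the point is only that drvdata is bound before runtime PM is enabled, so the first runtime-suspend callback sees a valid hba.

    #include <linux/platform_device.h>
    #include <linux/pm_runtime.h>

    struct ufs_hba;                                      /* opaque here */
    static int ufshcd_init_sketch(struct ufs_hba *hba);  /* placeholder */

    static int ufs_probe_sketch(struct platform_device *pdev, struct ufs_hba *hba)
    {
            int err = ufshcd_init_sketch(hba);
            if (err)
                    return err;

            /* Bind hba to the device first ... */
            platform_set_drvdata(pdev, hba);

            /* ... and only then allow runtime suspend, so that
             * ufshcd_runtime_suspend() never sees a NULL hba and
             * dev->power.runtime_error is never latched.
             */
            pm_runtime_set_active(&pdev->dev);
            pm_runtime_enable(&pdev->dev);
            return 0;
    }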
-
Submitted by Ulf Hansson
commit 83293386bc95cf5e9f0c0175794455835bd1cb4a upstream.

Processing of SDIO IRQs must obviously be prevented while the card is system suspended, otherwise we may end up trying to communicate with an uninitialized SDIO card. Reports throughout the years show that this is not only a theoretical problem, but a real issue. So, let's finally fix this problem by keeping track of the state of the card and bailing out before processing the SDIO IRQ in case the card is suspended.

Cc: stable@vger.kernel.org
Reported-by: Douglas Anderson <dianders@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Douglas Anderson
commit b4c9f938d542d5f88c501744d2d12fad4fd2915f upstream.

We want SDIO drivers to be able to temporarily stop retuning when the driver knows that the SDIO card is not in a state where retuning will work (maybe because the card is asleep). We'll move the relevant functions to a place where drivers can call them.

Cc: stable@vger.kernel.org # v4.18+
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Douglas Anderson
commit 0a55f4ab9678413a01e740c86e9367ba0c612b36 upstream.

Normally when the MMC core sees an "-EILSEQ" error returned by a host controller then it will trigger a retuning of the card. This is generally a good idea. However, if a command is expected to sometimes cause transfer errors then these transfer errors shouldn't cause a re-tuning. This re-tuning will be a needless waste of time.

One example case where a transfer is expected to cause errors is when transitioning between idle (sometimes referred to as "sleep" in Broadcom code) and active state on certain Broadcom WiFi SDIO cards. Specifically if the card was already transitioning between states when the command was sent it could cause an error on the SDIO bus.

Let's add an API that the SDIO function drivers can call that will temporarily disable the auto-tuning functionality. Then we can add a call to this in the Broadcom WiFi driver and any other driver that might have similar needs.

NOTE: this makes the assumption that the card is already tuned well enough that it's OK to disable the auto-retuning during one of these error-prone situations. Presumably the driver code performing the error-prone transfer knows how to recover / retry from errors. ...and after we can get back to a state where transfers are no longer error-prone then we can enable the auto-retuning again.

If we truly find ourselves in a case where the card needs to be retuned sometimes to handle one of these error-prone transfers then we can always try a few transfers first without auto-retuning and then re-try with auto-retuning if the first few fail.

Without this change on rk3288-veyron-minnie I periodically see this in the logs of a machine just sitting there idle:

  dwmmc_rockchip ff0d0000.dwmmc: Successfully tuned phase to XYZ

Cc: stable@vger.kernel.org # v4.18+
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
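A sketch of how an SDIO function driver might use the new API around an error-prone transfer; this assumes the helper names sdio_retune_crc_disable()/sdio_retune_crc_enable() added by this change, and bcm_sleep_cmd_sketch() is a made-up placeholder for the command that may legitimately fail.

    /* Sketch only: wrap a transfer that is expected to throw CRC errors so
     * the core does not schedule a needless retune for each -EILSEQ.
     */
    static int send_sleep_cmd_sketch(struct sdio_func *func)
    {
            int err;

            sdio_claim_host(func);
            sdio_retune_crc_disable(func);     /* assumed helper from this series */

            err = bcm_sleep_cmd_sketch(func);  /* placeholder, may return -EILSEQ */

            sdio_retune_crc_enable(func);
            sdio_release_host(func);
            return err;
    }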
-
Submitted by Raul E Rangel
commit 0f7b79a44e7d7dd3ef1f59758c1a341f217ff5e5 upstream.

The O2Micro controller only supports tuning at 4 bits, so the host driver needs to change the bus width while tuning and then set it back when done.

There was a bug in the original implementation: mmc->ios.bus_width was not updated as well, which set an incorrect blocksize in sdhci_send_tuning and resulted in a tuning failure.

Signed-off-by: Raul E Rangel <rrangel@chromium.org>
Fixes: 0086fc21 ("mmc: sdhci: Add support for O2 hardware tuning")
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
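The shape of the fix, as a hedged sketch (helper names here are illustrative, not the exact sdhci-pci-o2micro code): the key detail is that mmc->ios.bus_width is updated along with the controller register, so sdhci_send_tuning() picks the 4-bit tuning block size.

    static int o2_tuning_sketch(struct sdhci_host *host, u32 opcode)
    {
            struct mmc_host *mmc = host->mmc;
            unsigned char saved_width = mmc->ios.bus_width;
            int err;

            if (saved_width == MMC_BUS_WIDTH_8) {
                    /* Controller only tunes at 4 bits: switch both the
                     * hardware and the cached ios value, so the tuning
                     * block size is chosen correctly.
                     */
                    mmc->ios.bus_width = MMC_BUS_WIDTH_4;
                    sdhci_set_bus_width(host, MMC_BUS_WIDTH_4);
            }

            err = do_tuning_sketch(host, opcode);        /* placeholder */

            if (saved_width == MMC_BUS_WIDTH_8) {
                    mmc->ios.bus_width = MMC_BUS_WIDTH_8;
                    sdhci_set_bus_width(host, MMC_BUS_WIDTH_8);
            }
            return err;
    }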
-
Submitted by Harald Freudenberger
[ Upstream commit 159491f3b509bd8101199944dc7b0673b881c734 ]

The inline assembler functions ap_aqic() and ap_qact() used two variables declared on the very same register. One variable was for input only, the other for output. It looks like newer versions of gcc don't like this. In any case it is better coding to use one variable (which may have a union data type) on one register for both input and output.

So this patch introduces unions and now uses only one variable for input and output on GR1 for the PQAP(QACT) and PQAP(QIC) invocations.

Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Acked-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Ilya Leoshkevich
[ Upstream commit 146448524bddbf6dfc62de31957e428de001cbda ]

[heiko.carstens@de.ibm.com]:
-----
Laura Abbott reported that the kernel doesn't build anymore with gcc 9, due to the "X" constraint. Ilya provided the gcc 9 patch "S/390: Introduce jdd constraint" which introduces the new "jdd" constraint which fixes this.
-----

The support for section anchors on S/390 introduced in gcc9 has changed the behavior of the "X" constraint, which can now produce register references. Since existing constraints, in particular "i", do not fit the intended use case on S/390, the new machine-specific "jdd" constraint was introduced. This patch makes jump labels use the "jdd" constraint when building with gcc9.

Reported-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Arnd Bergmann
[ Upstream commit 1dac6f5b0ed2601be21bb4e27a44b0c3e667b7f4 ]

gcc gets a bit confused by the logic in ovl_setup_trap() and can't figure out whether the local 'trap' variable in the caller was initialized or not:

  fs/overlayfs/super.c: In function 'ovl_fill_super':
  fs/overlayfs/super.c:1333:4: error: 'trap' may be used uninitialized in this function [-Werror=maybe-uninitialized]
      iput(trap);
      ^~~~~~~~~~
  fs/overlayfs/super.c:1312:17: note: 'trap' was declared here

Reword slightly to make it easier for the compiler to understand.

Fixes: 146d62e5a586 ("ovl: detect overlapping layers")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
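One common way to quiet this class of warning is to make the store to the output pointer unconditional on the single success path; a generic sketch of the pattern, not the actual overlayfs change:

    /* Pattern that can confuse -Wmaybe-uninitialized: the callee only
     * sometimes writes *trap, while the caller frees it on error paths.
     * Keeping exactly one store, after all error returns, lets the
     * compiler prove the caller's local is initialized before use.
     */
    static int setup_trap_sketch(struct inode **trap, int want)
    {
            struct inode *t;

            if (!want)
                    return -EINVAL;           /* early return: *trap untouched */

            t = grab_trap_inode_sketch();     /* placeholder allocator */
            if (!t)
                    return -ENOMEM;

            *trap = t;                        /* single, unconditional store */
            return 0;
    }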
-
Submitted by Miklos Szeredi
[ Upstream commit 9179c21dc6ed1c993caa5fe4da876a6765c26af7 ]

NFS mounts can be disconnected from fs root. Don't fail the overlapping layer check because of this. The check is not authoritative anyway, since topology can change during or after the check.

Reported-by: Antti Antinoja <antti@fennosys.fi>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Fixes: 146d62e5a586 ("ovl: detect overlapping layers")
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Amir Goldstein
[ Upstream commit 146d62e5a5867fbf84490d82455718bfb10fe824 ] Overlapping overlay layers are not supported and can cause unexpected behavior, but overlayfs does not currently check or warn about these configurations. User is not supposed to specify the same directory for upper and lower dirs or for different lower layers and user is not supposed to specify directories that are descendants of each other for overlay layers, but that is exactly what this zysbot repro did: https://syzkaller.appspot.com/x/repro.syz?x=12c7a94f400000 Moving layer root directories into other layers while overlayfs is mounted could also result in unexpected behavior. This commit places "traps" in the overlay inode hash table. Those traps are dummy overlay inodes that are hashed by the layers root inodes. On mount, the hash table trap entries are used to verify that overlay layers are not overlapping. While at it, we also verify that overlay layers are not overlapping with directories "in-use" by other overlay instances as upperdir/workdir. On lookup, the trap entries are used to verify that overlay layers root inodes have not been moved into other layers after mount. Some examples: $ ./run --ov --samefs -s ... ( mkdir -p base/upper/0/u base/upper/0/w base/lower lower upper mnt mount -o bind base/lower lower mount -o bind base/upper upper mount -t overlay none mnt ... -o lowerdir=lower,upperdir=upper/0/u,workdir=upper/0/w) $ umount mnt $ mount -t overlay none mnt ... -o lowerdir=base,upperdir=upper/0/u,workdir=upper/0/w [ 94.434900] overlayfs: overlapping upperdir path mount: mount overlay on mnt failed: Too many levels of symbolic links $ mount -t overlay none mnt ... -o lowerdir=upper/0/u,upperdir=upper/0/u,workdir=upper/0/w [ 151.350132] overlayfs: conflicting lowerdir path mount: none is already mounted or mnt busy $ mount -t overlay none mnt ... -o lowerdir=lower:lower/a,upperdir=upper/0/u,workdir=upper/0/w [ 201.205045] overlayfs: overlapping lowerdir path mount: mount overlay on mnt failed: Too many levels of symbolic links $ mount -t overlay none mnt ... -o lowerdir=lower,upperdir=upper/0/u,workdir=upper/0/w $ mv base/upper/0/ base/lower/ $ find mnt/0 mnt/0 mnt/0/w find: 'mnt/0/w/work': Too many levels of symbolic links find: 'mnt/0/u': Too many levels of symbolic links Reported-by: syzbot+9c69c282adc4edd2b540@syzkaller.appspotmail.com Signed-off-by: NAmir Goldstein <amir73il@gmail.com> Signed-off-by: NMiklos Szeredi <mszeredi@redhat.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by Amir Goldstein
[ Upstream commit 6dde1e42f497b2d4e22466f23019016775607947 ]

Relax the condition under which overlayfs supports nfs export, which required that i_ino is consistent with st_ino/d_ino. It is enough to require that st_ino and d_ino are consistent.

This fixes the failure of xfstest generic/504, due to a mismatch of st_ino to the inode number in the output of /proc/locks.

Fixes: 12574a9f ("ovl: consistent i_ino for non-samefs with xino")
Cc: <stable@vger.kernel.org> # v4.19
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Amir Goldstein
[ Upstream commit 941d935ac7636911a3fd8fa80e758e52b0b11e20 ]

The ioctl argument was parsed as the wrong type.

Fixes: b21d9c435f93 ("ovl: support the FS_IOC_FS[SG]ETXATTR ioctls")
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Amir Goldstein
[ Upstream commit b21d9c435f935014d3e3fa6914f2e4fbabb0e94d ]

These are the extended version of the FS_IOC_FS[SG]ETFLAGS ioctls. xfs_io -c "chattr <flags>" uses the new ioctls for setting flags. This used to work in kernels before v4.19, before stacked file ops introduced the ovl_ioctl whitelist.

Reported-by: Dave Chinner <david@fromorbit.com>
Fixes: d1d04ef8 ("ovl: stack file ops")
Cc: <stable@vger.kernel.org> # v4.19
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
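For reference, a small user-space sketch of the two ioctl families involved: FS_IOC_GETFLAGS takes an integer flags word, while FS_IOC_FSGETXATTR/FS_IOC_FSSETXATTR (the extended variants whitelisted here) operate on a struct fsxattr.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(int argc, char **argv)
    {
            struct fsxattr fsx;
            int flags;
            int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY);

            if (fd < 0)
                    return 1;

            if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0)    /* legacy flags word */
                    printf("FS_IOC_GETFLAGS: 0x%x\n", flags);

            if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) == 0)    /* extended variant */
                    printf("FS_IOC_FSGETXATTR: xflags 0x%x\n", fsx.fsx_xflags);

            close(fd);
            return 0;
    }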
-
Submitted by Linus Torvalds
commit 6f303d60534c46aa1a239f29c321f95c83dda748 upstream.

We already did this for clang, but now gcc has that warning too. Yes, yes, the address may be unaligned. And that's kind of the point.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
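A tiny example of the construct the warning fires on (compile with -Waddress-of-packed-member to see it); taking such an address is intentional in networking and on-disk structures, which is why the build now silences the warning for gcc as it already did for clang.

    #include <stdio.h>

    struct __attribute__((packed)) header {
            unsigned char type;
            unsigned int  length;   /* at offset 1: likely misaligned */
    };

    int main(void)
    {
            struct header h = { .type = 1, .length = 42 };
            unsigned int *p = &h.length;   /* -Waddress-of-packed-member fires here */

            printf("%u\n", *p);
            return 0;
    }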
-
Submitted by Allan Xavier
commit 4a60aa05a0634241ce17f957bf9fb5ac1eed6576 upstream.

Add support for processing switch jump tables in objects with multiple .rodata sections, such as those created by '-ffunction-sections' and '-fdata-sections'. Currently, objtool always looks in .rodata for jump table information, which results in many "sibling call from callable instruction with modified stack frame" warnings with objects compiled using those flags.

The fix is comprised of three parts:

1. Flagging all .rodata sections when importing ELF information for easier checking later.

2. Keeping a reference to the section each relocation is from in order to get the list_head for the other relocations in that section.

3. Finding jump tables by following relocations to .rodata sections, rather than always referencing a single global .rodata section.

The patch has been tested without data sections enabled and no differences in the resulting orc unwind information were seen. Note that as objtool adds terminators to the end of each .text section, the unwind information generated between a function+data sections build and a normal build aren't directly comparable. Manual inspection suggests that objtool is now generating the correct information, or at least making more of an effort to do so than it did previously.

Signed-off-by: Allan Xavier <allan.x.xavier@oracle.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/099bdc375195c490dda04db777ee0b95d566ded1.1536325914.git.jpoimboe@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Miguel Ojeda
commit 0c97bf863efce63d6ab7971dad811601e6171d2f upstream.

Starting with GCC 9, -Warray-bounds detects cases when memset is called starting on a member of a struct but the size to be cleared ends up writing over further members. Such a call happens in the trace code to clear, at once, all members after and including `seq` on struct trace_iterator:

  In function 'memset', inlined from 'ftrace_dump' at kernel/trace/trace.c:8914:3:
  ./include/linux/string.h:344:9: warning: '__builtin_memset' offset [8505, 8560] from the object at 'iter' is out of the bounds of referenced subobject 'seq' with type 'struct trace_seq' at offset 4368 [-Warray-bounds]
    344 |  return __builtin_memset(p, c, size);
        |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to avoid GCC complaining about it, we compute the address ourselves by adding the offsetof distance instead of referring directly to the member. Since there are two places doing this clear (trace.c and trace_kdb.c), take the chance to move the workaround into a single place in the internal header.

Link: http://lkml.kernel.org/r/20190523124535.GA12931@gmail.com
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
[ Removed unnecessary parenthesis around "iter" ]
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
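The workaround pattern, reduced to a standalone sketch (the struct here is invented; the kernel's version lives in the tracing internal header and operates on struct trace_iterator): instead of memset(&obj.member, ...), the start address is computed from the base pointer with offsetof, so the compiler no longer sees a write that runs past the named member.

    #include <stdio.h>
    #include <string.h>
    #include <stddef.h>

    struct iter_sketch {
            int  keep_me;      /* must survive the reset           */
            int  seq;          /* reset starts here ...            */
            char buf[32];      /* ... and covers everything after  */
            long pos;
    };

    static void reset_tail(struct iter_sketch *it)
    {
            /* memset(&it->seq, 0, sizeof(*it) - offsetof(...)) trips
             * -Warray-bounds in GCC 9; computing the address from the base
             * pointer expresses the same clear without the warning.
             */
            memset((char *)it + offsetof(struct iter_sketch, seq), 0,
                   sizeof(*it) - offsetof(struct iter_sketch, seq));
    }

    int main(void)
    {
            struct iter_sketch it = { .keep_me = 7, .seq = 3, .pos = 9 };

            reset_tail(&it);
            printf("%d %d %ld\n", it.keep_me, it.seq, it.pos);   /* 7 0 0 */
            return 0;
    }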
-
- 22 June 2019, 22 commits
-
-
Submitted by Greg Kroah-Hartman
-
Submitted by Eric Dumazet
commit b6653b3629e5b88202be3c9abc44713973f5c4b4 upstream.

tcp_fragment() might be called for skbs in the write queue. Memory limits might have been exceeded because tcp_sendmsg() only checks limits at full skb (64KB) boundaries. Therefore, we need to make sure tcp_fragment() won't punish applications that might have set up very low SO_SNDBUF values.

Fixes: f070ef2ac667 ("tcp: tcp_fragment() should apply sane memory limits")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Christoph Paasch <cpaasch@apple.com>
Tested-by: Christoph Paasch <cpaasch@apple.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Greg Kroah-Hartman
-
Submitted by Alexander Lochmann
commit f69e749a49353d96af1a293f56b5b56de59c668a upstream.

file_remove_privs() might be called for non-regular files, e.g. a blkdev inode. There is no reason to do its job on things like blkdev inodes, pipes, or cdevs. Hence, abort if the file does not refer to a regular inode.

AV: more to the point, for devices there might be any number of inodes referring to a given device. Which one to strip the permissions from, even if that made any sense in the first place? All of them will be observed with contents modified, after all.

Found by LockDoc (Alexander Lochmann, Horst Schirmeier and Olaf Spinczyk)

Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Alexander Lochmann <alexander.lochmann@tu-dortmund.de>
Signed-off-by: Horst Schirmeier <horst.schirmeier@tu-dortmund.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: Zubin Mithra <zsm@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
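The added guard amounts to an early return for anything that is not a regular file; a sketch of the shape of the check (the helper name and the exact ordering with the existing IS_NOSEC() test are assumptions, not the literal patch):

    /* Sketch: bail out of privilege stripping for non-regular inodes. */
    static int remove_privs_sketch(struct file *file)
    {
            struct inode *inode = file_inode(file);

            if (IS_NOSEC(inode) || !S_ISREG(inode->i_mode))
                    return 0;       /* pipes, cdevs, blkdevs: nothing to strip */

            return do_strip_privs_sketch(file);   /* placeholder for the real work */
    }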
-
Submitted by Andrea Arcangeli
commit 59ea6d06cfa9247b586a695c21f94afa7183af74 upstream. When fixing the race conditions between the coredump and the mmap_sem holders outside the context of the process, we focused on mmget_not_zero()/get_task_mm() callers in 04f5866e41fb70 ("coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping"), but those aren't the only cases where the mmap_sem can be taken outside of the context of the process as Michal Hocko noticed while backporting that commit to older -stable kernels. If mmgrab() is called in the context of the process, but then the mm_count reference is transferred outside the context of the process, that can also be a problem if the mmap_sem has to be taken for writing through that mm_count reference. khugepaged registration calls mmgrab() in the context of the process, but the mmap_sem for writing is taken later in the context of the khugepaged kernel thread. collapse_huge_page() after taking the mmap_sem for writing doesn't modify any vma, so it's not obvious that it could cause a problem to the coredump, but it happens to modify the pmd in a way that breaks an invariant that pmd_trans_huge_lock() relies upon. collapse_huge_page() needs the mmap_sem for writing just to block concurrent page faults that call pmd_trans_huge_lock(). Specifically the invariant that "!pmd_trans_huge()" cannot become a "pmd_trans_huge()" doesn't hold while collapse_huge_page() runs. The coredump will call __get_user_pages() without mmap_sem for reading, which eventually can invoke a lockless page fault which will need a functional pmd_trans_huge_lock(). So collapse_huge_page() needs to use mmget_still_valid() to check it's not running concurrently with the coredump... as long as the coredump can invoke page faults without holding the mmap_sem for reading. This has "Fixes: khugepaged" to facilitate backporting, but in my view it's more a bug in the coredump code that will eventually have to be rewritten to stop invoking page faults without the mmap_sem for reading. So the long term plan is still to drop all mmget_still_valid(). Link: http://lkml.kernel.org/r/20190607161558.32104-1-aarcange@redhat.com Fixes: ba76149f ("thp: khugepaged") Signed-off-by: NAndrea Arcangeli <aarcange@redhat.com> Reported-by: NMichal Hocko <mhocko@suse.com> Acked-by: NMichal Hocko <mhocko@suse.com> Acked-by: NKirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Mike Rapoport <rppt@linux.vnet.ibm.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Peter Xu <peterx@redhat.com> Cc: Jason Gunthorpe <jgg@mellanox.com> Cc: <stable@vger.kernel.org> Signed-off-by: NAndrew Morton <akpm@linux-foundation.org> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org> Signed-off-by: NGreg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Tobin C. Harding
[ Upstream commit b9fba67b3806e21b98bd5a98dc3921a8e9b42d61 ]

If a call to kobject_init_and_add() fails we should call kobject_put(), otherwise we leak memory.

Add a call to kobject_put() in the error path of the call to kobject_init_and_add(). Please note, this has the side effect that the release method is called if kobject_init_and_add() fails.

Link: http://lkml.kernel.org/r/20190513033458.2824-1-tobin@kernel.org
Signed-off-by: Tobin C. Harding <tobin@kernel.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
Cc: Mark Fasheh <mark@fasheh.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Changwei Ge <gechangwei@live.cn>
Cc: Gang He <ghe@suse.com>
Cc: Jun Piao <piaojun@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
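The error-handling shape this fixes, as a generic sketch (the kobject, ktype, and names are placeholders, not the ocfs2 code): on kobject_init_and_add() failure the caller must drop its reference with kobject_put(), which also means the release callback will run.

    #include <linux/kobject.h>
    #include <linux/slab.h>

    static void demo_release(struct kobject *kobj)
    {
            kfree(kobj);                    /* release frees the embedded object */
    }

    static struct kobj_type demo_ktype = {
            .release = demo_release,
    };

    static int demo_register(struct kobject *parent)
    {
            struct kobject *kobj;
            int ret;

            kobj = kzalloc(sizeof(*kobj), GFP_KERNEL);
            if (!kobj)
                    return -ENOMEM;

            ret = kobject_init_and_add(kobj, &demo_ktype, parent, "demo");
            if (ret) {
                    /* Must not just kfree(): the kobject already holds a
                     * reference, so drop it and let ->release free things.
                     */
                    kobject_put(kobj);
                    return ret;
            }
            return 0;
    }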
-
Submitted by Amit Cohen
[ Upstream commit 275e928f19117d22f6d26dee94548baf4041b773 ]

Forcing 56G is not supported by the hardware in Ethernet devices. This configuration fails with a bad parameter error from firmware. Add a check for this case: instead of trying to set 56G with autoneg off, return a meaningful error.

Fixes: 56ade8fe ("mlxsw: spectrum: Add initial support for Spectrum ASIC")
Signed-off-by: Amit Cohen <amitc@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Jason Yan
[ Upstream commit 3b0541791453fbe7f42867e310e0c9eb6295364d ] The sas_port(phy->port) allocated in sas_ex_discover_expander() will not be deleted when the expander failed to discover. This will cause resource leak and a further issue of kernel BUG like below: [159785.843156] port-2:17:29: trying to add phy phy-2:17:29 fails: it's already part of another port [159785.852144] ------------[ cut here ]------------ [159785.856833] kernel BUG at drivers/scsi/scsi_transport_sas.c:1086! [159785.863000] Internal error: Oops - BUG: 0 [#1] SMP [159785.867866] CPU: 39 PID: 16993 Comm: kworker/u96:2 Tainted: G W OE 4.19.25-vhulk1901.1.0.h111.aarch64 #1 [159785.878458] Hardware name: Huawei Technologies Co., Ltd. Hi1620EVBCS/Hi1620EVBCS, BIOS Hi1620 CS B070 1P TA 03/21/2019 [159785.889231] Workqueue: 0000:74:02.0_disco_q sas_discover_domain [159785.895224] pstate: 40c00009 (nZcv daif +PAN +UAO) [159785.900094] pc : sas_port_add_phy+0x188/0x1b8 [159785.904524] lr : sas_port_add_phy+0x188/0x1b8 [159785.908952] sp : ffff0001120e3b80 [159785.912341] x29: ffff0001120e3b80 x28: 0000000000000000 [159785.917727] x27: ffff802ade8f5400 x26: ffff0000681b7560 [159785.923111] x25: ffff802adf11a800 x24: ffff0000680e8000 [159785.928496] x23: ffff802ade8f5728 x22: ffff802ade8f5708 [159785.933880] x21: ffff802adea2db40 x20: ffff802ade8f5400 [159785.939264] x19: ffff802adea2d800 x18: 0000000000000010 [159785.944649] x17: 00000000821bf734 x16: ffff00006714faa0 [159785.950033] x15: ffff0000e8ab4ecf x14: 7261702079646165 [159785.955417] x13: 726c612073277469 x12: ffff00006887b830 [159785.960802] x11: ffff00006773eaa0 x10: 7968702079687020 [159785.966186] x9 : 0000000000002453 x8 : 726f702072656874 [159785.971570] x7 : 6f6e6120666f2074 x6 : ffff802bcfb21290 [159785.976955] x5 : ffff802bcfb21290 x4 : 0000000000000000 [159785.982339] x3 : ffff802bcfb298c8 x2 : 337752b234c2ab00 [159785.987723] x1 : 337752b234c2ab00 x0 : 0000000000000000 [159785.993108] Process kworker/u96:2 (pid: 16993, stack limit = 0x0000000072dae094) [159786.000576] Call trace: [159786.003097] sas_port_add_phy+0x188/0x1b8 [159786.007179] sas_ex_get_linkrate.isra.5+0x134/0x140 [159786.012130] sas_ex_discover_expander+0x128/0x408 [159786.016906] sas_ex_discover_dev+0x218/0x4c8 [159786.021249] sas_ex_discover_devices+0x9c/0x1a8 [159786.025852] sas_discover_root_expander+0x134/0x160 [159786.030802] sas_discover_domain+0x1b8/0x1e8 [159786.035148] process_one_work+0x1b4/0x3f8 [159786.039230] worker_thread+0x54/0x470 [159786.042967] kthread+0x134/0x138 [159786.046269] ret_from_fork+0x10/0x18 [159786.049918] Code: 91322300 f0004402 91178042 97fe4c9b (d4210000) [159786.056083] Modules linked in: hns3_enet_ut(OE) hclge(OE) hnae3(OE) hisi_sas_test_hw(OE) hisi_sas_test_main(OE) serdes(OE) [159786.067202] ---[ end trace 03622b9e2d99e196 ]--- [159786.071893] Kernel panic - not syncing: Fatal exception [159786.077190] SMP: stopping secondary CPUs [159786.081192] Kernel Offset: disabled [159786.084753] CPU features: 0x2,a2a00a38 Fixes: 2908d778 ("[SCSI] aic94xx: new driver") Reported-by: NJian Luo <luojian5@huawei.com> Signed-off-by: NJason Yan <yanaijie@huawei.com> CC: John Garry <john.garry@huawei.com> Signed-off-by: NMartin K. Petersen <martin.petersen@oracle.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
-
Submitted by YueHaibing
[ Upstream commit 12e750bc62044de096ab9a95201213fd912b9994 ] If alloc_workqueue fails in alua_init, it should return -ENOMEM, otherwise it will trigger null-ptr-deref while unloading module which calls destroy_workqueue dereference wq->lock like this: BUG: KASAN: null-ptr-deref in __lock_acquire+0x6b4/0x1ee0 Read of size 8 at addr 0000000000000080 by task syz-executor.0/7045 CPU: 0 PID: 7045 Comm: syz-executor.0 Tainted: G C 5.1.0+ #28 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 Call Trace: dump_stack+0xa9/0x10e __kasan_report+0x171/0x18d ? __lock_acquire+0x6b4/0x1ee0 kasan_report+0xe/0x20 __lock_acquire+0x6b4/0x1ee0 lock_acquire+0xb4/0x1b0 __mutex_lock+0xd8/0xb90 drain_workqueue+0x25/0x290 destroy_workqueue+0x1f/0x3f0 __x64_sys_delete_module+0x244/0x330 do_syscall_64+0x72/0x2a0 entry_SYSCALL_64_after_hwframe+0x49/0xbe Reported-by: NHulk Robot <hulkci@huawei.com> Fixes: 03197b61 ("scsi_dh_alua: Use workqueue for RTPG") Signed-off-by: NYueHaibing <yuehaibing@huawei.com> Reviewed-by: NBart Van Assche <bvanassche@acm.org> Signed-off-by: NMartin K. Petersen <martin.petersen@oracle.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
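The corrected init path looks roughly like the sketch below (the workqueue name and flags here are illustrative): module init must return -ENOMEM instead of falling through when alloc_workqueue() fails, so module unload never calls destroy_workqueue() on a NULL pointer.

    static struct workqueue_struct *kaluad_wq;

    static int __init alua_init_sketch(void)
    {
            kaluad_wq = alloc_workqueue("kaluad", WQ_MEM_RECLAIM, 0);
            if (!kaluad_wq)
                    return -ENOMEM;         /* previously fell through silently */

            /* ... register the device handler, destroying the workqueue
             * again if that registration fails ...
             */
            return 0;
    }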
-
Submitted by Lianbo Jiang
[ Upstream commit 1d94f06e7f5df4064ef336b7b710f50143b64a53 ] When SME is enabled, the smartpqi driver won't work on the HP DL385 G10 machine, which causes the failure of kernel boot because it fails to allocate pqi error buffer. Please refer to the kernel log: .... [ 9.431749] usbcore: registered new interface driver uas [ 9.441524] Microsemi PQI Driver (v1.1.4-130) [ 9.442956] i40e 0000:04:00.0: fw 6.70.48768 api 1.7 nvm 10.2.5 [ 9.447237] smartpqi 0000:23:00.0: Microsemi Smart Family Controller found Starting dracut initqueue hook... [ OK ] Started Show Plymouth Boot Scre[ 9.471654] Broadcom NetXtreme-C/E driver bnxt_en v1.9.1 en. [ OK ] Started Forward Password Requests to Plymouth Directory Watch. [[0;[ 9.487108] smartpqi 0000:23:00.0: failed to allocate PQI error buffer .... [ 139.050544] dracut-initqueue[949]: Warning: dracut-initqueue timeout - starting timeout scripts [ 139.589779] dracut-initqueue[949]: Warning: dracut-initqueue timeout - starting timeout scripts Basically, the fact that the coherent DMA mask value wasn't set caused the driver to fall back to SWIOTLB when SME is active. For correct operation, lets call the dma_set_mask_and_coherent() to properly set the mask for both streaming and coherent, in order to inform the kernel about the devices DMA addressing capabilities. Signed-off-by: NLianbo Jiang <lijiang@redhat.com> Acked-by: NDon Brace <don.brace@microsemi.com> Tested-by: NDon Brace <don.brace@microsemi.com> Signed-off-by: NMartin K. Petersen <martin.petersen@oracle.com> Signed-off-by: NSasha Levin <sashal@kernel.org>
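The essence of the fix is a single call early in probe that declares the device's 64-bit addressing capability for both streaming and coherent mappings; a sketch of the pattern, not the exact smartpqi diff:

    static int set_dma_caps_sketch(struct pci_dev *pci_dev)
    {
            int rc;

            /* Declare 64-bit addressing for both streaming and coherent
             * mappings so SME/SWIOTLB setups pick the right DMA ops and
             * the coherent error-buffer allocation can succeed.
             */
            rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
            if (rc)
                    dev_err(&pci_dev->dev, "failed to set DMA mask\n");
            return rc;
    }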
-
Submitted by Varun Prakash
[ Upstream commit cc555759117e8349088e0c5d19f2f2a500bafdbd ]

ip_dev_find() can return NULL, so add a check for a NULL pointer.

Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
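The defensive pattern being added is simply the following (sketch; the surrounding function is invented for illustration):

    #include <linux/inetdevice.h>

    static int resolve_local_addr_sketch(struct net *net, __be32 saddr)
    {
            struct net_device *ndev = ip_dev_find(net, saddr);

            if (!ndev)
                    return -ENODEV;     /* address not local: bail out cleanly */

            /* ... use ndev ... */

            dev_put(ndev);              /* ip_dev_find() returns a held reference */
            return 0;
    }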
-
Submitted by Max Uvarov
[ Upstream commit 2b892649254fec01678c64f16427622b41fa27f4 ]

PHY_INTERFACE_MODE_RGMII_RXID is less than TXID, so the code to set the tx delay is never called.

Fixes: 2a10154a ("net: phy: dp83867: Add TI dp83867 phy")
Signed-off-by: Max Uvarov <muvarov@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
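In other words, the TX internal delay has to be programmed for every mode that includes the TX direction, not only for modes that happen to sort after RGMII_RXID in the enum; a sketch of the intended checks (not the literal dp83867 diff):

    static bool wants_tx_delay(phy_interface_t mode)
    {
            /* TX internal delay applies to RGMII_ID and RGMII_TXID ... */
            return mode == PHY_INTERFACE_MODE_RGMII_ID ||
                   mode == PHY_INTERFACE_MODE_RGMII_TXID;
    }

    static bool wants_rx_delay(phy_interface_t mode)
    {
            /* ... while RX internal delay applies to RGMII_ID and RGMII_RXID. */
            return mode == PHY_INTERFACE_MODE_RGMII_ID ||
                   mode == PHY_INTERFACE_MODE_RGMII_RXID;
    }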
-
Submitted by Russell King
[ Upstream commit c678726305b9425454be7c8a7624290b602602fc ]

Ensure that we supply the same phy interface mode to mac_link_down() as we did for the corresponding mac_link_up() call. This ensures that MAC drivers that use the phy interface mode in these methods can depend on mac_link_down() always corresponding to a mac_link_up() call for the same interface mode.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Yoshihiro Shimoda
[ Upstream commit 315ca92dd863fecbffc0bb52ae0ac11e0398726a ]

sh_eth_close() resets the MAC and then calls phy_stop(), so the mdio read access result is incorrect without any error being reported, as shown by a kernel trace like the one below:

  ifconfig-216 [003] .n.. 109.133124: mdio_access: ee700000.ethernet-ffffffff read phy:0x01 reg:0x00 val:0xffff

According to the hardware manual, the RMII mode should be set to 1 before operating the Ethernet MAC. However, the previous code did not set it back to 1 after the driver issued the soft reset in sh_eth_dev_exit(), so the mdio read access result appeared incorrect. To fix the issue, this patch adds a condition and sets the RMII mode register in sh_eth_dev_exit() for R-Car Gen2 and RZ/A1 SoCs.

Note that I have tried to move the sh_eth_dev_exit() call after phy_stop() in sh_eth_close(), but it gets worse (a kernel panic happened and it seems that a register is accessed while the clock is off).

Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Sami Tolvanen
[ Upstream commit 1e29ab3186e33c77dbb2d7566172a205b59fa390 ]

Calling sys_ni_syscall through a syscall_fn_t pointer trips indirect call Control-Flow Integrity checking due to a function type mismatch. Use SYSCALL_DEFINE0 for __arm64_sys_ni_syscall instead and remove the now unnecessary casts.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Sami Tolvanen
[ Upstream commit 0e358bd7b7ebd27e491dabed938eae254c17fe3b ]

Although a syscall defined using SYSCALL_DEFINE0 doesn't accept parameters, use the correct function type to avoid indirect call type mismatches with Control-Flow Integrity checking.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Sami Tolvanen
[ Upstream commit 8ef8f368ce72b5e17f7c1f1ef15c38dcfd0fef64 ]

Syscall wrappers in <asm/syscall_wrapper.h> use const struct pt_regs * as the argument type. Use const in syscall_fn_t as well to fix indirect call type mismatches with Control-Flow Integrity checking.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
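Taken together, these three changes leave the arm64 syscall plumbing with one consistent indirect-call type; a condensed sketch, heavily simplified from the real headers:

    /* All syscall entries are invoked through this one pointer type, so CFI
     * can check that the target's prototype matches exactly.
     */
    typedef long (*syscall_fn_t)(const struct pt_regs *);

    /* SYSCALL_DEFINE0 expands to a wrapper that also takes
     * const struct pt_regs *, so even the "no arguments" case
     * (such as ni_syscall) has the same indirect-call type.
     */
    SYSCALL_DEFINE0(ni_syscall)
    {
            return -ENOSYS;
    }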
-
Submitted by Paul Mackerras
[ Upstream commit 5a3f49364c3ffa1107bd88f8292406e98c5d206c ]

Currently the HV KVM code takes the kvm->lock around calls to kvm_for_each_vcpu() and kvm_get_vcpu_by_id() (which can call kvm_for_each_vcpu() internally). However, that leads to a lock order inversion problem, because these are called in contexts where the vcpu mutex is held, but the vcpu mutexes nest within kvm->lock according to Documentation/virtual/kvm/locking.txt. Hence there is a possibility of deadlock.

To fix this, we simply don't take the kvm->lock mutex around these calls. This is safe because the implementations of kvm_for_each_vcpu() and kvm_get_vcpu_by_id() have been designed to be able to be called locklessly.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Paul Mackerras
[ Upstream commit 1659e27d2bc1ef47b6d031abe01b467f18cb72d9 ]

Currently the Book 3S KVM code uses kvm->lock to synchronize access to the kvm->arch.rtas_tokens list. Because this list is scanned inside kvmppc_rtas_hcall(), which is called with the vcpu mutex held, taking kvm->lock causes a lock inversion problem, which could lead to a deadlock.

To fix this, we add a new mutex, kvm->arch.rtas_token_lock, which nests inside the vcpu mutexes, and use that instead of kvm->lock when accessing the rtas token list.

This removes the lockdep_assert_held() in kvmppc_rtas_tokens_free(). At this point we don't hold the new mutex, but that is OK because kvmppc_rtas_tokens_free() is only called when the whole VM is being destroyed, and at that point nothing can be looking up a token in the list.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Ross Lagerwall
[ Upstream commit d10e0cc113c9e1b64b5c6e3db37b5c839794f3df ]

During a suspend/resume, the xenwatch thread waits for all outstanding xenstore requests and transactions to complete. This does not work correctly for transactions started by userspace because it waits for them to complete after freezing userspace threads, which means the transactions have no way of completing, resulting in a deadlock. This is trivial to reproduce by running this script and then suspending the VM:

  import pyxs, time
  c = pyxs.client.Client(xen_bus_path="/dev/xen/xenbus")
  c.connect()
  c.transaction()
  time.sleep(3600)

Even if this deadlock were resolved, misbehaving userspace should not prevent a VM from being migrated. So, instead of waiting for these transactions to complete before suspending, store the current generation id for each transaction when it is started. The global generation id is incremented during resume. If the caller commits the transaction and the generation id does not match the current generation id, return EAGAIN so that they try again. If the transaction was instead discarded, return OK since no changes were made anyway.

This only affects users of the xenbus file interface. In-kernel users of xenbus are assumed to be well-behaved and complete all transactions before freezing.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
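A condensed sketch of the commit-time check described above (the structure, field, and function names are invented for illustration, not the xenbus_dev_frontend code):

    /* Each user transaction remembers the generation it was started in;
     * the global counter is bumped on resume.
     */
    struct xb_trans_sketch {
            unsigned int generation;
    };

    static unsigned int xb_generation;      /* incremented during resume */

    static int trans_end_sketch(struct xb_trans_sketch *t, bool commit)
    {
            if (commit && t->generation != xb_generation)
                    return -EAGAIN;   /* started before suspend: caller retries */

            return 0;                 /* commit in same generation, or discard */
    }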
-
Submitted by YueHaibing
[ Upstream commit 41349672e3cbc2e8349831f21253509c3415aa2b ]

Fixes gcc '-Wunused-but-set-variable' warning:

  drivers/xen/pvcalls-front.c: In function pvcalls_front_sendmsg:
  drivers/xen/pvcalls-front.c:543:25: warning: variable bedata set but not used [-Wunused-but-set-variable]
  drivers/xen/pvcalls-front.c: In function pvcalls_front_recvmsg:
  drivers/xen/pvcalls-front.c:638:25: warning: variable bedata set but not used [-Wunused-but-set-variable]

They have never been used since their introduction.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Randy Dunlap
[ Upstream commit 9a626c4a6326da4433a0d4d4a8a7d1571caf1ed3 ] Fix build errors on ia64 when DISCONTIGMEM=y and NUMA=y by exporting paddr_to_nid(). Fixes these build errors: ERROR: "paddr_to_nid" [sound/core/snd-pcm.ko] undefined! ERROR: "paddr_to_nid" [net/sunrpc/sunrpc.ko] undefined! ERROR: "paddr_to_nid" [fs/cifs/cifs.ko] undefined! ERROR: "paddr_to_nid" [drivers/video/fbdev/core/fb.ko] undefined! ERROR: "paddr_to_nid" [drivers/usb/mon/usbmon.ko] undefined! ERROR: "paddr_to_nid" [drivers/usb/core/usbcore.ko] undefined! ERROR: "paddr_to_nid" [drivers/md/raid1.ko] undefined! ERROR: "paddr_to_nid" [drivers/md/dm-mod.ko] undefined! ERROR: "paddr_to_nid" [drivers/md/dm-crypt.ko] undefined! ERROR: "paddr_to_nid" [drivers/md/dm-bufio.ko] undefined! ERROR: "paddr_to_nid" [drivers/ide/ide-core.ko] undefined! ERROR: "paddr_to_nid" [drivers/ide/ide-cd_mod.ko] undefined! ERROR: "paddr_to_nid" [drivers/gpu/drm/drm.ko] undefined! ERROR: "paddr_to_nid" [drivers/char/agp/agpgart.ko] undefined! ERROR: "paddr_to_nid" [drivers/block/nbd.ko] undefined! ERROR: "paddr_to_nid" [drivers/block/loop.ko] undefined! ERROR: "paddr_to_nid" [drivers/block/brd.ko] undefined! ERROR: "paddr_to_nid" [crypto/ccm.ko] undefined! Reported-by: Nkbuild test robot <lkp@intel.com> Signed-off-by: NRandy Dunlap <rdunlap@infradead.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: linux-ia64@vger.kernel.org Signed-off-by: NTony Luck <tony.luck@intel.com> Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org> Signed-off-by: NSasha Levin <sashal@kernel.org>
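The fix itself is a one-line export next to the existing definition, roughly as below; the function body here is only a placeholder, since the actual change just adds the EXPORT_SYMBOL() line after the existing ia64 implementation.

    /* arch/ia64: make the NUMA helper usable from modules again */
    int paddr_to_nid(unsigned long paddr)
    {
            return paddr_to_nid_lookup_sketch(paddr);   /* placeholder for the lookup */
    }
    EXPORT_SYMBOL(paddr_to_nid);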
-