- March 10, 2019: 6 commits
-
-
Submitted by Gao Xiang
commit 1e5ceeab6929585512c63d05911d6657064abf7b upstream.

Consider a read request spanning two decompressed file pages. If a decompression work cannot be started on the previous page due to memory pressure, but the in-memory LTP map lookup has been done, builder->work should still be NULL. Moreover, if the current page also belongs to the same map, it won't try to start the decompression work again and then runs into trouble. This patch aims to solve the above issue with as few changes as possible in order to make the fix easier to backport.

The kernel messages are:

<4>[1051408f.015930s]SLUB: Unable to allocate memory on node -1, gfp=0x2408040(GFP_NOFS|__GFP_ZERO)
<4>[1051408f.015930s] cache: erofs_compress, object size: 144, buffer size: 144, default order: 0, min order: 0
<4>[1051408f.015930s] node 0: slabs: 98, objs: 2744, free: 0
  * Cannot allocate the decompression work
<3>[1051408f.015960s]erofs: z_erofs_vle_normalaccess_readpages, readahead error at page 1008 of nid 5391488
  * Note that the previous page failed to read
<0>[1051408f.015960s]Internal error: Accessing user space memory outside uaccess.h routines: 96000005 [#1] PREEMPT SMP
...
<4>[1051408f.015991s]Hardware name: kirin710 (DT)
...
<4>[1051408f.016021s]PC is at z_erofs_vle_work_add_page+0xa0/0x17c
<4>[1051408f.016021s]LR is at z_erofs_do_read_page+0x12c/0xcf0
...
<4>[1051408f.018096s][<ffffff80c6fb0fd4>] z_erofs_vle_work_add_page+0xa0/0x17c
<4>[1051408f.018096s][<ffffff80c6fb3814>] z_erofs_vle_normalaccess_readpages+0x1a0/0x37c
<4>[1051408f.018096s][<ffffff80c6d670b8>] read_pages+0x70/0x190
<4>[1051408f.018127s][<ffffff80c6d6736c>] __do_page_cache_readahead+0x194/0x1a8
<4>[1051408f.018127s][<ffffff80c6d59318>] filemap_fault+0x398/0x684
<4>[1051408f.018127s][<ffffff80c6d8a9e0>] __do_fault+0x8c/0x138
<4>[1051408f.018127s][<ffffff80c6d8f90c>] handle_pte_fault+0x730/0xb7c
<4>[1051408f.018127s][<ffffff80c6d8fe04>] __handle_mm_fault+0xac/0xf4
<4>[1051408f.018157s][<ffffff80c6d8fec8>] handle_mm_fault+0x7c/0x118
<4>[1051408f.018157s][<ffffff80c8c52998>] do_page_fault+0x354/0x474
<4>[1051408f.018157s][<ffffff80c8c52af8>] do_translation_fault+0x40/0x48
<4>[1051408f.018157s][<ffffff80c6c002f4>] do_mem_abort+0x80/0x100
<4>[1051408f.018310s]---[ end trace 9f4009a3283bd78b ]---

Fixes: 3883a79a ("staging: erofs: introduce VLE decompression support")
Cc: <stable@vger.kernel.org> # 4.19+
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Mans Rullgard
commit 8d7fa3d4ea3f0ca69554215e87411494e6346fdc upstream.

This adds the USB ID of the Hjelmslund Electronics USB485 Iso stick.

Signed-off-by: Mans Rullgard <mans@mansr.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Ivan Mironov
commit dd9d3d86b08d6a106830364879c42c78db85389c upstream.

Here is how this device appears in the kernel log:

usb 3-1: new full-speed USB device number 18 using xhci_hcd
usb 3-1: New USB device found, idVendor=0b00, idProduct=3070
usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 3-1: Product: Ingenico 3070
usb 3-1: Manufacturer: Silicon Labs
usb 3-1: SerialNumber: 0001

Apparently this is a POS terminal with embedded USB-to-Serial converter.

Cc: stable@vger.kernel.org
Signed-off-by: Ivan Mironov <mironov.ivan@gmail.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Daniele Palmas
commit 6431866b6707d27151be381252d6eef13025cfce upstream.

This patch adds Telit ME910 family ECM composition 0x1102.

Signed-off-by: Daniele Palmas <dnlplm@gmail.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Gao Xiang
commit a112152f6f3a2a88caa6f414d540bd49e406af60 upstream.

EROFS has an optimized path called TAIL merging, which is designed to merge multiple reads and the corresponding decompressions into one if these requests read continuous pages almost at the same time. In general, it behaves as follows:

 ________________________________________________________________
   ... |  TAIL . HEAD  |  PAGE  |  PAGE  |  TAIL . HEAD  | ...
 _____|_combined page A_|________|________|_combined page B_|____
       [        1        ] -> [   2   ] -> [        3

If the above three reads are requested in the order 1-2-3, it will generate a large work chain rather than 3 individual work chains to reduce scheduling overhead and boost up sequential read.

However, if Read 2 is processed slightly earlier than Read 1, currently it still generates 2 individual work chains (chain 1, 2) but it does in-place decompression for combined page A; moreover, if chain 2 decompresses ahead of chain 1, there will be a race that leads to a corrupted decompressed page. This patch fixes it.

Fixes: 3883a79a ("staging: erofs: introduce VLE decompression support")
Cc: <stable@vger.kernel.org> # 4.19+
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Viresh Kumar
commit 625c85a62cb7d3c79f6e16de3cfa972033658250 upstream.

The cpufreq_global_kobject is created using the kobject_create_and_add() helper, which assigns the kobj_type as dynamic_kobj_ktype, and the show/store routines are set to kobj_attr_show() and kobj_attr_store(). These routines pass a struct kobj_attribute as an argument to the show/store callbacks. But all the cpufreq files created using the cpufreq_global_kobject expect the argument to be of type struct attribute.

Things work fine currently as no one accesses the "attr" argument. We may not see issues even if the argument is used, as struct kobj_attribute has struct attribute as its first element, so both will get the same address. But this is logically incorrect, and we should rather use struct kobj_attribute instead of struct global_attr in the cpufreq core and drivers, with the show/store callbacks taking struct kobj_attribute as their argument instead.

This bug is caught using CFI CLANG builds in the android kernel, which catch mismatches in function prototypes for such callbacks.

Reported-by: Donghee Han <dh.han@samsung.com>
Reported-by: Sangkyu Kim <skwith.kim@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
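A minimal sketch of the prototype change described above (the "boost" attribute name here is illustrative, not copied from drivers/cpufreq). Files under cpufreq_global_kobject are dispatched through kobj_attr_show()/kobj_attr_store(), which hand the callback a struct kobj_attribute pointer, so the callback must take exactly that type for the prototypes (and CFI) to match:

	#include <linux/kobject.h>
	#include <linux/sysfs.h>

	static ssize_t show_boost(struct kobject *kobj,
				  struct kobj_attribute *attr, char *buf)
	{
		return sprintf(buf, "%d\n", 1);	/* report some global value */
	}

	static ssize_t store_boost(struct kobject *kobj,
				   struct kobj_attribute *attr,
				   const char *buf, size_t count)
	{
		return count;			/* accept and ignore, for the sketch */
	}

	/* struct kobj_attribute replaces the old struct global_attr */
	static struct kobj_attribute boost_attr =
		__ATTR(boost, 0644, show_boost, store_boost);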
-
- March 6, 2019: 34 commits
-
-
Submitted by Greg Kroah-Hartman
-
Submitted by Andy Lutomirski
commit 2a418cf3f5f1caf911af288e978d61c9844b0695 upstream.

When calling __put_user(foo(), ptr), the __put_user() macro would call foo() in between __uaccess_begin() and __uaccess_end(). If that code were buggy, then those bugs would be run without SMAP protection.

Fortunately, there seem to be few instances of the problem in the kernel. Nevertheless, __put_user() should be fixed to avoid doing this. Therefore, evaluate __put_user()'s argument before setting AC.

This issue was noticed when an objtool hack by Peter Zijlstra complained about genregs_get() and I compared the assembly output to the C source.

[ bp: Massage commit message and fixed up whitespace. ]

Fixes: 11f1a4b9 ("x86: reorganize SMAP handling in user space accesses")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20190225125231.845656645@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
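A conceptual sketch of the ordering change, not the real arch/x86 macro; do_raw_store() is a hypothetical stand-in for the architecture-specific store-with-exception-table helper. The point is only that the argument expression is evaluated before __uaccess_begin() sets AC, so arbitrary C code never runs with SMAP disabled:

	/* conceptual shape only; not the actual __put_user() implementation */
	#define __put_user_sketch(x, ptr)					\
	({									\
		__typeof__(*(ptr)) __pu_val = (x); /* evaluated with AC clear */\
		int __pu_err;							\
		__uaccess_begin();		/* STAC: open the window */	\
		__pu_err = do_raw_store(ptr, __pu_val);				\
		__uaccess_end();		/* CLAC: close the window */	\
		__pu_err;							\
	})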
-
Submitted by Paul Burton
commit d1a2930d8a992fb6ac2529449f81a0056e1b98d1 upstream.

The MIPS eBPF JIT calls flush_icache_range() in order to ensure the icache observes the code that we just wrote. Unfortunately it gets the end address calculation wrong due to some bad pointer arithmetic.

The struct jit_ctx target field is of type pointer to u32, and as such adding one to it will increment the address being pointed at by 4 bytes. Therefore, in order to find the address of the end of the code we simply need to add the number of 4 byte instructions emitted, but we mistakenly add the number of instructions multiplied by 4.

This results in the call to flush_icache_range() operating on a memory region 4x larger than intended, which is always wasteful and can cause crashes if we overrun into an unmapped page.

Fix this by correcting the pointer arithmetic to remove the bogus multiplication, and use braces to remove the need for a set of brackets whilst also making it obvious that the target field is a pointer.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: b6bd53f9 ("MIPS: Add missing file for eBPF JIT.")
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org
Cc: linux-mips@vger.kernel.org
Cc: stable@vger.kernel.org # v4.13+
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
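An illustrative sketch of the pointer arithmetic (ctx->target is a u32 *, so indexing already advances in 4-byte steps); the surrounding JIT code is elided and not copied verbatim from arch/mips/net/ebpf_jit.c:

	/* buggy: u32 pointer arithmetic already scales by 4, so this overshoots 4x */
	flush_icache_range((unsigned long)ctx->target,
			   (unsigned long)(ctx->target + ctx->idx * 4));

	/* fixed: end address is one element past the last emitted instruction */
	flush_icache_range((unsigned long)ctx->target,
			   (unsigned long)&ctx->target[ctx->idx]);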
-
Submitted by Jonas Gorski
commit 18836b48ebae20850631ee2916d0cdbb86df813d upstream.

The switch to the generic dma ops made dma masks mandatory, breaking devices having them not set. In the case of bcm63xx, it broke ethernet with the following warning when trying to up the device:

[ 2.633123] ------------[ cut here ]------------
[ 2.637949] WARNING: CPU: 0 PID: 325 at ./include/linux/dma-mapping.h:516 bcm_enetsw_open+0x160/0xbbc
[ 2.647423] Modules linked in: gpio_button_hotplug
[ 2.652361] CPU: 0 PID: 325 Comm: ip Not tainted 4.19.16 #0
[ 2.658080] Stack : 80520000 804cd3ec 00000000 00000000 804ccc00 87085bdc 87d3f9d4 804f9a17
[ 2.666707]         8049cf18 00000145 80a942a0 00000204 80ac0000 10008400 87085b90 eb3d5ab7
[ 2.675325]         00000000 00000000 80ac0000 000022b0 00000000 00000000 00000007 00000000
[ 2.683954]         0000007a 80500000 0013b381 00000000 80000000 00000000 804a1664 80289878
[ 2.692572]         00000009 00000204 80ac0000 00000200 00000002 00000000 00000000 80a90000
[ 2.701191]         ...
[ 2.703701] Call Trace:
[ 2.706244] [<8001f3c8>] show_stack+0x58/0x100
[ 2.710840] [<800336e4>] __warn+0xe4/0x118
[ 2.715049] [<800337d4>] warn_slowpath_null+0x48/0x64
[ 2.720237] [<80289878>] bcm_enetsw_open+0x160/0xbbc
[ 2.725347] [<802d1d4c>] __dev_open+0xf8/0x16c
[ 2.729913] [<802d20cc>] __dev_change_flags+0x100/0x1c4
[ 2.735290] [<802d21b8>] dev_change_flags+0x28/0x70
[ 2.740326] [<803539e0>] devinet_ioctl+0x310/0x7b0
[ 2.745250] [<80355fd8>] inet_ioctl+0x1f8/0x224
[ 2.749939] [<802af290>] sock_ioctl+0x30c/0x488
[ 2.754632] [<80112b34>] do_vfs_ioctl+0x740/0x7dc
[ 2.759459] [<80112c20>] ksys_ioctl+0x50/0x94
[ 2.763955] [<800240b8>] syscall_common+0x34/0x58
[ 2.768782] ---[ end trace fb1a6b14d74e28b6 ]---
[ 2.773544] bcm63xx_enetsw bcm63xx_enetsw.0: cannot allocate rx ring 512

Fix this by adding appropriate DMA masks for the platform devices.

Fixes: f8c55dc6 ("MIPS: use generic dma noncoherent ops for simple noncoherent platforms")
Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
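A hedged sketch of the kind of change this implies: give the ethernet platform devices an explicit DMA mask so the generic DMA ops accept mappings again. The device and variable names below are illustrative, not copied from arch/mips/bcm63xx:

	#include <linux/dma-mapping.h>
	#include <linux/platform_device.h>

	static u64 enet_dmamask = DMA_BIT_MASK(32);

	static struct platform_device bcm63xx_enet0_device = {
		.name = "bcm63xx_enet",
		.id   = 0,
		.dev  = {
			/* without these, dma_map_*() now fails on 4.19+ */
			.dma_mask	   = &enet_dmamask,
			.coherent_dma_mask = DMA_BIT_MASK(32),
		},
	};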
-
Submitted by Michael Clark
commit 94ee12b507db8b5876e31c9d6c9d84f556a4b49f upstream.

__cmpxchg_small erroneously uses u8 for the load comparison, which can be either char or short. This patch changes the local variable to u32, which is sufficiently sized, as the loaded value is already masked and shifted appropriately. Using an integer size avoids any unnecessary canonicalization from the use of non-native widths.

This patch is part of a series that adapts the MIPS small word atomics code for xchg and cmpxchg on short and char to RISC-V.

Cc: RISC-V Patches <patches@groups.riscv.org>
Cc: Linux RISC-V <linux-riscv@lists.infradead.org>
Cc: Linux MIPS <linux-mips@linux-mips.org>
Signed-off-by: Michael Clark <michaeljclark@mac.com>
[paul.burton@mips.com:
  - Fix variable typo per Jonas Gorski.
  - Consolidate load variable with other declarations.]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 3ba7f44d ("MIPS: cmpxchg: Implement 1 byte & 2 byte cmpxchg()")
Cc: stable@vger.kernel.org # v4.13+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Mike Kravetz
commit cb6acd01e2e43fd8bad11155752b7699c3d0fb76 upstream.

hugetlb pages should only be migrated if they are 'active'. The routines set/clear_page_huge_active() modify the active state of hugetlb pages.

When a new hugetlb page is allocated at fault time, set_page_huge_active is called before the page is locked. Therefore, another thread could race and migrate the page while it is being added to the page table by the fault code. This race is somewhat hard to trigger, but can be seen by strategically adding udelay to simulate worst case scheduling behavior. Depending on 'how' the code races, various BUG()s could be triggered.

To address this issue, simply delay the set_page_huge_active call until after the page is successfully added to the page table.

Hugetlb pages can also be leaked at migration time if the pages are associated with a file in an explicitly mounted hugetlbfs filesystem. For example, consider a two node system with 4GB worth of huge pages available. A program mmaps a 2G file in a hugetlbfs filesystem. It then migrates the pages associated with the file from one node to another. When the program exits, huge page counts are as follows:

  node0
  1024 free_hugepages
  1024 nr_hugepages

  node1
  0 free_hugepages
  1024 nr_hugepages

  Filesystem    Size  Used Avail Use% Mounted on
  nodev         4.0G  2.0G  2.0G  50% /var/opt/hugepool

That is as expected. 2G of huge pages are taken from the free_hugepages counts, and 2G is the size of the file in the explicitly mounted filesystem. If the file is then removed, the counts become:

  node0
  1024 free_hugepages
  1024 nr_hugepages

  node1
  1024 free_hugepages
  1024 nr_hugepages

  Filesystem    Size  Used Avail Use% Mounted on
  nodev         4.0G  2.0G  2.0G  50% /var/opt/hugepool

Note that the filesystem still shows 2G of pages used, while there actually are no huge pages in use. The only way to 'fix' the filesystem accounting is to unmount the filesystem.

If a hugetlb page is associated with an explicitly mounted filesystem, this information is contained in the page_private field. At migration time, this information is not preserved. To fix, simply transfer page_private from the old to the new page at migration time if necessary.

There is a related race with removing a huge page from a file and migration. When a huge page is removed from the pagecache, the page_mapping() field is cleared, yet page_private remains set until the page is actually freed by free_huge_page(). A page could be migrated while in this state. However, since page_mapping() is not set, the hugetlbfs specific routine to transfer page_private is not called and we leak the page count in the filesystem. To fix that, check for this condition before migrating a huge page. If the condition is detected, return EBUSY for the page.

Link: http://lkml.kernel.org/r/74510272-7319-7372-9ea6-ec914734c179@oracle.com
Link: http://lkml.kernel.org/r/20190212221400.3512-1-mike.kravetz@oracle.com
Fixes: bcc54222 ("mm: hugetlb: introduce page_huge_active")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: <stable@vger.kernel.org>
[mike.kravetz@oracle.com: v2]
Link: http://lkml.kernel.org/r/7534d322-d782-8ac6-1c8d-a8dc380eb3ab@oracle.com
[mike.kravetz@oracle.com: update comment and changelog]
Link: http://lkml.kernel.org/r/420bcfd6-158b-38e4-98da-26d0cd85bd01@oracle.com
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Nicholas Kazlauskas
commit 2216322919c8608a448d7ebc560a845238a5d6b6 upstream.

The prepare_fb call always happens on new_plane_state. The drm_atomic_helper_cleanup_planes checks to see if the plane state pointer has changed when deciding to call cleanup_fb on either the new_plane_state or the old_plane_state.

For a non-async atomic commit the state pointer is swapped, so this helper calls prepare_fb on the new_plane_state and cleanup_fb on the old_plane_state. This makes sense, since we want to prepare the framebuffer we are going to use and clean up the framebuffer we are no longer using.

For the async atomic update helpers this differs. The async atomic update helpers perform in-place updates on the existing state. They call drm_atomic_helper_cleanup_planes but the state pointer is not swapped. This means that prepare_fb is called on the new_plane_state and cleanup_fb is called on the new_plane_state (not the old).

In the case where old_plane_state->fb == new_plane_state->fb, there should be no behavioral difference between an async update and a non-async commit. But there are issues that arise when old_plane_state->fb != new_plane_state->fb.

The first is that the new_plane_state->fb is immediately cleaned up after it has been prepared, so we're using an fb that we shouldn't be.

The second occurs during a sequence of async atomic updates and non-async regular atomic commits. Suppose there are two framebuffers being interleaved in a double-buffering scenario, fb1 and fb2:

- Async update, oldfb = NULL, newfb = fb1, prepare fb1, cleanup fb1
- Async update, oldfb = fb1, newfb = fb2, prepare fb2, cleanup fb2
- Non-async commit, oldfb = fb2, newfb = fb1, prepare fb1, cleanup fb2

We call cleanup_fb on fb2 twice in this example scenario, and any further use will result in a use-after-free.

The simple fix to this problem is to block framebuffer changes in the drm_atomic_helper_async_check function for now.

v2: Move check by itself, add a FIXME (Daniel)

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Harry Wentland <harry.wentland@amd.com>
Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Cc: <stable@vger.kernel.org> # v4.14+
Fixes: fef9df8b ("drm/atomic: initial support for asynchronous plane update")
Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Acked-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Acked-by: Harry Wentland <harry.wentland@amd.com>
Reviewed-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Link: https://patchwork.freedesktop.org/patch/275364/
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
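A hedged sketch of the guard this describes, factored into its own helper for readability; the exact placement inside drm_atomic_helper_async_check() may differ from drm_atomic_helper.c:

	#include <drm/drm_plane.h>

	static int async_check_fb_unchanged(const struct drm_plane_state *old_plane_state,
					    const struct drm_plane_state *new_plane_state)
	{
		/*
		 * FIXME: an async update with a framebuffer change would run
		 * prepare_fb and cleanup_fb on the same (new) state, so fall
		 * back to a normal commit for now.
		 */
		if (old_plane_state->fb != new_plane_state->fb)
			return -EINVAL;

		return 0;
	}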
-
Submitted by Jann Horn
commit 0a1d52994d440e21def1c2174932410b4f2a98a1 upstream.

security_mmap_addr() does a capability check with current_cred(), but we can reach this code from contexts like a VFS write handler where current_cred() must not be used. This can be abused on systems without SMAP to make NULL pointer dereferences exploitable again.

Fixes: 8869477a ("security: protect from stack expansion into low vm addresses")
Cc: stable@kernel.org
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by BOUGH CHEN
commit e30be063d6dbcc0f18b1eb25fa709fdef89201fb upstream.

Commit 18094430 ("mmc: sdhci-esdhc-imx: add ADMA Length Mismatch errata fix") included a fix for ERR004536, but the fix is incorrect. Double confirmed with IC: we need to clear bit 7 of register 0x6c rather than set it. Here is the definition of bit 7 of 0x6c:

  0: enable the new IC fix for ERR004536
  1: do not use the IC fix, keep the same as before

Found this issue on the i.MX845s-evk board when enabling CMDQ and putting the system under heavy load:

root@imx8mmevk:~# dd if=/dev/mmcblk2 of=/dev/null bs=1M &
root@imx8mmevk:~# memtester 1000M > /dev/zero &
root@imx8mmevk:~# [ 139.897220] mmc2: cqhci: timeout for tag 16
[ 139.901417] mmc2: cqhci: ============ CQHCI REGISTER DUMP ===========
[ 139.907862] mmc2: cqhci: Caps: 0x0000310a | Version: 0x00000510
[ 139.914311] mmc2: cqhci: Config: 0x00001001 | Control: 0x00000000
[ 139.920753] mmc2: cqhci: Int stat: 0x00000000 | Int enab: 0x00000006
[ 139.927193] mmc2: cqhci: Int sig: 0x00000006 | Int Coal: 0x00000000
[ 139.933634] mmc2: cqhci: TDL base: 0x7809c000 | TDL up32: 0x00000000
[ 139.940073] mmc2: cqhci: Doorbell: 0x00030000 | TCN: 0x00000000
[ 139.946518] mmc2: cqhci: Dev queue: 0x00010000 | Dev Pend: 0x00010000
[ 139.952967] mmc2: cqhci: Task clr: 0x00000000 | SSC1: 0x00011000
[ 139.959411] mmc2: cqhci: SSC2: 0x00000001 | DCMD rsp: 0x00000000
[ 139.965857] mmc2: cqhci: RED mask: 0xfdf9a080 | TERRI: 0x00000000
[ 139.972308] mmc2: cqhci: Resp idx: 0x0000002e | Resp arg: 0x00000900
[ 139.978761] mmc2: sdhci: ============ SDHCI REGISTER DUMP ===========
[ 139.985214] mmc2: sdhci: Sys addr: 0xb2c19000 | Version: 0x00000002
[ 139.991669] mmc2: sdhci: Blk size: 0x00000200 | Blk cnt: 0x00000400
[ 139.998127] mmc2: sdhci: Argument: 0x40110400 | Trn mode: 0x00000033
[ 140.004618] mmc2: sdhci: Present: 0x01088a8f | Host ctl: 0x00000030
[ 140.011113] mmc2: sdhci: Power: 0x00000002 | Blk gap: 0x00000080
[ 140.017583] mmc2: sdhci: Wake-up: 0x00000008 | Clock: 0x0000000f
[ 140.024039] mmc2: sdhci: Timeout: 0x0000008f | Int stat: 0x00000000
[ 140.030497] mmc2: sdhci: Int enab: 0x107f4000 | Sig enab: 0x107f4000
[ 140.036972] mmc2: sdhci: AC12 err: 0x00000000 | Slot int: 0x00000502
[ 140.043426] mmc2: sdhci: Caps: 0x07eb0000 | Caps_1: 0x8000b407
[ 140.049867] mmc2: sdhci: Cmd: 0x00002c1a | Max curr: 0x00ffffff
[ 140.056314] mmc2: sdhci: Resp[0]: 0x00000900 | Resp[1]: 0xffffffff
[ 140.062755] mmc2: sdhci: Resp[2]: 0x328f5903 | Resp[3]: 0x00d00f00
[ 140.069195] mmc2: sdhci: Host ctl2: 0x00000008
[ 140.073640] mmc2: sdhci: ADMA Err: 0x00000007 | ADMA Ptr: 0x7809c108
[ 140.080079] mmc2: sdhci: ============================================
[ 140.086662] mmc2: running CQE recovery

Fixes: 18094430 ("mmc: sdhci-esdhc-imx: add ADMA Length Mismatch errata fix")
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
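A hedged sketch of the described register handling: the offset (0x6c) and the meaning of bit 7 come from the commit text, while the accessor, function name, and the way the base address is passed in are generic stand-ins rather than the sdhci-esdhc-imx helpers:

	#include <linux/bitops.h>
	#include <linux/io.h>

	static void esdhc_enable_err004536_fix(void __iomem *ioaddr)
	{
		u32 reg = readl(ioaddr + 0x6c);

		reg &= ~BIT(7);		/* 0 = use the new IC fix for ERR004536 */
		writel(reg, ioaddr + 0x6c);
	}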
-
Submitted by Alamy Liu
commit d07e9fadf3a6b466ca3ae90fa4859089ff20530f upstream.

Free up the allocated memory in the case of an error return.

The value of mmc_host->cqe_enabled stays 'false'. Thus, cqhci_disable (mmc_cqe_ops->cqe_disable) won't be called to free the memory. Also, cqhci_disable() seems to be designed to disable and free all resources, and is not suitable for handling this corner case.

Fixes: a4080225 ("mmc: cqhci: support for command queue enabled host")
Signed-off-by: Alamy Liu <alamy.liu@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Alamy Liu
commit 27ec9dc17c48ea2e642ccb90b4ebf7fd47468911 upstream.

There is not enough space being allocated when DCMD is disabled.

CQE_DCMD does not need to be enabled when CQE is enabled (software could halt CQE to send commands). In the case that CQE_DCMD is not enabled, space still needs to be allocated for data transfer. For instance:

  CQE_DCMD is enabled:  31 slots of space (one slot used by DCMD)
  CQE_DCMD is disabled: 32 slots of space

Fixes: a4080225 ("mmc: cqhci: support for command queue enabled host")
Signed-off-by: Alamy Liu <alamy.liu@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Ritesh Harjani
commit e5723f95d6b493dd437f1199cacb41459713b32f upstream.

In the case of CQHCI, mrq->cmd may be NULL for data requests (non-DCMD). In such a case mmc_should_fail_request directly dereferences mrq->cmd while cmd is NULL. Fix this by checking the mrq->cmd pointer.

Fixes: 72a5af55 ("mmc: core: Add support for handling CQE requests")
Signed-off-by: Ritesh Harjani <riteshh@codeaurora.org>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
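A minimal sketch of the shape of the check (the fault-injection logic itself is elided, and this is not a verbatim copy of drivers/mmc/core/core.c):

	#include <linux/mmc/core.h>
	#include <linux/mmc/host.h>

	static void mmc_should_fail_request_sketch(struct mmc_host *host,
						   struct mmc_request *mrq)
	{
		struct mmc_command *cmd = mrq->cmd;
		struct mmc_data *data = mrq->data;

		/* CQHCI data requests (non-DCMD) carry no cmd, so bail out early */
		if (!cmd || !data)
			return;

		/* ... fault-injection decision based on cmd and data ... */
	}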
-
Submitted by Takeshi Saito
commit 5603731a15ef9ca317c122cc8c959f1dee1798b4 upstream.

In R-Car Gen2 or later, the maximum number of transfer blocks was changed from 0xFFFF to 0xFFFFFFFF. Therefore, the Block Count Register should use iowrite32().

If another system (U-Boot, hypervisor OS, etc.) uses bits [31:16], this value will not be cleared, and SD/MMC card initialization fails.

So, check for the bigger register and use the appropriate write. Also, mark the register as extended on Gen2.

Signed-off-by: Takeshi Saito <takeshi.saito.xv@renesas.com>
[wsa: use max_blk_count in if(), add Gen2, update commit message]
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Cc: stable@kernel.org
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
[Ulf: Fixed build error]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
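An illustrative sketch of the idea using generic MMIO accessors (not the tmio/sdhi register helpers): when the controller advertises a 32-bit block counter, write the full register so stale bits [31:16] left by the boot stage cannot survive:

	#include <linux/io.h>
	#include <linux/types.h>

	static void write_blk_count(void __iomem *blk_count_reg,
				    u32 max_blk_count, u32 blocks)
	{
		if (max_blk_count > 0xffff)		/* extended 32-bit counter */
			iowrite32(blocks, blk_count_reg);
		else
			iowrite16(blocks, blk_count_reg);
	}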
-
Submitted by Sergei Shtylyov
commit 5c27ff5db1491a947264d6d4e4cbe43ae6535bae upstream.

I have encountered an interrupt storm during eMMC chip probing (and the chip finally didn't get detected). It turned out that U-Boot left the DMAC interrupts enabled while the Linux driver didn't use those.

The SDHI driver's interrupt handler somehow assumes that, even if an SDIO interrupt didn't happen, it should return IRQ_HANDLED. I think that if none of the enabled interrupts happened and got handled, we should return IRQ_NONE -- that way the kernel IRQ code recognizes a spurious interrupt and masks it off pretty quickly...

Fixes: 7729c7a2 ("mmc: tmio: Provide separate interrupt handlers")
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
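A hedged sketch of the handler policy: only claim the interrupt when one of the enabled sources was actually serviced, so a line left asserted by firmware is detected as spurious and masked by the kernel instead of storming. The function name and the elided status checks are illustrative:

	#include <linux/interrupt.h>

	static irqreturn_t sdhi_irq_sketch(int irq, void *devid)
	{
		bool handled = false;

		/* ... check enabled card/SDIO status bits, set handled = true
		 * for each one that was actually serviced ... */

		return handled ? IRQ_HANDLED : IRQ_NONE;
	}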
-
Submitted by Jonathan Neuschäfer
commit c9bd505dbd9d3dc80c496f88eafe70affdcf1ba6 upstream.

When using the mmc_spi driver with a card-detect pin, I noticed that the card was not detected immediately after probe, but only after it was unplugged and plugged back in (and the CD IRQ fired).

The call tree looks something like this:

  mmc_spi_probe
    mmc_add_host
      mmc_start_host
        _mmc_detect_change
          mmc_schedule_delayed_work(&host->detect, 0)
            mmc_rescan
              host->bus_ops->detect(host)
                mmc_detect
                  _mmc_detect_card_removed
                    host->ops->get_cd(host)
                      mmc_gpio_get_cd -> -ENOSYS (ctx->cd_gpio not set)
    mmc_gpiod_request_cd
      ctx->cd_gpio = desc

To fix this issue, call mmc_detect_change after the card-detect GPIO/IRQ is registered.

Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Submitted by Ben Gardon
[ Upstream commit 94a980c39c8e3f8abaff5d3b5bbcd4ccf1c02c4f ]

Fix a call to userspace_mem_region_find to conform to its spec of taking an inclusive, inclusive range. It was previously being called with an inclusive, exclusive range. Also remove a redundant region bounds check in vm_userspace_mem_region_add. Region overlap checking is already performed by the call to userspace_mem_region_find.

Tested: Compiled tools/testing/selftests/kvm with -static
        Ran all resulting test binaries on an Intel Haswell test machine
        All tests passed

Signed-off-by: Ben Gardon <bgardon@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Vitaly Kuznetsov
[ Upstream commit 619ad846fc3452adaf71ca246c5aa711e2055398 ]

kvm-unit-tests' eventinj "NMI failing on IDT" test results in an NMI being delivered to the host (L1) when it's running nested. The problem seems to be: svm_complete_interrupts() raises the 'nmi_injected' flag, but later we decide to reflect EXIT_NPF to L1. The flag remains pending and we do NMI injection upon entry, so it gets delivered to L1 instead of L2.

It seems that the VMX code solves the same issue in prepare_vmcs12(); this was introduced with code refactoring in commit 5f3d5799 ("KVM: nVMX: Rework event injection and recovery").

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Suravee Suthikulpanit
[ Upstream commit bb218fbcfaaa3b115d4cd7a43c0ca164f3a96e57 ]

In case of an incomplete IPI with an invalid interrupt type, the current SVM driver does not properly emulate the IPI, and fails to boot FreeBSD guests with multiple vcpus when enabling AVIC. Fix this by updating the APIC ICR high/low registers, which also emulates sending the IPI.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Chaitanya Tata
[ Upstream commit 93183bdbe73bbdd03e9566c8dc37c9d06b0d0db6 ]

Recently, DMG frequency bands have been extended till 71 GHz, so extend the range check till 20 GHz (45-71 GHz), else some channels will be marked as disabled.

Signed-off-by: Chaitanya Tata <Chaitanya.Tata@bluwireless.co.uk>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Mathieu Malaterre
[ Upstream commit 7c53eb5d87bc21464da4268c3c0c47457b6d9c9b ]

During refactoring in commit 9e478066 ("mac80211: fix MU-MIMO follow-MAC mode") a new struct 'action' was declared with the packed attribute as:

	struct {
		struct ieee80211_hdr_3addr hdr;
		u8 category;
		u8 action_code;
	} __packed action;

But since struct 'ieee80211_hdr_3addr' is declared with an aligned keyword as:

	struct ieee80211_hdr {
		__le16 frame_control;
		__le16 duration_id;
		u8 addr1[ETH_ALEN];
		u8 addr2[ETH_ALEN];
		u8 addr3[ETH_ALEN];
		__le16 seq_ctrl;
		u8 addr4[ETH_ALEN];
	} __packed __aligned(2);

Solve the ambiguity of placing an aligned structure in a packed one by adding the aligned(2) attribute to struct 'action'.

This removes the following warning (W=1):

  net/mac80211/rx.c:234:2: warning: alignment 1 of 'struct <anonymous>' is less than 2 [-Wpacked-not-aligned]

Cc: Johannes Berg <johannes.berg@intel.com>
Suggested-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
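The resulting declaration, with the alignment made explicit as described above:

	struct {
		struct ieee80211_hdr_3addr hdr;
		u8 category;
		u8 action_code;
	} __packed __aligned(2) action;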
-
Submitted by Balaji Pothunoori
[ Upstream commit 7ed5285396c257fd4070b1e29e7b2341aae2a1ce ]

The following call trace is observed while adding a TDLS peer entry in the driver during TDLS setup.

Call Trace:
[<c1301476>] dump_stack+0x47/0x61
[<c10537d2>] __warn+0xe2/0x100
[<fa22415f>] ? sta_apply_parameters+0x49f/0x550 [mac80211]
[<c1053895>] warn_slowpath_null+0x25/0x30
[<fa22415f>] sta_apply_parameters+0x49f/0x550 [mac80211]
[<fa20ad42>] ? sta_info_alloc+0x1c2/0x450 [mac80211]
[<fa224623>] ieee80211_add_station+0xe3/0x160 [mac80211]
[<c1876fe3>] nl80211_new_station+0x273/0x420
[<c170f6d9>] genl_rcv_msg+0x219/0x3c0
[<c170f4c0>] ? genl_rcv+0x30/0x30
[<c170ee7e>] netlink_rcv_skb+0x8e/0xb0
[<c170f4ac>] genl_rcv+0x1c/0x30
[<c170e8aa>] netlink_unicast+0x13a/0x1d0
[<c170ec18>] netlink_sendmsg+0x2d8/0x390
[<c16c5acd>] sock_sendmsg+0x2d/0x40
[<c16c6369>] ___sys_sendmsg+0x1d9/0x1e0

Fix this by allowing the TDLS setup request only when we have completed association.

Signed-off-by: Balaji Pothunoori <bpothuno@codeaurora.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Thomas Falcon
[ Upstream commit e95d22c69b2c130ccce257b84daf283fd82d611e ]

The IBM virtual ethernet driver's polling function continues to process frames after rescheduling NAPI, resulting in a warning if it exhausted its budget. Do not restart polling after calling napi_reschedule. Instead let frames be processed in the following instance.

Signed-off-by: Thomas Falcon <tlfalcon@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Maciej Żenczykowski
[ Upstream commit 3b707c3008cad04604c1f50e39f456621821c414 ]

__bpf_redirect() and act_mirred check this boolean to determine whether to prefix an ethernet header.

Signed-off-by: Maciej Żenczykowski <maze@google.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Zhang Run
[ Upstream commit 6eea3527e68acc22483f4763c8682f223eb90029 ]

ax88772_bind() should return an error code immediately when the PHY was not reset properly through ax88772a_hw_reset(). Otherwise, asix_get_phyid() will block when reading the PHY identifier from the PHYSID1 MII registers through asix_mdio_read(), because the PHY isn't ready. Furthermore, it will produce a lot of error messages and may cause a system crash, as follows:

asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to write reg index 0x0000: -71
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to send software reset: ffffffb9
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to write reg index 0x0000: -71
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to enable software MII access
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to read reg index 0x0000: -71
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to write reg index 0x0000: -71
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to enable software MII access
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to read reg index 0x0000: -71
...

Signed-off-by: Zhang Run <zhang.run@zte.com.cn>
Reviewed-by: Yang Wei <yang.wei9@zte.com.cn>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
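A hedged sketch of the early return added in ax88772_bind(); the reset helper name follows the commit text, while the argument list and error handling around it are illustrative rather than a verbatim copy of drivers/net/usb/asix_devices.c:

	/* inside ax88772_bind(), before asix_get_phyid() is attempted */
	ret = ax88772a_hw_reset(dev, 0);
	if (ret < 0) {
		netdev_dbg(dev->net, "PHY reset failed: %d\n", ret);
		return ret;	/* don't go on to probe a PHY that never came up */
	}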
-
Submitted by Douglas Anderson
[ Upstream commit a3c5e2cd79753121f49a8662c1e0a60ddb5486ca ]

The bindings for Qualcomm opp levels changed after being Acked but before landing. Thus the code in the GPU driver that was relying on the old bindings is now broken. Let's change the code to match the new bindings by adjusting the old string 'qcom,level' to the new string 'opp-level'. See the patch ("dt-bindings: opp: Introduce opp-level bindings").

NOTE: we will do additional cleanup to totally remove the string from the code and use the new dev_pm_opp_get_level(), but we'll do it in a future patch. This will facilitate getting the important code fix in sooner without having to deal with cross-maintainer dependencies.

This patch needs to land before the patch ("arm64: dts: sdm845: Add gpu and gmu device nodes") since if a tree contains the device tree patch but not this one you'll get a crash at bootup.

Fixes: 4b565ca5 ("drm/msm: Add A6XX device support")
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
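A hedged sketch of the string change; np here stands for the OPP device_node being parsed by the GPU driver and is an illustrative variable name:

	#include <linux/of.h>

	u32 val = 0;

	/* old, pre-merge binding */
	of_property_read_u32(np, "qcom,level", &val);

	/* new binding, as it actually landed */
	of_property_read_u32(np, "opp-level", &val);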
-
Submitted by Hannes Reinecke
[ Upstream commit 78a61cd42a64f3587862b372a79e1d6aaf131fd7 ]

Bit 6 in the ANACAP field is used to indicate that the ANA group ID doesn't change while the namespace is attached to the controller. There is an optimisation in the code to only allocate space for the ANA group header, as the namespace list won't change and hence would not need to be refreshed. However, this optimisation was never carried over to the actual workflow, which always assumes that the buffer is large enough to hold the ANA header _and_ the namespace list. So drop this optimisation and always allocate enough space.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Sagi Grimberg
[ Upstream commit 4c174e6366746ae8d49f9cc409f728eebb7a9ac9 ]

Currently, we have several problems with the timeout handler:

1. If we time out on the controller establishment flow, we will hang because we don't execute the error recovery (and we shouldn't, because the create_ctrl flow needs to fail and clean up on its own).

2. We might also hang if we get a disconnect on a queue while the controller is already deleting. This racy flow can cause the controller disable/shutdown admin command to hang.

We cannot complete a timed out request from the timeout handler without mutual exclusion from the teardown flow (e.g. nvme_rdma_error_recovery_work). So we serialize it in the timeout handler and tear down the io and admin queues to guarantee that no one races with us from completing the request.

Reported-by: Jaesoo Lee <jalee@purestorage.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Haiyang Zhang
[ Upstream commit 17d91256898402daf4425cc541ac9cbf64574d9a ]

Changing the MTU, channels, or buffer sizes ops call netvsc_attach() and rndis_set_subchannel(), which always reset the hash key to its default value. That overrides a hash key changed previously. This patch fixes the problem by saving the hash key, then restoring it when we re-add the netvsc device.

Fixes: ff4a4419 ("netvsc: allow get/set of RSS indirection table")
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
[sl: fix up subject line]
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Haiyang Zhang
[ Upstream commit 7c9f335a3ff20557a92584199f3d35c7e992bbe5 ]

These assignments occur in multiple places. The patch refactors them into a function for simplicity. It also puts the struct on the heap for future expansion.

Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
[sl: fix up subject line]
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Haiyang Zhang
[ Upstream commit b4a10c750424e01b5e37372fef0a574ebf7b56c3 ]

Hyper-V hosts require us to disable RSS before changing the RSS key, otherwise the change request will fail. This patch fixes the coding error.

Fixes: ff4a4419 ("netvsc: allow get/set of RSS indirection table")
Reported-by: Wei Hu <weh@microsoft.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
[sl: fix up subject line]
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Atsushi Nemoto
[ Upstream commit 17b42a20d7ca59377788c6a2409e77569570cc10 ]

The connect_local_phy() function should return NULL (not a negative errno) on error, since its caller expects that.

Signed-off-by: Atsushi Nemoto <atsushi.nemoto@sord.co.jp>
Acked-by: Thor Thayer <thor.thayer@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
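A hedged sketch of the contract being fixed (the function name follows the commit; the body is elided and not copied from the altera_tse driver): the caller only checks for NULL, so failure must be reported as NULL, never as a negative errno cast to a pointer:

	#include <linux/netdevice.h>
	#include <linux/phy.h>

	static struct phy_device *connect_local_phy(struct net_device *dev)
	{
		struct phy_device *phydev = NULL;

		/* ... try to connect to the locally attached PHY ... */

		return phydev;	/* NULL on error */
	}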
-
Submitted by Varun Prakash
[ Upstream commit fe35a40e675473eb65f2f5462b82770f324b5689 ]

Assign fc_vport to ln->fc_vport before calling csio_fcoe_alloc_vnp() to avoid a NULL pointer dereference in csio_vport_set_state(). ln->fc_vport is dereferenced in csio_vport_set_state().

Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
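A hedged sketch of the ordering fix in the vport creation path (names taken from the commit text, surrounding error handling elided):

	/* publish the pointer first: csio_vport_set_state() dereferences it
	 * from within the csio_fcoe_alloc_vnp() call path */
	ln->fc_vport = fc_vport;

	if (csio_fcoe_alloc_vnp(hw, ln))
		goto error;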
-
Submitted by Ewan D. Milne
[ Upstream commit c41f59884be5cca293ed61f3d64637dbba3a6381 ]

We cannot wait on a completion object in the lpfc_nvme_targetport structure in the _destroy_targetport() code path because the NVMe/fc transport will free that structure immediately after the .targetport_delete() callback. This results in a use-after-free, and a hang if slub_debug=FZPU is enabled. Fix this by putting the completion on the stack.

Signed-off-by: Ewan D. Milne <emilne@redhat.com>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
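A hedged sketch of the general pattern (not the literal lpfc code; port_priv and its unreg_cmp field are illustrative names): keep the completion on the unregistering thread's stack and only publish a pointer to it, so nothing dangles after the transport frees its port structure:

	#include <linux/completion.h>
	#include <linux/jiffies.h>

	/* inside the teardown path, on the caller's stack */
	DECLARE_COMPLETION_ONSTACK(unreg_done);

	port_priv->unreg_cmp = &unreg_done;		/* pointer seen by the callback */
	nvmet_fc_unregister_targetport(targetport);	/* ends in .targetport_delete() */
	wait_for_completion_timeout(&unreg_done, msecs_to_jiffies(5000));

	/* ...and in the .targetport_delete() callback: complete(port_priv->unreg_cmp); */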
-
Submitted by Ewan D. Milne
[ Upstream commit 7961cba6f7d8215fa632df3d220e5154bb825249 ]

We cannot wait on a completion object in the lpfc_nvme_lport structure in the _destroy_localport() code path because the NVMe/fc transport will free that structure immediately after the .localport_delete() callback. This results in a use-after-free, and a hang if slub_debug=FZPU is enabled. Fix this by putting the completion on the stack, as in the targetport case above.

Signed-off-by: Ewan D. Milne <emilne@redhat.com>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-