- 27 December 2019, 40 commits
-
Committed by Mikulas Patocka

[ Upstream commit b21555786f18cd77f2311ad89074533109ae3ffa ]

Commit 721b1d98fb517a ("dm snapshot: Fix excessive memory usage and workqueue stalls") introduced a semaphore to limit the maximum number of in-flight kcopyd (COW) jobs. The implementation of this throttling mechanism is prone to a deadlock:

1. One or more threads write to the origin device causing COW, which is performed by kcopyd.

2. At some point some of these threads might reach the s->cow_count semaphore limit and block in down(&s->cow_count), holding a read lock on _origins_lock.

3. Someone tries to acquire a write lock on _origins_lock, e.g., snapshot_ctr(), which blocks because the threads at step (2) already hold a read lock on it.

4. A COW operation completes and kcopyd runs dm-snapshot's completion callback, which ends up calling pending_complete(). pending_complete() tries to resubmit any deferred origin bios. This requires acquiring a read lock on _origins_lock, which blocks.

This happens because the read-write semaphore implementation gives priority to writers, meaning that as soon as a writer tries to enter the critical section, no readers will be allowed in, until all writers have completed their work. So, pending_complete() waits for the writer at step (3) to acquire and release the lock. This writer waits for the readers at step (2) to release the read lock and those readers wait for pending_complete() (the kcopyd thread) to signal the s->cow_count semaphore: DEADLOCK.

The above was thoroughly analyzed and documented by Nikos Tsironis as part of his initial proposal for fixing this deadlock, see: https://www.redhat.com/archives/dm-devel/2019-October/msg00001.html

Fix this deadlock by reworking COW throttling so that it waits without holding any locks. Add a variable 'in_progress' that counts how many kcopyd jobs are running. A function wait_for_in_progress() will sleep if 'in_progress' is over the limit. It drops _origins_lock in order to avoid the deadlock.

Reported-by: Guruswamy Basavaiah <guru2018@gmail.com>
Reported-by: Nikos Tsironis <ntsironis@arrikto.com>
Reviewed-by: Nikos Tsironis <ntsironis@arrikto.com>
Tested-by: Nikos Tsironis <ntsironis@arrikto.com>
Fixes: 721b1d98fb51 ("dm snapshot: Fix excessive memory usage and workqueue stalls")
Cc: stable@vger.kernel.org # v5.0+
Depends-on: 4a3f111a73a8c ("dm snapshot: introduce account_start_copy() and account_end_copy()")
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
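For illustration, a condensed sketch of the reworked throttling described above. The field and helper names (in_progress, in_progress_wait, cow_threshold) come from the commit text; the body is an assumption, not a verbatim copy of the upstream dm-snap.c change.

    /* Sleep without holding _origins_lock while too many COW jobs are in flight. */
    static bool wait_for_in_progress(struct dm_snapshot *s, bool unlock_origins)
    {
        if (unlikely(s->in_progress > cow_threshold)) {
            spin_lock(&s->in_progress_wait.lock);
            if (likely(s->in_progress > cow_threshold)) {
                DECLARE_WAITQUEUE(wait, current);

                __add_wait_queue(&s->in_progress_wait, &wait);
                __set_current_state(TASK_UNINTERRUPTIBLE);
                spin_unlock(&s->in_progress_wait.lock);
                /* Drop _origins_lock so pending_complete() can take it. */
                if (unlock_origins)
                    up_read(&_origins_lock);
                io_schedule();
                remove_wait_queue(&s->in_progress_wait, &wait);
                return false;   /* caller must retake the lock and retry */
            }
            spin_unlock(&s->in_progress_wait.lock);
        }
        return true;
    }

-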
Committed by Mikulas Patocka

[ Upstream commit a2f83e8b0c82c9500421a26c49eb198b25fcdea3 ]

This simple refactoring moves the code for modifying the semaphore cow_count into separate functions, to prepare for changes that will extend these methods to provide a more sophisticated mechanism for COW throttling.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Nikos Tsironis <ntsironis@arrikto.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
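A minimal sketch of what the refactoring amounts to, assuming the pre-existing cow_count semaphore named in the commit text:

    /* Wrap the throttling primitive so later patches can replace it in one place. */
    static void account_start_copy(struct dm_snapshot *s)
    {
        down(&s->cow_count);
    }

    static void account_end_copy(struct dm_snapshot *s)
    {
        up(&s->cow_count);
    }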
-
Committed by Sasha Levin

[ Upstream commit f7daefe4231e57381d92c2e2ad905a899c28e402 ]

CPU0:                              CPU1:
backing_dev_show                   backing_dev_store
    ......                             ......
    file = zram->backing_dev;
    down_read(&zram->init_lock);
                                       down_read(&zram->init_lock);
    file_path(file, ...);
                                       zram->backing_dev = backing_dev;
    up_read(&zram->init_lock);
                                       up_read(&zram->init_lock);

backing_dev_show gets the value of zram->backing_dev too early, which results in the value being NULL at the beginning and not NULL later.

backtrace:
  d_path+0xcc/0x174
  file_path+0x10/0x18
  backing_dev_show+0x40/0xb4
  dev_attr_show+0x20/0x54
  sysfs_kf_seq_show+0x9c/0x10c
  kernfs_seq_show+0x28/0x30
  seq_read+0x184/0x488
  kernfs_fop_read+0x5c/0x1a4
  __vfs_read+0x44/0x128
  vfs_read+0xa0/0x138
  SyS_read+0x54/0xb4

Link: http://lkml.kernel.org/r/1571046839-16814-1-git-send-email-chenwandun@huawei.com
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: <stable@vger.kernel.org> [4.14+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
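A sketch of the fix described above: read zram->backing_dev only after taking init_lock, so the value used by file_path() cannot change underneath the reader. The buffer handling is simplified and illustrative, not the exact drivers/block/zram/zram_drv.c code.

    static ssize_t backing_dev_show(struct device *dev,
                                    struct device_attribute *attr, char *buf)
    {
        struct zram *zram = dev_to_zram(dev);
        struct file *file;
        char *file_name, *p;
        ssize_t ret;

        file_name = kmalloc(PATH_MAX, GFP_KERNEL);
        if (!file_name)
            return -ENOMEM;

        down_read(&zram->init_lock);
        file = zram->backing_dev;       /* read under init_lock, not before it */
        if (!file) {
            memcpy(buf, "none\n", 5);
            up_read(&zram->init_lock);
            kfree(file_name);
            return 5;
        }

        p = file_path(file, file_name, PATH_MAX);
        if (IS_ERR(p)) {
            ret = PTR_ERR(p);
            goto out;
        }
        ret = scnprintf(buf, PAGE_SIZE, "%s\n", p);
    out:
        up_read(&zram->init_lock);
        kfree(file_name);
        return ret;
    }

-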
Committed by Yang Yingliang

ascend inclusion
category: feature
bugzilla: NA
CVE: NA
------------

On the Hi1980 platform, each chip contains 20 cores. The first 16 cores are managed by the OS. The last 4 cores are ts cores; their GICR registers are initialised by the OS, and ListOS runs on them. This patch initialises the GICR registers of the ts cores.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Xu Qiang

ascend inclusion
category: feature
bugzilla: NA
CVE: NA
------------

SDEI is a set of interfaces used for communication between the OS and the ATF. The Ascend uses the SDEI interface to implement the NMI function; the NMI depends on the soft lockup detector. The OS of the Ascend runs at EL1.

Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Xu Qiang

ascend inclusion
category: feature
bugzilla: NA
CVE: NA
------------

Support ts core RAS processing for Ascend.

Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Zhou Guanghui

ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
------------

Reserve a 4G map for DVPP.

Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jiankang Chen

ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
-------------------

There is a bug in DVPP: the virtual addresses used by DVPP must share the same high 16 bits. Add a MAP_VA32BIT flag for mmap; mmap(..., MAP_VA32BIT) will return a virtual address whose high 16 bits match.

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
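Illustrative user-space usage of the new flag. MAP_VA32BIT is an out-of-tree flag from this patch series; the numeric value below is a placeholder for whatever the patched uapi headers define.

    #include <stdio.h>
    #include <sys/mman.h>

    #ifndef MAP_VA32BIT
    #define MAP_VA32BIT 0x200   /* placeholder value, see the patched headers */
    #endif

    int main(void)
    {
        size_t len = 2 * 1024 * 1024;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_VA32BIT, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* Addresses returned with MAP_VA32BIT share the same high 16 bits. */
        printf("mapped at %p\n", p);
        munmap(p, len);
        return 0;
    }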
-
Committed by James Morse

mainline inclusion
from mainline-4.20
commit acafce48b07b
category: bugfix
bugzilla: 24619
CVE: NA
------------

It turns out the dt-probing part of this wasn't tested properly after it was merged. Commit 3aa0582f ("of: platform: populate /firmware/ node from of_platform_default_populate_init()") changed the core code to generate the platform devices, meaning the driver's attempt fails and it bails out.

Fix this by removing the manual platform-device creation for DT systems; core code has always done this for us.

CC: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Nicolas Saenz Julienne

mainline inclusion
from mainline-4.20
commit c3790b3799f8
category: bugfix
bugzilla: 24618
CVE: NA
------------

After finding a "firmware" dt node, arm_sdei tries to match its compatible string against it. To do so it calls of_find_matching_node(), which already takes care of decreasing the refcount on the "firmware" node. We are then incorrectly decreasing the refcount on that node again.

This patch removes the unwarranted call to of_node_put().

Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
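The semantics behind the fix, as a sketch; the probe helper is reconstructed from the description, so treat its name and shape as assumptions:

    /* of_find_matching_node() already calls of_node_put() on the node it was
     * started from, so an explicit of_node_put(fw_np) afterwards is one put
     * too many. That extra call is what this patch removes. */
    static bool __init sdei_present_dt(void)
    {
        struct device_node *fw_np, *np;

        fw_np = of_find_node_by_name(NULL, "firmware");
        if (!fw_np)
            return false;

        np = of_find_matching_node(fw_np, sdei_of_match);  /* drops fw_np's ref */
        if (!np)
            return false;
        of_node_put(np);

        return true;
    }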
-
Committed by Weilong Chen

ascend inclusion
category: feature
bugzilla: NA
CVE: NA
-------------------

Ascend supports disabling the oom-killer and reporting oom events to bbox.

Signed-off-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jiankang Chen

ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
-------------------

IO page faults cannot be handled in time because of the queue's low priority. Raise the priority of the IO page fault worker queue to WQ_HIGHPRI.

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
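For context, this is the kind of change involved; the queue name below is hypothetical since the Ascend I/O page fault code is out of tree, but alloc_workqueue() and WQ_HIGHPRI are standard kernel API:

    /* Service I/O page faults from a high-priority worker pool so they are
     * not starved behind ordinary work items. */
    fault_wq = alloc_workqueue("iopf-wq", WQ_HIGHPRI | WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
    if (!fault_wq)
        return -ENOMEM;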
-
Committed by Mike Kravetz

mainline inclusion
from mainline-5.4-rc1
commit f60858f9d327c4dd0c432abe9ec943a83929c229
category: bugfix
bugzilla: 23291
CVE: NA
-------------------------------------------------

When allocating hugetlbfs pool pages via /proc/sys/vm/nr_hugepages, the pages will be interleaved between all nodes of the system. If nodes are not equal, it is quite possible for one node to fill up before the others. When this happens, the code still attempts to allocate pages from the full node. This results in calls to direct reclaim and compaction which slow things down considerably.

When allocating pool pages, note the state of the previous allocation for each node. If the previous allocation failed, do not use the aggressive retry algorithm on successive attempts. The allocation will still succeed if there is memory available, but it will not try as hard to free up memory.

Link: http://lkml.kernel.org/r/20190806014744.15446-5-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
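A condensed sketch of the per-node retry tracking described above, close to but not guaranteed identical to the upstream mm/hugetlb.c change; the node_alloc_noretry nodemask is the per-node state mentioned in the text:

    static struct page *alloc_buddy_huge_page(struct hstate *h, gfp_t gfp_mask,
            int nid, nodemask_t *nmask, nodemask_t *node_alloc_noretry)
    {
        struct page *page;
        bool alloc_try_hard = true;

        /* Skip the aggressive retries if the last attempt on this node failed. */
        if (node_alloc_noretry && node_isset(nid, *node_alloc_noretry))
            alloc_try_hard = false;
        if (alloc_try_hard)
            gfp_mask |= __GFP_RETRY_MAYFAIL;

        page = __alloc_pages_nodemask(gfp_mask, huge_page_order(h), nid, nmask);

        /* Remember the outcome so the next attempt on this node can adapt. */
        if (node_alloc_noretry && page && !alloc_try_hard)
            node_clear(nid, *node_alloc_noretry);
        else if (node_alloc_noretry && !page && alloc_try_hard)
            node_set(nid, *node_alloc_noretry);

        return page;
    }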
-
Committed by Vlastimil Babka

mainline inclusion
from mainline-5.4-rc1
commit 494330855641269c8a49f1580f0d4e2ead693245
category: bugfix
bugzilla: 23291
CVE: NA
-------------------------------------------------

Mike Kravetz reports that "hugetlb allocations could stall for minutes or hours when should_compact_retry() would return true more often than it should. Specifically, this was in the case where compact_result was COMPACT_DEFERRED and COMPACT_PARTIAL_SKIPPED and no progress was being made."

The problem is that the compaction_withdrawn() test in should_compact_retry() includes compaction outcomes that are only possible on low compaction priority, and results in a retry without increasing the priority. This may result in further reclaim, and more incomplete compaction attempts.

With this patch, compaction priority is raised when possible, or should_compact_retry() returns false.

The COMPACT_SKIPPED result doesn't really fit together with the other outcomes in compaction_withdrawn(), as that's a result caused by insufficient order-0 pages, not due to low compaction priority. With this patch, it is moved to a new compaction_needs_reclaim() function, and for that outcome we keep the current logic of retrying if it looks like reclaim will be able to help.

Link: http://lkml.kernel.org/r/20190806014744.15446-4-mike.kravetz@oracle.com
Reported-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Tested-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
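The new helper is essentially a one-liner; a sketch consistent with the description above:

    /* COMPACT_SKIPPED means compaction lacked order-0 pages, so more reclaim
     * can help; the remaining "withdrawn" outcomes should instead raise the
     * compaction priority. */
    static inline bool compaction_needs_reclaim(enum compact_result result)
    {
        return result == COMPACT_SKIPPED;
    }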
-
Committed by Vlastimil Babka

mainline inclusion
from mainline-5.4-rc1
commit 5ee04716c46ce58989b1256a98af1af89f385db8
category: bugfix
bugzilla: 23286
CVE: NA
-------------------------------------------------

After commit "mm, reclaim: make should_continue_reclaim perform dryrun detection", a closer look at the function shows that nr_reclaimed == 0 means the function will always return false. And since non-zero nr_reclaimed implies non-zero nr_scanned, testing nr_scanned serves no purpose, and neither does the testing for __GFP_RETRY_MAYFAIL.

This patch thus cleans up the function to test only !nr_reclaimed upfront, and removes the __GFP_RETRY_MAYFAIL test and the nr_scanned parameter completely. The comment is also updated, explaining that approximating "full LRU list has been scanned" with nr_scanned == 0 didn't really work.

Link: http://lkml.kernel.org/r/20190806014744.15446-3-mike.kravetz@oracle.com
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
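The resulting shape of the check, sketched from the description above; only the upfront test is shown, the rest of should_continue_reclaim() is unchanged:

    static inline bool should_continue_reclaim(struct pglist_data *pgdat,
                                               unsigned long nr_reclaimed,
                                               struct scan_control *sc)
    {
        /*
         * Stop if no pages were reclaimed: non-zero nr_reclaimed implies
         * non-zero nr_scanned, so neither nr_scanned nor __GFP_RETRY_MAYFAIL
         * needs to be tested any more.
         */
        if (!nr_reclaimed)
            return false;

        /* ... compaction-readiness checks as before ... */
        return true;
    }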
-
Committed by Hillf Danton

mainline inclusion
from mainline-5.4-rc1
commit 1c6c15971e4709953f75082a5d44212536b1c2b7
category: bugfix
bugzilla: 23286
CVE: NA
-------------------------------------------------

Patch series "address hugetlb page allocation stalls", v2.

Allocation of hugetlb pages via sysctl or procfs can stall for minutes or hours. A simple example on a two node system with 8GB of memory is as follows:

echo 4096 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
echo 4096 > /proc/sys/vm/nr_hugepages

Obviously, both allocation attempts will fall short of their 8GB goal. However, one or both of these commands may stall and not be interruptible. The issues were initially discussed in mail thread [1] and RFC code at [2]. This series addresses the issues causing the stalls. There are two distinct fixes, a cleanup, and an optimization. The reclaim patch by Hillf and the compaction patch by Vlastimil address corner cases in their respective areas. hugetlb page allocation could stall due to either of these issues. Vlastimil added a cleanup patch after Hillf's modifications. The hugetlb patch by Mike is an optimization suggested during the debug and development process.

[1] http://lkml.kernel.org/r/d38a095e-dc39-7e82-bb76-2c9247929f07@oracle.com
[2] http://lkml.kernel.org/r/20190724175014.9935-1-mike.kravetz@oracle.com

This patch (of 4):

Address the issue of should_continue_reclaim returning true too often for __GFP_RETRY_MAYFAIL attempts when !nr_reclaimed and nr_scanned. This was observed during hugetlb page allocation, causing stalls for minutes or hours.

We can stop reclaiming pages if compaction reports it can make progress. There might be side-effects for other high-order allocations that would potentially benefit from reclaiming more before compaction so that they would be faster and less likely to stall. However, the consequences of premature/over-reclaim are considered worse.

We can also bail out of reclaiming pages if we know that there are not enough inactive lru pages left to satisfy the costly allocation.

We can give up reclaiming pages too if we see dryrun occur, with the certainty of plenty of inactive pages. IOW with dryrun detected, we are sure we have reclaimed as many pages as we could.

Link: http://lkml.kernel.org/r/20190806014744.15446-2-mike.kravetz@oracle.com
Signed-off-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Tested-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Lecopzer Chen

mainline inclusion
from mainline-v5.4-rc1
commit ae83189405ea5c693683327fa69ac95a23ec59be
category: bugfix
bugzilla: 23287
CVE: NA
-------------------------------------------------

sparse_buffer_alloc(size) takes memory from sparsemap_buf after aligning it to the requested size. However, the size is at least PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION) and usually larger than PAGE_SIZE. Also, sparse_buffer_fini() only frees memory between sparsemap_buf and sparsemap_buf_end; since sparsemap_buf may be advanced by PTR_ALIGN() first, the aligned space before sparsemap_buf is wasted and no one will touch it.

On our ARM32 platform (without SPARSEMEM_VMEMMAP):
  sparse_buffer_init  reserves  d359c000 - d3e9c000 (9M)
  sparse_buffer_alloc allocates d3a00000 - d3e80000 (4.5M)
  sparse_buffer_fini  frees     d3e80000 - d3e9c000 (~=100k)
The reserved memory between d359c000 - d3a00000 (~=4.4M) is never freed.

On an ARM64 platform (with SPARSEMEM_VMEMMAP):
  sparse_buffer_init  reserves  ffffffc07d623000 - ffffffc07f623000 (32M)
  sparse_buffer_alloc allocates ffffffc07d800000 - ffffffc07f600000 (30M)
  sparse_buffer_fini  frees     ffffffc07f600000 - ffffffc07f623000 (140K)
The reserved memory between ffffffc07d623000 - ffffffc07d800000 (~=1.9M) is never freed.

Let's explicitly free the redundant aligned memory.

[arnd@arndb.de: mark sparse_buffer_free as __meminit]
Link: http://lkml.kernel.org/r/20190709185528.3251709-1-arnd@arndb.de
Link: http://lkml.kernel.org/r/20190705114730.28534-1-lecopzer.chen@mediatek.com
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Signed-off-by: Mark-PK Tsai <Mark-PK.Tsai@mediatek.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: YJ Chiang <yj.chiang@mediatek.com>
Cc: Lecopzer Chen <lecopzer.chen@mediatek.com>
Cc: Pavel Tatashin <pasha.tatashin@oracle.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
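A sketch of the explicit free described above, close to the upstream mm/sparse.c change; the exact bodies are assumed from the commit text:

    static inline void __meminit sparse_buffer_free(unsigned long size)
    {
        WARN_ON(!sparsemap_buf || size == 0);
        memblock_free_early(__pa(sparsemap_buf), size);
    }

    void * __meminit sparse_buffer_alloc(unsigned long size)
    {
        void *ptr = NULL;

        if (sparsemap_buf) {
            ptr = PTR_ALIGN(sparsemap_buf, size);
            if (ptr + size > sparsemap_buf_end)
                ptr = NULL;
            else {
                unsigned long skipped = ptr - sparsemap_buf;

                /* Give back the gap created by the alignment. */
                if (skipped > 0)
                    sparse_buffer_free(skipped);
                sparsemap_buf = ptr + size;
            }
        }
        return ptr;
    }

-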
Committed by Mike Rapoport

mainline inclusion
from mainline-v5.0-rc1
commit 4d72868c8f7c293fc8408a54db4e0a12dc031152
category: bugfix
bugzilla: 23287
CVE: NA
-------------------------------------------------

__memblock_free_early() is only used by the convenience wrappers, so essentially we wrap a call to memblock_free() twice. Replace calls of __memblock_free_early() with calls to memblock_free() and drop the former.

Link: http://lkml.kernel.org/r/20181125102940.GE28634@rapoport-lnx
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Wentao Wang <witallwang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
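The cleanup boils down to the convenience wrapper calling memblock_free() directly; a sketch assuming the 5.0-era wrapper:

    static inline void __init memblock_free_early(phys_addr_t base,
                                                  phys_addr_t size)
    {
        memblock_free(base, size);  /* was: __memblock_free_early(base, size) */
    }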
-
Committed by Wentao Wang

mainline inclusion
from mainline-v5.0-rc1
commit d31cfe7bff9109476da92c245b56083e9b48d60a
category: bugfix
bugzilla: 23287
CVE: NA
-------------------------------------------------

Link: http://lkml.kernel.org/r/C8ECE1B7A767434691FEEFA3A01765D72AFB8E78@MX203CL03.corp.emc.com
Signed-off-by: Wentao Wang <witallwang@gmail.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Konstantin Khlebnikov

mainline inclusion
from mainline-5.4-rc4
commit ae8af4388db002bbd1df78ecee7ca31cee78e964
category: bugfix
bugzilla: 24064
CVE: NA
-------------------------------------------------

Mapped, dirty and writeback pages are also counted in per-lruvec stats. These counters need to be updated when a page is moved between cgroups. Currently nobody is *consuming* the lruvec versions of these counters, so there is no user-visible effect.

Link: http://lkml.kernel.org/r/157112699975.7360.1062614888388489788.stgit@buzz
Fixes: 00f3ca2c ("mm: memcontrol: per-lruvec stats infrastructure")
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
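A sketch of the kind of counter move this implies inside mem_cgroup_move_account(), using the lruvec-aware helper so the per-lruvec counters stay consistent when a page changes cgroup; the from_vec/to_vec naming is illustrative:

    if (page_mapped(page)) {
        __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
        __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
    }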
-
Committed by Weilong Chen

ascend inclusion
category: feature
bugzilla: NA
CVE: NA
-------------------

Fix: ICMP information such as netmask and timestamp is allowed from arbitrary hosts. The new sysctl defaults to disabled.

enable:  sysctl -w net.ipv4.icmp_timestamp_enable=1
disable: sysctl -w net.ipv4.icmp_timestamp_enable=0
test:    hping3 --icmp --icmp-ts -V $IPADDR

Signed-off-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: LI Heng <liheng40@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
[fix-v2: define sysctl_icmp_timestamp_enable ifndef CONFIG_ARCH_ASCEND
 fix-v3: ifndef CONFIG_ARCH_ASCEND, sysctl_icmp_timestamp_enable should be set to 1]
Reviewed-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Michal Koutný

mainline inclusion
from mainline-5.2-rc3
commit bc81426f5beef7da863d3365bc9d45e820448745
category: bugfix
bugzilla: 16855
CVE: NA
-------------------------------------------------

The commit a3b609ef ("proc read mm's {arg,env}_{start,end} with mmap semaphore taken.") added synchronization of reading argument/environment boundaries under mmap_sem. Later commit 88aa7cc6 ("mm: introduce arg_lock to protect arg_start|end and env_start|end in mm_struct") avoided the coarse use of mmap_sem in similar situations. But there still remained two places that (mis)use mmap_sem.

get_cmdline should also use arg_lock instead of mmap_sem when it reads the boundaries.

The second place that should use arg_lock is in prctl_set_mm. By protecting the boundary fields with the arg_lock, we can downgrade mmap_sem to a reader lock (analogous to what we already do in prctl_set_mm_map).

[akpm@linux-foundation.org: coding style fixes]
Link: http://lkml.kernel.org/r/20190502125203.24014-3-mkoutny@suse.com
Fixes: 88aa7cc6 ("mm: introduce arg_lock to protect arg_start|end and env_start|end in mm_struct")
Signed-off-by: Michal Koutný <mkoutny@suse.com>
Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Co-developed-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Cyrill Gorcunov <gorcunov@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: Mateusz Guzik <mguzik@redhat.com>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Chenwandun <chenwandun@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
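For get_cmdline(), the change described above amounts to snapshotting the boundaries under the dedicated spinlock instead of mmap_sem (sketch of the relevant lines):

    spin_lock(&mm->arg_lock);
    arg_start = mm->arg_start;
    arg_end = mm->arg_end;
    env_start = mm->env_start;
    env_end = mm->env_end;
    spin_unlock(&mm->arg_lock);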
-
Committed by Xiang Rui

ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
------------

Update min_defconfig and davinci_defconfig.

Signed-off-by: Xiang Rui <rui.xiang@huawei.com>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Xu Qiang

ascend inclusion
category: bugfix
bugzilla: NA
CVE: NA
------------

Enable RSA on Ascend.

Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Xu Qiang

ascend inclusion
category: feature
bugzilla: NA
CVE: NA
------------

Add CONFIG_DMA_SHARED_BUFFER for tee_drv.

Signed-off-by: Xu Qiang <xuqiang36@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Zhou Guanghui

ascend inclusion
category: feature
bugzilla: NA
CVE: NA
--------

Add CONFIG_ARCH_MINI for the Mini card; add CONFIG_ARCH_ASCEND for Ascend only.

[fix-v2: CONFIG_ARCH_MINI selects CONFIG_ARCH_ASCEND
 fix-v3: add this config to storage_ci_defconfig]

Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Stefan Berger

mainline inclusion
from mainline-v5.3-rc6
commit 5b359c7c4372
category: bugfix
bugzilla: 21420
CVE: NA
-------------------------------------------------

tpm_tis_core has to set TPM_CHIP_FLAG_IRQ before probing for interrupts, since there is no other place in the code that would set it.

Cc: linux-stable@vger.kernel.org
Fixes: 570a3609 ("tpm: drop 'irq' from struct tpm_vendor_specific")
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Conflicts: drivers/char/tpm/tpm_tis_core.c
[yyl: adjust context]
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Rafael J. Wysocki

mainline inclusion
from mainline-v5.3-rc1
commit 2933954b71f1
category: bugfix
bugzilla: 22356
CVE: NA
-------------------------------------------------

It is not actually guaranteed that pm_abort_suspend will be nonzero when pm_system_cancel_wakeup() is called, which may lead to subtle issues, so make it use atomic_dec_if_positive() instead of atomic_dec() for safety's sake.

Fixes: 33e4f80e ("ACPI / PM: Ignore spurious SCI wakeups from suspend-to-idle")
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
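The fix itself is a one-line substitution; a sketch of the resulting helper:

    void pm_system_cancel_wakeup(void)
    {
        /* Only decrement if the counter is still positive, so a cancel that
         * is not paired with a preceding wakeup cannot drive it negative. */
        atomic_dec_if_positive(&pm_abort_suspend);
    }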
-
Committed by Ruslan Bilovol

mainline inclusion
from mainline-v5.3-rc1
commit 6269e4c76eac
category: bugfix
bugzilla: 22495
CVE: NA
-------------------------------------------------

Don't do an extra cpu_to_le32 conversion for put_unaligned_le32, because that conversion is already implemented by the function. Fixes the sparse error:

xhci-hub.c:1152:44: warning: incorrect type in argument 1 (different base types)
xhci-hub.c:1152:44:    expected unsigned int [usertype] val
xhci-hub.c:1152:44:    got restricted __le32 [usertype]

Fixes: 395f5409 ("xhci: support new USB 3.1 hub request to get extended port status")
Cc: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Ruslan Bilovol <ruslan.bilovol@gmail.com>
Link: https://lore.kernel.org/r/1562501839-26522-1-git-send-email-ruslan.bilovol@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
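A before/after sketch of the one-line change; the variable name and buffer offset are illustrative, not the exact xhci-hub.c identifiers. The point is that put_unaligned_le32() already performs the CPU-to-LE conversion:

    /* before: value converted to little-endian twice */
    put_unaligned_le32(cpu_to_le32(ext_status), &buf[4]);

    /* after */
    put_unaligned_le32(ext_status, &buf[4]);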
-
Committed by Ikjoon Jang

mainline inclusion
from mainline-v5.3-rc7
commit 9334367cda85
category: bugfix
bugzilla: 22289
CVE: NA
-------------------------------------------------

xHCI re-enables a slot on a transaction error in set_address by calling xhci_disable_slot() + xhci_alloc_dev(). But in this case, xhci_alloc_dev() creates debugfs entries for an existing device without cleaning up the old entries, and thus memory leaks. So this patch simply moves the call to xhci_debugfs_free_dev() from xhci_free_dev() to xhci_disable_slot().

[added "possible" to header as this is about failure codepath -Mathias]

Signed-off-by: Ikjoon Jang <ikjn@chromium.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/1567172356-12915-5-git-send-email-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Mark Rutland

mainline inclusion
from mainline-5.4-rc4
commit 3813733595c0
category: bugfix
bugzilla: 24024
CVE: NA
-------------------------------------------------

When detecting a spurious EL1 translation fault, we have the CPU retry the translation using an AT S1E1R instruction, and inspect PAR_EL1 to determine if the fault was spurious.

When PAR_EL1.F == 0, the AT instruction successfully translated the address without a fault, which implies the original fault was spurious. However, in this case we return false and treat the original fault as if it was not spurious. Invert the return value so that we treat such a case as spurious.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 42f91093b043 ("arm64: mm: Ignore spurious translation faults taken from the kernel")
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
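The corrected check, sketched from the description (surrounding function omitted):

    /* PAR_EL1.F == 0 means the AT S1E1R retry translated successfully, so the
     * original translation fault was spurious: report true, not false. */
    if (!(par & SYS_PAR_EL1_F))
        return true;

-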
Committed by Yang Yingliang

mainline inclusion
from mainline-5.4-rc4
commit 29a0f5ad87e6
category: bugfix
bugzilla: 24021
CVE: NA
-------------------------------------------------

The 'F' field of the PAR_EL1 register lives in bit 0, not bit 1. Fix the broken definition in 'sysreg.h'.

Fixes: e8620cff9994 ("arm64: sysreg: Add some field definitions for PAR_EL1")
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
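The fix is a single field definition in arch/arm64/include/asm/sysreg.h (sketch):

    #define SYS_PAR_EL1_F   BIT(0)  /* was BIT(1); the fault bit is bit 0 */

-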
Committed by Jiankang Chen

ascend inclusion
category: feature
bugzilla: 16554
CVE: NA
---------

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jiankang Chen

ascend inclusion
category: feature
bugzilla: 16554
CVE: NA
--------

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jiankang Chen

ascend inclusion
category: feature
bugzilla: 16554
CVE: NA
--------

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jiankang Chen

ascend inclusion
category: feature
bugzilla: 16554
CVE: NA
--------

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Qiang Xu

ascend inclusion
category: feature
bugzilla: 16554
CVE: NA
--------

svm: capture the run-queue log when sched debug is enabled.

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Qiang Xu <xuqiang36@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jiankang Chen

ascend inclusion
category: feature
bugzilla: 16554
CVE: NA
--------

svm: provide an mmap function to remap physical memory to a fixed virtual address, and a get_unmapped_area function to obtain an empty virtual address area.

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jiankang Chen

ascend inclusion
category: feature
bugzilla: 16554
CVE: NA
--------

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Jiankang Chen

ascend inclusion
category: feature
bugzilla: 16554
CVE: NA
--------

This feature is used to debug process memory.

Signed-off-by: Jiankang Chen <chenjiankang1@huawei.com>
Signed-off-by: Lijun Fang <fanglijun3@huawei.com>
Reviewed-by: Li Zefan <lizefan@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-