- 30 Oct 2021, 7 commits
-
Committed by Weilong Chen
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

-------------------------------------------------

Give the share pool a dedicated interface for large pages, which
optimizes the efficiency of memory allocation.

Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Ding Tianhong
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

-------------------------------------------------

The share pool is widely used by several accelerators, and user
problems are difficult to debug, so add a debug mode for analysing
them. The mode is enabled by the sysctl_sp_debug_mode flag.

Some functions have been refactored to protect the critical sections
correctly and to output messages more clearly.

Signed-off-by: Tang Yizhou <tangyizhou@huawei.com>
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Signed-off-by: Wu Peng <wupeng58@huawei.com>
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Ding Tianhong
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

-------------------------------------------------

do_mmap/mmap_region/__mm_populate can only operate on the current
process, but the share pool needs to create memory mappings in other
processes as well. Export new functions that take the target process
explicitly; this does not change the existing logic and is only used
by the share pool.

The share pool also needs to remap vmalloc pages into user space, so
introduce hugetlb_insert_hugepage to support hugepage remapping.

Signed-off-by: Tang Yizhou <tangyizhou@huawei.com>
Signed-off-by: Li Ming <limingming.li@huawei.com>
Signed-off-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
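A minimal sketch of the remapping idea for the base-page case, using only
stock 4.19 APIs (the helper name is invented here; hugetlb_insert_hugepage
itself extends the same walk to huge pages, with a signature this log does
not show):

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /* Map a vmalloc'ed buffer into a user VMA, one base page at a time. */
    static int sp_remap_small_pages(struct vm_area_struct *vma, void *kaddr)
    {
            unsigned long size = vma->vm_end - vma->vm_start;
            unsigned long off;
            int ret;

            for (off = 0; off < size; off += PAGE_SIZE) {
                    struct page *page = vmalloc_to_page(kaddr + off);

                    ret = vm_insert_page(vma, vma->vm_start + off, page);
                    if (ret)
                            return ret;
            }
            return 0;
    }

The stock kernel already offers remap_vmalloc_range() for this base-page
case; the new helper is needed presumably because huge pages require
PMD-level inserts.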
-
Committed by Ding Tianhong
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

-------------------------------------------------

mm->sp_group is mainly used to find the group which owns the mm, the
group can use mm->sp_node to list and look up its mms, and
mm->sp_stat_id is used for collecting memory statistics.

These changes affect and break the KABI only when the
ascend_share_pool config is enabled.

Signed-off-by: Tang Yizhou <tangyizhou@huawei.com>
Signed-off-by: Li Ming <limingming.li@huawei.com>
Signed-off-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
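The field names come straight from the text above; a sketch of the shape
of the change (not the literal diff), showing why the KABI only moves when
the config is enabled:

    /* tail of struct mm_struct (sketch) */
    #ifdef CONFIG_ASCEND_SHARE_POOL
            struct sp_group *sp_group;      /* group that owns this mm */
            struct list_head sp_node;       /* entry in the group's mm list */
            int sp_stat_id;                 /* key for memory statistics */
    #endif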
-
Committed by Ding Tianhong
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

-------------------------------------------------

This is a preparation patch for the share pool: export new functions
to vmalloc huge pages and to vmap huge pages into virtually contiguous
space.

The new header file share_pool.h is mainly used for share pool
features; it exports the sp_xxx functions when the ascend_share_pool
config is enabled, and does nothing by default.

Signed-off-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Tang Yizhou <tangyizhou@huawei.com>
Signed-off-by: Li Ming <limingming.li@huawei.com>
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
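The "does nothing by default" part is the usual config-gated stub pattern;
a sketch with assumed prototypes (only the hugepage vmalloc/vmap naming
comes from the log):

    /* include/linux/share_pool.h (sketch) */
    #ifdef CONFIG_ASCEND_SHARE_POOL
    extern void *vmalloc_hugepage(unsigned long size);
    extern void *vmap_hugepage(struct page **pages, unsigned int count,
                               unsigned long flags, pgprot_t prot);
    #else
    static inline void *vmalloc_hugepage(unsigned long size)
    {
            return NULL;            /* feature compiled out */
    }
    static inline void *vmap_hugepage(struct page **pages, unsigned int count,
                                      unsigned long flags, pgprot_t prot)
    {
            return NULL;
    }
    #endif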
-
Committed by Ding Tianhong
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

-------------------------------------------------

mm->owner is currently used only by MEMCG, but the ascend share pool
feature will use it later, so make it a generic feature and have
CONFIG_MEMCG select it.

Signed-off-by: Tang Yizhou <tangyizhou@huawei.com>
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Nicholas Piggin
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

https://lwn.net/ml/linux-kernel/20200825145753.529284-12-npiggin@gmail.com/

Do not distinguish between vmalloc and hugepage vmalloc, because
alloc_large_system_hash does not print the size in v4.19. This patch
also adds page_order to vm_struct, which breaks the KABI.

--------------

Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
support PMD-sized vmap mappings.

vmalloc will attempt to allocate PMD-sized pages if allocating PMD size
or larger, and fall back to small pages if that was unsuccessful.

Allocations that do not use PAGE_KERNEL prot are not permitted to use
huge pages, because not all callers expect this (e.g., module
allocations vs strict module rwx).

This reduces TLB misses by nearly 30x on a `git diff` workload on a
2-node POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.

This can result in more internal fragmentation and memory overhead for
a given allocation; a boot option, nohugevmalloc, is added to disable
it.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
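A minimal sketch of the try-PMD-then-fall-back policy described above (the
upstream code allocates page arrays and maps them; this compresses it to
the allocation decision only, with an invented helper name):

    static struct page *vmalloc_alloc_chunk(unsigned int *order, gfp_t gfp,
                                            unsigned long size, pgprot_t prot)
    {
            /* only PAGE_KERNEL requests of at least PMD_SIZE may go huge */
            if (size >= PMD_SIZE &&
                pgprot_val(prot) == pgprot_val(PAGE_KERNEL)) {
                    struct page *page = alloc_pages(gfp | __GFP_NOWARN,
                                                    PMD_SHIFT - PAGE_SHIFT);
                    if (page) {
                            *order = PMD_SHIFT - PAGE_SHIFT;
                            return page;
                    }
            }
            *order = 0;                     /* fall back to base pages */
            return alloc_pages(gfp, 0);
    }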
-
- 29 Oct 2021, 8 commits
-
Committed by Nicholas Piggin
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

https://lwn.net/ml/linux-kernel/20200825145753.529284-10-npiggin@gmail.com/

--------------

This is a generic kernel virtual memory mapper, not specific to
ioremap.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Nicholas Piggin
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

https://lwn.net/ml/linux-kernel/20200825145753.529284-6-npiggin@gmail.com/

--------------

This changes the awkward approach where architectures provide init
functions to determine which levels they can provide large mappings
for, to one where the arch is queried for each call.

This removes code and indirection, and allows constant-folding of dead
code for unsupported levels.

This also adds a prot argument to the arch query. This is unused
currently but could help with some architectures (e.g., some powerpc
processors can't map uncacheable memory with large pages).

Cc: linuxppc-dev@lists.ozlabs.org
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
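The per-call query style looks roughly like this (the name matches what
this series eventually used upstream, arch_vmap_pmd_supported(); treat it
as an assumption for this backport):

    #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
    static inline bool arch_vmap_pmd_supported(pgprot_t prot)
    {
            return true;    /* e.g. arm64: PMD-sized kernel mappings work */
    }
    #else
    static inline bool arch_vmap_pmd_supported(pgprot_t prot)
    {
            return false;   /* constant-folds the huge-vmap paths away */
    }
    #endif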
-
Committed by Anshuman Khandual
mainline inclusion
from mainline-v5.3-rc1
commit 0f472d04
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

Drop the powerpc changes for this backport.

---------------------------

Finish up what commit c2febafc ("mm: convert generic code to 5-level
paging") started while levelling up P4D huge mapping support at par
with PUD and PMD. A new arch callback arch_ioremap_p4d_supported() is
added which just maintains the status quo (P4D huge map not supported)
on x86, arm64 and powerpc.

When HAVE_ARCH_HUGE_VMAP is enabled it's just a simple check from the
arch about the support, hence the runtime effects are minimal.

Link: http://lkml.kernel.org/r/1561699231-20991-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
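Since the callback only preserves the status quo, its per-arch body is a
one-liner; a sketch matching the commit's description:

    int __init arch_ioremap_p4d_supported(void)
    {
            return 0;       /* P4D-sized huge ioremap: not supported */
    }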
-
Committed by Christoph Hellwig
mainline inclusion
from mainline-v5.8-rc1
commit ed1f324c
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

For this backport, adjust the map_vm_area call in the DMA mapping
code, and do not remove the one in ceph_common.

---------------------------

Switch all callers to map_kernel_range, which is symmetric to the
unmap side (as well as the _noflush versions).

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Airlie <airlied@linux.ie>
Cc: Gao Xiang <xiang@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Link: http://lkml.kernel.org/r/20200414131348.444715-17-hch@lst.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
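Callers end up pairing the map and unmap sides symmetrically; a sketch
against the v5.8 signatures (treat the exact prototypes as an assumption
in this 4.19 backport):

    /* map an array of pages at a contiguous kernel virtual address */
    static void *map_pages_contig(struct page **pages, unsigned int nr)
    {
            struct vm_struct *area;

            area = get_vm_area(nr * PAGE_SIZE, VM_MAP);
            if (!area)
                    return NULL;
            if (map_kernel_range((unsigned long)area->addr, nr * PAGE_SIZE,
                                 PAGE_KERNEL, pages) < 0) {
                    free_vm_area(area);
                    return NULL;
            }
            return area->addr;      /* later: unmap_kernel_range() + free */
    }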
-
Committed by Christoph Hellwig
mainline inclusion
from mainline-v5.8-rc1
commit 49266277
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

---------------------------

Switch the two remaining callers to use __get_vm_area_caller instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Airlie <airlied@linux.ie>
Cc: Gao Xiang <xiang@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Will Deacon <will@kernel.org>
Link: http://lkml.kernel.org/r/20200414131348.444715-9-hch@lst.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Daniel Axtens
mainline inclusion
from mainline-v5.5-rc3
commit be1db475
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

---------------------------

apply_to_page_range() takes an address range, and if any parts of it
are not covered by the existing page table hierarchy, it allocates
memory to fill them in. In some use cases, this is not what we want -
we want to be able to operate exclusively on PTEs that are already in
the tables. Add apply_to_existing_page_range() for this.

Adjust the walker functions for apply_to_page_range to take 'create',
which switches them between the old and new modes. This will be used
in KASAN vmalloc.

[akpm@linux-foundation.org: reduce code duplication]
[akpm@linux-foundation.org: s/apply_to_existing_pages/apply_to_existing_page_range/]
[akpm@linux-foundation.org: initialize __apply_to_page_range::err]
Link: http://lkml.kernel.org/r/20191205140407.1874-1-dja@axtens.net
Signed-off-by: Daniel Axtens <dja@axtens.net>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Qian Cai <cai@lca.pw>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
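Usage differs from apply_to_page_range() only in that missing page tables
are skipped rather than allocated; a sketch (using the v5.5 pte_fn_t
signature; the 4.19 backport may still carry the extra pgtable_t
argument):

    static int count_present(pte_t *pte, unsigned long addr, void *data)
    {
            if (!pte_none(*pte))
                    (*(unsigned long *)data)++;
            return 0;
    }

    static unsigned long count_mapped(struct mm_struct *mm,
                                      unsigned long start, unsigned long size)
    {
            unsigned long nr = 0;

            /* never allocates page tables, unlike apply_to_page_range() */
            apply_to_existing_page_range(mm, start, size, count_present, &nr);
            return nr;
    }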
-
Committed by Ingo Molnar
mainline inclusion
from mainline v5.6-rc1
commit 1f059dfd
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4EUVI
CVE: NA

---------------------------

In the x86 MM code we'd like to untangle various types of historic
header dependency spaghetti, but for this we'd need to pass to the
generic vmalloc code various vmalloc-related defines that customarily
come via the <asm/page.h> low level arch header.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Rui Xiang <rui.xiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Reviewed-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Fang Lijun
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4D63I
CVE: NA

-------------------------------------------------

An interface, do_vm_mmap, is added to support allocation in the
address spaces of other processes.

Signed-off-by: Fang Lijun <fanglijun3@huawei.com>
Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
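The log does not show the prototype; a plausible shape, by analogy with
vm_mmap() plus an explicit mm argument (purely an assumption):

    /* like vm_mmap(), but against a caller-supplied address space */
    unsigned long do_vm_mmap(struct mm_struct *mm, unsigned long addr,
                             unsigned long len, unsigned long prot,
                             unsigned long flags, unsigned long pgoff);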
-
- 27 Oct 2021, 2 commits
-
Committed by zhangguijiang
ascend inclusion
category: feature
feature: Ascend emmc adaption
bugzilla: https://gitee.com/openeuler/kernel/issues/I4F4LL
CVE: NA

--------------------

To support the Ascend HiSilicon eMMC chip, add specific extensions to
the MMC core, including MMC card sleep/awake/reset, cache control, and
power (VCC) adaptations. Also enable some features for eMMC 5.0/5.1.

Signed-off-by: zhangguijiang <zhangguijiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by zhangguijiang
ascend inclusion
category: feature
feature: Ascend emmc adaption
bugzilla: https://gitee.com/openeuler/kernel/issues/I4F4LL
CVE: NA

--------------------

To identify the Ascend HiSilicon eMMC chip, add a customized property
to the dts, and add an interface to read this property. At the same
time, provide a switch, CONFIG_ASCEND_HISI_MMC, so that all of these
modifications can be compiled out.

Signed-off-by: zhangguijiang <zhangguijiang@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
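A sketch of what such a detection helper typically looks like (the
property name and helper are assumptions; only the Kconfig symbol comes
from the log):

    #include <linux/of.h>

    #ifdef CONFIG_ASCEND_HISI_MMC
    static bool mmc_is_ascend_hisi(struct device_node *np)
    {
            return of_property_read_bool(np, "hisilicon,ascend-emmc");
    }
    #else
    static inline bool mmc_is_ascend_hisi(struct device_node *np)
    {
            return false;   /* modifications compiled out */
    }
    #endif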
-
- 25 Oct 2021, 11 commits
-
Committed by Xingang Wang
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA

-------------------------------------------------

The user_mpam_en configuration is used to enable/disable MPAM for the
SMMU. If user_mpam_en is set to true, memory requests across the SMMU
will carry the mpamid information; otherwise the mpamid will not take
effect.

Signed-off-by: Xingang Wang <wangxingang5@huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Xingang Wang
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA

-------------------------------------------------

This adds the svm_set_mpam interface to the svm module, which
traverses all accelerators and sets the MPAM configuration in the
SMMU. If an error occurs during the traversal, the MPAM configuration
of all accelerators is rolled back to the previous state.

Signed-off-by: Xingang Wang <wangxingang5@huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
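A sketch of the traverse-and-rollback pattern described above (the device
list and helper names are invented for illustration):

    #include <linux/list.h>

    struct svm_acc {
            struct list_head node;
            int old_partid, old_pmg;        /* saved for rollback */
    };

    static int svm_set_mpam_all(struct list_head *accs, int partid, int pmg,
                                int (*set_one)(struct svm_acc *, int, int))
    {
            struct svm_acc *acc, *undo;
            int err;

            list_for_each_entry(acc, accs, node) {
                    err = set_one(acc, partid, pmg);
                    if (err)
                            goto rollback;
            }
            return 0;

    rollback:
            /* restore every accelerator configured before the failure */
            list_for_each_entry(undo, accs, node) {
                    if (undo == acc)
                            break;
                    set_one(undo, undo->old_partid, undo->old_pmg);
            }
            return err;
    }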
-
Committed by Xingang Wang
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA

-------------------------------------------------

This adds an interface to get the MPAM configuration of the
accelerators managed by svm. The svm MPAM get interface traverses all
accelerators and, if all succeed, returns the last successful result.

Signed-off-by: Xingang Wang <wangxingang5@huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai@huawei.com>
Reviewed-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Xingang Wang
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA

-------------------------------------------------

The user_mpam_en configuration is used to enable/disable whether the
SMMU MPAM configuration will be used. If user_mpam_en is 1, memory
requests across the SMMU will not carry the SMMU MPAM configuration.

Signed-off-by: Xingang Wang <wangxingang5@huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Xingang Wang
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA

-------------------------------------------------

Add an interface to get the MPAM configuration of the CD/STE context,
and use s1mpam to indicate whether partid and pmg come from the CD or
the STE.

Signed-off-by: Xingang Wang <wangxingang5@huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Xingang Wang
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I49RB2
CVE: NA

-------------------------------------------------

To support limiting the QoS of a device, the partid and pmg need to be
set into the SMMU STE/CD context. This introduces support for the SMMU
MPAM feature and adds an interface to set the MPAM configuration in
the STE/CD.

Signed-off-by: Xingang Wang <wangxingang5@huawei.com>
Signed-off-by: Rui Zhu <zhurui3@huawei.com>
Reviewed-by: Yingtai Xie <xieyingtai@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Yang Yingliang
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA

---------------------------

Fix the KABI breakage in struct device introduced by a2734433
("PCI/MSI: Protect msi_desc::masked for multi-MSI").

Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Thomas Gleixner
stable inclusion
from linux-4.19.205
commit 3c9534778d4cc2bd01e20d4dcffc55df0962aa12

--------------------------------

commit 77e89afc upstream.

Multi-MSI uses a single MSI descriptor and there is a single mask
register when the device supports per-vector masking. To avoid reading
back the mask register the value is cached in the MSI descriptor and
updates are done by clearing and setting bits in the cache and writing
it to the device.

But nothing protects msi_desc::masked and the mask register from being
modified concurrently on two different CPUs for two different Linux
interrupts which belong to the same multi-MSI descriptor.

Add a lock to struct device and protect any operation on the mask and
the mask register with it.

This makes the update of msi_desc::masked unconditional, but there is
no place which requires a modification of the hardware register
without updating the masked cache.

msi_mask_irq() is now an empty wrapper which will be cleaned up in
follow-up changes.

The problem goes way back to the initial support of multi-MSI, but
picking the commit which introduced the mask cache is a valid cut-off
point (2.6.30).

Fixes: f2440d9a ("PCI MSI: Refactor interrupt masking code")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210729222542.726833414@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
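The protected update itself is small; a sketch modeled on the upstream
change (helper name invented, fields as described above):

    static void msi_update_masked(struct msi_desc *desc, u32 clear, u32 set)
    {
            raw_spinlock_t *lock = &desc->dev->msi_lock;    /* new lock */
            unsigned long flags;

            raw_spin_lock_irqsave(lock, flags);
            desc->masked &= ~clear;
            desc->masked |= set;
            /* write the cached mask back to the one hardware register */
            pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->mask_pos,
                                   desc->masked);
            raw_spin_unlock_irqrestore(lock, flags);
    }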
-
Committed by Thomas Gleixner
stable inclusion
from linux-4.19.205
commit cab824f67d7e8f68288d615929dec02607e473ad

--------------------------------

commit 826da771 upstream.

X86 IO/APIC and MSI interrupts (when used without interrupt remapping)
require that the affinity setup on startup is done before the
interrupt is enabled for the first time, as the non-remapped operation
mode cannot safely migrate enabled interrupts from arbitrary contexts.
Provide a new irq chip flag which allows affected hardware to request
this.

This has to be opt-in because there have been reports in the past that
some interrupt chips cannot handle affinity setting before startup.

Fixes: 18404756 ("genirq: Expose default irq affinity mask (take 3)")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210729222542.779791738@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
	include/linux/irq.h
[yyl: adjust context]
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
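An affected chip opts in by setting the new flag; a sketch (the flag name
is from the upstream commit, the chip itself is illustrative):

    static struct irq_chip my_msi_chip = {
            .name           = "my-msi",
            .irq_mask       = pci_msi_mask_irq,
            .irq_unmask     = pci_msi_unmask_irq,
            /* force affinity setup before the first enable */
            .flags          = IRQCHIP_AFFINITY_PRE_STARTUP,
    };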
-
Committed by Eric Dumazet
stable inclusion
from linux-4.19.201
commit 16851e34b621bc7e652c508bb28c47948fb86958

--------------------------------

commit 0f6925b3 upstream.

Xuan Zhuo reported that commit 3226b158 ("net: avoid 32 x truesize
under-estimation for tiny skbs") brought a ~10% performance drop.

The reason for the performance drop was that GRO was forced to chain
sk_buff (using skb_shinfo(skb)->frag_list), which uses more memory but
also causes packet consumers to go over a lot of overhead handling all
the tiny skbs.

It turns out that virtio_net page_to_skb() has a wrong strategy: it
allocates skbs with GOOD_COPY_LEN (128) bytes in skb->head, then
copies 128 bytes from the page, before feeding the packet to the GRO
stack.

This was suboptimal before commit 3226b158 ("net: avoid 32 x truesize
under-estimation for tiny skbs") because GRO was using 2 frags per
MSS, meaning we were not packing MSS with 100% efficiency.

The fix is to pull only the ethernet header in page_to_skb().

Then, we change virtio_net_hdr_to_skb() to pull the missing headers,
instead of assuming they were already pulled by callers.

This fixes the performance regression, but could also allow virtio_net
to accept packets with more than 128 bytes of headers.

Many thanks to Xuan Zhuo for his report, and his tests/help.

Fixes: 3226b158 ("net: avoid 32 x truesize under-estimation for tiny skbs")
Reported-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Link: https://www.spinics.net/lists/netdev/msg731397.html
Co-developed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: virtualization@lists.linux-foundation.org
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Willem de Bruijn
stable inclusion
from linux-4.19.128
commit 8920e8ae16a89bebd4d98ec6c7b306b1e3e06722

--------------------------------

[ Upstream commit 6dd912f8 ]

Syzkaller again found a path to a kernel crash through bad gso input:
a packet with gso size exceeding len.

These packets are dropped in tcp_gso_segment and udp[46]_ufo_fragment.
But they may affect gso size calculations earlier in the path.

Now that we have thlen as of commit 9274124f ("net: stricter
validation of untrusted gso packets"), check gso_size at entry too.

Fixes: bfd5f4a3 ("packet: Add GSO/csum offload support.")
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
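The entry check boils down to comparing gso_size against the payload left
after the headers; a sketch (helper name invented, logic mirroring the
upstream check):

    #include <linux/skbuff.h>

    /* p_off: offset where the headers end (derived from thlen) */
    static int check_gso_size(const struct sk_buff *skb, u16 gso_size,
                              unsigned int p_off)
    {
            if (skb->len - p_off <= gso_size)
                    return -EINVAL; /* gso size must be < payload length */
            return 0;
    }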
-
- 22 Oct 2021, 2 commits
-
Committed by Zhou Guanghui
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4D63I
CVE: NA

----------------------------------------------------

The current function hugetlb_alloc_hugepage allocates from the static
hugepage pool first. When the static hugepages are used up, it
attempts to allocate hugepages from the buddy system. Two additional
modes are now supported: static hugepages only, and buddy hugepages
only.

Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Signed-off-by: Guo Mengqi <guomengqi3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
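A sketch of the three policies described above (the enum and helper names
are assumptions; only hugetlb_alloc_hugepage comes from the log):

    enum hp_alloc_mode {
            HP_ALLOC_DEFAULT,       /* static pool first, then buddy */
            HP_ALLOC_STATIC_ONLY,   /* reserved huge pages only */
            HP_ALLOC_BUDDY_ONLY,    /* buddy-allocated huge pages only */
    };

    static struct page *hp_alloc(int nid, enum hp_alloc_mode mode)
    {
            struct page *page = NULL;

            if (mode != HP_ALLOC_BUDDY_ONLY)
                    page = alloc_huge_page_node(&default_hstate, nid);
            if (!page && mode != HP_ALLOC_STATIC_ONLY)
                    page = alloc_pages_node(nid, GFP_KERNEL | __GFP_COMP,
                                            HUGETLB_PAGE_ORDER);
            return page;
    }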
-
Committed by Zhou Guanghui
ascend inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4D63I
CVE: NA

-------------------------------------------------------------

The following functions are used only in the ascend scenario:
hugetlb_get_hstate, hugetlb_alloc_hugepage,
hugetlb_insert_hugepage_pte, hugetlb_insert_hugepage_pte_by_pa.

Remove the unused interface hugetlb_insert_hugepage.

Signed-off-by: Zhou Guanghui <zhouguanghui1@huawei.com>
Signed-off-by: Guo Mengqi <guomengqi3@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
- 20 Oct 2021, 2 commits
-
Committed by Yu'an Wang
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

1. Add input parameter checks to the uacce_unregister API.
2. Change uacce_qfrt_str to an internal interface, because it is used
   only in uacce.c.

Signed-off-by: Yu'an Wang <wangyuan46@huawei.com>
Reviewed-by: Longfang Liu <liulongfang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Arun KS
mainline inclusion
from mainline-v5.1-rc1
commit a9cd410a
category: feature
bugzilla: 182882
CVE: NA

-----------------------------------------------

When pages are freed at a higher order, the time the buddy allocator
spends coalescing pages is reduced. With a section size of 256MB, the
hot-add latency of a single section improves from 50-60 ms to less
than 1 ms, improving hot-add latency by 60 times.

Modify external providers of the online callback to align with the
change.

[arunks@codeaurora.org: v11]
Link: http://lkml.kernel.org/r/1547792588-18032-1-git-send-email-arunks@codeaurora.org
[akpm@linux-foundation.org: remove unused local, per Arun]
[akpm@linux-foundation.org: avoid return of void-returning __free_pages_core(), per Oscar]
[akpm@linux-foundation.org: fix it for mm-convert-totalram_pages-and-totalhigh_pages-variables-to-atomic.patch]
[arunks@codeaurora.org: v8]
Link: http://lkml.kernel.org/r/1547032395-24582-1-git-send-email-arunks@codeaurora.org
[arunks@codeaurora.org: v9]
Link: http://lkml.kernel.org/r/1547098543-26452-1-git-send-email-arunks@codeaurora.org
Link: http://lkml.kernel.org/r/1538727006-5727-1-git-send-email-arunks@codeaurora.org
Signed-off-by: Arun KS <arunks@codeaurora.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
	mm/page_alloc.c
	mm/memory_hotplug.c
[Peng Liu: adjust context]
Signed-off-by: Peng Liu <liupeng256@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
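The visible interface change is that the online callback now receives the
order of the block being freed (per the mainline commit):

    /* callbacks now free 2^order pages at once */
    typedef void (*online_page_callback_t)(struct page *page,
                                           unsigned int order);

    static void generic_online_page(struct page *page, unsigned int order)
    {
            __free_pages_core(page, order);
    }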
-
- 14 Oct 2021, 8 commits
-
Committed by Wei Li
hulk inclusion
category: Bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I47H3V
CVE: NA

-------------------------------------------------

Use the new pmu_attr_update_notifier_chain to update attribute groups,
fixing the KABI breakage introduced by the following commits:
"Intel: perf/core: Add attr_groups_update into struct pmu" and
"Intel: perf/x86: Use the new pmu::update_attrs attribute group".

Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Yang Jihong <yangjihong1@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Zheng Zengkai
hulk inclusion
category: Bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I47H3V
CVE: NA

-------------------------------------------------

Fix the KABI breakage introduced by the following commit:
"perf/x86/intel: Support per-thread RDPMC TopDown metrics".

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Yang Jihong <yangjihong1@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Zheng Zengkai
hulk inclusion
category: Bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I47H3V
CVE: NA

-------------------------------------------------

Fix the KABI breakage introduced by the following commits:
"PCI: Add support for Immediate Readiness",
"PCI: Make link active reporting detection generic" and
"PCI: Get rid of dev->has_secondary_link flag".

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: wangxiongfeng 00379786 <wangxiongfeng2@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Zheng Zengkai
hulk inclusion
category: Bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I47H3V
CVE: NA

-------------------------------------------------

Fix the KABI breakage introduced by the following commit:
"PCI: Decode PCIe 32 GT/s link speed".

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Stephen Rothwell
mainline inclusion
from mainline-v5.3-rc1
commit 8da04e05
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I47H3V
CVE: NA

--------------------------------

Fixes: 7ebf8eff ("intel_rapl: introduce struct rapl_if_private")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
Committed by Alexander Shishkin
mainline inclusion
from mainline-v5.4-rc1
commit 615c164d
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I47H3V
CVE: NA

--------------------------------

Introduces a concept of external buffers, which is a mechanism for
creating trace sinks that receive trace data from MSC buffers and
transfer it elsewhere.

An external buffer can implement its own window
allocation/deallocation if it has to. It must provide a callback
that's used to notify it when a window fills up, so that it can then
start a DMA transaction from that window 'elsewhere'. This window
remains in a 'locked' state and won't be used for storing new trace
data until the buffer 'unlocks' it with a provided API call, at which
point the window can be used again for storing trace data.

This relies on a functional "last block" interrupt, so not all
versions of the Trace Hub can use this feature, which does not affect
existing users.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://lore.kernel.org/r/20190705141425.19894-2-alexander.shishkin@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
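A sketch of what a minimal sink looks like against this interface
(callback names modeled on the upstream msu_buffer structure; treat the
exact signatures as assumptions for this backport):

    /* called when a window fills: start the DMA out of 'sgt', then
     * release the window later via the provided unlock call
     * (intel_th_msc_window_unlock() upstream) */
    static int my_sink_ready(void *priv, struct sg_table *sgt, size_t bytes)
    {
            return 0;
    }

    static const struct msu_buffer my_sink = {
            .name  = "my_sink",
            .ready = my_sink_ready,
    };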
-
Committed by Peter Zijlstra
mainline inclusion
from mainline-v5.8-rc1
commit 69ea03b5
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I47H3V
CVE: NA

--------------------------------

commit 69ea03b5 upstream.

Backport summary: backport to kernel 4.19.57 to enhance MCA-R for
copyin; the backport removes the changes to non-x86 architectures.

Since there are already a number of sites (ARM64, PowerPC) that
effectively nest nmi_enter(), make the primitive support this before
adding even more.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lkml.kernel.org/r/20200505134100.864179229@linutronix.de
Signed-off-by: Youquan Song <youquan.song@intel.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
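Nesting works by treating the NMI bits of preempt_count as a counter
rather than a flag; simplified from the upstream helper (lockdep/RCU/
ftrace bookkeeping omitted):

    #define nmi_enter()                                             \
            do {                                                    \
                    BUG_ON(in_nmi() == NMI_MASK);   /* depth overflow */ \
                    preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET); \
            } while (0)

    #define nmi_exit()                                              \
            do {                                                    \
                    BUG_ON(!in_nmi());                              \
                    preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET); \
            } while (0)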
-
Committed by Kan Liang
mainline inclusion
from mainline-v5.10-rc1
commit 2cb5383b
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I47H3V
CVE: NA

--------------------------------

commit 2cb5383b upstream.

Backport summary: backport to kernel 4.19.57 for ICX perf topdown
support.

Starting from Ice Lake, the TopDown metrics are directly available as
fixed counters and do not require generic counters. Also, the TopDown
metrics can be collected per thread.

Extend the RDPMC usage to support per-thread TopDown metrics. The
RDPMC index of the PERF_METRICS will be output if RDPMC users ask for
the RDPMC index of the metrics events.

To support per-thread RDPMC TopDown, the metrics and slots counters
have to be saved/restored during context switching.

The last_period and period_left are not used in the counting mode. Use
the fields for saved_metric and saved_slots.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200723171117.9918-12-kan.liang@linux.intel.com
Signed-off-by: Yunying Sun <yunying.sun@intel.com>
Signed-off-by: Jackie Liu <liuyun01@kylinos.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
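Reusing last_period/period_left for the saved state amounts to a union
inside struct hw_perf_event; a sketch of that shape (per the commit text
above):

    /* inside struct hw_perf_event */
    union {
            struct {                /* sampling */
                    u64             last_period;
                    local64_t       period_left;
            };
            struct {                /* topdown counting: context-switch save */
                    u64             saved_metric;
                    u64             saved_slots;
            };
    };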
-