- 17 Feb 2022, 9 commits
-
-
Submitted by Chao Liu
euler inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T7MX
CVE: NA

--------------------------------

Provide a separate, distinct keyring for platform trusted keys which is used in secure boot.

Signed-off-by: Chao Liu <liuchao173@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Kai Liu <kai.liu@suse.com>
Reviewed-by: Yin Xiujiang <yinxiujiang@kylinos.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jon Maloy
mainline inclusion
from mainline-v5.17-rc4
commit 9aa422ad
category: bugfix
bugzilla: 186252 https://gitee.com/openeuler/kernel/issues/I4U36S
CVE: CVE-2022-0435
Reference: https://github.com/torvalds/linux/commit/9aa422ad326634b76309e8ff342c246800621216

--------------------------------

The function tipc_mon_rcv() allows a node to receive and process domain_record structs from peer nodes to track their views of the network topology.

This patch verifies that the number of members in a received domain record does not exceed the limit defined by MAX_MON_DOMAIN, something that may otherwise lead to a stack overflow.

tipc_mon_rcv() is called from the function tipc_link_proto_rcv(), where we are reading a 32 bit message data length field into a uint16. To avert any risk of bit overflow, we add an extra sanity check for this in that function. We cannot see that happen with the current code, but future designers being unaware of this risk may introduce it by allowing delivery of very large (> 64k) sk buffers from the bearer layer. This potential problem was identified by Eric Dumazet.

This fixes CVE-2022-0435.

Reported-by: Samuel Page <samuel.page@appgate.com>
Reported-by: Eric Dumazet <edumazet@google.com>
Fixes: 35c55c98 ("tipc: add neighbor monitoring framework")
Signed-off-by: Jon Maloy <jmaloy@redhat.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Samuel Page <samuel.page@appgate.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Yongjun Wei <weiyongjun1@huawei.com>
Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
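The core of the fix is a bounds check on a peer-controlled count before the record is copied into a fixed-size, stack-resident structure. The sketch below only illustrates that pattern; the struct layout and names are simplified assumptions, not the actual TIPC code.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_MON_DOMAIN 64          /* illustrative cap, in the spirit of MAX_MON_DOMAIN */

    struct dom_record {
            uint16_t member_cnt;       /* count supplied by the peer */
            uint32_t members[MAX_MON_DOMAIN];
    };

    /* bytes needed for a record carrying 'cnt' members */
    static size_t dom_rec_len(uint16_t cnt)
    {
            return offsetof(struct dom_record, members) + cnt * sizeof(uint32_t);
    }

    static int record_rcv(const void *data, size_t len, struct dom_record *cache)
    {
            uint16_t cnt;

            if (len < offsetof(struct dom_record, members))
                    return -1;                         /* truncated header */
            memcpy(&cnt, data, sizeof(cnt));           /* peer-controlled value */
            if (cnt > MAX_MON_DOMAIN)
                    return -1;                         /* the added sanity check */
            if (len < dom_rec_len(cnt))
                    return -1;                         /* record shorter than it claims */
            memcpy(cache, data, dom_rec_len(cnt));     /* now provably fits in *cache */
            return 0;
    }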
-
Submitted by Eric W. Biederman
stable inclusion
from stable-v5.10.97
commit 1fc3444cda9a78c65b769e3fa93455e09ff7a0d3
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TUA0?from=project-issue
CVE: CVE-2022-0492

-------------------------------

commit 24f60085 upstream.

The cgroup release_agent is called with call_usermodehelper. The function call_usermodehelper starts the release_agent with a full set of capabilities. Therefore require capabilities when setting the release_agent.

Reported-by: Tabitha Sable <tabitha.c.sable@gmail.com>
Tested-by: Tabitha Sable <tabitha.c.sable@gmail.com>
Fixes: 81a6a5cd ("Task Control Groups: automatic userspace notification of idle cgroups")
Cc: stable@vger.kernel.org # v2.6.24+
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Lu Jialin <lujialin4@huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
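The shape of the fix is a permission check at the point where the release_agent path is written, since whatever is configured there later runs with a full capability set via call_usermodehelper(). The snippet below is a hedged sketch of that check; the surrounding function and exact condition are simplified assumptions, not the literal upstream diff.

    static ssize_t release_agent_write(struct kernfs_open_file *of,
                                       char *buf, size_t nbytes, loff_t off)
    {
            /*
             * The configured agent is later spawned with full capabilities,
             * so only a CAP_SYS_ADMIN holder in the initial user namespace
             * may change it.
             */
            if ((of->file->f_cred->user_ns != &init_user_ns) ||
                !capable(CAP_SYS_ADMIN))
                    return -EPERM;

            /* ... existing code that stores the new release_agent path ... */
            return nbytes;
    }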
-
Submitted by Trond Myklebust
mainline inclusion
from mainline-v5.17-rc2
commit 1751fc1d
category: bugfix
bugzilla: 186205 https://gitee.com/openeuler/kernel/issues/I4U2NK
CVE: CVE-2022-24448

-----------------------------------------------

If the file type changes back to being a regular file on the server between the failed OPEN and our LOOKUP, then we need to re-run the OPEN.

Fixes: 0dd2b474 ("nfs: implement i_op->atomic_open()")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Trond Myklebust
mainline inclusion
from mainline-v5.17-rc2
commit ac795161
category: bugfix
bugzilla: 186205 https://gitee.com/openeuler/kernel/issues/I4U2NK
CVE: CVE-2022-24448

-----------------------------------------------

If the application sets the O_DIRECTORY flag, and tries to open a regular file, nfs_atomic_open() will punt to doing a regular lookup. If the server then returns a regular file, we will happily return a file descriptor with uninitialised open state.

The fix is to return the expected ENOTDIR error in these cases.

Reported-by: Lyu Tao <tao.lyu@epfl.ch>
Fixes: 0dd2b474 ("nfs: implement i_op->atomic_open()")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
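The expected semantics after this fix can be checked from userspace: opening a regular file with O_DIRECTORY must fail with ENOTDIR rather than yield a usable descriptor. A minimal check, assuming a hypothetical regular file on an NFS mount at /mnt/nfs/regular-file:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
            /* the path is a placeholder for any regular file on an NFS mount */
            int fd = open("/mnt/nfs/regular-file", O_RDONLY | O_DIRECTORY);

            if (fd < 0 && errno == ENOTDIR)
                    puts("O_DIRECTORY on a regular file is correctly rejected");
            else
                    printf("unexpected result: fd=%d errno=%d\n", fd, errno);
            return 0;
    }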
-
Submitted by Zhang Qiao
maillist inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TR86
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git/commit/?id=05c7b7a92cc87ff8d7fde189d0fade250697573c

--------------------------------

As previously discussed (https://lkml.org/lkml/2022/1/20/51), cpuset_attach() is affected by a similar cpu hotplug race, as in the following scenario:

  cpuset_attach()                               cpu hotplug
  ---------------------------                   ----------------------
  down_write(cpuset_rwsem)
  guarantee_online_cpus() // (load cpus_attach)
                                                sched_cpu_deactivate
                                                  set_cpu_active()
                                                  // will change cpu_active_mask
  set_cpus_allowed_ptr(cpus_attach)
    __set_cpus_allowed_ptr_locked()
    // (if the intersection of cpus_attach and
    // cpu_active_mask is empty, will return -EINVAL)
  up_write(cpuset_rwsem)

To avoid races such as described above, protect the cpuset_attach() call with cpu_hotplug_lock.

Fixes: be367d09 ("cgroups: let ss->can_attach and ss->attach do whole threadgroups at a time")
Cc: stable@vger.kernel.org # v2.6.32+
Reported-by: Zhao Gongyi <zhaogongyi@huawei.com>
Signed-off-by: Zhang Qiao <zhangqiao22@huawei.com>
Acked-by: Waiman Long <longman@redhat.com>
Reviewed-by: Michal Koutný <mkoutny@suse.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
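A minimal sketch of the locking pattern this entry describes: take the read side of cpu_hotplug_lock around the section that samples the active CPUs and applies the new mask, so cpu_active_mask cannot change in between. This is an illustration of the pattern, not the literal upstream diff; guarantee_online_cpus() and set_cpus_allowed_ptr() are the helpers named in the log, and the exact placement inside cpuset_attach() may differ.

    static void cpuset_attach(struct cgroup_taskset *tset)
    {
            cpus_read_lock();                       /* pin cpu_active_mask */
            percpu_down_write(&cpuset_rwsem);

            /*
             * ... unchanged body: guarantee_online_cpus() computes
             * cpus_attach, then set_cpus_allowed_ptr() applies it to each
             * task in the set; both now see a stable cpu_active_mask ...
             */

            percpu_up_write(&cpuset_rwsem);
            cpus_read_unlock();
    }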
-
Submitted by Zhang Wensheng
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4STNX?from=project-issue
CVE: NA

--------------------------------

When the inflight IOs are slow and no new IOs are issued, we expect iostat to reveal the IO hang problem. However, after commit 5b18b5a7 ("block: delete part_round_stats and switch to less precise counting"), io_tick and time_in_queue are not updated until the end of the IO, so the avgqu-sz and %util columns of iostat read zero. Because time_in_queue is derived from the accumulated stat.nsecs, which is not suitable to change, %util is the better indicator when an IO hang occurs.

To fix io_ticks, use update_io_ticks() together with the inflight count to bring io_ticks up to date when diskstats_show() and part_stat_show() are called.

Fixes: 5b18b5a7 ("block: delete part_round_stats and switch to less precise counting")
Signed-off-by: Zhang Wensheng <zhangwensheng5@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
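A rough sketch of the idea, not the actual patch: update_io_ticks() and the inflight counters exist in the block layer, but the call site and signatures below are simplified assumptions. When the stats files are read, the elapsed time is credited to io_ticks if requests are still in flight, so %util reflects the hang.

            /* in the stats-reporting path, e.g. diskstats_show() */
            inflight = part_in_flight(part);
            if (inflight)
                    update_io_ticks(part, jiffies, true);   /* account the hang time so far */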
-
Submitted by Yang Yingliang
mainline inclusion
from mainline-v5.17-rc1
commit 50a0f3f5
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TF7T
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=50a0f3f55e382b313e7cbebdf8ccf1593296e16f

--------------------------------

Add missing unlock when try_module_get() fails in klp_enable_patch().

Fixes: 5ef3dd20 ("livepatch: Fix kobject refcount bug on klp_init_patch_early failure path")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Acked-by: David Vernet <void@manifault.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Link: https://lore.kernel.org/r/20211225025115.475348-1-yangyingliang@huawei.com
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by David Vernet
mainline inclusion
from mainline-v5.17-rc1
commit 5ef3dd20
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4TF7T
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5ef3dd20555e8e878ac390a71e658db5fd02845c

--------------------------------

When enabling a klp patch with klp_enable_patch(), klp_init_patch_early() is invoked to initialize the kobjects for the patch itself, as well as the 'struct klp_object' and 'struct klp_func' objects that comprise it. However, there are some error paths in klp_enable_patch() where some kobjects may have been initialized with kobject_init(), but an error code is still returned due to e.g. a 'struct klp_object' having a NULL funcs pointer.

In these paths, the initial reference of the kobject of the 'struct klp_patch' may never be released, along with one or more of its objects and their functions, as kobject_put() is not invoked on the cleanup path if klp_init_patch_early() returns an error code.

For example, if an object entry such as the following were added to the sample livepatch module's klp patch, it would cause the vmlinux klp_object, and its klp_func which updates 'cmdline_proc_show', to never be released:

  static struct klp_object objs[] = {
          {
                  /* name being NULL means vmlinux */
                  .funcs = funcs,
          },
          {
                  /* NULL funcs -- would cause reference leak */
                  .name = "kvm",
          }, { }
  };

Without this change, if CONFIG_DEBUG_KOBJECT is enabled and the sample klp patch is loaded, the kobjects (the patch, the vmlinux 'struct klp_object', and its func) are observed as initialized, but never released, in the dmesg log output. With the change, these kobject references no longer fail to be released, as the error case is properly handled before they are initialized.

Since commit 81fd525cedd9 ("[Huawei] livepatch: Add klp_{register,unregister}_patch for stop_machine model"), klp_register_patch() was derived from klp_enable_patch() and has a similar issue, so we also fix it in this patch.

Signed-off-by: David Vernet <void@manifault.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Acked-by: Miroslav Benes <mbenes@suse.cz>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Petr Mladek <pmladek@suse.com>
Conflicts:
	kernel/livepatch/core.c
Fixes: 0430f78b ("livepatch: Consolidate klp_free functions")
Fixes: c33e4283 ("livepatch/core: Allow implementation without ftrace")
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 11 Feb 2022, 6 commits
-
-
Submitted by Wei Li
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T1NF
CVE: NA

-------------------------------------------------

Move the kabi config entries to "General setup", and make CONFIG_KABI_SIZE_ALIGN_CHECKS depend on CONFIG_KABI_RESERVE.

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Wei Li
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T1NF
CVE: NA

-------------------------------------------------

When CONFIG_KABI_RESERVE=n && CONFIG_KABI_SIZE_ALIGN_CHECKS=y, with a kabi reserved padding replaced by KABI_USE(), we get the build error:

  include/linux/kabi.h:383:3: error: static assertion failed: \
  "include/linux/fs.h:2306: long aaa is larger than . \
  Disable CONFIG_KABI_SIZE_ALIGN_CHECKS if debugging."
    _Static_assert(sizeof(struct{_new;}) <= sizeof(struct{_orig;}), \
    ^~~~~~~~~~~~~~

In fact, the expansion of KABI_USE() when CONFIG_KABI_RESERVE=n is wrong; update _KABI_REPLACE() to fix this.

Signed-off-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
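For context, the failing check quoted in the error comes from a size assertion of the kind shown below. This is a hedged sketch of the general reserve/replace pattern, modelled on the assert text quoted above rather than on openEuler's exact kabi.h: _KABI_REPLACE() overlays the new member on the reserved padding in a union and asserts that it fits, which only makes sense if the reserved side does not expand to nothing when CONFIG_KABI_RESERVE=n.

    /* sketch only -- simplified from the pattern implied by the error message */
    #define KABI_SIZE_CHECK(_orig, _new)                                          \
            _Static_assert(sizeof(struct { _new; }) <= sizeof(struct { _orig; }), \
                           "new member is larger than the reserved space")

    #define _KABI_REPLACE(_orig, _new)                                            \
            union {                                                               \
                    _new;                                                         \
                    struct { _orig; };                                            \
            }

    /* e.g. KABI_USE(1, long aaa) might expand to
     * _KABI_REPLACE(<reserved slot 1>, long aaa) plus the size check. */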
-
Submitted by wangshouping
euleros inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T4W4?from=project-issue
CVE: NA

--------

Reserve some fields beforehand for parsing RSASSA-PSS style certificates.

---------

Signed-off-by: wangshouping <wangshouping@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Krupa Ramakrishnan
mainline inclusion
from mainline-v5.16-rc1
commit 54d032ce
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T0ML
CVE: NA

-----------------------------------------------

In build_zonelists(), when the fallback list is built for the nodes, the node load gets reinitialized during each iteration. This results in nodes with same distances occupying the same slot in different node fallback lists rather than appearing in the intended round-robin manner. This results in one node getting picked for allocation more compared to other nodes with the same distance.

As an example, consider a 4 node system with the following distance matrix:

  Node  0   1   2   3
  --------------------
  0     10  12  32  32
  1     12  10  32  32
  2     32  32  10  12
  3     32  32  12  10

For this case, the node fallback list gets built like this:

  Node  Fallback list
  ---------------------
  0     0 1 2 3
  1     1 0 3 2
  2     2 3 0 1
  3     3 2 0 1   <-- Unexpected fallback order

In the fallback list for nodes 2 and 3, the nodes 0 and 1 appear in the same order, which results in more allocations getting satisfied from node 0 compared to node 1.

The effect of this on remote memory bandwidth as seen by the stream benchmark is shown below:

  Case 1: Bandwidth from cores on nodes 2 & 3 to memory on nodes 0 & 1
          (numactl -m 0,1 ./stream_lowOverhead ... --cores <from 2, 3>)
  Case 2: Bandwidth from cores on nodes 0 & 1 to memory on nodes 2 & 3
          (numactl -m 2,3 ./stream_lowOverhead ... --cores <from 0, 1>)

  ----------------------------------------
                BANDWIDTH (MB/s)
  TEST          Case 1          Case 2
  ----------------------------------------
  COPY          57479.6         110791.8
  SCALE         55372.9         105685.9
  ADD           50460.6         96734.2
  TRIADD        50397.6         97119.1
  ----------------------------------------

The bandwidth drop in Case 1 occurs because most of the allocations get satisfied by node 0 as it appears first in the fallback order for both nodes 2 and 3.

This can be fixed by accumulating the node load in build_zonelists() rather than reinitializing it during each iteration. With this, the nodes with the same distance rightly get assigned in the round-robin manner. In fact this was how it was originally until commit f0c0b2b8 ("change zonelist order: zonelist order selection logic") dropped the load accumulation and resorted to initializing the load during each iteration.

While zonelist ordering was removed by commit c9bff3ee ("mm, page_alloc: rip out ZONELIST_ORDER_ZONE"), the change to the node load accumulation in build_zonelists() remained. So essentially this patch reverts back to the accumulated node load logic.

After this fix, the fallback order gets built like this:

  Node  Fallback list
  ------------------
  0     0 1 2 3
  1     1 0 3 2
  2     2 3 0 1
  3     3 2 1 0   <-- Note the change here

The bandwidth in Case 1 improves and matches Case 2 as shown below:

  ----------------------------------------
                BANDWIDTH (MB/s)
  TEST          Case 1          Case 2
  ----------------------------------------
  COPY          110438.9        110107.2
  SCALE         105930.5        105817.5
  ADD           97005.1         96159.8
  TRIADD        97441.5         96757.1
  ----------------------------------------

The correctness of the fallback list generation has been verified for the above node configuration, where node 3 starts as a memory-less node and comes up online only during memory hotplug.

[bharata@amd.com: Added changelog, review, test validation]

Link: https://lkml.kernel.org/r/20210830121603.1081-3-bharata@amd.com
Fixes: f0c0b2b8 ("change zonelist order: zonelist order selection logic")
Signed-off-by: Krupa Ramakrishnan <krupa.ramakrishnan@amd.com>
Co-developed-by: Sadagopan Srinivasan <Sadagopan.Srinivasan@amd.com>
Signed-off-by: Sadagopan Srinivasan <Sadagopan.Srinivasan@amd.com>
Signed-off-by: Bharata B Rao <bharata@amd.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peng Liu <liupeng256@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
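The code change described in the entry above is small: in build_zonelists(), the per-node penalty is accumulated instead of being overwritten on every iteration. The fragment below is a sketch paraphrased from that description (mainline build_zonelists() has additional bookkeeping not shown here):

            /* build_zonelists(): assign a decreasing load penalty per node */
            prev_node = local_node;
            load = nr_online_nodes;
            while ((node = find_next_best_node(local_node, &used_mask)) >= 0) {
                    if (node_distance(local_node, node) !=
                        node_distance(local_node, prev_node))
                            node_load[node] += load;   /* was: node_load[node] = load */
                    node_order[nr_nodes++] = node;
                    prev_node = node;
                    load--;
            }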
-
Submitted by Bharata B Rao
mainline inclusion
from mainline-v5.16-rc1
commit 6cf25392
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4T0ML
CVE: NA

-------------------------------------------------

Patch series "Fix NUMA nodes fallback list ordering".

For a NUMA system that has multiple nodes at same distance from other nodes, the fallback list generation prefers same node order for them instead of round-robin thereby penalizing one node over others. This series fixes it. More description of the problem and the fix is present in the patch description.

This patch (of 2):

Print information message about the allocation fallback order for each NUMA node during boot. No functional changes here. This makes it easier to illustrate the problem in the node fallback list generation, which the next patch fixes.

Link: https://lkml.kernel.org/r/20210830121603.1081-1-bharata@amd.com
Link: https://lkml.kernel.org/r/20210830121603.1081-2-bharata@amd.com
Signed-off-by: Bharata B Rao <bharata@amd.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Krupa Ramakrishnan <krupa.ramakrishnan@amd.com>
Cc: Sadagopan Srinivasan <Sadagopan.Srinivasan@amd.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peng Liu <liupeng256@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zheng Zengkai
driver inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SJLU
CVE: NA

-----------------------------------------

Enable following configs in arm64 openeuler_defconfig for Kunpeng platform:

  CONFIG_PCIE_EDR=y
  CONFIG_HISI_PCIE_PMU=m
  CONFIG_MLX5_ESWITCH=y

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Chao Liu <liuchao173@huawei.com>
Acked-by: Xinwei Kong <kong.kongxinwei@hisilicon.com>
Reviewed-by: Yicong Yang <yangyicong@huawei.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Reviewed-by: Kai Liu <kai.liu@suse.com>
Reviewed-by: Yin Xiujiang <yinxiujiang@kylinos.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 10 Feb 2022, 25 commits
-
-
Submitted by Paul E. McKenney
mainline inclusion
from mainline-v5.12-rc1
commit c26165ef
category: bugfix
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SV19
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c26165efac41bce0c7764262b21f5897e771f34f

-------------------------------------------------------------------------

Tasks Trace RCU uses irq_work_queue() to safely awaken its grace-period kthread, so this commit therefore causes the TASKS_TRACE_RCU Kconfig option to select the IRQ_WORK Kconfig option.

Reported-by: kernel test robot <lkp@intel.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Bin Wang
euleros inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4SJBG?from=project-issue
CVE: NA

---------------------------

If other cpus go offline before handling the crash NMI, waiting_for_crash_ipi cannot be decreased to 0, and the current cpu will wait a full second. So break out of the wait loop once all other cpus are offline.

Signed-off-by: Bin Wang <wangbin224@huawei.com>
Reviewed-by: luo chunsheng <luochunsheng@huawei.com>
Reviewed-by: Yang Jihong <yangjihong1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
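A hedged sketch of the described change, in the style of the x86 crash-NMI shootdown wait loop; waiting_for_crash_ipi is the counter named in the log, while the loop shape itself is an assumption rather than the exact diff:

            /* wait for other cpus to acknowledge the crash NMI */
            msecs = 1000;
            while (atomic_read(&waiting_for_crash_ipi) > 0 && msecs--) {
                    if (num_online_cpus() == 1)
                            break;          /* everyone else already went offline */
                    mdelay(1);
            }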
-
Submitted by Naoya Horiguchi
mainline inclusion
from mainline-v5.16-rc7
commit e37e7b0b
category: bugfix
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SJ2V
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e37e7b0b3bd52ec4f8ab71b027bcec08f57f1b3b

--------------------------------

When a memory error hits a tail page of a free hugepage, __page_handle_poison() is expected to be called to isolate the error in 4kB unit, but it's not called due to the outdated if-condition in memory_failure_hugetlb(). This loses the chance to isolate the error in the finer unit, so it's not optimal. Drop the condition.

This "(p != head && TestSetPageHWPoison(head))" condition is based on the old semantics of PageHWPoison on hugepage (where the PG_hwpoison flag was set on the subpage), so it's not necessary any more.

By getting to set PG_hwpoison on the head page for hugepages, concurrent error events on different subpages in a single hugepage can be prevented by TestSetPageHWPoison(head) at the beginning of memory_failure_hugetlb(). So dropping the condition should not reopen the race window originally mentioned in commit b985194c ("hwpoison, hugetlb: lock_page/unlock_page does not match for handling a free hugepage").

[naoya.horiguchi@linux.dev: fix "HardwareCorrupted" counter]
  Link: https://lkml.kernel.org/r/20211220084851.GA1460264@u2004

Link: https://lkml.kernel.org/r/20211210110208.879740-1-naoya.horiguchi@linux.dev
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Reported-by: Fei Luo <luofei@unicloud.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: <stable@vger.kernel.org> [5.14+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Bin Wang <wangbin224@huawei.com>
Reviewed-by: luo chunsheng <luochunsheng@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Stefan Berger
mainline inclusion
from mainline-v5.13-rc1
commit d1a303e8
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4S9XR
CVE: NA

--------------------------------

Detect whether a key is an sm2 type of key by its OID in the parameters array rather than assuming that everything under OID_id_ecPublicKey is sm2, which is not the case.

Cc: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Tested-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: GUO Zihua <guozihua@huawei.com>
Reviewed-by: weiyang wang <wangweiyang2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Nico Pache
mainline inclusion
from mainline-v5.15-rc1
commit b346075f
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I4S7MA
CVE: NA

--------------------------------

When compiling with -Werror, cc1 will warn that 'zone_id' may be used uninitialized in this function. Initialize zone_id as 0.

It's safe to assume that if the code reaches this point it has at least one numa node with memory, so there is no need for an assertion before init_unavailable_range.

Link: https://lkml.kernel.org/r/20210716210336.1114114-1-npache@redhat.com
Fixes: 122e093c ("mm/page_alloc: fix memory map initialization for descending nodes")
Signed-off-by: Nico Pache <npache@redhat.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Baokun Li
mainline inclusion
from mainline-5.17-rc1
commit 1622ed7d
category: bugfix
bugzilla: 185873, https://gitee.com/openeuler/kernel/issues/I4MTTR
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1622ed7d0743201293094162c26019d2573ecacb

-------------------------------------------------

When we pass a negative value to the proc_doulongvec_minmax() function, the function returns 0, but the corresponding interface value does not change.

We can easily reproduce this problem with the following commands:

  cd /proc/sys/fs/epoll
  echo -1 > max_user_watches; echo $?; cat max_user_watches

This function requires a non-negative number to be passed in, so when a negative number is passed in, -EINVAL is returned.

Link: https://lkml.kernel.org/r/20211220092627.3744624-1-libaokun1@huawei.com
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reported-by: Hulk Robot <hulkci@huawei.com>
Acked-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Chen Jiahao
hulk inclusion
category: bugfix
bugzilla: 51408 https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

In commit e29beeac53c8 ("arm64: uaccess: remove set_fs()"), thread_info->addr_limit and the macro USER_DS have been removed and replaced by the macro TASK_SIZE_MAX. However, the address limit given by TASK_SIZE_MAX is incorrect in compat mode; see commit 2ef73d5148e ("[Huawei] arm64: fix current_thread_info()->addr_limit setup") for detail.

Fix the problem by modifying the TASK_SIZE_MAX definition in compat mode.

Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Chang Liao <liaochang1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit 701f4906
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

Now that the PAN toggling has been removed, the only user of __system_matches_cap() is has_generic_auth(), which is only built when CONFIG_ARM64_PTR_AUTH is selected, and Qian reports that this results in a build-time warning when CONFIG_ARM64_PTR_AUTH is not selected:

  | arch/arm64/kernel/cpufeature.c:2649:13: warning: '__system_matches_cap' defined but not used [-Wunused-function]
  | static bool __system_matches_cap(unsigned int n)
  |             ^~~~~~~~~~~~~~~~~~~~

It's tricky to restructure things to prevent this, so let's mark __system_matches_cap() as __maybe_unused, as we used to do for the other user of __system_matches_cap() which we just removed.

Reported-by: Qian Cai <qcai@redhat.com>
Suggested-by: Qian Cai <qcai@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20201203152403.26100-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Robin Murphy
mainline inclusion
from mainline-5.14-rc2
commit 295cf156
category: bugfix
bugzilla: 55085 https://gitee.com/openeuler/kernel/issues/I4DDEL https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

Al reminds us that the usercopy API must only return complete failure if absolutely nothing could be copied. Currently, if userspace does something silly like giving us an unaligned pointer to Device memory, or a size which overruns MTE tag bounds, we may fail to honour that requirement when faulting on a multi-byte access even though a smaller access could have succeeded.

Add a mitigation to the fixup routines to fall back to a single-byte copy if we faulted on a larger access before anything has been written to the destination, to guarantee making *some* forward progress. We needn't be too concerned about the overall performance since this should only occur when callers are doing something a bit dodgy in the first place. Particularly broken userspace might still be able to trick generic_perform_write() into an infinite loop by targeting write() at an mmap() of some read-only device register where the fault-in load succeeds but any store synchronously aborts such that copy_to_user() is genuinely unable to make progress, but, well, don't do that...

CC: stable@vger.kernel.org
Reported-by: Chen Huang <chenhuang5@huawei.com>
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/dc03d5c675731a1f24a62417dba5429ad744234e.1626098433.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit 1517c4fa
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

Now that arm64 no longer uses UAO, remove the vestigial feature detection code and Kconfig text.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-13-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit 7cf283c7
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

Some code (e.g. futex) needs to make privileged accesses to userspace memory, and uses uaccess_{enable,disable}_privileged() in order to permit this. All other uaccess primitives use LDTR/STTR, and never need to toggle PAN.

Remove the redundant PAN toggling.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-12-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit b5a5a01d
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

Now that set_fs() is gone, addr_limit_user_check() is redundant. Remove the checks and associated thread flag.

To ensure that _TIF_WORK_MASK can be used as an immediate value in an AND instruction (as it is in `ret_to_user`), TIF_MTE_ASYNC_FAULT is renumbered to keep the constituent bits of _TIF_WORK_MASK contiguous.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-11-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Jiahao Chen <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit 3d2403fd
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

Now that the uaccess primitives don't take addr_limit into account, we have no need to manipulate this via set_fs() and get_fs(). Remove support for these, along with some infrastructure this renders redundant.

We no longer need to flip UAO to access kernel memory under KERNEL_DS, and head.S unconditionally clears UAO for all kernel configurations via an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO, nor do we need to context-switch it. However, we still need to adjust PAN during SDEI entry.

Masking of __user pointers no longer needs to use the dynamic value of addr_limit, and can use a constant derived from the maximum possible userspace task size. A new TASK_SIZE_MAX constant is introduced for this, which is also used by core code. In configurations supporting 52-bit VAs, this may include a region of unusable VA space above a 48-bit TTBR0 limit, but never includes any portion of TTBR1.

Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and KERNEL_DS were inclusive limits, and is converted to a mask by subtracting one.

As the SDEI entry code repurposes the otherwise unnecessary pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted context, for now we rename that to pt_regs::sdei_ttbr1. In future we can consider factoring that out.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Conflicts:
	arch/arm64/kernel/asm-offsets.c
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Jiahao Chen <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit 7b90dc40
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

Now the uaccess primitives use LDTR/STTR unconditionally, the uao_{ldp,stp,user_alternative} asm macros are misnamed, and have a redundant argument. Let's remove the redundant argument and rename these to user_{ldp,stp,ldst} respectively to clean this up.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-9-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit fc703d80
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

This patch separates arm64's user and kernel memory access primitives into distinct routines, adding new __{get,put}_kernel_nofault() helpers to access kernel memory, upon which core code builds larger copy routines.

The kernel access routines (using LDR/STR) are not affected by PAN (when legitimately accessing kernel memory), nor are they affected by UAO. Switching to KERNEL_DS may set UAO, but this does not adversely affect the kernel access routines.

The user access routines (using LDTR/STTR) are not affected by PAN (when legitimately accessing user memory), but are affected by UAO. As these are only legitimate to use under USER_DS with UAO clear, this should not be problematic.

Routines performing atomics to user memory (futex and deprecated instruction emulation) still need to transiently clear PAN, and these are left as-is. These are never used on kernel memory.

Subsequent patches will refactor the uaccess helpers to remove redundant code, and will also remove the redundant PAN/UAO manipulation.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-8-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit f253d827
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

As a step towards implementing __{get,put}_kernel_nofault(), this patch splits most user-memory specific logic out of __{get,put}_user(), with the memory access and fault handling in new __{raw_get,put}_mem() helpers.

For now the LDR/LDTR patching is left within the *get_mem() helpers, and will be removed in a subsequent patch.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit 9e94fdad
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

Currently __copy_user_flushcache() open-codes raw_copy_from_user(), and doesn't use uaccess_mask_ptr() on the user address. Let's have it call raw_copy_from_user(), which is both a simplification and ensures that user pointers are masked under speculation.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit 923e1e7d
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

We currently have many uaccess_*{enable,disable}*() variants, which subsequent patches will cut down as part of removing set_fs() and friends. Once this simplification is made, most uaccess routines will only need to ensure that the user page tables are mapped in TTBR0, as is currently dealt with by uaccess_ttbr0_{enable,disable}().

The existing uaccess_{enable,disable}() routines ensure that user page tables are mapped in TTBR0, and also disable PAN protections, which is necessary to be able to use atomics on user memory, but also permits unrelated privileged accesses to access user memory.

As a preparatory step, let's rename uaccess_{enable,disable}() to uaccess_{enable,disable}_privileged(), highlighting this caveat and discouraging wider misuse. Subsequent patches can reuse the uaccess_{enable,disable}() naming for the common case of ensuring the user page tables are mapped in TTBR0.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit 2376e75c
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

In preparation for removing addr_limit and set_fs() we must decouple the SDEI PAN/UAO manipulation from the uaccess code, and explicitly reinitialize these as required.

SDEI enters the kernel with a non-architectural exception, and prior to the most recent revision of the specification (ARM DEN 0054B), PSTATE bits (e.g. PAN, UAO) are not manipulated in the same way as for architectural exceptions. Notably, older versions of the spec can be read ambiguously as to whether PSTATE bits are inherited unchanged from the interrupted context or whether they are generated from scratch, with TF-A doing the latter.

We have three cases to consider:

1) The existing TF-A implementation of SDEI will clear PAN and clear UAO (along with other bits in PSTATE) when delivering an SDEI exception.

2) In theory, implementations of SDEI prior to revision B could inherit PAN and UAO (along with other bits in PSTATE) unchanged from the interrupted context. However, in practice such implementations do not exist.

3) Going forward, new implementations of SDEI must clear UAO, and depending on SCTLR_ELx.SPAN must either inherit or set PAN.

As we can ignore (2) we can assume that upon SDEI entry, UAO is always clear, though PAN may be clear, inherited, or set per SCTLR_ELx.SPAN. Therefore, we must explicitly initialize PAN, but do not need to do anything for UAO.

Considering what we need to do:

* When set_fs() is removed, force_uaccess_begin() will have no HW side-effects. As this only clears UAO, which we can assume has already been cleared upon entry, this is not a problem. We do not need to add code to manipulate UAO explicitly.

* PAN may be cleared upon entry (in case 1 above), so where a kernel is built to use PAN and this is supported by all CPUs, the kernel must set PAN upon entry to ensure expected behaviour.

* PAN may be inherited from the interrupted context (in case 3 above), and so where a kernel is not built to use PAN or where PAN support is not uniform across CPUs, the kernel must clear PAN to ensure expected behaviour.

This patch reworks the SDEI code accordingly, explicitly setting PAN to the expected state in all cases. To cater for the cases where the kernel does not use PAN or this is not uniformly supported by hardware we add a new cpu_has_pan() helper which can be used regardless of whether the kernel is built to use PAN.

The existing system_uses_ttbr0_pan() is redefined in terms of system_uses_hw_pan() both for clarity and as a minor optimization when HW PAN is not selected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Mark Rutland
mainline inclusion
from mainline-v5.11-rc1
commit a0ccf2ba
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

The SDEI support code is split across arch/arm64/ and drivers/firmware/; largely this is split so that the arch-specific portions are under arch/arm64, and the management logic is under drivers/firmware/. However, exception entry fixups are currently under drivers/firmware.

Let's move the exception entry fixups under arch/arm64/. This de-clutters the management logic, and puts all the arch-specific portions in one place. Doing this also allows the fixups to be applied earlier, so things like PAN and UAO will be in a known good state before we run other logic. This will also make subsequent refactoring easier.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201202131558.39270-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zhen Lei
hulk inclusion
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

This reverts commit 979046bd.

The macro 'USER_DS' and the related assembly code were deleted by commit 3d2403fd ("arm64: uaccess: remove set_fs()"), so the problem fixed by the reverted commit no longer exists.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Jiahao Chen <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zhen Lei
hulk inclusion
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

This reverts commit 97cb3288.

The macro 'USER_DS' and the related assembly code were deleted by commit 3d2403fd ("arm64: uaccess: remove set_fs()"), so the problem fixed by the reverted commit no longer exists.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Jiahao Chen <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Zhen Lei
hulk inclusion
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

This reverts commit 8d4f091c.

This is a temporary rollback; the commit will be backported again later so that other patches can be backported without conflicts.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Nathan Chancellor
mainline inclusion
from mainline-v5.12-rc8
commit 22315a22
category: performance
bugzilla: 51796 https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

After commit 2decad92 ("arm64: mte: Ensure TIF_MTE_ASYNC_FAULT is set atomically"), LLVM's integrated assembler fails to build entry.S:

  <instantiation>:5:7: error: expected assembly-time absolute expression
   .org . - (664b-663b) + (662b-661b)
         ^
  <instantiation>:6:7: error: expected assembly-time absolute expression
   .org . - (662b-661b) + (664b-663b)
         ^

The root cause is LLVM's assembler has a one-pass design, meaning it cannot figure out these instruction lengths when the .org directive is outside of the subsection that they are in, which was changed by the .arch_extension directive added in the above commit.

Apply the same fix from commit 966a0acc ("arm64/alternatives: move length validation inside the subsection") to the alternative_endif macro, shuffling the .org directives so that the length validation will always happen in the same subsections. alternative_insn has not shown any issue yet, but it appears that it could have the same issue in the future, so just preemptively change it.

Fixes: f7b93d42 ("arm64/alternatives: use subsections for replacement sequences")
Cc: <stable@vger.kernel.org> # 5.8.x
Link: https://github.com/ClangBuiltLinux/linux/issues/1347
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lore.kernel.org/r/20210414000803.662534-1-nathan@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Weilong Chen <chenweilong@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Will Deacon
mainline inclusion
from mainline-v5.11-rc1
commit 7cda23da
category: performance
bugzilla: https://e.gitee.com/open_euler/issues/list?issue=I4SCW7
CVE: NA

-------------------------------------------------------------------------

asm/alternative.h contains both the macros needed to use alternatives, as well as the type definitions and function prototypes for applying them. Split the header in two, so that alternatives can be used from core header files such as linux/compiler.h without the risk of circular includes.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-