- 31 August 2021, 19 commits
-
Committed by Hao Chen

mainline inclusion
from mainline-master
commit 98fa7525
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=98fa7525d36091da9eeafb94f98bf9bbb3d6748e

----------------------------------------------------------------------

Add devlink reload support for the HNS3 ethernet PF driver.

Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Yufeng Mo

mainline inclusion
from mainline-master
commit bd85e55b
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bd85e55bfb959faad17c470384a1a90caa6d157d

----------------------------------------------------------------------

Add devlink get info support for the HNS3 ethernet VF driver.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Yufeng Mo

mainline inclusion
from mainline-master
commit 26fbf511
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=26fbf511693e7dead8f1a6b497a53d58966008bf

----------------------------------------------------------------------

Add devlink get info support for the HNS3 ethernet PF driver.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
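For orientation, here is a minimal sketch of what such a devlink .info_get callback looks like against the v5.10-era devlink API. The callback name and the hard-coded version string are illustrative assumptions; the real hns3 handler queries the firmware for its running version, and this fragment builds only inside a kernel tree.

#include <net/devlink.h>

/* Sketch only: name and version string are assumptions. */
static int hns3_example_info_get(struct devlink *devlink,
                                 struct devlink_info_req *req,
                                 struct netlink_ext_ack *extack)
{
        int ret;

        ret = devlink_info_driver_name_put(req, "hns3");
        if (ret)
                return ret;

        /* DEVLINK_INFO_VERSION_GENERIC_FW is the generic "fw" key
         * reported by "devlink dev info". */
        return devlink_info_version_running_put(req,
                                                DEVLINK_INFO_VERSION_GENERIC_FW,
                                                "1.0.0");
}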
-
Committed by Yufeng Mo

mainline inclusion
from mainline-master
commit cd624299
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cd6242991d2e3990c828a7c2215d2d3321f1da39

----------------------------------------------------------------------

Add devlink register support for the HNS3 ethernet VF driver.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Yufeng Mo

mainline inclusion
from mainline-master
commit b741269b
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b741269b275953786832805df329851299ab4de7

----------------------------------------------------------------------

Add devlink register support for the HNS3 ethernet PF driver.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
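A rough sketch of the registration flow these two register entries add, assuming the v5.10-era devlink API (where devlink_register() still takes the parent device). The function name and the priv layout are illustrative, not the driver's actual symbols; the .info_get and reload hooks from the entries above would be filled into the ops table.

#include <net/devlink.h>

struct hns3_example_priv {
        void *hdev;                     /* assumed back-pointer to the driver */
};

static const struct devlink_ops hns3_example_devlink_ops = {
        /* .info_get / .reload_down / .reload_up are wired up by the
         * follow-up patches listed above. */
};

/* Called from probe; returns NULL on failure in this sketch. */
static struct devlink *hns3_example_devlink_init(struct device *dev)
{
        struct devlink *devlink;

        devlink = devlink_alloc(&hns3_example_devlink_ops,
                                sizeof(struct hns3_example_priv));
        if (!devlink)
                return NULL;

        if (devlink_register(devlink, dev)) {
                devlink_free(devlink);
                return NULL;
        }
        return devlink;
}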
-
Committed by Hao Chen

mainline inclusion
from mainline-master
commit 6149ab60
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6149ab604c80a20e5741bea6c90583edde15c488

----------------------------------------------------------------------

Add a file to document devlink support for the hns3 driver, which now supports devlink info and devlink reload.

Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

Enable the kernel hot upgrade features by default:

1. Add the pin memory method for checkpoint and restore:
   CONFIG_PIN_MEMORY=y
   CONFIG_PIN_MEMORY_DEV=m
2. Add the pid reserve method for checkpoint and restore:
   CONFIG_PID_RESERVE=y
3. Add the cpu park method:
   CONFIG_ARM64_CPU_PARK=y
4. Add quick kexec support for the kernel:
   CONFIG_QUICK_KEXEC=y
5. Add legacy pmem support for arm64:
   CONFIG_ARM64_PMEM_RESERVE=y
   CONFIG_ARM64_PMEM_LEGACY_DEVICE=y
   CONFIG_PMEM_LEGACY=m

Signed-off-by: Sang Yan <sangyan@huawei.com>
Signed-off-by: Jingxian He <hejingxian@huawei.com>
Signed-off-by: Zhu Ling <zhuling8@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Jingxian He

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

We record the pids of the dump tasks in the reserved memory and reserve those pids before the init task starts. In the recovery process, the reserved pids are freed and reallocated for the restored tasks.

Signed-off-by: Jingxian He <hejingxian@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Jingxian He

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

We can use the checkpoint-and-restore-in-userspace (criu) method to dump and restore tasks when updating the kernel. Currently, criu needs to dump all the memory data of the tasks to files; when the memory size is very large (more than 1G), dumping the data takes a long time (more than 1 min).

By pinning the memory data of the tasks and collecting the corresponding physical page mapping info in the checkpoint process, we can remap the physical pages to the restored tasks after upgrading the kernel. This pin memory method can restore the task data within one second.

The pin memory area info is saved in the reserved memblock, which remains usable across the kernel update.

The pin memory driver provides the following ioctl commands for criu (a usage sketch follows this entry):

1) SET_PIN_MEM_AREA: set a pin memory area, which can be remapped to the restored task.
2) CLEAR_PIN_MEM_AREA: clear the pin memory area info, which lets the user reset the pinned data.
3) REMAP_PIN_MEM_AREA: remap the pages of the pin memory to the restored task.

Signed-off-by: Jingxian He <hejingxian@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
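A hypothetical user-space checkpoint/restore sequence against the three ioctls above. The device node name, the area struct layout, the magic number and the ioctl encodings are all assumptions made for illustration; the authoritative definitions live in the driver's uapi header.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct pin_mem_area {                    /* assumed layout */
        uint64_t virt_start;
        uint64_t virt_end;
        int      pid;
};

#define PIN_MEM_MAGIC      0x59          /* assumed magic */
#define SET_PIN_MEM_AREA   _IOW(PIN_MEM_MAGIC, 1, struct pin_mem_area)
#define CLEAR_PIN_MEM_AREA _IO(PIN_MEM_MAGIC, 2)
#define REMAP_PIN_MEM_AREA _IOW(PIN_MEM_MAGIC, 3, struct pin_mem_area)

int main(void)
{
        struct pin_mem_area pma = {
                .virt_start = 0x400000, .virt_end = 0x800000, .pid = 1234,
        };
        int fd = open("/dev/pinmem", O_RDWR);   /* assumed node name */

        if (fd < 0)
                return 1;
        /* Checkpoint: pin the task's pages so their physical mapping
         * survives the kernel update. */
        if (ioctl(fd, SET_PIN_MEM_AREA, &pma) < 0)
                perror("SET_PIN_MEM_AREA");
        /* Restore (after the new kernel boots): remap the pinned
         * pages into the restored task. */
        if (ioctl(fd, REMAP_PIN_MEM_AREA, &pma) < 0)
                perror("REMAP_PIN_MEM_AREA");
        close(fd);
        return 0;
}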
-
Committed by ZhuLing

hulk inclusion
category: feature
bugzilla: 48159
CVE: NA

------------------------------

Register pmem on arm64: use memmap (memmap=nn[KMG]!ss[KMG]) to reserve memory and the e820 function (driver/nvdimm/e820.c) to register persistent memory on arm64. When the kernel is restarted or updated, the data in PMEM is not lost and can be loaded faster. This is a generic feature.

driver/nvdimm/e820.c: this file scans "iomem_resource" and takes advantage of the nvdimm resource discovery mechanism by registering a resource named "Persistent Memory (legacy)". It does not depend on the architecture. We will push the feature to the Linux kernel community and discuss renaming the file, because people have the mistaken notion that e820.c depends on x86.

If you want to use this feature, do as follows:
1. Reserve memory: add memmap to reserve memory in grub.cfg, memmap=nn[KMG]!ss[KMG], e.g. memmap=100K!0x1a0000000.
2. Insmod nd_e820.ko: modprobe nd_e820.
3. Check the pmem device in /dev, e.g. /dev/pmem0.

Signed-off-by: ZhuLing <zhuling8@huawei.com>
Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

Introduce a CPU PARK feature to save the time spent taking CPUs down and bringing them up during kexec: each CPU may take 250ms to go down and 30ms to come up. As a result, for 128 cores, it costs more than 30 seconds to cycle the CPUs during kexec. Think about 256 cores and more.

CPU PARK is a state in which a CPU stays powered on in a spin loop, polling for exit chances, such as a write to its exit address.

A block of memory is reserved and filled with the cpu park text section, the exit address and the park-magic-flag of each CPU; in this implementation, one page is reserved per CPU core. CPUs go into the park state instead of going down in machine_shutdown(), and come out of the park state in smp_init() instead of being brought up.

One cpu park section in the pre-reserved memory block:

+--------------+
+ exit address +
+--------------+
+  park magic  +
+--------------+
+  park codes  +
+      .       +
+      .       +
+      .       +
+--------------+

Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
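The per-CPU section can be pictured as a C struct; this is a layout sketch mirroring the diagram above. The one-page size and the field order come from the description, while the exact field sizes and offsets of the real implementation are assumptions.

#include <stdint.h>

#define PARK_SECTION_SIZE 4096          /* one page per CPU core */

struct cpu_park_section {
        uint64_t exit_addr;             /* polled in the spin loop; a write
                                         * here releases the parked CPU */
        uint64_t park_magic;            /* park-magic-flag of this CPU */
        char     park_code[PARK_SECTION_SIZE - 16]; /* relocated park text */
};

_Static_assert(sizeof(struct cpu_park_section) == PARK_SECTION_SIZE,
               "one park section occupies exactly one page");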
-
Committed by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

Reserve memory for quick kexec on arm64 with the cmdline parameter "quickkexec=".

Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

In normal kexec, relocating the kernel may cost 5 ~ 10 seconds, because all segments have to be copied from vmalloced memory to kernel boot memory with the MMU disabled.

We introduce quick kexec to save that copying time, just like kdump (kexec on crash), by using the reserved memory "Quick Kexec".

To enable it, reserve the memory and set up quick_kexec_res. The quick kimage is constructed the same way as the crash kernel image, and then all segments of the kimage are simply copied to the reserved memory.

We also add this support to the kexec_load syscall, using the KEXEC_QUICK flag (a hypothetical caller is sketched after this entry).

Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Kuohai Xu <xukuohai@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
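A hypothetical caller of the extended syscall. The KEXEC_QUICK value below is a placeholder (the real one comes from the patched uapi header), and the segment setup a real loader performs is elided.

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/kexec.h>

#ifndef KEXEC_QUICK
#define KEXEC_QUICK 0x00000004   /* placeholder; real value is in the
                                  * patched uapi kexec header */
#endif

int main(void)
{
        /* A real loader fills entry/nr_segments/segments from the
         * kernel image; passing zeros here only demonstrates the flag. */
        if (syscall(SYS_kexec_load, 0UL /* entry */, 0UL /* nr_segments */,
                    NULL /* segments */,
                    KEXEC_ARCH_DEFAULT | KEXEC_QUICK) < 0)
                perror("kexec_load");
        return 0;
}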
-
Committed by Zheng Zengkai

euleros inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I42DAP
CVE: NA

----------------------------------

Some manufacturers want to enable the SMMU in order to use the virtualization pass-through feature on servers with the 3408iMR/3416iMR RAID card installed.

Enable CONFIG_SMMU_BYPASS_DEV by default in openeuler_defconfig.

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by luochunsheng

euleros inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I42DAP
CVE: NA

----------------------------------

arm64 cannot support the virtualization pass-through feature when using the 3408iMR/3416iMR RAID card. To solve the problem, we add two functions:
1. We prepare an init bypass level-2 entry for specific devices when the SMMU uses a 2-level streamid entry.
2. We add the smmu.bypassdev cmdline to allow the SMMU to bypass the streams of some specific devices.

Usage:
3408iMR RAID: smmu.bypassdev=0x1000:0x17
3416iMR RAID: smmu.bypassdev=0x1000:0x15

Signed-off-by: luochunsheng <luochunsheng@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Jian Shen

mainline inclusion
from mainline-v5.14-rc3
commit bbfd4506
category: bugfix
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bbfd4506f962e7e6fff8f37f017154a3c3791264

----------------------------------------------------------------------

Currently, the VF doesn't enable rx VLAN offload when initializing; the PF does it for the VFs. If the user disables rx VLAN offload for the VF with ethtool -K and then reloads the VF driver, the rx VLAN offload state may become inconsistent between hardware and software. Fix it by enabling rx VLAN offload when the VF initializes.

Fixes: e2cb1dec ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Jian Shen

mainline inclusion
from mainline-master
commit 184cd221
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=184cd221a86321e53df9389c4b35a247b60c1e77

----------------------------------------------------------------------

Due to a hardware limitation, the port VLAN filter is port level and takes effect for all functions of the port. So if port VLAN bypass is not supported, it's necessary to disable the port VLAN filter in order to support function-level VLAN filter control.

Fixes: 2ba30662 ("net: hns3: add support for modify VLAN filter state")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peng Li

mainline inclusion
from mainline-master
commit 4671042f
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4671042f1ef0d37137884811afcc4ae67685ce07

----------------------------------------------------------------------

When the VF needs a response from the PF, the VF waits (1us - 1s) to receive it; otherwise the wait times out and the VF action fails. If the VF does not receive the response to its 1st action because of a timeout, the 2nd action may receive the response meant for the 1st action and get incorrect response data. The VF must receive the right response from the PF, or unexpected errors occur.

This patch adds a match_id to check the mailbox responses from the PF to the VF, to make sure the VF gets the right response:
1. The message sent from the VF is labelled with a match_id, a unique 16-bit non-zero value.
2. The response sent from the PF is labelled with the match_id taken from the request.
3. The VF uses the match_id to match request and response messages.

This scheme depends on the PF driver supporting match_id; if the PF driver doesn't support it, the VF falls back to the original scheme.

Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Chengwen Feng

mainline inclusion
from mainline-master
commit 1b713d14
category: bugfix
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1b713d14dc3c077ec45e65dab4ea01a8bc41b8c1

----------------------------------------------------------------------

Currently, the synchronous mailbox communication between VF and PF relies on the following fields:
1. origin_mbx_msg, which combines the message code and subcode and is used to match request and response.
2. received_resp, which indicates whether a response was received.

A mismatch is possible in the following situation:
1. The VF sends message A with code=1 subcode=1.
2. The PF is blocked for about 500ms while processing message A.
3. The VF detects a timeout for message A because it can't get the response within 500ms.
4. The VF sends message B with code=1 subcode=1, identical to message A.
5. The PF finishes processing the first message A and sends its response to the VF.
6. The VF matches that response to message B, because the code/subcode is the same.

This leads to a mismatch of request and response. To fix the above bug, we use the following scheme (a sketch follows this entry):
1. The message sent from the VF is labelled with a match_id, a unique 16-bit non-zero value.
2. The response sent from the PF is labelled with the match_id taken from the request.
3. The VF uses the match_id to match request and response messages.

As for the PF driver, it only needs to copy the match_id from the request to the response.

Fixes: dde1a86e ("net: hns3: Add mailbox support to PF driver")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
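The scheme shared by this entry and the previous one reduces to a small matching rule. Here is a self-contained sketch with an illustrative message layout; the real driver carries the match_id inside its mailbox descriptor.

#include <stdbool.h>
#include <stdint.h>

struct mbx_msg {
        uint8_t  code;
        uint8_t  subcode;
        uint16_t match_id;      /* unique non-zero id per request */
};

static uint16_t next_match_id;

static uint16_t new_match_id(void)
{
        if (++next_match_id == 0)       /* keep the id non-zero */
                next_match_id = 1;
        return next_match_id;
}

/* With match_id, a late response to a timed-out request no longer
 * matches a newer in-flight request with the same code/subcode. */
static bool resp_matches(const struct mbx_msg *req, const struct mbx_msg *resp)
{
        if (req->match_id && resp->match_id)
                return req->match_id == resp->match_id;
        /* Fallback for a peer without match_id support: the original
         * code/subcode scheme. */
        return req->code == resp->code && req->subcode == resp->subcode;
}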
-
- 30 July 2021, 1 commit
-
Committed by Eric Sandeen

stable inclusion
from stable-5.10.52
commit 174c34d9cda1b5818419b8f5a332ced10755e52f
bugzilla: 331
CVE: CVE-2021-33909

---------------------------------------------------------------

commit 8cae8cd8 upstream.

There is no reasonable need for a buffer larger than this, and it avoids int overflow pitfalls.

Fixes: 058504ed ("fs/seq_file: fallback to vmalloc allocation")
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Reported-by: Qualys Security Advisory <qsa@qualys.com>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 28 July 2021, 18 commits
-
Committed by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

Due to limited hardware resources, the number of ECMDQs may be less than the number of cores. If the number of ECMDQs is greater than the number of NUMA nodes, ensure that each node has at least one ECMDQ (a distribution sketch follows this entry). This matters because ECMDQ queue memory is requested from the NUMA node where the queue resides, which may result in better command filling and insertion performance.

The current ECMDQ implementation reuses the command insertion function arm_smmu_cmdq_issue_cmdlist() of the normal CMDQ. This function already supports concurrent command insertion from multiple cores.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
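The "at least one ECMDQ per node" spread reduces to simple arithmetic. A standalone sketch, assuming nr_ecmdq >= nr_nodes; for example, 10 ECMDQs over 4 nodes gives 3, 3, 2 and 2.

/* How many of nr_ecmdq queues land on a given NUMA node when every
 * node must get at least one (illustrative helper, not kernel code). */
static int ecmdqs_on_node(int nr_ecmdq, int nr_nodes, int node)
{
        int base  = nr_ecmdq / nr_nodes;   /* each node's guaranteed share */
        int extra = nr_ecmdq % nr_nodes;   /* remainder spread over the
                                            * first 'extra' nodes */

        return base + (node < extra ? 1 : 0);
}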
-
Committed by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

When a core exclusively owns an ECMDQ, contention with other cores does not need to be considered during command insertion. Therefore, we can delete the part of arm_smmu_cmdq_issue_cmdlist() that deals with multi-core contention and generate a more efficient ECMDQ-specific function, arm_smmu_ecmdq_issue_cmdlist().

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

The SYNC command only ensures that the commands preceding it in the same ECMDQ have been executed; it cannot synchronize commands in other ECMDQs. If an unmap involves multiple commands, some commands may be executed on one core and the others on another core. In that case, once the SYNC completes, the execution of all the preceding commands cannot be guaranteed.

Preventing the process that inserts a set of associated commands from being migrated to other cores ensures that all the commands are inserted into the same ECMDQ.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

Ensure that each core exclusively occupies an ECMDQ and that all ECMDQs are enabled during initialization. Any error during this initialization results in a fallback to the normal CMDQ. When GERROR is triggered by an ECMDQ, all ECMDQs need to be traversed: the ECMDQs with errors are processed and the ECMDQs without errors are skipped directly.

Compared with register SMMU_CMDQ_PROD, register SMMU_ECMDQ_PROD has one extra 'EN' bit and one extra 'ERRACK' bit. Therefore, an extra member 'ecmdq_prod' is added to record the values of these two bits; each time register SMMU_ECMDQ_PROD is updated, the value of 'ecmdq_prod' is ORed in. After the error indicated by SMMU_GERROR.CMDQP_ERR is fixed, the 'ERRACK' bit needs to be toggled to resume the corresponding ECMDQ. A rwlock is therefore used to protect the write to bit 'ERRACK' during error handling and the read of bit 'ERRACK' during command insertion.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

When SMMU_GERROR.CMDQP_ERR differs from SMMU_GERRORN.CMDQP_ERR, one or more errors have been encountered on a command queue control page interface. We need to traverse all ECMDQs in that control page to find all the errors. Error handling for each ECMDQ is much the same as for the CMDQ, so this common processing is extracted into a new function, __arm_smmu_cmdq_skip_err().

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

One SMMU has only one normal CMDQ, so that CMDQ is used regardless of the core on which the command is inserted, and it can be referenced directly through "smmu->cmdq". However, one SMMU has multiple ECMDQs, and the ECMDQ used may differ depending on the core executing the command insertion. So the helper function arm_smmu_get_cmdq() is added, which returns the CMDQ/ECMDQ that the current core should use. For now, before the ECMDQ code is added, it simply returns "&smmu->cmdq" (a stand-in sketch follows this entry).

Many subfunctions of arm_smmu_cmdq_issue_cmdlist() use "&smmu->cmdq" or "&smmu->cmdq.q" directly. To support ECMDQ, they need to call the newly added arm_smmu_get_cmdq() instead. Note that the normal CMDQ is still required until ECMDQ becomes available.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
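The helper's shape can be shown with stand-in types, since this entry doesn't quote the real arm_smmu structures; the ECMDQ-related field names below are assumptions about the later patches in the series.

struct example_cmdq { int id; };

struct example_smmu {
        struct example_cmdq cmdq;       /* the single normal CMDQ */
        struct example_cmdq *ecmdqs;    /* per-core ECMDQs, once added */
        int ecmdq_enabled;
};

static struct example_cmdq *example_get_cmdq(struct example_smmu *smmu,
                                             int this_cpu)
{
        /* Later in the series: each core uses the ECMDQ it owns. */
        if (smmu->ecmdq_enabled && smmu->ecmdqs)
                return &smmu->ecmdqs[this_cpu];
        /* Until ECMDQ is available, everyone shares the normal CMDQ,
         * i.e. this patch's version just returns &smmu->cmdq. */
        return &smmu->cmdq;
}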
-
Committed by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

The key to the performance optimization of commit 587e6c10 ("iommu/arm-smmu-v3: Reduce contention during command-queue insertion") is to allow multiple cores to insert commands in parallel after a brief mutex contention. Inserting as many commands as possible per call reduces the number of mutex contentions and thereby improves overall performance; at the very least, it reduces the number of calls to arm_smmu_cmdq_issue_cmdlist().

Therefore, the function arm_smmu_cmdq_issue_cmd_with_sync() is added to insert the 'cmd+sync' pair of commands in a single call.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zhen Lei

hulk inclusion
category: feature
bugzilla: 174251
CVE: NA

-------------------------------------------------------------------------

The key to the performance optimization of commit 587e6c10 ("iommu/arm-smmu-v3: Reduce contention during command-queue insertion") is to allow multiple cores to insert commands in parallel after a brief mutex contention. Inserting as many commands as possible per call reduces the number of mutex contentions and thereby improves overall performance; at the very least, it reduces the number of calls to arm_smmu_cmdq_issue_cmdlist().

Therefore, use the command queue batching helpers to insert multiple commands per call.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra

mainline inclusion
from mainline-5.12-rc1
commit e59e10f8
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e59e10f8ef63d42fbb99776a5a112841e798b3b5

---------------------------

Add a debugfs file to muck about with the preempt mode at runtime.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/YAsGiUYf6NyaTplX@hirez.programming.kicks-ass.net
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra (Intel)

mainline inclusion
from mainline-5.12-rc1
commit 826bfeb3
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=826bfeb37bb4302ee6042f330c4c0c757152bdb8

---------------------------

Support the preempt= boot option and patch the static call sites accordingly.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-9-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra (Intel)

mainline inclusion
from mainline-5.12-rc1
commit 40607ee9
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=40607ee97e4eec5655cc0f76a720bdc4c63a6434

---------------------------

Provide a static call to control IRQ preemption (called in CONFIG_PREEMPT) so that we can override its behaviour when preempt= is overridden.

Since the default behaviour is full preemption, its call is initialized to provide IRQ preemption when preempt= isn't passed.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-8-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra (Intel)

mainline inclusion
from mainline-5.12-rc1
commit 2c9a98d3
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2c9a98d3bc808717ab63ad928a2b568967775388

---------------------------

Provide static calls to control preempt_schedule[_notrace]() (called in CONFIG_PREEMPT) so that we can override their behaviour when preempt= is overridden.

Since the default behaviour is full preemption, both their calls are initialized to the arch-provided wrapper, if any.

[fweisbec: only define static calls when PREEMPT_DYNAMIC, make it less dependent on x86 with __preempt_schedule_func]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-7-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra (Intel)

mainline inclusion
from mainline-5.12-rc1
commit b965f1dd
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b965f1ddb47daa5b8b2e2bc9c921431236830367

---------------------------

Provide static calls to control cond_resched() (called in !CONFIG_PREEMPT) and might_resched() (called in CONFIG_PREEMPT_VOLUNTARY) so that we can override their behaviour when preempt= is overridden.

Since the default behaviour is full preemption, both their calls are ignored when preempt= isn't passed.

[fweisbec: branch might_resched() directly to __cond_resched(), only define static calls when PREEMPT_DYNAMIC]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-6-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Conflicts:
    include/linux/kernel.h
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Michal Hocko

mainline inclusion
from mainline-5.12-rc1
commit 6ef869e0
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6ef869e0647439af0fc28dde162d33320d4e1dd7

---------------------------

Preemption mode selection is currently hardcoded in Kconfig choices. Introduce a dedicated option to tune the preemption flavour at boot time. This will only be available on architectures that support static calls efficiently, so as not to burden the feature with additional overhead that might be prohibitive or undesirable.

CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if the architecture provides the necessary support (CONFIG_STATIC_CALL_INLINE, CONFIG_GENERIC_ENTRY, and __preempt_schedule_function() / __preempt_schedule_notrace_function()).

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
[peterz: relax requirement to HAVE_STATIC_CALL]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-5-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Frederic Weisbecker

mainline inclusion
from mainline-5.12-rc1
commit 29fd0194
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=29fd01944b7273bb630c649a2104b7f9e4ef3fa6

---------------------------

DECLARE_STATIC_CALL() must pass the original function targeted for a given static call. But DEFINE_STATIC_CALL() may want to initialize it as off. In this case we can't pass NULL (for functions without a return value) or __static_call_return0 (for functions returning a value) directly to DEFINE_STATIC_CALL(), as that may trigger a static call redeclaration with a different function prototype. Type casts can't work around that either, as they don't get along with typeof().

The proper way to do this for functions that don't return a value is to use DEFINE_STATIC_CALL_NULL(). But functions returning an actual value don't have an equivalent yet.

Provide DEFINE_STATIC_CALL_RET0() to solve this situation (a usage sketch follows this entry).

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-3-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
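Usage of the new macro, as a kernel-context sketch: it assumes the patched static_call headers and will not build standalone, and the example function and names are hypothetical rather than taken from the series.

#include <linux/static_call.h>

static int example_might_resched(void)
{
        /* real work once the call is enabled */
        return 1;
}

/* Typed against example_might_resched(), but initialized "off": until
 * it is patched, call sites run __static_call_return0. */
DEFINE_STATIC_CALL_RET0(example_resched, example_might_resched);

static void example_enable(void)
{
        /* Patch every static_call(example_resched)() site to the real
         * function, e.g. when a matching preempt= mode is selected. */
        static_call_update(example_resched, example_might_resched);
}

static int example_poll(void)
{
        return static_call(example_resched)();
}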
-
由 Peter Zijlstra 提交于
mainline inclusion from mainline-5.12-rc1 commit 3f2a8fc4 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3f2a8fc4b15de18644e8a80a09edda168676e22c --------------------------- Provide a stub function that return 0 and wire up the static call site patching to replace the CALL with a single 5 byte instruction that clears %RAX, the return value register. The function can be cast to any function pointer type that has a single %RAX return (including pointers). Also provide a version that returns an int for convenience. We are clearing the entire %RAX register in any case, whether the return value is 32 or 64 bits, since %RAX is always a scratch register anyway. Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: NFrederic Weisbecker <frederic@kernel.org> Signed-off-by: NPeter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: NIngo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-2-frederic@kernel.orgSigned-off-by: NMa Junhai <majunhai2@huawei.com> Reviewed-by: NChen Hui <judy.chenhui@huawei.com> Signed-off-by: NZheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zheng Zengkai

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I41AUQ
CVE: NA

-------------------------------------------------

When compiling with openeuler_defconfig, the configs CONFIG_ARCH_PHYTIUM and CONFIG_ARM_GIC_PHYTIUM_2500 should be set to enable support for the Phytium ARMv8 S2500/FT-2500 SoC and its 2-processor server. Enable these two configs in arch/arm64/configs/openeuler_defconfig by default.

Note: as the FT-2500 2-processor server has 16 NUMA nodes, CONFIG_NODES_SHIFT should be 4.

Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Wang Yinfeng

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I41AUQ
CVE: NA

-------------------------------------------------

Phytium S2500 adjusts the GIC's implementation to support multi-socket system designs. This patch adds the driver for this new implementation while keeping the kernel binary compatible with other ARM servers in the ecosystem.

Signed-off-by: Wang Yinfeng <wangyinfeng@phytium.com.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 26 July 2021, 2 commits
-
Committed by Mel Gorman

mainline inclusion
from mainline-5.12-rc1
commit 9fe1f127
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I41A4K
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=9fe1f127b913318c631d0041ecf71486e38c2c2d

---------------------------

Both select_idle_core() and select_idle_cpu() do a loop over the same cpumask. Observe that by clearing the already visited CPUs, we can fold the iteration and iterate a core at a time.

All we need to do is remember any non-idle CPU we encountered while scanning for an idle core. This way we'll only iterate every CPU once. (A standalone sketch follows this entry.)

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20210127135203.19633-5-mgorman@techsingularity.net
Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com>
Conflicts:
    kernel/sched/fair.c
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
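The folded scan can be illustrated standalone, with a 64-bit mask standing in for the kernel's cpumask. SMT_WIDTH, the contiguous sibling layout, is_idle() and the use of __builtin_ctzll (GCC/Clang) are toy stand-ins, not the scheduler's actual code.

#include <stdint.h>

#define SMT_WIDTH 2                     /* toy: siblings are adjacent ids */

static int is_idle(int cpu)
{
        return cpu % 3 != 0;            /* toy stand-in for idle detection */
}

/* One pass over the mask: return a fully idle core if found, otherwise
 * leave the first idle CPU seen in *idle_cpu as the fallback. Each CPU
 * is cleared from the mask as it is visited, so it is examined once. */
static int select_idle_core_or_cpu(uint64_t mask, int *idle_cpu)
{
        *idle_cpu = -1;
        while (mask) {
                int cpu = __builtin_ctzll(mask);   /* lowest set bit */
                int core = cpu - (cpu % SMT_WIDTH);
                int core_idle = 1;

                for (int s = core; s < core + SMT_WIDTH; s++) {
                        if (is_idle(s)) {
                                if (*idle_cpu < 0)
                                        *idle_cpu = s; /* remember fallback */
                        } else {
                                core_idle = 0;
                        }
                        mask &= ~(1ULL << s);      /* visited: clear it */
                }
                if (core_idle)
                        return core;               /* whole core is idle */
        }
        return -1;   /* no idle core; the caller falls back to *idle_cpu */
}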
-
Committed by Mel Gorman

mainline inclusion
from mainline-5.12-rc1
commit 6cd56ef1
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I41A4K
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6cd56ef1df399a004f90ecb682427f9964969fc9

---------------------------

In order to make the next patch more readable, and to quantify the actual effectiveness of this pass, start by removing it.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/20210125085909.4600-4-mgorman@techsingularity.net
Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-