- 31 August 2021, 8 commits
-
-
Committed by Sang Yan
hulk inclusion category: feature bugzilla: 48159 CVE: N/A ------------------------------ Reserve memory for quick kexec on arm64 with cmdline "quickkexec=". Signed-off-by: Sang Yan <sangyan@huawei.com> Reviewed-by: Chen Wandun <chenwandun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Sang Yan
hulk inclusion category: feature bugzilla: 48159 CVE: N/A ------------------------------ In normal kexec, relocating the kernel may cost 5 ~ 10 seconds to copy all segments from vmalloced memory to kernel boot memory, because the MMU is disabled. We introduce quick kexec to save the time of copying memory as above, just like kdump (kexec on crash), by using reserved memory "Quick Kexec". To enable it, we should reserve memory and set up quick_kexec_res. Construct the quick kimage in the same way as the crash kernel, then simply copy all segments of the kimage to the reserved memory. We also add this support to the kexec_load syscall using the KEXEC_QUICK flag. Signed-off-by: Sang Yan <sangyan@huawei.com> Reviewed-by: Kuohai Xu <xukuohai@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
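A minimal user-space sketch of how the new flag might be passed to the kexec_load syscall. The numeric value of KEXEC_QUICK below is hypothetical (the real value comes from the patched uapi header), and the empty segment list only probes flag acceptance, so treat this as an illustration of the flag plumbing rather than the patch's actual ABI.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/kexec.h>

/* Hypothetical flag bit for illustration; the patch defines the real value. */
#ifndef KEXEC_QUICK
#define KEXEC_QUICK 0x00000004
#endif

int main(void)
{
	/*
	 * Requires CAP_SYS_BOOT. Passing zero segments only checks whether
	 * the kernel accepts the flag; a real loader would fill in an array
	 * of struct kexec_segment entries describing the new kernel.
	 */
	long ret = syscall(SYS_kexec_load, 0UL, 0UL, NULL,
			   KEXEC_ARCH_DEFAULT | KEXEC_QUICK);
	if (ret < 0)
		perror("kexec_load");
	return ret < 0 ? 1 : 0;
}
```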
-
Committed by Zheng Zengkai
euleros inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I42DAP CVE: NA ---------------------------------- Some manufacturers want to enable SMMU in order to use the virtualization pass-through feature on servers with the 3408iMR/3416iMR RAID card installed. Enable CONFIG_SMMU_BYPASS_DEV by default in openeuler_defconfig. Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by luochunsheng
euleros inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I42DAP CVE: NA ---------------------------------- arm64 cannot support the virtualization pass-through feature when using the 3408iMR/3416iMR RAID card. To solve this problem, we add two functions: 1. we prepare an initial bypass level-2 entry for specific devices when the SMMU uses a 2-level stream ID table. 2. we add the smmu.bypassdev cmdline option to allow the SMMU to bypass streams for some specific devices. Usage: 3408iMR RAID: smmu.bypassdev=0x1000:0x17; 3416iMR RAID: smmu.bypassdev=0x1000:0x15. Signed-off-by: luochunsheng <luochunsheng@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Jian Shen
mainline inclusion from mainline-v5.14-rc3 commit bbfd4506 category: bugfix bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bbfd4506f962e7e6fff8f37f017154a3c3791264 ---------------------------------------------------------------------- Currently, the VF doesn't enable rx VLAN offload during initialization; the PF does it for the VFs. If the user disables rx VLAN offload for the VF with ethtool -K and then reloads the VF driver, the rx VLAN offload state may become inconsistent between hardware and software. Fix it by enabling rx VLAN offload when the VF initializes. Fixes: e2cb1dec ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Jian Shen
mainline inclusion from mainline-master commit 184cd221 category: feature bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=184cd221a86321e53df9389c4b35a247b60c1e77 ---------------------------------------------------------------------- Due to a hardware limitation, the port VLAN filter is port level and effective for all functions of the port. So if port VLAN bypass is not supported, it's necessary to disable the port VLAN filter in order to support function-level VLAN filter control. Fixes: 2ba30662 ("net: hns3: add support for modify VLAN filter state") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peng Li
mainline inclusion from mainline-master commit 4671042f category: feature bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4671042f1ef0d37137884811afcc4ae67685ce07 ---------------------------------------------------------------------- When the VF needs a response from the PF, the VF will wait (1us - 1s) to receive the response; otherwise it times out and the VF action fails. If the VF does not receive the response to the 1st action because of a timeout, the 2nd action may receive the response meant for the 1st action and get incorrect response data. The VF must receive the right response from the PF, or it will cause unexpected errors. This patch adds match_id to check the mailbox response from the PF to the VF, to make sure the VF gets the right response: 1. The message sent from the VF is labelled with a match_id, a unique 16-bit non-zero value. 2. The response sent from the PF is labelled with the match_id taken from the request. 3. The VF uses the match_id to match request and response messages. This scheme depends on the PF driver supporting match_id; if the PF driver doesn't support it, the VF falls back to the original scheme. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Chengwen Feng
mainline inclusion from mainline-master commit 1b713d14 category: bugfix bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1b713d14dc3c077ec45e65dab4ea01a8bc41b8c1 ---------------------------------------------------------------------- Currently, the mailbox synchronous communication between VF and PF uses the following fields to maintain communication: 1. origin_mbx_msg, which combines the message code and subcode and is used to match request and response. 2. received_resp, which indicates whether a response was received. A mismatch is possible in the following situation: 1. VF sends message A with code=1 subcode=1. 2. PF is blocked for about 500ms when processing message A. 3. VF detects that message A timed out because it can't get the response within 500ms. 4. VF sends message B with code=1 subcode=1, equal to message A. 5. PF processes the first message A and sends the response message to VF. 6. VF identifies the response as matching message B because the code/subcode is the same. This leads to a mismatch of request and response. To fix the above bug, we use the following scheme: 1. The message sent from the VF is labelled with a match_id, a unique 16-bit non-zero value. 2. The response sent from the PF is labelled with the match_id taken from the request. 3. The VF uses the match_id to match request and response messages. As for the PF driver, it only needs to copy the match_id from the request to the response. Fixes: dde1a86e ("net: hns3: Add mailbox support to PF driver") Signed-off-by: Chengwen Feng <fengchengwen@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
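A simplified sketch of the matching scheme described above; the structures and helpers are illustrative stand-ins, not the driver's actual mailbox descriptors.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative message layouts showing only the fields relevant to matching. */
struct vf_mbx_req {
	uint8_t  code;
	uint8_t  subcode;
	uint16_t match_id;	/* unique, non-zero, chosen by the VF */
};

struct pf_mbx_resp {
	uint8_t  code;
	uint8_t  subcode;
	uint16_t match_id;	/* PF copies this back from the request */
};

static uint16_t next_match_id;

static uint16_t vf_alloc_match_id(void)
{
	/* Skip 0 so "no match_id" (an old PF driver) stays distinguishable. */
	if (++next_match_id == 0)
		++next_match_id;
	return next_match_id;
}

static bool vf_resp_matches(const struct vf_mbx_req *req,
			    const struct pf_mbx_resp *resp)
{
	/* A PF without match_id support echoes 0: fall back to code/subcode. */
	if (resp->match_id == 0)
		return resp->code == req->code && resp->subcode == req->subcode;
	return resp->match_id == req->match_id;
}
```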
-
- 30 July 2021, 1 commit
-
-
Committed by Eric Sandeen
stable inclusion from stable-5.10.52 commit 174c34d9cda1b5818419b8f5a332ced10755e52f bugzilla: 331 CVE: CVE-2021-33909 --------------------------------------------------------------- commit 8cae8cd8 upstream. There is no reasonable need for a buffer larger than this, and it avoids int overflow pitfalls. Fixes: 058504ed ("fs/seq_file: fallback to vmalloc allocation") Suggested-by: Al Viro <viro@zeniv.linux.org.uk> Reported-by: Qualys Security Advisory <qsa@qualys.com> Signed-off-by: Eric Sandeen <sandeen@redhat.com> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
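A sketch of the idea behind the fix, assuming the clamp sits in a small seq_file allocation helper and uses MAX_RW_COUNT as the limit; the exact placement in the upstream commit may differ.

```c
/* fs/seq_file.c-style sketch: MAX_RW_COUNT (INT_MAX rounded down to a page
 * boundary) already bounds any single read, so no seq_file buffer ever
 * needs to be larger than that.
 */
static void *seq_buf_alloc(unsigned long size)
{
	if (unlikely(size > MAX_RW_COUNT))
		return NULL;	/* refuse absurd sizes instead of risking overflow */
	return kvmalloc(size, GFP_KERNEL_ACCOUNT);
}
```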
-
- 28 July 2021, 18 commits
-
-
Committed by Zhen Lei
hulk inclusion category: feature bugzilla: 174251 CVE: NA ------------------------------------------------------------------------- Due to limited hardware resources, the number of ECMDQs may be less than the number of cores. If the number of ECMDQs is greater than the number of NUMA nodes, ensure that each node has at least one ECMDQ. This is because ECMDQ queue memory is requested from the NUMA node where it resides, which may result in better command filling and insertion performance. The current ECMDQ implementation reuses the command insertion function arm_smmu_cmdq_issue_cmdlist() of the normal CMDQ. This function already supports concurrent command insertion by multiple cores. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zhen Lei
hulk inclusion category: feature bugzilla: 174251 CVE: NA ------------------------------------------------------------------------- When a core can exclusively own an ECMDQ, competition with other cores does not need to be considered during command insertion. Therefore, we can delete the part of arm_smmu_cmdq_issue_cmdlist() that deals with multi-core contention and generate a more efficient ECMDQ-specific function arm_smmu_ecmdq_issue_cmdlist(). Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zhen Lei
hulk inclusion category: feature bugzilla: 174251 CVE: NA ------------------------------------------------------------------------- The SYNC command only ensures that the commands preceding it in the same ECMDQ have been executed; it cannot synchronize the commands in other ECMDQs. If an unmap involves multiple commands, some commands may be executed on one core while the others are executed on another core. In this case, after the SYNC execution is complete, the execution of all preceding commands cannot be ensured. Preventing the process that inserts a set of associated commands from being migrated to other cores ensures that all of those commands are inserted into the same ECMDQ. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
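A rough sketch of the idea, assuming the associated commands are simply issued with preemption disabled so the task cannot migrate mid-sequence; the two submit helpers are illustrative, not the driver's real functions.

```c
/* Keep a batch of related commands plus the trailing SYNC on one CPU so
 * they all land in the same per-core ECMDQ.
 */
static void issue_unmap_cmds_on_one_cpu(struct arm_smmu_device *smmu,
					struct arm_smmu_cmdq_batch *cmds)
{
	int cpu = get_cpu();		/* disables preemption: no migration */

	submit_batch_to_ecmdq(smmu, cpu, cmds);	/* illustrative helper */
	issue_sync_on_ecmdq(smmu, cpu);		/* illustrative helper */

	put_cpu();			/* re-enable preemption */
}
```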
-
Committed by Zhen Lei
hulk inclusion category: feature bugzilla: 174251 CVE: NA ------------------------------------------------------------------------- Ensure that each core exclusively occupies an ECMDQ and all of them are enabled during initialization. Any error during this initialization results in a fallback to using the normal CMDQ. When GERROR is triggered by an ECMDQ, all ECMDQs need to be traversed: the ECMDQs with errors will be processed and the ECMDQs without errors will be skipped directly. Compared with register SMMU_CMDQ_PROD, register SMMU_ECMDQ_PROD has one more 'EN' bit and one more 'ERRACK' bit. Therefore, an extra member 'ecmdq_prod' is added to record the values of these two bits. Each time register SMMU_ECMDQ_PROD is updated, the value of 'ecmdq_prod' is ORed in. After the error indicated by SMMU_GERROR.CMDQP_ERR is fixed, the 'ERRACK' bit needs to be toggled to resume the corresponding ECMDQ. Therefore, a rwlock is used to protect the write operation to the 'ERRACK' bit during error handling and the read operation to that bit during command insertion. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zhen Lei
hulk inclusion category: feature bugzilla: 174251 CVE: NA ------------------------------------------------------------------------- When SMMU_GERROR.CMDQP_ERR differs from SMMU_GERRORN.CMDQP_ERR, it indicates that one or more errors have been encountered on a command queue control page interface. We need to traverse all ECMDQs in that control page to find all errors. The error handling for each ECMDQ is much the same as the CMDQ error handling. This common processing part is extracted as a new function __arm_smmu_cmdq_skip_err(). Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Zhen Lei
hulk inclusion category: feature bugzilla: 174251 CVE: NA ------------------------------------------------------------------------- One SMMU has only one normal CMDQ. Therefore, this CMDQ is used regardless of the core on which the command is inserted, and it can be referenced directly through "smmu->cmdq". However, one SMMU has multiple ECMDQs, and the ECMDQ used by the core performing the command insertion may differ. So the helper function arm_smmu_get_cmdq() is added, which returns the CMDQ/ECMDQ that the current core should use. Currently, the code that supports ECMDQ is not added; the helper simply returns "&smmu->cmdq". Many subfunctions of arm_smmu_cmdq_issue_cmdlist() use "&smmu->cmdq" or "&smmu->cmdq.q" directly. To support ECMDQ, they need to call the newly added function arm_smmu_get_cmdq() instead. Note that the normal CMDQ is still required until ECMDQ is available. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
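At this stage the helper is trivial, as the description above says; a sketch of its shape and of how call sites change:

```c
/* Returns the command queue the current core should use.  Until the ECMDQ
 * plumbing lands, every core shares the SMMU's single normal CMDQ.
 */
static struct arm_smmu_cmdq *arm_smmu_get_cmdq(struct arm_smmu_device *smmu)
{
	return &smmu->cmdq;
}

/* Callers that used &smmu->cmdq or &smmu->cmdq.q directly, e.g. inside
 * arm_smmu_cmdq_issue_cmdlist(), switch to:
 *
 *	struct arm_smmu_cmdq *cmdq = arm_smmu_get_cmdq(smmu);
 */
```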
-
Committed by Zhen Lei
hulk inclusion category: feature bugzilla: 174251 CVE: NA ------------------------------------------------------------------------- The key to the performance optimization of commit 587e6c10 ("iommu/arm-smmu-v3: Reduce contention during command-queue insertion") is to allow multiple cores to insert commands in parallel after a brief mutex contention. Obviously, inserting as many commands at a time as possible reduces the number of times a core participates in mutex contention, thereby improving overall performance. At the very least, it reduces the number of calls to arm_smmu_cmdq_issue_cmdlist(). Therefore, the function arm_smmu_cmdq_issue_cmd_with_sync() is added to insert the 'cmd+sync' commands in a single call. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
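An outline of what such a wrapper looks like, based on the description above; error reporting and driver-internal details are trimmed, so read it as a sketch rather than the exact patch.

```c
/* Build one command and hand it to arm_smmu_cmdq_issue_cmdlist() together
 * with a trailing CMD_SYNC, in a single call instead of two.
 */
static int arm_smmu_cmdq_issue_cmd_with_sync(struct arm_smmu_device *smmu,
					     struct arm_smmu_cmdq_ent *ent)
{
	u64 cmd[CMDQ_ENT_DWORDS];

	if (arm_smmu_cmdq_build_cmd(cmd, ent))
		return -EINVAL;

	return arm_smmu_cmdq_issue_cmdlist(smmu, cmd, 1, true); /* true = add SYNC */
}
```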
-
Committed by Zhen Lei
hulk inclusion category: feature bugzilla: 174251 CVE: NA ------------------------------------------------------------------------- The key to the performance optimization of commit 587e6c10 ("iommu/arm-smmu-v3: Reduce contention during command-queue insertion") is to allow multiple cores to insert commands in parallel after a brief mutex contention. Obviously, inserting as many commands at a time as possible reduces the number of times a core participates in mutex contention, thereby improving overall performance. At the very least, it reduces the number of calls to arm_smmu_cmdq_issue_cmdlist(). Therefore, use the command queue batching helpers to insert multiple commands at a time. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by: Hanjun Guo <guohanjun@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra
mainline inclusion from mainline-5.12-rc1 commit e59e10f8 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e59e10f8ef63d42fbb99776a5a112841e798b3b5 --------------------------- Add a debugfs file to muck about with the preempt mode at runtime. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/YAsGiUYf6NyaTplX@hirez.programming.kicks-ass.net Signed-off-by: Ma Junhai <majunhai2@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra (Intel)
mainline inclusion from mainline-5.12-rc1 commit 826bfeb3 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=826bfeb37bb4302ee6042f330c4c0c757152bdb8 --------------------------- Support the preempt= boot option and patch the static call sites accordingly. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-9-frederic@kernel.org Signed-off-by: Ma Junhai <majunhai2@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra (Intel)
mainline inclusion from mainline-5.12-rc1 commit 40607ee9 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=40607ee97e4eec5655cc0f76a720bdc4c63a6434 --------------------------- Provide a static call to control IRQ preemption (called in CONFIG_PREEMPT) so that we can override its behaviour when preempt= is overridden. Since the default behaviour is full preemption, its call is initialized to provide IRQ preemption when preempt= isn't passed. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-8-frederic@kernel.org Signed-off-by: Ma Junhai <majunhai2@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Peter Zijlstra (Intel)
mainline inclusion from mainline-5.12-rc1 commit 2c9a98d3 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2c9a98d3bc808717ab63ad928a2b568967775388 --------------------------- Provide static calls to control preempt_schedule[_notrace]() (called in CONFIG_PREEMPT) so that we can override their behaviour when preempt= is overridden. Since the default behaviour is full preemption, both their calls are initialized to the arch-provided wrapper, if any. [fweisbec: only define static calls when PREEMPT_DYNAMIC, make it less dependent on x86 with __preempt_schedule_func] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-7-frederic@kernel.org Signed-off-by: Ma Junhai <majunhai2@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
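A trimmed sketch of the x86-style wiring this describes; the real header invokes the static-call trampoline from inline asm to preserve register conventions, which is simplified away here.

```c
#ifdef CONFIG_PREEMPT_DYNAMIC
/*
 * preempt_schedule() becomes a static call initialized to the arch wrapper,
 * so the preempt= boot option can later repoint (or disable) it.
 */
DECLARE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);
DEFINE_STATIC_CALL(preempt_schedule, __preempt_schedule_func);

#define __preempt_schedule() static_call(preempt_schedule)()
#endif
```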
-
Committed by Peter Zijlstra (Intel)
mainline inclusion from mainline-5.12-rc1 commit b965f1dd category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b965f1ddb47daa5b8b2e2bc9c921431236830367 --------------------------- Provide static calls to control cond_resched() (called in !CONFIG_PREEMPT) and might_resched() (called in CONFIG_PREEMPT_VOLUNTARY) so that we can override their behaviour when preempt= is overridden. Since the default behaviour is full preemption, both their calls are ignored when preempt= isn't passed. [fweisbec: branch might_resched() directly to __cond_resched(), only define static calls when PREEMPT_DYNAMIC] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-6-frederic@kernel.org Signed-off-by: Ma Junhai <majunhai2@huawei.com> Conflicts: include/linux/kernel.h Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Michal Hocko
mainline inclusion from mainline-5.12-rc1 commit 6ef869e0 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6ef869e0647439af0fc28dde162d33320d4e1dd7 --------------------------- Preemption mode selection is currently hardcoded in Kconfig choices. Introduce a dedicated option to tune the preemption flavour at boot time. This will only be available on architectures that efficiently support static calls, so as not to tempt users with the feature where the additional overhead might be prohibitive or undesirable. CONFIG_PREEMPT_DYNAMIC is automatically selected by CONFIG_PREEMPT if the architecture provides the necessary support (CONFIG_STATIC_CALL_INLINE, CONFIG_GENERIC_ENTRY, and __preempt_schedule_function() / __preempt_schedule_notrace_function()). Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> [peterz: relax requirement to HAVE_STATIC_CALL] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-5-frederic@kernel.org Signed-off-by: Ma Junhai <majunhai2@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Frederic Weisbecker
mainline inclusion from mainline-5.12-rc1 commit 29fd0194 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=29fd01944b7273bb630c649a2104b7f9e4ef3fa6 --------------------------- DECLARE_STATIC_CALL() must pass the original function targeted for a given static call. But DEFINE_STATIC_CALL() may want to initialize it as off. In this case we can't pass NULL (for functions without a return value) or __static_call_return0 (for functions returning a value) directly to DEFINE_STATIC_CALL(), as that may trigger a static call redeclaration with a different function prototype. Nor can type casts work around that, as they don't get along with typeof(). The proper way to do that for functions that don't return a value is to use DEFINE_STATIC_CALL_NULL(). But functions returning an actual value don't have an equivalent yet. Provide DEFINE_STATIC_CALL_RET0() to solve this situation. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-3-frederic@kernel.org Signed-off-by: Ma Junhai <majunhai2@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
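A small sketch of how the new macro pairs with DECLARE_STATIC_CALL(); the weight_fn/my_weight names are made up for illustration.

```c
#include <linux/static_call.h>

static int my_weight(int cpu)
{
	return cpu * 2 + 1;		/* arbitrary example computation */
}

DECLARE_STATIC_CALL(weight_fn, my_weight);	/* fixes the prototype */
DEFINE_STATIC_CALL_RET0(weight_fn, my_weight);	/* but starts off returning 0 */

static int query_weight(int cpu)
{
	return static_call(weight_fn)(cpu);	/* 0 until someone enables it */
}

static void enable_weight(void)
{
	static_call_update(weight_fn, my_weight);	/* now calls my_weight() */
}
```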
-
Committed by Peter Zijlstra
mainline inclusion from mainline-5.12-rc1 commit 3f2a8fc4 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3f2a8fc4b15de18644e8a80a09edda168676e22c --------------------------- Provide a stub function that returns 0 and wire up the static call site patching to replace the CALL with a single 5-byte instruction that clears %RAX, the return value register. The function can be cast to any function pointer type that has a single %RAX return (including pointers). Also provide a version that returns an int for convenience. We are clearing the entire %RAX register in any case, whether the return value is 32 or 64 bits, since %RAX is always a scratch register anyway. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210118141223.123667-2-frederic@kernel.org Signed-off-by: Ma Junhai <majunhai2@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
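The generic form of such a stub is tiny; the sketch below shows it along with the kind of cast the message describes. The cast trick relies on the calling convention (the ignored arguments and the single return register), as the commit notes for %RAX on x86.

```c
/* Always returns 0; any function pointer type whose return value fits in
 * the first return register can be pointed at it.
 */
long __static_call_return0(void)
{
	return 0;
}

/* Example: use it as an int-returning callback; the arguments are simply
 * ignored and 0 comes back.
 */
typedef int (*pred_fn)(void *obj, unsigned int flags);

static int call_pred(pred_fn fn, void *obj)
{
	return fn(obj, 0);
}

/* call_pred((pred_fn)__static_call_return0, NULL) evaluates to 0. */
```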
-
Committed by Zheng Zengkai
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I41AUQ CVE: NA ------------------------------------------------- When compiling with openeuler_defconfig, the configs CONFIG_ARCH_PHYTIUM and CONFIG_ARM_GIC_PHYTIUM_2500 should be set to enable support for the Phytium ARMv8 S2500/FT-2500 SoC and 2-processor server. Enable these two configs in arch/arm64/configs/openeuler_defconfig by default. Note: As the FT-2500 2-processor server has 16 NUMA nodes, CONFIG_NODES_SHIFT should be 4. Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com> Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Wang Yinfeng
hulk inclusion category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I41AUQ CVE: NA ------------------------------------------------- Phytium S2500 adjusts the GIC's implementation to support multi-socket system designs. This patch adds the driver for this new implementation while keeping the kernel binary compatible with other ARM servers in the ecosystem. Signed-off-by: Wang Yinfeng <wangyinfeng@phytium.com.cn> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com> Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 26 July 2021, 13 commits
-
-
Committed by Mel Gorman
mainline inclusion from mainline-5.12-rc1 commit 9fe1f127 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I41A4K CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=9fe1f127b913318c631d0041ecf71486e38c2c2d --------------------------- Both select_idle_core() and select_idle_cpu() do a loop over the same cpumask. Observe that by clearing the already visited CPUs, we can fold the iteration and iterate a core at a time. All we need to do is remember any non-idle CPU we encountered while scanning for an idle core. This way we'll only iterate every CPU once. Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20210127135203.19633-5-mgorman@techsingularity.net Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com> Conflicts: kernel/sched/fair.c Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
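A condensed sketch of the folded scan described above; SIS_PROP throttling and some bookkeeping are omitted, so treat it as the shape of the loop rather than the exact kernel/sched/fair.c code.

```c
/*
 * One pass over the LLC mask serves both "find an idle core" and "find an
 * idle CPU": visited CPUs are cleared from the mask so each CPU is looked
 * at once, and the first idle CPU seen is remembered as a fallback.
 */
static int select_idle_cpu_sketch(struct task_struct *p,
				  struct sched_domain *sd, int target)
{
	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
	int i, cpu, idle_cpu = -1;
	bool smt = test_idle_cores(target, false);

	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

	for_each_cpu_wrap(cpu, cpus, target) {
		if (smt) {
			/* Checks the whole core, clears its siblings from
			 * 'cpus' and remembers any idle CPU in idle_cpu. */
			i = select_idle_core(p, cpu, cpus, &idle_cpu);
			if ((unsigned int)i < nr_cpumask_bits)
				return i;	/* found a fully idle core */
		} else {
			if (available_idle_cpu(cpu))
				return cpu;	/* plain idle-CPU scan */
		}
	}

	return idle_cpu;	/* best single idle CPU seen, or -1 */
}
```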
-
Committed by Mel Gorman
mainline inclusion from mainline-5.12-rc1 commit 6cd56ef1 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I41A4K CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6cd56ef1df399a004f90ecb682427f9964969fc9 --------------------------- In order to make the next patch more readable, and to quantify the actual effectiveness of this pass, start by removing it. Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20210125085909.4600-4-mgorman@techsingularity.net Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Mel Gorman
mainline inclusion from mainline-5.12-rc1 commit bae4ec13 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I41A4K CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bae4ec13640b0915e7dd86da7e65c5d085160571 --------------------------- As noted by Vincent Guittot, avg_scan_costs are calculated for SIS_PROP even if SIS_PROP is disabled. Move the time calculations under a SIS_PROP check and, while we are at it, exclude the cost of initialising the CPU mask from the average scan cost. Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20210125085909.4600-3-mgorman@techsingularity.net Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Mel Gorman
mainline inclusion from mainline-5.12-rc1 commit e6e0dc2d category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I41A4K CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e6e0dc2d5497f7f3ed970052917e2923c6f453f4 --------------------------- SIS_AVG_CPU was introduced as a means of avoiding a search when the average search cost indicated that the search would likely fail. It was a blunt instrument and disabled by commit 4c77b18c ("sched/fair: Make select_idle_cpu() more aggressive") and later replaced with a proportional search depth by commit 1ad3aaf3 ("sched/core: Implement new approach to scale select_idle_cpu()"). While there are corner cases where SIS_AVG_CPU is better, it has now been disabled for almost three years. As the intent of SIS_PROP is to reduce the time complexity of select_idle_cpu(), let's drop SIS_AVG_CPU and focus on SIS_PROP as a throttling mechanism. Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20210125085909.4600-2-mgorman@techsingularity.net Signed-off-by: Zheng Zucheng <zhengzucheng@huawei.com> Reviewed-by: Chen Hui <judy.chenhui@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Jian Shen
mainline inclusion from mainline-master commit d59daf6a category: feature bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d59daf6a4ceedf342f349e94f1300e1598213252 ---------------------------------------------------------------------- This patch adds support for dumping the MAC UMV counters in debugfs, which is helpful for debugging. The display style is below: $ cat umv_info num_alloc_vport : 2 max_umv_size : 256 wanted_umv_size : 256 priv_umv_size : 85 share_umv_size : 86 vport(0) used_umv_num : 1 vport(1) used_umv_num : 1 Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Jian Shen
mainline inclusion from mainline-master commit 03a92fe8 category: feature bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=03a92fe8cedb6f619df416d38d0b57fd55070cd7 ---------------------------------------------------------------------- Previously, the flow director counter was not enabled. To make it easier to check whether the flow director was hit or not, enable the flow director counter for each function, and add a debugfs query interface to query the counters for each function. The debugfs command is below: cat fd_counter func_id hit_times pf 0 vf0 0 vf1 0 Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Christophe JAILLET
mainline inclusion from mainline-master commit b40d7af7 category: bugfix bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b40d7af798a0a459d65bd95f34e3dff004eb554a ---------------------------------------------------------------------- If this 'kzalloc()' fails we must free some resources, as in all the other error handling paths of this function. Fixes: 2e2deee7 ("net: hns3: add the RAS compatibility adaptation solution") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Dan Carpenter
mainline inclusion from mainline-master commit faebad85 category: bugfix bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=faebad853455b7126450c1690f7c31e048213543 ---------------------------------------------------------------------- This patch doesn't affect runtime at all; it's just a correctness issue. The ptp->info.name[] buffer has 16 characters but the snprintf() limit was capped at 32 characters. Fortunately, HCLGE_DRIVER_NAME is "hclge", which isn't close to 16 characters, so we're fine. Fixes: 0bf5eb78 ("net: hns3: add support for PTP") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
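A sketch of the pattern the fix moves to: bound the copy by the destination buffer's real size rather than an unrelated constant. The struct is trimmed to the relevant field.

```c
#include <stdio.h>

#define HCLGE_DRIVER_NAME "hclge"

struct ptp_info_sketch {
	char name[16];		/* destination buffer is 16 bytes */
};

static void ptp_set_name(struct ptp_info_sketch *info)
{
	/* sizeof(info->name) instead of a magic 32 */
	snprintf(info->name, sizeof(info->name), "%s", HCLGE_DRIVER_NAME);
}
```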
-
Committed by Yunsheng Lin
mainline inclusion from mainline-master commit 96104500 category: bugfix bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=961045004b774aae7a244fa0435f8a6a2495c234 ---------------------------------------------------------------------- In the current rx page reuse handling process, the driver and the stack may conflict over the rx page buffer in high-pressure scenarios. To fix this problem, we need to check whether the page is only owned by the driver at the beginning and at the end of a page, to make sure there is no reuse conflict between driver and stack when desc_cb->page_offset is rolled back to zero or increased. Fixes: fa7711b8 ("net: hns3: optimize the rx page reuse handling process") Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Yunsheng Lin
mainline inclusion from mainline-master commit 99f6b5fb category: feature bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=99f6b5fb5f63cf69c6e56bba8e5492c98c521a63 ---------------------------------------------------------------------- Currently the rx page will be reused to receive a future packet when the stack releases the previous skb quickly. If the old page can not be reused, a new page will be allocated and mapped, which consumes a lot of CPU when the IOMMU is in strict mode, especially when the application and irq/NAPI happen to run on the same CPU. So allocate a new frag to memcpy the data to avoid the costly IOMMU unmapping/mapping operation, and add "frag_alloc_err" and "frag_alloc" stats to the "ethtool -S ethX" cmd. The throughput improves by above 50% when running a single thread of iperf using TCP when the IOMMU is in strict mode and iperf shares the same CPU with irq/NAPI (rx_copybreak = 2048 and mtu = 1500). Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Yunsheng Lin
mainline inclusion from mainline-master commit fa7711b8 category: feature bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fa7711b888f24ee9291d90f8fbdaccfc80ed72c7 ---------------------------------------------------------------------- Currently the rx page offset is only reset to zero when all the below conditions are satisfied: 1. the rx page is only owned by the driver. 2. the rx page is reusable. 3. the page offset that is about to be given to the stack has reached the end of the page. If the page offset is over hns3_buf_size(), it means the buffer below the offset of the page is usable when the above conditions 1 & 2 are satisfied, so the page offset can be reset to zero instead of being increased. We may be able to always reuse the first 4K buffer of a 64K page, which means we can limit the hot buffer size as much as possible. The above optimization is a side effect of refactoring the rx page reuse handling in order to support rx copybreak. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Committed by Yunsheng Lin
mainline inclusion from mainline-master commit 7459775e category: feature bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7459775e9f658a2d5f3ff9d4d087e86f4d4e5b83 ---------------------------------------------------------------------- Using the queue based tx buffer, it is also possible to allocate a sgl buffer and use skb_to_sgvec() to convert the skb to the sgvec, in order to support dma_map_sg() and decrease the overhead of IOMMU mapping and unmapping. Firstly, it reduces the number of buffers. For example, a tcp skb may have a 66-byte header and 3 fragments of 4328, 32768, and 28064 bytes. With this patch, dma_map_sg() will combine them into two buffers, a 66-byte header and one 65160-byte fragment, by using the IOMMU. Secondly, it reduces the number of dma mappings and unmappings. All the original 4 buffers are mapped only once rather than 4 times. The throughput improves by above 10% when running a single thread of iperf using TCP when the IOMMU is in strict mode. Suggested-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
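A minimal sketch of the skb_to_sgvec()/dma_map_sg() combination described above; the surrounding queue handling is condensed, so everything apart from those two calls is illustrative glue.

```c
#include <linux/skbuff.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/* Map one skb (linear part plus all frags) with a single dma_map_sg() call. */
static int map_skb_as_sgl(struct device *dev, struct sk_buff *skb,
			  struct scatterlist *sgl)
{
	int nents, mapped;

	sg_init_table(sgl, MAX_SKB_FRAGS + 1);	/* linear part + every frag */

	nents = skb_to_sgvec(skb, sgl, 0, skb->len);	/* skb -> sg entries */
	if (nents < 0)
		return nents;

	mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
	if (!mapped)
		return -ENOMEM;		/* IOMMU mapping failed */

	return mapped;			/* may be fewer than nents when merged */
}
```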
-
Committed by Huazhong Tan
mainline inclusion from mainline-master commit 1a00197b category: feature bugzilla: 173966 CVE: NA Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1a00197b7d2fe57f0be93037d5090e19a9b178c8 ---------------------------------------------------------------------- Add support to query the tx spare buffer size from the configuration file, and use this info to do spare buffer initialization when the module parameter 'tx_spare_buf_size' is not specified. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Reviewed-by: Yongxin Li <liyongxin1@huawei.com> Signed-off-by: Junxin Chen <chenjunxin1@huawei.com> Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-