- 26 September 2021, 1 commit
-
-
Submitted by Zhangfei Gao

driver inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I472UK?from=project-issue

----------------------------------------------------------------------

A PASID-like feature is implemented on AMBA without using TLP prefixes, and these devices have a PASID capability even though they do not support TLP. Add a pasid_no_tlp bit meaning "PASID works without TLP prefixes" and make pci_enable_pasid() check pasid_no_tlp as well as eetlp_prefix_path.

Suggested-by: Bjorn Helgaas <helgaas@kernel.org>
Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
Reviewed-by: Hao Fang <fanghao11@huawei.com>
Reviewed-by: Mingqiang Ling <lingmingqiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
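A minimal sketch of the gating logic described above, using the two struct pci_dev bits named in the commit message (eetlp_prefix_path and the new pasid_no_tlp); this is illustrative, not the literal patch:

    #include <linux/pci.h>

    /*
     * Illustrative only: a device may use PASID either because the whole
     * path supports End-End TLP prefixes, or because it implements the
     * AMBA-style "PASID without TLP prefixes" scheme flagged by the new
     * pasid_no_tlp bit.
     */
    static bool pasid_usable(struct pci_dev *pdev)
    {
            return pdev->eetlp_prefix_path || pdev->pasid_no_tlp;
    }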
-
- 02 September 2021, 5 commits
-
-
Submitted by Liang Li (Euler)

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

When CONFIG_IOMMU_API is off, the kernel build fails with the error below:

  ...
  LD      vmlinux.o
  drivers/of/platform.o: In function `iommu_bind_guest_msi':
  platform.c:(.text+0x9e): multiple definition of `iommu_bind_guest_msi'
  drivers/of/device.o:device.c:(.text+0x122): first defined here
  .../repo/srcs/kernel/Makefile:1178: recipe for target 'vmlinux' failed
  ...

This should be a typo introduced by commit 9db83ab7. Simply correcting the stub function to be 'static inline' in the header file is good enough.

Signed-off-by: Liang Li (Euler) <liliang889@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
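A generic illustration (names are hypothetical, not the actual header) of why a header stub defined without 'static inline' produces exactly this kind of multiple-definition link error:

    /* foo.h -- a header included by several .c files (hypothetical example) */
    #include <linux/errno.h>

    #ifdef CONFIG_FOO
    int foo_bind(int x);    /* real implementation lives in foo.c */
    #else
    /*
     * Wrong: a plain (non-static, non-inline) definition in a header is
     * emitted in every translation unit that includes it, so the linker
     * reports "multiple definition of `foo_bind'":
     *
     *   int foo_bind(int x) { return -ENODEV; }
     */

    /* Right: 'static inline' keeps the stub local to each translation unit. */
    static inline int foo_bind(int x) { return -ENODEV; }
    #endif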
-
Submitted by Xiongfeng Wang

hulk inclusion
category: bugfix
bugzilla: 175146
CVE: NA

------------------------------------

Disable userswap by default and add a kernel parameter to enable it.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>

Conflicts:
	include/linux/userfaultfd_k.h

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by John Garry

mainline inclusion
from mainline-master
commit e15f2fa9
category: bugfix
bugzilla: 175270
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e15f2fa959f2cce8a05e8e3a596e75d068cd42c5

------------------------------------------------------------------------

Drivers for multi-queue platform devices may also want managed interrupts for handling HW queue completion interrupts, so add support.

The function accepts an affinity descriptor pointer, which covers all IRQs expected for the device.

The function is devm class, as the only current in-tree user will also use the devm method for requesting the interrupts; being devm, it can ensure the ordering of freeing the IRQs and disposing of the mapping.

Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1606905417-183214-5-git-send-email-john.garry@huawei.com
Reviewed-by: Ouyangdelong <ouyangdelong@huawei.com>
Signed-off-by: Nifujia <nifujia1@hisilicon.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
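A sketch of how a platform driver might use the new helper; the call follows the referenced mainline commit, but the handler, names, and vector counts are illustrative:

    #include <linux/platform_device.h>
    #include <linux/interrupt.h>

    static irqreturn_t hwq_isr(int irq, void *data)     /* hypothetical per-queue handler */
    {
            return IRQ_HANDLED;
    }

    /* Sketch: get managed, affinity-spread IRQs for a platform device's HW
     * queues, then request them with devm so teardown ordering is handled. */
    static int hwq_init_irqs(struct platform_device *pdev)
    {
            struct irq_affinity affd = { .pre_vectors = 1 };    /* e.g. one non-queue vector */
            int *irqs, nvec, i, ret;

            nvec = devm_platform_get_irqs_affinity(pdev, &affd, 2, 16, &irqs);
            if (nvec < 0)
                    return nvec;

            for (i = 0; i < nvec; i++) {
                    ret = devm_request_irq(&pdev->dev, irqs[i], hwq_isr, 0, "hwq", pdev);
                    if (ret)
                            return ret;
            }
            return 0;
    }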
-
Submitted by John Garry

mainline inclusion
from mainline-master
commit 9806731d
category: bugfix
bugzilla: 175270
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=9806731db684a475ade1e95d166089b9edbd9da3

------------------------------------------------------------------------

Add a common function to set the fields of an IRQ resource to disabled, which mimics what is done in acpi_dev_irqresource_disabled(), with a view to replacing that function.

Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/1606905417-183214-3-git-send-email-john.garry@huawei.com
Reviewed-by: Ouyangdelong <ouyangdelong@huawei.com>
Signed-off-by: Nifujia <nifujia1@hisilicon.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by John Garry

mainline inclusion
from mainline-master
commit 1d3aec89
category: bugfix
bugzilla: 175270
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1d3aec89286254487df7641c30f1b14ad1d127a5

------------------------------------------------------------------------

Add a function to allow the affinity of an interrupt to be switched to managed, so that interrupts allocated for platform devices may be managed.

This new interface has certain limitations, and attempts to use it in the following circumstances will fail (see the sketch after this list):
- the kernel is configured for generic IRQ reservation mode (config GENERIC_IRQ_RESERVATION_MODE), since that could conflict with managed vs. non-managed interrupt accounting;
- the interrupt is already started, which should not be the case during init;
- the interrupt is already configured as managed, which would mean a double init.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1606905417-183214-2-git-send-email-john.garry@huawei.com
Reviewed-by: Ouyangdelong <ouyangdelong@huawei.com>
Signed-off-by: Nifujia <nifujia1@hisilicon.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
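A minimal sketch of converting a pre-allocated platform IRQ to managed with this interface, assuming it is called before the IRQ is requested or started; the function name and affinity choice are illustrative:

    #include <linux/interrupt.h>
    #include <linux/cpumask.h>

    /* Sketch: mark an already-allocated (but not yet requested/started) IRQ
     * as managed with a given affinity mask. */
    static int make_irq_managed(unsigned int irq, const struct cpumask *mask)
    {
            struct irq_affinity_desc desc = { .is_managed = 1 };

            cpumask_copy(&desc.mask, mask);
            return irq_update_affinity_desc(irq, &desc);
    }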
-
- 01 September 2021, 6 commits
-
-
Submitted by Yicong Yang

mainline inclusion
from mainline-v5.12-rc7
commit 3b4c747c
category: feature
bugzilla: 175140
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3b4c747cd32078172dd238929e38a43cfed83580

---------------------------

Some I2C drivers like DesignWare and HiSilicon will print the bus frequency mode information, so add a public one that everyone can make use of. Add the definition of the I2C frequency mode macros.

Tested-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Reviewed-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Reviewed-by: Jay Fang <f.fangjian@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Yicong Yang

mainline inclusion
from mainline-v5.12-rc7
commit 07740c92
category: feature
bugzilla: 175140
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=07740c92ae57ca21204f1e0c6f59272cdf3190cc

---------------------------

Some I2C controller drivers only unregister the I2C adapter in their .remove() callback, which can be done by simply using a managed variant to add the I2C adapter. So add the managed functions for adding the I2C adapter.

Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Wolfram Sang <wsa@kernel.org>
Reviewed-by: Jay Fang <f.fangjian@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
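A sketch of a probe() that relies on the managed variant, so the driver needs no .remove() just to call i2c_del_adapter(); the adapter setup details are elided and the driver name is hypothetical:

    #include <linux/i2c.h>
    #include <linux/platform_device.h>

    static int my_i2c_probe(struct platform_device *pdev)
    {
            struct i2c_adapter *adap;

            adap = devm_kzalloc(&pdev->dev, sizeof(*adap), GFP_KERNEL);
            if (!adap)
                    return -ENOMEM;

            /* ... fill in adap->owner, adap->algo, adap->dev.parent, adap->name ... */

            /* Managed add: the adapter is deleted automatically on driver detach. */
            return devm_i2c_add_adapter(&pdev->dev, adap);
    }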
-
Submitted by Barry Song

mainline inclusion
from mainline-v5.12-rc2
commit cbe16f35
category: bugfix
bugzilla: 175148
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cbe16f35bee6880becca6f20d2ebf6b457148552

------------------------------------------------------------------------

Many drivers don't want interrupts enabled automatically via request_irq(), so they handle this in one of the two following ways:

(1) irq_set_status_flags(irq, IRQ_NOAUTOEN);
    request_irq(dev, irq...);

(2) request_irq(dev, irq...);
    disable_irq(irq);

The second way is silly and unsafe: in the small time gap between request_irq() and disable_irq(), interrupts can still come in. The first way is safe, though suboptimal.

Add a new IRQF_NO_AUTOEN flag which drivers can hand to request_irq() and request_nmi(). It prevents the automatic enabling of the requested interrupt/NMI in the same safe way as #1 above. With that, the various usage sites of #1 and #2 above can be simplified and corrected.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: dmitry.torokhov@gmail.com
Link: https://lore.kernel.org/r/20210302224916.13980-2-song.bao.hua@hisilicon.com
Reviewed-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
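A minimal sketch of the new pattern with a hypothetical handler; the IRQ stays disabled until the driver explicitly enables it, with no window for spurious interrupts:

    #include <linux/interrupt.h>

    static irqreturn_t my_isr(int irq, void *data)      /* hypothetical handler */
    {
            return IRQ_HANDLED;
    }

    static int my_setup_irq(unsigned int irq, void *priv)
    {
            int ret;

            /* Requested but not auto-enabled. */
            ret = request_irq(irq, my_isr, IRQF_NO_AUTOEN, "my-dev", priv);
            if (ret)
                    return ret;

            /* ... finish initialising the hardware ... */

            enable_irq(irq);        /* enable explicitly once ready */
            return 0;
    }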
-
Submitted by Thomas Gleixner

mainline inclusion
from mainline-v5.13-rc3
commit 4d80d6ca
category: bugfix
bugzilla: 175148
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4d80d6ca5d77fde9880da8466e5b64f250e5bf82

------------------------------------------------------------------------

Perf modules abuse irq_set_affinity_hint() to set the affinity of system PMU interrupts just because irq_set_affinity() was not exported. The fact that irq_set_affinity_hint() actually sets the affinity is an undocumented side effect, and the name clearly says it is only a hint. To clean this up, export the real affinity setter.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210518093117.968251441@linutronix.de
Reviewed-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
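A sketch of the intended cleanup in a system PMU driver: pin the overflow interrupt with the real setter instead of relying on the side effect of irq_set_affinity_hint(); the function name is illustrative:

    #include <linux/interrupt.h>
    #include <linux/cpumask.h>

    /* Sketch: bind a PMU overflow IRQ to the CPU that owns the PMU instance. */
    static int pmu_bind_irq(unsigned int irq, unsigned int cpu)
    {
            return irq_set_affinity(irq, cpumask_of(cpu));
    }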
-
Submitted by Shaokun Zhang

mainline inclusion
from mainline-v5.12-rc3
commit a0ab25cd
category: feature
bugzilla: 175148
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a0ab25cd82eeb68bfa19a4d93a097521af5011b8

------------------------------------------------------------------------

On the HiSilicon Hip09 platform, there is a PA (Protocol Adapter) module on each chip SICL (Super I/O Cluster) which incorporates three Hydra interfaces and facilitates cache coherency between the dies on the chip. The PA uncore PMU model is the same as that of the other Hip09 PMU modules, and many PMU events are supported. Let's support the PMU driver using the HiSilicon uncore PMU framework.

The PA PMU supports the following filter functions:
* tracetag_en: allows the user to count events according to tt_req or tt_core set in the L3C PMU. It's the same as for other PMUs.
* srcid_cmd & srcid_msk: allow the user to filter statistics that come from a specific CCL/ICL by configuring the source ID.
* tgtid_cmd & tgtid_msk: similar to srcid_cmd & srcid_msk. Both are used to check where the data comes from or goes to.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-9-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Shaokun Zhang

mainline inclusion
from mainline-v5.12-rc3
commit 3bf30882
category: feature
bugzilla: 175148
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3bf30882c3c7b6e376d9d6d04082c9aa2d2ac30a

------------------------------------------------------------------------

HiSilicon's Hip09 is composed of multiple dies that can be connected by the SLLC module (Skyros Link Layer Controller). It has separate PMU registers which the driver can program freely, and an interrupt is supported to handle counter overflow. Let's support its driver under the framework of the HiSilicon uncore PMU driver.

The SLLC PMU supports the following filter functions:
* tracetag_en: allows the user to count data according to tt_req or tt_core set in the L3C PMU.
* srcid_cmd & srcid_msk: allow the user to filter statistics that come from a specific CCL/ICL by configuring the source ID.
* tgtid_hi & tgtid_lo: also support event statistics for operations that go to a specific CCL/ICL, by configuring a target ID or target ID range. It is the same as the source ID, with 11-bit width in the SoC.

More introduction is added in the documentation: Documentation/admin-guide/perf/hisi-pmu.rst

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: John Garry <john.garry@huawei.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Co-developed-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Qi Liu <liuqi115@huawei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1615186237-22263-8-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 31 August 2021, 3 commits
-
-
Submitted by Jingxian He

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

We record the pid of dump tasks in the reserved memory and reserve the pids before the init task starts. In the recovery process, free the reserved pids and reallocate them for use.

Signed-off-by: Jingxian He <hejingxian@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jingxian He

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

We can use the checkpoint and restore in userspace (criu) method to dump and restore tasks when updating the kernel. Currently, criu needs to dump all memory data of tasks to files. When the memory size is very large (larger than 1G), dumping the data takes a long time (more than 1 min).

By pinning the memory data of tasks and collecting the corresponding physical page mapping info in the checkpoint process, we can remap the physical pages to the restored tasks after upgrading the kernel. This pin-memory method can restore the task data within one second.

The pin memory area info is saved in the reserved memblock, which remains usable across the kernel update process.

The pin memory driver provides the following ioctl commands for criu:
1) SET_PIN_MEM_AREA: Set a pin memory area, which can be remapped to the restored task.
2) CLEAR_PIN_MEM_AREA: Clear the pin memory area info, which enables the user to reset the pin data.
3) REMAP_PIN_MEM_AREA: Remap the pages of the pin memory to the restored task.

Signed-off-by: Jingxian He <hejingxian@huawei.com>
Reviewed-by: Chen Wandun <chenwandun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Sang Yan

hulk inclusion
category: feature
bugzilla: 48159
CVE: N/A

------------------------------

In normal kexec, relocating the kernel may cost 5 ~ 10 seconds to copy all segments from vmalloced memory to kernel boot memory, because the MMU is disabled.

We introduce quick kexec to save the time of copying memory as above, just like kdump (kexec on crash), by using reserved memory "Quick Kexec".

To enable it, we should reserve memory and set up quick_kexec_res. The quick kimage is constructed in the same way as the crash kernel one; then all segments of the kimage are simply copied to the reserved memory.

We also add support for this in the kexec_load syscall, using the KEXEC_QUICK flag.

Signed-off-by: Sang Yan <sangyan@huawei.com>
Reviewed-by: Kuohai Xu <xukuohai@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 28 July 2021, 5 commits
-
-
Submitted by Peter Zijlstra (Intel)

mainline inclusion
from mainline-5.12-rc1
commit 40607ee9
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=40607ee97e4eec5655cc0f76a720bdc4c63a6434

---------------------------

Provide a static call to control IRQ preemption (called in CONFIG_PREEMPT) so that we can override its behaviour when preempt= is overridden.

Since the default behaviour is full preemption, its call is initialized to provide IRQ preemption when preempt= isn't passed.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-8-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Peter Zijlstra (Intel)

mainline inclusion
from mainline-5.12-rc1
commit b965f1dd
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b965f1ddb47daa5b8b2e2bc9c921431236830367

---------------------------

Provide static calls to control cond_resched() (called in !CONFIG_PREEMPT) and might_resched() (called in CONFIG_PREEMPT_VOLUNTARY) so that we can override their behaviour when preempt= is overridden.

Since the default behaviour is full preemption, both their calls are ignored when preempt= isn't passed.

[fweisbec: branch might_resched() directly to __cond_resched(), only define static calls when PREEMPT_DYNAMIC]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-6-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>

Conflicts:
	include/linux/kernel.h

Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Frederic Weisbecker

mainline inclusion
from mainline-5.12-rc1
commit 29fd0194
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=29fd01944b7273bb630c649a2104b7f9e4ef3fa6

---------------------------

DECLARE_STATIC_CALL() must pass the original function targeted for a given static call. But DEFINE_STATIC_CALL() may want to initialize it as off. In this case we can't pass NULL (for functions without a return value) or __static_call_return0 (for functions returning a value) directly to DEFINE_STATIC_CALL(), as that may trigger a static call redeclaration with a different function prototype. Type casts can't work around that either, as they don't get along with typeof().

The proper way to do that for functions that don't return a value is to use DEFINE_STATIC_CALL_NULL(). But functions returning an actual value don't have an equivalent yet.

Provide DEFINE_STATIC_CALL_RET0() to solve this situation.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-3-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
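A small sketch of the intended usage, with hypothetical names: the static call starts "off" (returning 0) and can later be pointed at the real target:

    #include <linux/static_call.h>

    static int do_throttle(int level)   /* hypothetical real target */
    {
            return level > 0;
    }

    /* Initialised "off": callers get 0 until a real target is patched in. */
    DEFINE_STATIC_CALL_RET0(throttle_hook, do_throttle);

    static int maybe_throttle(int level)
    {
            return static_call(throttle_hook)(level);
    }

    static void throttle_enable(void)
    {
            /* Switch the call site(s) to the real implementation. */
            static_call_update(throttle_hook, do_throttle);
    }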
-
Submitted by Peter Zijlstra

mainline inclusion
from mainline-5.12-rc1
commit 3f2a8fc4
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I410UT
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3f2a8fc4b15de18644e8a80a09edda168676e22c

---------------------------

Provide a stub function that returns 0 and wire up the static call site patching to replace the CALL with a single 5-byte instruction that clears %RAX, the return value register.

The function can be cast to any function pointer type that has a single %RAX return (including pointers). Also provide a version that returns an int for convenience. We are clearing the entire %RAX register in any case, whether the return value is 32 or 64 bits, since %RAX is always a scratch register anyway.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210118141223.123667-2-frederic@kernel.org
Signed-off-by: Ma Junhai <majunhai2@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Wang Yinfeng

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I41AUQ
CVE: NA

-------------------------------------------------

Phytium S2500 adjusts the GIC's implementation to support multi-socket system designs. This patch adds the driver for this new implementation while keeping the kernel binary compatible with other ARM servers in the ecosystem.

Signed-off-by: Wang Yinfeng <wangyinfeng@phytium.com.cn>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 23 July 2021, 1 commit
-
-
Submitted by LeoLiu-oc

zhaoxin inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I40QDN
CVE: NA

----------------------------------------------------------------

Some ACPI devices need to issue DMA requests to access the reserved memory area. The BIOS uses the device scope type ACPI_NAMESPACE_DEVICE in RMRR to report these ACPI devices. This patch adds support for detecting ACPI devices in RMRR, and in order to distinguish them from PCI devices, some interface functions are modified.

Signed-off-by: LeoLiu-oc <LeoLiu-oc@zhaoxin.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 22 July 2021, 2 commits
-
-
Submitted by Stefan Berger

mainline inclusion
from mainline-v5.13-rc1
commit 2a8e6154
category: feature
bugzilla: 173981
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2a8e615436de4cd59a7b0af43590ede899906bdf

----------------------------------------------------------------------

Add OIDs for ECDSA with SHA224/256/384/512.

Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Mingqiang Ling <lingmingqiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Biggers

mainline inclusion
from mainline-v5.11-rc1
commit a24d22b2
category: feature
bugzilla: 173981
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a24d22b225ce158651378869a6b88105c4bdb887

----------------------------------------------------------------------

Currently <crypto/sha.h> contains declarations for both SHA-1 and SHA-2, and <crypto/sha3.h> contains declarations for SHA-3. This organization is inconsistent, but more importantly SHA-1 is no longer considered to be cryptographically secure. So to the extent possible, SHA-1 shouldn't be grouped together with any of the other SHA versions, and usage of it should be phased out.

Therefore, split <crypto/sha.h> into two headers, <crypto/sha1.h> and <crypto/sha2.h>, and make everyone explicitly specify whether they want the declarations for SHA-1, SHA-2, or both. This avoids making the SHA-1 declarations visible to files that don't want anything to do with SHA-1. It also prepares for potentially moving sha1.h into a new insecure/ or dangerous/ directory.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Mingqiang Ling <lingmingqiang@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
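For a user of the SHA-2 helpers, the switch is just an include change; a minimal before/after illustration:

    /* Before the split: one header exposed both SHA-1 and SHA-2 declarations. */
    /* #include <crypto/sha.h> */

    /* After the split: include only the family actually used. */
    #include <crypto/sha2.h>        /* SHA-224/256/384/512 */
    /* #include <crypto/sha1.h> */  /* only if SHA-1 really is needed */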
-
- 19 July 2021, 5 commits
-
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

Up to now, when the type was UNMANAGED, we used to allocate IOVA pages within a reserved IOVA MSI range.

If both the host and the guest are exposed with SMMUs, each would allocate an IOVA. The guest allocates an IOVA (gIOVA) to map onto the guest MSI doorbell (gDB). The host allocates another IOVA (hIOVA) to map onto the physical doorbell (hDB). So we end up with two unrelated mappings, at S1 and S2:

           S1              S2
    gIOVA    ->    gDB
                   hIOVA    ->    hDB

The PCI device would be programmed with hIOVA. No stage 1 mapping would exist, causing the MSIs to fault.

iommu_dma_bind_guest_msi() allows gIOVA/gDB to be passed to the host so that gIOVA can be used by the host instead of re-allocating a new hIOVA:

           S1              S2
    gIOVA    ->    gDB     ->     hDB

This time, the PCI device can be programmed with the gIOVA MSI doorbell, which is correctly mapped through both stages.

Nested mode is not compatible with HW MSI regions, as in that case gDB and hDB should have a 1-1 mapping. This check will be done when attaching each device to the IOMMU domain.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

On ARM, MSIs are translated by the SMMU. An IOVA is allocated for each MSI doorbell. If both the host and the guest are exposed with SMMUs, we end up with two different IOVAs, one allocated by each. The guest allocates an IOVA (gIOVA) to map onto the guest MSI doorbell (gDB). The host allocates another IOVA (hIOVA) to map onto the physical doorbell (hDB).

So we end up with two untied mappings:

           S1              S2
    gIOVA    ->    gDB
                   hIOVA    ->    hDB

Currently the PCI device is programmed by the host with hIOVA as the MSI doorbell, so this does not work.

This patch introduces an API to pass gIOVA/gDB to the host so that gIOVA can be reused by the host instead of re-allocating a new IOVA. The goal is to create the following nested mapping:

           S1              S2
    gIOVA    ->    gDB     ->     hDB

and program the PCI device with the gIOVA MSI doorbell.

In case we have several devices attached to this nested domain (devices belonging to the same group), they cannot be isolated on the guest side either, so they should also end up in the same domain on the guest side. We will enforce that all the devices attached to the host IOMMU domain use the same physical doorbell, and similarly a single virtual doorbell mapping gets registered (a single virtual doorbell is used on the guest as well).

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Eric Auger

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I401IF
CVE: NA

------------------------------

In the virtualization use case, when a guest is assigned a PCI host device protected by a virtual IOMMU on the guest, the physical IOMMU must be programmed to be consistent with the guest mappings. If the physical IOMMU supports two translation stages, it makes sense to program guest mappings onto the first stage/level (ARM/Intel terminology) while the host owns stage/level 2.

In that case, it is mandated to trap on guest configuration settings and pass those to the physical IOMMU driver.

This patch adds a new API to the IOMMU subsystem that allows setting/unsetting the PASID table information. A generic iommu_pasid_table_config struct is introduced in a new iommu.h uapi header. This is going to be used by the VFIO user API.

Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Signed-off-by: Liu, Yi L <yi.l.liu@linux.intel.com>
Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Guo Fan

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I40AXF
CVE: NA

--------------------------------------

This patch modifies userfaultfd to support userswap. To check whether the pages are dirty since the last swap-in, we make them clean when we swap in the pages. The userspace may swap in a large area where part of the pages are not swapped out; we need to skip those pages that are not swapped out.

Signed-off-by: Guo Fan <guofan5@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: tong tiangen <tongtiangen@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Guo Fan

hulk inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I40AXF
CVE: NA

--------------------------------------

To make sure no other userspace threads access the memory region we are swapping out, we need to unmap the memory region, map it to a new address, and use the new address to perform the swap-out. We add a new flag 'MAP_REPLACE' for mmap() to unmap the pages of the input parameter 'VA' and remap them to a new tmpVA.

Signed-off-by: Guo Fan <guofan5@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: tong tiangen <tongtiangen@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 17 July 2021, 1 commit
-
-
Submitted by Xin Long

mainline inclusion
from mainline-v5.12-rc1-dontuse
commit fa821170
category: feature
bugzilla: 173966
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fa82117010430aff2ce86400f7328f55a31b48a6

----------------------------------------------------------------------

This patch defines an inline function skb_csum_is_sctp() and replaces all the places that check whether an skb is an SCTP CSUM skb. This function will be used later in many networking drivers in the following patches.

Suggested-by: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
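The helper is essentially a one-line test on the skb; a sketch of it and of a typical driver-side check follows (local names are illustrative, the real helper is the one added by this commit):

    #include <linux/skbuff.h>

    /* Sketch: an skb whose checksum is not an Internet checksum is, in
     * practice, an SCTP CRC32c skb. */
    static inline bool skb_csum_is_sctp_sketch(struct sk_buff *skb)
    {
            return skb->csum_not_inet;
    }

    /* Typical driver-side use: decide whether CRC32c offload is required. */
    static bool needs_crc32c_offload(struct sk_buff *skb)
    {
            return skb->ip_summed == CHECKSUM_PARTIAL &&
                   skb_csum_is_sctp_sketch(skb);
    }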
-
- 16 July 2021, 11 commits
-
-
Submitted by Alexander Lobakin

mainline inclusion
from mainline-v5.12-rc1-dontuse
commit bc38f30f
category: feature
bugzilla: 173966
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bc38f30f8dbce0afb8af05d917bee084b1329418

----------------------------------------------------------------------

A bunch of drivers test the page before reusing/recycling it for two common conditions:
- the page was allocated under memory pressure (a pfmemalloc page);
- the page was allocated at a distant memory node (to exclude slowdowns).

Introduce a new common inline for doing this, with likely() already folded inside to make driver code a bit simpler.

Suggested-by: David Rientjes <rientjes@google.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
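A sketch of the common inline described above, combining the two checks an Rx recycle path cares about; the local name is illustrative, not the mainline helper itself:

    #include <linux/mm.h>
    #include <linux/topology.h>

    /* Sketch: a page is worth recycling only if it is local to this node and
     * was not handed out from the pfmemalloc emergency reserves. */
    static inline bool page_is_reusable_sketch(const struct page *page)
    {
            return likely(page_to_nid(page) == numa_mem_id() &&
                          !page_is_pfmemalloc(page));
    }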
-
Submitted by Alexander Lobakin

mainline inclusion
from mainline-v5.12-rc1-dontuse
commit 48f971c9
category: feature
bugzilla: 173966
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=48f971c9c80a728646fc03367a28df747f20d0f4

----------------------------------------------------------------------

The function doesn't write anything to the page struct itself, so this argument can be const.

Misc: align the second argument to the brace while at it.

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Alexander Lobakin

mainline inclusion
from mainline-v5.12-rc1-dontuse
commit 1d7bab6a
category: feature
bugzilla: 173966
CVE: NA
Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1d7bab6a94458e959f3f55788fd50ddc7d97403b

----------------------------------------------------------------------

The function only tests page->index, so its argument should be const.

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

This realizes switch_dirty_log. In order to get a finer dirty granule, it invokes arm_smmu_split_block() when dirty log tracking starts, and invokes arm_smmu_merge_page() to recover the block mapping when dirty log tracking stops.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

After the dirty log is retrieved, the user should clear the dirty log to re-enable dirty log tracking for these dirtied pages. This clears the dirty state (as we just set the DBM bit for stage 1 mapping, we should set the AP[2] bit) of those leaf TTDs that are specified by the user-provided bitmap.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

During dirty log tracking, the user will try to retrieve the dirty log from the IOMMU if it supports hardware dirty log. Scan the leaf TTDs and treat a TTD as dirty if it is writable. As we just set the DBM bit for stage 1 mapping, check whether its AP[2] is not set.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

If block (largepage) mappings are split when dirty log tracking starts, then when dirty log tracking stops we need to recover them for better DMA performance. This recovers the block mappings and unmaps the span of page mappings. The BBML1 or BBML2 feature is required.

Merging pages is designed to be used only by dirty log tracking, which does not work concurrently with other pgtable ops that access the underlying page table, so no race condition exists.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

A block (largepage) mapping is not a proper granule for dirty log tracking. Take an extreme example: if DMA writes one byte, under a 1G mapping the dirty amount reported is 1G, but under a 4K mapping the dirty amount is just 4K.

This splits a block descriptor into a span of page descriptors. The BBML1 or BBML2 feature is required.

Splitting blocks is designed to be used only by dirty log tracking, which does not work concurrently with other pgtable ops that access the underlying page table, so no race condition exists.

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Kunkun Jiang

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

These features are essential to support dirty log tracking for SMMU with io-pgtable mapping.

The dirty state information is encoded using the access permission bits AP[2] (stage 1) or S2AP[1] (stage 2) in conjunction with the DBM (Dirty Bit Modifier) bit, where DBM means writable and AP[2]/S2AP[1] means dirty. When ARM_HD is present, we set the DBM bit for stage 1 mappings.

As SMMU nested mode is not upstreamed for now, we just aim to support dirty log tracking for stage 1 with io-pgtable mapping (which means SVA is not supported).

Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Keqian Zhu

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I3ZUKK
CVE: NA

------------------------------

Some types of IOMMU are capable of tracking DMA dirty log, such as ARM SMMU with HTTU or Intel IOMMU with SLADE. This introduces the dirty log tracking framework in the IOMMU base layer.

Four new essential interfaces are added, and we maintain the dirty log tracking status in iommu_domain (see the usage sketch after this list):
1. iommu_support_dirty_log: check whether the domain supports dirty log tracking
2. iommu_switch_dirty_log: perform actions to start/stop dirty log tracking
3. iommu_sync_dirty_log: sync dirty log from the IOMMU into a dirty bitmap
4. iommu_clear_dirty_log: clear dirty log of the IOMMU by a mask bitmap

Note: Don't call these interfaces concurrently with other ops that access the underlying page table.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
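A hypothetical calling sequence for the four interfaces named above, as a VFIO-style user might invoke them during live migration. The real prototypes are defined by this out-of-tree patch set, so the argument lists below are assumptions for illustration only:

    #include <linux/iommu.h>
    #include <linux/errno.h>

    /* Hypothetical sketch: start tracking, let DMA run, sync and clear
     * periodically, then stop tracking. Argument lists are assumed. */
    static int track_dma_dirty_pages(struct iommu_domain *domain,
                                     unsigned long iova, size_t size,
                                     unsigned long *bitmap,
                                     unsigned long pgshift)
    {
            int ret;

            if (!iommu_support_dirty_log(domain))
                    return -EOPNOTSUPP;

            /* Start tracking: split block mappings, enable hardware dirty bits. */
            ret = iommu_switch_dirty_log(domain, true, iova, size, 0);
            if (ret)
                    return ret;

            /* ... guest/DMA runs; then, on each migration iteration ... */

            /* Pull the hardware dirty state into the caller's bitmap. */
            ret = iommu_sync_dirty_log(domain, iova, size, bitmap, iova, pgshift);
            if (!ret)
                    /* Re-arm tracking for the pages that were just reported. */
                    ret = iommu_clear_dirty_log(domain, iova, size, bitmap,
                                                iova, pgshift);
            if (ret)
                    return ret;

            /* Stop tracking: merge back to block mappings. */
            return iommu_switch_dirty_log(domain, false, iova, size, 0);
    }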
-
Submitted by Shenming Lu

virt inclusion
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I40TDK
CVE: NA

---------------------------

In order to reduce the impact of the VPT parsing happening on the GIC, we can split the vcpu residency into two phases:

- programming GICR_VPENDBASER: this still happens in vcpu_load()
- checking for the VPT parsing to be complete: this can happen on vcpu entry (in kvm_vgic_flush_hwstate())

This allows the GIC and the CPU to work in parallel, removing some of the entry overhead.

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Shenming Lu <lushenming@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201128141857.983-3-lushenming@huawei.com
Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-