- 01 June 2018, 9 commits
-
-
Committed by Marc Zyngier
In order to allow userspace to be mitigated on demand, let's introduce a new thread flag that prevents the mitigation from being turned off when exiting to userspace, and doesn't turn it on on entry into the kernel (with the assumption that the mitigation is always enabled in the kernel itself). This will be used by a prctl interface introduced in a later patch. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier
On a system where firmware can dynamically change the state of the mitigation, the CPU will always come up with the mitigation enabled, including when coming back from suspend. If the user has requested "no mitigation" via a command line option, let's enforce it by calling into the firmware again to disable it. Similarly, for a resume from hibernate, the mitigation could have been disabled by the boot kernel. Let's ensure that it is set back on in that case. Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier
In order to avoid checking arm64_ssbd_callback_required on each kernel entry/exit even if no mitigation is required, let's add yet another alternative that by default jumps over the mitigation, and that gets nop'ed out if we're doing dynamic mitigation. Think of it as a poor man's static key... Reviewed-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier
We're about to need the mitigation state in various parts of the kernel in order to do the right thing for userspace and guests. Let's expose an accessor that will let other subsystems know about the state. Reviewed-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
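A minimal standalone sketch of what such an accessor can look like, with state names modeled on the arm64 ones; treat the exact values and the demo around them as illustrative rather than the kernel's actual header:

```c
/* Minimal sketch of an SSBD state accessor. The enum mirrors the arm64
 * states (unknown / force-disabled / dynamic "kernel" mode / force-enabled /
 * not required), but this standalone model is illustrative, not the
 * kernel's header.
 */
#include <stdio.h>

enum ssbd_state {
    ARM64_SSBD_UNKNOWN = -1,
    ARM64_SSBD_FORCE_DISABLE,   /* user asked for no mitigation */
    ARM64_SSBD_KERNEL,          /* dynamic: toggled on entry/exit */
    ARM64_SSBD_FORCE_ENABLE,    /* mitigation always on */
    ARM64_SSBD_MITIGATED,       /* mitigation not required on this CPU */
};

static enum ssbd_state ssbd_state = ARM64_SSBD_KERNEL;

/* The accessor other subsystems (KVM, the prctl code) would call. */
static inline enum ssbd_state arm64_get_ssbd_state(void)
{
    return ssbd_state;
}

int main(void)
{
    if (arm64_get_ssbd_state() == ARM64_SSBD_KERNEL)
        printf("dynamic mitigation: toggle around guest entry too\n");
    return 0;
}
```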
-
Committed by Marc Zyngier
On a system where the firmware implements ARCH_WORKAROUND_2, it may be useful to either permanently enable or disable the workaround for cases where the user decides that they'd rather not get a trap overhead, and keep the mitigation permanently on or off instead of switching it on exception entry/exit. In any case, default to the mitigation being enabled. Reviewed-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
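As a rough illustration, here is a standalone parser for an ssbd=-style boot option with the three policies described above (force-on, force-off, kernel); the helper and the demo are assumptions for the sketch, not the kernel's early_param() handler:

```c
/* Sketch of parsing an "ssbd=" style boot parameter into the three policies
 * described above. The value strings (force-on / force-off / kernel) follow
 * the arm64 option; the parsing helper itself is a standalone illustration.
 */
#include <stdio.h>
#include <string.h>

enum ssbd_policy { SSBD_FORCE_ON, SSBD_FORCE_OFF, SSBD_KERNEL_DYNAMIC };

static int parse_ssbd_param(const char *str, enum ssbd_policy *out)
{
    if (!strcmp(str, "force-on"))
        *out = SSBD_FORCE_ON;          /* mitigation permanently enabled */
    else if (!strcmp(str, "force-off"))
        *out = SSBD_FORCE_OFF;         /* mitigation permanently disabled */
    else if (!strcmp(str, "kernel"))
        *out = SSBD_KERNEL_DYNAMIC;    /* toggled on kernel entry/exit */
    else
        return -1;
    return 0;
}

int main(void)
{
    enum ssbd_policy p;

    if (parse_ssbd_param("force-on", &p) == 0)
        printf("selected policy = %d (enabled is the default)\n", p);
    return 0;
}
```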
-
Committed by Marc Zyngier
As for Spectre variant-2, we rely on SMCCC 1.1 to provide the discovery mechanism for detecting the SSBD mitigation. A new capability is also allocated for that purpose, along with a config option. Reviewed-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier
In a heterogeneous system, we can end up with both affected and unaffected CPUs. Let's check their status before calling into the firmware. Reviewed-by: Julien Grall <julien.grall@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier
In order for the kernel to protect itself, let's call the SSBD mitigation implemented by the higher exception level (either hypervisor or firmware) on each transition between userspace and kernel. We must take the PSCI conduit into account in order to target the right exception level, hence the introduction of a runtime patching callback. Reviewed-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Julien Grall <julien.grall@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
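The sketch below models why the conduit matters: the same workaround call must be issued with HVC when firmware is reached through a hypervisor and with SMC when it is reached directly at EL3. The enum and helper names are illustrative; the kernel patches the actual instruction into the entry/exit path at boot.

```c
/* Illustration of why the PSCI conduit matters: the ARCH_WORKAROUND_2 call
 * has to use HVC when firmware is reached via a hypervisor, and SMC when it
 * is reached directly. This standalone model only selects the mnemonic; the
 * real code patches the instruction into the entry/exit path at boot.
 */
#include <stdio.h>

enum psci_conduit { PSCI_CONDUIT_NONE, PSCI_CONDUIT_SMC, PSCI_CONDUIT_HVC };

static const char *wa2_call_insn(enum psci_conduit conduit)
{
    switch (conduit) {
    case PSCI_CONDUIT_HVC:
        return "hvc #0";    /* firmware reached via the EL2 hypervisor */
    case PSCI_CONDUIT_SMC:
        return "smc #0";    /* firmware reached directly at EL3 */
    default:
        return NULL;        /* no conduit: no mitigation call possible */
    }
}

int main(void)
{
    printf("patched instruction: %s\n", wa2_call_insn(PSCI_CONDUIT_HVC));
    return 0;
}
```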
-
Committed by Marc Zyngier
We've so far used the PSCI return codes for SMCCC because they were extremely similar. But with the new ARM DEN 0070A specification, "NOT_REQUIRED" (-2) is clashing with PSCI's "PSCI_RET_INVALID_PARAMS". Let's bite the bullet and add SMCCC-specific return codes. Users can be repainted as and when required. Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
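For reference, the return codes in question, rendered as a standalone C enum; the values follow the SMCCC definitions, and the demo around them is only illustrative:

```c
/* The SMCCC return codes as a standalone enum. Note that -2 means
 * NOT_REQUIRED here, whereas PSCI uses -2 for INVALID_PARAMS: that clash is
 * what motivated the split. The demo below is illustrative only.
 */
#include <stdio.h>

enum {
    SMCCC_RET_SUCCESS           =  0,
    SMCCC_RET_NOT_SUPPORTED     = -1,
    SMCCC_RET_NOT_REQUIRED      = -2,   /* e.g. CPU not affected by SSB */
    SMCCC_RET_INVALID_PARAMETER = -3,
};

int main(void)
{
    long ret = SMCCC_RET_NOT_REQUIRED;  /* pretend firmware answer */

    if (ret == SMCCC_RET_NOT_REQUIRED)
        printf("firmware says no mitigation needed on this CPU\n");
    return 0;
}
```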
-
- 23 May 2018, 3 commits
-
-
Committed by Mark Rutland
In do_page_fault(), we handle some kernel faults early, and simply die() with a message. For faults handled later, we dump the faulting address, decode the ESR, walk the page tables, and perform a number of steps to ensure that this data is reported. Let's unify the handling of fatal kernel faults with a new die_kernel_fault() helper, handling all of these details. This is largely the same as the existing logic in __do_kernel_fault(), except that addresses are consistently padded to 16 hex characters, as would be expected for a 64-bit address. The messages currently logged in do_page_fault are adjusted to fit into the die_kernel_fault() message template. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
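A quick standalone illustration of the 16-hex-character padding: printing a 64-bit address with %016llx keeps the field width constant, so the log lines line up. The message layout is only a stand-in for the real die_kernel_fault() output.

```c
/* Illustration of 16-hex-character address padding: a 64-bit fault address
 * printed with %016llx always occupies the full width, so log lines align
 * regardless of the value. The helper name comes from the patch; the exact
 * message text here is a stand-in.
 */
#include <stdio.h>

static void die_kernel_fault(const char *msg, unsigned long long addr)
{
    printf("Unable to handle kernel %s at virtual address %016llx\n",
           msg, addr);
}

int main(void)
{
    die_kernel_fault("NULL pointer dereference", 0x10ULL);
    die_kernel_fault("paging request", 0xffff000012345678ULL);
    return 0;
}
```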
-
Committed by Mark Rutland
The naming of is_permission_fault() makes it sound like it should return true for permission faults from EL0, but by design, it only does so for faults from EL1. Let's make this clear by adding el1 to the name, as we do for is_el1_instruction_abort(). Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon
Now that we're seeing CPUs shipping with LSE atomics, default them to 'on' in Kconfig. CPUs without the instructions will continue to use LDXR/STXR-based sequences, but they will be placed out-of-line by the compiler. Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 18 May 2018, 13 commits
-
-
Committed by Dave Martin
Writes to ZCR_EL1 are self-synchronising, and so may be expensive in typical implementations. This patch adopts the approach used for costly system register writes elsewhere in the kernel: the system register write is suppressed if it would not change the stored value. Since the common case will be that of switching between tasks that use the same vector length as one another, prediction hit rates on the conditional branch should be reasonably good, with lower expected amortised cost than the unconditional execution of a heavyweight self-synchronising instruction. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
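The pattern reduces to "compare first, write only on change". A standalone model, where the expensive self-synchronising register write is replaced by a stand-in function:

```c
/* The "skip the write if nothing changes" pattern in standalone form.
 * expensive_sysreg_write() stands in for a self-synchronising ZCR_EL1 write;
 * in the common case of switching between tasks with the same vector length,
 * only the cheap comparison runs.
 */
#include <stdio.h>

static unsigned int current_zcr;     /* models the live register value */

static void expensive_sysreg_write(unsigned int val)
{
    current_zcr = val;
    printf("  (costly self-synchronising write: ZCR <- %u)\n", val);
}

static void write_zcr_if_changed(unsigned int new_val)
{
    if (current_zcr != new_val)      /* well-predicted in the common case */
        expensive_sysreg_write(new_val);
}

int main(void)
{
    write_zcr_if_changed(3);         /* first task: write happens */
    write_zcr_if_changed(3);         /* same vector length: write suppressed */
    write_zcr_if_changed(7);         /* different vector length: write happens */
    return 0;
}
```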
-
Committed by Jeremy Linton
Now that we have an accurate view of the physical topology we need to represent it correctly to the scheduler. Generally MC should equal the LLC in the system, but there are a number of special cases that need to be dealt with. In the case of NUMA in socket, we need to assure that the sched domain we build for the MC layer isn't larger than the DIE above it. Similarly for LLC's that might exist in cross socket interconnect or directory hardware we need to assure that MC is shrunk to the socket or NUMA node. This patch builds a sibling mask for the LLC, and then picks the smallest of LLC, socket siblings, or NUMA node siblings, which gives us the behavior described above. This is ever so slightly different than the similar alternative where we look for a cache layer less than or equal to the socket/NUMA siblings. The logic to pick the MC layer affects all arm64 machines, but only changes the behavior for DT/MPIDR systems if the NUMA domain is smaller than the core siblings (generally set to the cluster). Potentially this fixes a possible bug in DT systems, but really it only affects ACPI systems where the core siblings is correctly set to the socket siblings. Thus all currently available ACPI systems should have MC equal to LLC, including the NUMA in socket machines where the LLC is partitioned between the NUMA nodes. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Morten Rasmussen <morten.rasmussen@arm.com> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
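A standalone sketch of the "pick the smallest sibling set" step, using 64-bit masks in place of cpumasks; the mask values are made up for the example, but the comparison by weight is the idea the patch implements:

```c
/* Standalone sketch of "MC = smallest of LLC, socket and NUMA node siblings",
 * using 64-bit masks in place of cpumasks. The real code compares cpumask
 * weights; the selection logic is the same. Mask values are invented.
 */
#include <stdint.h>
#include <stdio.h>

static int weight(uint64_t mask)
{
    return __builtin_popcountll(mask);
}

static uint64_t pick_mc_siblings(uint64_t llc, uint64_t socket, uint64_t numa)
{
    uint64_t mc = llc;

    /* Shrink MC so it never exceeds the socket or the NUMA node. */
    if (weight(socket) < weight(mc))
        mc = socket;
    if (weight(numa) < weight(mc))
        mc = numa;
    return mc;
}

int main(void)
{
    uint64_t llc    = 0x00000000ffffffffULL;  /* LLC shared by CPUs 0-31   */
    uint64_t socket = 0x00000000ffffffffULL;  /* CPUs 0-31 in the socket   */
    uint64_t numa   = 0x000000000000ffffULL;  /* NUMA-in-socket: CPUs 0-15 */

    printf("MC siblings: %#llx\n",
           (unsigned long long)pick_mc_siblings(llc, socket, numa));
    return 0;
}
```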
-
Committed by Jeremy Linton
Add ACPI_SIG_PPTT to the table so initrds can override the system topology. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Geoffrey Blake <geoffrey.blake@arm.com> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jeremy Linton
Propagate the topology information from the PPTT tree to the cpu_topology array. We can get the thread id and core_id by assuming certain levels of the PPTT tree correspond to those concepts. The package_id is flagged in the tree and can be found by calling find_acpi_cpu_topology_package() which terminates its search when it finds an ACPI node flagged as the physical package. If the tree doesn't contain enough levels to represent all of the requested levels then the root node will be returned for all subsequent levels. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Morten Rasmussen <morten.rasmussen@arm.com> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jeremy Linton
The cluster concept isn't architecturally defined for arm64. Let's match the name of the arm64 topology field to the kernel macro that uses it. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Morten Rasmussen <morten.rasmussen@arm.com> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jeremy Linton
The /sys cache entries should support ACPI/PPTT generated cache topology information. For arm64, if ACPI is enabled, determine the max number of cache levels and populate them using the PPTT table if one is available. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jeremy Linton
Call ACPI cache parsing routines from base cacheinfo code if ACPI is enabled. Also stub out cache_setup_acpi and acpi_find_last_cache_level so that individual architectures can enable ACPI topology parsing. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
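One common way to provide such stubs is shown below: weak default definitions that an architecture or the ACPI code can override. The function names come from the patch description; whether the kernel uses weak symbols or #ifdef'd inline stubs here, the effect for callers is the same, and the return values are assumptions for the sketch.

```c
/* One way to stub out optional hooks so common code always links: weak
 * default definitions that an architecture (or the ACPI PPTT code) can
 * override with real ones. Function names follow the patch description;
 * the default return values are assumptions for this sketch.
 */
#include <errno.h>
#include <stdio.h>

/* Default stubs: report "no ACPI cache topology available". */
__attribute__((weak)) int cache_setup_acpi(unsigned int cpu)
{
    (void)cpu;
    return -ENOENT;
}

__attribute__((weak)) int acpi_find_last_cache_level(unsigned int cpu)
{
    (void)cpu;
    return 0;           /* no cache levels known via ACPI */
}

int main(void)
{
    printf("ACPI cache levels for CPU0: %d\n", acpi_find_last_cache_level(0));
    printf("cache_setup_acpi(0) -> %d\n", cache_setup_acpi(0));
    return 0;
}
```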
-
Committed by Jeremy Linton
Now that we have a PPTT parser, in preparation for its use on arm64, let's build it. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jeremy Linton
ACPI 6.2 adds a new table, which describes how processing units are related to each other in a tree-like fashion. Caches are also sprinkled throughout the tree and describe the properties of the caches in relation to other caches and processing units. Add the code to parse the cache hierarchy and report the total number of levels of cache for a given core using acpi_find_last_cache_level() as well as fill out the individual cores' cache information with cache_setup_acpi() once the cpu_cacheinfo structure has been populated by the arch specific code. An additional patch later in the set adds the ability to report peers in the topology using find_acpi_cpu_topology() to report a unique ID for each processing unit at a given level in the tree. These unique IDs can then be used to match related processing units which exist as threads, within a given package, etc. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jeremy Linton
It's helpful to be able to lookup the acpi_processor_id associated with a logical cpu. Provide an arm64 helper to do this. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
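Conceptually the helper is a table lookup from logical CPU number to the ACPI processor UID recorded at boot. A standalone model follows; the array contents and sizes are invented for the sketch.

```c
/* Conceptual model of the helper: boot code records the ACPI processor UID
 * for each logical CPU, and the helper returns it later. The array and its
 * contents are invented for this standalone sketch.
 */
#include <stdio.h>

#define NR_CPUS 4

/* Filled in at boot time when ACPI processor entries are matched to CPUs. */
static unsigned int acpi_cpu_uid[NR_CPUS] = { 0x10, 0x11, 0x20, 0x21 };

static unsigned int get_acpi_id_for_cpu(unsigned int cpu)
{
    return acpi_cpu_uid[cpu];
}

int main(void)
{
    for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
        printf("cpu%u -> ACPI processor UID %#x\n",
               cpu, get_acpi_id_for_cpu(cpu));
    return 0;
}
```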
-
Committed by Jeremy Linton
Rename and change the type of of_node to indicate it is a generic pointer which is generally only used for comparison purposes. In a later patch we will put an ACPI/PPTT token pointer in fw_token so that the code which builds the shared cpu masks can be reused. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jeremy Linton
The original intent in cacheinfo was that an architecture specific populate_cache_leaves() would probe the hardware and then cache_shared_cpu_map_setup() and cache_override_properties() would provide firmware help to extend/expand upon what was probed. Arm64 was really the only architecture that was working this way, and with the removal of most of the hardware probing logic it became clear that it was possible to simplify the logic a bit. This patch combines the walk of the DT nodes with the code updating the cache size/line_size and nr_sets. cache_override_properties() (which was DT specific) is then removed. The result is that cacheinfo.of_node is no longer used as a temporary place to hold DT references for future calls that update cache properties. That change helps to clarify its one remaining use (matching cacheinfo nodes that represent shared caches) which will be used by the ACPI/PPTT code in the following patches. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jeremy Linton
In preparation for the next patch, and to aid in review of that patch, let's move cache_setup_of_node further down in the module without any changes. Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Vijaya Kumar K <vkilari@codeaurora.org> Tested-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Tested-by: Tomasz Nowicki <Tomasz.Nowicki@cavium.com> Reviewed-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 May 2018, 4 commits
-
-
Committed by Will Deacon
When waiting for a cacheline to change state in cmpwait, we may immediately wake up the first time around the outer loop if the event register was already set (for example, because of the event stream). Avoid these spurious wakeups by explicitly clearing the event register before loading the cacheline and setting the exclusive monitor. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Robin Murphy
It is probably safe to assume that all Armv8-A implementations have a multiplier whose efficiency is comparable or better than a sequence of three or so register-dependent arithmetic instructions. Select ARCH_HAS_FAST_MULTIPLIER to get ever-so-slightly nicer codegen in the few dusty old corners which care. In a contrived benchmark calling hweight64() in a loop, this does indeed turn out to be a small win overall, with no measurable impact on Cortex-A57 but about 5% performance improvement on Cortex-A53. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
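For context, hweight64() can finish its bit-counting reduction either with a chain of shift-and-adds or with a single multiply and shift; ARCH_HAS_FAST_MULTIPLIER tells generic code that the multiply form is the cheaper one. Both standard variants are shown side by side in a standalone program:

```c
/* The two standard ways to finish a 64-bit population count: a single
 * multiply-and-shift (what ARCH_HAS_FAST_MULTIPLIER enables) versus a chain
 * of shift-and-adds. Both return identical results.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static unsigned int hweight64_mul(uint64_t w)
{
    w -= (w >> 1) & 0x5555555555555555ULL;                  /* 2-bit sums  */
    w  = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
    w  = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;            /* per-byte sums */
    return (w * 0x0101010101010101ULL) >> 56;               /* one multiply */
}

static unsigned int hweight64_shift(uint64_t w)
{
    w -= (w >> 1) & 0x5555555555555555ULL;
    w  = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL);
    w  = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
    w +=  w >> 8;                                           /* fold bytes... */
    w +=  w >> 16;
    w +=  w >> 32;                                          /* ...no multiply */
    return w & 0x7f;
}

int main(void)
{
    uint64_t x = 0xdeadbeefcafef00dULL;

    assert(hweight64_mul(x) == hweight64_shift(x));
    printf("hweight64(%#llx) = %u\n",
           (unsigned long long)x, hweight64_mul(x));
    return 0;
}
```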
-
Committed by Vincenzo Frascino
"make includecheck" detected a few duplicated includes in arch/arm64. This patch removes the double inclusions. Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Masahiro Yamada
VMLINUX_SYMBOL() is a no-op unless CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX is defined, and that option has only ever been selected by BLACKFIN and METAG. VMLINUX_SYMBOL() is therefore unneeded for ARM64-specific code. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 15 May 2018, 1 commit
-
-
Committed by Catalin Marinas
This patch increases the ARCH_DMA_MINALIGN to 128 so that it covers the currently known Cache Writeback Granule (CTR_EL0.CWG) on arm64 and moves the fallback in cache_line_size() from L1_CACHE_BYTES to this constant. In addition, it warns (and taints) if the CWG is larger than ARCH_DMA_MINALIGN as this is not safe with non-coherent DMA. Cc: Will Deacon <will.deacon@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
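A standalone model of the relationship this patch sets up: cache_line_size() prefers the CWG reported by the CPU, falls back to the 128-byte constant when CWG is not reported, and an oversized CWG is flagged because non-coherent DMA could then corrupt adjacent data. Values and message text below are illustrative.

```c
/* Standalone model: ARCH_DMA_MINALIGN is 128 bytes, cache_line_size()
 * prefers the reported CWG and falls back to that constant, and a CWG
 * larger than the constant is flagged as unsafe for non-coherent DMA.
 * Values and the warning text are illustrative.
 */
#include <stdio.h>

#define ARCH_DMA_MINALIGN 128

static unsigned int cwg_bytes;   /* 0 if CTR_EL0.CWG is not reported */

static unsigned int cache_line_size(void)
{
    return cwg_bytes ? cwg_bytes : ARCH_DMA_MINALIGN;
}

static void check_cwg(void)
{
    if (cwg_bytes > ARCH_DMA_MINALIGN)
        printf("WARNING: CWG %u > ARCH_DMA_MINALIGN %u: "
               "unsafe with non-coherent DMA\n",
               cwg_bytes, ARCH_DMA_MINALIGN);
}

int main(void)
{
    cwg_bytes = 0;                       /* CWG unknown: fall back */
    printf("cache_line_size() = %u\n", cache_line_size());

    cwg_bytes = 256;                     /* hypothetical oversized CWG */
    check_cwg();
    return 0;
}
```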
-
- 11 May 2018, 1 commit
-
-
Committed by Catalin Marinas
This reverts commit 97303480. Commit 97303480 ("arm64: Increase the max granular size") increased the cache line size to 128 to match Cavium ThunderX, apparently for some performance benefit which could not be confirmed. This change, however, has an impact on the network packet allocation in certain circumstances, requiring slightly over a 4K page with a significant performance degradation. The patch reverts L1_CACHE_SHIFT back to 6 (64-byte cache line). Cc: Will Deacon <will.deacon@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 May 2018, 1 commit
-
-
Committed by Linus Torvalds
-
- 06 May 2018, 8 commits
-
-
git://git.kernel.org/pub/scm/virt/kvm/kvm - Committed by Linus Torvalds
Pull KVM fixes from Radim Krčmář: "ARM: - Fix proxying of GICv2 CPU interface accesses - Fix crash when switching to BE - Track source vcpu for GICv2 SGIs - Fix an outdated bit of documentation x86: - Speed up injection of expired timers (for stable)" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: x86: remove APIC Timer periodic/oneshot spikes arm64: vgic-v2: Fix proxying of cpuif access KVM: arm/arm64: vgic_init: Cleanup reference to process_maintenance KVM: arm64: Fix order of vcpu_write_sys_reg() arguments KVM: arm/arm64: vgic: Fix source vcpu issues for GICv2 SGI
-
git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu - Committed by Linus Torvalds
Pull iommu fixes from Joerg Roedel: - fix a compile warning in the AMD IOMMU driver with irq remapping disabled - fix for VT-d interrupt remapping and invalidation size (caused a BUG_ON when trying to invalidate more than 4GB) - build fix and a regression fix for broken graphics with old DTS for the rockchip iommu driver - a revert in the PCI window reservation code which fixes a regression with VFIO. * tag 'iommu-fixes-v4.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: iommu: rockchip: fix building without CONFIG_OF iommu/vt-d: Use WARN_ON_ONCE instead of BUG_ON in qi_flush_dev_iotlb() iommu/vt-d: fix shift-out-of-bounds in bug checking iommu/dma: Move PCI window region reservation back into dma specific path. iommu/rockchip: Make clock handling optional iommu/amd: Hide unused iommu_table_lock iommu/vt-d: Fix usage of force parameter in intel_ir_reconfigure_irte()
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Committed by Linus Torvalds
Pull x86 fix from Thomas Gleixner: "Unbreak the CPUID CPUID_8000_0008_EBX reload which got dropped when the evaluation of physical and virtual bits which uses the same CPUID leaf was moved out of get_cpu_cap()" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/cpu: Restore CPUID_8000_0008_EBX reload
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Committed by Linus Torvalds
Pull clocksource fixes from Thomas Gleixner: "The recent addition of the early TSC clocksource breaks on machines which have an unstable TSC because in case that TSC is disabled, then the clocksource selection logic falls back to the early TSC which is obviously bogus. That also unearthed a few robustness issues in the clocksource derating code which are addressed as well" * 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: clocksource: Rework stale comment clocksource: Consistent de-rate when marking unstable x86/tsc: Fix mark_tsc_unstable() clocksource: Initialize cs->wd_list clocksource: Allow clocksource_mark_unstable() on unregistered clocksources x86/tsc: Always unregister clocksource_tsc_early
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Committed by Linus Torvalds
Pull irq fix from Thomas Gleixner: "A single fix to prevent false positives in the spurious interrupt detector when more than a single demultiplex register is evaluated in the Qualcom irq combiner driver" * 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: irqchip/qcom: Fix check for spurious interrupts
-
git://git.infradead.org/linux-platform-drivers-x86 - Committed by Linus Torvalds
Pull x86 platform driver fixes from Darren Hart: - We missed a case in the Dell config dependencies resulting in a possible bad configuration, resolve it by giving up on trying to keep DELL_LAPTOP visible in the menu and make it depend on DELL_SMBIOS. - Fix a null pointer dereference at module unload for the asus-wireless driver. * tag 'platform-drivers-x86-v4.17-2' of git://git.infradead.org/linux-platform-drivers-x86: platform/x86: Kconfig: Fix dell-laptop dependency chain. platform/x86: asus-wireless: Fix NULL pointer dereference
-
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb - Committed by Linus Torvalds
Pull USB fixes from Greg KH: "Here are some USB driver fixes for 4.17-rc4. The majority of them are some USB gadget fixes that missed my last pull request. The "largest" patch in here is a fix for the old visor driver that syzbot found 6 months or so ago and I finally remembered to fix it. All of these have been in linux-next with no reported issues" * tag 'usb-4.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: Revert "usb: host: ehci: Use dma_pool_zalloc()" usb: typec: tps6598x: handle block reads separately with plain-I2C adapters usb: typec: tcpm: Release the role mux when exiting USB: Accept bulk endpoints with 1024-byte maxpacket xhci: Fix use-after-free in xhci_free_virt_device USB: serial: visor: handle potential invalid device configuration USB: serial: option: adding support for ublox R410M usb: musb: trace: fix NULL pointer dereference in musb_g_tx() usb: musb: host: fix potential NULL pointer dereference usb: gadget: composite Allow for larger configuration descriptors usb: dwc3: gadget: Fix list_del corruption in dwc3_ep_dequeue usb: dwc3: gadget: dwc3_gadget_del_and_unmap_request() can be static usb: dwc2: pci: Fix error return code in dwc2_pci_probe() usb: dwc2: WA for Full speed ISOC IN in DDMA mode. usb: dwc2: dwc2_vbus_supply_init: fix error check usb: gadget: f_phonet: fix pn_net_xmit()'s return type
-
Committed by Anthoine Bourgeois
Since the commit "8003c9ae: add APIC Timer periodic/oneshot mode VMX preemption timer support", a Windows 10 guest has some erratic timer spikes. Here are the results on a 150000 times 1ms timer without any load:
            Before 8003c9ae | After 8003c9ae
Max         1834us          | 86000us
Mean        1100us          | 1021us
Deviation   59us            | 149us
Here are the results on a 150000 times 1ms timer with a cpu-z stress test:
            Before 8003c9ae | After 8003c9ae
Max         32000us         | 140000us
Mean        1006us          | 1997us
Deviation   140us           | 11095us
The root cause of the problem is that starting an hrtimer with an expiry time already in the past can take more than 20 milliseconds to trigger the timer function. It can be solved by forwarding such past timers immediately, rather than submitting them to hrtimer_start(). In case the timer is periodic, update the target expiration and call hrtimer_start with it. v2: Check if the tsc deadline is already expired. Thank you Mika. v3: Execute the past timers immediately rather than submitting them to hrtimer_start(). v4: Rearm the periodic timer with advance_periodic_target_expiration(), a simpler version of set_target_expiration(). Thank you Paolo. Cc: Mika Penttilä <mika.penttila@nextfour.com> Cc: Wanpeng Li <kernellwp@gmail.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Anthoine Bourgeois <anthoine.bourgeois@blade-group.com> Fixes: 8003c9ae ("KVM: LAPIC: add APIC Timer periodic/oneshot mode VMX preemption timer support") Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
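The core of the fix in standalone form: if the computed expiry is already in the past, run the timer function now (and, for a periodic timer, advance the target) instead of handing the stale deadline to hrtimer_start(). Names and the time source below are illustrative, not KVM's code.

```c
/* Standalone model of the fix's core idea: a deadline that is already in
 * the past is handled immediately (and a periodic timer is re-armed) rather
 * than being passed to hrtimer_start(), which can fire much later than
 * expected. Function names and the time source are illustrative.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static void timer_fn(void)
{
    printf("timer fired\n");
}

static void start_apic_timer(uint64_t expiry_ns, uint64_t period_ns)
{
    if (expiry_ns <= now_ns()) {
        timer_fn();                       /* already expired: fire now */
        if (!period_ns)
            return;                       /* one-shot: nothing to re-arm */
        expiry_ns += period_ns;           /* advance the periodic target */
    }
    printf("hrtimer_start() in about %llu ns\n",
           (unsigned long long)(expiry_ns - now_ns()));
}

int main(void)
{
    /* Deadline 1us in the past, 1ms period: fires now, then re-arms. */
    start_apic_timer(now_ns() - 1000, 1000000);
    return 0;
}
```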
-