- 14 March 2020, 1 commit
-
-
By Alex Hung

BGRT is for displaying a seamless OEM logo from boot to the login screen; however, this mechanism does not always work well on all configurations, and the OEM logo can be displayed multiple times. This looks worse than not enabling BGRT at all. This patch adds a kernel parameter to disable BGRT at boot time. This is easier than re-compiling a kernel with CONFIG_ACPI_BGRT disabled.

Signed-off-by: Alex Hung <alex.hung@canonical.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
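
As a usage sketch (the parameter name is not spelled out in this entry; assuming it is bgrt_disable, as later documented in kernel-parameters.txt), the logo handoff can be suppressed by appending it to the kernel command line:

    # illustrative bootloader kernel command line; parameter name assumed
    linux ... quiet splash bgrt_disable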
-
- 11 March 2020, 1 commit
-
-
By Guilherme G. Piccoli

Commit 9c44bc03 ("softlockup: allow panic on lockup") added the softlockup_panic sysctl, but didn't add information about it to the file Documentation/admin-guide/sysctl/kernel.rst (which at that time certainly wasn't RST and had a different name!). This patch just adds the respective documentation and references it from the corresponding entry in Documentation/admin-guide/kernel-parameters.txt.

This patch was strongly based on Scott Wood's commit d22881dc ("Documentation: Better document the hardlockup_panic sysctl").

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
Link: https://lore.kernel.org/r/20200310183649.23163-1-gpiccoli@canonical.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
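
For reference, the knob can be set either way; a minimal sketch of both forms (boot parameter and runtime sysctl):

    # at boot, on the kernel command line
    softlockup_panic=1
    # or at runtime, via the kernel.softlockup_panic sysctl
    sysctl -w kernel.softlockup_panic=1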
-
- 05 March 2020, 2 commits
-
-
By Saravana Kannan

With the addition of the fw_devlink kernel commandline option, of_devlink is redundant and no longer useful. So, delete it.

Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Saravana Kannan <saravanak@google.com>
Link: https://lore.kernel.org/r/20200222014038.180923-6-saravanak@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Saravana Kannan

fwnode_operations.add_links allows creating device links from information provided by firmware. fwnode_operations.add_links is currently implemented only by OF/devicetree code and a specific case of EFI. However, there's nothing preventing ACPI or other firmware types from implementing it. The OF implementation is currently controlled by a kernel commandline parameter called of_devlink. Since this feature is generic and isn't limited to OF, add a generic fw_devlink kernel commandline parameter to control this feature across firmware types.

Signed-off-by: Saravana Kannan <saravanak@google.com>
Link: https://lore.kernel.org/r/20200222014038.180923-3-saravanak@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
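
A brief illustration of the replacement parameter (only on/off are shown here; other values such as permissive are an assumption not taken from this entry):

    # create and enforce device links from firmware-provided information
    fw_devlink=on
    # disable the feature entirely
    fw_devlink=off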
-
- 03 March 2020, 2 commits
-
-
By Niklas Söderlund

When converting and moving nfsroot.txt to nfsroot.rst, the references to the old text file were not updated to match the change; fix this.

Fixes: f9a93498 ("Documentation: nfsroot.txt: convert to ReST")
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20200212181332.520545-1-niklas.soderlund+renesas@ragnatech.se
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
By Jonathan Neuschäfer

drivers/tty/serial/imx.c implements these earlycon options.

Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Link: https://lore.kernel.org/r/20200229132750.2783-1-j.neuschaefer@gmx.net
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 25 February 2020, 1 commit
-
-
By Alex Hung

"untrusted" was misspelled as "unstrusted".

Signed-off-by: Alex Hung <alex.hung@canonical.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 19 February 2020, 1 commit
-
-
By Jonathan Neuschäfer

cmdlinepart.c has been moved to drivers/mtd/parsers/.

Fixes: a3f12a35 ("mtd: parsers: Move CMDLINE parser")
Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 10 February 2020, 1 commit
-
-
By Jean Delvare

In case the WDAT interface is broken, give the user an option to ignore it, to let a native driver bind to the watchdog device instead.

Signed-off-by: Jean Delvare <jdelvare@suse.de>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
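
The option name is not given in this entry; assuming it is the acpi_no_watchdog boot parameter, usage would look like:

    # assumption: ignore the ACPI WDAT table so a native watchdog driver can bind
    acpi_no_watchdog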
-
- 05 February 2020, 1 commit
-
-
By Steven Rostedt (VMware)

As the bootconfig is appended to the initrd, it is not as easy to modify as the kernel command line. If there's some issue with the kernel and the developer wants to boot a pristine kernel, it should not be necessary to modify the initrd to remove the bootconfig for a single boot. Because the bootconfig is loaded silently (if the admin does not know where to look, they may not know it's being loaded), it should be requested explicitly on the kernel cmdline. The bootconfig is now only loaded if "bootconfig" is on the kernel command line. This will let admins know that the kernel command line is extended.

Note: adding printk()s for when the size is too great or the checksum is wrong exposed that the current method always looked for the boot config, and if the size and checksum matched, it would parse it (a printk has been added to show when either is wrong). It's better to only check this if the boot config is explicitly asked for.

Link: https://lore.kernel.org/r/CAHk-=wjfjO+h6bQzrTf=YCZA53Y3EDyAs3Z4gEsT7icA3u_Psw@mail.gmail.com
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
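
A sketch of the resulting workflow, assuming the bootconfig tool from tools/bootconfig is used to attach the configuration to the initrd:

    # append a boot configuration to the initrd (file name is illustrative)
    bootconfig -a myconfig.bconf /boot/initrd.img
    # then opt in explicitly on the kernel command line
    linux ... bootconfig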
-
- 01 February 2020, 1 commit
-
-
By Mikhail Zaslonko

Add the new kernel command line parameter 'dfltcc=' to configure s390 zlib hardware support.

Format: { on | off | def_only | inf_only | always }

  on:       s390 zlib hardware support for compression on level 1 and decompression (default)
  off:      no s390 zlib hardware support
  def_only: s390 zlib hardware support for deflate only (compression on level 1)
  inf_only: s390 zlib hardware support for inflate only (decompression)
  always:   same as 'on', but ignores the selected compression level, always using hardware support (used for debugging)

Link: http://lkml.kernel.org/r/20200103223334.20669-5-zaslonko@linux.ibm.com
Signed-off-by: Mikhail Zaslonko <zaslonko@linux.ibm.com>
Cc: Chris Mason <clm@fb.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Sterba <dsterba@suse.com>
Cc: Eduard Shishkin <edward6@linux.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ilya Leoshkevich <iii@linux.ibm.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Richard Purdie <rpurdie@rpsys.net>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
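
For example, to restrict hardware support to compression only, the parameter would be passed as:

    # s390 only; use DFLTCC for deflate, software zlib for inflate
    dfltcc=def_only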
-
- 25 January 2020, 2 commits
-
-
By Joel Fernandes (Google)

Now that the kfree_rcu() special-casing has been removed from tree RCU, this commit removes kfree_call_rcu_nobatch() since it is no longer needed.

Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
-
By Joel Fernandes (Google)

This test runs kfree_rcu() in a loop to measure performance of the new kfree_rcu() batching functionality.

The following table shows results when booting with arguments:
rcuperf.kfree_loops=20000 rcuperf.kfree_alloc_num=8000 rcuperf.kfree_rcu_test=1 rcuperf.kfree_no_batch=X

  rcuperf.kfree_no_batch=X   # Grace Periods   Test Duration (s)
  X=1 (old behavior)         9133              11.5
  X=0 (new behavior)         1732              12.5

On a 16 CPU system with the above boot parameters, we see that the total number of grace periods that elapse during the test drops from 9133 when not batching to 1732 when batching (a 5X improvement). The kfree_rcu() flood itself slows down a bit when batching, though, as shown.

Note that the active memory consumption during the kfree_rcu() flood does increase to around 200-250MB due to the batching (from around 50MB without batching). However, this memory consumption is relatively constant. In other words, the system is able to keep up with the kfree_rcu() load. The memory consumption comes down considerably if KFREE_DRAIN_JIFFIES is increased from HZ/50 to HZ/80. A later patch will reduce memory consumption further by using multiple lists.

Also, when running the test, please disable CONFIG_DEBUG_PREEMPT and CONFIG_PROVE_RCU for realistic comparisons with/without batching.

Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
-
- 22 January 2020, 1 commit
-
-
By Ming Lei

The affinity of managed interrupts is completely handled in the kernel and cannot be changed via the /proc/irq/* interfaces from user space. As the kernel tries to spread out interrupts evenly across CPUs on x86 to prevent vector exhaustion, it can happen that a managed interrupt whose affinity mask contains both isolated and housekeeping CPUs is routed to an isolated CPU. As a consequence, IO submitted on a housekeeping CPU causes interrupts on the isolated CPU.

Add a new sub-parameter 'managed_irq' for 'isolcpus' and the corresponding logic in the interrupt affinity selection code. The sub-parameter indicates to the interrupt affinity selection logic that it should try to avoid the above scenario.

This isolation is best effort and only effective if the automatically assigned interrupt mask of a device queue contains isolated and housekeeping CPUs. If housekeeping CPUs are online, then such interrupts are directed to the housekeeping CPU so that IO submitted on the housekeeping CPU cannot disturb the isolated CPU.

If a queue's affinity mask contains only isolated CPUs, then this parameter has no effect on the interrupt routing decision, though interrupts only happen when tasks running on those isolated CPUs submit IO. IO submitted on housekeeping CPUs has no influence on those queues.

If the affinity mask contains both housekeeping and isolated CPUs, but none of the contained housekeeping CPUs is online, then the interrupt is also routed to an isolated CPU. Interrupts are only delivered when one of the isolated CPUs in the affinity mask submits IO. If one of the contained housekeeping CPUs comes online, the CPU hotplug logic migrates the interrupt automatically back to the upcoming housekeeping CPU. Depending on the type of interrupt controller, this can require that at least one interrupt is delivered to the isolated CPU in order to complete the migration.

[ tglx: Removed unused parameter, added and edited comments/documentation and rephrased the changelog so it contains more details. ]

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20200120091625.17912-1-ming.lei@redhat.com
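
A hedged example combining the new flag with the existing domain flag and a CPU list (the CPU range is illustrative):

    # keep managed-IRQ affinity away from isolated CPUs 2-7 where possible
    isolcpus=managed_irq,domain,2-7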
-
- 20 January 2020, 1 commit
-
-
By Ard Biesheuvel

We carry a quirk in the x86 EFI code to switch back to an older method of mapping the EFI runtime services memory regions, because it was deemed risky at the time to implement a new method without providing a fallback to the old method in case problems arose.

Such problems did arise, but they appear to be limited to SGI UV1 machines, and so these are the only ones for which the fallback gets enabled automatically (via a DMI quirk). The fallback can be enabled manually as well, by passing efi=old_map, but there is very little evidence to suggest that this is something being relied upon in the field.

Given that UV1 support is not enabled by default by the distros (Ubuntu, Fedora), there is no point in carrying this fallback code all the time if there are no other users. So let's move it into the UV support code, and document that efi=old_map now requires this support code to be enabled.

Note that efi=old_map has been used in the past on other SGI UV machines to work around kernel regressions in production, so we keep the option to enable it by hand, but only if the kernel was built with UV support.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200113172245.27925-8-ardb@kernel.org
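
Usage stays the same as before; only the build-time requirement described above changes:

    # only honored if the kernel is built with SGI UV support
    efi=old_map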
-
- 11 January 2020, 1 commit
-
-
By Matthew Garrett

Add an option to disable the busmaster bit in the control register on all PCI bridges before calling ExitBootServices() and passing control to the runtime kernel.

System firmware may configure the IOMMU to prevent malicious PCI devices from being able to attack the OS via DMA. However, since firmware can't guarantee that the OS is IOMMU-aware, it will tear down IOMMU configuration when ExitBootServices() is called. This leaves a window in which a hostile device could still cause damage before Linux configures the IOMMU again.

If CONFIG_EFI_DISABLE_PCI_DMA is enabled or "efi=disable_early_pci_dma" is passed on the command line, the EFI stub will clear the busmaster bit on all PCI bridges before ExitBootServices() is called. This will prevent any malicious PCI devices from being able to perform DMA until the kernel reenables busmastering after configuring the IOMMU.

This option may cause failures with some poorly behaved hardware and should not be enabled without testing. The kernel commandline options "efi=disable_early_pci_dma" or "efi=no_disable_early_pci_dma" may be used to override the default.

Note that PCI devices downstream from PCI bridges are disconnected from their drivers first, using the UEFI driver model API, so that DMA can be disabled safely at the bridge level.

[ardb: disconnect PCI I/O handles first, as suggested by Arvind]

Co-developed-by: Matthew Garrett <mjg59@google.com>
Signed-off-by: Matthew Garrett <mjg59@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arvind Sankar <nivedita@alum.mit.edu>
Cc: Matthew Garrett <matthewgarrett@google.com>
Cc: linux-efi@vger.kernel.org
Link: https://lkml.kernel.org/r/20200103113953.9571-18-ardb@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
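
The two command-line forms named above therefore act as explicit overrides of whatever the kernel config selected:

    # force early PCI DMA disable regardless of CONFIG_EFI_DISABLE_PCI_DMA
    efi=disable_early_pci_dma
    # or keep busmastering enabled even if the config option is set
    efi=no_disable_early_pci_dma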
-
- 08 January 2020, 1 commit
-
-
By Stephen Smalley

selinuxfs was originally mounted on /selinux, and various docs and kconfig help texts referred to nodes under it. In Linux 3.0, /sys/fs/selinux was introduced as the preferred mount point for selinuxfs. Fix all the old references from /selinux/ to /sys/fs/selinux/.

While we are there, update the description of the selinux boot parameter to reflect the fact that the default value is always 1 since commit be6ec88f ("selinux: Remove SECURITY_SELINUX_BOOTPARAM_VALUE"), and drop the discussion of runtime disable since it is deprecated.

Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <paul@paul-moore.com>
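
A short illustration of the updated path and the boot parameter described above (the enforce node is used as an example):

    # old path /selinux/enforce is replaced by the preferred mount point
    cat /sys/fs/selinux/enforce
    # boot-time disable; the built-in default is 1 (enabled)
    selinux=0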
-
- 23 November 2019, 1 commit
-
-
By Masami Hiramatsu

Remove the bootmem_debug kernel parameter because it has been replaced by memblock=debug.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Link: https://lore.kernel.org/r/157443061745.20995.9432492850513217966.stgit@devnote2
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 19 November 2019, 1 commit
-
-
By Yunfeng Ye

The commit 0f27cff8 ("ACPI: sysfs: Make ACPI GPE mask kernel parameter cover all GPEs") says: "Use a bitmap of size 0xFF instead of a u64 for the GPE mask so 256 GPEs can be masked". But masking of GPE 0xFF is not supported, and the check condition "gpe > ACPI_MASKABLE_GPE_MAX" is not valid because the type of gpe is u8.

So modify the macro ACPI_MASKABLE_GPE_MAX to 0x100, and drop the "gpe > ACPI_MASKABLE_GPE_MAX" check. In addition, update the docs "Format" for the acpi_mask_gpe parameter.

Fixes: 0f27cff8 ("ACPI: sysfs: Make ACPI GPE mask kernel parameter cover all GPEs")
Signed-off-by: Yunfeng Ye <yeyunfeng@huawei.com>
[ rjw: Use u16 as gpe data type in acpi_gpe_apply_masked_gpes() ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
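
For instance, masking a single GPE on the kernel command line (0x6e is an illustrative GPE number, not taken from this entry):

    acpi_mask_gpe=0x6e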
-
- 18 November 2019, 1 commit
-
-
By Oliver Neukum

Document which flags work with storage, UAS, or both.

Signed-off-by: Oliver Neukum <oneukum@suse.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20191114112758.32747-4-oneukum@suse.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 16 November 2019, 1 commit
-
-
By Waiman Long

For MDS vulnerable processors with TSX support, enabling either MDS or TAA mitigations will enable the use of VERW to flush internal processor buffers at the right code path. IOW, they are either both mitigated or both not.

However, if the command line options are inconsistent, the vulnerabilities sysfs files may not report the mitigation status correctly. For example, with only the "mds=off" option:

  vulnerabilities/mds:             Vulnerable; SMT vulnerable
  vulnerabilities/tsx_async_abort: Mitigation: Clear CPU buffers; SMT vulnerable

The mds vulnerabilities file has the wrong status in this case. Similarly, the taa vulnerability file will be wrong with mds mitigation on, but taa off.

Change taa_select_mitigation() to sync up the two mitigation statuses and have them turned off if both "mds=off" and "tsx_async_abort=off" are present.

Update documentation to emphasize the fact that both "mds=off" and "tsx_async_abort=off" have to be specified together for processors that are affected by both TAA and MDS to be effective.

[ bp: Massage and add kernel-parameters.txt change too. ]

Fixes: 1b42f017 ("x86/speculation/taa: Add mitigation for TSX Async Abort")
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: linux-doc@vger.kernel.org
Cc: Mark Gross <mgross@linux.intel.com>
Cc: <stable@vger.kernel.org>
Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Tyler Hicks <tyhicks@canonical.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20191115161445.30809-2-longman@redhat.com
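
The documented combination for processors affected by both issues therefore looks like this on the kernel command line:

    # both options must be specified together to take effect
    mds=off tsx_async_abort=off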
-
- 13 November 2019, 1 commit
-
-
By Jonathan Corbet

This reverts commit 7f70ae56. Christoph H. notes that the information is redundant, and Paul W. agrees with reverting.

Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 07 November 2019, 2 commits
-
-
By Dan Williams

Given that EFI_MEMORY_SP is a platform BIOS policy decision for marking memory ranges as "reserved for a specific purpose", there will inevitably be scenarios where the BIOS omits the attribute in situations where it is desired. Unlike other attributes, if the OS wants to reserve this memory from the kernel, the reservation needs to happen early in init. So early, in fact, that it needs to happen before e820__memblock_setup(), which is a prerequisite for efi_fake_memmap() that wants to allocate memory for the updated table.

Introduce an x86-specific efi_fake_memmap_early() that can search for attempts to set EFI_MEMORY_SP via efi_fake_mem and update the e820 table accordingly.

The KASLR code that scans the command line looking for user-directed memory reservations also needs to be updated to consider "efi_fake_mem=nn@ss:0x40000" requests.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Dan Williams

UEFI 2.8 defines an EFI_MEMORY_SP attribute bit to augment the interpretation of the EFI Memory Types as "reserved for a specific purpose".

The proposed Linux behavior for specific purpose memory is that it is reserved for direct-access (device-dax) by default and not available for any kernel usage, not even as an OOM fallback. Later, through udev scripts or another init mechanism, these device-dax claimed ranges can be reconfigured and hot-added to the available System-RAM with a unique node identifier. This device-dax management scheme implements "soft" in the "soft reserved" designation by allowing some or all of the reservation to be recovered as typical memory. This policy can be disabled at compile time with CONFIG_EFI_SOFT_RESERVE=n, or at runtime with efi=nosoftreserve.

As for this patch, define the common helpers to determine if the EFI_MEMORY_SP attribute should be honored. The determination needs to be made early to prevent the kernel from being loaded into soft-reserved memory, or otherwise allowing early allocations to land there. Follow-on changes are needed per architecture to leverage these helpers in their respective mem-init paths.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
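
The runtime opt-out mentioned above is a plain command-line switch:

    # treat EFI_MEMORY_SP ranges as ordinary memory instead of soft-reserving them
    efi=nosoftreserve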
-
- 05 November 2019, 1 commit
-
-
By Junaid Shahid

The page table pages corresponding to broken-down large pages are zapped in FIFO order, so that the large page can potentially be recovered if it is no longer being used for execution. This removes the performance penalty for walking deeper EPT page tables.

By default, one large page will last about one hour once the guest reaches a steady state.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 04 November 2019, 1 commit
-
-
By Paolo Bonzini

With some Intel processors, putting the same virtual address in the TLB as both a 4 KiB and 2 MiB page can confuse the instruction fetch unit and cause the processor to issue a machine check resulting in a CPU lockup. Unfortunately, when EPT page tables use huge pages, it is possible for a malicious guest to cause this situation.

Add a knob to mark huge pages as non-executable. When the nx_huge_pages parameter is enabled (and we are using EPT), all huge pages are marked as NX. If the guest attempts to execute in one of those pages, the page is broken down into 4K pages, which are then marked executable.

This is not an issue for shadow paging (except nested EPT), because then the host is in control of TLB flushes and the problematic situation cannot happen. With nested EPT, again the nested guest can cause problems; shadow and direct EPT are treated in the same way.

[ tglx: Fixup default to auto and massage wording a bit ]

Originally-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
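
A sketch of the knob as a kvm module parameter on the kernel command line (the value names reflect the auto default mentioned in the tglx note; treat them as a best-effort summary rather than the authoritative list):

    kvm.nx_huge_pages=force   # always mark EPT huge pages non-executable
    kvm.nx_huge_pages=off     # disable the workaround
    kvm.nx_huge_pages=auto    # let KVM decide based on the CPU (default)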
-
- 28 October 2019, 3 commits
-
-
By Pawan Gupta

Add the documentation for TSX Async Abort. Include a description of the issue, how to check the mitigation state, how to control the mitigation, and guidance for system administrators.

[ bp: Add proper SPDX tags, touch ups by Josh and me. ]

Co-developed-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Mark Gross <mgross@linux.intel.com>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
By Pawan Gupta

Platforms which are not affected by X86_BUG_TAA may want the TSX feature enabled. Add an "auto" option to the TSX cmdline parameter. With tsx=auto, TSX is disabled when X86_BUG_TAA is present, otherwise TSX is enabled.

More details on X86_BUG_TAA can be found here: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html

[ bp: Extend the arg buffer to accommodate "auto\0". ]

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
-
By Pawan Gupta

Add a kernel cmdline parameter "tsx" to control the Transactional Synchronization Extensions (TSX) feature. On CPUs that support TSX control, use "tsx=on|off" to enable or disable TSX. Not specifying this option is equivalent to "tsx=off". This is because on certain processors TSX may be used as a part of a speculative side channel attack.

Carve out the TSX controlling functionality into a separate compilation unit, because TSX is a CPU feature while the TSX async abort control machinery will go to cpu/bugs.c.

[ bp:
  - Massage, shorten and clear the arg buffer.
  - Clarifications of the tsx= possible options - Josh.
  - Expand on TSX_CTRL availability - Pawan. ]

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
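
Putting this and the preceding commit together, the parameter accepts the following forms on the kernel command line:

    tsx=on    # enable TSX on CPUs that support TSX control
    tsx=off   # disable TSX (also the behavior when the option is omitted)
    tsx=auto  # disable TSX only on CPUs affected by X86_BUG_TAA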
-
- 26 October 2019, 1 commit
-
-
By Olof Johansson

Prior to eed85ff4 ("PCI/DPC: Enable DPC only if AER is available"), Linux handled DPC events regardless of whether firmware had granted it ownership of AER or DPC, e.g., via _OSC.

PCIe r5.0, sec 6.2.10, recommends that the OS link control of DPC to control of AER, so after eed85ff4, Linux handles DPC events only if it has control of AER. On platforms that do not grant OS control of AER via _OSC, Linux DPC handling worked before eed85ff4 but not after.

To make Linux DPC handling on those platforms work the same way it did before, add a "pcie_ports=dpc-native" kernel parameter that makes Linux handle DPC events regardless of whether it has control of AER.

[bhelgaas: commit log, move pcie_ports_dpc_native to drivers/pci/]

Link: https://lore.kernel.org/r/20191023192205.97024-1-olof@lixom.net
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
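
The resulting opt-in is exactly the string named above:

    # handle DPC events natively even without OS control of AER
    pcie_ports=dpc-native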
-
- 23 October 2019, 1 commit
-
-
By Nicholas Johnson

The existing "pci=hpmemsize=nn[KMG]" kernel parameter overrides the default size of both the non-prefetchable and the prefetchable MMIO windows for hotplug bridges.

Add "pci=hpmmiosize=nn[KMG]" to override the default size of only the non-prefetchable MMIO window. Add "pci=hpmmioprefsize=nn[KMG]" to override the default size of only the prefetchable MMIO window.

Link: https://lore.kernel.org/r/SL2P216MB0187E4D0055791957B7E2660806B0@SL2P216MB0187.KORP216.PROD.OUTLOOK.COM
Signed-off-by: Nicholas Johnson <nicholas.johnson-opensource@outlook.com.au>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
-
- 22 October 2019, 1 commit
-
-
By Steven Price

Enable paravirtualization features when running under a hypervisor supporting the PV_TIME_ST hypercall.

For each (v)CPU, we ask the hypervisor for the location of a shared page which the hypervisor will use to report stolen time to us. We set pv_time_ops to the stolen time function, which simply reads the stolen value from the shared page for a VCPU. We guarantee single-copy atomicity using READ_ONCE, which means we can also read the stolen time for another VCPU than the currently running one while it is potentially being updated by the hypervisor.

Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 16 October 2019, 1 commit
-
-
By Stefan-Gabriel Mirea

For consistency reasons, spell the controller name as "LINFlexD" in comments and documentation.

Signed-off-by: Stefan-Gabriel Mirea <stefan-gabriel.mirea@nxp.com>
Link: https://lore.kernel.org/r/1571230107-8493-4-git-send-email-stefan-gabriel.mirea@nxp.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 11 October 2019, 1 commit
-
-
By Paul Walmsley

Kernels booting on RISC-V can specify "earlycon" with no options on the Linux command line, and the generic DT earlycon support will query the "chosen/stdout-path" property (if present) to determine which early console device to use. Document this appropriately in the admin-guide.

Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Andreas Schwab <schwab@suse.de>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
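
In practice, a RISC-V system whose device tree provides a chosen/stdout-path property only needs the bare keyword:

    # no device/address arguments; the DT stdout-path selects the console
    earlycon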
-
- 08 October 2019, 1 commit
-
-
By Boris Ostrovsky

Currently, execution of panic() continues until Xen's panic notifier (xen_panic_event()) is called, at which point we make a hypercall that never returns. This means that any notifier that is supposed to be called later, as well as a significant part of panic() code (such as pstore writes from kmsg_dump()), is never executed.

There is no reason for xen_panic_event() to be this last point in execution, since panic()'s emergency_restart() will call into xen_emergency_restart(), from where we can perform our hypercall. Nevertheless, we will provide a xen_legacy_crash boot option that will preserve the original behavior during crash. This option could be used, for example, if running a kernel dumper (which happens after panic notifiers) is undesirable.

Reported-by: James Dingwall <james@dingwall.me.uk>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
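
The opt-out for the old behavior is a simple boolean parameter:

    # keep the original behavior of hypercalling from the panic notifier on crash
    xen_legacy_crash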
-
- 04 October 2019, 1 commit
-
-
By Saravana Kannan

Add device links after the devices are created (but before they are probed) by looking at common DT bindings like clocks and interconnects.

Automatically adding device links for functional dependencies at the framework level provides the following benefits:

- Optimizes device probe order and avoids the useless work of attempting probes of devices that will not probe successfully (because their suppliers aren't present or haven't probed yet). For example, in a commonly available mobile SoC, registering just one consumer device's driver at an initcall level earlier than the supplier device's driver causes 11 failed probe attempts before the consumer device probes successfully. This was with a kernel with all the drivers statically compiled in. This problem gets a lot worse if all the drivers are loaded as modules without direct symbol dependencies.

- Supplier devices like clock providers, interconnect providers, etc. need to keep the resources they provide active and at a particular state(s) during boot, even if their current set of consumers don't request the resource to be active. This is because the rest of the consumers might not have probed yet, and turning off the resource before all the consumers have probed could lead to a hang or an undesired user experience. Some frameworks (e.g. regulator) handle this today by turning off "unused" resources at late_initcall_sync and hoping all the devices have probed by then. This is not a valid assumption for systems with loadable modules. Other frameworks (e.g. clock) just don't handle this due to the lack of a clear signal for when they can turn off resources. This leads to downstream hacks to handle cases like this that can easily be solved in the upstream kernel.

By linking devices before they are probed, we give suppliers a clear count of the number of dependent consumers. Once all of the consumers are active, the suppliers can turn off the unused resources without making assumptions about the number of consumers.

By default we just add device links to track "driver presence" (probe succeeded) of the supplier device. If any other functionality provided by device links is needed, it is left to the consumer/supplier devices to change the link when they probe.

kbuild test robot reported a clang error about a missing const.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Saravana Kannan <saravanak@google.com>
Link: https://lore.kernel.org/r/20190904211126.47518-4-saravanak@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 01 October 2019, 1 commit
-
-
By Christoph Hellwig

The earlycon option without arguments is supposed to work on all device tree platforms, not just arm64.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 25 September 2019, 1 commit
-
-
By Vlastimil Babka

The debug_pagealloc functionality is useful to catch buggy page allocator users that cause, e.g., use-after-free or double free. When a page inconsistency is detected, debugging is often simpler when the call stacks of the processes that last allocated and freed the page are known.

When page_owner is also enabled, we record the allocation stack trace, but not the freeing one. This patch therefore adds recording of the freeing process stack trace to the page owner info, if both page_owner and debug_pagealloc are configured and enabled. With only page_owner enabled, this info is not useful for the memory leak debugging use case. dump_page() is adjusted to print the info.

An example result of calling __free_pages() twice may look like this (note the page last free stack trace):

  BUG: Bad page state in process bash pfn:13d8f8
  page:ffffc31984f63e00 refcount:-1 mapcount:0 mapping:0000000000000000 index:0x0
  flags: 0x1affff800000000()
  raw: 01affff800000000 dead000000000100 dead000000000122 0000000000000000
  raw: 0000000000000000 0000000000000000 ffffffffffffffff 0000000000000000
  page dumped because: nonzero _refcount
  page_owner tracks the page as freed
  page last allocated via order 0, migratetype Unmovable, gfp_mask 0xcc0(GFP_KERNEL)
   prep_new_page+0x143/0x150
   get_page_from_freelist+0x289/0x380
   __alloc_pages_nodemask+0x13c/0x2d0
   khugepaged+0x6e/0xc10
   kthread+0xf9/0x130
   ret_from_fork+0x3a/0x50
  page last free stack trace:
   free_pcp_prepare+0x134/0x1e0
   free_unref_page+0x18/0x90
   khugepaged+0x7b/0xc10
   kthread+0xf9/0x130
   ret_from_fork+0x3a/0x50
  Modules linked in:
  CPU: 3 PID: 271 Comm: bash Not tainted 5.3.0-rc4-2.g07a1a73-default+ #57
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58-prebuilt.qemu.org 04/01/2014
  Call Trace:
   dump_stack+0x85/0xc0
   bad_page.cold+0xba/0xbf
   rmqueue_pcplist.isra.0+0x6c5/0x6d0
   rmqueue+0x2d/0x810
   get_page_from_freelist+0x191/0x380
   __alloc_pages_nodemask+0x13c/0x2d0
   __get_free_pages+0xd/0x30
   __pud_alloc+0x2c/0x110
   copy_page_range+0x4f9/0x630
   dup_mmap+0x362/0x480
   dup_mm+0x68/0x110
   copy_process+0x19e1/0x1b40
   _do_fork+0x73/0x310
   __x64_sys_clone+0x75/0x80
   do_syscall_64+0x6e/0x1e0
   entry_SYSCALL_64_after_hwframe+0x49/0xbe
  RIP: 0033:0x7f10af854a10
  ...

Link: http://lkml.kernel.org/r/20190820131828.22684-5-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
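
To get the free-side stack traces described above, both facilities have to be built in and enabled at boot, for example:

    # requires CONFIG_DEBUG_PAGEALLOC and CONFIG_PAGE_OWNER in the kernel config
    debug_pagealloc=on page_owner=on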
-
- 14 September 2019, 1 commit
-
-
By Palmer Dabbelt

This argument is supported on RISC-V systems and widely used, but was not documented here.

Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 11 September 2019, 1 commit
-
-
By Lu Baolu

This adds a helper to check whether a device needs to use the bounce buffer. It also provides a boot-time option to disable the bounce buffer. Users can use this to prevent the iommu driver from using the bounce buffer, for performance gain.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Xu Pengfei <pengfei.xu@intel.com>
Tested-by: Mika Westerberg <mika.westerberg@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
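
The boot option is not named in this entry; assuming it is the nobounce sub-option of intel_iommu= (as documented in kernels of this era), disabling the bounce buffer would look like:

    # assumption: sub-option name is nobounce
    intel_iommu=nobounce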
-