- 26 July 2021, 2 commits

By Zhen Lei
First, add build options IOMMU_DEFAULT_{LAZY|STRICT}, so that we have the opportunity to set {lazy|strict} mode as the default at build time. Then put the two config options in a choice, as they are mutually exclusive.

[jpg: Make choice between strict and lazy only (and not passthrough)]

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/1626088340-5838-4-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
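
Whichever default is built in, it can still be overridden at boot; a minimal sketch using the generic iommu.strict parameter this series builds on:

    iommu.strict=1    # strict mode: DMA unmap invalidates the IOTLB synchronously
    iommu.strict=0    # lazy mode: invalidation is batched and deferred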

By John Garry
Now that the x86 drivers support iommu.strict, deprecate the custom methods.

Signed-off-by: John Garry <john.garry@huawei.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Link: https://lore.kernel.org/r/1626088340-5838-2-git-send-email-john.garry@huawei.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
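
The commit does not name the deprecated spellings; assuming they are the driver-specific x86 options (intel_iommu=strict and amd_iommu=fullflush), the migration would look like:

    intel_iommu=strict     # deprecated, Intel-specific (assumed)
    amd_iommu=fullflush    # deprecated, AMD-specific (assumed)
    iommu.strict=1         # preferred, driver-agnostic replacement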

- 01 July 2021, 4 commits

By Muchun Song
When HUGETLB_PAGE_FREE_VMEMMAP is in use, freeing the unused vmemmap pages associated with each HugeTLB page is off by default. Now the vmemmap is PMD mapped, so there is no side effect when this feature is enabled with no HugeTLB pages in the system. Someone may want to enable this feature at compile time instead of via the boot command line, so add a config option to make it default on for those who do not want to enable it via the command line.

Link: https://lkml.kernel.org/r/20210616094915.34432-4-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Cc: Chen Huang <chenhuang5@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Muchun Song
The preparation for splitting the huge PMD mapping of vmemmap pages is ready, so switch the mapping from PTE to PMD.

Link: https://lkml.kernel.org/r/20210616094915.34432-3-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Chen Huang <chenhuang5@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Muchun Song
The memory_hotplug.memmap_on_memory parameter is not compatible with hugetlb_free_vmemmap, so disable it when hugetlb_free_vmemmap is enabled.

[akpm@linux-foundation.org: remove unneeded include, per Oscar]

Link: https://lkml.kernel.org/r/20210510030027.56044-9-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Barry Song <song.bao.hua@hisilicon.com>
Cc: Bodeddula Balasubramaniam <bodeddub@amazon.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Chen Huang <chenhuang5@huawei.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: HORIGUCHI NAOYA <naoya.horiguchi@nec.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Oliver Neukum <oneukum@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

By Muchun Song
Add a kernel parameter hugetlb_free_vmemmap to enable the feature of freeing unused vmemmap pages associated with each hugetlb page on boot. We disable PMD mapping of vmemmap pages for the x86-64 arch when this feature is enabled, because vmemmap_remap_free() depends on the vmemmap being base-page mapped.

Link: https://lkml.kernel.org/r/20210510030027.56044-8-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: HORIGUCHI NAOYA <naoya.horiguchi@nec.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Oliver Neukum <oneukum@suse.com>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
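
The parameter takes a simple on/off value on the kernel command line, e.g.:

    hugetlb_free_vmemmap=on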

- 30 June 2021, 1 commit

By Gavin Shan
The macro PAGE_REPORTING_MIN_ORDER is defined as the page reporting threshold, and it can't be adjusted at runtime. Introduce a variable (@page_reporting_order) to replace the macro (PAGE_REPORTING_MIN_ORDER). MAX_ORDER is assigned to it initially, meaning that page reporting is disabled. It will be specified by the driver if a valid value is provided; otherwise, it falls back to @pageblock_order. It is also exported so that the page reporting order can be adjusted at runtime.

Link: https://lkml.kernel.org/r/20210625014710.42954-3-gshan@redhat.com
Signed-off-by: Gavin Shan <gshan@redhat.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 22 June 2021, 3 commits

By Paul E. McKenney
When the clocksource watchdog marks a clock as unstable, this might be due to that clock being unstable or it might be due to delays that happen to occur between the reads of the two clocks. It would be good to have a way of testing the clocksource watchdog's ability to distinguish between these two causes of clock skew and instability.

Therefore, provide a new clocksource-wdtest module selected by a new TEST_CLOCKSOURCE_WATCHDOG Kconfig option. This module has a single module parameter named "holdoff" that provides the number of seconds of delay before testing should start, which defaults to zero when built as a module and to 10 seconds when built directly into the kernel. Very large systems that boot slowly may need to increase the value of this module parameter.

This module uses hand-crafted clocksource structures to do its testing, thus avoiding messing up timing for the rest of the kernel and for user applications. It first verifies that the ->uncertainty_margin fields of the clocksource structures are set sanely. It then tests the delay-detection capability of the clocksource watchdog, increasing the number of consecutive delays injected, first provoking console messages complaining about the delays and finally forcing a clock-skew event. Unexpected test results cause at least one WARN_ON_ONCE() console splat. If there are no splats, the test has passed. Finally, it fuzzes the value returned from a clocksource to test the clocksource watchdog's ability to detect time skew.

This module checks the state of its clocksource after each test, and uses WARN_ON_ONCE() to emit a console splat if there are any failures. This should enable all types of test frameworks to detect any such failures. This facility is intended for diagnostic use only, and should be avoided on production systems.

Reported-by: Chris Mason <clm@fb.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Feng Tang <feng.tang@intel.com>
Link: https://lore.kernel.org/r/20210527190124.440372-5-paulmck@kernel.org

By Paul E. McKenney
Currently, if skew is detected on a clock marked CLOCK_SOURCE_VERIFY_PERCPU, that clock is checked on all CPUs. This is thorough, but might not be what you want on a system with a few tens of CPUs, let alone a few hundred of them. Therefore, by default check only up to eight randomly chosen CPUs. Also provide a new clocksource.verify_n_cpus kernel boot parameter. A value of -1 says to check all of the CPUs, and a non-negative value says to randomly select that number of CPUs, without concern about selecting the same CPU multiple times. However, make use of a cpumask so that a given CPU will be checked at most once.

Suggested-by: Thomas Gleixner <tglx@linutronix.de> # For verify_n_cpus=1.
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Feng Tang <feng.tang@intel.com>
Link: https://lore.kernel.org/r/20210527190124.440372-3-paulmck@kernel.org
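
For example, following the semantics described above:

    clocksource.verify_n_cpus=-1    # check every CPU
    clocksource.verify_n_cpus=4     # check up to 4 randomly chosen CPUs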

By Paul E. McKenney
When the clocksource watchdog marks a clock as unstable, this might be due to that clock being unstable or it might be due to delays that happen to occur between the reads of the two clocks. Yes, interrupts are disabled across those two reads, but there is no shortage of things that can delay interrupts-disabled regions of code, ranging from SMI handlers to vCPU preemption. It would be good to have some indication as to why the clock was marked unstable.

Therefore, re-read the watchdog clock on either side of the read from the clock under test. If the watchdog clock shows an excessive time delta between its pair of reads, the reads are retried. The maximum number of retries is specified by a new kernel boot parameter clocksource.max_cswd_read_retries, which defaults to three, that is, up to four reads, one initial and up to three retries. If more than one retry was required, a message is printed on the console (the occasional single retry is expected behavior, especially in guest OSes). If the maximum number of retries is exceeded, the clock under test will be marked unstable. However, the probability of this happening due to various sorts of delays is quite small. In addition, the reason (clock-read delays) for the unstable marking will be apparent.

Reported-by: Chris Mason <clm@fb.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Feng Tang <feng.tang@intel.com>
Link: https://lore.kernel.org/r/20210527190124.440372-1-paulmck@kernel.org
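
A delay-prone virtualized host could raise the retry budget beyond the default of three, e.g.:

    clocksource.max_cswd_read_retries=5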

- 17 June 2021, 3 commits

By Andy Shevchenko
Currently we need to pass one acpi_mask_gpe option for every GPE we want masked. Even with two of them, the kernel command line already becomes inconveniently long. Instead, allow acpi_mask_gpe to accept a bitmap list.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
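
A sketch of the resulting syntax, with hypothetical GPE numbers (the exact accepted format is whatever the kernel's bitmap-list parser takes):

    acpi_mask_gpe=6,10-11    # mask GPEs 6, 10 and 11 with a single option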

By Robin Murphy
Consolidating the flush queue logic also meant that the "iommu.strict" option started taking effect on x86 as well. Make sure we document that.

Fixes: a250c23f ("iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/2c8c06e1b449d6b060c5bf9ad3b403cd142f405d.1623682646.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>

By Steven Rostedt (VMware)
Add a kernel command line option that disables printing of events to the console at late_initcall_sync(). This is useful when you need to see specific events written to the console during boot, but not once user space starts, as user space may make the console so noisy that the system becomes inoperable.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

- 09 June 2021, 1 commit

By Suren Baghdasaryan
PSI accounts stalls for each cgroup separately and aggregates them at each level of the hierarchy. This causes additional overhead, with psi_avgs_work being called for each cgroup in the hierarchy. psi_avgs_work has been highly optimized; however, on systems with a large number of cgroups the overhead becomes noticeable. Systems which use PSI only at the system level could avoid this overhead if PSI could be configured to skip per-cgroup stall accounting.

Add a "cgroup_disable=pressure" kernel command-line option to allow requesting system-wide-only pressure stall accounting. When set, it keeps system-wide accounting under /proc/pressure/ but skips accounting for individual cgroups and does not expose PSI nodes in the cgroup hierarchy.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
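
On the kernel command line this is exactly the option quoted above:

    cgroup_disable=pressure    # system-wide PSI only, no per-cgroup accounting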

- 07 June 2021, 1 commit

By Mike Rapoport
The CONFIG_X86_RESERVE_LOW build-time option and the reservelow= command-line option allowed controlling the amount of memory under 1M that would be reserved at boot to avoid using memory that can be potentially clobbered by the BIOS. Since the entire range under 1M is now always reserved, there is no need for these options anymore and they can be removed.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210601075354.5149-3-rppt@kernel.org

- 04 June 2021, 1 commit

By Joerg Roedel
Add this option to enable the IOMMU on platforms like AMD Stoney, where the kernel usually disables it because it may cause problems in some scenarios.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://lore.kernel.org/r/20210603130203.29016-1-joro@8bytes.org

- 28 May 2021, 1 commit

By Barry Song
RISC-V and arm64 support numa=off via the common arch_numa_init() in drivers/base/arch_numa.c; x86, ppc, mips and sparc support it via arch-level early_param handlers. numa=off is widely used in Linux distributions, so it is better to document it.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Link: https://lore.kernel.org/r/20210524051715.13604-1-song.bao.hua@hisilicon.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
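
Usage is the same on all of the architectures listed:

    numa=off    # disable NUMA; treat all memory as a single node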

- 20 May 2021, 1 commit

By Stafford Horne
Most LiteX boards using RISC-V soft cores use the sbi earlycon; however, this is not available for non-RISC-V LiteX SoCs. This patch enables earlycon for liteuart, which is available on all LiteX SoCs, making earlycon debugging more widely available.

Cc: Florent Kermarrec <florent@enjoy-digital.fr>
Cc: Mateusz Holenko <mholenko@antmicro.com>
Cc: Joel Stanley <joel@jms.id.au>
Cc: Gabriel L. Somlo <gsomlo@gmail.com>
Reviewed-and-tested-by: Gabriel Somlo <gsomlo@gmail.com>
Reviewed-by: Jiri Slaby <jirislaby@kernel.org>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Stafford Horne <shorne@gmail.com>
Link: https://lore.kernel.org/r/20210517115453.24365-1-shorne@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

- 18 May 2021, 1 commit

By Fenghua Yu
Since the bus lock rate limit changes the split_lock_detect parameter, update the documentation for the change.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lore.kernel.org/r/20210419214958.4035512-4-fenghua.yu@intel.com
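
The documented mode takes a rate in bus locks per second; assuming the ratelimit:N spelling from the updated documentation:

    split_lock_detect=ratelimit:10    # throttle offenders to 10 bus locks/sec system-wide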

- 12 May 2021, 1 commit

By Peter Zijlstra
Assuming this stuff isn't actually used much, disable it by default and avoid allocating and tracking the task_delay_info structure. taskstats is changed to still report the regular sched and sched_info fields and only skip the missing task_delay_info fields, instead of not reporting anything.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210505111525.308018373@infradead.org
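
Assuming the boot option introduced alongside this change is the plain delayacct switch documented in kernel-parameters.txt, re-enabling it looks like:

    delayacct    # turn per-task delay accounting back on at boot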

- 11 May 2021, 1 commit

By Zhang Qiang
Add a drain_page_cache() function to drain a per-CPU page cache. The reason behind it is that a system can run into a low-memory condition; in that case a page shrinker can ask its users to free their caches in order to make extra memory available for other needs in the system. When a system hits such a condition, the page cache is drained for all CPUs in the system. By default, refilling the page cache is delayed with a 5-second interval until the memory pressure disappears; if needed, this can be changed. See the rcu_delay_page_cache_fill_msec module parameter.

Co-developed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Zqiang <qiang.zhang@windriver.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
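
A sketch of tuning the delay, assuming the parameter sits in the rcutree module namespace:

    rcutree.rcu_delay_page_cache_fill_msec=2000    # refill delay of 2 seconds (assumed prefix)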

- 07 May 2021, 1 commit

By Rasmus Villemoes
Patch series "background initramfs unpacking, and CONFIG_MODPROBE_PATH", v3. These two patches are independent, but better-together. The second is a rather trivial patch that simply allows the developer to change "/sbin/modprobe" to something else - e.g. the empty string, so that all request_module() during early boot return -ENOENT early, without even spawning a usermode helper, needlessly synchronizing with the initramfs unpacking. The first patch delegates decompressing the initramfs to a worker thread, allowing do_initcalls() in main.c to proceed to the device_ and late_ initcalls without waiting for that decompression (and populating of rootfs) to finish. Obviously, some of those later calls may rely on the initramfs being available, so I've added synchronization points in the firmware loader and usermodehelper paths - there might be other places that would need this, but so far no one has been able to think of any places I have missed. There's not much to win if most of the functionality needed during boot is only available as modules. But systems with a custom-made .config and initramfs can boot faster, partly due to utilizing more than one cpu earlier, partly by avoiding known-futile modprobe calls (which would still trigger synchronization with the initramfs unpacking, thus eliminating most of the first benefit). This patch (of 2): Most of the boot process doesn't actually need anything from the initramfs, until of course PID1 is to be executed. So instead of doing the decompressing and populating of the initramfs synchronously in populate_rootfs() itself, push that off to a worker thread. This is primarily motivated by an embedded ppc target, where unpacking even the rather modest sized initramfs takes 0.6 seconds, which is long enough that the external watchdog becomes unhappy that it doesn't get attention soon enough. By doing the initramfs decompression in a worker thread, we get to do the device_initcalls and hence start petting the watchdog much sooner. Normal desktops might benefit as well. On my mostly stock Ubuntu kernel, my initramfs is a 26M xz-compressed blob, decompressing to around 126M. That takes almost two seconds: [ 0.201454] Trying to unpack rootfs image as initramfs... [ 1.976633] Freeing initrd memory: 29416K Before this patch, these lines occur consecutively in dmesg. With this patch, the timestamps on these two lines is roughly the same as above, but with 172 lines inbetween - so more than one cpu has been kept busy doing work that would otherwise only happen after the populate_rootfs() finished. Should one of the initcalls done after rootfs_initcall time (i.e., device_ and late_ initcalls) need something from the initramfs (say, a kernel module or a firmware blob), it will simply wait for the initramfs unpacking to be done before proceeding, which should in theory make this completely safe. But if some driver pokes around in the filesystem directly and not via one of the official kernel interfaces (i.e. request_firmware*(), call_usermodehelper*) that theory may not hold - also, I certainly might have missed a spot when sprinkling wait_for_initramfs(). So there is an escape hatch in the form of an initramfs_async= command line parameter. 
Link: https://lkml.kernel.org/r/20210313212528.2956377-1-linux@rasmusvillemoes.dk
Link: https://lkml.kernel.org/r/20210313212528.2956377-2-linux@rasmusvillemoes.dk
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
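
The escape hatch is a boolean; assuming the usual kernel bool-parameter syntax:

    initramfs_async=0    # fall back to synchronous unpacking in populate_rootfs()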

- 06 May 2021, 1 commit

By Oscar Salvador
Self-stored memmap leads to a sparse memory situation which is unsuitable for workloads that require large contiguous memory chunks, so make this an opt-in which needs to be explicitly enabled. To control this, let memory_hotplug have its own memory space, as suggested by David, so we can add the memory_hotplug.memmap_on_memory parameter.

Link: https://lkml.kernel.org/r/20210421102701.25051-7-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
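
Opting in at boot, assuming the standard boolean module-parameter form:

    memory_hotplug.memmap_on_memory=on    # place the memmap on the hotplugged range itself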

- 05 May 2021, 2 commits

By Alexander Dahl
This documentation has been missing since the option was introduced in the driver.

Fixes: 8a68ea00 ("gpio: mockup: implement naming the lines")
Signed-off-by: Alexander Dahl <ada@thorsis.com>
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>

By Alexander Dahl
All other sections are ordered alphabetically, so do the same for gpio-mockup.

Fixes: 0f98dd1b ("gpio/mockup: add virtual gpio device")
Signed-off-by: Alexander Dahl <ada@thorsis.com>
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>

- 04 May 2021, 1 commit

By Nicholas Piggin
This reduces TLB misses by nearly 30x on a `git diff` workload on a 2-node POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%, due to vfs hashes being allocated with 2MB pages.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210503091755.613393-1-npiggin@gmail.com

- 01 May 2021, 1 commit

By Rafael Aquini
This is a minor addition to the allocator setup options, providing a simple way to re-enable cache merging on demand for builds that by default run with CONFIG_SLAB_MERGE_DEFAULT unset.

Link: https://lkml.kernel.org/r/20210319194506.200159-1-aquini@redhat.com
Signed-off-by: Rafael Aquini <aquini@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
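
Assuming the option added here is the slab_merge boot flag (mirroring the existing slab_nomerge):

    slab_merge    # turn cache merging back on for this boot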

- 17 April 2021, 1 commit

By Peter Zijlstra
CONFIG_SCHED_DEBUG is the build-time Kconfig knob; the boot param sched_debug and the /debug/sched/debug_enabled knobs control the sched_debug_enabled variable. But what they really do is make SCHED_DEBUG more verbose, so rename the lot.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

- 14 April 2021, 1 commit

By Sumit Garg
The current trusted keys framework is tightly coupled to using a TPM device as the underlying implementation, which makes it difficult for implementations like a Trusted Execution Environment (TEE) to provide trusted keys support in case the platform doesn't possess a TPM device.

Add a generic trusted keys framework where underlying implementations can be easily plugged in. Create struct trusted_key_ops to achieve this, which contains the necessary functions of a backend.

Also, define a module parameter to select a particular trust source in case a platform supports multiple trust sources. If it is not specified, the implementation iterates through the trust-sources list, starting with TPM, and assigns as backend the first trust source that initializes successfully during iteration.

Note that the current implementation only supports a single trust source at runtime, which is either selectable at compile time or during boot via the aforementioned module parameter.

Suggested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
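
Assuming the parameter is spelled trusted.source (its mainline name), a TEE-backed boot would look like:

    trusted.source=tee    # skip TPM probing and bind trusted keys to the TEE backend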

- 09 April 2021, 1 commit

By Marc Zyngier
CONFIG_ARM64_VHE was introduced with ARMv8.1 (some 7 years ago), and has been enabled by default for almost all that time. Given that newer systems that are VHE capable are finally becoming available, and that some systems are even incapable of not running VHE, drop the configuration altogether. Anyone willing to stick to non-VHE on VHE hardware for obscure reasons should use the 'kvm-arm.mode=nvhe' command-line option.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210408131010.1109027-4-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
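
The replacement is exactly the option quoted above:

    kvm-arm.mode=nvhe    # force the non-VHE hypervisor mode on VHE-capable hardware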

- 08 April 2021, 1 commit

By Kees Cook
This provides the ability for architectures to enable kernel stack base address offset randomization. This feature is controlled by the boot param "randomize_kstack_offset=on/off", with its default value set by CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT.

This feature is based on the original idea from the last public release of PaX's RANDKSTACK feature: https://pax.grsecurity.net/docs/randkstack.txt All the credit for the original idea goes to the PaX team. Note that the design and implementation of this upstream randomize_kstack_offset feature differs greatly from the RANDKSTACK feature (see below).

Reasoning for the feature:

This feature aims to make harder the various stack-based attacks that rely on deterministic stack structure. We have had many such attacks in the past (just to name a few):

https://jon.oberheide.org/files/infiltrate12-thestackisback.pdf
https://jon.oberheide.org/files/stackjacking-infiltrate11.pdf
https://googleprojectzero.blogspot.com/2016/06/exploiting-recursion-in-linux-kernel_20.html

As Linux kernel stack protections have been constantly improving (vmap-based stack allocation with guard pages, removal of thread_info, STACKLEAK), attackers have had to find new ways for their exploits to work. They have done so, continuing to rely on the kernel's stack determinism, in situations where VMAP_STACK and THREAD_INFO_IN_TASK_STRUCT were not relevant. For example, the following recent attacks would have been hampered if the stack offset was non-deterministic between syscalls:

https://repositorio-aberto.up.pt/bitstream/10216/125357/2/374717.pdf (page 70: targeting the pt_regs copy with linear stack overflow)

https://a13xp0p0v.github.io/2020/02/15/CVE-2019-18683.html (leaked stack address from one syscall as a target during next syscall)

The main idea is that since the stack offset is randomized on each system call, it is harder for an attack to reliably land in any particular place on the thread stack, even with address exposures, as the stack base will change on the next syscall. Also, since randomization is performed after placing pt_regs, the ptrace-based approach[1] to discover the randomized offset during a long-running syscall should not be possible.

Design description:

During most of the kernel's execution, it runs on the "thread stack", which is pretty deterministic in its structure: it is fixed in size, and on every entry from userspace to kernel on a syscall the thread stack starts construction from an address fetched from the per-cpu cpu_current_top_of_stack variable. The first element to be pushed to the thread stack is the pt_regs struct that stores all required CPU registers and syscall parameters. Finally the specific syscall function is called, with the stack being used as the kernel executes the resulting request.

The goal of the randomize_kstack_offset feature is to add a random offset after the pt_regs has been pushed to the stack and before the rest of the thread stack is used during the syscall processing, and to change it every time a process issues a syscall. The source of randomness is currently architecture-defined (but x86 is using the low byte of rdtsc()). Future improvements for different entropy sources are possible, but out of scope for this patch. Furthermore, to add more unpredictability, new offsets are chosen at the end of syscalls (the timing of which should be less easy to measure from userspace than at syscall entry time), and stored in a per-CPU variable, so that the life of the value does not stay explicitly tied to a single task.

As suggested by Andy Lutomirski, the offset is added using alloca() and an empty asm() statement with an output constraint, since it avoids changes to assembly syscall entry code, to the unwinder, and provides correct stack alignment as defined by the compiler.

In order to make this available by default with zero performance impact for those that don't want it, it is boot-time selectable with static branches. This way, if the overhead is not wanted, it can just be left turned off with no performance impact.

The generated assembly for x86_64 with GCC looks like this:

    ...
    ffffffff81003977: 65 8b 05 02 ea 00 7f    mov    %gs:0x7f00ea02(%rip),%eax    # 12380 <kstack_offset>
    ffffffff8100397e: 25 ff 03 00 00          and    $0x3ff,%eax
    ffffffff81003983: 48 83 c0 0f             add    $0xf,%rax
    ffffffff81003987: 25 f8 07 00 00          and    $0x7f8,%eax
    ffffffff8100398c: 48 29 c4                sub    %rax,%rsp
    ffffffff8100398f: 48 8d 44 24 0f          lea    0xf(%rsp),%rax
    ffffffff81003994: 48 83 e0 f0             and    $0xfffffffffffffff0,%rax
    ...

As a result of the above stack alignment, this patch introduces about 5 bits of randomness after pt_regs is spilled to the thread stack on x86_64, and 6 bits on x86_32 (since it has 1 fewer bit required for stack alignment). The amount of entropy could be adjusted based on how much of the stack space we wish to trade for security.

My measure of syscall performance overhead (on x86_64):

    lmbench: /usr/lib/lmbench/bin/x86_64-linux-gnu/lat_syscall -N 10000 null
    randomize_kstack_offset=y    Simple syscall: 0.7082 microseconds
    randomize_kstack_offset=n    Simple syscall: 0.7016 microseconds

So, roughly 0.9% overhead growth for a no-op syscall, which is very manageable. And for people that don't want this, it's off by default.

There are two gotchas with using the alloca() trick. First, compilers that have Stack Clash protection (-fstack-clash-protection) enabled by default (e.g. Ubuntu[3]) add pagesize stack probes to any dynamic stack allocations. While the randomization offset is always less than a page, the resulting assembly would still contain (unreachable!) probing routines, bloating the resulting assembly. To avoid this, -fno-stack-clash-protection is unconditionally added to the kernel Makefile since this is the only dynamic stack allocation in the kernel (now that VLAs have been removed) and it is provably safe from Stack Clash style attacks.

The second gotcha with alloca() is a negative interaction with -fstack-protector*, in that it sees the alloca() as an array allocation, which triggers the unconditional addition of the stack canary function pre/post-amble, slowing down syscalls regardless of the static branch. In order to avoid adding this unneeded check and its associated performance impact, architectures need to carefully remove uses of -fstack-protector-strong (or -fstack-protector) in the compilation units that use the add_random_kstack() macro and to audit the resulting stack mitigation coverage (to make sure no desired coverage disappears). No change is visible for this on x86 because the stack protector is already unconditionally disabled for the compilation unit, but the change is required on arm64. There is, unfortunately, no attribute that can be used to disable the stack protector for specific functions.

Comparison to the PaX RANDKSTACK feature:

The RANDKSTACK feature randomizes the location of the stack start (cpu_current_top_of_stack), i.e. including the location of the pt_regs structure itself on the stack. Initially this patch followed the same approach, but during the recent discussions[2] it has been determined to be of little value since, if ptrace functionality is available for an attacker, they can use PTRACE_PEEKUSR/PTRACE_POKEUSR to read/write different offsets in the pt_regs struct, observe the cache behavior of the pt_regs accesses, and figure out the random stack offset. Another difference is that the random offset is stored in a per-cpu variable, rather than having it be per-thread. As a result, these implementations differ a fair bit in their implementation details and results, though obviously the intent is similar.

[1] https://lore.kernel.org/kernel-hardening/2236FBA76BA1254E88B949DDB74E612BA4BC57C1@IRSMSX102.ger.corp.intel.com/
[2] https://lore.kernel.org/kernel-hardening/20190329081358.30497-1-elena.reshetova@intel.com/
[3] https://lists.ubuntu.com/archives/ubuntu-devel/2019-June/040741.html

Co-developed-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210401232347.2791257-4-keescook@chromium.org
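
Enabling it at boot when the build-time default is off:

    randomize_kstack_offset=on    # per-syscall random stack offset via static branch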

- 06 April 2021, 3 commits

By Maciej W. Rozycki
Carry the `probe_mask' parameter over from ide-generic to pata_legacy so that there is a way to prevent random poking at ISA port I/O locations in an attempt to discover adapter option cards with libata, like with the old IDE driver. By default all enabled locations are tried; however, this may interfere with a different kind of hardware responding there.

For example, with a plain (E)ISA system the driver tries all six possible locations:

    scsi host0: pata_legacy
    ata1: PATA max PIO4 cmd 0x1f0 ctl 0x3f6 irq 14
    ata1.00: ATA-4: ST310211A, 3.54, max UDMA/100
    ata1.00: 19541088 sectors, multi 16: LBA
    ata1.00: configured for PIO
    scsi 0:0:0:0: Direct-Access ATA ST310211A 3.54 PQ: 0 ANSI: 5
    scsi 0:0:0:0: Attached scsi generic sg0 type 0
    sd 0:0:0:0: [sda] 19541088 512-byte logical blocks: (10.0 GB/9.32 GiB)
    sd 0:0:0:0: [sda] Write Protect is off
    sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
    sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    sda: sda1 sda2 sda3
    sd 0:0:0:0: [sda] Attached SCSI disk
    scsi host1: pata_legacy
    ata2: PATA max PIO4 cmd 0x170 ctl 0x376 irq 15
    scsi host1: pata_legacy
    ata3: PATA max PIO4 cmd 0x1e8 ctl 0x3ee irq 11
    scsi host1: pata_legacy
    ata4: PATA max PIO4 cmd 0x168 ctl 0x36e irq 10
    scsi host1: pata_legacy
    ata5: PATA max PIO4 cmd 0x1e0 ctl 0x3e6 irq 8
    scsi host1: pata_legacy
    ata6: PATA max PIO4 cmd 0x160 ctl 0x366 irq 12

However, giving the kernel "pata_legacy.probe_mask=21" makes it try every other location only:

    scsi host0: pata_legacy
    ata1: PATA max PIO4 cmd 0x1f0 ctl 0x3f6 irq 14
    ata1.00: ATA-4: ST310211A, 3.54, max UDMA/100
    ata1.00: 19541088 sectors, multi 16: LBA
    ata1.00: configured for PIO
    scsi 0:0:0:0: Direct-Access ATA ST310211A 3.54 PQ: 0 ANSI: 5
    scsi 0:0:0:0: Attached scsi generic sg0 type 0
    sd 0:0:0:0: [sda] 19541088 512-byte logical blocks: (10.0 GB/9.32 GiB)
    sd 0:0:0:0: [sda] Write Protect is off
    sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
    sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    sda: sda1 sda2 sda3
    sd 0:0:0:0: [sda] Attached SCSI disk
    scsi host1: pata_legacy
    ata2: PATA max PIO4 cmd 0x1e8 ctl 0x3ee irq 11
    scsi host1: pata_legacy
    ata3: PATA max PIO4 cmd 0x1e0 ctl 0x3e6 irq 8

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/alpine.DEB.2.21.2103211800110.21463@angie.orcam.me.uk
Signed-off-by: Jens Axboe <axboe@kernel.dk>

By Maciej W. Rozycki
Add MODULE_PARM_DESC documentation and a kernel-parameters.txt entry.

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/alpine.DEB.2.21.2103212023190.21463@angie.orcam.me.uk
Signed-off-by: Jens Axboe <axboe@kernel.dk>

By Maciej W. Rozycki
Most pata_legacy module parameters lack MODULE_PARM_DESC documentation and none is described in kernel-parameters.txt. Also, several comments are inaccurate or wrong. Add the missing documentation pieces and reorder the parameters into a consistent block.

Remove inaccuracies as follows:

- `all' affects primary and secondary port ranges only, rather than all,
- `probe_all' affects tertiary and further port ranges, rather than all,
- `ht6560b' is for HT 6560B, rather than HT 6560A.

Signed-off-by: Maciej W. Rozycki <macro@orcam.me.uk>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/alpine.DEB.2.21.2103211909560.21463@angie.orcam.me.uk
Signed-off-by: Jens Axboe <axboe@kernel.dk>

- 29 March 2021, 1 commit

By Fenghua Yu
Since #DB for bus lock detect changes the split_lock_detect parameter, update the documentation for the changes.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Link: https://lore.kernel.org/r/20210322135325.682257-4-fenghua.yu@intel.com

- 18 March 2021, 1 commit

By Robin Murphy
In converting intel-iommu over to the common IOMMU DMA ops, it quietly lost the functionality of its "forcedac" option. Since this is a handy thing both for testing and for performance optimisation on certain platforms, reimplement it under the common IOMMU parameter namespace. For the sake of fixing the inadvertent breakage of the Intel-specific parameter, remove the dmar_forcedac remnants and hook it up as an alias while documenting the transition to the new common parameter.

Fixes: c588072b ("iommu/vt-d: Convert intel iommu driver to the iommu ops")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/7eece8e0ea7bfbe2cd0e30789e0d46df573af9b0.1614961776.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
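
Under the common namespace the option presumably becomes a boolean:

    iommu.forcedac=1    # don't fall back to 32-bit DMA addresses (was intel_iommu=forcedac)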

- 10 March 2021, 1 commit

By Kefeng Wang
The riscv [rv32_]defconfig enabled CONFIG_MEMTEST, but the memtest feature is not supported on RISC-V. Add an early_memtest() call to support memtest.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
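
With this in place, the generic boot option works on RISC-V as on other architectures:

    memtest=4    # run 4 memory test patterns over free memory during early boot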

- 09 March 2021, 3 commits

By Barry Song
X86 isn't the only architecture supporting NUMA_BALANCING. ARM64, PPC, S390 and RISCV also support it:

    arch$ git grep NUMA_BALANCING
    arm64/Kconfig: select ARCH_SUPPORTS_NUMA_BALANCING
    arm64/configs/defconfig:CONFIG_NUMA_BALANCING=y
    arm64/include/asm/pgtable.h:#ifdef CONFIG_NUMA_BALANCING
    powerpc/configs/powernv_defconfig:CONFIG_NUMA_BALANCING=y
    powerpc/configs/ppc64_defconfig:CONFIG_NUMA_BALANCING=y
    powerpc/configs/pseries_defconfig:CONFIG_NUMA_BALANCING=y
    powerpc/include/asm/book3s/64/pgtable.h:#ifdef CONFIG_NUMA_BALANCING
    powerpc/include/asm/book3s/64/pgtable.h:#ifdef CONFIG_NUMA_BALANCING
    powerpc/include/asm/book3s/64/pgtable.h:#endif /* CONFIG_NUMA_BALANCING */
    powerpc/include/asm/book3s/64/pgtable.h:#ifdef CONFIG_NUMA_BALANCING
    powerpc/include/asm/book3s/64/pgtable.h:#endif /* CONFIG_NUMA_BALANCING */
    powerpc/include/asm/nohash/pgtable.h:#ifdef CONFIG_NUMA_BALANCING
    powerpc/include/asm/nohash/pgtable.h:#endif /* CONFIG_NUMA_BALANCING */
    powerpc/platforms/Kconfig.cputype: select ARCH_SUPPORTS_NUMA_BALANCING
    riscv/Kconfig: select ARCH_SUPPORTS_NUMA_BALANCING
    riscv/include/asm/pgtable.h:#ifdef CONFIG_NUMA_BALANCING
    s390/Kconfig: select ARCH_SUPPORTS_NUMA_BALANCING
    s390/configs/debug_defconfig:CONFIG_NUMA_BALANCING=y
    s390/configs/defconfig:CONFIG_NUMA_BALANCING=y
    s390/include/asm/pgtable.h:#ifdef CONFIG_NUMA_BALANCING
    x86/Kconfig: select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
    x86/include/asm/pgtable.h:#ifdef CONFIG_NUMA_BALANCING
    x86/include/asm/pgtable.h:#endif /* CONFIG_NUMA_BALANCING */

On the other hand, setup_numabalancing() is implemented in mm/mempolicy.c, which is architecture-independent.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210302084159.33688-1-song.bao.hua@hisilicon.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
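
The boot parameter this documentation change covers takes enable/disable values:

    numa_balancing=disable    # turn off automatic NUMA balancing on any of these arches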

By Uladzislau Rezki (Sony)
The single-argument variant of kfree_rcu() is currently not tested by any member of the rcutorture test suite. This commit therefore adds rcuscale code to test it. This testing is controlled by two new boolean module parameters, kfree_rcu_test_single and kfree_rcu_test_double. If one is set and the other is not, only the corresponding variant is tested; otherwise both are tested, with the variant to be tested determined randomly on each invocation. Both of these module parameters are initialized to false, so setting either to true will test only that variant.

Suggested-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
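
Assuming the parameters live in the rcuscale module namespace, testing only the single-argument variant would look like:

    rcuscale.kfree_rcu_test_single=1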

By Paul Gortmaker
With the core bitmap support now accepting "N" as a placeholder for the end of the bitmap, "all" can be represented as "0-N", which has the advantage of not being specific to RCU (or any other subsystem). So deprecate the use of "all" by removing documentation references to it. The support itself needs to remain for now, since we don't know how many people out there are currently using it, but since it is in an __init area anyway, it isn't worth losing sleep over.

Cc: Yury Norov <yury.norov@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Josh Triplett <josh@joshtriplett.org>
Acked-by: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
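
For example, the old and new spellings of "every CPU" in a CPU-list parameter:

    rcu_nocbs=all    # deprecated
    rcu_nocbs=0-N    # equivalent, using the generic end-of-bitmap placeholder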