- 13 Oct 2017, 2 commits
-
-
By Tom Saeger
Make `input` document refs valid, including: - joystick - joystick-parport Signed-off-by: Tom Saeger <tom.saeger@oracle.com> Reviewed-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
By Tom Saeger
Make admin-guide document refs valid. Signed-off-by: Tom Saeger <tom.saeger@oracle.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 08 Oct 2017, 1 commit
-
-
By Rafael J. Wysocki
Commit 7aa7a036 (PM: docs: Delete the obsolete states.txt document) forgot to update kernel-parameters.txt with a reference to the new sleep-states.rst document, so it still points to the states.txt that was dropped; fix it now. Fixes: 7aa7a036 (PM: docs: Delete the obsolete states.txt document) Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 04 Oct 2017, 1 commit
-
-
By Borislav Petkov
It should say what that <integer> range is and what that integer value means. I had to look at the code... Signed-off-by: Borislav Petkov <bp@suse.de> [jc: changed non-null to nonzero] Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 27 Sep 2017, 1 commit
-
-
By Daniel Xu
The default value was changed from 10 minutes to disabled in commit a4199f5e. Signed-off-by: Daniel Xu <dxu@dxuuu.xyz> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 07 Sep 2017, 1 commit
-
-
By Michal Hocko
Patch series "cleanup zonelists initialization", v1. This is aimed at cleaning up the zonelists initialization code we have, but the primary motivation was bug report [2]; it got resolved, but the usage of stop_machine is just too ugly to live. Most patches are straightforward, but 3 of them need special consideration. Patch 1 removes zone ordered zonelists completely. I am CCing linux-api because this is a user visible change. As I argue in the patch description, I do not think we have a strong use case for it these days. I have kept the sysctl in place and warn into the log if somebody tries to configure zone lists ordering. If somebody has a real use case for it we can revert this patch, but I do not expect anybody will actually notice runtime differences. This patch is not strictly needed for the rest but it made patch 6 easier to implement. Patch 7 removes stop_machine from build_all_zonelists without adding any special synchronization between iterators and the updater, which I _believe_ is acceptable as explained in the changelog. I hope I am not missing anything. Patch 8 then removes zonelists_mutex, which is kind of ugly as well and not really needed AFAICS, but care should be taken when double checking my thinking. This patch (of 9): Supporting zone ordered zonelists costs us just a lot of code while the usefulness is arguable if existent at all. Mel has already made node ordering the default on 64b systems. 32b systems are still using ZONELIST_ORDER_ZONE because it is considered better to fall back to a different NUMA node rather than consume precious lowmem zones. This argument is, however, weakened by the fact that the memory reclaim has been reworked to be node rather than zone oriented. This means that lowmem requests have to skip over all highmem pages on LRUs already, and so zone ordering doesn't save the reclaim time much. So the only advantage of the zone ordering is under light memory pressure, when highmem requests do not ever hit into lowmem zones and the lowmem pressure doesn't need to reclaim. Considering that 32b NUMA systems are rather suboptimal already and it is generally advisable to use a 64b kernel on such HW, I believe we should rather care about code maintainability and just get rid of ZONELIST_ORDER_ZONE altogether. Keep the sysctl in place and warn if somebody tries to set zone ordering either from the kernel command line or the sysctl. [mhocko@suse.com: reading vm.numa_zonelist_order will never terminate] Link: http://lkml.kernel.org/r/20170721143915.14161-2-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Mel Gorman <mgorman@suse.de> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Shaohua Li <shaohua.li@intel.com> Cc: Toshi Kani <toshi.kani@hpe.com> Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Cc: <linux-api@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 Aug 2017, 1 commit
-
-
By Noam Camus
We add the ability for all cores on the NPS SoC to control the number of cycles a HW thread can execute before it is replaced with another eligible HW thread within the same core. The replacement is done by the HW scheduler. Signed-off-by: Noam Camus <noamca@mellanox.com> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> [vgupta: simplified handling of out-of-range argument value]
-
- 26 Aug 2017, 1 commit
-
-
By Tony Luck
Command line options allow us to ignore features that we don't want. Also we can re-enable options that have been disabled on a platform (so long as the underlying h/w actually supports the option). [ tglx: Marked the option array __initdata and the helper function __init ] Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: "Fenghua Yu" <fenghua.yu@intel.com> Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com> Cc: "Peter Zijlstra" <peterz@infradead.org> Cc: "Stephane Eranian" <eranian@google.com> Cc: "Andi Kleen" <ak@linux.intel.com> Cc: "David Carrillo-Cisneros" <davidcc@google.com> Cc: Vikas Shivappa <vikas.shivappa@linux.intel.com> Link: http://lkml.kernel.org/r/0c37b0d4dbc30977a3c1cee08b66420f83662694.1503512900.git.tony.luck@intel.com
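A self-contained sketch of the kind of option parsing such a command-line switch implies; the option names and the struct are illustrative stand-ins, not the kernel's actual option table. The syntax modeled is a comma-separated list where a leading '!' disables a feature and a bare name re-enables it.

```c
#define _GNU_SOURCE
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical option table; the real names and flags live in the kernel. */
struct feature_opt {
	const char *name;
	bool force_off;
	bool force_on;
};

static struct feature_opt opts[] = {
	{ "cmt" }, { "mbmtotal" }, { "mbmlocal" }, { "l3cat" },
};

/* Parse a comma-separated list such as "l3cat,!cmt": a leading '!'
 * disables the feature, a bare name re-enables it. */
static void parse_features(char *args)
{
	char *tok;

	while ((tok = strsep(&args, ",")) != NULL) {
		bool off = (*tok == '!');
		const char *name = off ? tok + 1 : tok;

		for (size_t i = 0; i < sizeof(opts) / sizeof(opts[0]); i++) {
			if (strcmp(name, opts[i].name) == 0) {
				opts[i].force_off = off;
				opts[i].force_on = !off;
				break;
			}
		}
	}
}

int main(void)
{
	char cmdline[] = "l3cat,!cmt";

	parse_features(cmdline);
	for (size_t i = 0; i < sizeof(opts) / sizeof(opts[0]); i++)
		printf("%-9s on=%d off=%d\n", opts[i].name,
		       opts[i].force_on, opts[i].force_off);
	return 0;
}
```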
-
- 09 Aug 2017, 1 commit
-
-
By Heiko Carstens
If memory is fragmented it is unlikely that large-order memory allocations succeed. This has been an issue with the vmcp device driver for a long time, since it requires large physically contiguous memory areas for large responses. To hopefully resolve this issue, make use of the contiguous memory allocator (cma). This patch adds a vmcp-specific cma area with a default size of 4MB. The size can be changed either via the VMCP_CMA_SIZE config option at compile time or with the "vmcp_cma" kernel parameter (e.g. "vmcp_cma=16m"). For any vmcp response buffers larger than 16k, memory from the cma area will be allocated. If such an allocation fails, there is a fallback to the buddy allocator. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
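A rough userspace illustration of the allocation policy described above; cma_alloc_sim(), the threshold constant, and the failure behaviour are assumptions for the sketch, not the actual s390 driver code or the kernel CMA API.

```c
#include <stdio.h>
#include <stdlib.h>

#define RESPONSE_CMA_THRESHOLD	(16 * 1024)	/* "larger than 16k" per the text above */

/* Stand-in for a dedicated contiguous area; pretend it fails for huge requests. */
static void *cma_alloc_sim(size_t size)
{
	return size > (4UL << 20) ? NULL : malloc(size);
}

/* Prefer the contiguous area for big responses, fall back to the normal allocator. */
static void *response_alloc(size_t size)
{
	void *buf = NULL;

	if (size > RESPONSE_CMA_THRESHOLD)
		buf = cma_alloc_sim(size);
	if (!buf)
		buf = malloc(size);	/* fallback path (buddy allocator in the kernel) */
	return buf;
}

int main(void)
{
	void *p = response_alloc(64 * 1024);

	printf("allocated 64k response buffer at %p\n", p);
	free(p);
	return 0;
}
```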
-
- 25 Jul 2017, 1 commit
-
-
By Paul E. McKenney
If a CPU is specified in the nohz_full= kernel boot parameter to a kernel built with CONFIG_NO_HZ_FULL=y, then that CPU's callbacks will be offloaded, just as if that CPU had also been specified in the rcu_nocbs= kernel boot parameter. But the current documentation states that the user must keep these two boot parameters manually synchronized. This commit therefore updates the documentation to reflect reality. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
- 18 Jul 2017, 1 commit
-
-
By Tom Lendacky
Create a Documentation entry to describe the AMD Secure Memory Encryption (SME) feature and add documentation for the mem_encrypt= kernel parameter. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brijesh Singh <brijesh.singh@amd.com> Cc: Dave Young <dyoung@redhat.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Larry Woodman <lwoodman@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matt Fleming <matt@codeblueprint.co.uk> Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Rik van Riel <riel@redhat.com> Cc: Toshimitsu Kani <toshi.kani@hpe.com> Cc: kasan-dev@googlegroups.com Cc: kvm@vger.kernel.org Cc: linux-arch@vger.kernel.org Cc: linux-doc@vger.kernel.org Cc: linux-efi@vger.kernel.org Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/ca0a0c13b055fd804cfc92cbaca8acd68057eed0.1500319216.git.thomas.lendacky@amd.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 07 Jul 2017, 2 commits
-
-
By Michal Hocko
Commit 20b2f52b ("numa: add CONFIG_MOVABLE_NODE for movable-dedicated node") has introduced CONFIG_MOVABLE_NODE without a good explanation of why it is actually useful. It makes a lot of sense to make the movable node semantic opt-in, but we already have that because the feature has to be explicitly enabled on the kernel command line. A config option on top only makes the configuration space larger without a good reason. It also adds additional ifdefery that pollutes the code. Just drop the config option and make it de-facto always enabled. This shouldn't introduce any change to the semantics. Link: http://lkml.kernel.org/r/20170529114141.536-3-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Reza Arbab <arbab@linux.vnet.ibm.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Yasuaki Ishimatsu <yasu.isimatu@gmail.com> Cc: Xishi Qiu <qiuxishi@huawei.com> Cc: Kani Toshimitsu <toshi.kani@hpe.com> Cc: Chen Yucong <slaoub@gmail.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Rientjes <rientjes@google.com> Cc: Daniel Kiper <daniel.kiper@oracle.com> Cc: Igor Mammedov <imammedo@redhat.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Kees Cook
Some hardened environments want to build kernels with slab_nomerge already set (so that they do not depend on remembering to set the kernel command line option). This is desired to reduce the risk of kernel heap overflows being able to overwrite objects from merged caches, and changes the requirements for cache layout control, increasing the difficulty of these attacks. By keeping caches unmerged, these kinds of exploits can usually only damage objects in the same cache (though the risk to metadata exploitation is unchanged). Link: http://lkml.kernel.org/r/20170620230911.GA25238@beast Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Daniel Micay <danielmicay@gmail.com> Cc: David Windsor <dave@nullcore.net> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Christoph Lameter <cl@linux.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Daniel Micay <danielmicay@gmail.com> Cc: David Windsor <dave@nullcore.net> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Andy Lutomirski <luto@kernel.org> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Cc: Tejun Heo <tj@kernel.org> Cc: Daniel Mack <daniel@zonque.org> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Helge Deller <deller@gmx.de> Cc: Rik van Riel <riel@redhat.com> Cc: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 Jul 2017, 1 commit
-
-
By Andy Lutomirski
The parameter is only present on x86_64 systems to save a few bytes, as PCID is always disabled on x86_32. Signed-off-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Nadav Amit <nadav.amit@gmail.com> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/8bbb2e65bcd249a5f18bfb8128b4689f08ac2b60.1498751203.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 23 Jun 2017, 1 commit
-
-
By Steffen Maier
Another place in lib/Kconfig.debug was already fixed in commit f8998c22 ("lib/Kconfig.debug: correct documentation paths"). Fixes: 9d85025b ("docs-rst: create an user's manual book") Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 22 Jun 2017, 2 commits
-
-
By Mimi Zohar
The builtin "ima_appraise_tcb" policy should require file signatures for at least a few of the hooks (e.g. kernel modules, firmware, and the kexec kernel image), but changing it would break the existing userspace/kernel ABI. This patch defines a new builtin policy named "secure_boot", which can be specified on the "ima_policy=" boot command line, independently or in conjunction with the "ima_appraise_tcb" policy, by specifying ima_policy="appraise_tcb | secure_boot". The new appraisal rules requiring file signatures will be added prior to the "ima_appraise_tcb" rules. Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Changelog: - Reference secure boot in the new builtin policy name. (Thiago Bauermann)
-
By Mimi Zohar
Add support for providing multiple builtin policies on the "ima_policy=" boot command line. Use "|" as the delimiter separating the policy names. Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
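An illustrative userspace parse of a value such as "appraise_tcb | secure_boot": split on the '|' delimiter and trim whitespace around each builtin policy name. This is only a sketch of the described syntax, not the kernel's actual IMA parser.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>

/* Split the ima_policy= value on '|' and print each builtin policy name. */
static void parse_ima_policy(char *val)
{
	char *tok;

	while ((tok = strsep(&val, "|")) != NULL) {
		while (*tok == ' ')
			tok++;				/* trim leading spaces */
		char *end = tok + strlen(tok);
		while (end > tok && end[-1] == ' ')
			*--end = '\0';			/* trim trailing spaces */
		if (*tok)
			printf("builtin policy: %s\n", tok);
	}
}

int main(void)
{
	char arg[] = "appraise_tcb | secure_boot";

	parse_ima_policy(arg);
	return 0;
}
```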
-
- 20 Jun 2017, 1 commit
-
-
By Andreas Färber
This implements an earlycon for Actions Semi S500/S900 SoCs. Based on the LeMaker linux-actions tree. Signed-off-by: Andreas Färber <afaerber@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 19 Jun 2017, 1 commit
-
-
By Hugh Dickins
The stack guard page is a useful feature to reduce the risk of the stack smashing into a different mapping. We have been using a single-page gap, which is sufficient to prevent having the stack adjacent to a different mapping. But this seems to be insufficient in light of the stack usage in userspace. E.g. glibc uses as large as 64kB alloca() in many commonly used functions. Others use constructs like gid_t buffer[NGROUPS_MAX], which is 256kB, or stack strings with MAX_ARG_STRLEN. This will become especially dangerous for suid binaries and the default unlimited stack size limit, because those applications can be tricked into consuming a large portion of the stack, and a single glibc call could jump over the guard page. These attacks are not theoretical, unfortunately. Make those attacks less probable by increasing the stack guard gap to 1MB (on systems with 4k pages; but make it depend on the page size because systems with larger base pages might cap stack allocations in PAGE_SIZE units), which should cover larger alloca() and VLA stack allocations. It is obviously not a full fix because the problem is somewhat inherent, but it should reduce the attack space a lot. One could argue that the gap size should be configurable from userspace, but that can be done later when somebody finds that the new 1MB is wrong for some special-case applications. For now, add a kernel command line option (stack_guard_gap) to specify the stack gap size (in page units). Implementation-wise, first delete all the old code for the stack guard page: because although we could get away with accounting one extra page in a stack vma, accounting a larger gap can break userspace - case in point, a program run with "ulimit -S -v 20000" failed when the 1MB gap was counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK and strict non-overcommit mode. Instead of keeping the gap inside the stack vma, maintain the stack guard gap as a gap between vmas: using vm_start_gap() in place of vm_start (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few places which need to respect the gap - mainly arch_get_unmapped_area(), and the vma tree's subtree_gap support for that. Original-patch-by: Oleg Nesterov <oleg@redhat.com> Original-patch-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Hugh Dickins <hughd@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Tested-by: Helge Deller <deller@gmx.de> # parisc Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
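A simplified model of the idea described above: the guard gap is kept outside the stack vma, and callers that need to respect it use vm_start_gap()/vm_end_gap() instead of vm_start/vm_end. Field names, flag values, and the example addresses are illustrative, not the kernel's exact definitions.

```c
#include <stdio.h>

#define PAGE_SHIFT	12			/* assume 4k pages for the sketch */
#define VM_GROWSDOWN	0x1UL
#define VM_GROWSUP	0x2UL

static unsigned long stack_guard_gap = 256UL << PAGE_SHIFT;	/* 256 pages = 1MB */

struct vma {
	unsigned long vm_start, vm_end, vm_flags;
};

/* Effective lower bound of a downward-growing stack, including the gap. */
static unsigned long vm_start_gap(const struct vma *vma)
{
	if (!(vma->vm_flags & VM_GROWSDOWN))
		return vma->vm_start;
	return vma->vm_start > stack_guard_gap ?
	       vma->vm_start - stack_guard_gap : 0;
}

/* Effective upper bound of an upward-growing stack, including the gap. */
static unsigned long vm_end_gap(const struct vma *vma)
{
	return (vma->vm_flags & VM_GROWSUP) ?
	       vma->vm_end + stack_guard_gap : vma->vm_end;
}

int main(void)
{
	struct vma stack = { 0x70000000UL, 0x70200000UL, VM_GROWSDOWN };

	printf("unmapped-area searches must stay below %#lx (not %#lx)\n",
	       vm_start_gap(&stack), stack.vm_start);
	printf("end gap boundary: %#lx\n", vm_end_gap(&stack));
	return 0;
}
```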
-
- 15 Jun 2017, 3 commits
-
-
By Marc Zyngier
Now that we're able to safely handle common sysreg access, let's give the user the opportunity to enable it by passing a specific command-line option (vgic_v3.common_trap). Tested-by: Alexander Graf <agraf@suse.de> Acked-by: David Daney <david.daney@cavium.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
By Marc Zyngier
Now that we're able to safely handle Group-0 sysreg access, let's give the user the opportunity to enable it by passing a specific command-line option (vgic_v3.group0_trap). Tested-by: Alexander Graf <agraf@suse.de> Acked-by: David Daney <david.daney@cavium.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
By Marc Zyngier
Now that we're able to safely handle Group-1 sysreg access, let's give the user the opportunity to enable it by passing a specific command-line option (vgic_v3.group1_trap). Tested-by: Alexander Graf <agraf@suse.de> Acked-by: David Daney <david.daney@cavium.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
- 09 Jun 2017, 3 commits
-
-
By Leo Yan
Add coresight_cpu_debug.enable to kernel-parameters.txt; this flag is used to enable/disable the CPU sampling based debugging. Signed-off-by: Leo Yan <leo.yan@linaro.org> Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Paul E. McKenney
The RCU_TORTURE_TEST_SLOW_PREINIT, RCU_TORTURE_TEST_SLOW_PREINIT_DELAY, RCU_TORTURE_TEST_SLOW_INIT, RCU_TORTURE_TEST_SLOW_INIT_DELAY, RCU_TORTURE_TEST_SLOW_CLEANUP, and RCU_TORTURE_TEST_SLOW_CLEANUP_DELAY Kconfig options are only useful for torture testing, and there are the rcutree.gp_cleanup_delay, rcutree.gp_init_delay, and rcutree.gp_preinit_delay kernel boot parameters that rcutorture can use instead. The effect of these parameters is to artificially slow down grace period initialization and cleanup in order to make some types of race conditions happen more often. This commit therefore simplifies Tree RCU a bit by removing the Kconfig options and adding the corresponding kernel parameters to rcutorture's .boot files instead. However, this commit also leaves out the kernel parameters for TREE02, TREE04, and TREE07 in order to have about the same number of tests slowed as not slowed. TREE01, TREE03, TREE05, and TREE06 are slowed, and the rest are not slowed. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
By Paul E. McKenney
If a given CPU never happens to ever start an SRCU grace period, the grace-period sequence counter might wrap. If this CPU were to decide to finally start a grace period, the state of its sdp->srcu_gp_seq_needed might make it appear that it has already requested this grace period, which would prevent starting the grace period. If no other CPU ever started a grace period again, this would look like a grace-period hang. Even if some other CPU took pity and started the needed grace period, the leaf rcu_node structure's ->srcu_data_have_cbs field won't have a record of the fact that this CPU has a callback pending, which would look like a very localized grace-period hang. This might seem very unlikely, but SRCU grace periods can take less than a microsecond on small systems, which means that overflow can happen in much less than an hour on a 32-bit embedded system. And embedded systems are especially likely to have long-term idle CPUs. Therefore, it makes sense to prevent this scenario from happening. This commit therefore scans each srcu_data structure occasionally, with frequency controlled by the srcutree.counter_wrap_check kernel boot parameter. This parameter can be set to something like 255 in order to exercise the counter-wrap-prevention code. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
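A short demonstration of why this works and why the periodic resync matters: sequence numbers are compared with a wrap-safe signed-difference test (in the spirit of the kernel's ULONG_CMP_GE()), which stays correct across counter wrap only as long as the two values never drift more than ULONG_MAX/2 apart. Names here are illustrative.

```c
#include <stdio.h>
#include <limits.h>

/* Wrap-safe "a is at or past b" comparison for unsigned sequence counters. */
static int seq_ge(unsigned long a, unsigned long b)
{
	return (long)(a - b) >= 0;
}

int main(void)
{
	unsigned long cur = ULONG_MAX - 2;	/* current GP sequence, about to wrap */
	unsigned long needed = ULONG_MAX - 10;	/* what this CPU recorded earlier */

	printf("before wrap: ge=%d\n", seq_ge(cur, needed));	/* prints 1 */
	cur += 20;						/* counter wrapped past 0 */
	printf("after wrap:  ge=%d\n", seq_ge(cur, needed));	/* still 1 */

	/* If 'needed' were left stale for ~ULONG_MAX/2 grace periods, the
	 * signed difference would flip sign and the CPU would wrongly think
	 * its grace period had already been requested -- the hang described
	 * above.  The occasional scan keeps the stale value refreshed. */
	return 0;
}
```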
-
- 08 Jun 2017, 2 commits
-
-
By Paul E. McKenney
This commit adds a writer_holdoff boot parameter to rcuperf, which is intended to be used to test Tree SRCU's auto-expediting. This boot parameter is in microseconds, and defaults to zero (that is, disabled). Set it to a value a bit larger than srcutree.exp_holdoff, keeping the nanosecond/microsecond conversion in mind, to force Tree SRCU to auto-expedite more aggressively. This commit also adds documentation for this parameter, and fixes some alphabetization while in the neighborhood. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
By Paul E. McKenney
This commit upgrades rcuperf so that it can do performance testing on asynchronous grace-period primitives such as call_srcu(). There is a new rcuperf.gp_async module parameter that specifies this new behavior, with the pre-existing rcuperf.gp_exp testing expedited grace periods such as synchronize_rcu_expedited(), and with the default being to test synchronous non-expedited grace periods such as synchronize_rcu(). There is also a new rcuperf.gp_async_max module parameter that specifies the maximum number of outstanding callbacks per writer kthread, defaulting to 1,000. When this limit is exceeded, the writer thread invokes the appropriate flavor of rcu_barrier() to wait for callbacks to drain. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> [ paulmck: Removed the redundant initialization noted by Arnd Bergmann. ]
-
- 01 Jun 2017, 1 commit
-
-
By Nicholas Piggin
Provide a dt_cpu_ftrs= cmdline option to disable the dt_cpu_ftrs CPU feature discovery and fall back to the "cputable" based version. Also allow control of advertising unknown features to userspace with this parameter, and remove the clunky CONFIG option. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> [mpe: Add explicit early check of bootargs in dt_cpu_ftrs_init()] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 24 May 2017, 1 commit
-
-
By Baoquan He
In commit: 9710f581 ("x86, mm: Let "memmap=" take more entries one time") ... 'memmap=' was changed to accept multiple, comma-delimited values in a single entry, so update the related description. In the special case of specifying only a size value without an offset, like memmap=nn[KMG], memmap behaves similarly to mem=nn[KMG], so update that here too. Furthermore, for memmap=nn[KMG]$ss[KMG], an escape character needs to be added before '$' for some bootloaders. E.g. in grub2, if we specify memmap=100M$5G as suggested by the documentation, "memmap=100MG" gets passed to the kernel. Clarify all this. Signed-off-by: Baoquan He <bhe@redhat.com> Acked-by: Kees Cook <keescook@chromium.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: dan.j.williams@intel.com Cc: douly.fnst@cn.fujitsu.com Cc: dyoung@redhat.com Cc: m.mizuma@jp.fujitsu.com Link: http://lkml.kernel.org/r/1494654390-23861-4-git-send-email-bhe@redhat.com [ Various spelling fixes. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
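A rough userspace model of memparse()-style size parsing and of splitting a "memmap=" value on commas, as described above. The real kernel parser also handles the '@', '#' and '!' region types; this sketch only shows the '$' (reserve) case and the bare size case.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse "nn" with an optional K/M/G suffix, like the kernel's memparse(). */
static unsigned long long parse_size(const char *s, const char **endp)
{
	char *end;
	unsigned long long v = strtoull(s, &end, 0);

	switch (*end) {
	case 'G': case 'g': v <<= 10; /* fall through */
	case 'M': case 'm': v <<= 10; /* fall through */
	case 'K': case 'k': v <<= 10; end++;
	}
	*endp = end;
	return v;
}

int main(void)
{
	/* In grub2 the '$' must be escaped (e.g. memmap=100M\$5G), otherwise
	 * the kernel sees a mangled value, as the commit text above notes. */
	char arg[] = "100M$5G,64K$1G";
	char *p = arg, *entry;

	while ((entry = strsep(&p, ",")) != NULL) {
		const char *rest;
		unsigned long long size = parse_size(entry, &rest);

		if (*rest == '$') {
			unsigned long long start = parse_size(rest + 1, &rest);
			printf("reserve %llu bytes at %#llx\n", size, start);
		} else {
			printf("limit usable memory to %llu bytes\n", size);
		}
	}
	return 0;
}
```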
-
- 10 May 2017, 1 commit
-
-
By Andre Przywara
The Marvell Armada 3700 UART uses "ar3700_uart" for its earlycon name. Adjust the documentation to match the code. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Jonathan Corbet <corbet@lwn.net>
-
- 01 May 2017, 1 commit
-
-
The AVR32 architecture support has been removed from the Linux kernel, hence remove all references to it from Documentation. Signed-off-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> Signed-off-by: Håvard Skinnemoen <hskinnemoen@gmail.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com> Acked-by: Andy Shevchenko <andy.shevchenko@gmail.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
-
- 27 Apr 2017, 2 commits
-
-
By Paul E. McKenney
On small systems, in the absence of readers, expedited SRCU grace periods can complete in less than a microsecond. This means that an eight-CPU system can have all CPUs doing synchronize_srcu() in a tight loop and almost always expedite. This might actually be desirable in some situations, but in general it is a good way to needlessly burn CPU cycles. And in those situations where it is desirable, your friend is the function synchronize_srcu_expedited(). For other situations, this commit adds a kernel parameter that specifies a holdoff between completing the last SRCU grace period and auto-expediting the next. If the next grace period starts before the holdoff expires, auto-expediting is disabled. The holdoff is 50 microseconds by default, and can be tuned to the desired number of nanoseconds. A value of zero disables auto-expediting. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Tested-by: Mike Galbraith <efault@gmx.de>
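An illustrative userspace version of the holdoff decision described above: auto-expedite only if the previous grace period ended at least the holdoff ago, and never if the holdoff is zero. Variable names are made up; the kernel tracks this with its own time primitives.

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t exp_holdoff_ns = 50 * 1000;	/* 50 microseconds, the default above */
static uint64_t last_gp_end_ns;

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static int should_auto_expedite(void)
{
	if (exp_holdoff_ns == 0)
		return 0;	/* a holdoff of zero disables auto-expediting */
	return now_ns() - last_gp_end_ns >= exp_holdoff_ns;
}

int main(void)
{
	last_gp_end_ns = now_ns();
	printf("right after a grace period: auto-expedite=%d\n",
	       should_auto_expedite());		/* within the holdoff: 0 */

	struct timespec d = { 0, 100 * 1000 };	/* wait 100us */
	nanosleep(&d, NULL);
	printf("after the holdoff elapsed:  auto-expedite=%d\n",
	       should_auto_expedite());		/* holdoff expired: 1 */
	return 0;
}
```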
-
By Shaohua Li
IOMMU harms performance significantly when we run very fast networking workloads. It's 40GB networking doing an XDP test. Software overhead is barely noticeable, but it's the IOTLB miss (based on our analysis) which kills the performance. We observed the same performance issue even with software passthrough (identity mapping); only the hardware passthrough survives. The pps with iommu (with software passthrough) is only about ~30% of that without it. This is a limitation in hardware based on our observation, so we'd like to disable the IOMMU force-on, but we do want to use TBOOT and we can sacrifice the DMA security brought by IOMMU. I must admit I know nothing about TBOOT, but the TBOOT guys (cc-ed) think not enabling IOMMU is totally ok. So introduce a new boot option to disable the force-on. It's kind of silly we need to run into intel_iommu_init even without force-on, but we need to disable the TBOOT PMR registers. For systems without the boot option, nothing is changed. Signed-off-by: Shaohua Li <shli@fb.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 21 Apr 2017, 1 commit
-
-
By Christoph Hellwig
The objlayout code has been in the tree, but it's been unmaintained and no server product for it actually ever shipped. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
- 06 Apr 2017, 1 commit
-
-
By Will Deacon
The IOMMU core currently initialises the default domain for each group to IOMMU_DOMAIN_DMA, under the assumption that devices will use IOMMU-backed DMA ops by default. However, in some cases it is desirable for the DMA ops to bypass the IOMMU for performance reasons, reserving use of translation for subsystems such as VFIO that require it for enforcing device isolation. Rather than modify each IOMMU driver to provide different semantics for DMA domains, we introduce a command line parameter that can be used to change the type of the default domain. Passthrough can then be specified using "iommu.passthrough=1" on the kernel command line. Signed-off-by: Will Deacon <will.deacon@arm.com>
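A sketch of the idea behind such a boolean parameter: the value chooses the default domain type used for newly created groups, either translated DMA or identity (passthrough). The enum constants and accepted strings below are illustrative, not the IOMMU core's actual definitions.

```c
#include <stdio.h>
#include <string.h>

enum domain_type { DOMAIN_DMA, DOMAIN_IDENTITY };

static enum domain_type default_domain = DOMAIN_DMA;	/* translated by default */

/* Interpret the boolean value of a parameter like iommu.passthrough=. */
static void parse_iommu_passthrough(const char *val)
{
	if (!strcmp(val, "1") || !strcmp(val, "on") || !strcmp(val, "y"))
		default_domain = DOMAIN_IDENTITY;	/* bypass translation */
	else if (!strcmp(val, "0") || !strcmp(val, "off") || !strcmp(val, "n"))
		default_domain = DOMAIN_DMA;		/* IOMMU-backed DMA ops */
}

int main(void)
{
	parse_iommu_passthrough("1");
	printf("default domain: %s\n",
	       default_domain == DOMAIN_IDENTITY ? "identity (passthrough)"
						  : "dma (translated)");
	return 0;
}
```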
-
- 01 Apr 2017, 1 commit
-
-
By Mark Rutland
Disable kasan after the first report. There are several reasons for this: - A single bug quite often triggers multiple invalid memory accesses, causing a storm in the dmesg. - A write OOB access might corrupt metadata, so the next report will print bogus alloc/free stacktraces. - Reports after the first could easily be not bugs themselves but just side effects of the first one. Given that multiple reports usually only do harm, it makes sense to disable kasan after the first one. If the user wants to see all the reports, the boot-time parameter kasan_multi_shot must be used. [aryabinin@virtuozzo.com: wrote changelog and doc, added missing include] Link: http://lkml.kernel.org/r/20170323154416.30257-1-aryabinin@virtuozzo.com Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
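A minimal model of "report once unless multi-shot": the first caller of report_enabled() wins, later reports are suppressed unless the multi-shot flag (set from the kasan_multi_shot boot parameter in the real kernel) is on. Names are illustrative, not the KASAN implementation.

```c
#include <stdio.h>
#include <stdbool.h>
#include <stdatomic.h>

static atomic_bool already_reported;	/* set by the first report */
static bool multi_shot;			/* would come from "kasan_multi_shot" */

static bool report_enabled(void)
{
	if (multi_shot)
		return true;
	/* atomic test-and-set: only the first caller sees "false" here */
	return !atomic_exchange(&already_reported, true);
}

int main(void)
{
	printf("1st report shown: %d\n", report_enabled());	/* 1 */
	printf("2nd report shown: %d\n", report_enabled());	/* 0: suppressed */

	multi_shot = true;
	printf("with multi-shot:  %d\n", report_enabled());	/* 1 again */
	return 0;
}
```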
-
- 28 Mar 2017, 1 commit
-
-
By Borislav Petkov
Introduce a simple data structure for collecting correctable errors along with accessors. A more detailed description is in the code itself. The error decoding is done with the decoding chain now: mce_first_notifier() gets to see the error first, and the CEC decides whether to log it and then keep the rest of the chain from hearing about it - basically the main reason for the CE collector - or to continue running the notifiers. When the CEC hits the action threshold, it will try to soft-offline the page containing the ECC error, and then the whole decoding chain gets to see the error. Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-edac <linux-edac@vger.kernel.org> Link: http://lkml.kernel.org/r/20170327093304.10683-5-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
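A toy version of the collector idea: count correctable errors per page frame and flag a page once it crosses the action threshold, at which point the real code would attempt to soft-offline it. The table size, threshold, and names are illustrative, not the actual CEC implementation.

```c
#include <stdio.h>
#include <stdbool.h>

#define CEC_SLOTS	8	/* toy capacity */
#define CEC_THRESHOLD	3	/* toy action threshold */

struct ce_entry {
	unsigned long pfn;
	unsigned int count;
	bool used;
};

static struct ce_entry cec[CEC_SLOTS];

/* Record one correctable error; return true when the page should be offlined. */
static bool cec_add_error(unsigned long pfn)
{
	struct ce_entry *free_slot = NULL;

	for (int i = 0; i < CEC_SLOTS; i++) {
		if (cec[i].used && cec[i].pfn == pfn)
			return ++cec[i].count >= CEC_THRESHOLD;
		if (!cec[i].used && !free_slot)
			free_slot = &cec[i];
	}
	if (free_slot) {
		free_slot->used = true;
		free_slot->pfn = pfn;
		free_slot->count = 1;
	}
	return false;
}

int main(void)
{
	for (int i = 0; i < 4; i++)
		if (cec_add_error(0x1234))
			printf("pfn 0x1234 crossed the threshold, soft-offline it\n");
	return 0;
}
```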
-
- 21 Mar 2017, 1 commit
-
-
By Lu Baolu
Add support for earlyprintk by writing debug messages to the USB3 debug port. Users can use this type of early printk by specifying the kernel parameter "earlyprintk=xdbc". This gives users a chance of providing debugging output. The hardware for the USB3 debug port requires DMA memory blocks. This requires delaying setting up the debugging hardware and registering the boot console until the memblocks are filled. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathias Nyman <mathias.nyman@linux.intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-usb@vger.kernel.org Link: http://lkml.kernel.org/r/1490083293-3792-4-git-send-email-baolu.lu@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 06 Mar 2017, 2 commits
-
-
By Tobias Jakobi
For mouse devices we can currently change the polling interval via usbhid.mousepoll. Implement the same thing for joysticks, so users can reduce input latency this way. This has been tested with a Logitech RumblePad 2 and jspoll=2, resulting in a polling rate of 500Hz (verified with evhz). Signed-off-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Acked-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
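A small sketch of the effect such a parameter has: a nonzero value overrides the joystick's own polling interval (in ms), so jspoll=2 yields roughly 500Hz. The clamp to 255 and the function names are assumptions for illustration, not the usbhid driver's actual code.

```c
#include <stdio.h>

static unsigned int jspoll;	/* would come from the usbhid.jspoll parameter */

/* Pick the effective polling interval for a joystick device. */
static unsigned int joystick_interval(unsigned int device_interval_ms)
{
	if (jspoll == 0)
		return device_interval_ms;	/* parameter unset: keep the device value */
	if (jspoll > 255)
		return 255;			/* assumed cap: bInterval is one byte */
	return jspoll;
}

int main(void)
{
	jspoll = 2;	/* e.g. booted with usbhid.jspoll=2 */
	printf("effective interval: %ums (~%uHz)\n",
	       joystick_interval(8), 1000 / joystick_interval(8));
	return 0;
}
```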
-
By Len Brown
Add the "cpufreq.off=1" cmdline option. At boot time, this allows a user to request CONFIG_CPU_FREQ=n behavior from a kernel built with CONFIG_CPU_FREQ=y. This is analogous to the existing "cpuidle.off=1" option and CONFIG_CPU_IDLE=y. This capability is valuable when we need to debug end-user issues in the BIOS or in Linux. It is also convenient for enabling comparisons, which may otherwise require a new kernel, or help from BIOS SETUP, which may be buggy or unavailable. Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-