- 02 Aug 2018, 3 commits
-
-
Committed by Christoph Hellwig
Almost all architectures include it. Add an ARCH_NO_PREEMPT symbol to disable preempt support for alpha, hexagon, non-coldfire m68k and user mode Linux. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Committed by Christoph Hellwig
Move the source of lib/Kconfig.debug and arch/$(ARCH)/Kconfig.debug to the top-level Kconfig. For two architectures that means moving their arch-specific symbols in that menu into a new arch Kconfig.debug file, and for a few more creating a dummy file so that we can include it unconditionally. Also move the actual 'Kernel hacking' menu to lib/Kconfig.debug, where it belongs. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Committed by Christoph Hellwig
Instead of duplicating the source statements in every architecture, just do it once in the top-level Kconfig file. Note that with this the inclusion of arch/$(SRCARCH)/Kconfig moves out of the top-level Kconfig into arch/Kconfig so that we don't violate ordering constraints while keeping a sensible menu structure. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 06 Jul 2018, 1 commit
-
-
Committed by Paul Burton
We currently attempt to check whether a physical address range provided to __ioremap() may be in use by the page allocator by examining the value of PageReserved for each page in the region - lowmem pages not marked reserved are presumed to be in use by the page allocator, and requests to ioremap them fail. The way we check this has been broken since commit 92923ca3 ("mm: meminit: only set page reserved in the memblock region"), because memblock will typically not have any knowledge of non-RAM pages and therefore those pages will not have the PageReserved flag set. Thus when we attempt to ioremap a region outside of RAM we incorrectly fail believing that the region is RAM that may be in use. In most cases ioremap() on MIPS will take a fast-path to use the unmapped kseg1 or xkphys virtual address spaces and never hit this path, so the only way to hit it is for a MIPS32 system to attempt to ioremap() an address range in lowmem with flags other than _CACHE_UNCACHED. Perhaps the most straightforward way to do this is using ioremap_uncached_accelerated(), which is how the problem was discovered. Fix this by making use of walk_system_ram_range() to test the address range provided to __ioremap() against only RAM pages, rather than all lowmem pages. This means that if we have a lowmem I/O region, which is very common for MIPS systems, we're free to ioremap() address ranges within it. A nice bonus is that the test is no longer limited to lowmem. The approach here matches the way x86 performed the same test after commit c81c8a1e ("x86, ioremap: Speed up check for RAM pages") until x86 moved towards a slightly more complicated check using walk_mem_res() for unrelated reasons with commit 0e4c12b4 ("x86/mm, resource: Use PAGE_KERNEL protection for ioremap of memory pages"). Signed-off-by: Paul Burton <paul.burton@mips.com> Reported-by: Serge Semin <fancer.lancer@gmail.com> Tested-by: Serge Semin <fancer.lancer@gmail.com> Fixes: 92923ca3 ("mm: meminit: only set page reserved in the memblock region") Cc: James Hogan <jhogan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org # v4.2+ Patchwork: https://patchwork.linux-mips.org/patch/19786/
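A minimal sketch of the check described above, modelled on the x86 helper the message cites; the function and variable names here are illustrative rather than a literal excerpt of the MIPS patch:

    #include <linux/ioport.h>   /* walk_system_ram_range() */
    #include <linux/mm.h>       /* pfn_valid(), PageReserved() */
    #include <linux/pfn.h>      /* PFN_DOWN(), PFN_UP() */

    /* Called only for chunks that memblock knows to be RAM; report whether
     * any page in the chunk is RAM the page allocator may be using. */
    static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
                                   void *arg)
    {
            unsigned long i;

            for (i = 0; i < nr_pages; i++)
                    if (pfn_valid(start_pfn + i) &&
                        !PageReserved(pfn_to_page(start_pfn + i)))
                            return 1;

            return 0;
    }

    /* In __ioremap(): only genuine RAM can now veto the mapping, so lowmem
     * I/O regions (and highmem ranges) are no longer rejected. */
    if (walk_system_ram_range(PFN_DOWN(phys_addr),
                              PFN_UP(phys_addr + size) - PFN_DOWN(phys_addr),
                              NULL, __ioremap_check_ram) == 1)
            return NULL;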
-
- 29 Jun 2018, 2 commits
-
-
Committed by Paul Burton
The current MIPS implementation of arch_trigger_cpumask_backtrace() is broken because it attempts to use synchronous IPIs despite the fact that it may be run with interrupts disabled. This means that when arch_trigger_cpumask_backtrace() is invoked, for example by the RCU CPU stall watchdog, we may: - Deadlock due to use of synchronous IPIs with interrupts disabled, causing the CPU that's attempting to generate the backtrace output to hang itself. - Not succeed in generating the desired output from remote CPUs. - Produce warnings about this from smp_call_function_many(), for example: [42760.526910] INFO: rcu_sched detected stalls on CPUs/tasks: [42760.535755] 0-...!: (1 GPs behind) idle=ade/140000000000000/0 softirq=526944/526945 fqs=0 [42760.547874] 1-...!: (0 ticks this GP) idle=e4a/140000000000000/0 softirq=547885/547885 fqs=0 [42760.559869] (detected by 2, t=2162 jiffies, g=266689, c=266688, q=33) [42760.568927] ------------[ cut here ]------------ [42760.576146] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:416 smp_call_function_many+0x88/0x20c [42760.587839] Modules linked in: [42760.593152] CPU: 2 PID: 1216 Comm: sh Not tainted 4.15.4-00373-gee058bb4d0c2 #2 [42760.603767] Stack : 8e09bd20 8e09bd20 8e09bd20 fffffff0 00000007 00000006 00000000 8e09bca8 [42760.616937] 95b2b379 95b2b379 807a0080 00000007 81944518 0000018a 00000032 00000000 [42760.630095] 00000000 00000030 80000000 00000000 806eca74 00000009 8017e2b8 000001a0 [42760.643169] 00000000 00000002 00000000 8e09baa4 00000008 808b8008 86d69080 8e09bca0 [42760.656282] 8e09ad50 805e20aa 00000000 00000000 00000000 8017e2b8 00000009 801070ca [42760.669424] ... [42760.673919] Call Trace: [42760.678672] [<27fde568>] show_stack+0x70/0xf0 [42760.685417] [<84751641>] dump_stack+0xaa/0xd0 [42760.692188] [<699d671c>] __warn+0x80/0x92 [42760.698549] [<68915d41>] warn_slowpath_null+0x28/0x36 [42760.705912] [<f7c76c1c>] smp_call_function_many+0x88/0x20c [42760.713696] [<6bbdfc2a>] arch_trigger_cpumask_backtrace+0x30/0x4a [42760.722216] [<f845bd33>] rcu_dump_cpu_stacks+0x6a/0x98 [42760.729580] [<796e7629>] rcu_check_callbacks+0x672/0x6ac [42760.737476] [<059b3b43>] update_process_times+0x18/0x34 [42760.744981] [<6eb94941>] tick_sched_handle.isra.5+0x26/0x38 [42760.752793] [<478d3d70>] tick_sched_timer+0x1c/0x50 [42760.759882] [<e56ea39f>] __hrtimer_run_queues+0xc6/0x226 [42760.767418] [<e88bbcae>] hrtimer_interrupt+0x88/0x19a [42760.775031] [<6765a19e>] gic_compare_interrupt+0x2e/0x3a [42760.782761] [<0558bf5f>] handle_percpu_devid_irq+0x78/0x168 [42760.790795] [<90c11ba2>] generic_handle_irq+0x1e/0x2c [42760.798117] [<1b6d462c>] gic_handle_local_int+0x38/0x86 [42760.805545] [<b2ada1c7>] gic_irq_dispatch+0xa/0x14 [42760.812534] [<90c11ba2>] generic_handle_irq+0x1e/0x2c [42760.820086] [<c7521934>] do_IRQ+0x16/0x20 [42760.826274] [<9aef3ce6>] plat_irq_dispatch+0x62/0x94 [42760.833458] [<6a94b53c>] except_vec_vi_end+0x70/0x78 [42760.840655] [<22284043>] smp_call_function_many+0x1ba/0x20c [42760.848501] [<54022b58>] smp_call_function+0x1e/0x2c [42760.855693] [<ab9fc705>] flush_tlb_mm+0x2a/0x98 [42760.862730] [<0844cdd0>] tlb_flush_mmu+0x1c/0x44 [42760.869628] [<cb259b74>] arch_tlb_finish_mmu+0x26/0x3e [42760.877021] [<1aeaaf74>] tlb_finish_mmu+0x18/0x66 [42760.883907] [<b3fce717>] exit_mmap+0x76/0xea [42760.890428] [<c4c8a2f6>] mmput+0x80/0x11a [42760.896632] [<a41a08f4>] do_exit+0x1f4/0x80c [42760.903158] [<ee01cef6>] do_group_exit+0x20/0x7e [42760.909990] [<13fa8d54>] __wake_up_parent+0x0/0x1e [42760.917045] [<46cf89d0>] 
smp_call_function_many+0x1a2/0x20c [42760.924893] [<8c21a93b>] syscall_common+0x14/0x1c [42760.931765] ---[ end trace 02aa09da9dc52a60 ]--- [42760.938342] ------------[ cut here ]------------ [42760.945311] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:291 smp_call_function_single+0xee/0xf8 ... This patch switches MIPS' arch_trigger_cpumask_backtrace() to use async IPIs & smp_call_function_single_async() in order to resolve this problem. We ensure use of the pre-allocated call_single_data_t structures is serialized by maintaining a cpumask indicating that they're busy, and refusing to attempt to send an IPI when a CPU's bit is set in this mask. This should only happen if a CPU hasn't responded to a previous backtrace IPI - ie. if it's hung - and we print a warning to the console in this case. I've marked this for stable branches as far back as v4.9, to which it applies cleanly. Strictly speaking the faulty MIPS implementation can be traced further back to commit 856839b7 ("MIPS: Add arch_trigger_all_cpu_backtrace() function") in v3.19, but kernel versions v3.19 through v4.8 will require further work to backport due to the rework performed in commit 9a01c3ed ("nmi_backtrace: add more trigger_*_cpu_backtrace() methods"). Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/19597/ Cc: James Hogan <jhogan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Huacai Chen <chenhc@lemote.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org # v4.9+ Fixes: 856839b7 ("MIPS: Add arch_trigger_all_cpu_backtrace() function") Fixes: 9a01c3ed ("nmi_backtrace: add more trigger_*_cpu_backtrace() methods")
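The busy-mask scheme described here can be sketched roughly as follows; treat it as an illustration of the idea (a pre-allocated per-CPU csd guarded by a cpumask), not a verbatim copy of the patch. handle_backtrace() stands in for the IPI handler that dumps the remote CPU's state and clears its busy bit:

    static DEFINE_PER_CPU(call_single_data_t, backtrace_csd);
    static cpumask_t backtrace_csd_busy;

    static void raise_backtrace(cpumask_t *mask)
    {
            call_single_data_t *csd;
            int cpu;

            for_each_cpu(cpu, mask) {
                    /*
                     * If the CPU's csd is still marked busy, it never handled
                     * the previous backtrace IPI - it is presumably hung, so
                     * warn rather than reuse the in-flight csd.
                     */
                    if (cpumask_test_and_set_cpu(cpu, &backtrace_csd_busy)) {
                            pr_warn("Unable to send backtrace IPI to CPU%d - perhaps it hung?\n",
                                    cpu);
                            continue;
                    }

                    csd = &per_cpu(backtrace_csd, cpu);
                    csd->func = handle_backtrace;
                    smp_call_function_single_async(cpu, csd);
            }
    }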
-
Committed by Paul Burton
The generic nmi_cpu_backtrace() function calls show_regs() when a struct pt_regs is available, and dump_stack() otherwise. If we were to make use of the generic nmi_cpu_backtrace() with MIPS' current implementation of show_regs() this would mean that we see only register data with no accompanying stack information, in contrast with our current implementation which calls dump_stack() regardless of whether register state is available. In preparation for making use of the generic nmi_cpu_backtrace() to implement arch_trigger_cpumask_backtrace(), have our implementation of show_regs() call dump_stack() and drop the explicit dump_stack() call in arch_dump_stack() which is invoked by arch_trigger_cpumask_backtrace(). This will allow the output we produce to remain the same after a later patch switches to using nmi_cpu_backtrace(). It may mean that we produce extra stack output in other uses of show_regs(), but this: 1) Seems harmless. 2) Is good for consistency between arch_trigger_cpumask_backtrace() and other users of show_regs(). 3) Matches the behaviour of the ARM & PowerPC architectures. Marked for stable back to v4.9 as a prerequisite of the following patch "MIPS: Call dump_stack() from show_regs()". Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/19596/ Cc: James Hogan <jhogan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Huacai Chen <chenhc@lemote.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org # v4.9+
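In effect the change amounts to something like the following (sketch of arch/mips/kernel/traps.c):

    void show_regs(struct pt_regs *regs)
    {
            __show_regs(regs);
            dump_stack();   /* stack trace now accompanies every register dump */
    }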
-
- 25 Jun 2018, 1 commit
-
-
Committed by Paul Burton
Commit 784e0300 ("rseq: Avoid infinite recursion when delivering SIGSEGV") added a new ksig argument to the rseq_signal_deliver() & rseq_handle_notify_resume() functions, and was merged in v4.18-rc2. Meanwhile MIPS support for restartable sequences was also merged in v4.18-rc2 with commit 9ea141ad ("MIPS: Add support for restartable sequences"), and therefore didn't get updated for the API change. This results in build failures like the following: CC arch/mips/kernel/signal.o arch/mips/kernel/signal.c: In function 'handle_signal': arch/mips/kernel/signal.c:804:22: error: passing argument 1 of 'rseq_signal_deliver' from incompatible pointer type [-Werror=incompatible-pointer-types] rseq_signal_deliver(regs); ^~~~ In file included from ./include/linux/context_tracking.h:5, from arch/mips/kernel/signal.c:12: ./include/linux/sched.h:1811:56: note: expected 'struct ksignal *' but argument is of type 'struct pt_regs *' static inline void rseq_signal_deliver(struct ksignal *ksig, ~~~~~~~~~~~~~~~~^~~~ arch/mips/kernel/signal.c:804:2: error: too few arguments to function 'rseq_signal_deliver' rseq_signal_deliver(regs); ^~~~~~~~~~~~~~~~~~~ Fix this by adding the ksig argument as was done for other architectures in commit 784e0300 ("rseq: Avoid infinite recursion when delivering SIGSEGV"). Signed-off-by: Paul Burton <paul.burton@mips.com> Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Patchwork: https://patchwork.linux-mips.org/patch/19603/ Cc: James Hogan <jhogan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org
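The fix itself is the same one-line API update already made on the other architectures; roughly, in handle_signal():

    /* pass the ksignal through so rseq can act on the signal being delivered */
    rseq_signal_deliver(ksig, regs);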
-
- 20 Jun 2018, 6 commits
-
-
Committed by Paul Burton
Wire up the io_pgetevents syscall that was introduced by commit 7a074e96 ("aio: implement io_pgetevents"). Signed-off-by: Paul Burton <paul.burton@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/19593/ Cc: James Hogan <jhogan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org
-
Committed by Paul Burton
Wire up the restartable sequences (rseq) syscall for MIPS. This was introduced by commit d7822b1e ("rseq: Introduce restartable sequences system call") & MIPS now supports the prerequisites. Signed-off-by: Paul Burton <paul.burton@mips.com> Reviewed-by: James Hogan <jhogan@kernel.org> Patchwork: https://patchwork.linux-mips.org/patch/19525/ Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org
-
Committed by Paul Burton
Syscalls are not allowed inside restartable sequences, so add a call to rseq_syscall() at the very beginning of the system call exit path when CONFIG_DEBUG_RSEQ=y. This will help us to detect whether there is a syscall issued erroneously inside a restartable sequence. Signed-off-by: Paul Burton <paul.burton@mips.com> Reviewed-by: James Hogan <jhogan@kernel.org> Patchwork: https://patchwork.linux-mips.org/patch/19522/ Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org
-
Committed by Paul Burton
Implement support for restartable sequences on MIPS, which requires 3 simple things: - Call rseq_handle_notify_resume() on return to userspace if TIF_NOTIFY_RESUME is set. - Call rseq_signal_deliver() to fixup the pre-signal stack frame when a signal is delivered whilst executing a restartable sequence critical section. - Select CONFIG_HAVE_RSEQ. Signed-off-by: Paul Burton <paul.burton@mips.com> Reviewed-by: James Hogan <jhogan@kernel.org> Patchwork: https://patchwork.linux-mips.org/patch/19523/ Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org
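A sketch of how the first hook typically looks on the return-to-userspace path (the signal-delivery hook is the call shown under the 25 Jun fix above); the placement and surrounding code in arch/mips/kernel/signal.c are abbreviated here:

    asmlinkage void do_notify_resume(struct pt_regs *regs, void *unused,
                                     __u32 thread_info_flags)
    {
            if (thread_info_flags & _TIF_NOTIFY_RESUME) {
                    clear_thread_flag(TIF_NOTIFY_RESUME);
                    tracehook_notify_resume(regs);
                    rseq_handle_notify_resume(NULL, regs);
            }
    }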
-
Committed by Huacai Chen
While a barrier is present in the outX() functions before the register write, a similar barrier is missing in the inX() functions after the register read. This could allow memory accesses following inX() to observe stale data. This patch is very similar to commit a1cc7034 ("MIPS: io: Add barrier after register read in readX()"). Because war_io_reorder_wmb() is used by both writeX() and outX(), if readX() needs a barrier then so does inX(). Cc: stable@vger.kernel.org Signed-off-by: Huacai Chen <chenhc@lemote.com> Patchwork: https://patchwork.linux-mips.org/patch/19516/ Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: James Hogan <james.hogan@mips.com> Cc: linux-mips@linux-mips.org Cc: Fuxin Zhang <zhangfx@lemote.com> Cc: Zhangjin Wu <wuzhangjin@gmail.com> Cc: Huacai Chen <chenhuacai@gmail.com>
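Schematically the change mirrors what readX() already does; the port-address helper below is a made-up name used only for illustration:

    static inline u8 inb(unsigned long port)
    {
            u8 val;

            war_io_reorder_wmb();                           /* existing barrier before the access */
            val = *(volatile u8 *)mips_io_port_addr(port);  /* register read (helper name assumed) */
            rmb();                                          /* new: order the read before later loads */
            return val;
    }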
-
Committed by Matthias Schiffer
ftrace_graph_caller was never run after calling ftrace_trace_function, breaking the function graph tracer. Fix this, bringing it in line with the x86 implementation. While we're at it, also streamline the control flow of _mcount a bit to reduce the number of branches. This issue was reported before: https://www.linux-mips.org/archives/linux-mips/2014-11/msg00295.html Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net> Tested-by: Matt Redfearn <matt.redfearn@mips.com> Patchwork: https://patchwork.linux-mips.org/patch/18929/ Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: stable@vger.kernel.org # v3.17+
-
- 19 Jun 2018, 2 commits
-
-
Committed by Tokunori Ikegami
The erratum and workaround are described by BCM5300X-ES300-RDS.pdf as below. R10: PCIe Transactions Periodically Fail Description: The BCM5300X PCIe does not maintain transaction ordering. This may cause PCIe transaction failure. Fix Comment: Add a dummy PCIe configuration read after a PCIe configuration write to ensure PCIe configuration access ordering. Set ES bit of CP0 config7 register to enable sync function so that the sync instruction is functional. Resolution: hndpci.c: extpci_write_config() hndmips.c: si_mips_init() mipsinc.h CONF7_ES This is fixed by the CFE MIPS bcmsi chipset driver also for BCM47XX. Also the dummy PCIe configuration read is already implemented in the Linux BCMA driver. Enable ExternalSync in Config7 when CONFIG_BCMA_DRIVER_PCI_HOSTMODE=y too so that the sync instruction is externalised. Signed-off-by: Tokunori Ikegami <ikegami@allied-telesis.co.jp> Reviewed-by: Paul Burton <paul.burton@mips.com> Acked-by: Hauke Mehrtens <hauke@hauke-m.de> Cc: Chris Packham <chris.packham@alliedtelesis.co.nz> Cc: Rafał Miłecki <zajec5@gmail.com> Cc: linux-mips@linux-mips.org Cc: stable@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/19461/ Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Linus Walleij
I used bad names in my clumsiness when rewriting many board files to use GPIO descriptors instead of platform data. A few had the platform_device ID set to -1 which would indeed give the device name "i2c-gpio". But several had it set to >=0 which gives the names "i2c-gpio.0", "i2c-gpio.1" ... Fix the one affected board in the MIPS tree. Sorry. Fixes: b2e63555 ("i2c: gpio: Convert to use descriptors") Reported-by: Simon Guinot <simon.guinot@sequanux.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Wolfram Sang <wsa@the-dreams.de> Cc: Simon Guinot <simon.guinot@sequanux.org> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 4.15+ Patchwork: https://patchwork.linux-mips.org/patch/19387/ Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 15 Jun 2018, 2 commits
-
-
Committed by Stefan Agner
With PHYS_ADDR_MAX there is now a type safe variant for all bits set. Make use of it. Patch created using a semantic patch as follows: // <smpl> @@ typedef phys_addr_t; @@ -(phys_addr_t)ULLONG_MAX +PHYS_ADDR_MAX // </smpl> Link: http://lkml.kernel.org/r/20180419214204.19322-1-stefan@agner.ch Signed-off-by: Stefan Agner <stefan@agner.ch> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Masahiro Yamada
HAVE_CC_STACKPROTECTOR should be selected by architectures with stack canary implementation. It is not about the compiler support. For the consistency with commit 050e9baa ("Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables"), remove 'CC_' from the config symbol. I moved the 'select' lines to keep the alphabetical sorting. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 Jun 2018, 1 commit
-
-
Committed by Linus Torvalds
The changes to automatically test for working stack protector compiler support in the Kconfig files removed the special STACKPROTECTOR_AUTO option that picked the strongest stack protector that the compiler supported. That was all a nice cleanup - it makes no sense to have the AUTO case now that the Kconfig phase can just determine the compiler support directly. HOWEVER. It also meant that doing "make oldconfig" would now _disable_ the strong stackprotector if you had AUTO enabled, because in a legacy config file, the sane stack protector configuration would look like CONFIG_HAVE_CC_STACKPROTECTOR=y # CONFIG_CC_STACKPROTECTOR_NONE is not set # CONFIG_CC_STACKPROTECTOR_REGULAR is not set # CONFIG_CC_STACKPROTECTOR_STRONG is not set CONFIG_CC_STACKPROTECTOR_AUTO=y and when you ran this through "make oldconfig" with the Kbuild changes, it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version used to be disabled (because it was really enabled by AUTO), and would disable it in the new config, resulting in: CONFIG_HAVE_CC_STACKPROTECTOR=y CONFIG_CC_HAS_STACKPROTECTOR_NONE=y CONFIG_CC_STACKPROTECTOR=y # CONFIG_CC_STACKPROTECTOR_STRONG is not set CONFIG_CC_HAS_SANE_STACKPROTECTOR=y That's dangerously subtle - people could suddenly find themselves with the weaker stack protector setup without even realizing. The solution here is to rename not just the old REGULAR stack protector option, but also the strong one. This does that by just removing the CC_ prefix entirely for the user choices, because it really is not about the compiler support (the compiler support now instead automatically impacts _visibility_ of the options to users). This results in "make oldconfig" actually asking the user for their choice, so that we don't have any silent subtle security model changes. The end result would generally look like this: CONFIG_HAVE_CC_STACKPROTECTOR=y CONFIG_CC_HAS_STACKPROTECTOR_NONE=y CONFIG_STACKPROTECTOR=y CONFIG_STACKPROTECTOR_STRONG=y CONFIG_CC_HAS_SANE_STACKPROTECTOR=y where the "CC_" versions really are about internal compiler infrastructure, not the user selections. Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 Jun 2018, 2 commits
-
-
Committed by Kees Cook
The kzalloc() function has a 2-factor argument form, kcalloc(). This patch replaces cases of: kzalloc(a * b, gfp) with: kcalloc(a * b, gfp) as well as handling cases of: kzalloc(a * b * c, gfp) with: kzalloc(array3_size(a, b, c), gfp) as it's slightly less ugly than: kzalloc_array(array_size(a, b), c, gfp) This does, however, attempt to ignore constant size factors like: kzalloc(4 * 1024, gfp) though any constants defined via macros get caught up in the conversion. Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant. The Coccinelle script used for this was: // Fix redundant parens around sizeof(). @@ type TYPE; expression THING, E; @@ ( kzalloc( - (sizeof(TYPE)) * E + sizeof(TYPE) * E , ...) | kzalloc( - (sizeof(THING)) * E + sizeof(THING) * E , ...) ) // Drop single-byte sizes and redundant parens. @@ expression COUNT; typedef u8; typedef __u8; @@ ( kzalloc( - sizeof(u8) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(__u8) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(char) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(unsigned char) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(u8) * COUNT + COUNT , ...) | kzalloc( - sizeof(__u8) * COUNT + COUNT , ...) | kzalloc( - sizeof(char) * COUNT + COUNT , ...) | kzalloc( - sizeof(unsigned char) * COUNT + COUNT , ...) ) // 2-factor product with sizeof(type/expression) and identifier or constant. @@ type TYPE; expression THING; identifier COUNT_ID; constant COUNT_CONST; @@ ( - kzalloc + kcalloc ( - sizeof(TYPE) * (COUNT_ID) + COUNT_ID, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * COUNT_ID + COUNT_ID, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * (COUNT_CONST) + COUNT_CONST, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * COUNT_CONST + COUNT_CONST, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * (COUNT_ID) + COUNT_ID, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * COUNT_ID + COUNT_ID, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * (COUNT_CONST) + COUNT_CONST, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * COUNT_CONST + COUNT_CONST, sizeof(THING) , ...) ) // 2-factor product, only identifiers. @@ identifier SIZE, COUNT; @@ - kzalloc + kcalloc ( - SIZE * COUNT + COUNT, SIZE , ...) // 3-factor product with 1 sizeof(type) or sizeof(expression), with // redundant parens removed. @@ expression THING; identifier STRIDE, COUNT; type TYPE; @@ ( kzalloc( - sizeof(TYPE) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(TYPE) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(TYPE) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(TYPE) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(THING) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kzalloc( - sizeof(THING) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kzalloc( - sizeof(THING) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kzalloc( - sizeof(THING) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) ) // 3-factor product with 2 sizeof(variable), with redundant parens removed. @@ expression THING1, THING2; identifier COUNT; type TYPE1, TYPE2; @@ ( kzalloc( - sizeof(TYPE1) * sizeof(TYPE2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) 
| kzalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | kzalloc( - sizeof(THING1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kzalloc( - sizeof(THING1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kzalloc( - sizeof(TYPE1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) | kzalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) ) // 3-factor product, only identifiers, with redundant parens removed. @@ identifier STRIDE, SIZE, COUNT; @@ ( kzalloc( - (COUNT) * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - (COUNT) * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - (COUNT) * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - (COUNT) * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) ) // Any remaining multi-factor products, first at least 3-factor products, // when they're not all constants... @@ expression E1, E2, E3; constant C1, C2, C3; @@ ( kzalloc(C1 * C2 * C3, ...) | kzalloc( - (E1) * E2 * E3 + array3_size(E1, E2, E3) , ...) | kzalloc( - (E1) * (E2) * E3 + array3_size(E1, E2, E3) , ...) | kzalloc( - (E1) * (E2) * (E3) + array3_size(E1, E2, E3) , ...) | kzalloc( - E1 * E2 * E3 + array3_size(E1, E2, E3) , ...) ) // And then all remaining 2 factors products when they're not all constants, // keeping sizeof() as the second factor argument. @@ expression THING, E1, E2; type TYPE; constant C1, C2, C3; @@ ( kzalloc(sizeof(THING) * C2, ...) | kzalloc(sizeof(TYPE) * C2, ...) | kzalloc(C1 * C2 * C3, ...) | kzalloc(C1 * C2, ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * (E2) + E2, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * E2 + E2, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * (E2) + E2, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * E2 + E2, sizeof(THING) , ...) | - kzalloc + kcalloc ( - (E1) * E2 + E1, E2 , ...) | - kzalloc + kcalloc ( - (E1) * (E2) + E1, E2 , ...) | - kzalloc + kcalloc ( - E1 * E2 + E1, E2 , ...) ) Signed-off-by: Kees Cook <keescook@chromium.org>
-
Committed by Kees Cook
The kmalloc() function has a 2-factor argument form, kmalloc_array(). This patch replaces cases of: kmalloc(a * b, gfp) with: kmalloc_array(a * b, gfp) as well as handling cases of: kmalloc(a * b * c, gfp) with: kmalloc(array3_size(a, b, c), gfp) as it's slightly less ugly than: kmalloc_array(array_size(a, b), c, gfp) This does, however, attempt to ignore constant size factors like: kmalloc(4 * 1024, gfp) though any constants defined via macros get caught up in the conversion. Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant. The tools/ directory was manually excluded, since it has its own implementation of kmalloc(). The Coccinelle script used for this was: // Fix redundant parens around sizeof(). @@ type TYPE; expression THING, E; @@ ( kmalloc( - (sizeof(TYPE)) * E + sizeof(TYPE) * E , ...) | kmalloc( - (sizeof(THING)) * E + sizeof(THING) * E , ...) ) // Drop single-byte sizes and redundant parens. @@ expression COUNT; typedef u8; typedef __u8; @@ ( kmalloc( - sizeof(u8) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(__u8) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(char) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(unsigned char) * (COUNT) + COUNT , ...) | kmalloc( - sizeof(u8) * COUNT + COUNT , ...) | kmalloc( - sizeof(__u8) * COUNT + COUNT , ...) | kmalloc( - sizeof(char) * COUNT + COUNT , ...) | kmalloc( - sizeof(unsigned char) * COUNT + COUNT , ...) ) // 2-factor product with sizeof(type/expression) and identifier or constant. @@ type TYPE; expression THING; identifier COUNT_ID; constant COUNT_CONST; @@ ( - kmalloc + kmalloc_array ( - sizeof(TYPE) * (COUNT_ID) + COUNT_ID, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * COUNT_ID + COUNT_ID, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * (COUNT_CONST) + COUNT_CONST, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * COUNT_CONST + COUNT_CONST, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * (COUNT_ID) + COUNT_ID, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * COUNT_ID + COUNT_ID, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * (COUNT_CONST) + COUNT_CONST, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * COUNT_CONST + COUNT_CONST, sizeof(THING) , ...) ) // 2-factor product, only identifiers. @@ identifier SIZE, COUNT; @@ - kmalloc + kmalloc_array ( - SIZE * COUNT + COUNT, SIZE , ...) // 3-factor product with 1 sizeof(type) or sizeof(expression), with // redundant parens removed. @@ expression THING; identifier STRIDE, COUNT; type TYPE; @@ ( kmalloc( - sizeof(TYPE) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(TYPE) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(TYPE) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(TYPE) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kmalloc( - sizeof(THING) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kmalloc( - sizeof(THING) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kmalloc( - sizeof(THING) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kmalloc( - sizeof(THING) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) ) // 3-factor product with 2 sizeof(variable), with redundant parens removed. 
@@ expression THING1, THING2; identifier COUNT; type TYPE1, TYPE2; @@ ( kmalloc( - sizeof(TYPE1) * sizeof(TYPE2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | kmalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | kmalloc( - sizeof(THING1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kmalloc( - sizeof(THING1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kmalloc( - sizeof(TYPE1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) | kmalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) ) // 3-factor product, only identifiers, with redundant parens removed. @@ identifier STRIDE, SIZE, COUNT; @@ ( kmalloc( - (COUNT) * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - (COUNT) * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - (COUNT) * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - (COUNT) * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kmalloc( - COUNT * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) ) // Any remaining multi-factor products, first at least 3-factor products, // when they're not all constants... @@ expression E1, E2, E3; constant C1, C2, C3; @@ ( kmalloc(C1 * C2 * C3, ...) | kmalloc( - (E1) * E2 * E3 + array3_size(E1, E2, E3) , ...) | kmalloc( - (E1) * (E2) * E3 + array3_size(E1, E2, E3) , ...) | kmalloc( - (E1) * (E2) * (E3) + array3_size(E1, E2, E3) , ...) | kmalloc( - E1 * E2 * E3 + array3_size(E1, E2, E3) , ...) ) // And then all remaining 2 factors products when they're not all constants, // keeping sizeof() as the second factor argument. @@ expression THING, E1, E2; type TYPE; constant C1, C2, C3; @@ ( kmalloc(sizeof(THING) * C2, ...) | kmalloc(sizeof(TYPE) * C2, ...) | kmalloc(C1 * C2 * C3, ...) | kmalloc(C1 * C2, ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * (E2) + E2, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(TYPE) * E2 + E2, sizeof(TYPE) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * (E2) + E2, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - sizeof(THING) * E2 + E2, sizeof(THING) , ...) | - kmalloc + kmalloc_array ( - (E1) * E2 + E1, E2 , ...) | - kmalloc + kmalloc_array ( - (E1) * (E2) + E1, E2 , ...) | - kmalloc + kmalloc_array ( - E1 * E2 + E1, E2 , ...) ) Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 02 Jun 2018, 1 commit
-
-
Committed by Souptick Joarder
Use new return type vm_fault_t for fault handler. For now, this is just documenting that the function returns a VM_FAULT value rather than an errno. Once all instances are converted, vm_fault_t will become a distinct type. See commit 1c8f4220 ("mm: change return type to vm_fault_t"). Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com> Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 01 Jun 2018, 1 commit
-
-
Committed by Luc Van Oostenryck
By default, sparse assumes a 64bit machine when compiled on x86-64 and 32bit when compiled on anything else. This can of course create all sorts of problems for the other archs, like issuing false warnings ('shift too big (32) for type unsigned long'), or worse, failing to emit legitimate warnings. Fix this by adding the -m32/-m64 flag, depending on CONFIG_64BIT, to CHECKFLAGS in the main Makefile (and so for all archs). Also, remove the now unneeded -m32/-m64 in arch specific Makefiles. Signed-off-by: Luc Van Oostenryck <luc.vanoostenryck@gmail.com> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 24 May 2018, 3 commits
-
-
Committed by Maciej W. Rozycki
Use 64-bit accesses for 64-bit floating-point general registers with PTRACE_PEEKUSR, removing the truncation of their upper halves in the FR=1 mode, caused by commit bbd426f5 ("MIPS: Simplify FP context access"), which inadvertently switched them to using 32-bit accesses. The PTRACE_POKEUSR side is fine as it's never been broken and continues using 64-bit accesses. Fixes: bbd426f5 ("MIPS: Simplify FP context access") Signed-off-by: Maciej W. Rozycki <macro@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 3.15+ Patchwork: https://patchwork.linux-mips.org/patch/19334/ Signed-off-by: James Hogan <jhogan@kernel.org>
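The PEEKUSR path then mirrors the POKEUSR side; approximately:

    /* arch/mips/kernel/ptrace.c, PTRACE_PEEKUSR, FPR_BASE..FPR_BASE+31 */
    if (test_tsk_thread_flag(child, TIF_32BIT_FPREGS)) {
            /* FR=0: odd registers are the high halves of the even ones */
            tmp = get_fpr32(&fregs[(addr & ~1) - FPR_BASE], addr & 1);
    } else {
            /* FR=1: read the full 64-bit register, no truncation */
            tmp = get_fpr64(&fregs[addr - FPR_BASE], 0);
    }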
-
Committed by Maciej W. Rozycki
Having PR_FP_MODE_FRE (i.e. Config5.FRE) set without PR_FP_MODE_FR (i.e. Status.FR) is not supported as the lone purpose of Config5.FRE is to emulate Status.FR=0 handling on FPU hardware that has Status.FR=1 hardwired[1][2]. Also we do not handle this case elsewhere, and assume throughout our code that TIF_HYBRID_FPREGS and TIF_32BIT_FPREGS cannot be set both at once for a task, leading to inconsistent behaviour if this does happen. Return unsuccessfully then from prctl(2) PR_SET_FP_MODE calls requesting PR_FP_MODE_FRE to be set with PR_FP_MODE_FR clear. This corresponds to modes allowed by `mips_set_personality_fp'. References: [1] "MIPS Architecture For Programmers, Vol. III: MIPS32 / microMIPS32 Privileged Resource Architecture", Imagination Technologies, Document Number: MD00090, Revision 6.02, July 10, 2015, Table 9.69 "Config5 Register Field Descriptions", p. 262 [2] "MIPS Architecture For Programmers, Volume III: MIPS64 / microMIPS64 Privileged Resource Architecture", Imagination Technologies, Document Number: MD00091, Revision 6.03, December 22, 2015, Table 9.72 "Config5 Register Field Descriptions", p. 288 Fixes: 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS") Signed-off-by: Maciej W. Rozycki <macro@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Cc: <stable@vger.kernel.org> # 4.0+ Patchwork: https://patchwork.linux-mips.org/patch/19327/ Signed-off-by: James Hogan <jhogan@kernel.org>
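The rejected combination can be filtered out early in the prctl handler; approximately:

    /* In mips_set_process_fp_mode(): FRE without FR=1 is not a supported
     * mode, since Config5.FRE only exists to emulate Status.FR=0 on FPU
     * hardware with Status.FR=1 hardwired. */
    if ((value & PR_FP_MODE_FRE) && !(value & PR_FP_MODE_FR))
            return -EOPNOTSUPP;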
-
Committed by Maciej W. Rozycki
Correct comments across ptrace(2) handlers about an FPU register context layout discrepancy between MIPS I and later ISAs, which was fixed with `linux-mips.org' (LMO) commit 42533948caac ("Major pile of FP emulator changes."), the fix corrected with LMO commit 849fa7a50dff ("R3k FPU ptrace() handling fixes."), and then broken and fixed over and over again, until last time fixed with commit 80cbfad7 ("MIPS: Correct MIPS I FP context layout"). NB running the GDB test suite for the relevant ABI/ISA and watching out for regressions is advisable when poking around ptrace(2). Signed-off-by: Maciej W. Rozycki <macro@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/19326/ Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 22 May 2018, 1 commit
-
-
Committed by Bjorn Helgaas
Use the pci_info() and pci_err() wrappers for dev_printk() when possible. Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: James Hogan <jhogan@kernel.org>
-
- 21 May 2018, 1 commit
-
-
Committed by Matt Redfearn
Assembly language within the MIPS kernel conventionally indents instructions which are in a branch delay slot to make them easier to see. Commit 8483b14a ("MIPS: lib: memset: Whitespace fixes") rather inexplicably removed all of these indentations from memset.S. Reinstate the convention for all instructions in a branch delay slot. This effectively reverts the above commit, plus other locations introduced with MIPSR6 support. Signed-off-by: Matt Redfearn <matt.redfearn@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/19111/ Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 17 May 2018, 2 commits
-
-
Committed by Wolfram Sang
This header only contains platform_data. Move it to the proper directory. Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Lee Jones <lee.jones@linaro.org> Acked-by: Robert Jarzmik <robert.jarzmik@free.fr> Acked-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Acked-by: James Hogan <jhogan@kernel.org> Acked-by: Greg Ungerer <gerg@uclinux.org>
-
Committed by Guenter Roeck
Most mips builds fail with arch/mips/kernel/traps.c: In function ‘force_fcr31_sig’: arch/mips/kernel/traps.c:732:2: error: ‘si_code’ may be used uninitialized in this function Fix the problem by initializing si_code with FPE_FLTUNK (undiagnosed floating point exception). Fixes: f43a54a0 ("signal/mips: Use force_sig_fault where appropriate") Cc: linux-mips@linux-mips.org Cc: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
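The fix amounts to giving si_code a safe default before the FCSR cause bits are inspected; schematically (exact cause-bit handling abbreviated):

    void force_fcr31_sig(unsigned long fcr31, void __user *fault_addr,
                         struct task_struct *tsk)
    {
            int si_code = FPE_FLTUNK;       /* default: undiagnosed FP exception */

            if (fcr31 & FPU_CSR_INV_X)
                    si_code = FPE_FLTINV;
            else if (fcr31 & FPU_CSR_DIV_X)
                    si_code = FPE_FLTDIV;
            else if (fcr31 & FPU_CSR_OVF_X)
                    si_code = FPE_FLTOVF;
            else if (fcr31 & FPU_CSR_UDF_X)
                    si_code = FPE_FLTUND;
            else if (fcr31 & FPU_CSR_INE_X)
                    si_code = FPE_FLTRES;

            force_sig_fault(SIGFPE, si_code, fault_addr, tsk);
    }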
-
- 16 May 2018, 1 commit
-
-
Committed by Christoph Hellwig
Add variants of proc_create{,_data} that directly take a seq_file show callback and drastically reduce the boilerplate code in the callers. All trivial callers converted over. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 15 May 2018, 10 commits
-
-
Committed by Matt Redfearn
When perf is used in non-system mode, i.e. without specifying CPUs to count on, check_and_calc_range falls into the case when it sets M_TC_EN_ALL in the counter config_base. This has the impact of always counting for all of the threads in a core, even when the user has not requested it. For example this can be seen with a test program which executes 30002 instructions and 10000 branches running on one VPE and a busy load on the other VPE in the core. Without this commit, the expected count is not returned: taskset 4 dd if=/dev/zero of=/dev/null count=100000 & taskset 8 perf stat -e instructions:u,branches:u ./test_prog Performance counter stats for './test_prog': 103235 instructions:u 17015 branches:u In order to fix this, remove check_and_calc_range entirely and perform all of the logic in mipsxx_pmu_enable_event. Since mipsxx_pmu_enable_event now requires the range of the event, ensure that it is set by mipspmu_perf_event_encode in the same circumstances as before (i.e. #ifdef CONFIG_MIPS_MT_SMP && num_possible_cpus() > 1). The logic of mipsxx_pmu_enable_event now becomes: If the CPU is a BMIPS5000, then use the special vpe_id() implementation to select which VPE to count. If the counter has a range greater than a single VPE, i.e. it is a core-wide counter, then ensure that the counter is set up to count events from all TCs (though, since this is true by definition, is this necessary? Just enabling a core-wide counter in the per-VPE case appears experimentally to return the same counts. This is left in for now as the logic was present before). If the event is set up to count a particular CPU (i.e. system mode), then the VPE ID of that CPU is used for the counter. Otherwise, the event should be counted on the CPU scheduling this thread (this was the critical bit missing from the previous implementation) so the VPE ID of this CPU is used for the counter. With this commit, the same test as before returns the counts expected: taskset 4 dd if=/dev/zero of=/dev/null count=100000 & taskset 8 perf stat -e instructions:u,branches:u ./test_prog Performance counter stats for './test_prog': 30002 instructions:u 10000 branches:u Signed-off-by: Matt Redfearn <matt.redfearn@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/19138/ Signed-off-by: James Hogan <jhogan@kernel.org>
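Inside mipsxx_pmu_enable_event() the decision tree described above ends up looking roughly like this; constant and field names are approximations, not a literal excerpt of the patch:

    if (IS_ENABLED(CONFIG_CPU_BMIPS5000)) {
            /* BMIPS5000 keeps its special vpe_id() encoding */
            ctrl = M_PERFCTL_VPEID(vpe_id());
    } else if (range > V) {
            /* core-wide counter: count events from every TC in the core */
            ctrl = M_TC_EN_ALL;
    } else {
            unsigned int cpu;

            /* system mode targets a specific CPU; otherwise count on the
             * CPU that is scheduling this thread */
            cpu = (event->cpu >= 0) ? event->cpu : smp_processor_id();

            ctrl = M_PERFCTL_VPEID(cpu_vpe_id(&cpu_data[cpu]));
            ctrl |= M_TC_EN_VPE;
    }
    cpuc->saved_ctrl[idx] |= ctrl;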
-
Committed by Matt Redfearn
There are a couple of FIXME's in the perf code which state that cpu_data[event->cpu].vpe_id reports 0 for both CPUs. This is no longer the case, since the vpe_id is used extensively by SMP CPS. VPE local counting gets around this by using smp_processor_id() instead. As it happens this does work correctly to count events on the right VPE, but relies on 2 assumptions: a) Always having 2 VPEs / core. b) The hardware only paying attention to the least significant bit of the PERFCTL.VPEID field. If either of these assumptions change then the incorrect VPEs events will be counted. Fix this by replacing smp_processor_id() with cpu_vpe_id(&current_cpu_data), in the vpe_id() macro, and pass vpe_id() to M_PERFCTL_VPEID() when setting up PERFCTL.VPEID. The FIXME's can also be removed since they no longer apply. Signed-off-by: Matt Redfearn <matt.redfearn@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/19137/ Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Matt Redfearn
The presence of per TC performance counters is now detected by cpu-probe.c and indicated by MIPS_CPU_MT_PER_TC_PERF_COUNTERS in cpu_data. Switch detection of the feature to use this new flag rather than blindly testing the implementation specific config7 register with a magic number. Signed-off-by: Matt Redfearn <matt.redfearn@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Maciej W. Rozycki <macro@mips.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Robert Richter <rric@kernel.org> Cc: linux-mips@linux-mips.org Cc: oprofile-list@lists.sf.net Patchwork: https://patchwork.linux-mips.org/patch/19142/ Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Matt Redfearn
Processors implementing the MIPS MT ASE may have performance counters implemented per core or per TC. Processors implemented by MIPS Technologies signify presence per TC through a bit in the implementation specific Config7 register. Currently the code which probes for their presence blindly reads a magic number corresponding to this bit, despite it potentially having a different meaning in the CPU implementation. Since CPU features are generally detected by cpu-probe.c, perform the detection here instead. Introduce cpu_set_mt_per_tc_perf which checks the bit in config7 and call it from MIPS CPUs known to implement this bit and the MT ASE, specifically, the 34K, 1004K and interAptiv. Once the presence of the per-tc counter is indicated in cpu_data, tests for it can be updated to use this flag. Suggested-by: James Hogan <jhogan@kernel.org> Signed-off-by: Matt Redfearn <matt.redfearn@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Florian Fainelli <f.fainelli@gmail.com> Cc: Matt Redfearn <matt.redfearn@mips.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Maciej W. Rozycki <macro@mips.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/19136/ Signed-off-by: James Hogan <jhogan@kernel.org>
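The probe helper is small; a sketch, with the Config7 bit name given here as an assumption:

    /* arch/mips/kernel/cpu-probe.c */
    static inline void cpu_set_mt_per_tc_perf(struct cpuinfo_mips *c)
    {
            /* MTI_CONF7_PTC: implementation-specific "per-TC performance
             * counters" bit in Config7 on MTI cores (name assumed here). */
            if (read_c0_config7() & MTI_CONF7_PTC)
                    c->options |= MIPS_CPU_MT_PER_TC_PERF_COUNTERS;
    }

The 34K, 1004K and interAptiv probe cases then call this helper, and users test the resulting cpu_data flag instead of reading Config7 directly.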
-
Committed by Daniel Borkmann
The ool_skb_header_pointer() and size_to_len() helpers are unused, as is tmp_offset, therefore remove all of them. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Committed by Alexandre Belloni
Add PHY-to-switch-port connections for the internal PHYs on PCB123. Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Cc: David S. Miller <davem@davemloft.net> Cc: linux-mips@linux-mips.org Cc: netdev@vger.kernel.org Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Alexandre Belloni
Ocelot has an integrated switch; add support for it. Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Cc: David S. Miller <davem@davemloft.net> Cc: linux-mips@linux-mips.org Cc: netdev@vger.kernel.org Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Paul Cercueil
This work is now performed by the watchdog driver directly. Signed-off-by: Paul Cercueil <paul@crapouillou.net> Acked-by: James Hogan <jhogan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Guenter Roeck <linux@roeck-us.net> Cc: Wim Van Sebroeck <wim@linux-watchdog.org> Cc: Mathieu Malaterre <malat@debian.org> Cc: linux-mips@linux-mips.org Cc: linux-watchdog@vger.kernel.org Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Paul Cercueil
The watchdog is a useful piece of hardware, so there's no reason not to enable it. Besides, this is important for restart to work after the change in the next commit. This commit enables the Kconfig option in the qi_lb60 defconfig. Signed-off-by: Paul Cercueil <paul@crapouillou.net> Acked-by: James Hogan <jhogan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Guenter Roeck <linux@roeck-us.net> Cc: Wim Van Sebroeck <wim@linux-watchdog.org> Cc: Mathieu Malaterre <malat@debian.org> Cc: linux-mips@linux-mips.org Cc: linux-watchdog@vger.kernel.org Signed-off-by: James Hogan <jhogan@kernel.org>
-
Committed by Paul Cercueil
- The previous node requested a memory area of 0x100 bytes, while the driver only manipulates four registers present in the first 0x10 bytes. - The driver requests the "rtc" clock, but the previous node did not provide any. Signed-off-by: Paul Cercueil <paul@crapouillou.net> Reviewed-by: Mathieu Malaterre <malat@debian.org> Acked-by: James Hogan <jhogan@kernel.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Rob Herring <robh+dt@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Guenter Roeck <linux@roeck-us.net> Cc: Wim Van Sebroeck <wim@linux-watchdog.org> Cc: Mathieu Malaterre <malat@debian.org> Cc: linux-mips@linux-mips.org Cc: devicetree@vger.kernel.org Cc: linux-watchdog@vger.kernel.org Signed-off-by: James Hogan <jhogan@kernel.org>
-