- 14 June 2018, 1 commit
-
-
Submitted by Linus Torvalds
The changes to automatically test for working stack protector compiler support in the Kconfig files removed the special STACKPROTECTOR_AUTO option that picked the strongest stack protector that the compiler supported.

That was all a nice cleanup - it makes no sense to have the AUTO case now that the Kconfig phase can just determine the compiler support directly.

HOWEVER.

It also meant that doing "make oldconfig" would now _disable_ the strong stackprotector if you had AUTO enabled, because in a legacy config file, the sane stack protector configuration would look like

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_NONE is not set
  # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_STACKPROTECTOR_AUTO=y

and when you ran this through "make oldconfig" with the Kbuild changes, it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version used to be disabled (because it was really enabled by AUTO), and would disable it in the new config, resulting in:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_CC_STACKPROTECTOR=y
  # CONFIG_CC_STACKPROTECTOR_STRONG is not set
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

That's dangerously subtle - people could suddenly find themselves with the weaker stack protector setup without even realizing.

The solution here is to rename not just the old REGULAR stack protector option, but also the strong one. This does that by just removing the CC_ prefix entirely for the user choices, because it really is not about the compiler support (the compiler support now instead automatically impacts _visibility_ of the options to users).

This results in "make oldconfig" actually asking the user for their choice, so that we don't have any silent subtle security model changes. The end result would generally look like this:

  CONFIG_HAVE_CC_STACKPROTECTOR=y
  CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
  CONFIG_STACKPROTECTOR=y
  CONFIG_STACKPROTECTOR_STRONG=y
  CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

where the "CC_" versions really are about internal compiler infrastructure, not the user selections.

Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 May 2018, 3 commits
-
-
Submitted by Maciej W. Rozycki
Use 64-bit accesses for 64-bit floating-point general registers with PTRACE_PEEKUSR, removing the truncation of their upper halves in the FR=1 mode, caused by commit bbd426f5 ("MIPS: Simplify FP context access"), which inadvertently switched them to using 32-bit accesses. The PTRACE_POKEUSR side is fine as it's never been broken and continues using 64-bit accesses.

Fixes: bbd426f5 ("MIPS: Simplify FP context access")
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.15+
Patchwork: https://patchwork.linux-mips.org/patch/19334/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Submitted by Maciej W. Rozycki
Having PR_FP_MODE_FRE (i.e. Config5.FRE) set without PR_FP_MODE_FR (i.e. Status.FR) is not supported as the lone purpose of Config5.FRE is to emulate Status.FR=0 handling on FPU hardware that has Status.FR=1 hardwired[1][2]. Also we do not handle this case elsewhere, and assume throughout our code that TIF_HYBRID_FPREGS and TIF_32BIT_FPREGS cannot be set both at once for a task, leading to inconsistent behaviour if this does happen.

Return unsuccessfully then from prctl(2) PR_SET_FP_MODE calls requesting PR_FP_MODE_FRE to be set with PR_FP_MODE_FR clear. This corresponds to modes allowed by `mips_set_personality_fp'.

References:

[1] "MIPS Architecture For Programmers, Vol. III: MIPS32 / microMIPS32 Privileged Resource Architecture", Imagination Technologies, Document Number: MD00090, Revision 6.02, July 10, 2015, Table 9.69 "Config5 Register Field Descriptions", p. 262

[2] "MIPS Architecture For Programmers, Volume III: MIPS64 / microMIPS64 Privileged Resource Architecture", Imagination Technologies, Document Number: MD00091, Revision 6.03, December 22, 2015, Table 9.72 "Config5 Register Field Descriptions", p. 288

Fixes: 9791554b ("MIPS,prctl: add PR_[GS]ET_FP_MODE prctl options for MIPS")
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.0+
Patchwork: https://patchwork.linux-mips.org/patch/19327/
Signed-off-by: James Hogan <jhogan@kernel.org>
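A minimal sketch of the rejected-mode check described above, as it could sit near the top of the MIPS PR_SET_FP_MODE handler; the surrounding mode-switch logic is elided and this is an illustration of the idea rather than the exact patch:

  int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
  {
      /* Only the FR and FRE mode bits are meaningful here. */
      if (value & ~(PR_FP_MODE_FR | PR_FP_MODE_FRE))
          return -EOPNOTSUPP;

      /*
       * FRE without FR is not supported: Config5.FRE exists solely to
       * emulate Status.FR=0 on FPUs that hardwire Status.FR=1, so
       * reject the combination rather than behave inconsistently.
       */
      if ((value & PR_FP_MODE_FRE) && !(value & PR_FP_MODE_FR))
          return -EOPNOTSUPP;

      /* ... remainder of the FP mode switch elided ... */
      return 0;
  }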
-
Submitted by Maciej W. Rozycki
Correct comments across ptrace(2) handlers about an FPU register context layout discrepancy between MIPS I and later ISAs, which was fixed with `linux-mips.org' (LMO) commit 42533948caac ("Major pile of FP emulator changes."), the fix corrected with LMO commit 849fa7a50dff ("R3k FPU ptrace() handling fixes."), and then broken and fixed over and over again, until last time fixed with commit 80cbfad7 ("MIPS: Correct MIPS I FP context layout").

NB running the GDB test suite for the relevant ABI/ISA and watching out for regressions is advisable when poking around ptrace(2).

Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/19326/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 17 May 2018, 1 commit
-
-
Submitted by Guenter Roeck
Most mips builds fail with

  arch/mips/kernel/traps.c: In function ‘force_fcr31_sig’:
  arch/mips/kernel/traps.c:732:2: error: ‘si_code’ may be used uninitialized in this function

Fix the problem by initializing si_code with FPE_FLTUNK (undiagnosed floating point exception).

Fixes: f43a54a0 ("signal/mips: Use force_sig_fault where appropriate")
Cc: linux-mips@linux-mips.org
Cc: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
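A sketch of the initialization pattern the fix describes; the real force_fcr31_sig() in arch/mips/kernel/traps.c checks more FCSR cause bits, and the trailing task argument to force_sig_fault() reflects the API around this series (later kernels drop it):

  void force_fcr31_sig(unsigned long fcr31, void __user *fault_addr,
                       struct task_struct *tsk)
  {
      /* Default to "undiagnosed FP exception" so si_code can never be
       * used uninitialized when no specific cause bit matches. */
      int si_code = FPE_FLTUNK;

      if (fcr31 & FPU_CSR_INV_X)
          si_code = FPE_FLTINV;
      else if (fcr31 & FPU_CSR_DIV_X)
          si_code = FPE_FLTDIV;
      /* ... further cause bits elided ... */

      force_sig_fault(SIGFPE, si_code, fault_addr, tsk);
  }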
-
- 15 May 2018, 8 commits
-
-
Submitted by Matt Redfearn
When perf is used in non-system mode, i.e. without specifying CPUs to count on, check_and_calc_range falls into the case when it sets M_TC_EN_ALL in the counter config_base. This has the impact of always counting for all of the threads in a core, even when the user has not requested it. For example this can be seen with a test program which executes 30002 instructions and 10000 branches running on one VPE and a busy load on the other VPE in the core. Without this commit, the expected count is not returned:

  taskset 4 dd if=/dev/zero of=/dev/null count=100000 &
  taskset 8 perf stat -e instructions:u,branches:u ./test_prog

  Performance counter stats for './test_prog':

          103235      instructions:u
           17015      branches:u

In order to fix this, remove check_and_calc_range entirely and perform all of the logic in mipsxx_pmu_enable_event. Since mipsxx_pmu_enable_event now requires the range of the event, ensure that it is set by mipspmu_perf_event_encode in the same circumstances as before (i.e. #ifdef CONFIG_MIPS_MT_SMP && num_possible_cpus() > 1).

The logic of mipsxx_pmu_enable_event now becomes:

If the CPU is a BMIPS5000, then use the special vpe_id() implementation to select which VPE to count.

If the counter has a range greater than a single VPE, i.e. it is a core-wide counter, then ensure that the counter is set up to count events from all TCs (though, since this is true by definition, is this necessary? Just enabling a core-wide counter in the per-VPE case appears experimentally to return the same counts. This is left in for now as the logic was present before).

If the event is set up to count a particular CPU (i.e. system mode), then the VPE ID of that CPU is used for the counter.

Otherwise, the event should be counted on the CPU scheduling this thread (this was the critical bit missing from the previous implementation) so the VPE ID of this CPU is used for the counter.

With this commit, the same test as before returns the counts expected:

  taskset 4 dd if=/dev/zero of=/dev/null count=100000 &
  taskset 8 perf stat -e instructions:u,branches:u ./test_prog

  Performance counter stats for './test_prog':

           30002      instructions:u
           10000      branches:u

Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/19138/
Signed-off-by: James Hogan <jhogan@kernel.org>
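An illustrative outline of the VPE-selection logic described above; vpe_id(), cpu_vpe_id(), M_TC_EN_ALL and M_PERFCTL_VPEID() are names from arch/mips/kernel/perf_event_mipsxx.c, while range_is_core_wide() is an assumed placeholder, so treat this as a simplified sketch rather than the exact patch:

  unsigned int vpe;

  if (IS_ENABLED(CONFIG_CPU_BMIPS5000)) {
      vpe = vpe_id();                       /* BMIPS5000-specific encoding */
  } else if (range_is_core_wide(event)) {   /* assumed helper name */
      ctrl |= M_TC_EN_ALL;                  /* count events from all TCs */
      vpe = 0;
  } else if (event->cpu >= 0) {
      /* System mode: count on the VPE of the requested CPU. */
      vpe = cpu_vpe_id(&cpu_data[event->cpu]);
  } else {
      /* Task mode: count on the VPE scheduling this thread. */
      vpe = cpu_vpe_id(&current_cpu_data);
  }
  ctrl |= M_PERFCTL_VPEID(vpe);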
-
Submitted by Matt Redfearn
There are a couple of FIXME's in the perf code which state that cpu_data[event->cpu].vpe_id reports 0 for both CPUs. This is no longer the case, since the vpe_id is used extensively by SMP CPS.

VPE local counting gets around this by using smp_processor_id() instead. As it happens this does work correctly to count events on the right VPE, but relies on 2 assumptions:
a) Always having 2 VPEs / core.
b) The hardware only paying attention to the least significant bit of the PERFCTL.VPEID field.
If either of these assumptions change then the incorrect VPEs events will be counted.

Fix this by replacing smp_processor_id() with cpu_vpe_id(&current_cpu_data) in the vpe_id() macro, and pass vpe_id() to M_PERFCTL_VPEID() when setting up PERFCTL.VPEID. The FIXME's can also be removed since they no longer apply.

Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/19137/
Signed-off-by: James Hogan <jhogan@kernel.org>
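A sketch of what the resulting vpe_id() helper looks like under CONFIG_MIPS_MT_SMP, as the message describes; the BMIPS5000 variant and the surrounding #ifdef structure are omitted and the exact definition may differ:

  #ifdef CONFIG_MIPS_MT_SMP
  /* Report the VPE this thread is actually running on, instead of
   * assuming the VPE index equals smp_processor_id() & 1. */
  #define vpe_id()    (cpu_has_mipsmt_pertccounters ? \
                       0 : cpu_vpe_id(&current_cpu_data))
  #else
  #define vpe_id()    0
  #endif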
-
Submitted by Matt Redfearn
The presence of per TC performance counters is now detected by cpu-probe.c and indicated by MIPS_CPU_MT_PER_TC_PERF_COUNTERS in cpu_data. Switch detection of the feature to use this new flag rather than blindly testing the implementation specific config7 register with a magic number.

Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Maciej W. Rozycki <macro@mips.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Robert Richter <rric@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: oprofile-list@lists.sf.net
Patchwork: https://patchwork.linux-mips.org/patch/19142/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Submitted by Matt Redfearn
Processors implementing the MIPS MT ASE may have performance counters implemented per core or per TC. Processors implemented by MIPS Technologies signify presence per TC through a bit in the implementation specific Config7 register. Currently the code which probes for their presence blindly reads a magic number corresponding to this bit, despite it potentially having a different meaning in the CPU implementation.

Since CPU features are generally detected by cpu-probe.c, perform the detection here instead. Introduce cpu_set_mt_per_tc_perf which checks the bit in config7 and call it from MIPS CPUs known to implement this bit and the MT ASE, specifically, the 34K, 1004K and interAptiv. Once the presence of the per-tc counter is indicated in cpu_data, tests for it can be updated to use this flag.

Suggested-by: James Hogan <jhogan@kernel.org>
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Matt Redfearn <matt.redfearn@mips.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Maciej W. Rozycki <macro@mips.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/19136/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Submitted by Colin Ian King
Trivial fix to spelling mistake in pr_warn message text.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kernel-janitors@vger.kernel.org
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Submitted by Baolin Wang
Since struct timespec is not y2038 safe on 32bit machines, this patch converts update_persistent_clock() to update_persistent_clock64() using struct timespec64.

The rtc_mips_set_time() and rtc_mips_set_mmss() interfaces were using 'unsigned long' type that is not y2038 safe on 32bit machines, moreover there is only one platform implementing rtc_mips_set_time() and two platforms implementing rtc_mips_set_mmss(), so we can just make them each implement update_persistent_clock64() directly, to get that helper out of the common mips code by removing rtc_mips_set_time() and rtc_mips_set_mmss() interfaces.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@linux-mips.org
Signed-off-by: James Hogan <jhogan@kernel.org>
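A sketch of what the conversion looks like on a platform that previously provided rtc_mips_set_mmss(); update_persistent_clock64() is the generic y2038-safe hook, while write_rtc_seconds() is a hypothetical placeholder for the board-specific RTC write:

  /* Before: seconds passed as unsigned long, not y2038 safe on 32-bit. */
  int rtc_mips_set_mmss(unsigned long secs);

  /* After: the platform implements the generic hook directly. */
  int update_persistent_clock64(struct timespec64 now)
  {
      time64_t secs = now.tv_sec;

      /* Write 'secs' to the board's RTC (platform specific). */
      return write_rtc_seconds(secs);   /* hypothetical helper */
  }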
-
Submitted by Maciej W. Rozycki
Check the TIF_32BIT_FPREGS task setting of the tracee rather than the tracer in determining the layout of floating-point general registers in the floating-point context, correcting access to odd-numbered registers for o32 tracees where the setting disagrees between the two processes.

Fixes: 597ce172 ("MIPS: Support for 64-bit FP with O32 binaries")
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.14+
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Submitted by Maciej W. Rozycki
Correct commit 7aeb753b ("MIPS: Implement task_user_regset_view.") and expose the FIR register using the unused 4 bytes at the end of the NT_PRFPREG regset. Without that register included clients cannot use the PTRACE_GETREGSET request to retrieve the complete FPU register set and have to resort to one of the older interfaces, either PTRACE_PEEKUSR or PTRACE_GETFPREGS, to retrieve the missing piece of data. Also the register is irreversibly missing from core dumps.

This register is architecturally hardwired and read-only so the write path does not matter. Ignore data supplied on writes then.

Fixes: 7aeb753b ("MIPS: Implement task_user_regset_view.")
Signed-off-by: James Hogan <jhogan@kernel.org>
Signed-off-by: Maciej W. Rozycki <macro@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 3.13+
Patchwork: https://patchwork.linux-mips.org/patch/19273/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 25 April 2018, 1 commit
-
-
Submitted by Eric W. Biederman
Filling in struct siginfo before calling force_sig_info is a tedious and error prone process, where once in a great while the wrong fields are filled out, and siginfo has been inconsistently cleared.

Simplify this process by using the helper force_sig_fault, which takes as parameters all of the information it needs, ensures all of the fiddly bits of filling in struct siginfo are done properly and then calls force_sig_info.

In short about a 5 line reduction in code for every time force_sig_info is called, which makes the calling function clearer.

Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
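A hedged before/after sketch of the conversion; the force_sig_fault() form with a trailing task argument matches the API around this series, and the exact signature differs between kernel versions:

  /* Before: hand-built siginfo, easy to get subtly wrong. */
  struct siginfo info;
  clear_siginfo(&info);
  info.si_signo = SIGSEGV;
  info.si_code  = SEGV_MAPERR;
  info.si_addr  = (void __user *)badvaddr;
  force_sig_info(SIGSEGV, &info, current);

  /* After: one call carries the same information. */
  force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)badvaddr, current);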
-
- 19 April 2018, 1 commit
-
-
Submitted by Deepa Dinamani
All the current architecture specific defines for these are the same. Refactor these common defines to a common header file.

The new common linux/compat_time.h is also useful as it will eventually be used to hold all the defines that are needed for compat time types that support non y2038 safe types. New architectures need not have to define these new types as they will only use new y2038 safe syscalls. This file can be deleted after y2038 when we stop supporting non y2038 safe syscalls.

The patch also requires an operation similar to:

  git grep "asm/compat\.h" | cut -d ":" -f 1 | xargs -n 1 sed -i -e "s%asm/compat.h%linux/compat.h%g"

Cc: acme@kernel.org
Cc: benh@kernel.crashing.org
Cc: borntraeger@de.ibm.com
Cc: catalin.marinas@arm.com
Cc: cmetcalf@mellanox.com
Cc: cohuck@redhat.com
Cc: davem@davemloft.net
Cc: deller@gmx.de
Cc: devel@driverdev.osuosl.org
Cc: gerald.schaefer@de.ibm.com
Cc: gregkh@linuxfoundation.org
Cc: heiko.carstens@de.ibm.com
Cc: hoeppner@linux.vnet.ibm.com
Cc: hpa@zytor.com
Cc: jejb@parisc-linux.org
Cc: jwi@linux.vnet.ibm.com
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-parisc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: mark.rutland@arm.com
Cc: mingo@redhat.com
Cc: mpe@ellerman.id.au
Cc: oberpar@linux.vnet.ibm.com
Cc: oprofile-list@lists.sf.net
Cc: paulus@samba.org
Cc: peterz@infradead.org
Cc: ralf@linux-mips.org
Cc: rostedt@goodmis.org
Cc: rric@kernel.org
Cc: schwidefsky@de.ibm.com
Cc: sebott@linux.vnet.ibm.com
Cc: sparclinux@vger.kernel.org
Cc: sth@linux.vnet.ibm.com
Cc: ubraun@linux.vnet.ibm.com
Cc: will.deacon@arm.com
Cc: x86@kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: James Hogan <jhogan@kernel.org>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
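For context, a sketch of the kind of compat time definitions that move into the shared linux/compat_time.h; the field layout follows the common 32-bit ABI, but treat this as illustrative rather than the exact header contents:

  #ifndef _LINUX_COMPAT_TIME_H
  #define _LINUX_COMPAT_TIME_H

  #include <linux/types.h>

  typedef s32 compat_time_t;

  struct compat_timespec {
      compat_time_t tv_sec;
      s32           tv_nsec;
  };

  struct compat_timeval {
      compat_time_t tv_sec;
      s32           tv_usec;
  };

  #endif /* _LINUX_COMPAT_TIME_H */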
-
- 03 April 2018, 8 commits
-
-
Submitted by Dominik Brodowski
Using this helper allows us to avoid the in-kernel calls to the sys_readahead() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_readahead().

This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
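The ksys_ pattern shared by this and the following commits, sketched for readahead; the real mm/readahead.c implementation contains the fd and mapping checks that are elided here:

  ssize_t ksys_readahead(int fd, loff_t offset, size_t count)
  {
      /* ... body formerly living in sys_readahead() ... */
  }

  SYSCALL_DEFINE3(readahead, int, fd, loff_t, offset, size_t, count)
  {
      /* The syscall becomes a thin wrapper; in-kernel users call
       * ksys_readahead() directly instead of the syscall entry. */
      return ksys_readahead(fd, offset, count);
  }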
-
Submitted by Dominik Brodowski
Using this helper allows us to avoid the in-kernel calls to the sys_mmap_pgoff() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_mmap_pgoff().

This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-
Submitted by Dominik Brodowski
Using the ksys_fadvise64_64() helper allows us to avoid the in-kernel calls to the sys_fadvise64_64() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_fadvise64_64().

Some compat stubs called sys_fadvise64(), which then just passed through the arguments to sys_fadvise64_64(). Get rid of this indirection, and call ksys_fadvise64_64() directly.

This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-
Submitted by Dominik Brodowski
Using the ksys_fallocate() wrapper allows us to get rid of in-kernel calls to the sys_fallocate() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_fallocate().

This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-
Submitted by Dominik Brodowski
Using the ksys_p{read,write}64() wrappers allows us to get rid of in-kernel calls to the sys_pread64() and sys_pwrite64() syscalls. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_p{read,write}64().

This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-
Submitted by Dominik Brodowski
Using the ksys_truncate() wrapper allows us to get rid of in-kernel calls to the sys_truncate() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_truncate().

This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-
Submitted by Dominik Brodowski
Using this helper allows us to avoid the in-kernel calls to the sys_sync_file_range() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_sync_file_range().

This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-
Submitted by Dominik Brodowski
Using the ksys_ftruncate() wrapper allows us to get rid of in-kernel calls to the sys_ftruncate() syscall. The ksys_ prefix denotes that this function is meant as a drop-in replacement for the syscall. In particular, it uses the same calling convention as sys_ftruncate().

This patch is part of a series which removes in-kernel calls to syscalls. On this basis, the syscall entry path can be streamlined. For details, see http://lkml.kernel.org/r/20180325162527.GA17492@light.dominikbrodowski.net

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
-
- 20 March 2018, 1 commit
-
-
Submitted by Peter Zijlstra
The old wait_on_atomic_t() is going to get removed, use the more flexible wait_var_event() API instead. And while there, fix a bug and add the missing wakeup...

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
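A sketch of the API switch the message refers to, using a generic atomic counter; the field name stop_count is a stand-in for illustration, not the actual MIPS VPE field:

  /* Before: wait_on_atomic_t() with its fixed "wait until zero" action. */
  wait_on_atomic_t(&v->stop_count, atomic_t_wait, TASK_UNINTERRUPTIBLE);

  /* After: wait on an arbitrary condition over any variable ... */
  wait_var_event(&v->stop_count, !atomic_read(&v->stop_count));

  /* ... and the updater must issue the matching wakeup. */
  if (atomic_dec_and_test(&v->stop_count))
      wake_up_var(&v->stop_count);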
-
- 17 March 2018, 1 commit
-
-
Submitted by Peter Zijlstra
Mark noticed that the change to sibling_list changed some iteration semantics; because previously we used group_list as list entry, sibling events would always have an empty sibling_list.

But because we now use sibling_list for both list head and list entry, siblings will report as having siblings.

Fix this with a custom for_each_sibling_event() iterator.

Fixes: 8343aae6 ("perf/core: Remove perf_event::group_entry")
Reported-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: vincent.weaver@maine.edu
Cc: alexander.shishkin@linux.intel.com
Cc: torvalds@linux-foundation.org
Cc: alexey.budankov@linux.intel.com
Cc: valery.cherepennikov@intel.com
Cc: eranian@google.com
Cc: acme@redhat.com
Cc: linux-tip-commits@vger.kernel.org
Cc: davidcc@google.com
Cc: kan.liang@intel.com
Cc: Dmitry.Prohorov@intel.com
Cc: jolsa@redhat.com
Link: https://lkml.kernel.org/r/20180315170129.GX4043@hirez.programming.kicks-ass.net
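A sketch of what such an iterator looks like; the definition in include/linux/perf_event.h may differ in detail, but the key point is that only a group leader walks its sibling_list:

  #define for_each_sibling_event(sibling, event)                        \
      /* Only a group leader owns a meaningful sibling_list. */         \
      if ((event)->group_leader == (event))                             \
          list_for_each_entry((sibling), &(event)->sibling_list,        \
                              sibling_list)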
-
- 16 March 2018, 1 commit
-
-
Submitted by Peter Zijlstra
Mark noticed that the change to sibling_list changed some iteration semantics; because previously we used group_list as list entry, sibling events would always have an empty sibling_list.

But because we now use sibling_list for both list head and list entry, siblings will report as having siblings.

Fix this with a custom for_each_sibling_event() iterator.

Fixes: 8343aae6 ("perf/core: Remove perf_event::group_entry")
Reported-by: Mark Rutland <mark.rutland@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: vincent.weaver@maine.edu
Cc: alexander.shishkin@linux.intel.com
Cc: torvalds@linux-foundation.org
Cc: alexey.budankov@linux.intel.com
Cc: valery.cherepennikov@intel.com
Cc: eranian@google.com
Cc: acme@redhat.com
Cc: linux-tip-commits@vger.kernel.org
Cc: davidcc@google.com
Cc: kan.liang@intel.com
Cc: Dmitry.Prohorov@intel.com
Cc: jolsa@redhat.com
Link: https://lkml.kernel.org/r/20180315170129.GX4043@hirez.programming.kicks-ass.net
-
- 12 March 2018, 1 commit
-
-
Submitted by Peter Zijlstra
Now that all the grouping is done with RB trees, we no longer need group_entry and can replace the whole thing with sibling_list.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Carrillo-Cisneros <davidcc@google.com>
Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 09 March 2018, 2 commits
-
-
Submitted by Matt Redfearn
If a JTAG probe is connected to a MIPS cluster, then the CPC detects it and latches the CPC.STAT_CONF.EJTAG_PROBE bit to 1. While set, attempting to send a power-down command to a core will be blocked, and the CPC will instead send the core to clock-off state. This can interfere with systems fully entering a low power state where all cores, CM, GIC, etc are powered down.

Detect that a JTAG probe is / has been connected to the cluster and block the suspend attempt. Attempting to suspend the system while a JTAG probe is connected now yields:

  # echo mem > /sys/power/state
  [   11.654000] PM: Syncing filesystems ... done.
  [   11.658000] JTAG probe is connected - abort suspend
  -sh: echo: write error: Operation not permitted
  #

To restore suspend, the JTAG probe should be disconnected or put into quiescent state. Platform code can then clear the CPC.STAT_CONF.EJTAG_PROBE bit.

Reported-by: Ed Blake <ed.blake@sondrel.com>
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/18641/
Signed-off-by: James Hogan <jhogan@kernel.org>
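An illustrative sketch of the kind of check described above, as it might sit in the MIPS CPS power-management suspend path; the function name, accessor and bit macro used here are assumptions rather than the exact symbols from the patch:

  static int cps_pm_check_jtag(void)       /* assumed function name */
  {
      /* The CPC latches STAT_CONF.EJTAG_PROBE once a probe is seen;
       * powering cores down would silently degrade to clock-off. */
      if (read_cpc_stat_conf() & CPC_STAT_CONF_EJTAG_PROBE) {   /* assumed names */
          pr_warn("JTAG probe is connected - abort suspend\n");
          return -EPERM;
      }

      return 0;
  }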
-
Submitted by Paul Burton
The generic MIPS implementations of halting, powering down or restarting the system all hang using a busy loop as a last resort. We have many platforms which avoid this loop by implementing their own, many using some variation upon executing a wait instruction to lower CPU power usage if we reach this point.

In order to prepare for cleaning up these various custom implementations of the same thing, this patch makes the generic machine_halt(), machine_power_off() & machine_restart() functions each make use of the wait instruction to lower CPU power usage in cases where we know that the wait instruction is available. If wait isn't known to be supported then we fall back to calling cpu_wait(), and if we don't have a cpu_wait() callback then we effectively continue using a busy loop.

In effect the new machine_hang() function provides a superset of the functionality that the various platforms currently provide differing subsets of.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17178/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 06 March 2018, 1 commit
-
-
Submitted by Justin Chen
Commit a3e6c1ef ("MIPS: IRQ: Fix disable_irq on CPU IRQs") fixes an issue where disable_irq did not actually disable the irq. The bug caused our IPIs to not be disabled, which actually is the correct behavior. With the addition of commit a3e6c1ef ("MIPS: IRQ: Fix disable_irq on CPU IRQs"), the IPIs were getting disabled going into suspend, thus schedule_ipi() was not being called. This caused deadlocks where schedulable tasks were not being scheduled and other cpus were waiting for them to do something.

Add the IRQF_NO_SUSPEND flag so an irq_disable will not be called on the IPIs during suspend.

Signed-off-by: Justin Chen <justinpopo6@gmail.com>
Fixes: a3e6c1ef ("MIPS: IRQ: Fix disable_irq on CPU IRQs")
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/17385/
[jhogan@kernel.org: checkpatch: wrap long lines and fix commit refs]
Signed-off-by: James Hogan <jhogan@kernel.org>
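A sketch of the flag addition in the BMIPS SMP IPI setup; the handler and IRQ variable names are illustrative, what matters is the IRQF_NO_SUSPEND bit added to the request_irq() flags:

  /* Keep the IPI IRQ enabled across suspend so schedule_ipi() can
   * still run; without IRQF_NO_SUSPEND it gets masked by suspend. */
  err = request_irq(ipi0_irq, bmips_ipi_interrupt,
                    IRQF_PERCPU | IRQF_NO_SUSPEND, "smp_ipi0", NULL);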
-
- 20 February 2018, 1 commit
-
-
Submitted by Marcin Nowakowski
Indicate that CRC32 and CRC32C instructions are supported by the CPU through elf_hwcap flags. This will be used by a follow-up commit that introduces crc32(c) crypto acceleration modules and is required by the GENERIC_CPU_AUTOPROBE feature.

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@mips.com>
Signed-off-by: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/18600/
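A sketch of how such a hwcap flag is typically populated from cpu-probe.c; both the feature-test macro and the flag name here are illustrative placeholders rather than the exact identifiers from the patch:

  /* Advertise the CRC32/CRC32C instructions to userspace via AT_HWCAP
   * so GENERIC_CPU_AUTOPROBE can bind the crc32(c) crypto modules. */
  if (cpu_has_crc_insn)                 /* placeholder feature test */
      elf_hwcap |= HWCAP_MIPS_CRC32;    /* placeholder flag name */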
-
- 19 February 2018, 2 commits
-
-
Submitted by Mathieu Malaterre
Rewrite the comparison in the `else if` statement, since the case where `min_low_pfn > ARCH_PFN_OFFSET` has already been checked in the first `if` statement:

  if (min_low_pfn > ARCH_PFN_OFFSET) {

Fix non-fatal warning during compilation using W=1:

  arch/mips/kernel/setup.c: In function ‘bootmem_init’:
  arch/mips/kernel/setup.c:461:25: warning: comparison of unsigned expression < 0 is always false [-Wtype-limits]
    } else if (min_low_pfn < ARCH_PFN_OFFSET) {
                           ^

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Reviewed-by: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/18176/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Submitted by Mathieu Malaterre
Fix non-fatal warning during compilation using W=1:

  arch/mips/kernel/setup.c:158:13: warning: no previous prototype for ‘memory_region_available’ [-Wmissing-prototypes]
   bool __init memory_region_available(phys_addr_t start, phys_addr_t size)
        ^~~~~~~~~~~~~~~~~~~~~~~

Signed-off-by: Mathieu Malaterre <malat@debian.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/18175/
[jhogan@kernel.org: tweak whitespace]
Signed-off-by: James Hogan <jhogan@kernel.org>
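For reference, the two common ways to silence -Wmissing-prototypes for a function like this, shown as a sketch (the message does not restate which one this particular commit picked):

  /* Option 1: the function is only used in this file - make it static. */
  static bool __init memory_region_available(phys_addr_t start,
                                             phys_addr_t size);

  /* Option 2: it is used elsewhere - declare it in a shared header. */
  bool memory_region_available(phys_addr_t start, phys_addr_t size);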
-
- 13 February 2018, 2 commits
-
-
Submitted by Marcin Nowakowski
Commit 73fbc1eb ("MIPS: fix mem=X@Y commandline processing") added a fix to ensure that the memory range between PHYS_OFFSET and low memory address specified by mem= cmdline argument is not later processed by free_all_bootmem. This change was incorrect for systems where the commandline specifies more than 1 mem argument, as it will cause all memory between PHYS_OFFSET and each of the memory offsets to be marked as reserved, which results in parts of the RAM marked as reserved (Creator CI20's u-boot has a default commandline argument 'mem=256M@0x0 mem=768M@0x30000000').

Change the behaviour to ensure that only the range between PHYS_OFFSET and the lowest start address of the memories is marked as protected.

This change also ensures that the range is marked protected even if it's only defined through the devicetree and not only via commandline arguments.

Reported-by: Mathieu Malaterre <mathieu.malaterre@gmail.com>
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@mips.com>
Fixes: 73fbc1eb ("MIPS: fix mem=X@Y commandline processing")
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # v4.11+
Tested-by: Mathieu Malaterre <malat@debian.org>
Patchwork: https://patchwork.linux-mips.org/patch/18562/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Submitted by Jaedon Shin
Remove the __init annotation from bmips_cpu_setup() to avoid the following warning.

  WARNING: vmlinux.o(.text+0x35c950): Section mismatch in reference from the function brcmstb_pm_s3() to the function .init.text:bmips_cpu_setup()
  The function brcmstb_pm_s3() references the function __init bmips_cpu_setup().
  This is often because brcmstb_pm_s3 lacks a __init annotation or the annotation of bmips_cpu_setup is wrong.

Signed-off-by: Jaedon Shin <jaedon.shin@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: linux-mips@linux-mips.org
Reviewed-by: James Hogan <jhogan@kernel.org>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/18589/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 12 February 2018, 1 commit
-
-
Submitted by Linus Torvalds
This is the mindless scripted replacement of kernel use of POLL* variables as described by Al, done by this script:

  for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do
      L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'`
      for f in $L; do sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f; done
  done

with de-mangling cleanups yet to come.

NOTE! On almost all architectures, the EPOLL* constants have the same values as the POLL* constants do. But the keyword here is "almost". For various bad reasons they aren't the same, and epoll() doesn't actually work quite correctly in some cases due to this on Sparc et al.

The next patch from Al will sort out the final differences, and we should be all done.

Scripted-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 February 2018, 1 commit
-
-
Submitted by Paul Burton
Reading mips_cpc_base value from the DT allows each platform to define it according to its needs. This is especially convenient for the MIPS_GENERIC kernel where this kind of information should be determined at runtime.

Use the mti,mips-cpc compatible string with just a reg property to specify the register location for your platform.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@mips.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@mips.com>
Cc: linux-mips@linux-mips.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/18513/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
- 05 February 2018, 2 commits
-
-
Submitted by Matt Redfearn
The merge of commit f875a832 ("MIPS: Abstract CPU core & VP(E) ID access through accessor functions") ended up creating a duplicate assignment of core during the rebase on commit bac06cf0 ("MIPS: smp-cps: Fix potentially uninitialised value of core"). Remove the duplicate.

Fixes: f875a832 ("MIPS: Abstract CPU core & VP(E) ID access through accessor functions")
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17955/
Signed-off-by: James Hogan <jhogan@kernel.org>
-
Submitted by James Hogan
Commit 17278a91 ("MIPS: CPS: Fix r1 .set mt assembler warning") added .set MIPS_ISA_LEVEL_RAW to silence warnings about .set mt on r1, however this can result in a MOVE being encoded as a 64-bit DADDU instruction on certain versions of binutils (e.g. 2.22), and reserved instruction exceptions at runtime on 32-bit hardware.

Reduce the sizes of the push/pop sections to include only instructions that are part of the MT ASE or which won't convert to 64-bit instructions after .set mips64r2/mips64r6.

Reported-by: Greg Ungerer <gerg@linux-m68k.org>
Fixes: 17278a91 ("MIPS: CPS: Fix r1 .set mt assembler warning")
Signed-off-by: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 4.15
Tested-by: Greg Ungerer <gerg@linux-m68k.org>
Patchwork: https://patchwork.linux-mips.org/patch/18578/
-
- 31 January 2018, 1 commit
-
-
Submitted by Rob Herring
Now that the DT core code handles bootmem arches, we can remove the MIPS specific early_init_dt_alloc_memory_arch function.

Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Signed-off-by: Rob Herring <robh@kernel.org>
-