- 25 October 2017 (2 commits)

Committed by Julien Thierry

A Software Step exception is missing after stepping a trapped instruction. Ensure SPSR.SS gets set to 0 after emulating/skipping a trapped instruction, before doing the ERET.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[will: replaced AARCH32_INSN_SIZE with 4]
Signed-off-by: Will Deacon <will.deacon@arm.com>
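A rough illustration of the idea, assuming the arm64 pt_regs layout and the DBG_SPSR_SS bit definition from asm/debug-monitors.h (this is a sketch of the mechanism, not the actual patch):

```c
#include <linux/ptrace.h>	/* struct pt_regs (arm64: pc, pstate) */

#define DBG_SPSR_SS	(1UL << 21)	/* PSTATE.SS, per asm/debug-monitors.h */

/* Sketch: when a trapped instruction is emulated instead of executed, the
 * saved PSTATE.SS bit must be cleared before ERET so that a pending software
 * step exception is still delivered on return, rather than silently lost.
 */
static void skip_emulated_insn(struct pt_regs *regs, unsigned int insn_size)
{
	regs->pc += insn_size;		/* step over the emulated instruction */
	regs->pstate &= ~DBG_SPSR_SS;	/* force SS = 0 before the ERET */
}
```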
-
Committed by Julien Thierry

Literal values are being used to set single stepping in mdscr from assembly code. There are already existing defines representing those values, so use those instead of the literal values.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 24 October 2017 (1 commit)

Committed by Mark Salyzyn

__memcpy_{to,from}io fall back to byte-at-a-time copying if both the source and destination pointers are not 8-byte aligned. Since one of the pointers always points at normal memory, this is unnecessary and detrimental to performance, so only do byte copying until we hit an 8-byte boundary for the device pointer.

This change was motivated by performance issues in the pstore driver. On a test platform, measuring probe time for pstore with a console buffer size of 1/4MB and pmsg of 1/2MB, probe took 90-107ms. The change reduced it to 10-25ms, an improvement in boot time.

Cc: Kees Cook <keescook@chromium.org>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
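A user-space sketch of the copy strategy (names are illustrative; the real routines live in arch/arm64/kernel/io.c and operate on __iomem pointers):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch: copy bytes only until the *device-side* pointer reaches an 8-byte
 * boundary, then switch to 64-bit accesses. The memory-side pointer may stay
 * unaligned, because normal memory tolerates unaligned loads/stores.
 */
static void memcpy_fromio_sketch(void *to, const volatile uint8_t *from, size_t count)
{
	uint8_t *dst = to;

	/* Byte accesses until the device pointer is 8-byte aligned. */
	while (count && ((uintptr_t)from & 7)) {
		*dst++ = *from++;
		count--;
	}

	/* Bulk 64-bit reads from the (now aligned) device pointer. */
	while (count >= 8) {
		uint64_t v = *(const volatile uint64_t *)from;

		memcpy(dst, &v, 8);	/* dst may be unaligned; memcpy handles it */
		from += 8;
		dst += 8;
		count -= 8;
	}

	/* Trailing bytes. */
	while (count--)
		*dst++ = *from++;
}
```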
-
- 20 October 2017 (1 commit)

Committed by Suzuki K Poulose

Now that the ARM ARM clearly specifies the rules for inferring the values of the ID register fields, fix the types of the feature bits we have in the kernel. ARM ARM DDI 0487B.b, section D10.1.4 "Principles of the ID scheme for fields in ID registers" lists the registers to which the scheme applies, along with the exceptions. This patch changes the relevant feature bits from FTR_EXACT to FTR_LOWER_SAFE to select the safer value. This will enable an older kernel running on a new CPU to detect the safer option rather than completely disabling the feature.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 19 October 2017 (1 commit)

Committed by Julien Thierry

Based on: ARM Architecture Reference Manual, ARMv8 (DDI 0487B.b). ARMv8.1 introduces the optional feature ARMv8.1-TTHM, which can trigger a new type of memory abort. This exception is triggered when a hardware update of page table flags is not atomic with respect to other memory accesses. Replace the corresponding unknown entry with a more accurate one. Cf: section D10.2.28 "ESR_ELx, Exception Syndrome Register" (p D10-2381) and section D4.4.11 "Restriction on memory types for hardware updates on page tables" (p D4-2116 - D4-2117). ARMv8.2 does not add new exception types; however, it is worth mentioning that when the obligatory RAS feature (optional for ARMv8.{0,1}) is implemented, exceptions related to "Synchronous parity or ECC error on memory access, not on translation table walk" become reserved and should not occur.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 18 October 2017 (2 commits)

Committed by Will Deacon

When booting at EL2, ensure that we permit the EL1 host to sample physical addresses and physical counter values using SPE.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

SPE is part of the v8.2 architecture, so move its system register and field definitions into sysreg.h and the new PSB barrier into barrier.h. Finally, move KVM over to using the generic definitions so that it doesn't have to open-code its own versions.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 14 October 2017 (2 commits)

Committed by Julien Thierry

The current delay implementation uses the YIELD instruction, which is a hint that it is beneficial to schedule another thread. As this is a hint, it may be implemented as a NOP, causing all delays to be busy loops. This is the case for many existing CPUs. Taking advantage of the generic timer sending periodic events to all cores, we can use WFE during delays to reduce power consumption. This is beneficial only for delays longer than the period of the timer event stream. If the timer event stream is not enabled, delays will behave as yield/busy loops.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
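A rough sketch of the approach, assuming arm64 inline assembly and a hypothetical event_stream_enabled flag (the real implementation lives in arch/arm64/lib/delay.c):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed helpers: read the generic timer counter and issue WFE/SEVL hints. */
static inline uint64_t read_cntvct(void)
{
	uint64_t val;

	asm volatile("mrs %0, cntvct_el0" : "=r"(val));
	return val;
}

static inline void wfe(void)  { asm volatile("wfe" ::: "memory"); }
static inline void sevl(void) { asm volatile("sevl" ::: "memory"); }

/* Hypothetical flag: set once the timer event stream has been configured. */
extern bool event_stream_enabled;

/* Sketch: sleep in WFE between timer-stream wakeups instead of spinning,
 * falling back to a plain busy loop when the event stream is off.
 */
static void delay_cycles(uint64_t cycles)
{
	uint64_t start = read_cntvct();

	if (event_stream_enabled) {
		sevl();		/* arm the first WFE so it cannot miss an event */
		wfe();
		while (read_cntvct() - start < cycles)
			wfe();	/* woken periodically by the event stream */
	} else {
		while (read_cntvct() - start < cycles)
			;	/* busy wait */
	}
}
```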
-
Committed by Julien Thierry

The arch timer configuration for a CPU might get reset after suspending said CPU. In order to reliably use the event stream in the kernel (e.g. for delays), we keep track of the state where we can safely consider the event stream as properly configured. After writing to cntkctl, we issue an ISB to ensure that subsequent delay loops can rely on the event stream being enabled.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 11 October 2017 (1 commit)

Committed by Suzuki K Poulose

ARMv8-A adds a few optional features for ARMv8.2 and ARMv8.3. Expose them to userspace via HWCAPs and mrs emulation.

SHA2-512 - Instruction support for the SHA512 hash algorithm (e.g. SHA512H, SHA512H2, SHA512SU0, SHA512SU1)
SHA3     - SHA3 crypto instructions (EOR3, RAX1, XAR, BCAX)
SM3      - Instruction support for the Chinese cryptography algorithm SM3
SM4      - Instruction support for the Chinese cryptography algorithm SM4
DP       - Dot Product instructions (UDOT, SDOT)

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
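Userspace can probe these capabilities via the auxiliary vector; a minimal sketch (the HWCAP_* bit positions below are assumed from the arm64 uapi hwcap.h of a kernel carrying this patch, with fallback defines in case the toolchain headers predate them):

```c
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP_SHA3
#define HWCAP_SHA3	(1 << 17)	/* assumed bit positions */
#endif
#ifndef HWCAP_SM3
#define HWCAP_SM3	(1 << 18)
#endif
#ifndef HWCAP_SM4
#define HWCAP_SM4	(1 << 19)
#endif
#ifndef HWCAP_ASIMDDP
#define HWCAP_ASIMDDP	(1 << 20)
#endif
#ifndef HWCAP_SHA512
#define HWCAP_SHA512	(1 << 21)
#endif

int main(void)
{
	unsigned long hwcap = getauxval(AT_HWCAP);

	printf("sha512:  %d\n", !!(hwcap & HWCAP_SHA512));
	printf("sha3:    %d\n", !!(hwcap & HWCAP_SHA3));
	printf("sm3:     %d\n", !!(hwcap & HWCAP_SM3));
	printf("sm4:     %d\n", !!(hwcap & HWCAP_SM4));
	printf("asimddp: %d\n", !!(hwcap & HWCAP_ASIMDDP));
	return 0;
}
```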
-
- 09 October 2017 (1 commit)

Committed by Ben Hutchings

Process personality always propagates across a fork(), but can change at an execve().

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 04 October 2017 (3 commits)

Committed by Matthieu CASTET

For example, on an arm64 board this adds caller information to the "user" entries in vmallocinfo.

Before:
  [...]
  0xffffff8008997000 0xffffff80089d8000 266240 user
  [...]

After:
  [...]
  0xffffff8008997000 0xffffff80089d8000 266240 atomic_pool_init+0x0/0x1d8 user
  [...]

This helps debug mapping issues, and is consistent with other entries (ioremap, vmalloc, ...) that already provide the caller.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matthieu CASTET <matthieu.castet@parrot.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Stephen Boyd

From what I can see, there isn't anything about ACPI_APEI_SEA that means the arm64 architecture can or cannot support NMI-safe cmpxchg or NMIs, so the 'if' condition here is not important. Let's remove it. Doing that allows us to support ftrace histograms via CONFIG_HIST_TRIGGERS, which depends on the arch having ARCH_HAVE_NMI_SAFE_CMPXCHG selected.

Cc: Tyler Baicar <tbaicar@codeaurora.org>
Cc: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org>
Cc: Dongjiu Geng <gengdongjiu@huawei.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland

Currently we inconsistently log identifying information for the boot CPU and secondary CPUs. For the boot CPU, we log the MIDR and MPIDR across separate messages, whereas for the secondary CPUs we only log the MIDR. In some cases, it would be useful to know the MPIDR of secondary CPUs, and it would be nice for these messages to be consistent. This patch ensures that in the primary and secondary boot paths, we log both the MPIDR and MIDR in a single message, with a consistent format. The MPIDR is consistently padded to 10 hex characters to cover Aff3 in bits 39:32, so that IDs can be compared easily. The newly redundant message in setup_arch() is removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Al Stone <ahs3@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[will: added '0x' prefixes consistently]
Signed-off-by: Will Deacon <will.deacon@arm.com>
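The padding itself is just a printf width specifier; a hedged sketch of what such a message could look like (format string, field order and example values here are illustrative, not the exact patch):

```c
#include <stdio.h>

int main(void)
{
	unsigned long mpidr = 0x0000000080000003UL;	/* example: Aff0 = 3, bit 31 RES1 */
	unsigned int midr   = 0x410fd034;		/* example MIDR (Cortex-A53 r0p4) */

	/* "%010lx" pads the MPIDR to 10 hex digits so Aff3 (bits 39:32, the top
	 * two digits) is always visible and values line up when comparing CPUs.
	 */
	printf("CPU1: Booted secondary processor 0x%010lx [0x%08x]\n", mpidr, midr);
	return 0;
}
```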
-
- 02 October 2017 (5 commits)

Committed by Kees Cook

As discussed at the Linux Security Summit, arm64 prefers to use REFCOUNT_FULL by default. This enables it for the architecture.

Cc: hw.likun@huawei.com
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Thomas Meyer

Use the vma_pages() function on the vma object instead of the explicit computation. Found by the coccinelle spatch "api/vma_pages.cocci".

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
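The transformation is mechanical; a small user-space sketch of the before/after shape (the struct and helper mirror the kernel definitions only loosely, and PAGE_SHIFT is assumed to be 12 for the example):

```c
#include <stdio.h>

#define PAGE_SHIFT 12	/* 4 KiB pages assumed for the example */

struct vm_area_struct {
	unsigned long vm_start;
	unsigned long vm_end;
};

/* Equivalent of the kernel helper: number of pages spanned by a VMA. */
static unsigned long vma_pages(const struct vm_area_struct *vma)
{
	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
}

int main(void)
{
	struct vm_area_struct vma = { .vm_start = 0x400000, .vm_end = 0x404000 };

	/* Before: open-coded computation at every call site. */
	unsigned long open_coded = (vma.vm_end - vma.vm_start) >> PAGE_SHIFT;

	/* After: use the helper. */
	unsigned long via_helper = vma_pages(&vma);

	printf("%lu == %lu\n", open_coded, via_helper);
	return 0;
}
```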
-
Committed by Masahiro Yamada

As you see in init/version.c, init_uts_ns.name.machine is initially set to UTS_MACHINE. There is no point in copying the same string. I dug through the git history to figure out why this line is here. My best guess is this:

- This line has been around since the initial arm64 support in commit 9703d9d7 ("arm64: Kernel booting and initialisation"). If ARCH (=arm64) and UTS_MACHINE (=aarch64) do not match, arch/$(ARCH)/Makefile is supposed to override UTS_MACHINE, but the initial version of arch/arm64/Makefile missed doing that. Instead, the boot code copied "aarch64" to init_utsname()->machine.
- Commit 94ed1f2c ("arm64: setup: report ELF_PLATFORM as the machine for utsname") replaced "aarch64" with ELF_PLATFORM to make "uname" reflect the endianness.
- ELF_PLATFORM does not help to provide the UTS machine name to the rpm target, so commit cfa88c79 ("arm64: Set UTS_MACHINE in the Makefile") fixed it. That commit simply replaced ELF_PLATFORM with UTS_MACHINE, but missed the fact that the string copy itself is no longer needed.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon

fault.c seems to be a magnet for useless and wrong comments, largely due to its ancestry in other architectures where the code has since moved on, but the comments have remained intact. This patch removes both useless and incorrect comments, leaving only those that say something correct and relevant.

Reported-by: Wenjia Zhou <zhiyuan_zhu@htc.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Yury Norov

The ILP32 series [1] introduces a dependency on <asm/is_compat.h> for the TASK_SIZE macro, which in turn requires <asm/thread_info.h>, and <asm/thread_info.h> includes <asm/memory.h>, giving a circular dependency because TASK_SIZE is currently located in <asm/memory.h>. In other architectures, TASK_SIZE is defined in <asm/processor.h>, and moving TASK_SIZE there fixes the problem.

Discussion: https://patchwork.kernel.org/patch/9929107/

[1] https://github.com/norov/linux/tree/ilp32-next

CC: Will Deacon <will.deacon@arm.com>
CC: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 29 September 2017 (2 commits)

Committed by Will Deacon

We currently route pte translation faults via do_page_fault, which elides the address check against TASK_SIZE before invoking the mm fault handling code. However, this can cause issues with the path walking code in conjunction with our word-at-a-time implementation, because load_unaligned_zeropad can end up faulting in kernel space if it reads across a page boundary and runs into a page fault (e.g. by attempting to read from a guard region).

In the case of such a fault, load_unaligned_zeropad has registered a fixup to shift the valid data and pad with zeroes, however the abort is reported as a level 3 translation fault and we dispatch it straight to do_page_fault, despite it being a kernel address. This results in calling a sleeping function from atomic context:

  BUG: sleeping function called from invalid context at arch/arm64/mm/fault.c:313
  in_atomic(): 0, irqs_disabled(): 0, pid: 10290
  Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
  [...]
  [<ffffff8e016cd0cc>] ___might_sleep+0x134/0x144
  [<ffffff8e016cd158>] __might_sleep+0x7c/0x8c
  [<ffffff8e016977f0>] do_page_fault+0x140/0x330
  [<ffffff8e01681328>] do_mem_abort+0x54/0xb0
  Exception stack(0xfffffffb20247a70 to 0xfffffffb20247ba0)
  [...]
  [<ffffff8e016844fc>] el1_da+0x18/0x78
  [<ffffff8e017f399c>] path_parentat+0x44/0x88
  [<ffffff8e017f4c9c>] filename_parentat+0x5c/0xd8
  [<ffffff8e017f5044>] filename_create+0x4c/0x128
  [<ffffff8e017f59e4>] SyS_mkdirat+0x50/0xc8
  [<ffffff8e01684e30>] el0_svc_naked+0x24/0x28
  Code: 36380080 d5384100 f9400800 9402566d (d4210000)
  ---[ end trace 2d01889f2bca9b9f ]---

Fix this by dispatching all translation faults to do_translation_fault, which avoids invoking the page fault logic for faults on kernel addresses.

Cc: <stable@vger.kernel.org>
Reported-by: Ankit Jain <ankijain@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
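A hedged sketch of the dispatch logic (the function names follow arch/arm64/mm/fault.c, but the body is illustrative rather than the exact patch):

```c
/* Sketch: translation faults are screened by address first, so a fault on a
 * kernel address (e.g. load_unaligned_zeropad reading past the end of a page)
 * never enters the sleeping mm fault path and is instead resolved through
 * do_bad_area()/exception-table fixup handling.
 */
static int do_translation_fault_sketch(unsigned long addr, unsigned int esr,
				       struct pt_regs *regs)
{
	if (addr < TASK_SIZE)
		return do_page_fault(addr, esr, regs);	/* genuine user fault */

	do_bad_area(addr, esr, regs);	/* kernel address: fixup or die */
	return 0;
}
```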
-
Committed by Will Deacon

On kernels built with support for transparent huge pages, different CPUs can access the PMD concurrently due to e.g. fast GUP or page_vma_mapped_walk, and they must take care to use READ_ONCE to avoid value tearing or caching of stale values by the compiler. Unfortunately, these functions call into our pgtable macros, which don't use READ_ONCE, and compiler caching has been observed to cause the following crash during ext4 writeback:

  PC is at check_pte+0x20/0x170
  LR is at page_vma_mapped_walk+0x2e0/0x540
  [...]
  Process doio (pid: 2463, stack limit = 0xffff00000f2e8000)
  Call trace:
  [<ffff000008233328>] check_pte+0x20/0x170
  [<ffff000008233758>] page_vma_mapped_walk+0x2e0/0x540
  [<ffff000008234adc>] page_mkclean_one+0xac/0x278
  [<ffff000008234d98>] rmap_walk_file+0xf0/0x238
  [<ffff000008236e74>] rmap_walk+0x64/0xa0
  [<ffff0000082370c8>] page_mkclean+0x90/0xa8
  [<ffff0000081f3c64>] clear_page_dirty_for_io+0x84/0x2a8
  [<ffff00000832f984>] mpage_submit_page+0x34/0x98
  [<ffff00000832fb4c>] mpage_process_page_bufs+0x164/0x170
  [<ffff00000832fc8c>] mpage_prepare_extent_to_map+0x134/0x2b8
  [<ffff00000833530c>] ext4_writepages+0x484/0xe30
  [<ffff0000081f6ab4>] do_writepages+0x44/0xe8
  [<ffff0000081e5bd4>] __filemap_fdatawrite_range+0xbc/0x110
  [<ffff0000081e5e68>] file_write_and_wait_range+0x48/0xd8
  [<ffff000008324310>] ext4_sync_file+0x80/0x4b8
  [<ffff0000082bd434>] vfs_fsync_range+0x64/0xc0
  [<ffff0000082332b4>] SyS_msync+0x194/0x1e8

This is because page_vma_mapped_walk loads the PMD twice before calling pte_offset_map: the first time without READ_ONCE (where it gets all zeroes due to a concurrent pmdp_invalidate) and the second time with READ_ONCE (where it sees a valid table pointer due to a concurrent pmd_populate). However, the compiler inlines everything and caches the first value in a register, which is subsequently used in pte_offset_phys, which returns a junk pointer that is later dereferenced when attempting to access the relevant pte.

This patch fixes the issue by using READ_ONCE in pte_offset_phys to ensure that a stale value is not used. Whilst this is a point fix for a known failure (and simple to backport), a full fix moving all of our page table accessors over to {READ,WRITE}_ONCE and consistently using READ_ONCE in page_vma_mapped_walk is in the works for a future kernel release.

Cc: Jon Masters <jcm@redhat.com>
Cc: Timur Tabi <timur@codeaurora.org>
Cc: <stable@vger.kernel.org>
Fixes: f27176cf ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Tested-by: Richard Ruigrok <rruigrok@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
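A user-space sketch of the READ_ONCE idiom involved (the macro mirrors the kernel's simple scalar case; the scenario is a generic stale-register-copy example, not the pgtable code itself):

```c
#include <pthread.h>
#include <stdio.h>

#define READ_ONCE(x)	 (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

static int stop;	/* shared with another thread */

static void *worker(void *arg)
{
	unsigned long spins = 0;

	/* Without READ_ONCE the compiler may hoist the load of 'stop' out of
	 * the loop and spin forever on a register copy - the same class of
	 * stale-value reuse that bit pte_offset_phys.
	 */
	while (!READ_ONCE(stop))
		spins++;

	printf("stopped after %lu spins\n", spins);
	return arg;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	WRITE_ONCE(stop, 1);
	pthread_join(t, NULL);
	return 0;
}
```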
-
- 27 September 2017 (1 commit)

Committed by Marc Zyngier

When the kernel is entered at EL2 on an ARMv8.0 system, we construct the EL1 pstate and make sure this uses the EL1 stack pointer (we perform an exception return to EL1h). But if the kernel is either entered at EL1 or stays at EL2 (because we're on a VHE-capable system), we fail to set SPsel, and use whatever stack selection the higher exception level has chosen for us. Let's not take any chances, and make sure that SPsel is set to one before we decide the mode we're going to run in.

Cc: <stable@vger.kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 18 September 2017 (4 commits)

Committed by Andrew Pinski

The kernel needs to be compiled as an LP64 binary for ARM64, even when using a compiler that defaults to code generation for the ILP32 ABI. Consequently, we need to explicitly pass '-mabi=lp64' (supported on gcc-4.9 and newer).

Signed-off-by: Andrew Pinski <Andrew.Pinski@caviumnetworks.com>
Signed-off-by: Philipp Tomsich <philipp.tomsich@theobroma-systems.com>
Signed-off-by: Christoph Muellner <christoph.muellner@theobroma-systems.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Reviewed-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Masahiro Yamada

AArch64 instructions must be word aligned. The current 16-byte alignment is more than enough; relax it to 4-byte alignment.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Dave Martin

__efi_fpsimd_begin()/__efi_fpsimd_end() are for use when making EFI calls only, so using them in non-EFI kernels is not allowed. This patch compiles them out if CONFIG_EFI is not set.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Thomas Garnier

A bug was reported on ARM where set_fs might be called after it was checked in the work-pending function. ARM64 is not affected by this bug, but has a similar construct. In order to avoid any similar problems in the future, the addr_limit_user_check function is moved to the beginning of the loop.

Fixes: cf7de27a ("arm64/syscalls: Check address limit on user-mode return")
Reported-by: Leonard Crestez <leonard.crestez@nxp.com>
Signed-off-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Will Drewry <wad@chromium.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: David Howells <dhowells@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-api@vger.kernel.org
Cc: Yonghong Song <yhs@fb.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1504798247-48833-5-git-send-email-keescook@chromium.org
-
- 14 September 2017 (1 commit)

Committed by Prakash Gupta

The stacktraces always begin as follows:

  [<c00117b4>] save_stack_trace_tsk+0x0/0x98
  [<c0011870>] save_stack_trace+0x24/0x28
  ...

This is because the stack trace code includes the stack frames for itself. This is incorrect behaviour, and also leads to "skip" doing the wrong thing (which is the number of stack frames to avoid recording). Perversely, it does the right thing when passed a non-current thread. Fix this by ensuring that we have a known constant number of frames above the main stack trace function, and always skip these. This was fixed for arch arm by commit 3683f44c ("ARM: stacktrace: avoid listing stacktrace functions in stacktrace").

Link: http://lkml.kernel.org/r/1504078343-28754-1-git-send-email-guptap@codeaurora.org
Signed-off-by: Prakash Gupta <guptap@codeaurora.org>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
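The same "skip the tracer's own frames" idea can be illustrated in user space with glibc's backtrace(); a minimal sketch (the number of frames to skip is this example's own choice, not the kernel's):

```c
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_FRAMES  32
#define SELF_FRAMES 1	/* frames belonging to the tracer itself: print_trace() */

/* Capture a backtrace but drop the tracer's own frame(s), so callers only see
 * *their* call chain, mirroring what the arm64 patch does for save_stack_trace.
 */
static void print_trace(void)
{
	void *frames[MAX_FRAMES];
	int n = backtrace(frames, MAX_FRAMES);
	char **syms = backtrace_symbols(frames, n);

	if (!syms)
		return;
	for (int i = SELF_FRAMES; i < n; i++)
		printf("%s\n", syms[i]);
	free(syms);
}

static void leaf(void) { print_trace(); }

int main(void)
{
	leaf();
	return 0;
}
```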
-
- 09 September 2017 (1 commit)

Committed by Alexey Dobriyan

First, the number of CPUs can't be a negative number. Second, the differing signedness leads to suboptimal code in the following cases:

1) kmalloc(nr_cpu_ids * sizeof(X));
   "int" has to be sign extended to size_t.

2) while (loff_t *pos < nr_cpu_ids)
   MOVSXD is 1 byte longer than the same MOV.

Other cases exist as well. Basically the compiler is told that nr_cpu_ids can't be negative, which can't be deduced if it is "int".

Code savings on an allyesconfig kernel: -3KB

  add/remove: 0/0 grow/shrink: 25/264 up/down: 261/-3631 (-3370)
  function                   old     new   delta
  coretemp_cpu_online        450     512     +62
  rcu_init_one              1234    1272     +38
  pci_device_probe           374     399     +25
  ...
  pgdat_reclaimable_pages    628     556     -72
  select_fallback_rq         446     369     -77
  task_numa_find_cpu        1923    1807    -116

Link: http://lkml.kernel.org/r/20170819114959.GA30580@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
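The codegen difference in case 1 can be seen with a toy version of the allocation, compiled with `gcc -O2 -S` and compared (the globals below are stand-ins for the kernel variable, not the kernel code itself):

```c
#include <stdlib.h>

int nr_cpu_ids_signed;			/* old: plain int */
unsigned int nr_cpu_ids_unsigned;	/* new: unsigned int */

void *alloc_signed(void)
{
	/* int -> size_t: the count must be sign-extended (e.g. MOVSXD on
	 * x86-64) because the compiler cannot rule out a negative value.
	 */
	return malloc(nr_cpu_ids_signed * sizeof(long));
}

void *alloc_unsigned(void)
{
	/* unsigned int -> size_t: zero extension, which x86-64 gets for free
	 * from a 32-bit MOV, so the code is slightly shorter.
	 */
	return malloc(nr_cpu_ids_unsigned * sizeof(long));
}
```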
-
- 05 September 2017 (2 commits)

Committed by Christoffer Dall

As we are about to access the APRs from the GICv2 uaccess interface, make this logic generally available.

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Committed by James Morse

The ARM ARM has two bits in the ESR/HSR relevant to external aborts: a range of {I,D}FSC values (of which bit 5 is always set) and bit 9, 'EA', which provides:

> an IMPLEMENTATION DEFINED classification of External Aborts.

This bit is in addition to the {I,D}FSC range and has an implementation defined meaning. KVM should always ignore this bit when handling external aborts from a guest. Remove the ESR_ELx_EA definition and rewrite its helper kvm_vcpu_dabt_isextabt() to check the {I,D}FSC range. This merges kvm_vcpu_dabt_isextabt() and the recently added is_abort_sea() helper.

CC: Tyler Baicar <tbaicar@codeaurora.org>
Reported-by: gengdongjiu <gengdj.1984@gmail.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
- 01 September 2017 (1 commit)

Committed by Jérôme Glisse

Calls to mmu_notifier_invalidate_page() were replaced by calls to mmu_notifier_invalidate_range() and are now bracketed by calls to mmu_notifier_invalidate_range_start()/end(). Remove the now-useless invalidate_page callback.

Changed since v1 (Linus Torvalds):
- remove the now-useless kvm_arch_mmu_notifier_invalidate_page()

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Tested-by: Mike Galbraith <efault@gmx.de>
Tested-by: Adam Borowski <kilobyte@angband.pl>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: kvm@vger.kernel.org
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 31 August 2017 (5 commits)

Committed by Minghuan Lian

The LS1046a includes 3 MSI controllers. Each controller supports 128 interrupts.

Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Minghuan Lian <Minghuan.Lian@nxp.com>
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Minghuan Lian

In order to maximize the use of MSI, a PCIe controller will share all MSI controllers. The patch changes "msi-parent" to refer to all MSI controller dts nodes.

Signed-off-by: Minghuan Lian <Minghuan.Lian@nxp.com>
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Minghuan Lian

"1" should be replaced by "l". This is a typo; the patch fixes it.

Signed-off-by: Minghuan Lian <Minghuan.Lian@nxp.com>
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

When masking/unmasking a doorbell interrupt, it is necessary to issue an invalidation to the corresponding redistributor. We use the DirectLPI feature by writing directly to the corresponding redistributor.

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Committed by Marc Zyngier

V{PEND,PROP}BASER being 64-bit registers, they need some ad-hoc accessors on 32-bit, especially given that VPENDBASER contains a Valid bit, making the access a bit convoluted.

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
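A generic sketch of the split-access pattern on a 32-bit host (the register offsets are placeholders for illustration, not the GICv4 layout):

```c
#include <stdint.h>

#define REG_LO_OFFSET	0x0	/* placeholder offsets for the two halves */
#define REG_HI_OFFSET	0x4

/* On a 32-bit CPU a 64-bit MMIO register has to be accessed as two 32-bit
 * halves. Ordering matters when one half carries a Valid bit: write the half
 * holding it last, so the hardware never sees a half-written value as valid.
 */
static void write_reg64_split(volatile uint32_t *base, uint64_t val)
{
	base[REG_LO_OFFSET / 4] = (uint32_t)val;		/* low half first */
	base[REG_HI_OFFSET / 4] = (uint32_t)(val >> 32);	/* half with the Valid bit last */
}

static uint64_t read_reg64_split(const volatile uint32_t *base)
{
	uint64_t lo = base[REG_LO_OFFSET / 4];
	uint64_t hi = base[REG_HI_OFFSET / 4];

	return (hi << 32) | lo;
}
```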
-
- 30 August 2017 (4 commits)

Committed by Thomas Petazzoni

The Armada AP806 has 20 pins, and therefore 20 GPIOs (from 0 to 19 included), not 19 pins. Therefore, we fix the Device Tree description for the GPIO controller.

Before this patch:
  $ cat /sys/kernel/debug/pinctrl/f06f4000.system-controller:pinctrl/gpio-ranges
  GPIO ranges handled:
  0: mvebu-gpio GPIOS [0 - 19] PINS [0 - 19]
  0: f06f4000.system-controller:gpio GPIOS [0 - 18] PINS [0 - 18]

After this patch:
  $ cat /sys/kernel/debug/pinctrl/f06f4000.system-controller:pinctrl/gpio-ranges
  GPIO ranges handled:
  0: mvebu-gpio GPIOS [0 - 19] PINS [0 - 19]
  0: f06f4000.system-controller:gpio GPIOS [0 - 19] PINS [0 - 19]

Fixes: 63dac0f4 ("arm64: dts: marvell: add gpio support for Armada 7K/8K")
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
-
Committed by Antoine Tenart

This patch enables the two GE/SFP ports. They are configured in 10GKR mode by default. To do this, the cpm_xdmio node is enabled as well, and two PHY descriptions are added.

Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
Tested-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
-
Committed by Antoine Tenart

The network driver on Marvell SoCs (7k/8k) needs to access some registers in the system controller to configure its ports at runtime. This patch adds a phandle reference to the syscon system controller node in the ppv2 node.

Signed-off-by: Antoine Tenart <antoine.tenart@free-electrons.com>
Tested-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
-
Committed by Thomas Petazzoni

This commit updates the Marvell Armada 7K/8K Device Tree to describe the TX interrupts of the Ethernet controllers, in both the master and slave CP110s.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
-