- 12 June 2015, 6 commits
-
-
Committed by Vladimir Murzin

tlb.S has been removed since fa48e6f7 ("arm64: mm: Optimise tlb flush logic where we have >4K granule"), so bring the comment in line with that.

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

After secondary CPU boot or hotplug, the active_mm of the idle thread is &init_mm. The init_mm.pgd (swapper_pg_dir) is only meant for TTBR1_EL1 and must not be set in TTBR0_EL1. Since TTBR0_EL1 is already set to the reserved value when active_mm == &init_mm, there is no need to perform any context reset.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org>
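A minimal sketch of the guard this describes, assuming a helper around the secondary-boot mm activation (the function name below is invented for illustration; only cpu_switch_mm, init_mm and swapper_pg_dir are taken from the text above):

    #include <linux/sched.h>
    #include <asm/mmu_context.h>

    /* Illustrative only: leave TTBR0_EL1 alone for init_mm, whose pgd
     * (swapper_pg_dir) belongs in TTBR1_EL1; TTBR0_EL1 already holds the
     * reserved value in that case. */
    static void idle_activate_mm(struct mm_struct *mm)
    {
            if (mm == &init_mm)
                    return;
            cpu_switch_mm(mm->pgd, mm);
    }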
-
Committed by Marc Zyngier

So far, we configured the world-switch by having a small array of pointers to the save and restore functions, depending on the GIC used on the platform. Loading these values each time is a bit silly (they never change), and it makes sense to rely on instruction patching instead. This leads to a nice cleanup of the code.

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier

Add a new item to the feature set (ARM64_HAS_SYSREG_GIC_CPUIF) to indicate that we have a system register GIC CPU interface. This will help KVM switch to alternative instruction patching.

Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Will Deacon

When building without CONFIG_HOTPLUG_CPU, GCC complains (rightly) that psci_tos_resident_on is unused:

    arch/arm64/kernel/psci.c:61:13: warning: ‘psci_tos_resident_on’ defined but not used [-Wunused-function]
     static bool psci_tos_resident_on(int cpu)

As it is only ever used when CONFIG_HOTPLUG_CPU is selected, move it into the existing ifdef.

Signed-off-by: Will Deacon <will.deacon@arm.com>
[Mark: write commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Janet Liu

FPSIMD state is currently not handled across CPU hotplug. After a CPU is taken down and brought back up, the FPSIMD hardware registers hold their reset values rather than any process's context. Set the CPU's fpsimd_last_state to NULL from the CPU_DEAD notifier so that the context is forcibly reloaded for the next thread, closing the window in which the reload could wrongly be skipped: if process A is the last FPSIMD user on CPU N before the CPU goes down and the first user on CPU N after it comes back up, A's fpsimd_state.cpu still equals the current CPU id and per_cpu(fpsimd_last_state) still points at A's fpsimd_state, so the kernel would not reload the context on return to user space.

Signed-off-by: Janet Liu <janet.liu@spreadtrum.com>
Signed-off-by: Xiongshan An <xiongshan.an@spreadtrum.com>
Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
[catalin.marinas@arm.com: some mostly cosmetic clean-ups]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
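A rough sketch of the kind of hotplug hook described here, assuming it sits next to the existing per-CPU fpsimd_last_state pointer in fpsimd.c (the notifier name and its registration are invented for illustration):

    #include <linux/cpu.h>
    #include <linux/notifier.h>
    #include <linux/percpu.h>

    /* Illustrative: on CPU_DEAD, forget which task's FPSIMD state the dead
     * CPU last held, so the next user task on it is forced to reload. */
    static int fpsimd_cpu_dead_notify(struct notifier_block *nb,
                                      unsigned long action, void *hcpu)
    {
            unsigned int cpu = (unsigned long)hcpu;

            if ((action & ~CPU_TASKS_FROZEN) == CPU_DEAD)
                    per_cpu(fpsimd_last_state, cpu) = NULL;

            return NOTIFY_OK;
    }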
-
- 11 June 2015, 1 commit
-
-
Committed by Janet Liu

A kernel thread's default FPSIMD state is zero. When forking a thread whose parent is a kernel thread, the hardware FPSIMD context gets saved into the parent's fpsimd_state even though that context actually belongs to some user process. Because kernel threads do not use FPSIMD this does not cause a correctness problem, but it does add a small, unnecessary cost, so skip the save when the parent is a kernel thread.

Signed-off-by: Janet Liu <janet.liu@spreadtrum.com>
Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
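A minimal sketch of the idea, assuming the check lives in the arm64 arch_dup_task_struct() path (the exact location is not spelled out in this log):

    #include <linux/sched.h>
    #include <asm/fpsimd.h>

    /* Illustrative: only snapshot the live FPSIMD registers when the
     * forking task actually owns user FPSIMD state (i.e. it has an mm). */
    int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
    {
            if (current->mm)
                    fpsimd_preserve_current_state();
            *dst = *src;
            return 0;
    }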
-
- 09 June 2015, 1 commit
-
-
Committed by Josh Stone

If a syscall is entered without TIF_SYSCALL_TRACE set, it goes down the fast path. It is then possible for TIF_SYSCALL_TRACE to be added in the middle of the syscall, but ret_fast_syscall does not check the flag again, so a ptrace syscall-exit-stop is missed. For instance, from a PTRACE_EVENT_FORK reported during do_fork, the tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE; the completion of the fork should then have a syscall-exit-stop. Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the fast exit path. Do the same on arm64.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Josh Stone <jistone@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 05 June 2015, 5 commits
-
-
Merge from git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux, committed by Catalin Marinas

* 'arm64/psci-rework' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux:
  arm64: psci: remove ACPI coupling
  arm64: psci: kill psci_power_state
  arm64: psci: account for Trusted OS instances
  arm64: psci: support unsigned return values
  arm64: psci: remove unnecessary id indirection
  arm64: smp: consistently use error codes
  arm64: smp_plat: add get_logical_index
  arm/arm64: kvm: add missing PSCI include

Conflicts:
  arch/arm64/kernel/smp.c
-
Committed by Marc Zyngier

AArch64 toolchains suffer from the following bug:

    $ cat blah.S
    1:
    .inst 0x01020304
    .if ((. - 1b) != 4)
    .error "blah"
    .endif
    $ aarch64-linux-gnu-gcc -c blah.S
    blah.S: Assembler messages:
    blah.S:3: Error: non-constant expression in ".if" statement

which precludes the use of msr_s and co. as part of alternatives. We work around this issue by not directly testing the labels themselves, but by moving the current output pointer by a value that should always be zero. If this value is not null, we trigger a backward move of the output pointer, which is explicitly forbidden. This produces the error we are after:

    AS      arch/arm64/kvm/hyp.o
    arch/arm64/kvm/hyp.S: Assembler messages:
    arch/arm64/kvm/hyp.S:1377: Error: attempt to move .org backwards
    scripts/Makefile.build:294: recipe for target 'arch/arm64/kvm/hyp.o' failed
    make[1]: *** [arch/arm64/kvm/hyp.o] Error 1
    Makefile:946: recipe for target 'arch/arm64/kvm' failed

Not pretty, but it at least works with current toolchains.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier

asm/alternative-asm.h and asm/alternative.h are extremely similar, and really deserve to live in the same file (this also makes further modifications a bit easier). Fold the content of alternative-asm.h into alternative.h, and update the few users.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier

Since all branches are PC-relative on AArch64, these instructions cannot be used as an alternative with the simplistic approach we currently have: the immediate has been computed from the .altinstr_replacement section, and ends up being completely wrong if the target is outside of the replacement sequence. This patch handles branch instructions differently, using the insn framework to recompute the immediate and generate the right displacement in the above case.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier

The workaround for erratum 845719 currently uses a branch between two alternate sequences, which is quite fragile and which we are going to break as we rework the alternative code. This patch reworks the workaround to fit in a single alternative sequence. The generated code itself is unchanged.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 03 June 2015, 3 commits
-
-
Committed by Marc Zyngier

In order to deal with branches located in alternate sequences, but pointing to the main kernel text, it is required to extract the relative displacement encoded in the instruction, and to be able to update said instruction with a new offset (once it is known). For this, we introduce three new helpers:

- aarch64_insn_is_branch_imm is a predicate indicating whether the instruction is an immediate branch
- aarch64_get_branch_offset returns a signed value representing the byte offset encoded in a branch instruction
- aarch64_set_branch_offset takes an instruction and an offset, and returns the corresponding updated instruction

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
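For the unconditional B/BL encoding, the extraction boils down to scaling and sign-extending the imm26 field; a hedged sketch follows (the real helpers go through the insn framework's field accessors and also cover conditional branches, CBZ/CBNZ and TBZ/TBNZ):

    #include <linux/bitops.h>
    #include <linux/types.h>

    /* Sketch for B/BL only: imm26 sits in bits [25:0], is sign-extended
     * from bit 25 and scaled by the 4-byte instruction size. */
    static s64 b_imm26_to_byte_offset(u32 insn)
    {
            s64 imm = insn & GENMASK(25, 0);

            if (imm & BIT(25))
                    imm |= GENMASK_ULL(63, 26);

            return imm * 4;
    }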
-
Committed by Ard Biesheuvel

Two cleanups of the asm function cpu_resume():
- The global variable sleep_idmap_phys always points to idmap_pg_dir, so we can just use that value directly in the CPU resume path.
- Unclutter the load of sleep_save_sp::save_ptr_stash_phys.

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

Commit ea8c2e11 ("arm64: Extend the idmap to the whole kernel image") changed the early page table code so that the entire kernel Image is covered by the identity map. This allows functions that need to enable or disable the MMU to reside anywhere in the kernel Image. However, this change has the unfortunate side effect that the Image cannot cross a physical 512 MB alignment boundary anymore, since the early page table code cannot deal with the Image crossing a /virtual/ 512 MB alignment boundary.

So instead, reduce the ID map to a single page, populated by the contents of the .idmap.text section. Only three functions reside there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset(). If new code is introduced that needs to manipulate the MMU state, it should be added to this section as well.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 02 June 2015, 2 commits
-
-
Committed by Ard Biesheuvel

Currently, the FDT blob needs to be in the same 512 MB region as the kernel, so that it can be mapped into the kernel virtual memory space very early on using a minimal set of statically allocated translation tables. Now that we have early fixmap support, we can relax this restriction by moving the permanent FDT mapping to the fixmap region instead. This way, the FDT blob may be anywhere in memory.

This also moves the vetting of the FDT to mmu.c, since the early init code in head.S no longer handles mapping of the FDT. At the same time, fix up some comments in head.S that have gone stale.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Ard Biesheuvel

This splits off the reservation of the memory occupied by the FDT binary itself from the processing of the memory reservations it contains. This is necessary because the physical address of the FDT, which is needed to perform the reservation, may not be known to the FDT driver core; i.e., it may be mapped outside the linear direct mapping, in which case __pa() returns a bogus value.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Rob Herring <robh@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 01 June 2015, 1 commit
-
-
Committed by Will Deacon

Since commit a4780ade ("ARM: 7735/2: Preserve the user r/w register TPIDRURW on context switch and fork"), arch/arm/ has context switched the user-writable TLS register, so do the same for compat tasks running under the arm64 kernel.

Reported-by: André Hentschel <nerv@dawncrow.de>
Tested-by: André Hentschel <nerv@dawncrow.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 May 2015, 8 commits
-
-
Committed by Mark Rutland

The 32-bit ARM port doesn't have ACPI headers, and conditionally including them is going to look horrendous. In preparation for sharing the PSCI invocation code with 32-bit, move the acpi_psci_* function declarations and definitions such that the PSCI client code need not include ACPI headers.

While it would seem like we could simply hide the ACPI includes in psci.h, the ACPI headers have hilarious circular dependencies which make this infeasible without reorganising most of ACPICA. So rather than doing that, move the acpi_psci_* prototypes into psci.h.

The psci_acpi_init function is made dependent on CONFIG_ACPI (with a stub implementation in asm/psci.h) such that it need not be built for 32-bit ARM or kernels without ACPI support. The currently missing __init annotations are added to the prototypes in the header.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Reviewed-by: Al Stone <al.stone@linaro.org>
Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland

A PSCI 1.0 implementation may choose to use the new extended StateID format, the presence of which may be queried via the PSCI_FEATURES call. The layout of this new StateID format is incompatible with the existing format, so to handle both we must abstract attempts to parse the fields.

In preparation for PSCI 1.0 support, this patch introduces psci_power_state_loses_context and psci_power_state_is_valid functions to query information from a PSCI power state, which is no longer decomposed (and hence the pack/unpack functions are removed). As it is no longer decomposed, it is now passed around as an opaque u32 token.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
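As a hedged illustration of what such a query can look like for the original (non-extended) StateID layout, bit 16 of the power_state parameter is the type field separating standby from powerdown states; the field position comes from the PSCI 0.2 specification, not from this log:

    #include <linux/types.h>

    #define PSCI_0_2_POWER_STATE_TYPE_SHIFT  16
    #define PSCI_0_2_POWER_STATE_TYPE_MASK   (0x1U << PSCI_0_2_POWER_STATE_TYPE_SHIFT)

    /* Powerdown states lose context; standby states retain it. */
    static bool power_state_loses_context(u32 state)
    {
            return state & PSCI_0_2_POWER_STATE_TYPE_MASK;
    }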
-
Committed by Mark Rutland

Software resident in the secure world (a "Trusted OS") may cause CPU_OFF calls for the CPU it is resident on to be denied. Such a denial would be fatal for the kernel, so we must detect when this can happen before the point of no return.

This patch implements Trusted OS detection for PSCI 0.2+ systems, using MIGRATE_INFO_TYPE and MIGRATE_INFO_UP_CPU. When a Trusted OS is detected as resident on a particular CPU, attempts to hot unplug that CPU are denied early, before they can prove fatal.

Trusted OS migration is not implemented by this patch. Implementation of migratable UP Trusted OSs seems unlikely, and the right policy for migration is unclear (and will likely differ across implementations). As such, it is likely that migration will require cooperation with Trusted OS drivers.

PSCI implementations prior to 0.2 do not provide the facility to detect the presence of a Trusted OS, nor the CPU any such OS is resident on, so without additional information it is not possible to handle Trusted OSs with PSCI 0.1.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
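A sketch of the check this detection enables (variable and function names assumed from context; how resident_cpu gets populated from MIGRATE_INFO_TYPE and MIGRATE_INFO_UP_CPU is not shown):

    /* Illustrative: remember which CPU hosts a UP Trusted OS and refuse
     * to hot-unplug it before the attempt can turn fatal. */
    static int resident_cpu = -1;       /* -1: no UP Trusted OS detected */

    static bool psci_tos_resident_on(int cpu)
    {
            return cpu == resident_cpu;
    }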
-
Committed by Mark Rutland

PSCI_VERSION and MIGRATE_INFO_UP_CPU return unsigned values, with the latter returning a 64-bit value. However, the PSCI invocation functions have prototypes returning int. This patch upgrades the invocation functions to return unsigned long, with a new typedef to keep things legible.

As PSCI_VERSION cannot return a negative value, the erroneous check against PSCI_RET_NOT_SUPPORTED is also removed. The unrelated psci_initcall_t typedef is moved closer to its first user, to avoid confusion with the invocation functions.

In preparation for sharing the code with ARM, unsigned long is used in preference to u64. In the SMC32 calling convention, the relevant fields will be 32 bits wide.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
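The shape of the change is roughly the following sketch, based only on the description above (names assumed):

    /* Invocation functions now return unsigned long rather than int, so a
     * 64-bit MIGRATE_INFO_UP_CPU result is neither truncated nor mistaken
     * for a negative error code. */
    typedef unsigned long (psci_fn)(unsigned long fn, unsigned long arg0,
                                    unsigned long arg1, unsigned long arg2);

    static psci_fn *invoke_psci_fn;     /* set to the SMC or HVC trampoline */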
-
Committed by Mark Rutland

PSCI 0.1 did not define canonical IDs for CPU_ON, CPU_OFF, CPU_SUSPEND, or MIGRATE, so these need to be provided when using firmware compliant with PSCI 0.1. However, functions introduced in 0.2 or later have canonical IDs, and these cannot be provided via DT. There is no need to indirect the IDs via a table; they can be used directly at call sites (and already are for SYSTEM_OFF and SYSTEM_RESET).

This patch removes the unnecessary function ID indirection for AFFINITY_INFO and MIGRATE_INFO_TYPE.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland

cpu_kill currently returns one for success and zero for failure, unlike all the other cpu_operations, which return zero for success and an error code upon failure. This difference is unnecessarily confusing, so make cpu_kill consistent with the other cpu_operations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
-
Committed by Mark Rutland

The PSCI MIGRATE_INFO_UP_CPU call returns a physical ID, which we will need to map back to a Linux logical ID. Implement a reusable get_logical_index to map from a physical ID to a logical ID.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
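The lookup is likely along these lines; a sketch assuming the usual arm64 cpu_logical_map() table of MPIDR values (the actual helper may differ in detail):

    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <asm/smp_plat.h>

    /* Return the logical CPU id whose MPIDR matches the physical id
     * reported by firmware, or -EINVAL if there is no such CPU. */
    static inline int get_logical_index(u64 mpidr)
    {
            int cpu;

            for (cpu = 0; cpu < nr_cpu_ids; cpu++)
                    if (cpu_logical_map(cpu) == mpidr)
                            return cpu;

            return -EINVAL;
    }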
-
Committed by Mark Rutland

We make use of the PSCI function IDs, but don't explicitly include the header which defines them. Relying on transitive header includes is fragile and will be broken as headers are refactored. This patch includes the relevant header file directly so as to avoid future breakage.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
-
- 21 May 2015, 1 commit
-
-
Committed by Paul E. McKenney

This commit replaces the open-coded CPU-offline notification with the new common code. In particular, this change avoids calling scheduler code using RCU from an offline CPU that RCU is ignoring.

This is a minimal change. A more intrusive change might invoke the cpu_check_up_prepare() and cpu_set_state_online() functions at CPU-online time, which would allow onlining to throw an error if the CPU did not go offline properly.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 May 2015, 1 commit
-
-
Committed by Hou Pengyang

For ARM64, when tracing with tracepoint events, the IP and pstate are set to 0, preventing the perf code from parsing the callchain and resolving the symbols correctly:

    ./perf record -e sched:sched_switch -g --call-graph dwarf ls
    [ perf record: Captured and wrote 0.146 MB perf.data ]
    ./perf report -f
    Samples: 194 of event 'sched:sched_switch', Event count (approx.): 194
    Children      Self  Command  Shared Object  Symbol
    100.00%    100.00%  ls       [unknown]      [.] 0000000000000000

The fix is to implement perf_arch_fetch_caller_regs for ARM64, which fills in the registers needed for callchain unwinding: pc, sp, fp and spsr. With this patch, the callchain is parsed correctly:

    ......
    +   2.63%  0.00%  ls  [kernel.kallsyms]  [k] vfs_symlink
    +   2.63%  0.00%  ls  [kernel.kallsyms]  [k] follow_down
    +   2.63%  0.00%  ls  [kernel.kallsyms]  [k] pfkey_get
    +   2.63%  0.00%  ls  [kernel.kallsyms]  [k] do_execveat_common.isra.33
    -   2.63%  0.00%  ls  [kernel.kallsyms]  [k] pfkey_send_policy_notify
           pfkey_send_policy_notify
           pfkey_get
           v9fs_vfs_rename
           page_follow_link_light
           link_path_walk
           el0_svc_naked
    .......

Signed-off-by: Hou Pengyang <houpengyang@huawei.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
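A hedged sketch of what such a macro has to capture (the register names follow the description above; this is not the literal patch):

    /* Snapshot enough caller state for the unwinder to start: pc from the
     * supplied ip, the frame pointer (x29), the stack pointer and a
     * kernel-mode pstate value. */
    #define perf_arch_fetch_caller_regs(regs, __ip) do {                    \
            (regs)->pc = (__ip);                                            \
            (regs)->regs[29] = (unsigned long)__builtin_frame_address(0);   \
            (regs)->sp = current_stack_pointer;                             \
            (regs)->pstate = PSR_MODE_EL1h;                                 \
    } while (0)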
-
- 19 May 2015, 10 commits
-
-
Merge from git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux, committed by Catalin Marinas

* 'for-next/cpu-init' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  ARM64: kernel: unify ACPI and DT cpus initialization
  ARM64: kernel: make cpu_ops hooks DT agnostic
-
Committed by Lorenzo Pieralisi

The code that initializes cpus on arm64 is currently split into two different code paths that carry out DT and ACPI cpu initialization. Most of the code executing SMP initialization is common and should be merged to reduce discrepancies between ACPI and DT initialization and to have code initializing cpus in a single common place in the kernel.

This patch refactors arm64 SMP cpu initialization code to merge ACPI and DT boot paths in a common file and to create sanity checks that can be reused by both boot methods. Current code assumes PSCI is the only available boot method when arm64 boots with ACPI; this can easily be extended if/when the ACPI parking protocol is merged into the kernel.

Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com> [DT]
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Lorenzo Pieralisi

ARM64 CPU operations such as cpu_init and cpu_init_idle take a struct device_node pointer as a parameter, which corresponds to the device tree node of the logical cpu on which the operation has to be applied. With the advent of ACPI on arm64, where MADT static table entries are used to initialize cpus, the device tree node parameter in the cpu_ops hooks becomes useless when booting with ACPI, since in that case cpu device tree nodes are not present and cannot be used for cpu initialization.

The current cpu_init hook requires a struct device_node pointer parameter because it is called while parsing the device tree to initialize CPUs, when the cpu_logical_map (that is used to match a cpu node reg property to a device tree node) for a given logical cpu id is not set up yet. This means that the cpu_init hook cannot rely on the of_get_cpu_node function to retrieve the device tree node corresponding to the logical cpu id passed in as a parameter, so the cpu device tree node must be passed in as a parameter to fix this catch-22 dependency cycle.

This patch reshuffles the cpu_logical_map initialization code so that the cpu_init cpu_ops hook can safely use the of_get_cpu_node function to retrieve the cpu device tree node, removing the need for the device tree node pointer parameter. In the process, the patch removes device tree node parameters from all cpu_ops hooks, in preparation for SMP DT/ACPI cpus initialization consolidation.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Mark Rutland <mark.rutland@arm.com> [DT]
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Michal Simek

This resolves the following sparse warnings from readl() and other macros, which end up embedding readl_relaxed() using the same variable name:

    include/asm-generic/io.h:364:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:364:16: originally declared here
    include/asm-generic/io.h:372:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:372:16: originally declared here
    include/asm-generic/io.h:380:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:380:16: originally declared here
    include/asm-generic/io.h:568:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:568:16: originally declared here
    include/asm-generic/io.h:576:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:576:16: originally declared here
    include/asm-generic/io.h:584:16: warning: symbol '__v' shadows an earlier one
    include/asm-generic/io.h:584:16: originally declared here

The same patch was already applied to arm32 as "ARM: 7118/1: rename temp variable in read*_relaxed()" (sha1: b0c1264f).

Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
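To illustrate the shadowing (simplified, with invented _old/_new names; not the real header contents): the outer accessor and the relaxed accessor both declare a local called __v, so giving the inner one a different name is what silences sparse:

    /* Before: readl() expands readl_relaxed(), and both declare '__v'. */
    #define readl_relaxed_old(c)  ({ u32 __v = __raw_readl(c); __v; })
    #define readl_old(c)          ({ u32 __v = readl_relaxed_old(c); __iormb(); __v; })

    /* After: the relaxed accessor uses its own local, so the nested
     * expansion inside readl() no longer shadows anything. */
    #define readl_relaxed_new(c)  ({ u32 __r = __raw_readl(c); __r; })
    #define readl_new(c)          ({ u32 __v = readl_relaxed_new(c); __iormb(); __v; })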
-
Committed by Mark Rutland

The documented semantics of flush_cache_all are not possible to provide for arm64 (short of flushing the entire physical address space by VA), and there are currently no users; KVM uses VA maintenance exclusively, cpu_reset is never called, and the only two users outside of arch code cannot be built for arm64.

While cpu_soft_reset and related functions (which call flush_cache_all) were thought to be useful for kexec, their current implementations only serve to mask bugs. For correctness kexec will need to perform maintenance by VA anyway to account for system caches, line migration, and other subtleties of the cache architecture. As the extent of this cache maintenance will be kexec-specific, it should probably live in the kexec code.

This patch removes flush_cache_all, and related unused components, preventing further abuse.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Geoff Levand <geoff@infradead.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Anders Roxell

It is now safe to allow forced interrupt threading for arm64: all timer interrupts and the perf interrupt are marked NO_THREAD, as is the case with arch/arm since da0ec6f7 ("ARM: 7814/2: Allow forced irq threading").

Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Anders Roxell

Mark the PMU interrupts as non-threadable, as is the case with arch/arm since d9c3365b ("ARM: 7813/1: Mark pmu interupt IRQF_NO_THREAD").

Acked-by: Will Deacon <will.deacon@arm.com>
Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Linus Torvalds
-
Committed by Peter Zijlstra

Two watchdog changes that came through different trees had a non-conflicting conflict: one changed the semantics of a variable, but no actual code conflict happened. So the merge appeared fine, but the resulting code did not behave as expected.

Commit 195daf66 ("watchdog: enable the new user interface of the watchdog mechanism") changes the semantics of watchdog_user_enabled, which thereafter is only used by the functions introduced by b3738d29 ("watchdog: Add watchdog enable/disable all functions"). There further appears to be a distinct lack of serialization between setting and using watchdog_enabled, so perhaps we should wrap the {en,dis}able_all() things in watchdog_proc_mutex.

This patch fixes an s2r failure reported by Michal, which I cannot readily explain. But it does make the code internally consistent again.

Reported-and-tested-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Merge from git://git.infradead.org/linux-mtd, committed by Linus Torvalds

Pull MTD fixes from Brian Norris:
 "Two MTD fixes for 4.1:

  - readtest: the signal-handling code was clobbering the error codes we should be handling/reporting in this test, rendering it useless. Noticed by Coverity.

  - the common SPI NOR flash DT binding (merged for 4.1-rc1) is being revised, so let's change that before 4.1 is minted"

* tag 'for-linus-20150516' of git://git.infradead.org/linux-mtd:
  Documentation: dt: mtd: replace "nor-jedec" binding with "jedec,spi-nor"
  mtd: readtest: don't clobber error reports
-
- 17 May 2015, 1 commit
-
-
Merge from git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb, committed by Linus Torvalds

Pull USB fixes from Greg KH:
 "Here are some USB fixes and new device ids for 4.1-rc4. All are pretty minor, and have been in linux-next successfully"

* tag 'usb-4.1-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
  usb-storage: Add NO_WP_DETECT quirk for Lacie 059f:0651 devices
  Added another USB product ID for ELAN touchscreen quirks.
  xhci: gracefully handle xhci_irq dead device
  xhci: Solve full event ring by increasing TRBS_PER_SEGMENT to 256
  xhci: fix isoc endpoint dequeue from advancing too far on transaction error
  usb: chipidea: debug: avoid out of bound read
  USB: visor: Match I330 phone more precisely
  USB: pl2303: Remove support for Samsung I330
  USB: cp210x: add ID for KCF Technologies PRN device
  usb: gadget: remove incorrect __init/__exit annotations
  usb: phy: isp1301: work around tps65010 dependency
  usb: gadget: serial: fix re-ordering of tx data
  usb: gadget: hid: Fix static variable usage
  usb: gadget: configfs: Fix interfaces array NULL-termination
  usb: gadget: xilinx: fix devm_ioremap_resource() check
  usb: dwc3: dwc3-omap: correct the register macros
-