- 23 July 2014, 2 commits
-
Submitted by Catalin Marinas

The early_ioremap_init() function already handles fixmap pte initialisation, so upgrade this to cover all of pud/pmd/pte and remove one page from swapper_pg_dir.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
-
Submitted by Mark Brown

A reference to ARCH_HAS_OPP was added in commit 333d17e5 (arm64: add ARCH_HAS_OPP to allow enabling OPP library); however, this symbol is no longer needed after commit 049d595a (PM / OPP: Make OPP invisible to users in Kconfig).

Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 July 2014, 1 commit
-
Submitted by Yi Li

SMBIOS is important for server hardware vendors. It implements a spec for providing descriptive information about the platform: things like serial numbers, physical layout of the ports, build configuration data, and the like. This has been tested with the dmidecode and lshw tools.

Signed-off-by: Yi Li <yi.li@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 19 July 2014, 1 commit
-
Submitted by Mark Rutland

Currently reading /proc/cpuinfo will result in information being read out of the MIDR_EL1 of the current CPU, and the information is not associated with any particular logical CPU number. This is problematic for systems with heterogeneous CPUs (i.e. big.LITTLE) where MIDR fields will vary across CPUs, and the output will differ depending on the executing CPU.

This patch reorganises the code responsible for /proc/cpuinfo to print information per-cpu. In the process, we perform several cleanups:

* Property names are coerced to lower-case (to match "processor" as per glibc's expectations).
* Property names are simplified and made to match the MIDR field names.
* Revision is changed to hex as with every other field.
* The meaningless Architecture property is removed.
* The ripe-for-abuse Machine field is removed.

The features field (a human-readable representation of the hwcaps) remains printed once, as this is expected to remain in use as the globally supported CPU features. To enable the possibility of the addition of per-cpu HW feature information later, this is printed before any CPU-specific information.

Comments are added to guide userspace developers in the right direction (using the hwcaps provided in auxval). Hopefully where userspace applications parse /proc/cpuinfo rather than using the readily available hwcaps, they limit themselves to reading said first line.

If CPU features differ from each other, the previously installed sanity checks will give us some advance notice with warnings and TAINT_CPU_OUT_OF_SPEC. If we are lucky, we will never see such systems. Rework will be required in many places to support such systems anyway.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marcus Shawcroft <marcus.shawcroft@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: remove machine_name as it is no longer reported]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
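As a rough illustration (not the actual kernel code), the per-CPU layout described above comes out looking something like the sketch below; the field names and MIDR values are illustrative only:

```c
#include <stdio.h>

struct cpu_id {
    unsigned int implementer, variant, part, revision;
};

/* Sketch of the per-CPU /proc/cpuinfo layout described above: the hwcaps
 * feature line is printed once, followed by one block per logical CPU,
 * built from that CPU's recorded MIDR_EL1 fields. */
static void print_cpuinfo(const struct cpu_id *cpus, int ncpus)
{
    printf("Features\t: fp asimd evtstrm\n\n");    /* printed once */

    for (int i = 0; i < ncpus; i++) {
        printf("processor\t: %d\n", i);
        printf("CPU implementer\t: 0x%02x\n", cpus[i].implementer);
        printf("CPU variant\t: 0x%x\n", cpus[i].variant);
        printf("CPU part\t: 0x%03x\n", cpus[i].part);
        printf("CPU revision\t: %x\n\n", cpus[i].revision);
    }
}

int main(void)
{
    /* Illustrative big.LITTLE pair: Cortex-A53 + Cortex-A57 part numbers. */
    struct cpu_id cpus[] = { { 0x41, 0x0, 0xd03, 0 }, { 0x41, 0x0, 0xd07, 0 } };

    print_cpuinfo(cpus, 2);
    return 0;
}
```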
-
- 18 July 2014, 8 commits
-
Submitted by Mark Rutland

Unexpected variation in certain system register values across CPUs is an indicator of potential problems with a system. The kernel expects CPUs to be mostly identical in terms of supported features, even in systems with heterogeneous CPUs, with uniform instruction set support being critical for the correct operation of userspace.

To help detect issues early where hardware violates the expectations of the kernel, this patch adds simple runtime sanity checks on important ID registers in the bring-up path of each CPU. Where CPUs are fundamentally mismatched, set TAINT_CPU_OUT_OF_SPEC. Given that the kernel assumes CPUs are identical feature-wise, let's not pretend that we expect such configurations to work. Supporting such configurations would require massive rework, and hopefully they will never exist.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
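The shape of such a check reduces to a per-register compare against the boot CPU's value; a minimal sketch (the helper names below are hypothetical, not the kernel's):

```c
/* Hypothetical stand-ins for the kernel's warning and tainting primitives. */
extern void warn_cpu_mismatch(int cpu, const char *reg,
                              unsigned long boot_val, unsigned long cpu_val);
extern void taint_cpu_out_of_spec(void);

/* Compare one ID register value sampled on a secondary CPU against the
 * value recorded on the boot CPU; warn and taint on any mismatch. */
static void check_id_reg(int cpu, const char *reg,
                         unsigned long boot_val, unsigned long cpu_val)
{
    if (boot_val != cpu_val) {
        warn_cpu_mismatch(cpu, reg, boot_val, cpu_val);
        taint_cpu_out_of_spec();
    }
}
```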
-
Submitted by Mark Rutland

In big.LITTLE systems, the I-cache policy may differ across CPUs, and thus we must always meet the most stringent maintenance requirements of any I-cache in the system when performing maintenance to ensure correctness. Unfortunately this requirement is not met, as we always look at the current CPU's cache type register to determine the maintenance requirements.

This patch causes the I-cache policy of all CPUs to be taken into account for icache_is_aliasing and icache_is_aivivt. If any I-cache in the system is aliasing or AIVIVT, the respective function will return true. At boot each CPU may set flags to identify that at least one I-cache in the system is aliasing and/or AIVIVT. The now unused and potentially misleading icache_policy function is removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
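A minimal sketch of the flag-based scheme described above (the names are illustrative, not taken from the kernel):

```c
/* System-wide I-cache properties, ORed together as each CPU comes up. */
#define SYS_ICACHE_ALIASING    (1UL << 0)
#define SYS_ICACHE_AIVIVT      (1UL << 1)

static unsigned long icache_flags;

/* Called on each CPU during bring-up with that CPU's cache-type info. */
static void record_icache_policy(int aliasing, int aivivt)
{
    if (aliasing)
        icache_flags |= SYS_ICACHE_ALIASING;
    if (aivivt)
        icache_flags |= SYS_ICACHE_AIVIVT;
}

/* True if *any* I-cache in the system has the given property. */
static inline int icache_is_aliasing(void)
{
    return icache_flags & SYS_ICACHE_ALIASING;
}

static inline int icache_is_aivivt(void)
{
    return icache_flags & SYS_ICACHE_AIVIVT;
}
```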
-
Submitted by Mark Rutland

Several kernel subsystems need to know details about CPU system register values, sometimes for CPUs other than that they are executing on. Rather than hard-coding system register accesses and cross-calls for these cases, this patch adds logic to record various system register values at boot time. This may be used for feature reporting, firmware bug detection, etc.

Separate hooks are added for the boot and hotplug paths to enable one-time initialisation and cold/warm boot value mismatch detection in later patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

The MIDR_EL1 register is composed of a number of bitfields, and uses of the fields have so far involved open-coding of the shifts and masks required. This patch adds shifts and masks for each of the MIDR_EL1 subfields, and also provides accessors built atop of these. Existing uses within cputype.h are updated to use these accessors.

The read_cpuid_part_number macro is modified to return the extracted bitfield rather than returning the value in-place with all other fields (including revision) masked out, to better match the other accessors. As the value is only used in comparison with the *_CPU_PART_* macros, which are similarly updated, and these values are never exposed to userspace, this change should not affect any functionality.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
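For illustration, the shift/mask-plus-accessor pattern being introduced looks roughly like this. The MIDR_EL1 field layout is architectural; the macro and function names are a sketch, not the actual cputype.h definitions:

```c
#include <stdint.h>

/* MIDR_EL1 layout: implementer [31:24], variant [23:20],
 * architecture [19:16], part number [15:4], revision [3:0]. */
#define MIDR_REVISION_MASK      0xfU
#define MIDR_PARTNUM_SHIFT      4
#define MIDR_PARTNUM_MASK       (0xfffU << MIDR_PARTNUM_SHIFT)
#define MIDR_IMPLEMENTER_SHIFT  24
#define MIDR_IMPLEMENTER_MASK   (0xffU << MIDR_IMPLEMENTER_SHIFT)

/* Accessors return the extracted field, not the in-place masked value. */
static inline unsigned int midr_part_number(uint32_t midr)
{
    return (midr & MIDR_PARTNUM_MASK) >> MIDR_PARTNUM_SHIFT;
}

static inline unsigned int midr_implementer(uint32_t midr)
{
    return (midr & MIDR_IMPLEMENTER_MASK) >> MIDR_IMPLEMENTER_SHIFT;
}

static inline unsigned int midr_revision(uint32_t midr)
{
    return midr & MIDR_REVISION_MASK;
}
```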
-
Submitted by Lorenzo Pieralisi

The suspend init function must be marked as __init, since it is not needed after the kernel has booted. This patch moves the cpu_suspend_init() function to the __init section.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Lorenzo Pieralisi

PSCI init functions must be marked as __init so that they are freed by the kernel after boot. This patch marks the PSCI init functions as such, since they need not be persistent in the kernel address space after the kernel has booted.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Lorenzo Pieralisi

PSCI CPU operations have to be enabled on UP kernels so that calls such as cpu_suspend can be made functional on UP too. This patch reworks the PSCI CPU operations so that they can be enabled on UP systems.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon

Writing to the FPCR is commonly implemented as a self-synchronising operation in the CPU, so avoid writing to the register when the saved value matches that in the hardware already.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
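The idea reduces to a compare-before-write, roughly as sketched below; read_fpcr()/write_fpcr() are hypothetical stand-ins for the MRS/MSR inline assembly the kernel actually uses:

```c
/* Hypothetical register accessors standing in for MRS/MSR inline asm. */
extern unsigned long read_fpcr(void);
extern void write_fpcr(unsigned long val);

/* Skip the (self-synchronising, hence relatively costly) FPCR write when
 * the value to be restored already matches the hardware state. */
static inline void restore_fpcr(unsigned long saved_fpcr)
{
    if (read_fpcr() != saved_fpcr)
        write_fpcr(saved_fpcr);
}
```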
-
- 17 July 2014, 6 commits
-
Submitted by Ian Campbell

Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kbuild@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon

Andy pointed out that binutils generates additional sections in the vdso image (e.g. the section string table) which, if our .text section gets big enough, could cross a page boundary and end up screwing up the location where the kernel expects to put the data page. This patch solves the issue in the same manner as x86_32, by moving the data page before the code pages.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon

_install_special_mapping replaces install_special_mapping and removes the need to detect special VMAs in arch_vma_name. This patch moves the vdso and compat vectors page code over to the new API.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon

The VDSO datapage doesn't need to be executable (no code there) or CoW-able (the kernel writes the page, so a private copy is totally useless). This patch moves the datapage into its own VMA, identified as "[vvar]" in /proc/<pid>/maps.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Catalin Marinas

Just keep the asm/page.h definition, as this is included in vmlinux.lds.S as well.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
-
Submitted by Jungseok Lee

This patch fixes the following checkpatch complaint by using pr_* instead of printk:

WARNING: printk() should include KERN_ facility level

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 July 2014, 10 commits
-
Submitted by Mark Rutland

The arm64 Image header contains a text_offset field which bootloaders are supposed to read to determine the offset (from a 2MB aligned "start of memory" per booting.txt) at which to load the kernel. The offset is not well respected by bootloaders at present, and due to the lack of variation there is little incentive to support it. This is unfortunate for the sake of future kernels where we may wish to vary the text offset (even zeroing it).

This patch adds options to arm64 to enable fuzz-testing of text_offset. CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random 16-byte aligned value in the range [0..2MB) upon a build of the kernel. It is recommended that distribution kernels enable randomization to test bootloaders such that any compliance issues can be fixed early.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
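The kernel performs this randomization at build time rather than at run time; as a rough stand-alone illustration of the constraint being enforced (16-byte alignment, range [0, 2MB)), a C equivalent might look like this:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Pick a random, 16-byte-aligned TEXT_OFFSET in [0, 2MB), mirroring what
 * the randomization option described above does during a kernel build. */
int main(void)
{
    unsigned long offset;

    srand((unsigned int)time(NULL));
    offset = ((unsigned long)rand() % (2UL << 20)) & ~0xfUL;
    printf("TEXT_OFFSET := %#lx\n", offset);
    return 0;
}
```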
-
Submitted by Mark Rutland

Currently the kernel Image is stripped of everything past the initial stack, and at runtime the memory is initialised and used by the kernel. This makes the effective minimum memory footprint of the kernel larger than the size of the loaded binary, though bootloaders have no mechanism to identify how large this minimum memory footprint is. This makes it difficult to choose safe locations to place both the kernel and other binaries required at boot (DTB, initrd, etc), such that the kernel won't clobber said binaries or other reserved memory during initialisation.

Additionally, when big endian support was added the image load offset was overlooked, and is currently of an arbitrary endianness, which makes it difficult for bootloaders to make use of it. It seems that bootloaders aren't respecting the image load offset at present anyway, and are assuming that offset 0x80000 will always be correct.

This patch adds an effective image size to the kernel header which describes the amount of memory from the start of the kernel Image binary which the kernel expects to use before detecting memory and handling any memory reservations. This can be used by bootloaders to choose suitable locations to load the kernel and/or other binaries such that the kernel will not clobber any memory unexpectedly. As before, memory reservations are required to prevent the kernel from clobbering these locations later.

Both the image load offset and the effective image size are forced to be little-endian regardless of the native endianness of the kernel to enable bootloaders to load a kernel of arbitrary endianness. Bootloaders which wish to make use of the load offset can inspect the effective image size field for a non-zero value to determine if the offset is of a known endianness. To enable software to determine the endianness of the kernel, as may be required for certain use-cases, a new flags field (also little-endian) is added to the kernel header to export this information.

The documentation is updated to clarify these details. To discourage future assumptions regarding the value of text_offset, the value at this point in time is removed from the main flow of the documentation (though kept as a compatibility note). Some minor formatting issues in the documentation are also corrected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Kevin Hilman <kevin.hilman@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
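For reference, the header layout this describes can be expressed as a C struct, and a bootloader can use the new fields roughly as shown below. This is a sketch of the layout as documented around this change; treat booting.txt as the authoritative description:

```c
#include <stdint.h>

/* arm64 Image header with the fields discussed above. text_offset,
 * image_size and flags are always little-endian regardless of the
 * kernel's own endianness, so a big-endian bootloader must byte-swap. */
struct arm64_image_header {
    uint32_t code0;          /* executable code */
    uint32_t code1;          /* executable code */
    uint64_t text_offset;    /* image load offset, little-endian */
    uint64_t image_size;     /* effective image size, LE; 0 on older kernels */
    uint64_t flags;          /* bit 0: 0 = LE kernel, 1 = BE kernel */
    uint64_t res2, res3, res4;
    uint32_t magic;          /* "ARM\x64" */
    uint32_t res5;
};

/* A bootloader may only trust text_offset's endianness when image_size is
 * non-zero, as described above; otherwise fall back to the legacy offset. */
static uint64_t choose_load_offset(const struct arm64_image_header *h)
{
    return h->image_size ? h->text_offset : 0x80000;
}
```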
-
Submitted by Mark Rutland

Currently we place swapper_pg_dir and idmap_pg_dir below the kernel image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However, bootloaders may use portions of this memory below the kernel, and we do not parse the memory reservation list until after the MMU has been enabled. As such we may clobber some memory a bootloader wishes to have preserved.

To enable the use of all of this memory by bootloaders (when the required memory reservations are communicated to the kernel) it is necessary to move our initial page tables elsewhere. As we currently have an effectively unbound requirement for memory at the end of the kernel image for .bss, we can place the page tables here.

This patch moves the initial page tables to the end of the kernel image, after the BSS. As they do not consist of any initialised data they will be stripped from the kernel Image as with the BSS. The BSS clearing routine is updated to stop at __bss_stop rather than _end so as to not clobber the page tables, and memory reservations made redundant by the new organisation are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Mark Rutland

Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't span any page boundary, which simplifies the idmap and spares us requiring an additional page table to map half of the function. In keeping with other important requirements in architecture code, this fact is undocumented.

Additionally, as the function consists of three instructions totalling 12 bytes with no literal pool data, a smaller alignment of 16 bytes would be sufficient.

This patch reduces the alignment to 16 bytes and documents the underlying reason for the alignment. This reduces the required alignment of the entire .head.text section from 64 bytes to 16 bytes, though it may still be aligned to a larger value depending on TEXT_OFFSET.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Catalin Marinas

This is for similarity with thread_saved_(pc|sp) and to avoid some compiler warnings in the audit code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by AKASHI Takahiro

On AArch64, audit is supported through the generic lib/audit.c and compat_audit.c, and so this patch adds the arch-specific definitions required.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by AKASHI Takahiro

This patch adds auditing functions on entry to and exit from every system call invocation.

Acked-by: Richard Guy Briggs <rgb@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Catalin Marinas

This patch adds __NR_* definitions to asm/unistd32.h, moves the __NR_compat_* definitions to asm/unistd.h and removes all the explicit unistd32.h includes apart from the one building the compat syscall table. The aim is to have the compat __NR_* definitions available but without colliding with the native syscall definitions (required by lib/compat_audit.c to avoid duplicating the audit header files between native and compat).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Larry Bassel

Make calls to ct_user_enter when the kernel is exited and ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths). These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series).

The calls to ct_user_exit are made after hw debugging has been enabled (enable_dbg_and_irq). The call to ct_user_enter is made at the beginning of the kernel_exit macro.

This patch is based on earlier work by Kevin Hilman. Save/restore optimizations were also done by Kevin.

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Signed-off-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Larry Bassel

To implement the context tracker properly on arm64, a function call needs to be made after debugging and interrupts are turned on, but before the lr is changed to point to ret_to_user(). If the function call is made after the lr is changed, the function will not return to the correct place.

For similar reasons, defer the setting of x0 so that it doesn't need to be saved around the function call (save far_el1 in x26 temporarily instead).

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 July 2014, 2 commits
-
Submitted by Laura Abbott

arm64 currently lacks support for -fstack-protector. Add functionality similar to arm's to detect stack corruption.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
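In rough terms, -fstack-protector support only requires that the kernel provide a canary value for the compiler-emitted prologue/epilogue checks and a handler for when a check fails; a minimal sketch (the initial value below is a placeholder, the kernel seeds the canary with a random value at boot, and on failure it panics rather than spinning):

```c
/* Canary the compiler-inserted checks compare against. */
unsigned long __stack_chk_guard = 0x595e9fbacbadf00dUL;    /* placeholder */

/* Called by compiler-emitted code when a function detects a smashed stack. */
void __stack_chk_fail(void)
{
    for (;;)
        ;    /* the kernel panics here; this sketch just loops forever */
}
```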
-
Submitted by Zi Shen Lim

Create the cpu topology based on MPIDR. When hardware sets MPIDR to sane values, this method will always work. Therefore it should also work well as the fallback method. [1]

When we have multiple processing elements in the system, we create the cpu topology by mapping each affinity level (from lowest to highest) to threads (if they exist), cores, and clusters.

[1] http://www.spinics.net/lists/arm-kernel/msg317445.html

Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Zi Shen Lim <zlim@broadcom.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
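A sketch of that mapping follows. The MPIDR affinity field positions are architectural, but the function itself is illustrative rather than the actual kernel code:

```c
#include <stdint.h>

/* MPIDR_EL1 affinity fields (architecturally defined positions). */
#define MPIDR_AFF0(m)   ((unsigned int)((m) >> 0)  & 0xff)
#define MPIDR_AFF1(m)   ((unsigned int)((m) >> 8)  & 0xff)
#define MPIDR_AFF2(m)   ((unsigned int)((m) >> 16) & 0xff)
#define MPIDR_MT(m)     (((m) >> 24) & 0x1)    /* multithreading bit */

struct cpu_topo {
    int thread_id, core_id, cluster_id;
};

/* Map affinity levels to thread/core/cluster as described above: with the
 * MT bit set, Aff0 numbers threads within a core; otherwise Aff0 numbers
 * cores within a cluster. */
static void mpidr_to_topology(uint64_t mpidr, struct cpu_topo *t)
{
    if (MPIDR_MT(mpidr)) {
        t->thread_id  = MPIDR_AFF0(mpidr);
        t->core_id    = MPIDR_AFF1(mpidr);
        t->cluster_id = MPIDR_AFF2(mpidr);
    } else {
        t->thread_id  = -1;
        t->core_id    = MPIDR_AFF0(mpidr);
        t->cluster_id = MPIDR_AFF1(mpidr);
    }
}
```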
-
- 05 July 2014, 1 commit
-
Submitted by Maxime Ripard

This partly reverts commits 55360050 (ARM: sunxi: Remove reset code from the platform) and 5e669ec5 (ARM: sunxi: Remove init_machine callback) for the sun4i, sun5i and sun7i families. This is needed because the watchdog counterpart of these commits was dropped and didn't make it into 3.16. In order to still be able to reboot the board, we need to reintroduce that code. Of course, the long term view is still to get rid of that code in mach-sunxi.

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 04 July 2014, 4 commits
-
Submitted by Marc Zyngier

The CurrentEL system register reports the Current Exception Level of the CPU. It doesn't say anything about the stack handling, and yet we compare it to PSR_MODE_EL2t and PSR_MODE_EL2h. It works by chance because PSR_MODE_EL2t happens to match the right bits, but that's otherwise a very bad idea. Just check for the EL value instead.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: fixed arch/arm64/kernel/efi-entry.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Steve Capper

The __sync_icache_dcache routine will only flush the dcache for the first page of a compound page, potentially leading to stale icache data residing further on in a hugetlb page. This patch addresses this issue by taking into consideration the order of the page when flushing the dcache.

Reported-by: Mark Brown <broonie@linaro.org>
Tested-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@vger.kernel.org> # v3.11+
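Conceptually, the fix scales the flush length by the compound page order, roughly as below; the helper names are hypothetical stand-ins for the kernel's page and cache-maintenance primitives:

```c
struct page;    /* opaque here */

/* Hypothetical stand-ins for kernel primitives. */
extern void *page_to_virt_addr(struct page *page);
extern unsigned int page_compound_order(struct page *page);  /* 0 for a normal page */
extern void dcache_clean_range(void *addr, unsigned long len);

#define PAGE_SIZE 4096UL

/* Flush the whole compound (e.g. hugetlb) page, not just its head page. */
static void sync_icache_dcache(struct page *page)
{
    unsigned long len = PAGE_SIZE << page_compound_order(page);

    dcache_clean_range(page_to_virt_addr(page), len);
}
```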
-
Submitted by Steve Capper

The define ARM64_64K_PAGES is tested for rather than CONFIG_ARM64_64K_PAGES. Correct that typo here.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Tejun Heo

The 'sysret' fastpath does not correctly restore even all regular registers, much less any segment registers or rflags values. That is very much part of why it's faster than 'iret'.

Normally that isn't a problem, because the normal ptrace() interface catches the process using the signal handler infrastructure, which always returns with an iret. However, some paths can get caught using ptrace_event() instead of the signal path, and for those we need to make sure that we aren't going to return to user space using 'sysret'. Otherwise the modifications that may have been done to the register set by the tracer wouldn't necessarily take effect.

Fix it by forcing the IRET path by setting TIF_NOTIFY_RESUME from arch_ptrace_stop_needed(), which is invoked from ptrace_stop().

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 01 July 2014, 3 commits
-
Submitted by Thomas Petazzoni

On Marvell Armada XP, when a CPU comes back from the deep idle state of cpuidle, it restarts its execution at armada_370_xp_cpu_resume(), which puts the CPU back into coherency, and then calls the generic cpu_resume() function.

While this works on little-endian configurations, it doesn't work on big-endian configurations because the CPU restarts in little-endian, and therefore must be switched back to big-endian to operate properly. To achieve this, a 'setend be' instruction must be executed in big-endian configurations. However, the ARM_BE8() macro that is used to implement a nice compile-time conditional for ARM LE vs. ARM BE8 is not easily usable in inline assembly.

Therefore, this patch moves the armada_370_xp_cpu_resume() C function, which was anyway just a block of inline assembly, into a proper pmsu_ll.S file, and adds the appropriate ARM_BE8(setend be) instruction. Without this patch, an Armada XP big endian configuration with cpuidle enabled fails to boot, as it hangs as soon as one of the CPUs hits the deep idle state.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Link: https://lkml.kernel.org/r/1404130165-3593-1-git-send-email-thomas.petazzoni@free-electrons.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
Submitted by Thomas Petazzoni

Commit 497a9230 ("ARM: mvebu: implement L2/PCIe deadlock workaround") introduced some logic in coherency.c to adjust the PL310 cache controller Device Tree node of the Armada 375 and Armada 38x platforms to include the 'arm,io-coherent' property if the system is running with hardware I/O coherency enabled.

However, with the L2CC driver cleanup done by Russell King, the initialization of the L2CC driver has been moved earlier, and is now part of the init_IRQ() ARM function in arch/arm/kernel/irq.c. Therefore, calling coherency_init() in ->init_time() is now too late, as the Device Tree property gets added too late (after the L2CC driver has been initialized).

In order to fix this, this commit removes the ->init_time() callback use in board-v7.c and replaces it with an ->init_irq() callback. We therefore no longer use the default ->init_irq() callback, but we now use the default ->init_time() callback. In this newly introduced ->init_irq() callback, we call irqchip_init(), which is the default behavior when ->init_irq() isn't defined, and then do the initialization related to the coherency: SCU, coherency fabric, and mvebu-mbus (which is needed to start secondary CPUs).

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Link: https://lkml.kernel.org/r/1402585772-10405-4-git-send-email-thomas.petazzoni@free-electrons.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
Submitted by Thomas Petazzoni

In preparation for a small re-organization of the initialization sequence in board-v7.c, this commit moves the registration of the custom external abort handler on Armada 375 later in the boot sequence, and makes it more similar to the other quirks that we already have. There is indeed no need to register this abort handler particularly early; it simply needs to be registered before switching to userspace.

In addition to this, this commit makes the registration of the custom abort handler conditional on Armada 375 Z1, because Armada 375 A0 and later iterations are not affected by the issue. This commit was tested on both Armada 375 Z1 and Armada 375 A0 platforms.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Link: https://lkml.kernel.org/r/1402585772-10405-3-git-send-email-thomas.petazzoni@free-electrons.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
- 30 June 2014, 1 commit
-
Submitted by Jan Kiszka

We import the CPL via SS.DPL since commit ae9fedc7, but so far we fail to export it the same way. This caused spurious guest crashes, e.g. of Linux when accessing the vmport from guest user space, which triggered register saving/restoring to/from host user space.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 29 June 2014, 1 commit
-
Submitted by Will Deacon

On the syscall tracing path, we call out to secure_computing() to allow seccomp to check the syscall number being attempted. As part of this, a SIGTRAP may be sent to the tracer and the syscall could be re-written by a subsequent SET_SYSCALL ptrace request. Unfortunately, this new syscall is ignored by the current code unless TIF_SYSCALL_TRACE is also set on the current thread.

This patch slightly reworks the enter path of the syscall tracing code so that we always reload the syscall number from current_thread_info()->syscall after the potential ptrace traps.

Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-