- 03 October 2012, 2 commits
-
-
By David Howells
Set up empty UAPI Kbuild files to be populated by the header splitter. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Dave Jones <davej@redhat.com>
-
By David Howells
Convert #include "..." to #include <path/...> in kernel system headers. Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Dave Jones <davej@redhat.com>
-
- 02 October 2012, 2 commits
-
-
By Rob Herring
With ixp2xxx removed, there are no platforms that define arch_is_coherent, so the last occurrences of arch_is_coherent can be removed. Any new platform with coherent I/O should use the coherent DMA mapping functions. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Rob Herring
arch_is_coherent is problematic as it is a global symbol. This doesn't work for multi-platform kernels or platforms which can support per-device coherent DMA. This adds arm_coherent_dma_ops to be used for devices which are connected coherently (i.e. to the ACP port on a Cortex-A9 or A15). The arm_dma_ops are modified at boot when arch_is_coherent is true. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
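As a rough illustration of how a platform might end up using the new ops (a hedged sketch, not the patch itself; set_dma_ops() is assumed to be available on ARM at this point, and the decision logic is invented for the example):

```c
#include <linux/device.h>
#include <linux/dma-mapping.h>

extern struct dma_map_ops arm_dma_ops;          /* default ops, with cache maintenance */
extern struct dma_map_ops arm_coherent_dma_ops; /* added for coherently connected devices */

/* Hypothetical helper: pick the coherent ops for a device wired to a
 * coherent port such as the Cortex-A9/A15 ACP. */
static void example_setup_device_dma(struct device *dev, bool coherent)
{
	set_dma_ops(dev, coherent ? &arm_coherent_dma_ops : &arm_dma_ops);
}
```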
-
- 22 September 2012, 1 commit
-
-
By Russell King
kcmp has appeared on x86, but has not been noticed because checksyscalls.sh is broken at the moment. Reserve ARM syscall 378 for this should we ever need it, and add an __IGNORE entry for this unimplemented syscall. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 September 2012, 10 commits
-
-
By Rob Herring
This lets us build a multiplatform kernel for experimental purposes. However, it will not be useful for any real work, because it relies on a number of useful things to be disabled for now: * SMP support must be turned off because of conflicting symbols. Marc Zyngier has proposed a solution by adding a new SOC operations structure to hold indirect function pointers for these, but that work is currently stalled * We turn on SPARSE_IRQ unconditionally, which is not supported on most platforms. Each of them is currently in a different state, but most are being worked on. * A common clock framework is in place since v3.4 but not yet being used. Work on this is on its way. * DEBUG_LL for early debugging is currently disabled. * THUMB2_KERNEL does not work with allyesconfig because the kernel gets too big [Rob Herring]: Rebased to not be dependent on the mass mach header rename. As a result, omap2plus, imx, mxs and ux500 are not converted. Highbank, picoxcell, mvebu, and socfpga are converted. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Andrew Lunn <andrew@lunn.ch> Acked-by: Jamie Iles <jamie@jamieiles.com> Cc: Dinh Nguyen <dinguyen@altera.com>
-
By Rob Herring
Move picoxcell debug-macro.S over to common debug macro directory. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Jamie Iles <jamie@jamieiles.com>
-
By Rob Herring
Move socfpga debug-macro.S over to common debug macro directory. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Dinh Nguyen <dinguyen@altera.com>
-
By Rob Herring
Move mvebu debug-macro.S over to common debug macro directory. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Andrew Lunn <andrew@lunn.ch>
-
By Rob Herring
Move vexpress debug-macro.S over to common debug macro directory. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Pawel Moll <pawel.moll@arm.com>
-
By Rob Herring
Move highbank debug-macro.S over to common debug macro directory. Also, remove v7 specific movw/movt instructions so this can compile under v6 mode. Signed-off-by: Rob Herring <rob.herring@calxeda.com>
-
By Rob Herring
Based on suggestion by Russell King, create a common location for debug macros and select the included debug macro file using config option. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk>
-
By Rob Herring
Most platforms don't need mach/gpio.h and it prevents multi-platform kernel images. Add CONFIG_NEED_MACH_GPIO_H and make platforms select it if they need gpio.h. These are platforms that define __GPIOLIB_COMPLEX or have lots of implicit includes pulled in by mach/gpio.h. at91 and omap have gpio clean-up pending and can drop CONFIG_NEED_MACH_GPIO_H once that is in. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk> Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Linus Walleij <linus.walleij@linaro.org>
-
By Marc Zyngier
Almost every SMP platform defines pen_release to manage booting secondary CPUs. This of course clashes with the single zImage effort. Add the pen_release definition to the ARM SMP code, and remove all others. This should only be used by platforms which lack any kind of CPU power management... Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
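A minimal sketch of the "holding pen" idiom this commit centralizes (kernel-context C; the name pen_release follows the commit message, but the barrier/SEV/WFE details of real platforms are omitted):

```c
#include <linux/smp.h>

extern volatile int pen_release;	/* now defined once in the ARM SMP core */

/* The secondary CPU spins here until the boot CPU writes its CPU number. */
static void example_secondary_pen_wait(unsigned int cpu)
{
	while (pen_release != cpu)
		cpu_relax();

	pen_release = -1;		/* tell the boot CPU we have left the pen */
	smp_wmb();
}
```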
-
By Marc Zyngier
Now that all SMP platforms have been converted to use struct smp_operations, remove the "weak" attribute from the hooks in smp.c, and make the functions static wherever possible. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 13 September 2012, 1 commit
-
-
By Marc Zyngier
This adds a 'struct smp_operations' to abstract the CPU initialization and hot plugging functions on SMP systems, which otherwise conflict in a multiplatform kernel. This also helps shmobile and potentially others that have more than one method to do these. To allow the kernel to continue building, the platform hooks are defined as weak symbols which are overridden by the platform code. Once all platforms are converted, the "weak" attribute will be removed and the functions made static. Unlike the original version from Marc, this new version from Arnd does not use a generalized abstraction for per-soc data structures but only tries to solve the problem for the SMP operations. This way, we can collapse the previous four data structures into a single struct, which is less systematic but also easier to follow for a casual reader. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Nicolas Pitre <nico@fluxnic.net> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
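The shape of the resulting structure, sketched from the hooks mentioned across this series (the field list should be close to the real struct smp_operations, but treat it as an approximation):

```c
struct task_struct;	/* forward declaration for the sketch */

struct smp_operations {
	void (*smp_init_cpus)(void);
	void (*smp_prepare_cpus)(unsigned int max_cpus);
	void (*smp_secondary_init)(unsigned int cpu);
	int  (*smp_boot_secondary)(unsigned int cpu, struct task_struct *idle);
#ifdef CONFIG_HOTPLUG_CPU
	int  (*cpu_kill)(unsigned int cpu);
	void (*cpu_die)(unsigned int cpu);
	int  (*cpu_disable)(unsigned int cpu);
#endif
};
```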
-
- 10 September 2012, 2 commits
-
-
By Will Deacon
The user access functions may generate a fault, resulting in invocation of a handler that may sleep. This patch annotates the accessors with might_fault() so that we print a warning if they are invoked from atomic context and help lockdep keep track of mmap_sem. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
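Roughly where the annotation lands (hedged sketch; __example_arch_get_user stands in for the real out-of-line accessor and is hypothetical):

```c
#include <linux/kernel.h>

/* might_fault() warns when called from atomic context and gives lockdep
 * visibility of the mmap_sem the fault handler may take. */
#define example_get_user(x, p)						\
({									\
	might_fault();							\
	__example_arch_get_user((x), (p)); /* hypothetical arch helper */ \
})
```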
-
By Russell King
The {get,put}_user macros don't perform range checking on the provided __user address when !CPU_HAS_DOMAINS. This patch reworks the out-of-line assembly accessors to check the user address against a specified limit, returning -EFAULT if it is out of range. [will: changed get_user register allocation to match put_user] [rmk: fixed building on older ARM architectures] Reported-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: stable@vger.kernel.org Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
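The real checks were added to the out-of-line assembly; a hedged C rendering of the same logic (access_ok() is the standard limit test of that era, the helper name is invented):

```c
#include <linux/errno.h>
#include <linux/uaccess.h>

static long example_get_user_checked(unsigned int *dst,
				     const unsigned int __user *src)
{
	/* Reject addresses beyond the current thread's limit instead of
	 * blindly dereferencing them. */
	if (!access_ok(VERIFY_READ, src, sizeof(*src)))
		return -EFAULT;

	return __get_user(*dst, src);
}
```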
-
- 08 September 2012, 1 commit
-
-
By Stephen Boyd
During the p2v changes, the PHYS_OFFSET #define moved into a !__ASSEMBLY__ section. This causes a XIP build to fail with arch/arm/kernel/head.o: In function 'stext': arch/arm/kernel/head.S:146: undefined reference to 'PHYS_OFFSET' Momentarily leave the #ifndef __ASSEMBLY__ section so we can define PHYS_OFFSET for all compilation units. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
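A simplified picture of the header layout after the fix (hedged; the real asm/memory.h has more configuration cases than shown here):

```c
#ifndef __ASSEMBLY__
/* ... C-only declarations ... */
#endif

/* Momentarily outside the !__ASSEMBLY__ guard so that assembly units such
 * as arch/arm/kernel/head.S can still resolve PHYS_OFFSET in a XIP build. */
#ifdef CONFIG_XIP_KERNEL
#define PHYS_OFFSET	PLAT_PHYS_OFFSET
#endif

#ifndef __ASSEMBLY__
/* ... remaining C-only definitions ... */
#endif
```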
-
- 29 August 2012, 1 commit
-
-
By Marek Szyprowski
Some platforms might need to increase the atomic coherent pool to make sure that their devices will be able to allocate all their buffers from atomic context. This function can also be used to decrease the atomic coherent pool size if coherent allocations are not used for the given sub-platform. Suggested-by: Josh Coombs <josh.coombs@gmail.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
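A hedged usage sketch (the function name follows the intent of the commit and is an assumption rather than a verified signature; the size is arbitrary):

```c
#include <linux/init.h>
#include <linux/sizes.h>

/* Platform code enlarges the atomic coherent pool early, e.g. from its
 * machine .reserve/.init_early hook, so GFP_ATOMIC coherent allocations
 * cannot exhaust the default pool. */
static void __init example_platform_init_early(void)
{
	init_dma_coherent_pool_size(SZ_1M);	/* assumed helper added by this patch */
}
```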
-
- 25 August 2012, 1 commit
-
-
By Will Deacon
LPAE does not use two pmd entries for a pte, so the additional tlb flushing is not required. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 August 2012, 5 commits
-
-
By Sudeep KarkadaNagesha
This patch moves the CPU-specific IRQ registration and parsing code into the CPU PMU backend. This is required because a PMU may have more than one interrupt, which in turn can be either PPI (per-cpu) or SPI (requiring strict affinity setting at the interrupt distributor). Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com> [will: cosmetic edits and reworked interrupt dispatching] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
The CPU PMU code is tightly coupled with generic ARM PMU handling code. This makes it cumbersome when trying to add support for other ARM PMUs (e.g. interconnect, L2 cache controller, bus) as the generic parts of the code are not readily reusable. This patch cleans up perf_event.c so that reusable code is exposed via header files to other potential PMU drivers. The CPU code is consistently named to identify it as such and also to prepare for moving it into a separate file. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Sudeep KarkadaNagesha
The arm_pmu_type enumeration was initially introduced to identify different PMU types in the system, the usual one being that on the CPU (ARM_PMU_DEVICE_CPU). With the removal of the PMU reservation code and the introduction of devicetree bindings for the CPU PMU, the enumeration is no longer required. This patch removes the enumeration and updates the various CPU PMU platform devices so that they no longer pass an .id field identifying the PMU type. Cc: Haojian Zhuang <haojian.zhuang@gmail.com> Cc: Olof Johansson <olof@lixom.net> Cc: Pawel Moll <pawel.moll@arm.com> Acked-by: Jon Hunter <jon-hunter@ti.com> Acked-by: Kukjin Kim <kgene.kim@samsung.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Acked-by: Jiandong Zheng <jdzheng@broadcom.com> Signed-off-by: Sudeep KarkadaNagesha <Sudeep.KarkadaNagesha@arm.com> [will: cosmetic edits and actual removal of the enum type] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon
The PMU reservation mechanism was originally intended to allow OProfile and perf-events to co-ordinate over access to the CPU PMU. Since then, OProfile for ARM has moved to using perf as its backend, so the reservation code is no longer used. This patch removes the reservation code for the CPU PMU on ARM. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Jon Hunter
Add runtime PM support to the ARM PMU driver so that devices such as OMAP supporting dynamic PM can use the platform->runtime_* hooks to initialise hardware at runtime. Without having these runtime PM hooks in place any configuration of the PMU hardware would be lost when low power states are entered and hence would prevent the PMU from working. This change also replaces the PMU platform functions enable_irq and disable_irq added by Ming Lei with runtime_resume and runtime_suspend functions. Ming had added the enable_irq and disable_irq functions as a method to configure the cross trigger interface on OMAP4 for routing the PMU interrupts. By adding runtime PM support, we can move the code called by enable_irq and disable_irq into the runtime PM callbacks runtime_resume and runtime_suspend. Cc: Ming Lei <ming.lei@canonical.com> Cc: Benoit Cousson <b-cousson@ti.com> Cc: Paul Walmsley <paul@pwsan.com> Cc: Kevin Hilman <khilman@ti.com> Signed-off-by: Jon Hunter <jon-hunter@ti.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
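A sketch of the pattern described (hedged; the real driver routes these through platform callbacks such as those in struct arm_pmu_platdata rather than open-coding them like this):

```c
#include <linux/device.h>
#include <linux/pm_runtime.h>

static int example_pmu_power_up(struct device *dev)
{
	/* Triggers the platform's runtime_resume hook, restoring any PMU
	 * state lost in low-power states (e.g. OMAP cross-trigger routing). */
	return pm_runtime_get_sync(dev);
}

static void example_pmu_power_down(struct device *dev)
{
	pm_runtime_put(dev);	/* allows runtime_suspend once the PMU is idle */
}
```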
-
- 16 August 2012, 1 commit
-
-
By Chao Xie
The extra features that may be used by SoCs are prefetch, burst8, and write buffer coalesce. Signed-off-by: Chao Xie <xiechao.mail@gmail.com> Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
-
- 11 August 2012, 3 commits
-
-
By Will Deacon
Page migration encodes the pfn in the offset field of a swp_entry_t. For LPAE, we support physical addresses of up to 36 bits (due to sparsemem limitations with the size of page flags), requiring 24 bits to represent a pfn. A further 3 bits are used to encode a swp_entry into a pte, leaving 5 bits for the type field. Furthermore, the core code defines MAX_SWAPFILES_SHIFT as 5, so the additional type bit does not get used. This patch reduces the width of the type field to 5 bits, allowing us to create up to 31 swapfiles of 64GB each. Cc: <stable@vger.kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
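The arithmetic behind those numbers, as a small self-contained check (user-space C, assuming 4 KiB pages as in the commit's reasoning):

```c
#include <stdio.h>

int main(void)
{
	const unsigned int type_bits   = 5;	/* reduced swap type field        */
	const unsigned int offset_bits = 24;	/* pfn for 36-bit phys, 4K pages  */

	unsigned long long swapfiles = (1ULL << type_bits) - 1;		/* 31     */
	unsigned long long bytes_per = (1ULL << offset_bits) * 4096ULL;	/* 64 GiB */

	printf("%llu swapfiles of %llu GiB each\n", swapfiles, bytes_per >> 30);
	return 0;
}
```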
-
By Will Deacon
Swap entries are encoded in ptes such that !pte_present(pte) and pte_file(pte). The remaining bits of the descriptor are used to identify the swapfile and offset within it to the swap entry. When writing such a pte for a user virtual address, set_pte_at unconditionally sets the nG bit, which (in the case of LPAE) will corrupt the swapfile offset and lead to a BUG: [ 140.494067] swap_free: Unused swap offset entry 000763b4 [ 140.509989] BUG: Bad page map in process rs:main Q:Reg pte:0ec76800 pmd:8f92e003 This patch fixes the problem by only setting the nG bit for user mappings that are actually present. Cc: <stable@vger.kernel.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
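A hedged sketch of the logic change (kernel-context C; the bit value and the low-level setter are stand-ins, not the real names in pgtable.h):

```c
#include <linux/mm.h>

#define EXAMPLE_PTE_NG		(1UL << 11)	/* stand-in for the hardware nG bit */

/* Hypothetical low-level setter, corresponding to the real set_pte_ext(). */
void example_hw_set_pte(pte_t *ptep, pte_t pte, unsigned long ext);

static void example_set_pte_at(struct mm_struct *mm, unsigned long addr,
			       pte_t *ptep, pte_t pteval)
{
	unsigned long ext = 0;

	/* Only present user mappings get the not-global attribute; swap
	 * entries (!present) keep their offset bits untouched. */
	if (addr < TASK_SIZE && pte_present(pteval))
		ext = EXAMPLE_PTE_NG;

	example_hw_set_pte(ptep, pteval, ext);
}
```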
-
By Colin Cross
Many clocks that are used to provide sched_clock will reset during suspend. If read_sched_clock returns 0 after suspend, sched_clock will appear to jump forward. This patch resets cd.epoch_cyc to the current value of read_sched_clock during resume, which causes sched_clock() just after suspend to return the same value as sched_clock() just before suspend. In addition, during the window where epoch_ns has been updated before suspend, but epoch_cyc has not been updated after suspend, it is unknown whether the clock has reset or not, and sched_clock() could return a bogus value. Add a suspended flag, and return the pre-suspend epoch_ns value during this period. The new behavior is triggered by calling setup_sched_clock_needs_suspend instead of setup_sched_clock. Signed-off-by: Colin Cross <ccross@android.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
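In outline, the resume-side behaviour described reads something like this (hedged sketch; the real code lives in arch/arm/kernel/sched_clock.c and also handles the cycle-to-ns conversion omitted here):

```c
#include <linux/types.h>

static struct {
	u64  epoch_ns;		/* ns value captured before suspend       */
	u32  epoch_cyc;		/* raw counter snapshot                   */
	bool suspended;		/* set before suspend, cleared on resume  */
} cd_example;

static u32 (*example_read_sched_clock)(void);

static u64 example_sched_clock(void)
{
	if (cd_example.suspended)
		return cd_example.epoch_ns;	/* avoid bogus values in the window */

	/* normal path: epoch_ns + cyc_to_ns(now - epoch_cyc, ...) */
	return cd_example.epoch_ns;
}

static void example_sched_clock_resume(void)
{
	/* The counter may have reset; re-anchor the epoch so time continues
	 * from the pre-suspend value rather than jumping forward. */
	cd_example.epoch_cyc = example_read_sched_clock();
	cd_example.suspended = false;
}
```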
-
- 09 August 2012, 1 commit
-
-
By Stefano Stabellini
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 14 September 2012, 2 commits
-
-
By Stefano Stabellini
Compile events.c on ARM. Parse, map and enable the IRQ to get event notifications from the device tree (node "/xen"). Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
By Stefano Stabellini
All the original Xen headers have xen_ulong_t as an unsigned long type; however, when they were imported into Linux, xen_ulong_t was replaced with unsigned long. That might work for x86 and ia64 but it does not for arm. Bring back xen_ulong_t and let each architecture define xen_ulong_t as it sees fit. Also explicitly size pointers (__DEFINE_GUEST_HANDLE) to 64 bit. Changes in v3: - remove the incorrect changes to multicall_entry; - remove the change to apic_physbase. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 09 August 2012, 2 commits
-
-
By Stefano Stabellini
sync_bitops functions are equivalent to the SMP implementation of the original functions, independently from CONFIG_SMP being defined. We need them because _set_bit etc are not SMP safe if !CONFIG_SMP. But under Xen you might be communicating with a completely external entity who might be on another CPU (e.g. two uniprocessor guests communicating via event channels and grant tables). So we need a variant of the bit ops which are SMP safe even on a UP kernel. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
By Stefano Stabellini
ARM Xen guests always use paging in hardware, like PV on HVM guests in the x86 world. Changes in v3: - improve comments. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 14 September 2012, 2 commits
-
-
By Stefano Stabellini
Use r12 to pass the hypercall number to the hypervisor. We need a register to pass the hypercall number because we might not know it at compile time and HVC only takes an immediate argument. Among the available registers r12 seems to be the best choice because it is defined as "intra-procedure call scratch register". Use the ISS to pass a hypervisor-specific tag. Changes in v2: - define a HYPERCALL macro for 5-argument hypercall wrappers, even if it is unused at the moment; - use ldm instead of pop; - fix up comments. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
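A hedged illustration of the convention described, written as GCC inline assembly rather than the real hand-written wrappers; the immediate tag value and clobber handling are assumptions:

```c
#define EXAMPLE_XEN_HVC_TAG	0xEA1	/* assumed Xen-specific ISS tag */

/* One-argument hypercall: number in r12 (ip), argument and result in r0. */
static inline long example_hypercall1(unsigned int nr, unsigned long a0)
{
	register unsigned long r0  asm("r0")  = a0;
	register unsigned int  r12 asm("r12") = nr;

	asm volatile("hvc %[tag]"
		     : "+r" (r0)
		     : [tag] "i" (EXAMPLE_XEN_HVC_TAG), "r" (r12)
		     : "memory");

	return r0;
}
```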
-
By Stefano Stabellini
- Basic hypervisor.h and interface.h definitions. - Skeleton enlighten.c, set xen_start_info to an empty struct. - Make xen_initial_domain dependent on the SIF_PRIVILIGED_BIT. The new code only compiles when CONFIG_XEN is set, which is going to be added to arch/arm/Kconfig in patch #11 "xen/arm: introduce CONFIG_XEN on ARM". Changes in v3: - improve comments. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 01 August 2012, 1 commit
-
-
By Bryan Wu
Cc: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Bryan Wu <bryan.wu@canonical.com>
-
- 31 July 2012, 2 commits
-
-
By Will Deacon
The vivt_flush_cache_{range,page} functions check that the mm_struct of the VMA being flushed has been active on the current CPU before performing the cache maintenance. The gate_vma has a NULL mm_struct pointer and, as such, will cause a kernel fault if we try to flush it with the above operations. This happens during ELF core dumps, which include the gate_vma as it may be useful for debugging purposes. This patch adds checks to the VIVT cache flushing functions so that VMAs with a NULL mm_struct are flushed unconditionally (the vectors page may be dirty if we use it to store the current TLS pointer). Cc: <stable@vger.kernel.org> # 3.4+ Reported-by: Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org> Tested-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
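The added guard amounts to something like the following (sketch based on the commit description; __cpuc_flush_user_range is the existing ARM flush primitive):

```c
#include <linux/mm.h>
#include <asm/cacheflush.h>

static void example_vivt_flush_cache_range(struct vm_area_struct *vma,
					   unsigned long start, unsigned long end)
{
	struct mm_struct *mm = vma->vm_mm;

	/* A NULL mm (e.g. the gate_vma during an ELF core dump) is flushed
	 * unconditionally instead of being dereferenced. */
	if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
		__cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
					vma->vm_flags);
}
```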
-
By Will Deacon
The open-coded mutex implementation for ARMv6+ cores suffers from a severe lack of barriers, so in the uncontended case we don't actually protect any accesses performed during the critical section. Furthermore, the code is largely a duplication of the ARMv6+ atomic_dec code but optimised to remove a branch instruction, as the mutex fastpath was previously inlined. Now that this is executed out-of-line, we can reuse the atomic access code for the locking (in fact, we use the xchg code as this produces shorter critical sections). This patch uses the generic xchg based implementation for mutexes on ARMv6+, which introduces barriers to the lock/unlock operations and also has the benefit of removing a fair amount of inline assembly code. Cc: <stable@vger.kernel.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Nicolas Pitre <nico@linaro.org> Reported-by: Shan Kang <kangshan0910@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
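For reference, the generic xchg-based fastpath that ARMv6+ switches to looks roughly like this (hedged sketch of include/asm-generic/mutex-xchg.h, not the exact file contents):

```c
#include <linux/atomic.h>
#include <linux/compiler.h>

static inline void example_mutex_fastpath_lock(atomic_t *count,
					       void (*fail_fn)(atomic_t *))
{
	/* atomic_xchg provides the barriers the open-coded ARM
	 * implementation was missing. */
	if (unlikely(atomic_xchg(count, 0) != 1))
		fail_fn(count);		/* slow path: lock is contended */
}
```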
-