- 09 Sep 2015, 3 commits
By Julien Grall

Based on include/xen/mm.h [1], Linux is mistakenly using MFN when GFN is meant; I suspect this is because the first Xen support was for PV. This resulted in some misimplemented helpers on ARM and confused developers about the expected behaviour. For instance, from its name one expects pfn_to_mfn to return an MFN, yet the x86 implementation returns a GFN. For clarity, and to avoid new confusion, replace any reference to mfn with gfn in the helpers used by PV drivers. The x86 code will still keep some references to pfn_to_mfn, which may be used by all kinds of guests. No changes have been made to the hypercall fields, even though they may be invalid, in order to stay in line with the definitions in the Xen repository. Note that page_to_mfn has been renamed to xen_page_to_gfn to avoid a name too close to the KVM function gfn_to_page. Also take the opportunity to simplify constructions such as pfn_to_mfn(page_to_pfn(page)) into xen_page_to_gfn(page). More complex clean-ups will come in follow-up patches.

[1] http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=e758ed14f390342513405dd766e874934573e6cb

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
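A minimal sketch of the renamed helper, assuming the composition described above (the per-architecture bodies differ):

	/* Compose the existing conversions so callers can write
	 * xen_page_to_gfn(page) instead of pfn_to_mfn(page_to_pfn(page)). */
	static inline unsigned long xen_page_to_gfn(struct page *page)
	{
		return pfn_to_gfn(page_to_pfn(page));
	}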
-
By Julien Grall

After the commit introducing conversion between DMA and guest addresses, all the callers of pfn_to_mfn expect to get a GFN (Guest Frame Number). On ARM, all guests are auto-translated, so the GFN is equal to the Linux PFN (Pseudo-physical Frame Number). The current implementation may return an MFN if the caller passes a PFN associated with a mapped foreign grant. In practice I haven't seen the problem on a running guest, but we should fix it for the sake of correctness. Correct the implementation by always returning the pfn passed in as a parameter. A follow-up patch will take care of renaming pfn_to_mfn to a more suitable name.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
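The corrected ARM helper then collapses to the identity; a sketch:

	/* With auto-translation the GFN callers actually want is exactly
	 * the Linux PFN, so return the argument unchanged. */
	static inline unsigned long pfn_to_mfn(unsigned long pfn)
	{
		return pfn;
	}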
-
By Julien Grall

The swiotlb is required when programming a DMA address on ARM when a device is not protected by an IOMMU. In this case, the DMA address should always be equal to the machine address. For DOM0 memory, Xen ensures this by keeping an identity mapping between the guest address and the host address. However, when mapping a foreign grant reference, the 1:1 model doesn't work. For an ARM guest, most of the callers of pfn_to_mfn expect to get a GFN (Guest Frame Number), i.e. a PFN (Page Frame Number) from the Linux point of view, given that all ARM guests are auto-translated. Even though the name pfn_to_mfn is misleading, we need to ensure that those callers get a GFN and not, by mistake, an MFN. In practice I haven't seen errors related to this, but we should fix it for the sake of correctness. In order to fix the implementation of pfn_to_mfn on ARM in a follow-up patch, we have to introduce new helpers to return the DMA address for a PFN and the inverse. On x86, the new helpers will be aliases of pfn_to_mfn and mfn_to_pfn. The helpers will be used in swiotlb and xen_biovec_phys_mergeable. This is necessary in the latter because we have to ensure that the biovec code will not try to merge a biovec using a foreign page with another using Linux memory. Lastly, the helper mfn_to_local_pfn has been renamed to bfn_to_local_pfn given that its only usage was in swiotlb.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
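On x86 the aliases are one-liners; a sketch, with the helper names inferred from the bfn_to_local_pfn rename mentioned above:

	/* x86: frame numbers on the bus are machine frames, so the new
	 * helpers are plain aliases of the existing conversions. */
	#define pfn_to_bfn(pfn)		pfn_to_mfn(pfn)
	#define bfn_to_pfn(bfn)		mfn_to_pfn(bfn)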
-
- 28 Aug 2015, 1 commit
By Christoph Hellwig

Three architectures already define these, and we'll need them generically soon.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 27 Aug 2015, 2 commits
By Russell King

Provide a software-based implementation of the privileged-no-access support found in ARMv8.1. Userspace pages are mapped using a different domain number from the kernel and IO mappings. If we switch the user domain to "no access" when we enter the kernel, we can prevent the kernel from touching userspace. However, the kernel needs to be able to access userspace via the various user accessor functions. With the wrapping in the previous patch, we can temporarily enable access when the kernel needs user access, and re-disable it afterwards. This allows us to trap non-intended accesses to userspace, e.g. those caused by an inadvertent dereference of the LIST_POISON* values, which, with appropriate user mappings set up, can be made to succeed. This in turn can allow use-after-free bugs to be exploited further than would otherwise be possible.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
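A hedged sketch of the mechanism, using the domain constants and DACR accessor as found in asm/domain.h (illustrative, not the patch itself):

	/* Each of the 16 ARM domains owns a 2-bit field in the Domain Access
	 * Control Register; writing "no access" (0b00) into the user domain's
	 * field makes every userspace access fault for the kernel. */
	#define DOMAIN_NOACCESS		0
	#define DOMAIN_CLIENT		1
	#define domain_val(dom, type)	((type) << (2 * (dom)))

	static inline void set_domain(unsigned int val)
	{
		asm volatile("mcr p15, 0, %0, c3, c0, 0	@ set DACR"
			     : : "r" (val) : "memory");
		isb();
	}
-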
By Russell King

Provide hooks into the kernel entry and exit paths to permit control of userspace visibility to the kernel. The intended use is:
- on entry to kernel from user, uaccess_disable will be called to disable userspace visibility
- on exit from kernel to user, uaccess_enable will be called to enable userspace visibility
- on entry from a kernel exception, uaccess_save_and_disable will be called to save the current userspace visibility setting and disable access
- on exit from a kernel exception, uaccess_restore will be called to restore the userspace visibility as it was before the exception occurred.
These hooks allow us to keep userspace visibility disabled for the vast majority of the kernel, except for localised regions where we want to explicitly access userspace; the exception-path pairing is sketched below.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
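A conceptual C rendering of that pairing (the real hooks are assembly macros in the entry code; kernel_exception and handle_exception are illustrative names):

	void kernel_exception(void)
	{
		/* save the current setting and block user access */
		unsigned long ua = uaccess_save_and_disable();

		handle_exception();	/* kernel runs with userspace hidden */
		uaccess_restore(ua);	/* put back the pre-exception setting */
	}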
-
- 26 Aug 2015, 1 commit
By Stephen Boyd

The only caller of cpu_die() on ARM is arch_cpu_idle_dead(), so let's simplify the code by renaming cpu_die() to arch_cpu_idle_dead(). While we're here, drop the __ref annotation because __cpuinit is gone nowadays.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 25 Aug 2015, 4 commits
By Russell King

Provide uaccess_save_and_enable() and uaccess_restore() to permit control of userspace visibility to the kernel, and hook these into the appropriate places in the kernel where we need to access userspace.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
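The wrapping pattern around a user accessor looks roughly like this (read_user_word is an illustrative name; __get_user is the existing accessor):

	static long read_user_word(unsigned long __user *uptr, unsigned long *val)
	{
		unsigned int __ua_flags = uaccess_save_and_enable();
		long ret = __get_user(*val, uptr);	/* user access is legal here */

		uaccess_restore(__ua_flags);
		return ret;
	}
-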
By Russell King

Make the "fast" syscall return path fast again. The addition of IRQ tracing and context tracking has made this path grossly inefficient. Even with these options enabled we can do much better if we save the syscall return code on the stack: we then don't need to save a bunch of registers around every single callout to C code.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

There's no need for this macro; it can use a default for the condition argument.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

The user assembly for byte and word accesses was virtually identical. Rather than duplicating this, use a macro instead.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 21 Aug 2015, 8 commits
By Russell King

DOMAIN_TABLE is not used; in any case, it aliases to the kernel domain. Remove this definition.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Keep the machine vectors in their own domain to prevent software-based user access control from making the vector code inaccessible, and thereby deadlocking the machine.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Since we switched to early trap initialisation in 94e5a85b ("ARM: earlier initialization of vectors page") we haven't been writing directly to the vectors page, and so there's no need for this domain to be in manager mode. Switch it to client mode.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Provide a macro to generate the mask for a domain, rather than using domain_val(, DOMAIN_MANAGER), which won't work when CPU_USE_DOMAINS is turned off.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
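Since each domain occupies two bits, the mask is a 2-bit all-ones value shifted into the domain's field; a sketch of the helper:

	#define domain_mask(dom)	((3) << (2 * (dom)))
-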
By Russell King

Rather than modifying both the domain access control register and our per-thread copy, modify only the domain access control register, and use the per-thread copy to save and restore the register over context switches. We can also avoid the explicit initialisation of the init thread_info structure. This allows us to avoid needing to gain access to the thread information at the uaccess control sites.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Lorenzo Pieralisi

ARM now uses pci_bus->msi to store the msi_controller pointer, so we don't need to save it in struct pci_sys_data, and we don't need to implement pcibios_msi_controller() to get it out of pci_sys_data. Remove msi_controller from struct pci_sys_data and pcibios_msi_controller().

[bhelgaas: changelog, split into separate patch]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Jingoo Han <jingoohan1@gmail.com>
-
By Lorenzo Pieralisi

ARM previously stored the msi_controller pointer in its sysdata, struct pci_sys_data, and implemented pcibios_msi_controller() to retrieve it. That made PCI host controller drivers specific to ARM because they had to put the msi_controller pointer in the ARM-specific pci_sys_data. There is now a generic mechanism, pci_scan_root_bus_msi(), for giving the msi_controller pointer to the PCI core. Use this for all ARM systems and for the DesignWare and Xilinx PCI host controller drivers. This removes an ARM dependency from the DesignWare, DRA7xx, EXYNOS, i.MX6, Keystone, Layerscape, SPEAr13xx, and Xilinx drivers.

[bhelgaas: changelog, split into separate patch]
Suggested-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Jingoo Han <jingoohan1@gmail.com>
CC: Pratyush Anand <pratyush.anand@gmail.com>
CC: Arnd Bergmann <arnd@arndb.de>
CC: Simon Horman <horms@verge.net.au>
CC: Russell King <linux@arm.linux.org.uk>
CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
CC: Thierry Reding <thierry.reding@gmail.com>
CC: Michal Simek <michal.simek@xilinx.com>
CC: Marc Zyngier <marc.zyngier@arm.com>
-
- 20 Aug 2015, 2 commits
By Julien Grall

ARM guests are always HVM. The current implementation assumes a 1:1 mapping, which is only true for DOM0 and may not be true at all in the future. Furthermore, all the helpers but arbitrary_virt_to_machine are used in x86-specific code (or are only compiled for x86). The helper arbitrary_virt_to_machine is only used in PV-specific code, therefore we should never call the function. Add a BUG() in this helper and drop all the others.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
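A hedged sketch of the resulting ARM stub (the return after BUG() only quiets the compiler):

	/* PV-only code must never run on ARM, so reaching this is a bug. */
	static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
	{
		BUG();
		return XMADDR(0);	/* not reached */
	}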
-
By Julien Grall

Currently, the event channel rebind code is gated on the presence of the vector callback. The virtual interrupt controller on ARM has the concept of a per-CPU interrupt (PPI), which allows us to support per-VCPU event channels, so there is no need for a vector callback on ARM. Xen is already using a free PPI to notify the guest VCPU of an event. Furthermore, the Xen initialization code in Linux (see arch/arm/xen/enlighten.c) correctly requests a per-CPU IRQ. Introduce a new helper, xen_support_evtchn_rebind, to let the architecture decide whether rebinding an event is supported or not. It will always return true on ARM and keep the same behavior on x86. This also allows us to drop the usage of xen_have_vector_callback entirely in the ARM code.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
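A sketch following that description (the #ifdef split and the x86 condition are illustrative; the real helper may live in per-architecture headers):

	static inline bool xen_support_evtchn_rebind(void)
	{
	#ifdef CONFIG_ARM
		return true;			/* events arrive on a per-CPU PPI */
	#else
		return !xen_hvm_domain() || xen_have_vector_callback;
	#endif
	}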
-
- 18 Aug 2015, 3 commits
By Marek Szyprowski

All architectures except arm that define DMA_ERROR_CODE cast it to (dma_addr_t); as it is always compared against dma_addr_t on arm as well, this can be harmonized.

Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
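The harmonized definition is a one-liner; a sketch of the arm form after the change:

	#define DMA_ERROR_CODE	(~(dma_addr_t)0x0)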
-
By Masahiro Yamada

Use BIT_MASK() and BIT_WORD() rather than hard-coding the size of the "long" type.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
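For reference, these helpers from include/linux/bitops.h derive the word index and in-word mask from BITS_PER_LONG:

	#define BIT_MASK(nr)	(1UL << ((nr) % BITS_PER_LONG))
	#define BIT_WORD(nr)	((nr) / BITS_PER_LONG)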
-
By Stefan Agner

Add early fixmap support, initially to provide a permanent, fixed mapping for the early console. A temporary early pte is created, which is migrated to a permanent mapping in paging_init. This is also needed since the attributes may change as the memory types are initialized. The 3MiB fixmap range spans two pte tables, but currently only one pte is created for early fixmap support. Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c since the index for kmap does not start at zero anymore. This reverts 4221e2e6 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and FIX_KMAP_END") to some extent.

Cc: Mark Salter <msalter@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Aug 2015, 2 commits
By Will Deacon

By defining our SMP atomics in terms of relaxed operations, we gain a small reduction in code size and have acquire/release/fence variants generated automatically by the core code.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman.Long@hp.com
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1438880084-18856-9-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
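A sketch of how the core builds a stronger variant from an architecture's _relaxed op (following the generic pattern this series relies on):

	#define __atomic_op_acquire(op, args...)				\
	({									\
		typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
		smp_mb__after_atomic();						\
		__ret;								\
	})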
-
By Russell King

Rather than directly accessing architecture-internal functions, provide a "method"-centric wrapper for qcom_scm-32 to do what's necessary to ensure that the secure monitor can see the data. This is called "secure_flush_area" and ensures that the specified memory area is coherent across the secure boundary.

Acked-by: Andy Gross <agross@codeaurora.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
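A hedged usage sketch (scm_issue_command and the buffer names are illustrative):

	static void scm_issue_command(void *cmd, size_t len)
	{
		/* make the buffer coherent across the secure boundary */
		secure_flush_area(cmd, len);
		/* ... then trap into the secure monitor ... */
	}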
-
- 04 Aug 2015, 1 commit
By Will Deacon

Fold finish_arch_switch() into switch_to().

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Cc: linux@arm.linux.org.uk
[ Fixed up the SOB chain. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 03 Aug 2015, 3 commits
By Mark Rutland

Now that the common PSCI client code has been factored out to drivers/firmware, and made safe for 32-bit use, move the 32-bit ARM code over to it. This results in a moderate reduction of duplicated lines, and will prevent further duplication as the PSCI client code is updated for PSCI 1.0 and beyond. The two legacy platform users of the PSCI invocation code are updated to account for interface changes. In both cases the power state parameter (which is constant) is now generated using macros, so that the pack/unpack logic can be killed in preparation for PSCI 1.0 power state changes.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rob Herring <robh@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ashwin Chaugule <ashwin.chaugule@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Peter Zijlstra

There are various problems and shortcomings with the current static_key interface:
- static_key_{true,false}() read like a branch depending on the key value, instead of the actual likely/unlikely branch depending on init value.
- static_key_{true,false}() are, as stated above, tied to the static_key init values STATIC_KEY_INIT_{TRUE,FALSE}.
- we're limited to the 2 (out of 4) possible options that compile to a default NOP because that's what our arch_static_branch() assembly emits.
So provide a new static_key interface:
  DEFINE_STATIC_KEY_TRUE(name);
  DEFINE_STATIC_KEY_FALSE(name);
These define a key of different types with an initial true/false value. Then allow:
  static_branch_likely()
  static_branch_unlikely()
to take a key of either type and emit the right instruction for the case; a usage example follows this entry. This means adding a second arch_static_branch_jump() assembly helper which emits a JMP per default. In order to determine the right instruction for the right state, encode the branch type in the LSB of jump_entry::key. This is the final step in removing the naming confusion that has led to a stream of avoidable bugs such as:
  a833581e ("x86, perf: Fix static_key bug in load_mm_cr4()")
...but it also allows new static key combinations that will give us performance enhancements in the subsequent patches.

Tested-by: Rabin Vincent <rabin@rab.in> # arm
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> # ppc
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # s390
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
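Example of the new interface (use_fast_path, fast_path and slow_path are illustrative names; static_branch_enable is the companion runtime toggle from the same series):

	DEFINE_STATIC_KEY_FALSE(use_fast_path);

	void do_work(void)
	{
		if (static_branch_unlikely(&use_fast_path))
			fast_path();	/* patched live once the key is enabled */
		else
			slow_path();
	}

	/* elsewhere, e.g. in setup code: */
	/* static_branch_enable(&use_fast_path); */
-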
By Andrey Konovalov

Replace the ACCESS_ONCE() macro in smp_store_release() and smp_load_acquire() with WRITE_ONCE() and READ_ONCE() on x86, arm, arm64, ia64, metag, mips, powerpc, s390, sparc and asm-generic, since ACCESS_ONCE() does not work reliably on non-scalar types. WRITE_ONCE() and READ_ONCE() were introduced in the following commits:
  230fa253 ("kernel: Provide READ_ONCE and ASSIGN_ONCE")
  43239cbe ("kernel: Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)")

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Alexander Duyck <alexander.h.duyck@redhat.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arch@vger.kernel.org
Link: http://lkml.kernel.org/r/1438528264-714-1-git-send-email-andreyknvl@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
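A trimmed sketch of the asm-generic forms after this change (architectures substitute lighter barriers where available):

	#define smp_store_release(p, v)			\
	do {						\
		smp_mb();				\
		WRITE_ONCE(*p, v);			\
	} while (0)

	#define smp_load_acquire(p)			\
	({						\
		typeof(*p) ___p1 = READ_ONCE(*p);	\
		smp_mb();				\
		___p1;					\
	})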
-
- 02 Aug 2015, 1 commit
By Russell King

The dmac_* functions are private to the ARM DMA API implementation, and should not be used by drivers. In order to discourage their use, remove their prototypes and macros from asm/*.h. We have to leave dmac_flush_range() behind as the Exynos and MSM IOMMU code uses it; once these sites are fixed, this can be moved as well.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 Aug 2015, 2 commits
By Will Deacon

Fold finish_arch_switch() into switch_to(), in preparation for the removal of the finish_arch_switch call from core sched code.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Stephen Boyd

Writes to /sys/.../cpuX/online fail if we determine that the platform doesn't support hotplug for that CPU. Furthermore, if the cpu_die op isn't specified the system hangs when we try to offline a CPU, and it comes right back online unexpectedly. Let's figure this stuff out before we make the sysfs nodes, so that the online file doesn't even exist if it isn't (at least sometimes) possible to hotplug the CPU. Add a new 'cpu_can_disable' op and repoint all 'cpu_disable' implementations at it, because all implementers use the op to indicate in a static fashion whether a CPU can be hotplugged. With PSCI we may need to add a 'cpu_disable' op so that the secure OS can be migrated off the CPU we're trying to hotplug. In this case, the 'cpu_can_disable' op will indicate that all CPUs are hotpluggable by returning true, but the 'cpu_disable' op will make a PSCI migration call and occasionally fail, denying the hotplug of a CPU. This shouldn't be any worse than x86, where we may indicate that all CPUs are hotpluggable but occasionally can't offline a CPU due to check_irq_vectors_for_cpu_disable() failing to find a CPU to move vectors to.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Dave Martin <Dave.Martin@arm.com>
Acked-by: Simon Horman <horms@verge.net.au> [shmobile portion]
Tested-by: Simon Horman <horms@verge.net.au>
Cc: Magnus Damm <magnus.damm@gmail.com>
Cc: <linux-sh@vger.kernel.org>
Tested-by: Tyler Baker <tyler.baker@linaro.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
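A sketch of the new hook on a hypothetical platform (the my_soc_* names are illustrative):

	static bool my_soc_cpu_can_disable(unsigned int cpu)
	{
		return cpu != 0;	/* only secondary CPUs are hotpluggable */
	}

	static struct smp_operations my_soc_smp_ops __initdata = {
		.cpu_can_disable	= my_soc_cpu_can_disable,
		/* .smp_boot_secondary, .cpu_die, ... */
	};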
-
- 31 Jul 2015, 1 commit
By Mark Rutland

To enable sharing of the arm_pmu code with arm64, this patch factors it out to drivers/perf/. A new drivers/perf directory is added for performance monitor drivers to live under. MAINTAINERS is updated accordingly. Files added previously without a corresponding MAINTAINERS update (perf_regs.c, perf_callchain.c, and perf_event.h) are also added.

Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[will: augmented Kconfig help slightly]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 27 Jul 2015, 2 commits
By Peter Zijlstra

Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Peter Zijlstra

Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
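A usage sketch of the replacement (the flag names are illustrative):

	static atomic_t pending_flags = ATOMIC_INIT(0);

	static void mark_pending(int bit)
	{
		atomic_or(BIT(bit), &pending_flags);	/* was atomic_set_mask() */
	}

	static void clear_pending(int bit)
	{
		atomic_and(~BIT(bit), &pending_flags);	/* was atomic_clear_mask() */
	}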
-
- 25 Jul 2015, 2 commits
By Russell King

Add an extension to the heavy barrier code to allow a SoC-specific memory barrier function to be provided. This is needed for platforms where the interconnect has weak ordering, and thus needs assistance to ensure that memory writes are properly visible in the correct order to other parts of the system.

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

The existing memory barrier macro causes a significant amount of code to be inserted inline at every call site. For example, in gpio_set_irq_type(), we have this for mb():

	c0344c08:	f57ff04e	dsb	st
	c0344c0c:	e59f8190	ldr	r8, [pc, #400]	; c0344da4 <gpio_set_irq_type+0x230>
	c0344c10:	e3590004	cmp	r9, #4
	c0344c14:	e5983014	ldr	r3, [r8, #20]
	c0344c18:	0a000054	beq	c0344d70 <gpio_set_irq_type+0x1fc>
	c0344c1c:	e3530000	cmp	r3, #0
	c0344c20:	0a000004	beq	c0344c38 <gpio_set_irq_type+0xc4>
	c0344c24:	e50b2030	str	r2, [fp, #-48]	; 0xffffffd0
	c0344c28:	e50bc034	str	ip, [fp, #-52]	; 0xffffffcc
	c0344c2c:	e12fff33	blx	r3
	c0344c30:	e51bc034	ldr	ip, [fp, #-52]	; 0xffffffcc
	c0344c34:	e51b2030	ldr	r2, [fp, #-48]	; 0xffffffd0
	c0344c38:	e5963004	ldr	r3, [r6, #4]

Moving the outer_cache_sync() call out of line reduces the impact of the barrier:

	c0344968:	f57ff04e	dsb	st
	c034496c:	e35a0004	cmp	sl, #4
	c0344970:	e50b2030	str	r2, [fp, #-48]	; 0xffffffd0
	c0344974:	0a000044	beq	c0344a8c <gpio_set_irq_type+0x1b8>
	c0344978:	ebf363dd	bl	c001d8f4 <arm_heavy_mb>
	c034497c:	e5953004	ldr	r3, [r5, #4]

This should reduce the cache footprint of this code. Overall, this results in a reduction of around 20K in the kernel size:

	    text	   data	     bss	     dec	    hex	filename
	10773970	 667392	10369656	21811018	14ccf4a	../build/imx6/vmlinux-old
	10754219	 667392	10369656	21791267	14c8223	../build/imx6/vmlinux-new

Another advantage of this approach is that we can finally resolve the issue of SoCs which have their own memory barrier requirements within multiplatform kernels (such as OMAP). Here, the bus interconnects need additional handling to ensure that writes become visible in the correct order (e.g. between dma_map() operations, writes to DMA coherent memory, and MMIO accesses).

Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
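A sketch of the out-of-line barrier combined with the SoC hook from the previous entry (close to the shape described above; the config guard is an assumption):

	void (*soc_mb)(void);	/* set by SoC code that needs extra ordering */

	void arm_heavy_mb(void)
	{
	#ifdef CONFIG_OUTER_CACHE_SYNC
		if (outer_cache.sync)
			outer_cache.sync();
	#endif
		if (soc_mb)
			soc_mb();	/* e.g. OMAP interconnect ordering */
	}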
-
- 18 Jul 2015, 1 commit
By Laurent Dufour

Commit 2ae416b1 ("mm: new mm hook framework") introduced an empty header file (mm-arch-hooks.h) for every architecture, even those which don't need to define mm hooks. As suggested by Geert Uytterhoeven, this can be cleaned up through the use of a generic header file included via each per-architecture asm/include/Kbuild file. The PowerPC architecture is not impacted here, since that architecture has to define the arch_remap MM hook.

Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Suggested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
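The per-architecture change is a single Kbuild line; a sketch:

	# arch/<arch>/include/asm/Kbuild: pick up the generic header instead
	# of carrying an empty local copy
	generic-y += mm-arch-hooks.h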
-
- 17 Jul 2015, 1 commit
By Russell King

Fengguang Wu reports that building ARM with !MMU results in the following build error:

	arch/arm/kernel/built-in.o: In function `__soft_restart':
	>> :(.text+0x1624): undefined reference to `arch_virt_to_idmap'

Fix this by adding an appropriate IS_ENABLED(CONFIG_MMU) into the __virt_to_idmap() inline function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
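A sketch of the fixed helper, per the commit's description (arch_virt_to_idmap is the MMU-only function pointer the linker complained about):

	static inline unsigned long __virt_to_idmap(unsigned long x)
	{
		if (IS_ENABLED(CONFIG_MMU) && arch_virt_to_idmap)
			return arch_virt_to_idmap(x);
		else
			return __virt_to_phys(x);
	}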
-