- 11 December 2013, 2 commits
-
-
Committed by Russell King
Document the permissions which the various MT_MEMORY* mapping types will provide. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
This patch allows the kernel page tables to be dumped via a debugfs file, allowing kernel developers to check the layout of the kernel page tables and verify the various permission and type settings. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 15 November 2013, 1 commit
-
-
Committed by Kirill A. Shutemov
We're going to introduce a split page table lock for the PMD level. Let's rename the existing split ptlock for the PTE level to avoid confusion. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Tested-by: Alex Thorlton <athorlton@sgi.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Andi Kleen <ak@linux.intel.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Dave Jones <davej@redhat.com> Cc: David Howells <dhowells@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kees Cook <keescook@chromium.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Robin Holt <robinmholt@gmail.com> Cc: Sedat Dilek <sedat.dilek@gmail.com> Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Hugh Dickins <hughd@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 November 2013, 3 commits
-
-
Committed by Mahesh Sivasubramanian
LPAE-enabled kernels use the 64-bit versions of the TTBR0 and TTBR1 registers. If we're running an LPAE kernel, fill the upper half of TTBR0 with 0, because we're setting it to the idmap here (the idmap is guaranteed to be below 4GB), and fully restore TTBR1 instead of just restoring the lower 32 bits. Failure to do so can cause failures on resume from suspend when these registers are only half restored. Signed-off-by: Mahesh Sivasubramanian <msivasub@codeaurora.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Michal Simek
ECC policy can be applied to the whole system when this bit is implemented by the SoC vendor (IMP, bit 9, in the L1 page table entry format). When this bit is not implemented by the SoC vendor, it does not mean the system has no other way to do ECC. This patch ensures the message is shown only when ECC is requested via the ecc=on command line option and the kernel runs on an appropriate ARM core. Signed-off-by: Michal Simek <michal.simek@xilinx.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
The 0-day kernel build robot found this new warning: arch/arm/mm/nommu.c:303:17: warning: 'struct proc_info_list' declared inside parameter list [enabled by default] arch/arm/mm/nommu.c:303:17: warning: its scope is only this definition or declaration, which is probably not what you want [enabled by default] Fix it by including the appropriate header. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 November 2013, 2 commits
-
-
Committed by Thierry Reding
No-MMU configurations currently fail to build because they are missing the early_paging_init() symbol. Acked-by: Olof Johansson <olof@lixom.net> Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
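The fix is plausibly just an empty stub on the no-MMU side, with a signature matching the MMU version (assumed here):

    void __init early_paging_init(const struct machine_desc *mdesc,
                                  struct proc_info_list *procinfo)
    {
            /* No-MMU kernels have no early page tables to rewrite. */
    }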
-
Committed by Marc Zyngier
The exception handling code fails to clear the IT state, potentially leading to incorrect execution of the fixup if the size of the IT block is more than one instruction. Let fixup_exception do the IT sanitizing if a fixup has been found, and restore CPSR from the stack when returning from a data abort. Cc: Will Deacon <will.deacon@arm.com> Cc: stable@vger.kernel.org Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 31 October 2013, 2 commits
-
-
Committed by Santosh Shilimkar
Most of the kernel code assumes that max*pfn holds the maximum PFN, because the physical start of memory is expected to be PFN 0. Since this assumption is not true on ARM, max*pfn there means the number of memory pages instead. This was done to keep happy those drivers that use these variables to calculate the DMA bounce limit from dma_mask. Now that we have an architecture override for the maximum DMAable PFN, let's make max*pfn mean the maximum PFN on ARM as well. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
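In bootmem_init() this plausibly reduces to dropping the offset subtraction, roughly as below (max_low/max_high are assumed to be the memblock-derived PFN bounds):

    /* Before: max*pfn counted pages, offset by the start of memory:
     *   max_low_pfn = max_low - PHYS_PFN_OFFSET;
     *   max_pfn     = max_high - PHYS_PFN_OFFSET;
     * After: plain maximum PFNs, matching what generic code expects. */
    max_low_pfn = max_low;
    max_pfn = max_high;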
-
Committed by Russell King
We need to start treating DMA masks as something which is specific to the bus that the device resides on, otherwise we're going to hit all sorts of nasty issues with LPAE and 32-bit DMA controllers in >32-bit systems, where memory is offset from PFN 0. In order to start doing this, we convert the DMA mask to a PFN using the device-specific dma_to_pfn() macro. This is the reverse of the pfn_to_dma() macro, which is used to get the DMA address for the device. This gives us a PFN mask, which we can then check against the PFN limit of the DMA zone. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
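A rough sketch of the check this describes; the helper name and the zone-limit variable (arm_dma_pfn_limit) are assumptions, not necessarily the exact code:

    /* Can this device's DMA mask reach the whole DMA zone? */
    static int dma_mask_covers_zone(struct device *dev, u64 mask)
    {
            /* dma_to_pfn() is the inverse of pfn_to_dma(): it converts a
             * bus address (here, the mask) into a PFN for this device. */
            unsigned long pfn_mask = dma_to_pfn(dev, mask);

            /* The highest PFN in the DMA zone is the limit minus one. */
            return pfn_mask >= arm_dma_pfn_limit - 1;
    }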
-
- 29 October 2013, 1 commit
-
-
Committed by Sergey Dyasly
With LPAE enabled, the physical address space is larger than 4GB. Allow mapping any part of it via /dev/mem by using PHYS_MASK to determine the valid range. PHYS_MASK covers 40 bits with LPAE enabled and 32 bits otherwise. Reported-by: Vassili Karpov <av1474@comtv.ru> Signed-off-by: Sergey Dyasly <dserrg@gmail.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
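The /dev/mem range check plausibly becomes something like the following (a sketch, assuming the standard valid_mmap_phys_addr_range() hook):

    /* Accept any mapping that fits within PHYS_MASK:
     * 40 bits of physical address with LPAE, 32 bits without. */
    int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
    {
            return (pfn + (size >> PAGE_SHIFT)) <= (1 + (PHYS_MASK >> PAGE_SHIFT));
    }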
-
- 24 October 2013, 1 commit
-
-
Committed by Russell King
DMA mapping permissions were being derived from pgprot_kernel directly, without using PAGE_KERNEL. This causes them to be marked with executable permission, which is not what we want. Fix this. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 October 2013, 3 commits
-
-
Committed by Ben Dooks
If we are in BE8 mode, we must deal with the instruction stream being in LE order when data is being loaded in BE order. Ensure the data is swapped before processing, changing to <asm/opcodes.h> to provide the necessary conversion functions for the byte ordering. This stops the following warning messages from the kernel on a fault: Unhandled fault: alignment exception (0x001) at 0xbfa09567 Alignment trap: not handling instruction 030091e8 at [<80333e8c>] Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
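The pattern in the alignment handler is plausibly along these lines (variable names are illustrative; __mem_to_opcode_arm() is the conversion helper that <asm/opcodes.h> provides):

    #include <asm/opcodes.h>

    /* The instruction stream stays little-endian even in BE8 mode, so a
     * plain data load returns byte-swapped bits; convert before decoding. */
    u32 instr = __mem_to_opcode_arm(*(u32 *)instrptr);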
-
Committed by Ben Dooks
Add an ARM_BE8() helper to wrap any code that is conditional on being compiled when CONFIG_ARM_ENDIAN_BE8 is selected, and convert the existing places that do this to use it. Acked-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
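Such a helper is plausibly just a conditional pass-through macro, along these lines (a sketch; the exact guard symbol, here CONFIG_CPU_ENDIAN_BE8, is an assumption):

    /* Emit the wrapped code only on BE8 builds; expand to nothing otherwise. */
    #ifdef CONFIG_CPU_ENDIAN_BE8
    #define ARM_BE8(code...)        code
    #else
    #define ARM_BE8(code...)
    #endif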
-
Committed by Ben Dooks
The Kconfig for arch/arm/mach-ixp4xx has a local definition of ARCH_SUPPORTS_BIG_ENDIAN which could be used elsewhere. This means that if IXP4xx is selected and this symbol is selected elsewhere, a warning is produced. Clean the following warning up by making the symbol be selected by the main ARCH_IXP4XX definition and having a common definition in arch/arm/mm/Kconfig: warning: (ARCH_xxx) selects ARCH_SUPPORTS_BIG_ENDIAN which has unmet direct dependencies (ARCH_IXP4XX) warning: (ARCH_xxx) selects ARCH_SUPPORTS_BIG_ENDIAN which has unmet direct dependencies (ARCH_IXP4XX) Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
- 15 October 2013, 1 commit
-
-
Committed by Marek Szyprowski
This reverts commit 10bcdfb8. There is no consensus on the bindings for the reserved memory, so the code for handling it will be reverted. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Grant Likely <grant.likely@linaro.org>
-
- 11 October 2013, 3 commits
-
-
Committed by Santosh Shilimkar
This patch adds a step in the init sequence to recreate the kernel code/data page table mappings prior to full paging initialization. This is necessary on LPAE systems that run out of physical address space located outside the 4G limit. For these systems, this implementation provides a machine descriptor hook that allows PHYS_OFFSET to be overridden in a machine-specific fashion. Cc: Russell King <linux@arm.linux.org.uk> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: R Sricharan <r.sricharan@ti.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
Committed by Santosh Shilimkar
Commit 9e9a367c {ARM: Section based HYP idmap} moved the address conversion inside identity_mapping_add() without the corresponding print, which carries useful idmap information. Move the print inside identity_mapping_add() as well to fix this. Cc: Will Deacon <will.deacon@arm.com> Cc: Nicolas Pitre <nico@linaro.org> Cc: Russell King <linux@arm.linux.org.uk> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
Committed by Santosh Shilimkar
On some PAE systems (e.g. TI Keystone), memory is above the 32-bit addressable limit, and the interconnect provides an aliased view of parts of physical memory in the 32-bit addressable space. This alias is strictly for boot-time usage and is not otherwise usable because of coherency limitations. On such systems, the idmap mechanism needs to take this aliased mapping into account. This patch introduces virt_to_idmap() and an arch function pointer which can be populated by platforms that need it. It also populates the necessary idmap spots with the now-available virt_to_idmap(). An #ifdef approach was avoided to stay compatible with multi-platform builds. Most architectures won't touch it, in which case virt_to_idmap() falls back to the existing virt_to_phys() macro. Cc: Russell King <linux@arm.linux.org.uk> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
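A minimal sketch of the described fall-back behaviour (the hook and helper names come from the message; the exact signatures are assumptions):

    /* Platforms with an aliased boot-time view of memory set this hook. */
    unsigned long (*arch_virt_to_idmap)(unsigned long x);

    static inline unsigned long virt_to_idmap(unsigned long x)
    {
            if (arch_virt_to_idmap)
                    return arch_virt_to_idmap(x);

            /* Default: the idmap address is simply the physical address. */
            return virt_to_phys((void *)x);
    }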
-
- 10 October 2013, 2 commits
-
-
Committed by Rob Herring
All arches do essentially the same thing now for early_init_dt_setup_initrd_arch, so it can now be removed. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Acked-by: Vineet Gupta <vgupta@synopsys.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Mark Salter <msalter@redhat.com> Cc: Aurelien Jacquiot <a-jacquiot@ti.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Michal Simek <monstr@monstr.eu> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: x86@kernel.org Cc: Chris Zankel <chris@zankel.net> Cc: Max Filippov <jcmvbkbc@gmail.com> Acked-by: Grant Likely <grant.likely@linaro.org>
-
Committed by Rob Herring
In order to unify the initrd scanning for DT across architectures, make ARM set initrd_start and initrd_end instead of the physical addresses. This is aligned with all other architectures. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: linux-arm-kernel@lists.infradead.org Acked-by: Grant Likely <grant.likely@linaro.org>
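The ARM-side change plausibly amounts to converting the DT-supplied physical range to virtual addresses, something like (a sketch):

    void __init early_init_dt_setup_initrd_arch(u64 start, u64 end)
    {
            /* Generic DT code now expects virtual addresses here. */
            initrd_start = (unsigned long)__va(start);
            initrd_end = (unsigned long)__va(end);
    }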
-
- 02 October 2013, 1 commit
-
-
Committed by Andreas Herrmann
... otherwise it is impossible for the low-level iommu driver to figure out which pte flags should be used. In __map_sg_chunk we can derive the flags from the dma_data_direction. In __iommu_create_mapping we should treat the memory like DMA_BIDIRECTIONAL and pass both IOMMU_READ and IOMMU_WRITE to iommu_map. __iommu_create_mapping is used during dma_alloc_coherent (via arm_iommu_alloc_attrs). AFAIK dma_alloc_coherent is responsible for allocation _and_ mapping. I think this implies that access to the mapped pages should be allowed. Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
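The direction-to-prot translation described can be sketched as follows (the helper name is illustrative):

    static int dir_to_iommu_prot(enum dma_data_direction dir)
    {
            switch (dir) {
            case DMA_BIDIRECTIONAL:
                    return IOMMU_READ | IOMMU_WRITE;
            case DMA_TO_DEVICE:
                    return IOMMU_READ;      /* device reads from memory */
            case DMA_FROM_DEVICE:
                    return IOMMU_WRITE;     /* device writes to memory */
            default:
                    return 0;
            }
    }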
-
- 13 September 2013, 2 commits
-
-
Committed by Johannes Weiner
Unlike global OOM handling, memory cgroup code will invoke the OOM killer in any OOM situation because it has no way of telling faults occurring in kernel context, which could be handled more gracefully, from user-triggered faults. Pass a flag that identifies faults originating in user space from the architecture-specific fault handlers to generic code so that memcg OOM handling can be improved. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Michal Hocko <mhocko@suse.cz> Cc: David Rientjes <rientjes@google.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: azurIt <azurit@pobox.sk> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
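In each architecture's fault handler this is plausibly a one-line addition (a sketch; the flag initialization is shown only for context):

    unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

    /* Tell generic mm code whether the fault came from user space, so
     * kernel-context OOMs can be handled more gracefully by memcg. */
    if (user_mode(regs))
            flags |= FAULT_FLAG_USER;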
-
Committed by Johannes Weiner
Kernel faults are expected to handle OOM conditions gracefully (gup, uaccess etc.), so they should never invoke the OOM killer. Reserve this for faults triggered in user context when it is the only option. Most architectures already do this; fix up the remaining few. Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> Reviewed-by: Michal Hocko <mhocko@suse.cz> Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: David Rientjes <rientjes@google.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: azurIt <azurit@pobox.sk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 12 September 2013, 1 commit
-
-
Committed by Naoya Horiguchi
Currently hugepage migration works well only for pmd-based hugepages (mainly due to lack of testing), so we had better not enable migration of other levels of hugepages until we are ready for it. Some users of hugepage migration (mbind, move_pages, and migrate_pages) do a page table walk and check pud/pmd_huge() there, so they are safe. But the other users (soft offline and memory hotremove) don't do this, so without this patch they can try to migrate unexpected types of hugepages. To prevent this, we introduce hugepage_migration_support() as an architecture-dependent check of whether hugepages are implemented on a pmd basis or not. And on some architectures multiple sizes of hugepages are available, so hugepage_migration_support() also checks the hugepage size. Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Hillf Danton <dhillf@gmail.com> Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Hugh Dickins <hughd@google.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Michal Hocko <mhocko@suse.cz> Cc: Rik van Riel <riel@redhat.com> Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
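The check plausibly looks like this (a sketch; the guarding config symbol is an assumption):

    static inline int hugepage_migration_support(struct hstate *h)
    {
    #ifdef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
            /* Only pmd-sized hugepages are considered migratable. */
            return huge_page_shift(h) == PMD_SHIFT;
    #else
            return 0;
    #endif
    }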
-
- 02 September 2013, 1 commit
-
-
Committed by Will Deacon
On Cortex-A15 CPUs up to and including r0p4, in certain rare sequences of code, the loop buffer may deliver incorrect instructions. This workaround disables the loop buffer to avoid the erratum. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 27 August 2013, 1 commit
-
-
Committed by Marek Szyprowski
Enable reserved memory initialization from the device tree. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Michal Nazarewicz <mina86@mina86.com> Acked-by: Tomasz Figa <t.figa@samsung.com>
-
- 20 August 2013, 4 commits
-
-
Committed by Christian Daudt
[ this is a follow-up to this discussion: http://archive.arm.linux.org.uk/lurker/message/20130730.230827.a1ceb12a.en.html ] This patchset renames all uses of "bcm," name bindings to "brcm,", as they were done prior to knowing that brcm had already been standardized as the Broadcom vendor prefix (in Documentation/devicetree/bindings/vendor-prefixes.txt). This will not cause any churn on devices, because none of these bindings have made it into production yet. Acked-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Christian Daudt <csd@broadcom.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Fabio Estevam
Currently we have the following output from cache-l2x0: l2x0: 16 ways, CACHE_ID 0x410000c7, AUX_CTRL 0x32070000, Cache size: 1048576 B Using kB for the cache size can improve readability a bit: l2x0: 16 ways, CACHE_ID 0x410000c7, AUX_CTRL 0x32070000, Cache size: 1024 kB While at it, use pr_info. Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
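The print plausibly ends up as something like this (format string and variable names are illustrative):

    pr_info("l2x0: %d ways, CACHE_ID 0x%08x, AUX_CTRL 0x%08x, Cache size: %d kB\n",
            ways, cache_id, aux, l2x0_size >> 10);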
-
Committed by Nicolas Pitre
Commit f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page") introduced some help text for the CONFIG_KUSER_HELPERS option which is rather contradictory. Let's fix that, and improve it a little. Cc: <stable@vger.kernel.org> Signed-off-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Ezequiel Garcia
Add support for suspend/resume operations. The implemented procedures are identical to the ones for ARM926. Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 August 2013, 1 commit
-
-
Committed by Rob Herring
In order to specify a DMA zone size of 4GB on LPAE systems, the sizes need to be 64-bit. So make machine_desc.dma_zone_size and arm_dma_zone_size be phys_addr_t instead of unsigned long. Signed-off-by: Rob Herring <rob.herring@calxeda.com>
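Sketched, the type change is simply the following (struct fields other than the one named are elided):

    struct machine_desc {
            /* ... */
            phys_addr_t     dma_zone_size;  /* was unsigned long, which
                                             * cannot express 4GB on LPAE */
            /* ... */
    };

    phys_addr_t arm_dma_zone_size __read_mostly;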
-
- 12 August 2013, 4 commits
-
-
Committed by Will Deacon
writel_relaxed and spin_unlock are both store operations, so we only need to enforce store ordering in the dsb. Signed-off-by: Will Deacon <will.deacon@arm.com>
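Illustratively (the register write and the lock are placeholders), the pattern looks like:

    raw_spin_lock_irqsave(&l2x0_lock, flags);
    writel_relaxed(val, l2x0_base + reg);           /* a store */
    dsb(st);        /* order stores only; a full dsb is unnecessary */
    raw_spin_unlock_irqrestore(&l2x0_lock, flags);  /* also a store */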
-
Committed by Will Deacon
System-wide barriers aren't required for situations where we only need to make visibility and ordering guarantees in the inner-shareable domain (i.e. we are not dealing with devices or potentially incoherent CPUs). This patch changes the v7 TLB operations, coherent_user_range and dcache_clean_area functions to use inner-shareable barriers. For cache maintenance, only the store access type is required to ensure completion. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon
Inner-shareable TLB invalidation is typically more expensive than local (non-shareable) invalidation, so performing the broadcasting for local_flush_tlb_* operations is a waste of cycles and needlessly clobbers entries in the TLBs of other CPUs. This patch introduces __flush_tlb_* versions for many of the TLB invalidation functions, which only respect inner-shareable variants of the invalidation instructions when presented with the TLB_V7_UIS_FULL flag. The local version is also inlined to prevent SMP_ON_UP kernels from missing flushes, where the __flush variant would be called with the UP flags. This gains us around 0.5% in hackbench scores for a dual-core A15, but I would expect this to improve as more cores (and clusters) are added to the equation. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Will Deacon
The kernel TLB range invalidation functions already contain dsb instructions before and after the maintenance, so there is no need to introduce additional barriers. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 August 2013, 3 commits
-
-
Committed by Russell King
If kuser helpers are not provided by the kernel, disable user access to the vectors page. With the kuser helpers gone, there is no reason for this page to be visible to userspace. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Provide a kernel configuration option to allow the kernel user helpers to be removed from the vector page, thereby preventing their use in ROP (return-oriented programming) attacks. This option is only visible for CPU architectures which natively support all the operations which kernel user helpers would normally provide, and must be enabled with caution. Cc: <stable@vger.kernel.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Move the machine vector stubs into the page above the vector page, which we can prevent from being visible to userspace. Also move the reset stub, and place the swi vector at a location that the 'ldr' can get to. This hides pointers into the kernel which could give valuable information to attackers, and reduces the number of exploitable instructions at a fixed address. Cc: <stable@vger.kernel.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 31 July 2013, 1 commit
-
-
Committed by Steven Capper
General forms of huge_pte_alloc, huge_pte_offset and follow_huge_pmd are now available in mm/hugetlb.c. This patch removes the ARM copies of these functions and activates the general ones by enabling CONFIG_ARCH_WANT_GENERAL_HUGETLB. Signed-off-by: Steve Capper <steve.capper@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-