1. 26 May 2014, 1 commit
2. 22 May 2014, 1 commit
    • ARM: dma-mapping: avoid calling dma_cache_maint_page() on dev=>cpu · deace4a6
      Authored by Russell King
      Avoid calling dma_cache_maint_page() when unmapping a DMA_TO_DEVICE
      buffer.  The L1 cache ops never do anything in this circumstance, nor
      do they ever need to - all that matters for this case is that the data
      written is visible to the device before DMA starts.  What happens during
      the transfer (provided the buffer is not written to) is of no real
      consequence.
      
      We already do this optimisation for the L2 cache.
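      A minimal sketch of the resulting dev-to-cpu unmap path, assuming it sits in
      arch/arm/mm/dma-mapping.c roughly as below (helper names follow the kernel's
      conventions but are reproduced from memory, not the literal diff):
      
      	static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
      		size_t size, enum dma_data_direction dir)
      	{
      		phys_addr_t paddr = page_to_phys(page) + off;
      
      		/* A DMA_TO_DEVICE buffer was only read by the device, so
      		 * nothing needs to be made visible to the CPU on unmap:
      		 * skip both the outer and L1 maintenance entirely. */
      		if (dir != DMA_TO_DEVICE) {
      			outer_inv_range(paddr, paddr + size);
      			dma_cache_maint_page(page, off, size, dir, dmac_unmap_area);
      		}
      	}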
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      deace4a6
3. 25 Apr 2014, 1 commit
    • ARM: 8037/1: mm: support big-endian page tables · 86f40622
      Authored by Jianguo Wu
      When LPAE and big-endian are enabled on a HiSilicon board, with memory
      specified as mem=384M mem=512M@7680M, we get a bad page state:
      
      Freeing unused kernel memory: 180K (c0466000 - c0493000)
      BUG: Bad page state in process init  pfn:fa442
      page:c7749840 count:0 mapcount:-1 mapping:  (null) index:0x0
      page flags: 0x40000400(reserved)
      Modules linked in:
      CPU: 0 PID: 1 Comm: init Not tainted 3.10.27+ #66
      [<c000f5f0>] (unwind_backtrace+0x0/0x11c) from [<c000cbc4>] (show_stack+0x10/0x14)
      [<c000cbc4>] (show_stack+0x10/0x14) from [<c009e448>] (bad_page+0xd4/0x104)
      [<c009e448>] (bad_page+0xd4/0x104) from [<c009e520>] (free_pages_prepare+0xa8/0x14c)
      [<c009e520>] (free_pages_prepare+0xa8/0x14c) from [<c009f8ec>] (free_hot_cold_page+0x18/0xf0)
      [<c009f8ec>] (free_hot_cold_page+0x18/0xf0) from [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8)
      [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8) from [<c00b6458>] (handle_mm_fault+0xf4/0x120)
      [<c00b6458>] (handle_mm_fault+0xf4/0x120) from [<c0013754>] (do_page_fault+0xfc/0x354)
      [<c0013754>] (do_page_fault+0xfc/0x354) from [<c0008400>] (do_DataAbort+0x2c/0x90)
      [<c0008400>] (do_DataAbort+0x2c/0x90) from [<c0008fb4>] (__dabt_usr+0x34/0x40)
      
      The bad pfn:fa442 is not in system memory (mem=384M mem=512M@7680M). After
      debugging, I found that in the page fault handler we read back a wrong pfn
      from the pte just after setting it, as follows:
      do_anonymous_page()
      {
      	...
      	set_pte_at(mm, address, page_table, entry);
      
      	//debug code
      	pfn = pte_pfn(entry);
      	pr_info("pfn:0x%lx, pte:0x%llx\n", pfn, pte_val(entry));
      
      	//read back the pte just set
      	new_pte = pte_offset_map(pmd, address);
      	new_pfn = pte_pfn(*new_pte);
      	pr_info("new pfn:0x%lx, new pte:0x%llx\n", new_pfn, pte_val(*new_pte));
      	...
      }
      
      pfn:   0x1fa4f5,     pte:0xc00001fa4f575f
      new_pfn:0xfa4f5, new_pte:0xc00000fa4f5f5f	//new pfn/pte is wrong.
      
      The bug happens in cpu_v7_set_pte_ext(ptep, pte):
      An LPAE PTE is a 64-bit quantity, passed to cpu_v7_set_pte_ext in the r2 and r3 registers.
      On an LE kernel, r2 contains the LSB of the PTE, and r3 the MSB.
      On a BE kernel, the assignment is reversed.
      
      Unfortunately, the current code always assumes the LE case,
      leading to corruption of the PTE when clearing/setting bits.
      
      This patch fixes this issue much like it has been done already in the
      cpu_v7_switch_mm case.
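      To see the issue in plain C: the 64-bit PTE arrives split across two 32-bit
      registers, and which register holds the low word flips with kernel
      endianness, so the halves must be selected by configuration. A hedged sketch
      (these macros are illustrative; the real fix is in the 3-level proc-v7
      assembly):
      
      	#include <stdint.h>
      
      	/* Under the 32-bit ARM calling convention, a 64-bit argument is
      	 * passed in a register pair. On a BE8 kernel the most significant
      	 * word sits in the lower-numbered register, so code that always
      	 * treats r2 as the LSB corrupts the PTE when masking bits. */
      	#ifdef CONFIG_CPU_ENDIAN_BE8
      	#define PTE_LOW(r2, r3)		(r3)	/* BE: r2 holds the MSB */
      	#define PTE_HIGH(r2, r3)	(r2)
      	#else
      	#define PTE_LOW(r2, r3)		(r2)	/* LE: r2 holds the LSB */
      	#define PTE_HIGH(r2, r3)	(r3)
      	#endif
      
      	static uint64_t pte_from_regs(uint32_t r2, uint32_t r3)
      	{
      		return ((uint64_t)PTE_HIGH(r2, r3) << 32) | PTE_LOW(r2, r3);
      	}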
      
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      86f40622
4. 23 Apr 2014, 3 commits
5. 07 Apr 2014, 1 commit
6. 27 Mar 2014, 1 commit
    • ARM: cache-tauros2: remove ARMv6 code · 027f3f96
      Authored by Arnd Bergmann
      When building a kernel with support for both ARMv6 and ARMv7 but
      no MMU, the call from tauros2_internal_init to adjust_cr causes
      a link error. While that could probably be resolved, we don't
      actually support cache-tauros2 on ARMv6 any more. All PJ4 CPU
      implementations support both ARMv6 and ARMv7 and we already assume
      that we are using them only in ARMv7 mode.
      
      Removing the ARMv6 code path reduces the code size and avoids
      the linker error.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com>
      027f3f96
7. 22 Mar 2014, 1 commit
    • ARM: rpc: autoselect CPU_SA110 · fa04e209
      Authored by Arnd Bergmann
      ARCH_RPC no longer supports any CPU other than the StrongARM110, so we
      can make the option implicitly selected by the platform and no longer
      offer building a kernel without CPU support.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Russell King <linux@arm.linux.org.uk>
      fa04e209
8. 12 Mar 2014, 1 commit
9. 28 Feb 2014, 2 commits
    • arm: dma-mapping: remove order parameter from arm_iommu_create_mapping() · 68efd7d2
      Authored by Marek Szyprowski
      The 'order' parameter of the IOMMU-aware dma-mapping implementation was
      introduced mainly as a hack to reduce the size of the bitmap used for
      tracking IO virtual address space. Now that the bitmap can be resized
      dynamically, this hack is no longer needed and can be removed without any
      impact on client devices. This makes the parameters of
      arm_iommu_create_mapping() much easier to understand: the 'size' parameter
      now means the maximum supported IO address space size.
      
      The code will allocate (resize) bitmap in chunks, ensuring that a single
      chunk is not larger than a single memory page to avoid unreliable
      allocations of size larger than PAGE_SIZE in atomic context.
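      A hedged sketch of that chunking rule (standalone arithmetic, not the
      kernel code; a PAGE_SHIFT of 12 is assumed):
      
      	#include <stddef.h>
      
      	#define PAGE_SHIFT	12
      	#define PAGE_SIZE	(1UL << PAGE_SHIFT)
      	#define BITS_PER_BYTE	8
      
      	/* One bit per IO page; cap each bitmap chunk at a single memory
      	 * page so no allocation exceeds PAGE_SIZE in atomic context. */
      	static size_t iova_bitmap_chunks(size_t size)
      	{
      		size_t total_bits = size >> PAGE_SHIFT;
      		size_t bits_per_chunk = PAGE_SIZE * BITS_PER_BYTE;
      
      		return (total_bits + bits_per_chunk - 1) / bits_per_chunk;
      	}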
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      68efd7d2
    • arm: dma-mapping: Add support to extend DMA IOMMU mappings · 4d852ef8
      Authored by Andreas Herrmann
      Instead of using just one bitmap to keep track of IO virtual addresses
      (handed out for IOMMU use), introduce an array of bitmaps. This allows
      us to extend an existing mapping when it runs out of iova space in the
      initial mapping.
      
      If there is not enough space in the mapping to service an IO virtual
      address allocation request, __alloc_iova() tries to extend the mapping
      -- by allocating another bitmap -- and makes another allocation
      attempt using the freshly allocated bitmap.
      
      This allows ARM IOMMU drivers to start with a decent initial size when
      a dma_iommu_mapping is created and still avoid running out of IO
      virtual addresses for the mapping.
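      A hedged sketch of that retry-with-extension flow (all names below are
      hypothetical stand-ins, not the kernel's __alloc_iova internals):
      
      	/* One bitmap per extension; nr_bitmaps grows on demand. */
      	struct iova_bitmaps {
      		unsigned long	*bitmaps[16];	/* hypothetical fixed cap */
      		unsigned int	nr_bitmaps;
      	};
      
      	static dma_addr_t alloc_iova_sketch(struct iova_bitmaps *m, size_t pages)
      	{
      		unsigned int i;
      		dma_addr_t iova;
      
      		/* First, try every bitmap allocated so far. */
      		for (i = 0; i < m->nr_bitmaps; i++) {
      			iova = try_alloc_from(m, i, pages);	/* hypothetical */
      			if (iova != DMA_ERROR_CODE)
      				return iova;
      		}
      
      		/* Out of space: allocate one more bitmap, then retry once. */
      		if (extend_mapping(m) == 0)			/* hypothetical */
      			return try_alloc_from(m, m->nr_bitmaps - 1, pages);
      
      		return DMA_ERROR_CODE;
      	}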
      Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
      [mszyprow: removed extensions parameter to arm_iommu_create_mapping()
       function, which will be modified in the next patch anyway, also some
       debug messages about extending bitmap]
      Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
      4d852ef8
10. 23 Feb 2014, 3 commits
11. 19 Feb 2014, 2 commits
12. 11 Feb 2014, 1 commit
13. 10 Feb 2014, 5 commits
    • ARM: 7954/1: mm: remove remaining domain support from ARMv6 · b6ccb980
      Authored by Will Deacon
      CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is
      because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do
      not have hardware thread registers. The lack of these registers requires
      the kernel to update the vectors page at each context switch in order to
      write a new TLS pointer. This write must be done via the userspace
      mapping, since aliasing caches can lead to expensive flushing when using
      kmap. Finally, this requires the vectors page to be mapped r/w for
      kernel and r/o for user, which has implications for things like put_user
      which must trigger CoW appropriately when targeting user pages.
      
      The upshot of all this is that a v6/v7 kernel makes use of domains to
      segregate kernel and user memory accesses. This has the nasty
      side-effect of making device mappings executable, which has been
      observed to cause subtle bugs on recent cores (e.g. Cortex-A15
      performing a speculative instruction fetch from the GIC and acking an
      interrupt in the process).
      
      This patch solves this problem by removing the remaining domain support
      from ARMv6. A new memory type is added specifically for the vectors page
      which allows that page (and only that page) to be mapped as user r/o,
      kernel r/w. All other user r/o pages are mapped also as kernel r/o.
      Patch co-developed with Russell King.
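      A heavily hedged sketch of the new memory type, in the style of the
      mem_types[] table in arch/arm/mm/mmu.c (the exact flag names are
      assumptions from memory, not the literal diff):
      
      	[MT_VECTORS] = {
      		/* The one page mapped user r/o, kernel r/w; every other
      		 * user r/o page is now also kernel r/o, so no domain is
      		 * needed to tell kernel and user accesses apart. */
      		.prot_pte	= L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
      				  L_PTE_VECTORS,
      		.prot_l1	= PMD_TYPE_TABLE,
      		.domain		= DOMAIN_KERNEL,
      	},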
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      b6ccb980
    • ARM: 7949/1: feroceon: Log a FW_BUG if the L2 cache is turned on at boot · 0f054e3c
      Authored by Jason Gunthorpe
      Booting on Feroceon CPUs requires the L2 cache to be turned off. With
      some kernel configurations (notably CONFIG_ARM_PATCH_PHYS_VIRT
      disabled) the kernel will boot even if the L2 is turned on.
      
      However there may be subtle breakage, and when PATCH_PHYS_VIRT is
      enabled it is very likely that booting with L2 will crash at early
      boot before any kernel diagnostic output.
      
      The diagnostic message is intended to discourage people from shipping
      bootloaders that leave the L2 turned on.
      
      The issue on feroceon is that the L2 is bypassed when the L1 caches
      are disabled. So the decompressor will place parts of the kernel image
      into the L2 and the early cache-off boot code in head.S will write to
      parts of the kernel image, bypassing the L2 and creating inconsistency.
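      A hedged sketch of the diagnostic (FW_BUG is the kernel's standard
      firmware-bug log prefix; the parameter and bit name are assumptions, not
      the literal patch):
      
      	#include <linux/printk.h>	/* pr_err(), FW_BUG */
      
      	static void __init feroceon_l2_boot_check(u32 extra_features)
      	{
      		/* L2_ENABLE_BIT is a hypothetical name for the L2-enable
      		 * flag in the Feroceon extra-features register. */
      		if (extra_features & L2_ENABLE_BIT)
      			pr_err(FW_BUG "Feroceon L2 cache enabled at boot; "
      			       "the bootloader should leave it disabled\n");
      	}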
      
      Tested on ARM Kirkwood.
      Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Acked-by: Jason Cooper <jason@lakedaemon.net>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      0f054e3c
    • ARM: 7940/1: add support for the Cortex-A12 processor · ddb2ff73
      Authored by Jonathan Austin
      The A12 behaves as the A7/A15 does with respect to setting the SMP bit, and
      doesn't require TLB ops broadcasting to be explicitly enabled like the A9 does.
      
      Note that as the ACTLR cannot (usually) be written from non-secure state,
      it is the responsibility of the bootloader/firmware to set this bit per
      core; it is done here in Linux only as a last resort, in case of bad firmware.
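      A hedged C rendering of that last-resort write (the real code is assembly
      in proc-v7.S; the SMP bit is assumed to be ACTLR[6], as on the A7/A15):
      
      	static void set_actlr_smp(void)
      	{
      		unsigned int actlr;
      
      		/* Read ACTLR, set the SMP bit, write it back. The write
      		 * typically has no effect (or traps) in non-secure state,
      		 * hence "last resort" only. */
      		asm volatile("mrc p15, 0, %0, c1, c0, 1" : "=r" (actlr));
      		actlr |= 1 << 6;
      		asm volatile("mcr p15, 0, %0, c1, c0, 1" : : "r" (actlr));
      	}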
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ddb2ff73
    • ARM: 7953/1: mm: ensure TLB invalidation is complete before enabling MMU · bae0ca2b
      Authored by Will Deacon
      During __v{6,7}_setup, we invalidate the TLBs since we are about to
      enable the MMU on return to head.S. Unfortunately, without a subsequent
      dsb instruction, the invalidation is not guaranteed to have completed by
      the time we write to the sctlr, potentially exposing us to junk/stale
      translations cached in the TLB.
      
      This patch reworks the init functions so that the dsb used to ensure
      completion of cache/predictor maintenance is also used to ensure
      completion of the TLB invalidation.
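      The required ordering, as a hedged ARMv7 sketch (the actual fix is in the
      __v{6,7}_setup assembly; this only shows where the dsb must sit):
      
      	static void tlb_flush_then_enable_mmu(unsigned int sctlr)
      	{
      		asm volatile(
      			"mcr p15, 0, %0, c8, c7, 0\n"	/* TLBIALL: invalidate TLBs  */
      			"dsb\n"				/* complete the invalidation */
      			"mcr p15, 0, %1, c1, c0, 0\n"	/* write SCTLR: enable MMU   */
      			"isb\n"				/* synchronise the change    */
      			: : "r" (0), "r" (sctlr) : "memory");
      	}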
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      bae0ca2b
    • ARM: 7950/1: mm: Fix stage-2 device memory attributes · 4d9c5b89
      Authored by Christoffer Dall
      The stage-2 memory attributes are distinct from the Hyp memory
      attributes and the Stage-1 memory attributes.  We were using the stage-1
      memory attributes for stage-2 mappings causing device mappings to be
      mapped as normal memory.  Add the S2 equivalent defines for memory
      attributes and fix the comments explaining the defines while at it.
      
      Add a prot_pte_s2 field to the mem_type struct and fill out the field
      for device mappings accordingly.
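      A hedged sketch of the struct change, following the shape of struct
      mem_type in arch/arm/mm/mm.h (field order is from memory, not the diff):
      
      	struct mem_type {
      		pteval_t	prot_pte;	/* stage-1 pte attributes       */
      		pteval_t	prot_pte_s2;	/* stage-2 pte attributes (new) */
      		pmdval_t	prot_l1;
      		pmdval_t	prot_sect;
      		unsigned int	domain;
      	};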
      
      Cc: <stable@vger.kernel.org>	[3.9+]
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4d9c5b89
14. 28 Jan 2014, 1 commit
15. 22 Jan 2014, 2 commits
    • arch/arm/mm/init.c: use memblock apis for early memory allocations · cfb66586
      Authored by Santosh Shilimkar
      Switch the early memory allocator to the memblock interfaces instead of
      the bootmem allocator. No functional change in behavior from the bootmem
      users' point of view.
      
      Archs already converted to NO_BOOTMEM now use the memblock interfaces
      directly instead of bootmem wrappers built on top of memblock. For the
      archs which still use bootmem, these new APIs simply fall back to the
      existing bootmem APIs.
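      A hedged sketch of the conversion pattern, assuming memblock_virt_alloc()
      (the wrapper introduced alongside this series) as the replacement call;
      the actual call sites in arch/arm/mm/init.c differ in detail:
      
      	#include <linux/bootmem.h>
      	#include <linux/memblock.h>
      
      	static void * __init early_alloc_sketch(phys_addr_t size, phys_addr_t align)
      	{
      		/* Before: bootmem wrapper, e.g. alloc_bootmem_align(size, align). */
      
      		/* After: direct memblock interface; on archs still using bootmem
      		 * this falls back to the existing bootmem allocator. */
      		return memblock_virt_alloc(size, align);
      	}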
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Grygorii Strashko <grygorii.strashko@ti.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Paul Walmsley <paul@pwsan.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cfb66586
    • mm, show_mem: remove SHOW_MEM_FILTER_PAGE_COUNT · aec6a888
      Authored by Mel Gorman
      Commit 4b59e6c4 ("mm, show_mem: suppress page counts in
      non-blockable contexts") introduced SHOW_MEM_FILTER_PAGE_COUNT to
      suppress PFN walks on large memory machines.  Commit c78e9363 ("mm:
      do not walk all of system memory during show_mem") avoided a PFN walk in
      the generic show_mem helper which removes the requirement for
      SHOW_MEM_FILTER_PAGE_COUNT in that case.
      
      This patch removes PFN walkers from the arch-specific implementations
      that report on a per-node or per-zone granularity.  ARM and unicore32
      still do a PFN walk, as they report memory usage per bank, which is a
      much finer granularity at which the debugging information may still be
      of use. As the remaining arches doing PFN walks have relatively small
      amounts of memory, this patch simply removes SHOW_MEM_FILTER_PAGE_COUNT.
      
      [akpm@linux-foundation.org: fix parisc]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: James Bottomley <jejb@parisc-linux.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      aec6a888
16. 08 Jan 2014, 1 commit
17. 29 Dec 2013, 7 commits
18. 11 Dec 2013, 5 commits
19. 10 Dec 2013, 1 commit