1. 24 December 2009 (1 commit)
  2. 02 December 2009 (1 commit)
  3. 01 December 2009 (1 commit)
  4. 03 November 2009 (1 commit)
    • ARM: ensure initial page tables are setup for SMP systems · 4b46d641
      Authored by Russell King
      Mapping the same memory using two different attributes (memory
      type, shareability, cacheability) is unpredictable.  During boot,
      we hit a window while the kernel's page tables are being updated in
      which dirty cache lines can be left in the cache and subsequently
      missed.  This causes stack corruption, and therefore a crash.
      
      Therefore, ensure that the shareability and cacheability settings
      match the configuration that will be used later; together with the
      restriction in early_cachepolicy(), this ensures that we won't
      create a mismatch during boot.  (A hedged sketch of this idea
      follows the entry.)
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4b46d641
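      A minimal sketch of the idea, assuming the usual section-descriptor macros
      from asm/pgtable-hwdef.h; this is illustrative only, not the actual
      head.S/mmu.c change:

          #include <asm/pgtable-hwdef.h>

          /*
           * Choose the boot-time section attributes so they already match what
           * the final kernel page tables will use, rather than mapping the same
           * RAM cacheable in one table and uncached in the other.
           */
          #ifdef CONFIG_SMP
          #define BOOT_SECT_FLAGS  (PMD_TYPE_SECT | PMD_SECT_WBWA | PMD_SECT_S)  /* write-alloc, shared */
          #else
          #define BOOT_SECT_FLAGS  (PMD_TYPE_SECT | PMD_SECT_WB)                 /* write-back */
          #endif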
  5. 29 September 2009 (2 commits)
    • ARM: Don't allow highmem on SMP platforms without h/w TLB ops broadcast · e616c591
      Authored by Russell King
      We suffer an unfortunate combination of "features" which makes highmem
      support on platforms without hardware TLB maintenance broadcast difficult:
      
      - we need kmap_high_get() support for DMA cache coherence
      - this requires kmap_high() to take a spinlock with IRQs disabled
      - kmap_high() occasionally calls flush_all_zero_pkmaps() to clear
        out old mappings
      - flush_all_zero_pkmaps() calls flush_tlb_kernel_range(), which
        on s/w IPI'd systems eventually calls smp_call_function_many()
      - smp_call_function_many() must not be called with IRQs disabled:
      
      WARNING: at kernel/smp.c:380 smp_call_function_many+0xc4/0x240()
      Modules linked in:
      Backtrace:
      [<c00306f0>] (dump_backtrace+0x0/0x108) from [<c0286e6c>] (dump_stack+0x18/0x1c)
       r6:c007cd18 r5:c02ff228 r4:0000017c
      [<c0286e54>] (dump_stack+0x0/0x1c) from [<c0053e08>] (warn_slowpath_common+0x50/0x80)
      [<c0053db8>] (warn_slowpath_common+0x0/0x80) from [<c0053e50>] (warn_slowpath_null+0x18/0x1c)
       r7:00000003 r6:00000001 r5:c1ff4000 r4:c035fa34
      [<c0053e38>] (warn_slowpath_null+0x0/0x1c) from [<c007cd18>] (smp_call_function_many+0xc4/0x240)
      [<c007cc54>] (smp_call_function_many+0x0/0x240) from [<c007cec0>] (smp_call_function+0x2c/0x38)
      [<c007ce94>] (smp_call_function+0x0/0x38) from [<c005980c>] (on_each_cpu+0x1c/0x38)
      [<c00597f0>] (on_each_cpu+0x0/0x38) from [<c0031788>] (flush_tlb_kernel_range+0x50/0x58)
       r6:00000001 r5:00000800 r4:c05f3590
      [<c0031738>] (flush_tlb_kernel_range+0x0/0x58) from [<c009c600>] (flush_all_zero_pkmaps+0xc0/0xe8)
      [<c009c540>] (flush_all_zero_pkmaps+0x0/0xe8) from [<c009c6b4>] (kmap_high+0x8c/0x1e0)
      [<c009c628>] (kmap_high+0x0/0x1e0) from [<c00364a8>] (kmap+0x44/0x5c)
      [<c0036464>] (kmap+0x0/0x5c) from [<c0109dfc>] (cramfs_readpage+0x3c/0x194)
      [<c0109dc0>] (cramfs_readpage+0x0/0x194) from [<c0090c14>] (__do_page_cache_readahead+0x1f0/0x290)
      [<c0090a24>] (__do_page_cache_readahead+0x0/0x290) from [<c0090ce4>] (ra_submit+0x30/0x38)
      [<c0090cb4>] (ra_submit+0x0/0x38) from [<c0089384>] (filemap_fault+0x3dc/0x438)
       r4:c1819988
      [<c0088fa8>] (filemap_fault+0x0/0x438) from [<c009d21c>] (__do_fault+0x58/0x43c)
      [<c009d1c4>] (__do_fault+0x0/0x43c) from [<c009e8cc>] (handle_mm_fault+0x104/0x318)
      [<c009e7c8>] (handle_mm_fault+0x0/0x318) from [<c0033c98>] (do_page_fault+0x188/0x1e4)
      [<c0033b10>] (do_page_fault+0x0/0x1e4) from [<c0033ddc>] (do_translation_fault+0x7c/0x84)
      [<c0033d60>] (do_translation_fault+0x0/0x84) from [<c002b474>] (do_DataAbort+0x40/0xa4)
       r8:c1ff5e20 r7:c0340120 r6:00000805 r5:c1ff5e54 r4:c03400d0
      [<c002b434>] (do_DataAbort+0x0/0xa4) from [<c002bcac>] (__dabt_svc+0x4c/0x60)
      ...
      
      So we disable highmem support on these systems.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      e616c591
    • 041d785f
  6. 15 August 2009 (1 commit)
    • ARM: Fix broken highmem support · dde5828f
      Authored by Russell King
      Currently, highmem is selectable, and you can request an increased
      vmalloc area.  However, none of this has any effect on the memory
      layout since a patch in the highmem series was accidentally dropped.
      Moreover, even if you did want highmem, all memory would still be
      registered as lowmem, possibly resulting in overflow of the available
      virtual mapping space.
      
      The highmem boundary is determined by the highest allowed beginning
      of the vmalloc area, which depends on its configurable minimum size
      (see commit 60296c71 for details on
      this).
      
      We should create mappings and initialize bootmem only for low memory,
      while the zone allocator must still be told about highmem.
      
      Currently, memory nodes which are completely located in high memory
      are not supported.  This is not a huge limitation since systems
      relying on highmem support are unlikely to have discontiguous memory
      with large holes.
      
      [ A similar patch was meant to be merged before commit 5f0fbf9e
        and be available  in Linux v2.6.30, however some git rebase screw-up
        of mine dropped the first commit of the series, and that goofage
        escaped testing somehow as well. -- Nico ]
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Reviewed-by: Nicolas Pitre <nico@marvell.com>
      dde5828f
  7. 16 June 2009 (1 commit)
  8. 19 May 2009 (1 commit)
    • omap iommu: simple virtual address space management · 69d3a84a
      Authored by Hiroshi DOYU
      This patch provides device drivers that have an OMAP IOMMU with
      address-mapping APIs between the device virtual address (iommu), the
      physical address, and the MPU virtual address.
      
      There are 4 possible patterns for iommu virtual address (iova/da) mapping.
      
            | iova/da   pa     va    (d)-(p)-(v)   iommu_ function          page type
          ----------------------------------------------------------------------------
          1 | c         c      c      1 - 1 - 1    _kmap()    / _kunmap()       s
          2 | c         c,a    c      1 - 1 - 1    _kmalloc() / _kfree()        s
          3 | c         d      c      1 - n - 1    _vmap()    / _vunmap()       s
          4 | c         d,a    c      1 - n - 1    _vmalloc() / _vfree()        n*
      
          'iova':	device iommu virtual address
          'da':	alias of 'iova'
          'pa':	physical address
          'va':	mpu virtual address
      
          'c':   contiguous memory area
          'd':   discontiguous memory area
          'a':   anonymous memory allocation
          '()':  optional feature

          'n':   a normal page (4KB) size is used.
          's':   multiple iommu superpage sizes (16MB, 1MB, 64KB, 4KB) are used.

          '*':   not yet, but feasible.
      Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
      69d3a84a
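      A hedged usage sketch of the pattern-4 pair from the table above; the
      signatures of iommu_vmalloc()/iommu_vfree() shown in the comment are
      assumptions based on this description, not taken from the patch itself:

          /*
           * Assumed shapes (illustrative only):
           *   u32  iommu_vmalloc(struct iommu *obj, u32 da, size_t bytes, u32 flags);
           *   void iommu_vfree(struct iommu *obj, u32 da);
           */
          static int example_map_buffer(struct iommu *obj)
          {
                  /* pattern 4: contiguous da, discontiguous anonymous pa, normal pages */
                  u32 da = iommu_vmalloc(obj, 0, SZ_1M, 0);   /* da == 0: let the allocator choose */

                  if (IS_ERR_VALUE(da))
                          return -ENOMEM;

                  /* ... hand 'da' to the device and run the transfer ... */

                  iommu_vfree(obj, da);
                  return 0;
          }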
  9. 29 April 2009 (1 commit)
  10. 04 April 2009 (1 commit)
  11. 29 March 2009 (1 commit)
  12. 16 March 2009 (2 commits)
    • [ARM] ignore high memory with VIPT aliasing caches · 3f973e22
      Authored by Nicolas Pitre
      VIPT aliasing caches have issues of their own which are not yet handled.
      Usage of discard_old_kernel_data() in copypage-v6.c is not highmem ready,
      kmap/fixmap stuff doesn't take account of cache colouring, etc.
      If/when those issues are handled then this could be reverted.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      3f973e22
    • [ARM] kmap support · d73cd428
      Authored by Nicolas Pitre
      The kmap virtual area borrows a 2MB range at the top of the 16MB area
      below PAGE_OFFSET currently reserved for kernel modules and/or the
      XIP kernel.  This 2MB corresponds to the range covered by 2 consecutive
      second-level page tables, or a single pmd entry as seen by the Linux
      page table abstraction.  Because XIP kernels are unlikely to be seen
      on systems needing highmem support, there shouldn't be any shortage of
      VM space for modules (14 MB for modules is still way more than twice the
      typical usage).
      
      Because the virtual mapping of highmem pages can go away at any moment
      after kunmap() is called on them, we need to bypass the delayed cache
      flushing provided by flush_dcache_page() in that case.
      
      The atomic kmap versions are based on fixmaps, and
      __cpuc_flush_dcache_page() is used directly in that case.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      d73cd428
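      A hedged usage sketch of the mapping interface described above: kmap() may
      sleep and returns a kernel virtual address that is valid until kunmap();
      the atomic variant is the fixmap-based path mentioned in the last paragraph.

          #include <linux/highmem.h>

          /* Read one byte out of a (possibly highmem) page via a temporary mapping. */
          static unsigned char peek_first_byte(struct page *page)
          {
                  unsigned char *vaddr = kmap(page);  /* may sleep */
                  unsigned char c = vaddr[0];

                  kunmap(page);                       /* the mapping may now go away */
                  return c;
          }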
  13. 13 March 2009 (1 commit)
    • [ARM] 5422/1: ARM: MMU: add a Non-cacheable Normal executable memory type · e4707dd3
      Authored by Paul Walmsley
      This patch adds a Non-cacheable Normal ARM executable memory type,
      MT_MEMORY_NONCACHED.
      
      On OMAP3, this is used for rapid dynamic voltage/frequency scaling in
      the VDD2 voltage domain. OMAP3's SDRAM controller (SDRC) is in the
      VDD2 voltage domain, and its clock frequency must change along with
      voltage. The SDRC clock change code cannot run from SDRAM itself,
      since SDRAM accesses are paused during the clock change. So the
      current implementation of the DVFS code executes from OMAP on-chip
      SRAM, aka "OCM RAM."
      
      If the OCM RAM pages are marked as Cacheable, the ARM cache controller
      will attempt to flush dirty cache lines to the SDRC, so it can fill
      those lines with OCM RAM instruction code. The problem is that the
      SDRC is paused during DVFS, and so any SDRAM access causes the ARM MPU
      subsystem to hang.
      
      TI's original solution to this problem was to mark the OCM RAM
      sections as Strongly Ordered memory, thus preventing caching. This is
      overkill: since the memory is marked as non-bufferable, OCM RAM writes
      become needlessly slow. The idea of "Strongly Ordered SRAM" is also
      conceptually disturbing. Previous LAKML list discussion is here:
      
      http://www.spinics.net/lists/arm-kernel/msg54312.html
      
      This memory type MT_MEMORY_NONCACHED is used for OCM RAM by a future
      patch.
      
      Cc: Richard Woodruff <r-woodruff2@ti.com>
      Signed-off-by: Paul Walmsley <paul@pwsan.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      e4707dd3
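      A hedged sketch of how a platform might use the new type in its static I/O
      table; the addresses and names here are hypothetical, not taken from the
      later OMAP patch this message refers to:

          #include <asm/mach/map.h>
          #include <asm/sizes.h>

          /* Hypothetical on-chip SRAM window mapped with the new memory type. */
          static struct map_desc sram_io_desc __initdata = {
                  .virtual = 0xfe400000,                  /* hypothetical VA */
                  .pfn     = __phys_to_pfn(0x40200000),   /* hypothetical SRAM PA */
                  .length  = SZ_64K,
                  .type    = MT_MEMORY_NONCACHED,         /* Normal, non-cacheable, executable */
          };

          static void __init board_map_io(void)
          {
                  iotable_init(&sram_io_desc, 1);
          }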
  14. 19 February 2009 (1 commit)
  15. 14 December 2008 (1 commit)
    • [ARM] eliminate NULL test and memset after alloc_bootmem · 6ce1b871
      Authored by Julia Lawall
      As noted by Akinobu Mita in patch b1fceac2,
      alloc_bootmem and related functions never return NULL and always return a
      zeroed region of memory.  Thus a NULL test or memset after calls to these
      functions is unnecessary.
      
      This was fixed using the following semantic patch.
      (http://www.emn.fr/x-info/coccinelle/)
      
      // <smpl>
      @@
      expression E;
      statement S;
      @@
      
      E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
      ... when != E
      (
      - BUG_ON (E == NULL);
      |
      - if (E == NULL) S
      )
      
      @@
      expression E,E1;
      @@
      
      E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
      ... when != E
      - memset(E,0,E1);
      // </smpl>
      Signed-off-by: Julia Lawall <julia@diku.dk>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      6ce1b871
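      The transformation the semantic patch performs, shown on a hand-written,
      hypothetical caller (the function and variable names are illustrative):

          #include <linux/kernel.h>
          #include <linux/string.h>
          #include <linux/bootmem.h>

          static void * __init setup_table_before(size_t size)
          {
                  void *table = alloc_bootmem(size);

                  if (table == NULL)              /* dead test: alloc_bootmem() never returns NULL */
                          BUG();
                  memset(table, 0, size);         /* redundant: the region is already zeroed */
                  return table;
          }

          static void * __init setup_table_after(size_t size)
          {
                  return alloc_bootmem(size);     /* all that remains after the semantic patch */
          }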
  16. 01 December 2008 (1 commit)
  17. 28 November 2008 (3 commits)
  18. 27 November 2008 (1 commit)
    • [ARM] remove memzero() · 59f0cb0f
      Authored by Russell King
      As suggested by Andrew Morton, remove memzero() - it's not supported
      on other architectures, so use of it is a potential build-breaking bug.
      Since the compiler optimizes memset(x,0,n) to __memzero() perfectly
      well, we don't miss out on the underlying benefits of memzero().
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      59f0cb0f
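      The corresponding source change is mechanical; a hedged example (the buffer
      name is hypothetical):

          -       memzero(buf, sizeof(buf));      /* ARM-only helper, removed */
          +       memset(buf, 0, sizeof(buf));    /* portable; still compiled to __memzero() */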
  19. 09 November 2008 (1 commit)
    • [ARM] iop: iop3xx needs registers mapped uncached+unbuffered · ebb4c658
      Authored by Russell King
      Mikael Pettersson reported:
      
         The 2.6.28-rc kernels fail to detect PCI device 0000:00:01.0
         (the first ethernet port) on my Thecus n2100 XScale box.
      
         There is however still a strange "ghost" device that gets partially
         detected in 2.6.28-rc2 vanilla.
      
      The IOP321 manual says:
      
        The user designates the memory region containing the OCCDR as
        non-cacheable and non-bufferable from the Intel® XScale™ core.
        This guarantees that all load/stores to the OCCDR are only of
        DWORD quantities.
      
      Ensure that the OCCDR is so mapped.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ebb4c658
  20. 07 November 2008 (2 commits)
  21. 01 October 2008 (6 commits)
  22. 06 September 2008 (2 commits)
    • [ARM] 5241/1: provide ioremap_wc() · 1ad77a87
      Authored by Lennert Buytenhek
      This patch provides an ARM implementation of ioremap_wc().
      
      We use different page table attributes depending on which CPU we
      are running on:
      
      - Non-XScale ARMv5 and earlier systems: The ARMv5 ARM documents four
        possible mapping types (CB=00/01/10/11).  We can't use any of the
        cached memory types (CB=10/11), since that breaks coherency with
        peripheral devices.  Both CB=00 and CB=01 are suitable for _wc, and
        CB=01 (Uncached/Buffered) allows the hardware more freedom than
        CB=00, so we'll use that.
      
        (The ARMv5 ARM seems to suggest that CB=01 is allowed to delay stores
        but isn't allowed to merge them, but there is no other mapping type
        we can use that allows the hardware to delay and merge stores, so
        we'll go with CB=01.)
      
      - XScale v1/v2 (ARMv5): same as the ARMv5 case above, with the slight
        difference that on these platforms, CB=01 actually _does_ allow
        merging stores.  (If you want noncoalescing bufferable behavior
        on Xscale v1/v2, you need to use XCB=101.)
      
      - Xscale v3 (ARMv5) and ARMv6+: on these systems, we use TEXCB=00100
        mappings (Inner/Outer Uncacheable in xsc3 parlance, Uncached Normal
        in ARMv6 parlance).
      
        The ARMv6 ARM explicitly says that any accesses to Normal memory can
        be merged, which makes Normal memory more suitable for _wc mappings
        than Device or Strongly Ordered memory, as the latter two mapping
        types are guaranteed to maintain transaction number, size and order.
        We use the Uncached variety of Normal mappings for the same reason
        that we can't use C=1 mappings on ARMv5.
      
        The xsc3 Architecture Specification documents TEXCB=00100 as being
        Uncacheable and allowing coalescing of writes, which is also just
        what we need.
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      1ad77a87
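      A hedged driver-side usage sketch of the new call; the physical base and
      size here are hypothetical:

          #include <linux/io.h>
          #include <asm/sizes.h>

          static int __init wc_map_example(void)
          {
                  void __iomem *fb = ioremap_wc(0x80000000, SZ_8M);   /* hypothetical framebuffer */

                  if (!fb)
                          return -ENOMEM;
                  memset_io(fb, 0, SZ_8M);    /* stores may be delayed and merged on the way out */
                  iounmap(fb);
                  return 0;
          }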
    • [ARM] clean up a load of old declarations · 5ed5fdf5
      Authored by Russell King
      ... some of which are now in linux/*.h headers.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      5ed5fdf5
  23. 01 September 2008 (1 commit)
  24. 09 August 2008 (1 commit)
    • [ARM] prevent crashing when too much RAM installed · 60296c71
      Authored by Lennert Buytenhek
      This patch will truncate and/or ignore memory banks if their kernel
      direct mappings would (partially) overlap with the vmalloc area or
      the mappings between the vmalloc area and the address space top, to
      prevent crashing during early boot if there happens to be more RAM
      installed than we are expecting.
      
      Since the start of the vmalloc area is not at a fixed address (but
      the vmalloc end address is, via the per-platform VMALLOC_END define),
      a default area of 128M is reserved for vmalloc mappings, which can
      be shrunk or enlarged by passing an appropriate vmalloc= command line
      option as it is done on x86.
      
      On a board with a 3:1 user:kernel split, VMALLOC_END at 0xfe000000,
      two 512M RAM banks and vmalloc=128M (the default), this patch gives:
      
      	Truncating RAM at 20000000-3fffffff to -35ffffff (vmalloc region overlap).
      	Memory: 512MB 352MB = 864MB total
      
      On a board with a 3:1 user:kernel split, VMALLOC_END at 0xfe800000,
      two 256M RAM banks and vmalloc=768M, this patch gives:
      
      	Truncating RAM at 00000000-0fffffff to -0e7fffff (vmalloc region overlap).
      	Ignoring RAM at 10000000-1fffffff (vmalloc region overlap).
      Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
      Tested-by: Riku Voipio <riku.voipio@iki.fi>
      60296c71
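      A hedged sketch of the truncation rule described above; the helper and its
      arguments are illustrative, not the actual bank-walking code:

          /* Clamp one RAM bank against the bottom of the vmalloc area. */
          static void clamp_bank(unsigned long *start, unsigned long *size,
                                 unsigned long vmalloc_min)
          {
                  if (*start >= vmalloc_min)
                          *size = 0;                      /* entirely above the limit: ignore */
                  else if (*start + *size > vmalloc_min)
                          *size = vmalloc_min - *start;   /* straddles the limit: truncate */
          }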
  25. 29 April 2008 (1 commit)
  26. 08 February 2008 (1 commit)
    • Introduce flags for reserve_bootmem() · 72a7fe39
      Authored by Bernhard Walle
      This patchset adds a flags variable to reserve_bootmem() and uses the
      BOOTMEM_EXCLUSIVE flag in crashkernel reservation code to detect collisions
      between crashkernel area and already used memory.
      
      This patch:
      
      Change the reserve_bootmem() function to accept a new flag BOOTMEM_EXCLUSIVE.
      If that flag is set, the function returns with -EBUSY if the memory already
      has been reserved in the past.  This is to avoid conflicts.
      
      Because that code runs before SMP initialisation, there's no race condition
      inside reserve_bootmem_core().
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix powerpc build]
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      72a7fe39
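      A hedged sketch of the new calling convention, assuming the post-patch
      signature reserve_bootmem(addr, size, flags); the reserved range is
      hypothetical:

          #include <linux/bootmem.h>

          static int __init reserve_crash_area_example(unsigned long base, unsigned long size)
          {
                  int ret = reserve_bootmem(base, size, BOOTMEM_EXCLUSIVE);

                  if (ret < 0) {          /* -EBUSY: someone already reserved this range */
                          printk(KERN_WARNING "crashkernel: region %#lx-%#lx already in use\n",
                                 base, base + size - 1);
                          return ret;
                  }
                  return 0;
          }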
  27. 21 July 2007 (1 commit)
  28. 05 July 2007 (1 commit)
    • [ARM] Fix non-page aligned boot time mappings · 7b9c7b4d
      Authored by Russell King
      AT91SAM9260 stopped booting with the recent changes to MM
      initialisation - it was asking for a non-aligned virtual address,
      which caused the mapping loops never to terminate.  Fix this by
      rounding virtual addresses down, but remember to include the offset
      in the length, and round the length up to the following page.

      This means that asking for a mapping of 4K starting at 2K into a
      page maps two pages, as one would expect (see the sketch after this
      entry).
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      7b9c7b4d
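      A hedged sketch of the rounding described above, using the usual page
      macros; the real logic lives in the mapping setup code, and this helper is
      only an illustration:

          #include <asm/page.h>
          #include <asm/mach/map.h>

          static void align_request(const struct map_desc *md,
                                    unsigned long *virt, unsigned long *length)
          {
                  unsigned long offset = md->virtual & ~PAGE_MASK;  /* e.g. 2K into the page */

                  *virt   = md->virtual & PAGE_MASK;           /* round the VA down */
                  *length = PAGE_ALIGN(md->length + offset);   /* keep the offset, round up */
          }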
  29. 21 May 2007 (1 commit)