1. 25 Aug 2012 (1 commit)
    • ARM: Fix ioremap() of address zero · a849088a
      Committed by Russell King
      Murali Nalajala reports a regression that ioremapping address zero
      results in an oops dump:
      
      Unable to handle kernel paging request at virtual address fa200000
      pgd = d4f80000
      [fa200000] *pgd=00000000
      Internal error: Oops: 5 [#1] PREEMPT SMP ARM
      Modules linked in:
      CPU: 0    Tainted: G        W (3.4.0-g3b5f728-00009-g638207a #13)
      PC is at msm_pm_config_rst_vector_before_pc+0x8/0x30
      LR is at msm_pm_boot_config_before_pc+0x18/0x20
      pc : [<c0078f84>]    lr : [<c007903c>]    psr: a0000093
      sp : c0837ef0  ip : cfe00000  fp : 0000000d
      r10: da7efc17  r9 : 225c4278  r8 : 00000006
      r7 : 0003c000  r6 : c085c824  r5 : 00000001  r4 : fa101000
      r3 : fa200000  r2 : c095080c  r1 : 002250fc  r0 : 00000000
      Flags: NzCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM Segment kernel
      Control: 10c5387d  Table: 25180059  DAC: 00000015
      [<c0078f84>] (msm_pm_config_rst_vector_before_pc+0x8/0x30) from [<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20)
      [<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20) from [<c007a55c>] (msm_pm_power_collapse+0x410/0xb04)
      [<c007a55c>] (msm_pm_power_collapse+0x410/0xb04) from [<c007b17c>] (arch_idle+0x294/0x3e0)
      [<c007b17c>] (arch_idle+0x294/0x3e0) from [<c000eed8>] (default_idle+0x18/0x2c)
      [<c000eed8>] (default_idle+0x18/0x2c) from [<c000f254>] (cpu_idle+0x90/0xe4)
      [<c000f254>] (cpu_idle+0x90/0xe4) from [<c057231c>] (rest_init+0x88/0xa0)
      [<c057231c>] (rest_init+0x88/0xa0) from [<c07ff890>] (start_kernel+0x3a8/0x40c)
      Code: c0704256 e12fff1e e59f2020 e5923000 (e5930000)
      
This is caused by the 'reserved' entries which we insert (see
19b52abe, "ARM: 7438/1: fill possible PMD empty section gaps")
and which then get matched for physical address zero.
      
      Resolve this by marking these reserved entries with a different flag.
      
      Cc: <stable@vger.kernel.org>
      Tested-by: Murali Nalajala <mnalajal@codeaurora.org>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  2. 10 Jul 2012 (1 commit)
  3. 01 Jul 2012 (1 commit)
  4. 29 Jun 2012 (1 commit)
  5. 21 May 2012 (1 commit)
  6. 17 May 2012 (1 commit)
  7. 28 Apr 2012 (1 commit)
    • ARM: 7401/1: mm: Fix section mismatches · 14904927
      Committed by Stephen Boyd
      WARNING: vmlinux.o(.text+0x111b8): Section mismatch in reference
      from the function arm_memory_present() to the function
      .init.text:memory_present()
      The function arm_memory_present() references
      the function __init memory_present().
      This is often because arm_memory_present lacks a __init
      annotation or the annotation of memory_present is wrong.
      
      WARNING: arch/arm/mm/built-in.o(.text+0x1edc): Section mismatch
      in reference from the function alloc_init_pud() to the function
      .init.text:alloc_init_section()
      The function alloc_init_pud() references
      the function __init alloc_init_section().
      This is often because alloc_init_pud lacks a __init
      annotation or the annotation of alloc_init_section is wrong.
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  8. 29 Mar 2012 (2 commits)
  9. 24 Mar 2012 (1 commit)
  10. 23 Jan 2012 (1 commit)
  11. 08 Dec 2011 (2 commits)
  12. 27 Nov 2011 (2 commits)
  13. 19 Nov 2011 (1 commit)
  14. 06 Oct 2011 (1 commit)
  15. 23 Sep 2011 (1 commit)
  16. 23 Aug 2011 (1 commit)
  17. 06 Jul 2011 (1 commit)
  18. 26 May 2011 (1 commit)
    • ARM: 6914/1: sparsemem: fix highmem detection when using SPARSEMEM · 40f7bfe4
      Committed by Will Deacon
      sanity_check_meminfo walks over the registered memory banks and attempts
      to split banks across lowmem and highmem when they would otherwise
      overlap with the vmalloc space.
      
      When SPARSEMEM is used, there are two potential problems that occur
      when the virtual address of the start of a bank is equal to vmalloc_min.
      
       1.) The end of lowmem is calculated as __pa(vmalloc_min - 1) + 1.
           In the above scenario, this will give the end address of the
           previous bank, rather than the actual bank we are interested in.
           This value is later used as the memblock limit and artificially
           restricts the total amount of available memory.
      
 2.) The checks to determine whether a bank belongs to highmem only
     test whether __va(bank->start) is greater or less than
     vmalloc_min. When it is equal, the bank is incorrectly
     treated as lowmem, which hoses the vmalloc area.
      
      This patch fixes these two problems by checking whether the virtual
      start address of a bank is >= vmalloc_min and then calculating
      lowmem_end by finding the virtual end address of the highest lowmem
      bank.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  19. 25 May 2011 (1 commit)
  20. 24 Feb 2011 (1 commit)
    • ARM: 6639/1: allow highmem on SMP platforms without h/w TLB ops broadcast · aaa50048
      Committed by Nicolas Pitre
      In commit e616c591, highmem support was
      deactivated for SMP platforms without hardware TLB ops broadcast because
      usage of kmap_high_get() requires that IRQs be disabled when kmap_lock
      is locked which is incompatible with the IPI mechanism used by the
      software TLB ops broadcast invoked through flush_all_zero_pkmaps().
      
      The reason for kmap_high_get() is to ensure that the currently kmap'd
      page usage count does not decrease to zero while we're using its
      existing virtual mapping in an atomic context.  With a VIVT cache this
      is essential to do due to cache coherency issues, but with a VIPT cache
this is only an optimization, so as not to pay the price of establishing a
      second mapping if an existing one can be used.  However, on VIPT
      platforms without hardware TLB maintenance we can give up on that
      optimization in order to be able to use highmem.
      
From ARMv7 onwards the TLB ops are broadcast in hardware, so let's
      disable ARCH_NEEDS_KMAP_HIGH_GET only when CONFIG_SMP and
      CONFIG_CPU_TLB_V6 are defined.
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
      Tested-by: Saeed Bishara <saeed.bishara@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  21. 22 Feb 2011 (2 commits)
  22. 15 Feb 2011 (3 commits)
  23. 22 Dec 2010 (3 commits)
  24. 27 Nov 2010 (2 commits)
  25. 04 Nov 2010 (1 commit)
    • ARM: 6384/1: Remove the domain switching on ARMv6k/v7 CPUs · 247055aa
      Committed by Catalin Marinas
      This patch removes the domain switching functionality via the set_fs and
      __switch_to functions on cores that have a TLS register.
      
      Currently, the ioremap and vmalloc areas share the same level 1 page
      tables and therefore have the same domain (DOMAIN_KERNEL). When the
      kernel domain is modified from Client to Manager (via the __set_fs or in
      the __switch_to function), the XN (eXecute Never) bit is overridden and
      newer CPUs can speculatively prefetch the ioremap'ed memory.
      
      Linux performs the kernel domain switching to allow user-specific
      functions (copy_to/from_user, get/put_user etc.) to access kernel
      memory. In order for these functions to work with the kernel domain set
      to Client, the patch modifies the LDRT/STRT and related instructions to
      the LDR/STR ones.
      
The user pages' access rights are also modified for kernel read-only
access rather than read/write so that the copy-on-write mechanism still
works. CPU_USE_DOMAINS gets disabled only if the hardware has a TLS register
(CPU_32v6K is defined), since otherwise writing the TLS value to the high
vectors page isn't possible.
      
      The user addresses passed to the kernel are checked by the access_ok()
      function so that they do not point to the kernel space.
      Tested-by: Anton Vorontsov <cbouatmailru@gmail.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  26. 28 Oct 2010 (4 commits)
    • ARM: memblock: setup lowmem mappings using memblock · 8df65168
      Committed by Russell King
      Use memblock information to setup lowmem mappings rather than the
      membank array.
      
      This allows platforms to manipulate the memblock information during
      initialization to reserve (and remove) memory from the kernel's view
      of memory - and thus allowing platforms to setup their own private
      mappings for this memory without causing problems with multiple
      aliasing mappings:
      
      	size = min(size, SZ_2M);
      	base = memblock_alloc(size, min(align, SZ_2M));
      	memblock_free(base, size);
      	memblock_remove(base, size);
      
      This is needed because multiple mappings of regions with differing
      attributes (sharability, type, cache) are not permitted with ARMv6
      and above.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: ensure membank array is always sorted · 7dc50ec7
      Committed by Russell King
      This was missing from the noMMU code, so there was the possibility
      of things not working as expected if out-of-order memory
      information was passed.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: fix memblock breakage · 4e929d2b
      Committed by Russell King
      Will says:
      | Commit e63075a3 removed the explicit MEMBLOCK_REAL_LIMIT #define
      | and introduced the requirement that arch code calls
      | memblock_set_current_limit to ensure that the __va macro can
      | be used on physical addresses returned from memblock_alloc.
      
      Unfortunately, ARM was missed out of this change.  Fix this.
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 6445/1: fixup TCM memory types · f444fce3
      Committed by Linus Walleij
      After Santosh's fixup of the generic MT_MEMORY and
      MT_MEMORY_NONCACHED I add this fix to the TCM memory types.
      The main change is that the ITCM memory is L_PTE_WRITE and
      DOMAIN_KERNEL, which works just fine. The change to the DTCM
      is just cosmetic, to fit with the surrounding code.
      
      Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Rickard Andersson <rickard.andersson@stericsson.com>
      Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  27. 05 Oct 2010 (2 commits)