1. 18 Jul 2014, 1 commit
    • ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+ · 6ebbf2ce
      Committed by Russell King
      ARMv6 and greater introduced a new instruction ("bx") which can be used
      to return from function calls.  Recent CPUs perform better when the
      "bx lr" instruction is used rather than "mov pc, lr", and the ARM
      architecture manual (section A.4.1.1) strongly recommends this sequence.
      
      We provide a new macro, "ret", with variants for each condition code,
      which resolves to the appropriate instruction.
      
      Rather than doing this piecemeal and missing some instances, change all
      the "mov pc" instances to use the new macro, with the exception of
      the "movs" instruction and the kprobes code.  This allows us to detect
      the "mov pc, lr" case and fix it up, and also gives us the possibility
      of deploying this for other registers depending on the CPU selection.
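      
      A minimal sketch of the idea in C (the function name is hypothetical;
      the kernel's actual "ret" macro is an assembler macro, and this is not
      its implementation):
      
      	/* Illustrative only: pick the return sequence by architecture.
      	 * GCC defines __ARM_ARCH on ARM targets. */
      	__attribute__((naked)) static void example_return(void)
      	{
      	#if defined(__ARM_ARCH) && __ARM_ARCH >= 6
      		__asm__ volatile("bx lr");	/* engages the return-address predictor */
      	#else
      		__asm__ volatile("mov pc, lr");	/* traditional form for older cores */
      	#endif
      	}
      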
      Reported-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
      Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
      Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
      Tested-by: Shawn Guo <shawn.guo@freescale.com>
      Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
      Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
      Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
      Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
      Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
      Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  2. 25 Apr 2014, 1 commit
    • ARM: 8037/1: mm: support big-endian page tables · 86f40622
      Committed by Jianguo Wu
      With LPAE and big-endian enabled on a HiSilicon board, specifying
      mem=384M mem=512M@7680M yields a bad page state:
      
      Freeing unused kernel memory: 180K (c0466000 - c0493000)
      BUG: Bad page state in process init  pfn:fa442
      page:c7749840 count:0 mapcount:-1 mapping:  (null) index:0x0
      page flags: 0x40000400(reserved)
      Modules linked in:
      CPU: 0 PID: 1 Comm: init Not tainted 3.10.27+ #66
      [<c000f5f0>] (unwind_backtrace+0x0/0x11c) from [<c000cbc4>] (show_stack+0x10/0x14)
      [<c000cbc4>] (show_stack+0x10/0x14) from [<c009e448>] (bad_page+0xd4/0x104)
      [<c009e448>] (bad_page+0xd4/0x104) from [<c009e520>] (free_pages_prepare+0xa8/0x14c)
      [<c009e520>] (free_pages_prepare+0xa8/0x14c) from [<c009f8ec>] (free_hot_cold_page+0x18/0xf0)
      [<c009f8ec>] (free_hot_cold_page+0x18/0xf0) from [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8)
      [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8) from [<c00b6458>] (handle_mm_fault+0xf4/0x120)
      [<c00b6458>] (handle_mm_fault+0xf4/0x120) from [<c0013754>] (do_page_fault+0xfc/0x354)
      [<c0013754>] (do_page_fault+0xfc/0x354) from [<c0008400>] (do_DataAbort+0x2c/0x90)
      [<c0008400>] (do_DataAbort+0x2c/0x90) from [<c0008fb4>] (__dabt_usr+0x34/0x40)
      
      The bad pfn:fa442 is not in system memory (mem=384M mem=512M@7680M).
      After debugging, I found that in the page fault handler, the pfn read
      back from a pte is wrong immediately after the pte has been set, as follows:
      do_anonymous_page()
      {
      	...
      	set_pte_at(mm, address, page_table, entry);
      
      	//debug code
      	pfn = pte_pfn(entry);
      	pr_info("pfn:0x%lx, pte:0x%llx\n", pfn, pte_val(entry));
      
      	//read back the pte just set
      	new_pte = pte_offset_map(pmd, address);
      	new_pfn = pte_pfn(*new_pte);
      	pr_info("new pfn:0x%lx, new pte:0x%llx\n", new_pfn, pte_val(*new_pte));
      	...
      }
      
      pfn:   0x1fa4f5,     pte:0xc00001fa4f575f
      new_pfn:0xfa4f5, new_pte:0xc00000fa4f5f5f	//new pfn/pte is wrong.
      
      The bug happens in cpu_v7_set_pte_ext(ptep, pte):
      an LPAE PTE is a 64-bit quantity, passed to cpu_v7_set_pte_ext in the r2 and r3 registers.
      On an LE kernel, r2 contains the LSB of the PTE, and r3 the MSB.
      On a BE kernel, the assignment is reversed.
      
      Unfortunately, the current code always assumes the LE case,
      leading to corruption of the PTE when clearing/setting bits.
      
      This patch fixes the issue in much the same way as was already done for
      the cpu_v7_switch_mm case.
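      
      To make the failure mode concrete, here is a small C model of that
      register assignment (struct and function names are hypothetical;
      __ARMEB__ is the compiler's predefined macro for big-endian ARM):
      
      	#include <stdint.h>
      
      	/* How a 64-bit pte argument lands in r2/r3 under the ARM EABI. */
      	struct pte_regs { uint32_t r2, r3; };
      
      	static struct pte_regs pte_to_regs(uint64_t pte)
      	{
      		struct pte_regs regs;
      	#ifdef __ARMEB__
      		regs.r2 = (uint32_t)(pte >> 32);	/* BE: r2 holds the MSB half */
      		regs.r3 = (uint32_t)pte;
      	#else
      		regs.r2 = (uint32_t)pte;		/* LE: r2 holds the LSB half */
      		regs.r3 = (uint32_t)(pte >> 32);
      	#endif
      		return regs;
      	}
      
      Assembly that always treats r2 as the LSB half therefore sets and clears
      bits in the wrong word on a big-endian kernel, which is consistent with
      the corrupted values in the log above.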
      
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  3. 22 Jul 2013, 1 commit
  4. 15 Jul 2013, 1 commit
    • arm: delete __cpuinit/__CPUINIT usage from all ARM users · 8bd26e3a
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit  -- so if we remove the __cpuinit from
      the arch specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      related content into no-ops as early as possible, since that will get
      rid of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the ARM uses of the __cpuinit macros from C code,
      and all __CPUINIT from assembly code.  It also had two ".previous"
      section statements that were paired off against __CPUINIT
      (aka .section ".cpuinit.text") that also get removed here.
      
      [1] https://lkml.org/lkml/2013/5/20/589
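      
      For context, a simplified sketch of what the __cpuinit annotation
      amounted to (the real definition in linux/init.h carried additional
      attributes; the function below is hypothetical):
      
      	/* Place the function in a throwaway section that could be freed
      	 * after boot; simplified from the historical linux/init.h macro. */
      	#define __cpuinit __attribute__((__section__(".cpuinit.text")))
      
      	static int __cpuinit example_cpu_starting(unsigned int cpu)
      	{
      		return 0;	/* per-CPU bring-up work would go here */
      	}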
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  5. 30 May 2013, 3 commits
  6. 04 Apr 2013, 1 commit
  7. 04 Mar 2013, 1 commit
  8. 17 Feb 2013, 1 commit
  9. 09 Nov 2012, 2 commits
    • ARM: mm: introduce present, faulting entries for PAGE_NONE · 26ffd0d4
      Committed by Will Deacon
      PROT_NONE mappings apply the page protection attributes defined by _P000
      which translate to PAGE_NONE for ARM. These attributes specify an XN,
      RDONLY pte that is inaccessible to userspace. However, on kernels
      configured without support for domains, such a pte *is* accessible to
      the kernel and can be read via get_user, allowing tasks to read
      PROT_NONE pages via syscalls such as read/write over a pipe.
      
      This patch introduces a new software pte flag, L_PTE_NONE, that is set
      to identify faulting, present entries.
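      
      A minimal sketch of the scheme (bit positions and helper names are
      illustrative assumptions, not the kernel's exact layout):
      
      	#include <stdint.h>
      
      	typedef uint64_t pteval_t;
      
      	#define L_PTE_VALID	((pteval_t)1 << 0)	/* hardware accesses succeed */
      	#define L_PTE_NONE	((pteval_t)1 << 57)	/* software PROT_NONE marker */
      
      	/* Present to Linux: a real mapping or a faulting PROT_NONE entry. */
      	static inline int pte_present(pteval_t pte)
      	{
      		return (pte & (L_PTE_VALID | L_PTE_NONE)) != 0;
      	}
      
      	/* Accessible to hardware: only these pass a get_user()-style access. */
      	static inline int pte_valid(pteval_t pte)
      	{
      		return (pte & L_PTE_VALID) != 0;
      	}
      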
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • ARM: mm: introduce L_PTE_VALID for page table entries · dbf62d50
      Committed by Will Deacon
      For long-descriptor translation table formats, the ARMv7 architecture
      defines the last two bits of the second- and third-level descriptors to
      be:
      
      	x0b	- Invalid
      	01b	- Block (second-level), Reserved (third-level)
      	11b	- Table (second-level), Page (third-level)
      
      This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to
      create ptes directly. However, when determining whether a given pte
      value is present in the low-level page table accessors, we only need to
      check the least significant bit of the descriptor, allowing us to write
      faulting, present entries which are required for PROT_NONE mappings.
      
      This patch introduces L_PTE_VALID, which can be used to test whether a
      pte should fault, and updates the low-level page table accessors
      accordingly.
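      
      A small sketch of that check, assuming the encodings quoted above
      (macro and helper names are hypothetical):
      
      	#include <stdint.h>
      
      	#define DESC_BIT_VALID	((uint64_t)1 << 0)	/* LSB clear (x0b) means invalid */
      	#define DESC_PRESENT	((uint64_t)3 << 0)	/* 11b: table/page entry */
      
      	/* Only bit 0 decides whether an access faults, so an entry can keep
      	 * its other state while being forced to fault (PROT_NONE). */
      	static inline int desc_faults(uint64_t desc)
      	{
      		return (desc & DESC_BIT_VALID) == 0;
      	}
      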
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  10. 08 Dec 2011, 1 commit
    • ARM: LPAE: MMU setup for the 3-level page table format · 1b6ba46b
      Committed by Catalin Marinas
      This patch adds the MMU initialisation for the LPAE page table format.
      With LPAE, swapper_pg_dir is 5 pages rather than 4. A new
      proc-v7-3level.S file contains the TTB initialisation, context switch
      and PTE setting code for LPAE. The TTBRx split is based on PAGE_OFFSET,
      with TTBR1 used for the kernel mappings. The 36-bit mappings
      (supersections) and a few other memory types in mmu.c are conditionally
      compiled.
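      
      As a rough sketch, assuming the TTBR1 region must be a power-of-two
      block at the top of the 32-bit address space (helper name is
      hypothetical):
      
      	#include <stdint.h>
      
      	/* TTBCR.T1SZ makes TTBR1 cover the top 2^(32 - T1SZ) bytes. */
      	static unsigned int ttbcr_t1sz(uint32_t page_offset)
      	{
      		uint64_t t1_size = 0x100000000ULL - page_offset;
      		unsigned int bits = 0;
      
      		while ((1ULL << bits) < t1_size)
      			bits++;
      		return 32 - bits;
      	}
      
      With the common PAGE_OFFSET of 0xC0000000 this yields T1SZ == 2, i.e.
      TTBR1 maps the top 1 GiB for the kernel mappings.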
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>