1. 26 July 2013 (4 commits)
• ARM: 7791/1: a.out: remove partial a.out support · acfdd4b1
  Committed by Will Deacon
      a.out support on ARM requires that argc, argv and envp are passed in
      r0-r2 respectively, which requires hacking load_aout_binary to
      prevent argc being clobbered by the return code. Whilst mainline kernels
      do set the registers up in start_thread, the aout loader has never
      carried the hack in mainline.
      
      Initialising the registers in this way actually goes against the libc
      expectations for ELF binaries, where argc, argv and envp are passed on
      the stack, with r0 being used to hold a pointer to an exit function for
      cleaning up after the dynamic linker if required. If the pointer is
      NULL, then it is ignored. When execing an ELF binary, Linux currently
      zeroes r0, then sets it to argc and then finally clobbers it with the
      return value of the execve syscall, so we actually end up with:
      
      	r0 = 0
      	stack[0] = argc
      	r1 = stack[1] = argv
      	r2 = stack[2] = envp
      
      libc treats r1 and r2 as undefined. The clobbering of r0 by sys_execve
      works for user-spawned threads, but when executing an ELF binary from a
      kernel thread (via call_usermodehelper), the execve is performed on the
      ret_from_fork path, which restores r0 from the saved pt_regs, resulting
      in argc being presented to the C library. This has horrible consequences
      when the application exits, since we have an exit function registered
      using argc, resulting in a jump to hyperspace.
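
A hedged sketch of that sequence (the ARM_rN pt_regs field names are real; the surrounding code is illustrative, not the actual kernel source):

	regs->ARM_r0 = 0;		/* ELF ABI: cleanup-function pointer for the dynamic linker */
	regs->ARM_r1 = stack[1];	/* argv -- libc treats r1 as undefined */
	regs->ARM_r2 = stack[2];	/* envp -- likewise undefined */
	regs->ARM_r0 = argc;		/* the a.out hack then overwrites r0 with argc... */
	/*
	 * ...which sys_execve()'s return value (0 on success) normally undoes
	 * on the way back to userspace. On the ret_from_fork path, though, r0
	 * is restored from the saved pt_regs, so libc receives argc as an
	 * "exit function", registers it, and jumps to it when the application
	 * exits.
	 */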
      
      This patch solves the problem by removing the partial a.out support from
      arch/arm/ altogether.
      
      Cc: <stable@vger.kernel.org>
      Cc: Ashish Sangwan <ashishsangwan2@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: 7790/1: Fix deferred mm switch on VIVT processors · bdae73cd
  Committed by Catalin Marinas
      As of commit b9d4d42a (ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on
      pre-ARMv6 CPUs), the mm switching on VIVT processors is done in the
      finish_arch_post_lock_switch() function to avoid whole cache flushing
      with interrupts disabled. The need for deferred mm switch is stored as a
      thread flag (TIF_SWITCH_MM). However, with preemption enabled, we can
      have another thread switch before finish_arch_post_lock_switch(). If the
      new thread has the same mm as the previous 'next' thread, the scheduler
      will not call switch_mm() and the TIF_SWITCH_MM flag won't be set for
      the new thread.
      
This patch moves the switch-pending flag into the mm_context_t
structure, since it is specific to the mm rather than to the thread.
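
A hedged sketch of the resulting shape (the switch_pending field name is inferred from the description; see the patch itself for the precise code):

	/* sketch: the pending-switch flag lives in the mm, not the thread */
	void finish_arch_post_lock_switch(void)
	{
		struct mm_struct *mm = current->mm;

		if (mm && mm->context.switch_pending) {
			mm->context.switch_pending = 0;
			cpu_switch_mm(mm->pgd, mm);	/* cache flush runs with IRQs on */
		}
	}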
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Cc: <stable@vger.kernel.org> # 3.5+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: 7789/1: Do not run dummy_flush_tlb_a15_erratum() on non-Cortex-A15 · 1f49856b
  Committed by Fabio Estevam
Commit 93dc6887 (ARM: 7684/1: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB operations)) causes the following undefined instruction error on an mx53 (Cortex-A8):
      
      Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
      CPU: 0 PID: 275 Comm: modprobe Not tainted 3.11.0-rc2-next-20130722-00009-g9b0f371 #881
      task: df46cc00 ti: df48e000 task.ti: df48e000
      PC is at check_and_switch_context+0x17c/0x4d0
      LR is at check_and_switch_context+0xdc/0x4d0
      
This happens because check_and_switch_context() calls dummy_flush_tlb_a15_erratum() without first checking whether we are really running on a Cortex-A15.

To avoid this issue, only call dummy_flush_tlb_a15_erratum() inside
check_and_switch_context() when erratum_a15_798181() returns true, which
means that we really are running on a Cortex-A15.
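
The fix is essentially a one-line guard at the call site (sketch; the names are exactly those given above):

	/* only issue the erratum workaround's TLBI/DSB dance on affected CPUs */
	if (erratum_a15_798181())
		dummy_flush_tlb_a15_erratum();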
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: 7787/1: virt: ensure visibility of __boot_cpu_mode · 8fbac214
  Committed by Mark Rutland
      Secondary CPUs write to __boot_cpu_mode with caches disabled, and thus a
      cached value of __boot_cpu_mode may be incoherent with that in memory.
      This could lead to a failure to detect mismatched boot modes.
      
      This patch adds flushing to ensure that writes by secondaries to
      __boot_cpu_mode are made visible before we test against it.
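
A minimal sketch of the idea, assuming a sync_cache_r()-style read-side maintenance helper (the helper name is an assumption, not necessarily what the patch uses):

	/* flush the reader's cached copy: the secondary wrote it with caches off */
	static inline bool is_hyp_mode_mismatched(void)
	{
		sync_cache_r(&__boot_cpu_mode);	/* assumed helper */
		return !!(__boot_cpu_mode & BOOT_CPU_MODE_MISMATCH);
	}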
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2. 22 July 2013 (5 commits)
• ARM: 7788/1: elf: fix lpae hwcap feature reporting in proc/cpuinfo · ab8d46c0
  Committed by Tetsuyuki Kobayashi
      Commit a469abd0 ("ARM: elf: add new hwcap for identifying atomic
      ldrd/strd instructions") added a new hwcap to identify LPAE on CPUs
      which support it. Whilst the hwcap data is correct, the string reported
      in /proc/cpuinfo actually matches on HWCAP_VFPD32, which was missing
      an entry in the string table.
      
      This patch fixes this problem by adding a "vfpd32" string at the correct
      offset, preventing us from falsely advertising LPAE on CPUs which do not
      support it.
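
For context, a sketch of the tail of the hwcap string table after the fix (earlier entries elided; the order must track the HWCAP_* bit positions exactly):

	static const char *hwcap_str[] = {
		/* ... earlier entries elided ... */
		"idiva",
		"idivt",
		"vfpd32",	/* the previously missing HWCAP_VFPD32 entry */
		"lpae",		/* now printed only when HWCAP_LPAE is really set */
		NULL
	};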
      
      [will: added commit message]
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Tetsuyuki Kobayashi <koba@kmckk.co.jp>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: 7786/1: hyp: fix macro parameterisation · b60d5db6
  Committed by Mark Rutland
      Currently, compare_cpu_mode_with_primary uses a mixture of macro
      arguments and hardcoded registers, and does so incorrectly, as it
      stores (__boot_cpu_mode_offset | BOOT_CPU_MODE_MISMATCH) to
      (__boot_cpu_mode + &__boot_cpu_mode_offset), which could corrupt an
      arbitrary portion of memory.
      
      This patch fixes up compare_cpu_mode_with_primary to use the macro
      arguments, correctly updating __boot_cpu_mode.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: 7785/1: mm: restrict early_alloc to section-aligned memory · c65b7e98
  Committed by Russell King
      When map_lowmem() runs, and processes a memory bank whose start or end
      is not section-aligned, memory must be allocated to store the 2nd-level
      page tables. Those allocations are made by calling memblock_alloc().
      
      At this point, the only memory that is free *and* mapped is memory which
      has already been mapped by map_lowmem() itself. For this reason, we must
      calculate the first point at which map_lowmem() will need to allocate
      memory, and set the memblock allocation limit to a lower address, so that
      memblock_alloc() is guaranteed to return memory that is already mapped.
      
      This patch enhances sanity_check_meminfo() to calculate that memory
      address, and pass it to memblock_set_current_limit(), rather than just
      assuming the limit is arm_lowmem_limit.
      
      The algorithm applied is:
      
      * Default memblock_limit to arm_lowmem_limit in the absence of any other
        limit; arm_lowmem_limit is the highest memory that is mapped by
        map_lowmem().
      
      * While walking the list of memblocks, if the start of a block is not
        aligned, 2nd-level page tables will need to be allocated to map the
        first few pages of the block. Hence, the memblock_limit must be before
        the start of the block.
      
      * Similarly, if the end of any block is not aligned, 2nd-level page
        tables will need to be allocated to map the last few pages of the
        block. Hence, the memblock_limit must point at the end of the block,
        rounded down to section-alignment.
      
      * The memory blocks are assumed to be sorted in address order, so the
        first unaligned block start or end is used to set the limit.
      
      With this algorithm, the start or end of almost any bank can be non-
      section-aligned. The only exception is that the start of bank 0 must
      be section-aligned, since otherwise memory would need to be allocated
      when mapping the start of bank 0, which occurs before any free memory
      is mapped.
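
A condensed C sketch of that walk (illustrative; the patch implements this inside sanity_check_meminfo(), and details such as highmem handling are omitted):

	phys_addr_t memblock_limit = 0;
	struct memblock_region *reg;

	for_each_memblock(memory, reg) {
		phys_addr_t end = reg->base + reg->size;

		if (!IS_ALIGNED(reg->base, SECTION_SIZE)) {
			/* 2nd-level tables are needed to map this bank's first pages */
			memblock_limit = reg->base;
			break;
		}
		if (!IS_ALIGNED(end, SECTION_SIZE)) {
			/* ...or its last pages: cap below the unaligned tail */
			memblock_limit = round_down(end, SECTION_SIZE);
			break;
		}
	}
	if (!memblock_limit)
		memblock_limit = arm_lowmem_limit;	/* all banks section-aligned */
	memblock_set_current_limit(memblock_limit);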
      
      [swarren, wrote commit description, rewrote calculation of memblock_limit]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: 7784/1: mm: ensure SMP alternates assemble to exactly 4 bytes with Thumb-2 · bf3f0f33
  Committed by Will Deacon
      Commit ae8a8b95 ("ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE
      and use ALT_SMP instead") added early function returns for page table
      cache flushing operations on ARMv7 SMP CPUs.
      
Unfortunately, when targeting Thumb-2, these `mov pc, lr' sequences
assemble to 2 bytes, which can lead to corruption of the instruction
stream after code patching.
      
      This patch fixes the alternates to use wide (32-bit) instructions for
      Thumb-2, therefore ensuring that the patching code works correctly.
      
      Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: document DEBUG_UNCOMPRESS Kconfig option · b6992fa9
  Committed by Russell King
This non-user-visible option lacked any kind of documentation.  This
is quite common for non-user-visible options; certain people can't
understand the point of documenting such options with help text.

However, here we have a case in point: developers don't understand the
option either, as they were thinking that when the option is not set,
the decompressor should produce no output whatsoever.  This is
incorrect, as the purpose of this option is to control whether a
multiplatform kernel uses the kernel debugging macros to produce
output or not.

So let's document this via help rather than commentary to prevent others
falling into this misunderstanding.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
3. 19 July 2013 (18 commits)
4. 18 July 2013 (6 commits)
5. 17 July 2013 (3 commits)
6. 16 July 2013 (3 commits)
7. 15 July 2013 (1 commit)
• x86: delete __cpuinit usage from all x86 files · 148f9bb8
  Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications.  The fix in
commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is
a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly w/o any corresponding additional change there.
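
As a before/after illustration of the mechanical change (the callback is invented for the example):

	/* before: annotated, so the function's text sat in the discardable
	 * .cpuinit section and could be freed after boot:
	 *
	 *   static int __cpuinit example_cpu_callback(struct notifier_block *nb,
	 *                                             unsigned long action, void *hcpu)
	 */

	/* after: an ordinary function -- no special section, no mismatch hazard */
	static int example_cpu_callback(struct notifier_block *nb,
					unsigned long action, void *hcpu)
	{
		return NOTIFY_OK;
	}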
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>