1. 16 September 2012, 2 commits
    • ARM: Fix build warning in arch/arm/mm/alignment.c · a761cebf
      Committed by Russell King
      Fix this harmless build warning:
      
      arch/arm/mm/alignment.c: In function 'do_alignment':
      arch/arm/mm/alignment.c:749:21: warning: 'offset.un' may be used uninitialized in this function
      
      This is caused by the compiler not being able to properly analyse the
      code to prove that offset.un is assigned in every case.  The case it
      struggles with is where the handler is assigned by the Thumb parser,
      do_alignment_t32_to_handler().  As that function starts by zeroing the
      variable through a pointer, move the zeroing into the calling function.
      This fixes the warning.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      a761cebf
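      A minimal sketch of the pattern behind the fix (hypothetical, simplified signatures; not the actual kernel diff): zeroing the union in the caller makes the initialisation visible on every path the compiler analyses.

      union offset_union { unsigned long un; signed long sn; };

      /* Before: the Thumb parser zeroed the union through a pointer, which the
       * compiler could not connect back to the caller's local variable. */
      static int parse_t32(unsigned long instr, union offset_union *poffset)
      {
      	poffset->un = 0;	/* invisible to the caller's flow analysis */
      	/* ... decode instr and fill *poffset ... */
      	return 0;
      }

      /* After: zero the union in the caller, before any parser runs. */
      static unsigned long do_alignment_sketch(unsigned long instr)
      {
      	union offset_union offset;

      	offset.un = 0;		/* every path now sees it initialised */
      	parse_t32(instr, &offset);
      	return offset.un;
      }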
    • ARM: Add irqtime accounting support · a42c3629
      Committed by Russell King
      Add support for irq time accounting.  This commit prepares ARM by adding
      the call to enable_sched_clock_irqtime() in sched_clock().  We introduce
      a new kernel parameter - irqtime - which takes an integer.  -1 for auto,
      0 for disabled, and 1 for enabled.  Auto mode selects IRQ accounting if
      we have a sched_clock() tick rate greater than 1MHz.
      
      Frederic Weisbecker is working on a patch set which moves the
      IRQ_TIME_ACCOUNTING into arch/, so that part is not incorporated into
      this patch; this facility becomes available on ARM only when both this
      patch and Frederic's patches are merged.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      a42c3629
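      A rough sketch of the auto-selection logic described above (illustrative only; the kernel-parameter plumbing is omitted):

      extern void enable_sched_clock_irqtime(void);	/* from <linux/sched.h> */

      static int irqtime = -1;	/* -1 auto, 0 disabled, 1 enabled (irqtime= parameter) */

      static void sched_clock_select_irqtime(unsigned long rate)
      {
      	/* auto mode: only worthwhile when sched_clock() ticks faster than 1MHz */
      	if (irqtime > 0 || (irqtime == -1 && rate > 1000000))
      		enable_sched_clock_irqtime();
      }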
  2. 02 September 2012, 1 commit
  3. 25 August 2012, 7 commits
    • ARM: 7500/1: io: avoid writeback addressing modes for __raw_ accessors · 195bbcac
      Committed by Will Deacon
      Data aborts taken to hyp mode do not provide a valid instruction
      syndrome field in the HSR if the faulting instruction is a memory
      access using a writeback addressing mode.
      
      For hypervisors emulating MMIO accesses to virtual peripherals, taking
      such an exception requires disassembling the faulting instruction in
      order to determine the behaviour of the access. Since this requires
      manually walking the two stages of translation, the world must be
      stopped to prevent races against page aging in the guest, where the
      first-stage translation is invalidated after the hypervisor has
      translated to an IPA and the physical page is reused for something else.
      
      This patch avoids taking this heavy performance penalty when running
      Linux as a guest by ensuring that our I/O accessors do not make use of
      writeback addressing modes.
      
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      195bbcac
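      A simplified illustration of the approach (a sketch, not the exact kernel code): writing the accessor as inline assembly with a memory constraint that only allows a plain register address prevents the compiler from choosing a pre/post-indexed (writeback) form.

      /* kernel context assumed: u32, __iomem and __force come from
       * <linux/types.h> and <linux/compiler.h> */
      static inline void example_raw_writel(u32 val, volatile void __iomem *addr)
      {
      	/* "Q" = memory operand whose address is a single register,
      	 * so no writeback addressing mode can be generated */
      	asm volatile("str %1, %0"
      		     : "=Q" (*(volatile u32 __force *)addr)
      		     : "r" (val));
      }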
    • ARM: 7495/1: mutex: use generic atomic_dec-based implementation for ARMv6+ · 08928e7a
      Committed by Will Deacon
      Commit a76d7bd9 ("ARM: 7467/1: mutex: use generic xchg-based
      implementation for ARMv6+") removed the barrier-less, ARM-specific
      mutex implementation in favour of the generic xchg-based code.
      
      Since then, a bug was uncovered in the xchg code when running on SMP
      platforms, due to interactions between the locking paths and the
      MUTEX_SPIN_ON_OWNER code. This was fixed in 0bce9c46 ("mutex: place
      lock in contended state after fastpath_lock failure"); however, the
      atomic_dec-based mutex algorithm is now marginally more efficient for
      ARM (~0.5% improvement in hackbench scores on a dual A15).
      
      This patch moves ARMv6+ platforms to the atomic_dec-based mutex code.
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      08928e7a
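      For reference, the generic atomic_dec-based fastpath that ARMv6+ now picks up looks roughly like this (a sketch along the lines of asm-generic/mutex-dec.h):

      /* Lock: decrement the count; a negative result means contention,
       * so take the slow path. */
      static inline void
      __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
      {
      	if (unlikely(atomic_dec_return(count) < 0))
      		fail_fn(count);
      }

      /* Unlock: increment back; zero or less means there are waiters. */
      static inline void
      __mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
      {
      	if (unlikely(atomic_inc_return(count) <= 0))
      		fail_fn(count);
      }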
    • ARM: 7494/1: use generic termios.h · e780c452
      Committed by Rob Herring
      As pointed out by Arnd Bergmann, this fixes a couple of issues but will
      increase code size:
      
      The original macro user_termio_to_kernel_termios was not endian safe. It
      used an unsigned short ptr to access the low bits in a 32-bit word.
      
      Both user_termio_to_kernel_termios and kernel_termios_to_user_termio are
      missing error checking on put_user/get_user and copy_to/from_user.
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      e780c452
    • ARM: 7493/1: use generic unaligned.h · d25c881a
      Committed by Rob Herring
      This moves ARM over to the asm-generic/unaligned.h header. This has the
      benefit of better code generated especially for ARMv7 on gcc 4.7+
      compilers.
      
      As Arnd Bergmann points out: the asm-generic version uses the "struct"
      version for native-endian unaligned access and the "byteshift" version
      for the opposite endianness. The current ARM version, however, uses the
      "byteshift" implementation for both.
      
      Thanks to Nicolas Pitre for the excellent analysis:
      
      Test case:
      
      int foo (int *x) { return get_unaligned(x); }
      long long bar (long long *x) { return get_unaligned(x); }
      
      With the current ARM version:
      
      foo:
      	ldrb	r3, [r0, #2]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 2B], MEM[(const u8 *)x_1(D) + 2B]
      	ldrb	r1, [r0, #1]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 1B], MEM[(const u8 *)x_1(D) + 1B]
      	ldrb	r2, [r0, #0]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D)], MEM[(const u8 *)x_1(D)]
      	mov	r3, r3, asl #16	@ tmp154, MEM[(const u8 *)x_1(D) + 2B],
      	ldrb	r0, [r0, #3]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 3B], MEM[(const u8 *)x_1(D) + 3B]
      	orr	r3, r3, r1, asl #8	@, tmp155, tmp154, MEM[(const u8 *)x_1(D) + 1B],
      	orr	r3, r3, r2	@ tmp157, tmp155, MEM[(const u8 *)x_1(D)]
      	orr	r0, r3, r0, asl #24	@,, tmp157, MEM[(const u8 *)x_1(D) + 3B],
      	bx	lr	@
      
      bar:
      	stmfd	sp!, {r4, r5, r6, r7}	@,
      	mov	r2, #0	@ tmp184,
      	ldrb	r5, [r0, #6]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 6B], MEM[(const u8 *)x_1(D) + 6B]
      	ldrb	r4, [r0, #5]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 5B], MEM[(const u8 *)x_1(D) + 5B]
      	ldrb	ip, [r0, #2]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 2B], MEM[(const u8 *)x_1(D) + 2B]
      	ldrb	r1, [r0, #4]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 4B], MEM[(const u8 *)x_1(D) + 4B]
      	mov	r5, r5, asl #16	@ tmp175, MEM[(const u8 *)x_1(D) + 6B],
      	ldrb	r7, [r0, #1]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 1B], MEM[(const u8 *)x_1(D) + 1B]
      	orr	r5, r5, r4, asl #8	@, tmp176, tmp175, MEM[(const u8 *)x_1(D) + 5B],
      	ldrb	r6, [r0, #7]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 7B], MEM[(const u8 *)x_1(D) + 7B]
      	orr	r5, r5, r1	@ tmp178, tmp176, MEM[(const u8 *)x_1(D) + 4B]
      	ldrb	r4, [r0, #0]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D)], MEM[(const u8 *)x_1(D)]
      	mov	ip, ip, asl #16	@ tmp188, MEM[(const u8 *)x_1(D) + 2B],
      	ldrb	r1, [r0, #3]	@ zero_extendqisi2	@ MEM[(const u8 *)x_1(D) + 3B], MEM[(const u8 *)x_1(D) + 3B]
      	orr	ip, ip, r7, asl #8	@, tmp189, tmp188, MEM[(const u8 *)x_1(D) + 1B],
      	orr	r3, r5, r6, asl #24	@,, tmp178, MEM[(const u8 *)x_1(D) + 7B],
      	orr	ip, ip, r4	@ tmp191, tmp189, MEM[(const u8 *)x_1(D)]
      	orr	ip, ip, r1, asl #24	@, tmp194, tmp191, MEM[(const u8 *)x_1(D) + 3B],
      	mov	r1, r3	@,
      	orr	r0, r2, ip	@ tmp171, tmp184, tmp194
      	ldmfd	sp!, {r4, r5, r6, r7}
      	bx	lr
      
      In both cases the code is slightly suboptimal.  One may wonder, for
      example, why r2 is wasted on the constant 0 in the second case.  And all
      the movs could be folded into subsequent orrs, etc.
      
      Now with the asm-generic version:
      
      foo:
      	ldr	r0, [r0, #0]	@ unaligned	@,* x
      	bx	lr	@
      
      bar:
      	mov	r3, r0	@ x, x
      	ldr	r0, [r0, #0]	@ unaligned	@,* x
      	ldr	r1, [r3, #4]	@ unaligned	@,
      	bx	lr	@
      
      This is way better of course, but only because this was compiled for
      ARMv7. In this case the compiler knows that the hardware can do
      unaligned word access.  This isn't that obvious for foo(), but if we
      remove the get_unaligned() from bar as follows:
      
      long long bar (long long *x) {return *x; }
      
      then the resulting code is:
      
      bar:
      	ldmia	r0, {r0, r1}	@ x,,
      	bx	lr	@
      
      So this proves that the presumed aligned vs unaligned cases do have an
      influence on the instructions the compiler may use, and that the above
      unaligned code results are not just an accident.
      
      Still... this isn't fully conclusive without at least looking at the
      resulting assembly from a pre-ARMv6 compilation.  Let's see with an
      ARMv5 target:
      
      foo:
      	ldrb	r3, [r0, #0]	@ zero_extendqisi2	@ tmp139,* x
      	ldrb	r1, [r0, #1]	@ zero_extendqisi2	@ tmp140,
      	ldrb	r2, [r0, #2]	@ zero_extendqisi2	@ tmp143,
      	ldrb	r0, [r0, #3]	@ zero_extendqisi2	@ tmp146,
      	orr	r3, r3, r1, asl #8	@, tmp142, tmp139, tmp140,
      	orr	r3, r3, r2, asl #16	@, tmp145, tmp142, tmp143,
      	orr	r0, r3, r0, asl #24	@,, tmp145, tmp146,
      	bx	lr	@
      
      bar:
      	stmfd	sp!, {r4, r5, r6, r7}	@,
      	ldrb	r2, [r0, #0]	@ zero_extendqisi2	@ tmp139,* x
      	ldrb	r7, [r0, #1]	@ zero_extendqisi2	@ tmp140,
      	ldrb	r3, [r0, #4]	@ zero_extendqisi2	@ tmp149,
      	ldrb	r6, [r0, #5]	@ zero_extendqisi2	@ tmp150,
      	ldrb	r5, [r0, #2]	@ zero_extendqisi2	@ tmp143,
      	ldrb	r4, [r0, #6]	@ zero_extendqisi2	@ tmp153,
      	ldrb	r1, [r0, #7]	@ zero_extendqisi2	@ tmp156,
      	ldrb	ip, [r0, #3]	@ zero_extendqisi2	@ tmp146,
      	orr	r2, r2, r7, asl #8	@, tmp142, tmp139, tmp140,
      	orr	r3, r3, r6, asl #8	@, tmp152, tmp149, tmp150,
      	orr	r2, r2, r5, asl #16	@, tmp145, tmp142, tmp143,
      	orr	r3, r3, r4, asl #16	@, tmp155, tmp152, tmp153,
      	orr	r0, r2, ip, asl #24	@,, tmp145, tmp146,
      	orr	r1, r3, r1, asl #24	@,, tmp155, tmp156,
      	ldmfd	sp!, {r4, r5, r6, r7}
      	bx	lr
      
      Compared to the initial results, this is really nicely optimized and I
      couldn't do much better if I were to hand code it myself.
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      d25c881a
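      For context, the native-endian "struct" flavour referred to above is essentially a packed-struct dereference, roughly (simplified from the generic unaligned headers):

      struct __una_u32 { u32 x; } __attribute__((packed));

      static inline u32 get_unaligned_cpu32_sketch(const void *p)
      {
      	const struct __una_u32 *ptr = p;
      	/* ARMv6+: the compiler emits a plain ldr; older cores get byte loads */
      	return ptr->x;
      }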
    • ARM: 7492/1: add strstr declaration for decompressors · ef1c2096
      Committed by Rob Herring
      With the generic unaligned.h, more kernel headers get pulled in, including
      dynamic_debug.h, which needs strstr. As it is not actually called, only a
      declaration is needed here.
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ef1c2096
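      The change itself is tiny; something along these lines is all the decompressor needs (sketch):

      /* dynamic_debug.h references strstr(); the decompressor never calls it,
       * so a bare declaration is enough to keep the compiler happy. */
      extern char *strstr(const char *s1, const char *s2);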
    • ARM: 7491/1: use generic version of identical asm headers · 4a8052d8
      Committed by Rob Herring
      Inspired by the AArgh64 claim that it should be separate from ARM, one
      reason being the ability to use more asm-generic headers: a diff of
      arch/arm/include/asm against include/asm-generic shows numerous asm
      headers which are functionally identical to their asm-generic
      counterparts. Delete the ARM versions and use the generic ones.
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4a8052d8
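      The mechanism is the usual asm-generic Kbuild redirection: delete the duplicated header and list it as a generic-y entry, for example (illustrative entries, not the exact list from the patch):

      # arch/arm/include/asm/Kbuild (excerpt, illustrative)
      generic-y += auxvec.h
      generic-y += bitsperlong.h
      generic-y += errno.h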
  4. 13 August 2012, 1 commit
  5. 11 August 2012, 1 commit
  6. 03 August 2012, 3 commits
  7. 31 July 2012, 15 commits
    • ARM: 7481/1: OMAP2+: omap2plus_defconfig: enable OMAP DMA engine · 89269ef1
      Committed by Javier Martinez Canillas
      commit 13f30fc893e4610f67dd7a8b0b67aec02eac1775
      Author: Russell King <rmk+kernel@arm.linux.org.uk>
      Date:   Sat Apr 21 22:41:10 2012 +0100
      
          mmc: omap: remove private DMA API implementation
      
      removed the private DMA API implementation from the OMAP mmc host to exclusively use the DMA engine API.
      
      Unfortunately the OMAP MMC and High Speed MMC host drivers don't support poll mode and only work with DMA.
      
      Since omap2plus_defconfig doesn't enable this feature by default, the
      following error happens on an IGEPv2 Rev.C (and probably on most OMAP boards with MMC support):
      
      [    2.199981] omap_hsmmc omap_hsmmc.1: unable to obtain RX DMA engine channel 48
      [    2.215087] omap_hsmmc omap_hsmmc.0: unable to obtain RX DMA engine channel 62
      Signed-off-by: Javier Martinez Canillas <javier@dowhile0.org>
      Signed-off-by: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      89269ef1
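      The defconfig change amounts to switching on the DMA engine options, most likely along these lines (sketch of the relevant fragment):

      CONFIG_DMADEVICES=y
      CONFIG_DMA_OMAP=y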
    • ARM: omap: remove mmc platform data dma_mask and initialization · 8a23fa1b
      Committed by Russell King
      DMAengine uses the DMA engine device structure when mapping/unmapping
      memory for DMA, so the MMC devices do not need their DMA masks
      initialized (this reflects hardware: the MMC device is not the device
      doing DMA.)
      Tested-by: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      8a23fa1b
    • ARM: 7479/1: mm: avoid NULL dereference when flushing gate_vma with VIVT caches · b74253f7
      Committed by Will Deacon
      The vivt_flush_cache_{range,page} functions check that the mm_struct
      of the VMA being flushed has been active on the current CPU before
      performing the cache maintenance.
      
      The gate_vma has a NULL mm_struct pointer and, as such, will cause a
      kernel fault if we try to flush it with the above operations. This
      happens during ELF core dumps, which include the gate_vma as it may be
      useful for debugging purposes.
      
      This patch adds checks to the VIVT cache flushing functions so that VMAs
      with a NULL mm_struct are flushed unconditionally (the vectors page may
      be dirty if we use it to store the current TLS pointer).
      
      Cc: <stable@vger.kernel.org> # 3.4+
      Reported-by: Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
      Tested-by: Uros Bizjak <ubizjak@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      b74253f7
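      A sketch of the kind of guard added (simplified; helper names as used by the VIVT flushing code mentioned above):

      static inline void
      vivt_flush_cache_range(struct vm_area_struct *vma,
      		       unsigned long start, unsigned long end)
      {
      	struct mm_struct *mm = vma->vm_mm;

      	/* a NULL mm (e.g. the gate_vma) is flushed unconditionally */
      	if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
      		__cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
      					vma->vm_flags);
      }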
    • ARM: Fix undefined instruction exception handling · 15ac49b6
      Committed by Russell King
      While trying to get a v3.5 kernel booted on the cubox, I noticed that
      VFP does not work correctly with VFP bounce handling.  This is because
      of the confusion over 16-bit vs 32-bit instructions, and where PC is
      supposed to point to.
      
      The rule is that FP handlers are entered with regs->ARM_pc pointing at
      the _next_ instruction to be executed.  However, if the exception is
      not handled, regs->ARM_pc points at the faulting instruction.
      
      This is easy for ARM mode, because we know that the next instruction and
      previous instructions are separated by four bytes.  This is not true of
      Thumb2 though.
      
      Since all FP instructions are 32-bit in Thumb2, it makes things easy.
      We just need to select the appropriate adjustment.  Do this by moving
      the adjustment out of do_undefinstr() into the assembly code, as only
      the assembly code knows whether it's dealing with a 32-bit or 16-bit
      instruction.
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      15ac49b6
    • ARM: 7480/1: only call smp_send_stop() on SMP · c5dff4ff
      Committed by Javier Martinez Canillas
      On reboot or poweroff (machine_shutdown()) a call to smp_send_stop() is
      made (to stop the other CPUs) when CONFIG_SMP=y.
      
      arch/arm/kernel/process.c:
      
      void machine_shutdown(void)
      {
       #ifdef CONFIG_SMP
             smp_send_stop();
       #endif
      }
      
      smp_send_stop() calls the function pointer smp_cross_call(), which is set
      in the smp_init_cpus() function for OMAP processors.
      
      arch/arm/mach-omap2/omap-smp.c:
      
      void __init smp_init_cpus(void)
      {
      ...
      	set_smp_cross_call(gic_raise_softirq);
      ...
      }
      
      But the ARM setup_arch() function only calls smp_init_cpus()
      if CONFIG_SMP=y && is_smp().
      
      arm/kernel/setup.c:
      
      void __init setup_arch(char **cmdline_p)
      {
      ...
       #ifdef CONFIG_SMP
      	if (is_smp())
      		smp_init_cpus();
       #endif
      ...
      }
      
      Newer OMAP CPUs are SMP machines, so omap2plus_defconfig sets
      CONFIG_SMP=y. Unfortunately, on an OMAP UP machine is_smp()
      returns false, so smp_init_cpus() is never called and the
      smp_cross_call() function remains NULL.
      
      If the machine is rebooted or powered off, smp_send_stop() will
      be called (since CONFIG_SMP=y) leading to the following error:
      
      [   42.815551] Restarting system.
      [   42.819030] Unable to handle kernel NULL pointer dereference at virtual address 00000000
      [   42.827667] pgd = d7a74000
      [   42.830566] [00000000] *pgd=96ce7831, *pte=00000000, *ppte=00000000
      [   42.837249] Internal error: Oops: 80000007 [#1] SMP ARM
      [   42.842773] Modules linked in:
      [   42.846008] CPU: 0    Not tainted  (3.5.0-rc3-next-20120622-00002-g62e87ba-dirty #44)
      [   42.854278] PC is at 0x0
      [   42.856994] LR is at smp_send_stop+0x4c/0xe4
      [   42.861511] pc : [<00000000>]    lr : [<c00183a4>]    psr: 60000013
      [   42.861511] sp : d6c85e70  ip : 00000000  fp : 00000000
      [   42.873626] r10: 00000000  r9 : d6c84000  r8 : 00000002
      [   42.879150] r7 : c07235a0  r6 : c06dd2d0  r5 : 000f4241  r4 : d6c85e74
      [   42.886047] r3 : 00000000  r2 : 00000000  r1 : 00000006  r0 : d6c85e74
      [   42.892944] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
      [   42.900482] Control: 10c5387d  Table: 97a74019  DAC: 00000015
      [   42.906555] Process reboot (pid: 1166, stack limit = 0xd6c842f8)
      [   42.912902] Stack: (0xd6c85e70 to 0xd6c86000)
      [   42.917510] 5e60:                                     c07235a0 00000000 00000000 d6c84000
      [   42.926177] 5e80: 01234567 c00143d0 4321fedc c00511bc d6c85ebc 00000168 00000460 00000000
      [   42.934814] 5ea0: c1017950 a0000013 c1017900 d8014390 d7ec3858 c0498e48 c1017950 00000000
      [   42.943481] 5ec0: d6ddde10 d6c85f78 00000003 00000000 d6ddde10 d6c84000 00000000 00000000
      [   42.952117] 5ee0: 00000002 00000000 00000000 c0088c88 00000002 00000000 00000000 c00f4b90
      [   42.960784] 5f00: 00000000 d6c85ebc d8014390 d7e311c8 60000013 00000103 00000002 d6c84000
      [   42.969421] 5f20: c00f3274 d6e00a00 00000001 60000013 d6c84000 00000000 00000000 c00895d4
      [   42.978057] 5f40: 00000002 d8007c80 d781f000 c00f6150 d8010cc0 c00f3274 d781f000 d6c84000
      [   42.986694] 5f60: c0013020 d6e00a00 00000001 20000010 0001257c ef000000 00000000 c00895d4
      [   42.995361] 5f80: 00000002 00000001 00000003 00000000 00000001 00000003 00000000 00000058
      [   43.003997] 5fa0: c00130c8 c0012f00 00000001 00000003 fee1dead 28121969 01234567 00000002
      [   43.012634] 5fc0: 00000001 00000003 00000000 00000058 00012584 0001257c 00000001 00000000
      [   43.021270] 5fe0: 000124bc bec5cc6c 00008f9c 4a2f7c40 20000010 fee1dead 00000000 00000000
      [   43.029968] [<c00183a4>] (smp_send_stop+0x4c/0xe4) from [<c00143d0>] (machine_restart+0xc/0x4c)
      [   43.039154] [<c00143d0>] (machine_restart+0xc/0x4c) from [<c00511bc>] (sys_reboot+0x144/0x1f0)
      [   43.048278] [<c00511bc>] (sys_reboot+0x144/0x1f0) from [<c0012f00>] (ret_fast_syscall+0x0/0x3c)
      [   43.057464] Code: bad PC value
      [   43.060760] ---[ end trace c3988d1dd0b8f0fb ]---
      
      Add a check so smp_cross_call() is only called when there is more than one CPU on-line.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Javier Martinez Canillas <javier at dowhile0.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      c5dff4ff
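      A sketch of the guard (simplified; the IPI plumbing and the wait-for-stop loop are omitted):

      void smp_send_stop(void)
      {
      	/* only raise the IPI when there is actually another CPU to stop;
      	 * on a UP machine smp_cross_call may never have been set */
      	if (num_online_cpus() > 1) {
      		struct cpumask mask;

      		cpumask_copy(&mask, cpu_online_mask);
      		cpumask_clear_cpu(smp_processor_id(), &mask);
      		smp_cross_call(&mask, IPI_CPU_STOP);
      	}
      	/* ... wait for the other CPUs to stop ... */
      }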
    • ARM: 7478/1: errata: extend workaround for erratum #720789 · 5a783cbc
      Committed by Will Deacon
      Commit cdf357f1 ("ARM: 6299/1: errata: TLBIASIDIS and TLBIMVAIS
      operations can broadcast a faulty ASID") replaced by-ASID TLB flushing
      operations with all-ASID variants to workaround A9 erratum #720789.
      
      This patch extends the workaround to include the tlb_range operations,
      which were overlooked by the original patch.
      
      Cc: <stable@vger.kernel.org>
      Tested-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      5a783cbc
    • ARM: 7477/1: vfp: Always save VFP state in vfp_pm_suspend on UP · 24b35521
      Committed by Colin Cross
      vfp_pm_suspend should save the VFP state in suspend after
      any lazy context switch.  If it only saves when the VFP is enabled,
      the state can get lost when, on a UP system:
        Thread 1 uses the VFP
        Context switch occurs to thread 2, VFP is disabled but the
           VFP context is not saved
        Thread 2 initiates suspend
        vfp_pm_suspend is called with the VFP disabled, and the unsaved
           VFP context of Thread 1 in the registers
      
      Modify vfp_pm_suspend to save the VFP context whenever
      vfp_current_hw_state is not NULL.
      
      Includes a fix from Ido Yariv <ido@wizery.com>, who pointed out that on
      SMP systems the state pointer can be pointing to a freed task struct if
      a task exited on another CPU; this is fixed by wrapping the new if
      clause in #ifndef CONFIG_SMP.
      
      Cc: Barry Song <bs14@csr.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ido Yariv <ido@wizery.com>
      Cc: Daniel Drake <dsd@laptop.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Colin Cross <ccross@android.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      24b35521
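      A condensed sketch of the resulting vfp_pm_suspend() logic (details and error handling omitted):

      static int vfp_pm_suspend_sketch(void)
      {
      	struct thread_info *ti = current_thread_info();
      	u32 fpexc = fmrx(FPEXC);

      	if (fpexc & FPEXC_EN) {
      		/* VFP enabled: save and then disable it, as before */
      		vfp_save_state(&ti->vfpstate, fpexc);
      		fmxr(FPEXC, fpexc & ~FPEXC_EN);
      	} else if (vfp_current_hw_state[ti->cpu]) {
      #ifndef CONFIG_SMP
      		/* lazily-switched state still lives in the hardware: save it too */
      		fmxr(FPEXC, fpexc | FPEXC_EN);
      		vfp_save_state(vfp_current_hw_state[ti->cpu], fpexc);
      		fmxr(FPEXC, fpexc);
      #endif
      	}

      	return 0;
      }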
    • ARM: 7476/1: vfp: only clear vfp state for current cpu in vfp_pm_suspend · a84b895a
      Committed by Colin Cross
      vfp_pm_suspend runs on each CPU, so only clear the hardware state
      pointer for the current CPU.  This prevents a possible crash if one
      CPU clears the hw state pointer when another CPU has already
      checked whether it is valid.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Colin Cross <ccross@android.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      a84b895a
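      The change boils down to replacing the global clear with a per-CPU one, roughly (fragment, per-CPU thread_info assumed as in the sketch above):

      /* before: wiped every CPU's pointer, racing with other CPUs
       *   memset(vfp_current_hw_state, 0, sizeof(vfp_current_hw_state)); */

      /* after: only forget the state cached on this CPU */
      vfp_current_hw_state[ti->cpu] = NULL;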
    • ARM: 7468/1: ftrace: Trace function entry before updating index · 4c36595e
      Committed by Colin Cross
      Commit 722b3c74 modified x86 ftrace to
      avoid tracing all functions called from irqs when function graph was
      used with a filter.  Port the same fix to ARM.
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Colin Cross <ccross@android.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4c36595e
    • ARM: 7467/1: mutex: use generic xchg-based implementation for ARMv6+ · a76d7bd9
      Committed by Will Deacon
      The open-coded mutex implementation for ARMv6+ cores suffers from a
      severe lack of barriers, so in the uncontended case we don't actually
      protect any accesses performed during the critical section.
      
      Furthermore, the code is largely a duplication of the ARMv6+ atomic_dec
      code but optimised to remove a branch instruction, as the mutex fastpath
      was previously inlined. Now that this is executed out-of-line, we can
      reuse the atomic access code for the locking (in fact, we use the xchg
      code as this produces shorter critical sections).
      
      This patch uses the generic xchg based implementation for mutexes on
      ARMv6+, which introduces barriers to the lock/unlock operations and also
      has the benefit of removing a fair amount of inline assembly code.
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Reported-by: Shan Kang <kangshan0910@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      a76d7bd9
    • ARM: 7466/1: disable interrupt before spinning endlessly · 98bd8b96
      Committed by Shawn Guo
      The CPU will spin endlessly at the end of the machine_halt and
      machine_restart calls.  However, this will lead to a soft lockup
      warning after about 20 seconds, if CONFIG_LOCKUP_DETECTOR is enabled,
      as the system timer is still alive.
      
      Disable interrupts before spinning endlessly, so that the lockup
      warning will never be seen.
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Marek Vasut <marex@denx.de>
      Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      98bd8b96
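      A sketch of the shape of the fix, e.g. for machine_halt (machine_restart's fallback spin is analogous):

      void machine_halt(void)
      {
      	machine_shutdown();
      	local_irq_disable();	/* keep the soft-lockup detector from firing */
      	while (1);
      }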
    • ipc: use Kconfig options for __ARCH_WANT_[COMPAT_]IPC_PARSE_VERSION · c1d7e01d
      Committed by Will Deacon
      Rather than #define the options manually in the architecture code, add
      Kconfig options for them and select them there instead.  This also allows
      us to select the compat IPC version parsing automatically for platforms
      using the old compat IPC interface.
      Reported-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c1d7e01d
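      The pattern is the standard selectable Kconfig symbol replacing a per-arch #define, roughly (sketch; exact option placement may differ):

      # arch/Kconfig (sketch)
      config ARCH_WANT_IPC_PARSE_VERSION
      	bool

      config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
      	bool

      # an architecture that needs the old parsing behaviour now selects the
      # symbol in its own Kconfig instead of #defining __ARCH_WANT_IPC_PARSE_VERSION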
    • atomic64_test: simplify the #ifdef for atomic64_dec_if_positive() test · 7463449b
      Committed by Catalin Marinas
      Introduce CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE and use this instead
      of the multitude of #if defined() checks in atomic64_test.c
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7463449b
    • arch: remove direct definitions of KERN_<LEVEL> uses · 0cc41e4a
      Committed by Joe Perches
      Add #include <linux/kern_levels.h> so that the #define KERN_<LEVEL> macros
      don't have to be duplicated.
      Signed-off-by: Joe Perches <joe@perches.com>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Kay Sievers <kay@vrfy.org>
      Acked-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0cc41e4a
    • arch/arm/mach-netx/fb.c: reuse dummy clk routines for CONFIG_HAVE_CLK=n · 7041717e
      Committed by Viresh Kumar
      mach-netx had its own implementation of clk routines such as clk_get{put},
      clk_enable{disable}, etc.  With the introduction of the following patch set:
      
        https://lkml.org/lkml/2012/4/24/154
      
      we get compilation errors for multiple definitions of these routines.
      
      Sascha had the following suggestion for dealing with it:
      
        http://www.spinics.net/lists/arm-kernel/msg179369.html
      
      So, remove this code completely.
      Signed-off-by: Viresh Kumar <viresh.kumar2@arm.com>
      Reported-by: Paul Gortmaker <paul.gortmaker@gmail.com>
      Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7041717e
  8. 30 July 2012, 8 commits
  9. 29 July 2012, 2 commits