1. 29 Jan 2015 (2 commits)
  2. 21 Jan 2015 (1 commit)
  3. 13 Jan 2015 (2 commits)
    • ARM: 8255/1: perf: Prevent wraparound during overflow · 2d9ed740
      Daniel Thompson authored
      If the overflow threshold for a counter is set above or near the
      0xffffffff boundary then the kernel may lose track of the overflow
      causing only events that occur *after* the overflow to be recorded.
      Specifically, the problem occurs when the value of the performance counter
      overtakes its originally programmed value due to wrap-around.
      
      Typical solutions to this problem are either to avoid programming in
      values likely to be overtaken or to treat the overflow bit as the 33rd
      bit of the counter.
      
      It's somewhat fiddly to refactor the code to correctly handle the 33rd bit
      during irqsave sections (context switches, for example), so instead we take
      the simpler approach of avoiding values likely to be overtaken.
      
      We set the limit to half of max_period because this matches the limit
      imposed in __hw_perf_event_init(). This causes a doubling of the interrupt
      rate for large threshold values; however, even with a very fast counter
      ticking at 4GHz the interrupt rate would only be ~1Hz.
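      
      A rough sketch of the clamping idea follows (clamp_sample_period is a
      hypothetical helper name; the real change lives in the event-period
      programming path of arch/arm/kernel/perf_event.c):
      
      	#include <stdint.h>
      
      	/* Never program a period larger than half of max_period, so the
      	 * counter cannot overtake the programmed value before the
      	 * overflow interrupt is handled. */
      	static uint64_t clamp_sample_period(uint64_t left, uint64_t max_period)
      	{
      		if (left > (max_period >> 1))
      			left = max_period >> 1;
      		return left;
      	}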
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8266/1: Remove early stack deallocation from restore_user_regs · a18f3645
      Daniel Thompson authored
      Currently restore_user_regs deallocates the SVC stack early in
      its execution and relies on no exception being taken between
      the deallocation and the registers being restored. The introduction
      of a default FIQ handler that also uses the SVC stack breaks this
      assumption and can result in corrupted register state.
      
      This patch works around the problem by removing the early
      stack deallocation and using r2 as a temporary instead. I have
      not found a way to do this without introducing an extra mov
      instruction to the macro.
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  4. 10 Jan 2015 (1 commit)
  5. 08 Jan 2015 (3 commits)
    • ARM: 8253/1: mm: use phys_addr_t type in map_lowmem() for kernel mem region · ac084688
      Grygorii Strashko authored
      Currently the local variables kernel_x_start and kernel_x_end are defined
      using the 'unsigned long' type, which is wrong because they represent a
      physical memory range and will be calculated incorrectly if LPAE is enabled.
      As a result, the code that follows in map_lowmem() will not work correctly.
      
      For example, Keystone 2 boot is broken because
       kernel_x_start == 0x0000 0000
       kernel_x_end   == 0x0080 0000
      
      instead of
       kernel_x_start == 0x0000 0008 0000 0000
       kernel_x_end   == 0x0000 0008 0080 0000
      and as a result the whole of low memory will be mapped with MT_MEMORY_RW
      permissions by this code (since start >= kernel_x_end):
      		} else if (start >= kernel_x_end) {
      			map.pfn = __phys_to_pfn(start);
      			map.virtual = __phys_to_virt(start);
      			map.length = end - start;
      			map.type = MT_MEMORY_RW;
      
      			create_mapping(&map);
      		}
      
      Hence, fix it by using the phys_addr_t type for the variables kernel_x_start
      and kernel_x_end.
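      
      As a standalone illustration of the truncation (a hypothetical demo, using
      uint32_t to model the 32-bit 'unsigned long' of an ARM build; the addresses
      are the Keystone 2 values quoted above):
      
      	#include <stdint.h>
      	#include <stdio.h>
      
      	int main(void)
      	{
      		/* Keystone 2 places the kernel above 4GB; with LPAE,
      		 * phys_addr_t is 64-bit. */
      		uint64_t kernel_x_start = 0x800000000ULL;
      
      		/* uint32_t models 'unsigned long' on a 32-bit ARM build. */
      		uint32_t truncated = (uint32_t)kernel_x_start;
      
      		printf("phys_addr_t:   %#llx\n",
      		       (unsigned long long)kernel_x_start); /* 0x800000000 */
      		printf("unsigned long: %#x\n",
      		       (unsigned)truncated);                /* 0: top bits lost */
      		return 0;
      	}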
      Tested-by: Murali Karicheri <m-karicheri2@ti.com>
      Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8249/1: mm: dump: don't skip regions · cca547e9
      Mark Rutland authored
      Currently the arm page table dumping code starts dumping page tables
      from USER_PGTABLES_CEILING. This is unnecessary for skipping userspace
      entries, as swapper_pg_dir does not contain any, and it results in a
      couple of unfortunate side effects.
      
      Firstly, any kernel mappings which might exist below
      USER_PGTABLES_CEILING will not be accounted in the dump output. This
      masks any entries erroneously created below this address.
      
      Secondly, if the final page table entry walked is part of a valid
      mapping the page table dumping code will not log the region this entry
      is part of, as the final note_page call in walk_pgd will trigger an
      early return when 0 < USER_PGTABLES_CEILING. Luckily this isn't seen on
      contemporary systems as they typically don't have enough RAM to extend
      the linear mapping right to the end of the address space.
      
      Due to the way addr is constructed in the walk_* functions, it can never
      be less than USER_PGTABLES_CEILING when walking the page tables, so it
      is not necessary to avoid dereferencing invalid table addresses. The
      existing checks for st->current_prot and st->marker[1].start_address are
      sufficient to ensure we will not print and/or dereference garbage when
      trying to log information.
      
      This patch removes both problematic uses of USER_PGTABLES_CEILING from
      the arm page table dumping code, preventing both of these issues. We
      will now report any low mappings, and the final note_page call will not
      return early, ensuring all regions are logged.
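      
      A simplified sketch of the post-patch walk (paraphrasing the shape of
      arch/arm/mm/dump.c rather than quoting it):
      
      	static void walk_pgd(struct seq_file *m)
      	{
      		pgd_t *pgd = swapper_pg_dir;
      		struct pg_state st = { .seq = m, .marker = address_markers };
      		unsigned long addr;
      		unsigned i;
      
      		for (i = 0; i < PTRS_PER_PGD; i++, pgd++) {
      			/* was USER_PGTABLES_CEILING + i * PGDIR_SIZE */
      			addr = i * PGDIR_SIZE;
      			if (!pgd_none(*pgd))
      				walk_pud(&st, pgd, addr);
      			else
      				note_page(&st, addr, 1, pgd_val(*pgd));
      		}
      
      		/* flush the final region unconditionally */
      		note_page(&st, 0, 0, 0);
      	}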
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: wire up execveat syscall · 841ee230
      Russell King authored
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
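      
      For reference, a userspace usage sketch of the newly wired syscall
      (assuming kernel headers that expose __NR_execveat; the fexecve()-style
      AT_EMPTY_PATH case is shown):
      
      	#define _GNU_SOURCE
      	#include <fcntl.h>
      	#include <unistd.h>
      	#include <sys/syscall.h>
      
      	int main(void)
      	{
      		char *argv[] = { "true", NULL };
      		char *envp[] = { NULL };
      		int fd = open("/bin/true", O_PATH);
      
      		/* With an empty pathname and AT_EMPTY_PATH, execveat()
      		 * executes the file referred to by fd itself. */
      		syscall(__NR_execveat, fd, "", argv, envp, AT_EMPTY_PATH);
      		return 1; /* reached only if the syscall failed */
      	}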
  6. 20 Dec 2014 (1 commit)
  7. 18 Dec 2014 (1 commit)
  8. 16 Dec 2014 (3 commits)
  9. 15 Dec 2014 (1 commit)
    • arm/arm64: KVM: Require in-kernel vgic for the arch timers · 05971120
      Christoffer Dall authored
      It is currently possible to run a VM with architected timers support
      without creating an in-kernel VGIC, which will result in interrupts from
      the virtual timer going nowhere.
      
      To address this issue, move the architected timers initialization to the
      time when we run a VCPU for the first time, and then only initialize
      (and enable) the architected timers if we have a properly created and
      initialized in-kernel VGIC.
      
      When injecting interrupts from the virtual timer to the vgic, the
      current setup should ensure that this never calls an on-demand init of
      the VGIC, which is the only call path that could return an error from
      kvm_vgic_inject_irq(), so capture the return value and raise a warning
      if there's an error there.
      
      We also change the kvm_timer_init() function from returning an int to
      being a void function, since it always succeeds.
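      
      A hedged sketch of the gating described above (the names follow the style
      of the arch timer code, but this is not a verbatim quote of the patch):
      
      	int kvm_timer_enable(struct kvm_vcpu *vcpu)
      	{
      		struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
      
      		if (timer->enabled)
      			return 0;
      
      		/* Without an initialized in-kernel VGIC the virtual timer
      		 * interrupt would go nowhere, so refuse to enable. */
      		if (!vgic_initialized(vcpu->kvm))
      			return -ENODEV;
      
      		timer->enabled = 1;
      		return 0;
      	}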
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
  10. 14 Dec 2014 (1 commit)
  11. 13 Dec 2014 (11 commits)
  12. 12 Dec 2014 (1 commit)
    • arch: Add lightweight memory barriers dma_rmb() and dma_wmb() · 1077fa36
      Alexander Duyck authored
      There are a number of situations where the mandatory barriers rmb() and
      wmb() are used to order memory/memory operations in device drivers, and
      those barriers are much heavier than they actually need to be.  For
      example, in the case of PowerPC, wmb() calls the heavy-weight sync
      instruction when for coherent memory operations all that is really needed
      is an lwsync or eieio instruction.
      
      This commit adds a coherent only version of the mandatory memory barriers
      rmb() and wmb().  In most cases this should result in the barrier being the
      same as the SMP barriers for the SMP case, however in some cases we use a
      barrier that is somewhere in between rmb() and smp_rmb().  For example on
      ARM the rmb barriers break down as follows:
      
        Barrier   Call     Explanation
        --------- -------- ----------------------------------
        rmb()     dsb()    Data synchronization barrier - system
        dma_rmb() dmb(osh) Data memory barrier - outer shareable
        smp_rmb() dmb(ish) Data memory barrier - inner shareable
      
      These new barriers are not as safe as the standard rmb() and wmb().
      Specifically they do not guarantee ordering between coherent and incoherent
      memories.  The primary use case for these would be to enforce ordering of
      reads and writes when accessing coherent memory that is shared between the
      CPU and a device.
      
      It may also be noted that there is no dma_mb().  Most architectures don't
      provide a good mechanism for performing a coherent only full barrier without
      resorting to the same mechanism used in mb().  As such there isn't much to
      be gained in trying to define such a function.
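      
      A usage sketch in the spirit of the descriptor example this commit adds to
      Documentation/memory-barriers.txt (DEVICE_OWN, DESC_NOTIFY, desc and
      doorbell are illustrative names):
      
      	if (desc->status != DEVICE_OWN) {
      		/* do not read data until we own the descriptor */
      		dma_rmb();
      
      		/* read/modify data */
      		read_data = desc->data;
      		desc->data = write_data;
      
      		/* flush modifications before status update */
      		dma_wmb();
      
      		/* assign ownership */
      		desc->status = DEVICE_OWN;
      
      		/* force memory to sync before notifying device via MMIO */
      		wmb();
      
      		/* notify device of new descriptors */
      		writel(DESC_NOTIFY, doorbell);
      	}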
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: David Miller <davem@davemloft.net>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 11 Dec 2014 (12 commits)