1. 27 Aug 2015 (6 commits)
  2. 21 Aug 2015 (1 commit)
  3. 20 Aug 2015 (10 commits)
    • 09074950
    • ARC: change some branches to jumps to resolve linkage errors · 6de6066c
      Yuriy Kolerov authored
      When the kernel binary becomes large enough (32M and more),
      errors may occur during the final linkage stage. This happens
      because the build system uses short relocations for ARC by
      default. The problem is easily resolved by passing the
      -mlong-calls option to GCC, which makes it use long absolute
      jumps (j) instead of short relative branches (b).
      
      But there are fragments of pure assembler code which use
      branches in inappropriate places and cause a linkage error
      because of relocation overflow.
      
      The first of these fragments is the .fixup insertion in futex.h
      and unaligned.c. It emits code into a separate section (.fixup)
      with a branch instruction, which leads to a linkage error when
      the kernel becomes large (a sketch of the fix follows this
      entry).
      
      The second is calling the scheduler's functions (common kernel
      code) from ARC's entry.S. When the kernel binary becomes large,
      this may lead to a linkage error because the scheduler may end
      up far enough from ARC's code in the final binary.
      Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
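      A minimal sketch of the first fragment's fix (the function,
      labels and operand choices are illustrative, not the kernel's
      exact futex.h/unaligned.c code): a .fixup snippet reached via an
      exception-table entry, where the short PC-relative branch back
      to the mainline code becomes an absolute jump whose relocation
      cannot overflow.

      ```c
      /* Sketch only: shows the b -> j change inside a .fixup fragment. */
      static inline int sketch_user_load(unsigned int *dst,
                                         const unsigned int *uaddr)
      {
              unsigned int val = 0;
              int ret = 0;

              __asm__ __volatile__(
              "1:     ld      %0, [%2]            \n" /* may fault      */
              "2:                                 \n"
              "       .section .fixup, \"ax\"     \n"
              "3:     mov     %1, -14             \n" /* -EFAULT        */
              "       j       2b                  \n" /* was: b 2b      */
              "       .previous                   \n"
              "       .section __ex_table, \"a\"  \n"
              "       .word   1b, 3b              \n"
              "       .previous                   \n"
              : "+r" (val), "+r" (ret)
              : "r" (uaddr)
              : "memory");

              *dst = val;
              return ret;
      }
      ```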
    • ARC: ensure futex ops are atomic in !LLSC config · eb2cd8b7
      Vineet Gupta authored
      Without hardware-assisted atomic r-m-w, the best we can do is
      disable preemption (sketched below).
      
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Michel Lespinasse <walken@google.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
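      A minimal sketch of what this implies, with hypothetical helper
      names (the futex core has already disabled pagefaults at this
      point): the read-modify-write is made atomic against other tasks
      on the same CPU by holding off preemption.

      ```c
      #include <linux/preempt.h>
      #include <linux/types.h>
      #include <linux/uaccess.h>

      /* Sketch, not ARC's exact code: without LLOCK/SCOND the r-m-w is
       * only atomic w.r.t. preemption (a UP guarantee), not other CPUs. */
      static int sketch_futex_add(u32 __user *uaddr, int oparg, int *oldp)
      {
              u32 oldval;
              int ret;

              preempt_disable();      /* no LL/SC: keep the r-m-w on one task */
              ret = __get_user(oldval, uaddr);
              if (!ret)
                      ret = __put_user(oldval + oparg, uaddr);
              preempt_enable();

              if (!ret)
                      *oldp = oldval;
              return ret;
      }
      ```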
    • ARC: Enable HAVE_FUTEX_CMPXCHG · 5e057429
      Vineet Gupta authored
      ARC doesn't need the runtime detection of the futex cmpxchg op;
      the probe this avoids is sketched below.
      
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
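      For context, when an arch does not select HAVE_FUTEX_CMPXCHG the
      futex core probes at boot whether the cmpxchg op really works.
      The sketch below is modelled on that probe (simplified from the
      futex_detect_cmpxchg() in kernel/futex.c of this era); selecting
      the option compiles it away.

      ```c
      static int futex_cmpxchg_enabled;

      /* Simplified sketch of the boot-time probe ARC can now skip. */
      static void sketch_futex_detect_cmpxchg(void)
      {
      #ifndef CONFIG_HAVE_FUTEX_CMPXCHG
              u32 curval;

              /* Probe with a NULL user address: a working implementation
               * faults and returns -EFAULT; a stub returns -ENOSYS. */
              if (cmpxchg_futex_value_locked(&curval, NULL, 0, 0) == -EFAULT)
                      futex_cmpxchg_enabled = 1;
      #endif
      }
      ```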
    • ARC: make futex_atomic_cmpxchg_inatomic() return bimodal · 882a95ae
      Vineet Gupta authored
      Callers of cmpxchg_futex_value_locked() in futex code expect a
      bimodal return value:
        !0 (essentially -EFAULT as failure)
         0 (success)
      
      Before this patch, the success return value was the old value of
      the futex, which could very well be non-zero, causing the caller
      to possibly take the failure path erroneously.
      
      Fix that by returning 0 for success (see the sketch below).
      
      (This fix was done back in 2011 for all upstream arches, which ARC
      obviously missed)
      
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Michel Lespinasse <walken@google.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
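      A sketch of the contract, with a hypothetical __user_cmpxchg()
      helper standing in for the arch-specific LL/SC or locked
      sequence:

      ```c
      /* The old futex value travels out through *uval; the return code
       * is strictly bimodal: 0 on success, -EFAULT on a fault. */
      static int sketch_cmpxchg_futex_value(u32 *uval, u32 __user *uaddr,
                                            u32 expected, u32 newval)
      {
              u32 old;

              if (__user_cmpxchg(uaddr, expected, newval, &old))  /* hypothetical */
                      return -EFAULT; /* failure path */

              *uval = old;    /* old value reported out-of-band ...     */
              return 0;       /* ... success signalled with 0, even when
                               * the old value itself was non-zero      */
      }
      ```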
    • ARC: futex cosmetics · ed574e2b
      Vineet Gupta authored
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Michel Lespinasse <walken@google.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARC: add barriers to futex code · 31d30c82
      Vineet Gupta authored
      The atomic ops on the futex need to provide the full barrier,
      just like regular atomics in the kernel (placement sketched
      below).
      
      Also remove pagefault_enable/disable in
      futex_atomic_cmpxchg_inatomic(), as core code already does that.
      
      Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Michel Lespinasse <walken@google.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
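      A sketch of the barrier placement, with hypothetical llock()/
      scond() wrappers standing in for ARC's LL/SC assembly:

      ```c
      /* Full barrier on each side, mirroring the kernel's regular
       * fully-ordered atomics. */
      static int sketch_futex_atomic_set(u32 __user *uaddr, u32 newval,
                                         u32 *oldval)
      {
              smp_mb();       /* order prior accesses before the atomic op */
              do {
                      *oldval = llock(uaddr);         /* hypothetical */
              } while (scond(uaddr, newval));         /* hypothetical */
              smp_mb();       /* order the atomic op before later accesses */
              return 0;
      }
      ```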
    • ARCv2: IOC: Allow boot time disable · 1648c70d
      Alexey Brodkin authored
      Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARCv2: SLC: Allow boot time disable · 79335a2c
      Vineet Gupta authored
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARCv2: Support IO Coherency and permutations involving L1 and L2 caches · f2b0b25a
      Alexey Brodkin authored
      On an ARCv2 CPU there could be the following configurations,
      which affect cache handling for data exchanged with peripherals
      via DMA:
       [1] Only L1 cache exists
       [2] Both L1 and L2 exist, but no IO coherency unit
       [3] L1, L2 caches and IO coherency unit exist
      
      The current implementation takes care of [1] and [2]; moreover,
      support for [2] is implemented with a run-time check for SLC
      existence, which is not optimal.
      
      This patch introduces support for [3] and reworks DMA ops usage.
      Instead of doing a run-time check every time a particular DMA op
      is executed, we'll have 3 different implementations of the DMA
      ops and select the appropriate one during init (see the sketch
      after this entry).
      
      As for IOC support, we need to:
       [a] Implement empty DMA ops, because the IOC takes care of
           cache coherency with DMAed data
       [b] Route dma_alloc_coherent() via dma_alloc_noncoherent().
           This is required to make IOC work in the first place, and
           also serves as an optimization, as LD/ST to coherent
           buffers can be serviced from caches without going all the
           way to memory
      Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
      [vgupta:
        -Added some comments about IOC gains
        -Marked dma ops as static
        -Massaged changelog a bit]
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
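      A sketch of the init-time selection (flag and function names are
      illustrative): probe the hardware once, then dispatch DMA cache
      maintenance through a pointer instead of re-checking on every op.

      ```c
      #include <linux/types.h>

      static void dma_cache_nop(phys_addr_t start, unsigned long sz) { }
      extern void dma_cache_wback_inv_l1(phys_addr_t start, unsigned long sz);
      extern void dma_cache_wback_inv_slc(phys_addr_t start, unsigned long sz);

      static void (*__dma_cache_wback_inv)(phys_addr_t, unsigned long);

      void sketch_dma_cache_init(bool ioc_exists, bool slc_exists)
      {
              if (ioc_exists)           /* [3] IOC snoops DMA: empty ops */
                      __dma_cache_wback_inv = dma_cache_nop;
              else if (slc_exists)      /* [2] maintain both L1 and SLC  */
                      __dma_cache_wback_inv = dma_cache_wback_inv_slc;
              else                      /* [1] only L1 to maintain       */
                      __dma_cache_wback_inv = dma_cache_wback_inv_l1;
      }
      ```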
  4. 11 Aug 2015 (1 commit)
  5. 07 Aug 2015 (4 commits)
    • ARCv2: spinlock/rwlock/atomics: reduce 1 instruction in exponential backoff · 10971638
      Vineet Gupta authored
      The increment of the delay counter took 2 instructions:
      Arithmetic Shift Left (ASL) + set to 1 on overflow.
      
      This can be done in 1 instruction using Rotate Left (ROL);
      both variants are modelled in C below.
      Suggested-by: Nigel Topham <ntopham@synopsys.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
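      In C terms (a model of the ARC assembly, not the kernel source),
      the delay word holds a single set bit and grows like this:

      ```c
      #include <stdint.h>

      /* before: shift left, plus a second instruction to reload 1
       * when the bit falls off the top */
      static uint32_t backoff_asl(uint32_t delay)
      {
              delay <<= 1;            /* ASL */
              if (delay == 0)         /* overflowed: extra instruction */
                      delay = 1;
              return delay;
      }

      /* after: one rotate wraps the MSB straight back to bit 0 */
      static uint32_t backoff_rol(uint32_t delay)
      {
              return (delay << 1) | (delay >> 31);    /* ROL */
      }
      ```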
    • sparc64: Fix userspace FPU register corruptions. · 44922150
      David S. Miller authored
      If we have a series of events from userspace, with
      %fprs=FPRS_FEF, like the following:
      
      ETRAP
      	ETRAP
      		VIS_ENTRY(fprs=0x4)
      		VIS_EXIT
      		RTRAP (kernel FPU restore with fpu_saved=0x4)
      	RTRAP
      
      We will not restore the user registers that were clobbered by
      the FPU-using kernel code in the inner-most trap.
      
      Traps allocate FPU save slots in the thread struct, and
      FPU-using sequences save only the "dirty" FPU registers.
      
      This works at the initial trap level because all of the registers
      get recorded into the top-level FPU save area, and we'll return
      to userspace with the FPU disabled so that any FPU use by the user
      will take an FPU disabled trap wherein we'll load the registers
      back up properly.
      
      But this is not how trap returns from kernel to kernel operate.
      
      The simplest fix for this bug is to always save all FPU register state
      for anything other than the top-most FPU save area.
      
      Getting rid of the optimized inner-slot FPU saving code ends up
      making VISEntryHalf degenerate into plain VISEntry.
      
      Longer term we need to do something smarter to reinstate the
      partial save optimizations.  Perhaps the fundamental error is
      having trap entry and exit allocate FPU save slots and restore
      register state.  Instead, the VISEntry et al. calls should be
      doing that work.
      
      This bug is about two decades old.
      Reported-by: James Y Knight <jyknight@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
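      A sketch of the resulting policy, with hypothetical helper names
      (the real change is in sparc64's VISEntryHalf assembly):

      ```c
      struct fpu_slot;                                /* hypothetical */
      extern void save_dirty_fpu_regs(struct fpu_slot *);
      extern void save_all_fpu_regs(struct fpu_slot *);

      /* Only the top-most save slot may rely on the dirty-register
       * optimization; nested slots must capture everything, because a
       * kernel-to-kernel trap return restores only that slot. */
      static void sketch_fpu_save(struct fpu_slot *slot, int trap_level)
      {
              if (trap_level == 0)
                      save_dirty_fpu_regs(slot);  /* user reloads later
                                                   * via an FPU-disabled
                                                   * trap                */
              else
                      save_all_fpu_regs(slot);    /* nested: no later
                                                   * full reload happens */
      }
      ```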
    • signal: fix information leak in copy_siginfo_to_user · 26135022
      Amanieu d'Antras authored
      This function may copy the si_addr_lsb, si_lower and si_upper fields to
      user mode when they haven't been initialized, which can leak kernel
      stack data to user mode.
      
      Just checking the value of si_code is insufficient because the same
      si_code value is shared between multiple signals.  This is solved by
      checking the value of si_signo in addition to si_code.
      Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
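      A simplified sketch of the fixed pattern: si_addr_lsb, for
      example, is only valid for SIGBUS with a BUS_MCEERR_* code, so
      the copy is keyed on si_signo as well, never on si_code alone.

      ```c
      #define _GNU_SOURCE
      #include <signal.h>

      static void sketch_copy_optional_fields(siginfo_t *to,
                                              const siginfo_t *from)
      {
              if (from->si_signo == SIGBUS &&
                  (from->si_code == BUS_MCEERR_AR ||
                   from->si_code == BUS_MCEERR_AO))
                      to->si_addr_lsb = from->si_addr_lsb;
              /* signals that merely share the si_code value leave the
               * field untouched, so no uninitialized bytes are copied */
      }
      ```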
    • signal: fix information leak in copy_siginfo_from_user32 · 3c00cb5e
      Amanieu d'Antras authored
      This function can leak kernel stack data when the user siginfo_t
      has a positive si_code value.  The top 16 bits of si_code
      describe which fields in the siginfo_t union are active, but
      they are treated inconsistently between copy_siginfo_from_user32,
      copy_siginfo_to_user32 and copy_siginfo_to_user.
      
      copy_siginfo_from_user32 is called from rt_sigqueueinfo and
      rt_tgsigqueueinfo, in which the user has full control over the
      top 16 bits of si_code.
      
      This fixes the following information leaks:
      x86:   8 bytes leaked when sending a signal from a 32-bit process to
             itself. This leak grows to 16 bytes if the process uses x32.
             (si_code = __SI_CHLD)
      x86:   100 bytes leaked when sending a signal from a 32-bit process to
             a 64-bit process. (si_code = -1)
      sparc: 4 bytes leaked when sending a signal from a 32-bit process to a
             64-bit process. (si_code = any)
      
      parisc and s390 have similar bugs, but they are not vulnerable
      because rt_[tg]sigqueueinfo have checks that prevent sending a
      positive si_code to a different process.  These bugs are also
      fixed for consistency.
      Signed-off-by: Amanieu d'Antras <amanieu@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
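      A sketch of the defensive shape of the copy-in path (simplified;
      the struct layout and field set are illustrative, not the exact
      upstream diff): zero-fill the kernel siginfo first and copy only
      a fixed set of fields, so the user-controlled top bits of
      si_code can never select which union members are read.

      ```c
      #define _GNU_SOURCE
      #include <signal.h>
      #include <string.h>

      struct compat_siginfo {                 /* illustrative subset */
              int si_signo, si_errno, si_code;
              int si_pid, si_uid, si_int;
      };

      static int sketch_copy_siginfo_from_user32(siginfo_t *to,
                                                 const struct compat_siginfo *from)
      {
              memset(to, 0, sizeof(*to));   /* no stray stack bytes survive */

              to->si_signo = from->si_signo;
              to->si_errno = from->si_errno;
              to->si_code  = from->si_code; /* stored, never used to pick
                                             * a union layout here         */
              to->si_pid   = from->si_pid;
              to->si_uid   = from->si_uid;
              to->si_int   = from->si_int;
              return 0;
      }
      ```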
  6. 05 Aug 2015 (3 commits)
  7. 04 Aug 2015 (8 commits)
  8. 03 Aug 2015 (7 commits)