1. 18 Apr 2014, 2 commits
  2. 26 Mar 2014, 1 commit
  3. 10 Feb 2014, 2 commits
    • locking/mcs: Allow architecture specific asm files to be used for contended case · ddf1d169
      Authored by Tim Chen
      This patch allows each architecture to add its own assembly-optimized
      arch_mcs_spin_lock_contended and arch_mcs_spin_unlock_contended for the
      MCS lock and unlock functions.
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: George Spelvin <linux@horizon.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Alex Shi <alex.shi@linaro.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: "Figo.zhang" <figo1802@gmail.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matthew R Wilcox <matthew.r.wilcox@intel.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1390347382.3138.67.camel@schen9-DESK
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ddf1d169
    • locking/mcs: Order the header files in Kbuild of each architecture in alphabetical order · b119fa61
      Authored by Tim Chen
      We clean up the Kbuild files of each architecture, ordering the
      headers in each Kbuild alphabetically by running the script below.
      
      for i in arch/*/include/asm/Kbuild
      do
              cat $i | gawk '/^generic-y/ {
                      i = 3;
                      do {
                              for (; i <= NF; i++) {
                                      if ($i == "\\") {
                                              getline;
                                              i = 1;
                                              continue;
                                      }
                                      if ($i != "")
                                              hdr[$i] = $i;
                              }
                              break;
                      } while (1);
                      next;
              }
              // {
                      print $0;
              }
              END {
                      n = asort(hdr);
                      for (i = 1; i <= n; i++)
                              print "generic-y += " hdr[i];
              }' > ${i}.sorted;
              mv ${i}.sorted $i;
      done
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Matthew R Wilcox <matthew.r.wilcox@intel.com>
      Cc: Aswin Chandramouleeswaran <aswin@hp.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: "Figo.zhang" <figo1802@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Waiman Long <waiman.long@hp.com>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Alex Shi <alex.shi@linaro.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: George Spelvin <linux@horizon.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      [ Fixed build bug. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b119fa61
  4. 28 Jan 2014, 1 commit
  5. 14 Jan 2014, 2 commits
  6. 12 Jan 2014, 2 commits
  7. 23 Dec 2013, 2 commits
  8. 18 Dec 2013, 1 commit
  9. 15 Nov 2013, 1 commit
  10. 14 Nov 2013, 1 commit
  11. 12 Nov 2013, 1 commit
  12. 06 Nov 2013, 8 commits
  13. 10 Oct 2013, 4 commits
  14. 27 Sep 2013, 2 commits
    • ARC: Workaround spinlock livelock in SMP SystemC simulation · 6c00350b
      Authored by Vineet Gupta
      Some ARC SMP systems lack native atomic R-M-W (LLOCK/SCOND) insns and
      can only use atomic EX insn (reg with mem) to build higher level R-M-W
      primitives. This includes a SystemC based SMP simulation model.
      
      So rwlocks need to use a protecting spinlock for atomic cmp-n-exchange
      operation to update reader(s)/writer count.
      
      The spinlock operation itself looks as follows:
      
      	mov reg, 1		; 1=locked, 0=unlocked
      retry:
      	EX reg, [lock]		; load existing, store 1, atomically
      	BREQ reg, 1, retry	; if already locked, retry
      
      In single-threaded simulation, SystemC alternates between the 2 cores,
      running "N" insns on each before switching. Additionally, for an insn
      with a global side effect, such as EX writing to shared mem, a core
      switch is enforced too.
      
      Given that, with 2 cores doing repeated EX on the same location, Linux
      often got into a livelock, e.g. when both cores were fiddling with the
      tasklist lock (gdbserver / hackbench) for read/write respectively, as
      the sequence diagram below shows:
      
                 core1                                   core2
               --------                                --------
      1. spin lock [EX r=0, w=1] - LOCKED
      2. rwlock(Read)            - LOCKED
      3. spin unlock  [ST 0]     - UNLOCKED
                                               spin lock [EX r=0,w=1] - LOCKED
                            -- resched core 1----
      
      5. spin lock [EX r=1] - ALREADY-LOCKED
      
                            -- resched core 2----
      6.                                       rwlock(Write) - READER-LOCKED
      7.                                       spin unlock [ST 0]
      8.                                       rwlock failed, retry again
      
      9.                                       spin lock  [EX r=0, w=1]
                            -- resched core 1----
      
      10. spinlock locked in #9, retry #5
      11. spin lock [EX gets 1]
                            -- resched core 2----
      ...
      ...
      
      The fix was to unlock using the EX insn too (step 7), triggering another
      SystemC scheduling pass that lets core1 proceed, avoiding the livelock.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      6c00350b
    • ARC: Fix 32-bit wrap around in access_ok() · 0752adfd
      Authored by Vineet Gupta
      Anton reported
      
       | LTP tests syscalls/process_vm_readv01 and process_vm_writev01 fail
       | similarly in one testcase test_iov_invalid -> lvec->iov_base.
       | Testcase expects errno EFAULT and return code -1,
       | but it gets return code 1 and ERRNO is 0 what means success.
      
      Essentially the test case was passing a pointer of -1, which access_ok()
      was not catching. It was doing [@addr + @sz <= TASK_SIZE], which passes
      for @addr == -1 because the addition wraps around.
      
      Fixed by rewriting the check as [@addr <= TASK_SIZE - @sz]
      Reported-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      0752adfd
  15. 25 Sep 2013, 1 commit
  16. 12 Sep 2013, 1 commit
    • ARC: SMP failed to boot due to missing IVT setup · c3567f8a
      Authored by Noam Camus
      Commit 05b016ec ("ARC: Setup Vector Table Base in early boot") moved the
      Interrupt Vector Table setup out of arc_init_IRQ(), which is called for
      all CPUs, to the entry point of the boot CPU only, breaking the boot of
      the other CPUs.
      
      Fix by adding the same setup to the entry point of non-boot CPUs too.
      
      read_arc_build_cfg_regs()'s printing of the IVT Base Register didn't help
      the cause, since it prints a synthetic value when the register is zero,
      which is totally bogus; fix that to print the exact register contents.
      
      [vgupta: Remove the now stale comment from header of arc_init_IRQ and
      also added the commentary for halt-on-reset]
      
      Cc: Gilad Ben-Yossef <gilad@benyossef.com>
      Cc: <stable@vger.kernel.org> # 3.11
      Signed-off-by: Noam Camus <noamc@ezchip.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c3567f8a
  17. 05 Sep 2013, 3 commits
  18. 31 Aug 2013, 5 commits
    • ARC: [ASID] Track ASID allocation cycles/generations · 947bf103
      Authored by Vineet Gupta
      This helps remove the asid-to-mm reverse map.
      
      While mm->context.id contains the ASID assigned to a process, our ASID
      allocator also used an asid_mm_map[] reverse map. In a new allocation
      cycle (mm->ASID >= @asid_cache), the round-robin ASID allocator used this
      map to check whether the new @asid_cache belonged to some mm2 (from the
      previous cycle). If so, it could locate that mm via the reverse map and
      mark its ASID as unallocated, forcing a refresh at the next switch_mm().
      
      However, for SMP the reverse map would have to be maintained per CPU,
      becoming 2-dimensional, hence we got rid of it.
      
      With the reverse map gone, it is no longer possible to reach out to the
      current assignee. So instead we track the ASID allocation
      generation/cycle, and on every switch_mm() check whether the CPU's
      current ASID generation matches the mm's; if not, the ASID is refreshed.
      
      (Based loosely on arch/sh implementation)
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      947bf103
    • ARC: [ASID] activate_mm() == switch_mm() · c6011553
      Authored by Vineet Gupta
      ASID allocation changes/2
      
      Use the fact that switch_mm() and activate_mm() are now exactly the same
      code, while acknowledging the semantic difference in a comment.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      c6011553
    • ARC: [ASID] get_new_mmu_context() to conditionally allocate new ASID · 3daa48d1
      Authored by Vineet Gupta
      ASID allocation changes/1
      
      This patch does 2 things:
      
      (1) get_new_mmu_context() now moves mm->ASID to a new value ONLY if it
          was from a previous allocation cycle/generation OR if the mm had no
          ASID allocated (whereas before it would unconditionally move to a
          new ASID).
      
          Callers desiring an unconditional update of the ASID, e.g.
          local_flush_tlb_mm() (for the parent's address space invalidation
          at fork), need to first force the parent to an unallocated ASID.
      
      (2) get_new_mmu_context() always sets the MMU PID reg with the
          unchanged/new ASID value.
      
      The gains are:
      - consolidation of all ASID alloc logic into get_new_mmu_context()
      - avoiding code duplication in switch_mm() for PID reg setting
      - enabling a future change to fold activate_mm() into switch_mm()
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      3daa48d1
    • ARC: [ASID] Refactor the TLB paranoid debug code · 5bd87adf
      Authored by Vineet Gupta
      The asm code already has the values of the SW and HW ASIDs, so they can
      be passed to the printing routine.
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      5bd87adf
    • ARC: [ASID] Remove legacy/unused debug code · ade922f8
      Authored by Vineet Gupta
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      ade922f8