1. 06 Nov 2013, 1 commit
  2. 11 Oct 2013, 2 commits
  3. 27 Aug 2013, 1 commit
  4. 14 Aug 2013, 2 commits
  5. 09 Aug 2013, 2 commits
    • powerpc: Save the TAR register earlier · c2d52644
      Michael Neuling authored
      This moves the save of the Target Address Register (TAR) earlier in
      __switch_to and introduces a new function, save_tar(), to do this.
      
      We need to save the TAR earlier as we will overwrite it in the transactional
      memory reclaim/recheckpoint path.  We are going to do this in a subsequent
      patch which will fix saving the TAR register when it's modified inside a
      transaction.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Cc: <stable@vger.kernel.org> [v3.10]
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      c2d52644
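      A minimal C-level sketch of the idea, for orientation only: per the message above,
      save_tar() is called from __switch_to() so the TAR is captured before the TM
      reclaim/recheckpoint path can clobber it. SPRN_TAR and mfspr() are assumed to come
      from the kernel's <asm/reg.h>; the struct and function names below are illustrative,
      not the kernel's.

          /* Hedged sketch only -- not the kernel's save_tar(). */
          struct tar_save_sketch {
                  unsigned long tar;              /* saved Target Address Register */
          };

          static inline void save_tar_sketch(struct tar_save_sketch *prev)
          {
                  /* Capture TAR now, before TM reclaim/recheckpoint overwrites it. */
                  prev->tar = mfspr(SPRN_TAR);
          }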
    • powerpc: Fix context switch DSCR on POWER8 · 2517617e
      Michael Neuling authored
      POWER8 allows the DSCR to be accessed directly from userspace via a new SPR
      number, 0x3 (rather than 0x11; DSCR SPR number 0x11 is still used on POWER8
      but, like POWER7, is only accessible in HV and OS modes). Currently we allow
      this by setting the H/FSCR DSCR bit at boot.
      
      Unfortunately this doesn't work, as the kernel needs to see the DSCR change so
      that it knows to no longer restore the system wide version of DSCR on context
      switch (ie. to set thread.dscr_inherit).
      
      This clears the H/FSCR DSCR bit initially.  If a process then accesses the DSCR
      (via SPR 0x3), it'll trap into the kernel where we set thread.dscr_inherit in
      facility_unavailable_exception().
      
      We also change _switch() so that we set or clear the H/FSCR DSCR bit based
      on thread.dscr_inherit.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Cc: <stable@vger.kernel.org> [v3.10]
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      2517617e
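      A hedged C sketch of the trap path described above, not the kernel's exact
      facility_unavailable_exception(): on the first userspace access to SPR 0x3 with the
      FSCR DSCR bit clear, the kernel marks the thread as owning its DSCR and grants
      direct access from then on. SPRN_FSCR, SPRN_DSCR, FSCR_DSCR and mfspr()/mtspr()
      are assumed per <asm/reg.h>; the function name is made up.

          static void dscr_facility_unavailable_sketch(void)
          {
                  /* Thread now manages its own DSCR; stop restoring the system-wide one. */
                  current->thread.dscr_inherit = 1;
                  current->thread.dscr = mfspr(SPRN_DSCR);

                  /* Grant direct userspace access to SPR 0x3 from now on. */
                  mtspr(SPRN_FSCR, mfspr(SPRN_FSCR) | FSCR_DSCR);
          }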
  6. 20 Jun 2013, 1 commit
    • powerpc: Restore dbcr0 on user space exit · 13d543cd
      Bharat Bhushan authored
      On BookE, (Branch Taken + Single Step) is the same as Branch Taken on
      BookS, and in Linux we simulate the BookS behavior for BookE as well.
      When doing so, in the Branch Taken handling we want to set DBCR0_IC, but
      we only update current->thread.dbcr0 and not the hardware DBCR0.
      
      On 64-bit, current->thread.dbcr0 (and the other debug registers) is
      synchronized to hardware only in the context switch path. So if, after
      handling Branch Taken in the debug exception, we return to user space
      without a context switch, the single-stepping change (DBCR0_ICMP) never
      gets written to the hardware DBCR0 and the Instruction Complete
      exception does not happen.
      
      This fixes using ptrace reliably on BookE PowerPC.
      
      lmbench latency test (lat_syscall) results (they vary a little
      from run to run):
      
      1) ./lat_syscall <action> /dev/shm/uImage
      
      action:	Open	read	write	stat	fstat	null
      Before:	3.8618	0.2017	0.2851	1.6789	0.2256	0.0856
      After:	3.8580	0.2017	0.2851	1.6955	0.2255	0.0856
      
      2) ./lat_syscall -P 2 -N 10 <action> /dev/shm/uImage
      action:	Open	read	write	stat	fstat	null
      Before:	4.1388	0.2238	0.3066	1.7106	0.2256	0.0856
      After:	4.1413	0.2236	0.3062	1.7107	0.2256	0.0856
      
      [ Slightly modified to avoid extra branch in the fast path
        on Book3S and fix build on all non-BookE 64-bit -- BenH
      ]
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      13d543cd
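      A hedged C-level sketch of the fix's idea (the real change is in the 64-bit
      exception-return assembly): before returning to user space, push any pending
      thread.dbcr0 bits, such as the DBCR0_ICMP set by Branch Taken handling, into the
      hardware DBCR0 so single stepping actually takes effect. SPRN_DBCR0, DBCR0_IDM and
      mtspr() are assumed per the BookE register definitions; the function name is
      illustrative.

          static void restore_user_debug_state_sketch(struct thread_struct *thread)
          {
                  if (thread->dbcr0 & DBCR0_IDM)             /* debug events requested for this task */
                          mtspr(SPRN_DBCR0, thread->dbcr0);  /* sync the software copy into hardware */
          }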
  7. 10 Jun 2013, 1 commit
  8. 01 Jun 2013, 1 commit
  9. 24 May 2013, 1 commit
  10. 14 May 2013, 2 commits
  11. 02 May 2013, 2 commits
  12. 15 Apr 2013, 2 commits
    • powerpc: add a missing label in resume_kernel · d8b92292
      Kevin Hao authored
      A label '0' was missed in patch a9c4e541 (powerpc/kprobe: Complete
      kprobe and migrate exception frame). This causes the kernel to branch
      to an undetermined address if there really is a conflict when updating
      the thread flags.
      Signed-off-by: Kevin Hao <haokexin@gmail.com>
      Cc: stable@vger.kernel.org
      Acked-by: Tiejun Chen <tiejun.chen@windriver.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      d8b92292
    • powerpc: Fix audit crash due to save/restore PPR changes · 05e38e5d
      Alistair Popple authored
      The current mainline crashes when hitting userspace with the following:
      
      kernel BUG at kernel/auditsc.c:1769!
      cpu 0x1: Vector: 700 (Program Check) at [c000000023883a60]
          pc: c0000000001047a8: .__audit_syscall_entry+0x38/0x130
          lr: c00000000000ed64: .do_syscall_trace_enter+0xc4/0x270
          sp: c000000023883ce0
         msr: 8000000000029032
        current = 0xc000000023800000
        paca    = 0xc00000000f080380   softe: 0        irq_happened: 0x01
          pid   = 1629, comm = start_udev
      kernel BUG at kernel/auditsc.c:1769!
      enter ? for help
      [c000000023883d80] c00000000000ed64 .do_syscall_trace_enter+0xc4/0x270
      [c000000023883e30] c000000000009b08 syscall_dotrace+0xc/0x38
       --- Exception: c00 (System Call) at 0000008010ec50dc
      
      Bisecting found that the following patch caused it:
      
      commit 44e9309f
      Author: Haren Myneni <haren@linux.vnet.ibm.com>
      powerpc: Implement PPR save/restore
      
      It was found that this patch corrupted r9 when calling
      SET_DEFAULT_THREAD_PPR().
      
      Using r10 as a scratch register instead of r9 solved the problem.
      Signed-off-by: Alistair Popple <alistair@popple.id.au>
      Acked-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      05e38e5d
  13. 11 Apr 2013, 1 commit
  14. 15 Feb 2013, 1 commit
  15. 08 Feb 2013, 1 commit
  16. 29 Jan 2013, 1 commit
  17. 28 Jan 2013, 1 commit
    • cputime: Generic on-demand virtual cputime accounting · abf917cd
      Frederic Weisbecker authored
      If we want to stop the tick outside idle, we need to be
      able to account the cputime without using the tick.
      
      Virtual based cputime accounting solves that problem by
      hooking into kernel/user boundaries.
      
      However, implementing CONFIG_VIRT_CPU_ACCOUNTING requires
      low level hooks and involves more overhead. But we already
      have a generic context tracking subsystem that is required
      for RCU by archs which plan to shut down the tick
      outside idle.
      
      This patch implements a generic virtual based cputime
      accounting that relies on these generic kernel/user hooks.
      
      There are some upsides of doing this:
      
      - This requires no arch code to implement CONFIG_VIRT_CPU_ACCOUNTING
      if context tracking is already built (already necessary for RCU in full
      tickless mode).
      
      - We can rely on the generic context tracking subsystem to dynamically
      (de)activate the hooks, so that we can switch anytime between virtual
      and tick based accounting. This way we don't have the overhead
      of the virtual accounting when the tick is running periodically.
      
      And one downside:
      
      - There is probably more overhead than a native virtual based cputime
      accounting. But this relies on hooks that are already set anyway.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      abf917cd
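      A self-contained sketch of the concept, not the kernel's implementation: with
      context tracking, cputime is split at kernel/user boundary hooks rather than at
      every tick. All names below are illustrative.

          #include <stdint.h>

          enum ctx_state { CTX_KERNEL, CTX_USER };

          struct vtime_sketch {
                  enum ctx_state state;
                  uint64_t stamp;                 /* time of the last boundary crossing */
                  uint64_t utime, stime;          /* accumulated user/kernel time */
          };

          /* Called from the user-enter and user-exit context tracking hooks. */
          static void vtime_switch_sketch(struct vtime_sketch *vt, enum ctx_state next,
                                          uint64_t now)
          {
                  uint64_t delta = now - vt->stamp;

                  if (vt->state == CTX_USER)
                          vt->utime += delta;     /* time since the last crossing was user time */
                  else
                          vt->stime += delta;     /* otherwise it was kernel time */

                  vt->state = next;
                  vt->stamp = now;
          }

      Because the context tracking subsystem can (de)activate these hooks dynamically,
      the same scheme falls back to tick-based accounting while the tick runs periodically.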
  18. 10 Jan 2013, 3 commits
    • powerpc: Implement PPR save/restore · 44e9309f
      Haren Myneni authored
      [PATCH 6/6] powerpc: Implement PPR save/restore
      
      When the task enters kernel space, the user defined priority (PPR) is
      saved into the PACA at the beginning of the first level exception vector
      and then copied from the PACA to thread_info in the second level vector.
      The PPR is restored from thread_info before exiting kernel space.
      
      P7/P8 temporarily raises the thread priority to a higher level during an
      exception until the program executes HMT_* calls, but it does not modify
      the PPR register. So we save the PPR value as soon as a register is
      available to use and then call HMT_MEDIUM to raise the priority. This
      feature is supported on P7 and later processors.
      
      We save/restore the PPR for all exception vectors except system call
      entry; GLIBC saves/restores it around system calls. So the default PPR
      value (3) is set on system call exit when the task returns to
      user space.
      Signed-off-by: Haren Myneni <haren@us.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      44e9309f
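      A hedged C-level sketch of the flow described above; the real code lives in the
      exception entry/exit assembly. SPRN_PPR, mfspr()/mtspr() and HMT_medium() exist in
      the kernel headers; the sketch structs and field names are made-up stand-ins for
      the PACA and thread_info slots.

          struct paca_sketch        { unsigned long saved_ppr; };
          struct thread_info_sketch { unsigned long ppr; };

          static void exception_entry_ppr_sketch(struct paca_sketch *paca,
                                                 struct thread_info_sketch *ti)
          {
                  paca->saved_ppr = mfspr(SPRN_PPR);  /* 1st level vector: stash the user PPR in the PACA */
                  HMT_medium();                       /* only then raise the thread priority */
                  ti->ppr = paca->saved_ppr;          /* 2nd level vector: copy it to thread_info */
          }

          static void exception_exit_ppr_sketch(struct thread_info_sketch *ti)
          {
                  mtspr(SPRN_PPR, ti->ppr);           /* hand userspace its priority back on exit */
          }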
    • powerpc: Move branch instruction from ACCOUNT_CPU_USER_ENTRY to caller · 5d75b264
      Haren Myneni authored
      [PATCH 1/6] powerpc: Move branch instruction from ACCOUNT_CPU_USER_ENTRY to caller
      
      The first instruction in ACCOUNT_CPU_USER_ENTRY is a 'beq' which checks
      for exceptions coming from kernel mode. The PPR value will be saved
      immediately after ACCOUNT_CPU_USER_ENTRY and is likewise only needed for
      user level exceptions, so this branch instruction is moved into the
      caller code.
      Signed-off-by: Haren Myneni <haren@us.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      5d75b264
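      A hedged C analogy of the change (the real code is an assembly macro and its
      callers): the "came from kernel?" test is hoisted out of the accounting helper into
      the caller, so a single test also guards the PPR save that immediately follows. All
      names are illustrative.

          struct entry_ctx_sketch { int dummy; };

          static void account_cpu_user_entry_sketch(struct entry_ctx_sketch *c) { (void)c; /* accounting only */ }
          static void save_user_ppr_sketch(struct entry_ctx_sketch *c)          { (void)c; /* user-only PPR save */ }

          static void exception_entry_sketch(struct entry_ctx_sketch *c, int from_user)
          {
                  if (!from_user)
                          return;                 /* one hoisted test replaces the in-macro 'beq' */
                  account_cpu_user_entry_sketch(c);
                  save_user_ppr_sketch(c);
          }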
    • powerpc: Add code to handle soft-disabled doorbells on server · fe9e1d54
      Ian Munsie authored
      This patch adds the logic to properly handle doorbells that come in when
      interrupts have been soft disabled and to replay them when interrupts
      are re-enabled:
      
      - masked_##_H##interrupt is modified to leave interrupts enabled when a
        doorbell has come in since doorbells are edge sensitive and as such
        won't be automatically re-raised.
      
      - __check_irq_replay now tests if a doorbell happened on book3s, and
        returns either 0xe80 or 0xa00 depending on whether we are the
        hypervisor or not.
      
      - restore_check_irq_replay now tests for the two possible server
        doorbell vector numbers to replay.
      
      - __replay_interrupt also adds tests for the two server doorbell vector
        numbers, and is modified to use a compare instruction rather than an
        andi. on the single bit difference between 0x500 and 0x900.
      
      The last two use a CPU feature section to avoid needlessly testing
      against the hypervisor vector if it is not the hypervisor, and vice
      versa.
      Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      fe9e1d54
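      A hedged sketch of the replay decision described above, not the kernel's exact
      __check_irq_replay(): a doorbell that arrived while soft-disabled is replayed, once
      interrupts are re-enabled, at vector 0xe80 in hypervisor mode or 0xa00 otherwise.
      PACA_IRQ_DBELL is the flag this patch introduces; the helper name and the
      is_hypervisor parameter are illustrative.

          static unsigned int doorbell_replay_vector_sketch(unsigned char irq_happened,
                                                            int is_hypervisor)
          {
                  if (!(irq_happened & PACA_IRQ_DBELL))
                          return 0;                       /* nothing to replay */
                  return is_hypervisor ? 0xe80 : 0xa00;   /* hypervisor vs. OS doorbell vector */
          }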
  19. 15 Nov 2012, 1 commit
    • powerpc: Fix MAX_STACK_TRACE_ENTRIES too low warning ! · 12660b17
      Li Zhong authored
      This patch tries to fix the following BUG report:
      
      [    0.012313] BUG: MAX_STACK_TRACE_ENTRIES too low!
      [    0.012318] turning off the locking correctness validator.
      [    0.012321] Call Trace:
      [    0.012330] [c00000017666f6d0] [c000000000012128] .show_stack+0x78/0x184 (unreliable)
      [    0.012339] [c00000017666f780] [c0000000000b6348] .save_trace+0x12c/0x14c
      [    0.012345] [c00000017666f800] [c0000000000b7448] .mark_lock+0x2bc/0x710
      [    0.012351] [c00000017666f8b0] [c0000000000bb198] .__lock_acquire+0x748/0xaec
      [    0.012357] [c00000017666f9b0] [c0000000000bb684] .lock_acquire+0x148/0x194
      [    0.012365] [c00000017666fa80] [c00000000069371c] .mutex_lock_nested+0x84/0x4ec
      [    0.012372] [c00000017666fb90] [c000000000096998] .smpboot_register_percpu_thread+0x3c/0x10c
      [    0.012380] [c00000017666fc30] [c0000000009ba910] .spawn_ksoftirqd+0x28/0x48
      [    0.012386] [c00000017666fcb0] [c00000000000a98c] .do_one_initcall+0xd8/0x1d0
      [    0.012392] [c00000017666fd60] [c00000000000b1f8] .kernel_init+0x120/0x398
      
      [    0.012398] [c00000017666fe30] [c000000000009ad4] .ret_from_kernel_thread+0x5c/0x64
      [    0.012404] [c00000017666fa00] [c00000017666fb20] 0xc00000017666fb20
      [    0.012410] [c00000017666fa80] [c00000000069371c] .mutex_lock_nested+0x84/0x4ec
      [    0.012416] [c00000017666fb90] [c000000000096998] .smpboot_register_percpu_thread+0x3c/0x10c
      [    0.012422] [c00000017666fc30] [c0000000009ba910] .spawn_ksoftirqd+0x28/0x48
      [    0.012427] [c00000017666fcb0] [c00000000000a98c] .do_one_initcall+0xd8/0x1d0
      [    0.012433] [c00000017666fd60] [c00000000000b1f8] .kernel_init+0x120/0x398
      
      [    0.012439] [c00000017666fe30] [c000000000009ad4] .ret_from_kernel_thread+0x5c/0x64
      .......
      
      The reason is that the back chain of c00000017666fe30
      (ret_from_kernel_thread) contains an invalid value, which might form a
      loop.
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      12660b17
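      A hedged sketch of why a bad back chain matters, not the kernel's stack tracer: a
      powerpc stack walker follows the back pointer stored at offset 0 of each frame, so
      an invalid value near ret_from_kernel_thread can send it in circles and overflow
      the trace. validate_sp() and STACK_FRAME_OVERHEAD are assumed per
      arch/powerpc/kernel/process.c; the loop below is illustrative.

          static int walk_stack_sketch(unsigned long sp, unsigned long *entries, int max)
          {
                  int n = 0;

                  while (n < max && validate_sp(sp, current, STACK_FRAME_OVERHEAD)) {
                          unsigned long *frame = (unsigned long *)sp;

                          entries[n++] = frame[2];        /* LR save slot on the 64-bit ABI */
                          if (frame[0] <= sp)             /* back chain must move towards the stack base */
                                  break;                  /* a bogus or looping chain would overflow the trace */
                          sp = frame[0];
                  }
                  return n;
          }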
  20. 22 Oct 2012, 1 commit
  21. 15 Oct 2012, 2 commits
  22. 01 Oct 2012, 2 commits
  23. 18 Sep 2012, 1 commit
  24. 05 Sep 2012, 1 commit
  25. 11 Jul 2012, 1 commit
  26. 03 Jul 2012, 1 commit
  27. 29 Jun 2012, 1 commit
    • ppc64: fix missing to check all bits of _TIF_USER_WORK_MASK in preempt · c58ce2b1
      Tiejun Chen authored
      In entry_64.S version of ret_from_except_lite, you'll notice that
      in the !preempt case, after we've checked MSR_PR we test for any
      TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
      or not. However, in the preempt case, we do a convoluted trick to
      test SIGPENDING only if PR was set and always test NEED_RESCHED ...
      but we forget to test any other bit of _TIF_USER_WORK_MASK !!! So
      that means that with preempt, we completely fail to test for things
      like single step, syscall tracing, etc...
      
      The fix changes the flow as follows:
      
       - Test PR. If not set, go to resume_kernel; else continue.
      
       - In resume_kernel, do the original do_work.
      
       - Otherwise, always test _TIF_USER_WORK_MASK to decide whether to do
      the original user_work, else restore directly.
      Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      c58ce2b1
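      A hedged C-level sketch of the corrected exception-return flow (the real code is in
      entry_64.S): coming from user space, the whole _TIF_USER_WORK_MASK must be checked,
      not just NEED_RESCHED and SIGPENDING, even with preemption enabled. The helper
      names are illustrative stubs.

          static void resume_kernel_sketch(void)           { /* kernel preemption path */ }
          static void do_user_work_sketch(unsigned long f) { (void)f; /* signals, single step, tracing */ }
          static void restore_registers_sketch(void)       { /* fast path straight back to user space */ }

          static void ret_from_except_sketch(int came_from_user, unsigned long ti_flags)
          {
                  if (!came_from_user) {
                          resume_kernel_sketch();
                          return;
                  }
                  if (ti_flags & _TIF_USER_WORK_MASK)      /* the full mask, not selected bits */
                          do_user_work_sketch(ti_flags);
                  else
                          restore_registers_sketch();
          }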
  28. 12 May 2012, 1 commit
    • powerpc/irq: Fix another case of lazy IRQ state getting out of sync · 7c0482e3
      Benjamin Herrenschmidt authored
      So we have another case of paca->irq_happened getting out of
      sync with the HW irq state. This can happen when a perfmon
      interrupt occurs while soft disabled, as it will return to a
      soft disabled but hard enabled context while leaving a stale
      PACA_IRQ_HARD_DIS flag set.
      
      This patch fixes it, and also adds a test for the condition
      of those flags being out of sync in arch_local_irq_restore()
      when CONFIG_TRACE_IRQFLAGS is enabled.
      
      This helps catch those gremlins faster (and so far I can't
      seem to see any anymore, so that's good news).
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      7c0482e3
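      A hedged sketch of the consistency check described above, not the kernel's exact
      code: with CONFIG_TRACE_IRQFLAGS, warn whenever the lazy-disable flag in
      paca->irq_happened disagrees with the hardware EE bit. PACA_IRQ_HARD_DIS,
      local_paca, mfmsr(), MSR_EE and WARN_ON() exist in the kernel; the placement and
      helper name are illustrative.

          static void check_lazy_irq_state_sketch(void)
          {
          #ifdef CONFIG_TRACE_IRQFLAGS
                  int hard_disabled = !(mfmsr() & MSR_EE);
                  int flagged = !!(local_paca->irq_happened & PACA_IRQ_HARD_DIS);

                  WARN_ON(hard_disabled != flagged);      /* catch a stale PACA_IRQ_HARD_DIS early */
          #endif
          }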
  29. 09 May 2012, 1 commit
  30. 30 Apr 2012, 1 commit