1. 18 March 2016, 1 commit
  2. 03 March 2016, 1 commit
    • powerpc/hw_breakpoint: Fix oops when destroying hw_breakpoint event · fb822e60
      Committed by Ravi Bangoria
      When destroying a hw_breakpoint event, the kernel oopses as follows:
      
        Unable to handle kernel paging request for data at address 0x00000c07
        NIP [c0000000000291d0] arch_unregister_hw_breakpoint+0x40/0x60
        LR [c00000000020b6b4] release_bp_slot+0x44/0x80
      
      Call chain:
      
        hw_breakpoint_event_init()
          bp->destroy = bp_perf_event_destroy;
      
        do_exit()
          perf_event_exit_task()
            perf_event_exit_task_context()
              WRITE_ONCE(child_ctx->task, TASK_TOMBSTONE);
              perf_event_exit_event()
                free_event()
                  _free_event()
                    bp_perf_event_destroy() // event->destroy(event);
                      release_bp_slot()
                        arch_unregister_hw_breakpoint()
      
      perf_event_exit_task_context() sets child_ctx->task to TASK_TOMBSTONE,
      which is (void *)-1. arch_unregister_hw_breakpoint() then tries to fetch
      the 'thread' attribute of that 'task', resulting in the oops.
      
      Peterz points out that the code shouldn't be using bp->ctx anyway, but
      fixing that will require a decent amount of rework. So for now to fix
      the oops, check if bp->ctx->task has been set to (void *)-1, before
      dereferencing it. We don't use TASK_TOMBSTONE, because that would
      require exporting it and it's supposed to be an internal detail.
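      
      A sketch of what the fixed check looks like in arch_unregister_hw_breakpoint()
      (written from memory of the powerpc hw_breakpoint code, so treat details such
      as the thread.last_hit_ubp field as assumptions rather than the verbatim hunk):
      
        /* arch/powerpc/kernel/hw_breakpoint.c (sketch) */
        void arch_unregister_hw_breakpoint(struct perf_event *bp)
        {
        	/*
        	 * bp->ctx->task may be TASK_TOMBSTONE, i.e. (void *)-1, when the
        	 * owning task is exiting, so don't dereference it in that case.
        	 */
        	if (bp->ctx && bp->ctx->task && bp->ctx->task != ((void *)-1L))
        		bp->ctx->task->thread.last_hit_ubp = NULL;
        }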
      
      Fixes: 63b6da39 ("perf: Fix perf_event_exit_task() race")
      Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      fb822e60
  3. 02 March 2016, 1 commit
    • arch/hotplug: Call into idle with a proper state · fc6d73d6
      Committed by Thomas Gleixner
      Let the non-boot CPUs call into idle with the corresponding hotplug state, so
      the hotplug core can handle the further bringup. That's a first step towards
      converting the boot side of the hotplugged CPUs to do all the synchronization
      with the other side through the state machine. For now it'll only start the
      hotplug thread and kick off the full bringup of the CPU.
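      
      As a rough illustration (using the powerpc secondary-startup path, one of the
      many architectures touched by this series; take the exact call site as an
      assumption), the idle entry changes like this:
      
        /* arch/powerpc/kernel/smp.c: tail of start_secondary() (sketch) */
        local_irq_enable();
      
        /* was: cpu_startup_entry(CPUHP_ONLINE); */
        cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
      
        BUG();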
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      fc6d73d6
  4. 29 February 2016, 3 commits
    • KVM: PPC: Book3S HV: Send IPI to host core to wake VCPU · e17769eb
      Committed by Suresh E. Warrier
      This patch adds support to real-mode KVM to search for a core
      running in the host partition and send it an IPI message with
      VCPU to be woken. This avoids having to switch to the host
      partition to complete an H_IPI hypercall when the VCPU which
      is the target of the H_IPI is not loaded (is not running
      in the guest).
      
      The patch also includes the support in the IPI handler running
      in the host to do the wakeup by calling kvmppc_xics_ipi_action
      for the PPC_MSG_RM_HOST_ACTION message.
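      
      A minimal sketch of that host-side dispatch; the surrounding demux loop and
      the IPI_MESSAGE() bit-test helper are assumptions about how the existing
      smp_ipi_demux() code is structured:
      
        /* arch/powerpc/kernel/smp.c: inside smp_ipi_demux() (sketch) */
        if (all & IPI_MESSAGE(PPC_MSG_RM_HOST_ACTION))
        	kvmppc_xics_ipi_action();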
      
      When a guest is being destroyed, we need to ensure that there
      are no pending IPIs waiting to wake up a VCPU before we free
      the VCPUs of the guest. This is accomplished by:
      - Forcing a PPC_MSG_CALL_FUNCTION IPI to be completed by all CPUs
        before freeing any VCPUs in kvm_arch_destroy_vm().
      - Executing any pending PPC_MSG_RM_HOST_ACTION messages before any
        other PPC_MSG_CALL_FUNCTION messages.
      Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      e17769eb
    • powerpc/smp: Add smp_muxed_ipi_set_message · 31639c77
      Committed by Suresh Warrier
      smp_muxed_ipi_message_pass() invokes smp_ops->cause_ipi, which
      uses an ioremapped address to access registers on the XICS
      interrupt controller to cause the IPI. Because of this, real-mode
      callers cannot call smp_muxed_ipi_message_pass() for IPI messaging.
      
      This patch creates a separate function smp_muxed_ipi_set_message
      just to set the IPI message without the cause_ipi routine.
      After calling this function to set the IPI message, real
      mode callers must cause the IPI by writing to the XICS registers
      directly.
      
      As part of this, we also change smp_muxed_ipi_message_pass
      to call smp_muxed_ipi_set_message to set the message instead
      of doing it directly inside the routine.
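      
      A sketch of the resulting split (the per-CPU bookkeeping names such as
      ipi_message and the info->data cookie handed to cause_ipi are reproduced
      from memory, not quoted from the patch):
      
        /* arch/powerpc/kernel/smp.c (sketch) */
        void smp_muxed_ipi_set_message(int cpu, int msg)
        {
        	struct cpu_messages *info = &per_cpu(ipi_message, cpu);
        	char *message = (char *)&info->messages;
      
        	/* Order previous accesses before accesses in the IPI handler. */
        	smp_mb();
        	message[msg] = 1;
        }
      
        void smp_muxed_ipi_message_pass(int cpu, int msg)
        {
        	struct cpu_messages *info = &per_cpu(ipi_message, cpu);
      
        	smp_muxed_ipi_set_message(cpu, msg);
        	/*
        	 * cause_ipi functions are required to include a full barrier
        	 * before doing whatever causes the IPI.
        	 */
        	smp_ops->cause_ipi(cpu, info->data);
        }
      
      Real-mode callers use only the first function and then poke the XICS
      registers themselves.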
      Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      31639c77
    • powerpc/smp: Support more IPI messages · bd7f561f
      Committed by Suresh Warrier
      This patch increases the number of demuxed messages for a
      controller with a single IPI to 8 for 64-bit systems.
      
      This is required because we want to use the IPI mechanism
      to send messages from a CPU running in KVM real mode in a
      guest to a CPU in the host to take some action. Currently,
      we only support 4 messages and all 4 are already taken.
      
      Define a fifth message PPC_MSG_RM_HOST_ACTION for this
      purpose.
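      
      Roughly, the message space ends up as below (the existing message names and
      numbers are listed from memory; on 64-bit the per-CPU message word is widened
      so that 8 one-byte message slots fit instead of 4):
      
        /* arch/powerpc/include/asm/smp.h (sketch) */
        #define PPC_MSG_CALL_FUNCTION   0
        #define PPC_MSG_RESCHEDULE      1
        #define PPC_MSG_TICK_BROADCAST  2
        #define PPC_MSG_DEBUGGER_BREAK  3
        /* new: lets real-mode KVM ask a host CPU to act on its behalf */
        #define PPC_MSG_RM_HOST_ACTION  4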
      Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      bd7f561f
  5. 28 February 2016, 1 commit
    • mm: ASLR: use get_random_long() · 5ef11c35
      Committed by Daniel Cashman
      Replace calls to get_random_int() followed by a cast to (unsigned long)
      with calls to get_random_long().  Also fix a shifting bug which, in the
      case of x86, removed the entropy mask for mmap_rnd_bits values > 31 bits.
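      
      The change at each call site has this shape (purely illustrative, not any
      particular architecture's exact line):
      
        unsigned long rnd;
      
        /* before: int-sized entropy, and an int-typed shift whose mask breaks
         * down once mmap_rnd_bits exceeds 31 */
        rnd = (unsigned long)get_random_int() & ((1 << mmap_rnd_bits) - 1);
      
        /* after */
        rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);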
      Signed-off-by: Daniel Cashman <dcashman@android.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Nick Kralevich <nnk@google.com>
      Cc: Jeff Vander Stoep <jeffv@google.com>
      Cc: Mark Salyzyn <salyzyn@android.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5ef11c35
  6. 22 February 2016, 1 commit
    • powerpc/eeh: Fix partial hotplug criterion · f6bf0fa1
      Committed by Gavin Shan
      During error recovery, the device could be removed as part of the
      partial hotplug. The criterion used to decide on partial hotplug
      is: if the device driver provides the error_detected(), slot_reset()
      and resume() callbacks, it's immune from hotplug. Otherwise,
      it's going to experience a partial hotplug during EEH recovery. But
      the criterion isn't accurate enough: the mlx4_core driver for Mellanox
      adapters provides the error_detected() and slot_reset() callbacks, but
      not resume(), so those Mellanox adapters end up needlessly subjected
      to the partial hotplug.
      
      This fixes the criterion to a more practical one: an adapter whose
      driver provides error_detected() and slot_reset() is immune from the
      partial hotplug; resume() isn't mandatory.
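      
      In code, the relaxed test looks something like the following (a sketch of the
      idea in the EEH recovery path; the 'keep' local is illustrative, not the exact
      hunk):
      
        /* arch/powerpc/kernel/eeh_driver.c (sketch): keep the device bound,
         * i.e. skip the partial hotplug, when the driver is EEH-aware enough */
        driver = eeh_pcid_get(dev);
        keep = driver && driver->err_handler &&
               driver->err_handler->error_detected &&
               driver->err_handler->slot_reset;
        /* resume() is deliberately no longer part of the test */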
      
      Fixes: f2da4ccf ("powerpc/eeh: More relaxed hotplug criterion")
      Cc: stable@vger.kernel.org #v4.4+
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f6bf0fa1
  7. 15 February 2016, 1 commit
  8. 09 February 2016, 1 commit
  9. 08 February 2016, 1 commit
    • powerpc: Fix dedotify for binutils >= 2.26 · f15838e9
      Committed by Andreas Schwab
      Since binutils 2.26 BFD is doing suffix merging on STRTAB sections.  But
      dedotify modifies the symbol names in place, which can also modify
      unrelated symbols with a name that matches a suffix of a dotted name.  To
      remove the leading dot of a symbol name we can just increment the pointer
      into the STRTAB section instead.
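      
      A sketch of dedotify() before and after; the point is that bumping st_name
      leaves the (now possibly shared) string bytes untouched:
      
        /* arch/powerpc/kernel/module_64.c: dedotify() (sketch) */
        for (i = 1; i < numsyms; i++) {
        	if (syms[i].st_shndx == SHN_UNDEF) {
        		char *name = strtab + syms[i].st_name;
      
        		if (name[0] == '.')
        			/* was: memmove(name, name+1, strlen(name));
        			 * which rewrites the string in place and can clobber
        			 * another symbol that merely shares the suffix */
        			syms[i].st_name++;	/* just skip the leading dot */
        	}
        }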
      
      Backport to all stables to avoid breakage when people update their
      binutils - mpe.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f15838e9
  10. 27 January 2016, 1 commit
  11. 21 January 2016, 3 commits
    • powerpc: Simplify module TOC handling · c153693d
      Committed by Alan Modra
      PowerPC64 uses the symbol .TOC. much as other targets use
      _GLOBAL_OFFSET_TABLE_. It identifies the value of the GOT pointer (or in
      powerpc parlance, the TOC pointer). Global offset tables are generally
      local to an executable or shared library, or, in the kernel, to a module. Thus
      it does not make sense for a module to resolve a relocation against
      .TOC. to the kernel's .TOC. value. A module has its own .TOC., and
      indeed the powerpc64 module relocation processing ignores the kernel
      value of .TOC. and instead calculates a module-local value.
      
      This patch removes code involved in exporting the kernel .TOC., tweaks
      modpost to ignore an undefined .TOC., and the module loader to twiddle
      the section symbol so that .TOC. isn't seen as undefined.
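      
      The module-loader half of that is roughly the following (a sketch of the idea,
      reconstructed from the description above rather than quoted from the patch):
      while walking the symbol table, stop an undefined .TOC. from being treated as
      an unresolved symbol.
      
        /* arch/powerpc/kernel/module_64.c (sketch) */
        if (syms[i].st_shndx == SHN_UNDEF &&
            strcmp(strtab + syms[i].st_name, ".TOC.") == 0)
        	syms[i].st_shndx = SHN_ABS;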
      
      Note that if the kernel was compiled with -msingle-pic-base then ELFv2
      would not have function global entry code setting up r2. In that case
      the module call stubs would need to be modified to set up r2 using the
      kernel .TOC. value, requiring some of this code to be reinstated.
      
      mpe: Furthermore a change in binutils master (not yet released) causes
      the current way we handle the TOC to no longer work when building with
      MODVERSIONS=y and RELOCATABLE=n. The symptom is that modules cannot be
      loaded due to there being no version found for TOC.
      
      Cc: stable@vger.kernel.org # 3.16+
      Signed-off-by: Alan Modra <amodra@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      c153693d
    • powerpc: enable UBSAN support · bf76f73c
      Committed by Daniel Axtens
      This hooks up UBSAN support for PowerPC.
      
      So far it's found some interesting cases where we don't properly sanitise
      input to shifts, including one in our futex handling.  Nothing critical,
      but interesting and worth fixing.
      
      [valentinrothberg@gmail.com: arch/powerpc/Kconfig: fix typo in select statement]
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Valentin Rothberg <valentinrothberg@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bf76f73c
    • powerpc/fadump: rename cpu_online_mask member of struct fadump_crash_info_header · a0512164
      Committed by Rasmus Villemoes
      The four cpumasks cpu_{possible,online,present,active}_bits are exposed
      readonly via the corresponding const variables cpu_xyz_mask.  But they are
      also accessible for arbitrary writing via the exposed functions
      set_cpu_xyz.  There's quite a bit of code throughout the kernel which
      iterates over or otherwise accesses these bitmaps, and having the access
      go via the cpu_xyz_mask variables is nowadays [1] simply a useless
      indirection.
      
      It may be that any problem in CS can be solved by an extra level of
      indirection, but that doesn't mean every extra indirection solves a
      problem.  In this case, it even necessitates some minor ugliness (see
      4/6).
      
      Patch 1/6 is new in v2, and fixes a build failure on ppc by renaming a
      struct member, to avoid problems when the identifier cpu_online_mask
      becomes a macro later in the series.  The next four patches eliminate the
      cpu_xyz_mask variables by simply exposing the actual bitmaps, after
      renaming them to discourage direct access - that still happens through
      cpu_xyz_mask, which are now simply macros with the same type and value as
      they used to have.
      
      After that, there's no longer any reason to have the setter functions be
      out-of-line: The boolean parameter is almost always a literal true or
      false, so by making them static inlines they will usually compile to one
      or two instructions.
      
      For a defconfig build on x86_64, bloat-o-meter says we save ~3000 bytes.
      We also save a little stack (stackdelta says 127 functions have a 16 byte
      smaller stack frame, while two grow by that amount).  Mostly because, when
      iterating over the mask, gcc typically loads the value of cpu_xyz_mask
      into a callee-saved register and from there into %rdi before each
      find_next_bit call - now it can just load the appropriate immediate
      address into %rdi before each call.
      
      [1] See Rusty's kind explanation
      http://thread.gmane.org/gmane.linux.kernel/2047078/focus=2047722 for
      some historic context.
      
      This patch (of 6):
      
      As preparation for eliminating the indirect access to the various global
      cpu_*_bits bitmaps via the pointer variables cpu_*_mask, rename the
      cpu_online_mask member of struct fadump_crash_info_header to simply
      online_mask, thus allowing cpu_online_mask to become a macro.
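      
      The rename itself is a one-field change of this shape (other members of the
      header are elided; only the renamed field is shown):
      
        /* arch/powerpc/include/asm/fadump.h (sketch) */
        struct fadump_crash_info_header {
        	/* ... other fields unchanged ... */
        	cpumask_t	online_mask;	/* was: cpu_online_mask */
        };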
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a0512164
  12. 13 January 2016, 1 commit
    • powerpc/module: Handle R_PPC64_ENTRY relocations · a61674bd
      Committed by Ulrich Weigand
      GCC 6 will include changes to generated code with -mcmodel=large,
      which is used to build kernel modules on powerpc64le.  This was
      necessary because the large model is supposed to allow arbitrary
      sizes and locations of the code and data sections, but the ELFv2
      global entry point prolog still made the unconditional assumption
      that the TOC associated with any particular function can be found
      within 2 GB of the function entry point:
      
      func:
      	addis r2,r12,(.TOC.-func)@ha
      	addi  r2,r2,(.TOC.-func)@l
      	.localentry func, .-func
      
      To remove this assumption, GCC will now generate instead this global
      entry point prolog sequence when using -mcmodel=large:
      
      	.quad .TOC.-func
      func:
      	.reloc ., R_PPC64_ENTRY
      	ld    r2, -8(r12)
      	add   r2, r2, r12
      	.localentry func, .-func
      
      The new .reloc triggers an optimization in the linker that will
      replace this new prolog with the original code (see above) if the
      linker determines that the distance between .TOC. and func is in
      range after all.
      
      Since this new relocation is now present in module object files,
      the kernel module loader is required to handle them too.  This
      patch adds support for the new relocation and implements the
      same optimization done by the GNU linker.
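      
      The loader-side handling is approximately the following; the opcode constants
      and the my_r2()/PPC_HA()/PPC_LO() helpers are reproduced from memory, so check
      the real apply_relocate_add() before relying on them:
      
        /* arch/powerpc/kernel/module_64.c: apply_relocate_add() (sketch) */
        case R_PPC64_ENTRY:
        	/*
        	 * Optimize the ELFv2 large-model entry point if the TOC turns
        	 * out to be within +/-2GB of this location after all.
        	 */
        	value = my_r2(sechdrs, me) - (unsigned long)location;
        	if (value + 0x80008000 > 0xffffffff)
        		break;
        	/*
        	 * Check for the large-model prolog:
        	 *	ld  r2, -8(r12)
        	 *	add r2, r2, r12
        	 */
        	if ((((uint32_t *)location)[0] & ~0xfffc) != 0xe84c0000)
        		break;
        	if (((uint32_t *)location)[1] != 0x7c426214)
        		break;
        	/*
        	 * Replace it with the short form:
        	 *	addis r2, r12, (.TOC.-func)@ha
        	 *	addi  r2, r2,  (.TOC.-func)@l
        	 */
        	((uint32_t *)location)[0] = 0x3c4c0000 + PPC_HA(value);
        	((uint32_t *)location)[1] = 0x38420000 + PPC_LO(value);
        	break;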
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Ulrich Weigand <ulrich.weigand@de.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a61674bd
  13. 11 January 2016, 1 commit
    • powerpc: Implement save_stack_trace_regs() to enable kprobe stack tracing · 35de3b1a
      Committed by Steven Rostedt
      It has come to my attention that kprobe event stack tracing does not
      work on powerpc. You can see with the following:
      
        # cd /sys/kernel/debug/tracing
        # echo stacktrace > trace_options
        # echo 'p kfree' > kprobe_events
        # echo 1 > events/kprobes/enable
      
      This will print the following warning:
        save_stack_trace_regs() not implemented yet.
      
      Although save_stack_trace() (which normal event stack traces use) is
      implemented, save_stack_trace_regs() which kprobe events use is not.
      This is a cheap attempt to implement that function.
      
      Note, this may have issues if a task tries to get a stack trace from
      another task with its regs, because it just passes in "current" to
      save_context_stack(). But this does solve the issue with stack tracing
      kprobe events.
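      
      The whole implementation is essentially a thin wrapper (sketch; it reuses the
      existing save_context_stack() helper and passes 'current', which is exactly
      the limitation the note above describes):
      
        /* arch/powerpc/kernel/stacktrace.c (sketch) */
        void save_stack_trace_regs(struct pt_regs *regs,
        			   struct stack_trace *trace)
        {
        	save_context_stack(trace, regs->gpr[1], current, 0);
        }
        EXPORT_SYMBOL_GPL(save_stack_trace_regs);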
      Reported-by: Chunyu Hu <chuhu@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      35de3b1a
  14. 27 December 2015, 1 commit
  15. 19 December 2015, 1 commit
  16. 17 December 2015, 9 commits
    • powerpc: Add missing calls to va_end() · 1b855e16
      Committed by Daniel Axtens
      cppcheck picked up that there were a couple of missing va_end()
      calls in functions using va_start().
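      
      The pattern being fixed is the usual one (a generic illustration with a
      made-up log_msg() helper, not one of the specific powerpc call sites):
      
        void log_msg(const char *fmt, ...)
        {
        	va_list args;
      
        	va_start(args, fmt);
        	vprintk(fmt, args);
        	va_end(args);	/* this was the missing piece */
        }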
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Reviewed-by: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      1b855e16
    • powerpc/476fpe: Add support for kexec · 4450022b
      Committed by Alistair Popple
      PPC476FPE has a different PVR from previous PPC476 processors. The
      kexec code checks the PVR in order to correctly setup the MMU. When
      the initial support for 476FPE processors was added the corresponding
      change in the kexec code was missed. This patch simply adds the check
      and solves the following bug on kexec:
      
      kexec: Starting new kernel
      Bye!
      Unable to handle kernel paging request for instruction fetch
      Faulting instruction address: 0xee9a50f8
      cpu 0x0: Vector: 400 (Instruction Access) at [ee9d7d20]
          pc: ee9a50f8
          lr: ee9a50e4
          sp: ee9d7dd0
          msr: 21020
          current = 0xee40f000
          pid   = 960, comm = kexec
      enter ? for help
      [link register   ] ee9a50e4
      [ee9d7dd0] c0013748 default_machine_kexec+0x58/0x70 (unreliable)
      [ee9d7df0] c0012f04 machine_kexec+0x34/0x40
      [ee9d7e00] c00aa1ec kernel_kexec+0x9c/0xb0
      [ee9d7e20] c005d704 SyS_reboot+0x1f4/0x220
      [ee9d7f40] c000db68 ret_from_syscall+0x0/0x3c
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      4450022b
    • powerpc/kernel: Combine vec/loc for STD_EXCEPTION_PSERIES · 2613265c
      Committed by Michael Ellerman
      The STD_EXCEPTION_PSERIES macro takes both a vector number, and a
      location (memory address). However both are always identical, so combine
      them to save repeating ourselves.
      
      This does mean an exception handler must always exist at the location in
      memory that matches its vector number. But that's OK because this is the
      "STD" macro (standard), which does exactly that. We have other macros
      for the other cases, eg. STD_EXCEPTION_PSERIES_OOL (out of line).
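      
      At the call sites the change has this shape (the program check vector is used
      purely as an illustration, and the argument order follows the macro's
      definition):
      
        /* before: location and vector passed separately, always equal */
        STD_EXCEPTION_PSERIES(0x700, 0x700, program_check)
        /* after */
        STD_EXCEPTION_PSERIES(0x700, program_check)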
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      2613265c
    • powerpc/kernel: Open code SET_DEFAULT_THREAD_PPR · d8725ce8
      Committed by Michael Ellerman
      This is only used in one location, open code it.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      d8725ce8
    • powerpc/kernel: Open code HMT_MEDIUM_LOW_HAS_PPR · d030a4b5
      Committed by Michael Ellerman
      HMT_MEDIUM_LOW_HAS_PPR is only used in once place, open code it.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      d030a4b5
    • powerpc/kernel: Drop HMT_MEDIUM_PPR_DISCARD · d6265aea
      Committed by Michael Ellerman
      HMT_MEDIUM_PPR_DISCARD is a macro which is present at the start of most
      of our first level exception handlers. It conditionally executes a
      HMT_MEDIUM instruction, which sets the processor priority to medium.
      
      On modern systems, i.e. Power7 and later, it is nop'ed out at boot.
      All it does is make the exception vectors more cramped, and consume 4
      bytes of icache.
      
      On old systems it has the effect of boosting the processor priority at
      the start of exception processing. If we were previously in the idle
      loop for example, we may be at low or very low priority. This is
      desirable as we want to process the exception as fast as possible.
      
      However looking closely at the generated code, we see that in all cases
      we execute another HMT_MEDIUM just four instructions later. With code
      patching applied, the final code on an old (Power6) system will look
      like, eg:
      
        c000000000000300 <data_access_pSeries>:
        c000000000000300:	7c 42 13 78	mr	r2,r2		<-
        c000000000000304:	7d b2 43 a6	mtsprg	2,r13
        c000000000000308:	7d b1 42 a6	mfsprg	r13,1
        c00000000000030c:	f9 2d 00 80	std	r9,128(r13)
        c000000000000310:	60 00 00 00	nop
        c000000000000314:	7c 42 13 78	mr	r2,r2		<-
      
      So I suggest that the added code complexity of HMT_MEDIUM_PPR_DISCARD is
      not justified by the benefit of boosting the processor priority for the
      duration of four instructions, and therefore we drop it.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      d6265aea
    • powerpc/rtas: Make enter_rtas() private · cd5cdeb6
      Committed by Michael Ellerman
      There are no longer any users of enter_rtas() outside of rtas.c, so make
      it "private", by moving the declaration inside rtas.c. Hopefully this
      will encourage people to use one of the wrappers which takes the sharp
      edges off the RTAS calling sequence.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      cd5cdeb6
    • powerpc/rtas: Use rtas_call_unlocked() in call_rtas_display_status() · 4456f452
      Committed by Michael Ellerman
      Although call_rtas_display_status() does actually want to use the
      regular RTAS locking, it doesn't want the extra logic that is in
      rtas_call(), so currently it open codes the logic.
      
      Instead we can use rtas_call_unlocked(), after taking the RTAS lock.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      4456f452
    • powerpc/rtas: Add rtas_call_unlocked() · 209eb4e5
      Committed by Michael Ellerman
      Most users of RTAS (Run-Time Abstraction Services) use rtas_call(),
      which deals with locking as well as endian handling.
      
      However we have two users outside of rtas.c that can't use rtas_call()
      because they have different locking requirements.
      
      The hotplug CPU code can't take the RTAS lock because the CPU would go
      offline with the lock held and no other CPUs would be able to call RTAS
      until the CPU came back online.
      
      The xmon code doesn't want to take the lock because it would risk
      deadlocking when we are trying to recover from a crash.
      
      Both sites required multiple patches when we added little endian
      support, proving that programmers can't do endian right.
      
      Although that ship has sailed, we can still clean the code up by
      providing an unlocked version of rtas_call() which avoids the need to
      open code the logic elsewhere.
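      
      Shape of the new helper (a sketch: the argument handling mirrors rtas_call()
      minus the locking, and both the prototype and the va_rtas_call_unlocked()
      helper it hands off to are written from memory):
      
        /* arch/powerpc/kernel/rtas.c (sketch) */
        void rtas_call_unlocked(struct rtas_args *args, int token,
        			int nargs, int nret, ...)
        {
        	va_list list;
      
        	/* The caller holds the RTAS lock, or knows why it must not
        	 * (CPU hotplug, xmon crash recovery). */
        	va_start(list, nret);
        	va_rtas_call_unlocked(args, token, nargs, nret, list);
        	va_end(list);
        }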
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      209eb4e5
  17. 16 December 2015, 1 commit
    • powerpc: Remove broken GregorianDay() · 00b912b0
      Committed by Daniel Axtens
      GregorianDay() is supposed to calculate the day of the week
      (tm->tm_wday) for a given day/month/year. In that calculation it
      indexed into an array called MonthOffset using tm->tm_mon-1. However
      tm_mon is zero-based, not one-based, so this is off-by-one. It also
      means that every January, GregorianDay() will access element -1 of
      the MonthOffset array.
      
      It also doesn't appear to be a correct algorithm either: see in
      contrast kernel/time/timeconv.c's time_to_tm function.
      
      It's been broken forever, which suggests no-one in userland uses
      this. It looks like no-one in the kernel uses tm->tm_wday either
      (see e.g. drivers/rtc/rtc-ds1305.c:319).
      
      tm->tm_wday is conventionally set to -1 when not available in
      hardware so we can simply set it to -1 and drop the function.
      (There are over a dozen other drivers in drivers/rtc that do
      this.)
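      
      So the replacement at the former call site is simply (sketch, assuming the
      caller is to_tm() in arch/powerpc/kernel/time.c):
      
        /* was: GregorianDay(tm); */
        tm->tm_wday = -1;	/* day of week not computed here */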
      
      Found using UBSAN.
      
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andrew Morton <akpm@linux-foundation.org> # as an example of what UBSan finds.
      Cc: Alessandro Zummo <a.zummo@towertech.it>
      Cc: Alexandre Belloni <alexandre.belloni@free-electrons.com>
      Cc: rtc-linux@googlegroups.com
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      00b912b0
  18. 14 December 2015, 2 commits
  19. 11 December 2015, 1 commit
    • PCI: Check for PCI_HEADER_TYPE_BRIDGE equality, not bitmask · 93de6901
      Committed by Bjorn Helgaas
      Bit 7 of the "Header Type" register indicates a multi-function device when
      set.  Bits 0-6 contain encoded values, where 0x1 indicates a PCI-PCI
      bridge.  It is incorrect to test this as though it were a mask.
      
      For example, while the PCI 3.0 spec only defines values 0x0, 0x1, and 0x2,
      it's conceivable that a future spec could define 0x3 to mean something
      else; then tests for "(hdr_type & 0x7f) & PCI_HEADER_TYPE_BRIDGE" would
      incorrectly succeed for this new 0x3 header type.
      
      Test bits 0-6 of the Header Type for equality with PCI_HEADER_TYPE_BRIDGE.
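      
      Concretely, the tests change like this (illustrative before/after; the
      setup_bridge() callee is made up for the example):
      
        u8 hdr_type;
      
        pci_read_config_byte(dev, PCI_HEADER_TYPE, &hdr_type);
      
        /* before: treats the encoded value as if it were a bitmask */
        if ((hdr_type & 0x7f) & PCI_HEADER_TYPE_BRIDGE)
        	setup_bridge(dev);
      
        /* after: compare the encoded value for equality */
        if ((hdr_type & 0x7f) == PCI_HEADER_TYPE_BRIDGE)
        	setup_bridge(dev);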
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      93de6901
  20. 10 December 2015, 4 commits
  21. 09 December 2015, 1 commit
    • Revert "powerpc/eeh: Don't unfreeze PHB PE after reset" · dc9c41bd
      Committed by Andrew Donnellan
      This reverts commit 527d10ef.
      
      The reverted commit breaks cxlflash devices following an EEH reset (and
      possibly other cxl devices, however this has not been tested).
      
      The reverted commit changed the behaviour of eeh_reset_device() so that PHB
      PEs are not unfrozen following the completion of the reset. This should not
      be problematic, as no device resources should have been associated with the
      PHB PE.
      
      However, when attempting to load the cxlflash driver after a reset, the
      driver attempts to read Vital Product Data through a call to
      pci_read_vpd() (which is called on the physical cxl device, not on the
      virtual AFU device). pci_read_vpd() in turn attempts to read from the cxl
      device's config space. This fails, as the PE it's trying to read from is
      still frozen. In turn, the driver gets an -ENODEV and fails to initialise.
      
      It appears this issue only affects some parts of the VPD area, as "lspci
      -vvv", which only reads a subset of the VPD bytes, is not broken by the
      original patch.
      
      At this stage, we don't fully understand why we're trying to read a frozen
      PE, and we don't know how this affects other cxl devices. It is possible
      that there is an underlying bug in the cxl driver or the powerpc CAPI
      support code, or alternatively a bug in the PCI resource allocation/mapping
      code that is incorrectly mapping resources to PE#0.
      
      As such, this fix is incomplete, however it is necessary to prevent a
      serious regression in CAPI support.
      
      In the meantime, revert the commit, especially as it was intended to be a
      non-functional change.
      
      Cc: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Cc: Ian Munsie <imunsie@au1.ibm.com>
      Cc: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      dc9c41bd
  22. 05 December 2015, 1 commit
  23. 02 December 2015, 2 commits