1. 09 March 2012, 15 commits
    • powerpc: Rework lazy-interrupt handling · 7230c564
      Benjamin Herrenschmidt authored
      The current implementation of lazy interrupts handling has some
      issues that this tries to address.
      
      We don't do the various workarounds we need when re-enabling
      interrupts in some cases, such as when returning from an interrupt,
      and thus we may still lose, or get delayed, decrementer or doorbell
      interrupts.
      
      The current scheme also makes it much harder to handle the external
      "edge" interrupts provided by some BookE processors when using the
      EPR facility (External Proxy) and the Freescale Hypervisor.
      
      Additionally, we tend to keep interrupts hard disabled in a number
      of cases, such as decrementer interrupts, external interrupts, or
      when a masked decrementer interrupt is pending. This is sub-optimal.
      
      This is an attempt at fixing it all in one go by reworking the way
      we do the lazy interrupt disabling from the ground up.
      
      The base idea is to replace the "hard_enabled" field with an
      "irq_happened" field in which we store a bit mask of which interrupts
      occurred while soft-disabled.
      
      When re-enabling, either via arch_local_irq_restore() or when returning
      from an interrupt, we can now decide what to do by testing bits in that
      field.
      
      We then implement replaying of the missed interrupts either by
      re-using the existing exception frame (in the exception exit case) or
      by creating a new one from an assembly trampoline (in the
      arch_local_irq_enable case).
      
      This removes the need to play with the decrementer to try to create
      fake interrupts, among others.
      
      In addition, this adds a few refinements:
      
       - We no longer hard disable decrementer interrupts that occur
      while soft-disabled. We now simply bump the decrementer back to max
      (on Book3S) or leave it stopped (on BookE) and continue with hard interrupts
      enabled, which means that we'll potentially get better sample quality from
      performance monitor interrupts.
      
       - Timer, decrementer and doorbell interrupts now hard-enable
      shortly after removing the source of the interrupt, which means
      they no longer run entirely hard disabled. Again, this will improve
      perf sample quality.
      
       - On Book3E 64-bit, we now make the performance monitor interrupt
      act as an NMI like Book3S (the necessary C code for that to work
      appears to already be present in the FSL perf code, notably calling
      nmi_enter instead of irq_enter). (This also fixes a bug where BookE
      perfmon interrupts could clobber r14 ... oops)
      
       - We could make "masked" decrementer interrupts act as NMIs when doing
      timer-based perf sampling to improve the sample quality.
      
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ---
      
      v2:
      
      - Add hard-enable to decrementer, timer and doorbells
      - Fix CR clobber in masked irq handling on BookE
      - Make embedded perf interrupt act as an NMI
      - Add a PACA_HAPPENED_EE_EDGE for use by FSL if they want
        to retrigger an interrupt without preventing hard-enable
      
      v3:
      
       - Fix or vs. ori bug on Book3E
       - Fix enabling of interrupts for some exceptions on Book3E
      
      v4:
      
       - Fix resend of doorbells on return from interrupt on Book3E
      
      v5:
      
       - Rebased on top of my latest series, which involves some significant
      rework of some aspects of the patch.
      
      v6:
       - 32-bit compile fix
       - more compile fixes with various .config combos
       - factor out the asm code to soft-disable interrupts
       - remove the C wrapper around preempt_schedule_irq
      
      v7:
       - Fix a bug with hard irq state tracking on native power7
      7230c564
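
      For readers wanting the shape of the new scheme in code, here is a minimal,
      self-contained C model of the "irq_happened" bookkeeping described above.
      The PACA_IRQ_* names follow the series, but the bit values and the plain-C
      replay loop are purely illustrative; the real bookkeeping lives in the PACA
      and in assembly trampolines.

      /*
       * Illustrative userspace model of the lazy-interrupt scheme: instead of a
       * single "hard_enabled" flag, a bitmask records which interrupt sources
       * fired while soft-disabled, and re-enabling replays them one by one.
       */
      #include <stdio.h>

      #define PACA_IRQ_HARD_DIS  0x01   /* hard interrupts were disabled */
      #define PACA_IRQ_DBELL     0x02   /* a doorbell arrived while masked */
      #define PACA_IRQ_EE        0x04   /* an external interrupt arrived */
      #define PACA_IRQ_DEC       0x08   /* a decrementer interrupt arrived */
      #define PACA_IRQ_EE_EDGE   0x10   /* edge interrupt needing a retrigger */

      static unsigned char irq_happened;   /* per-CPU, in the real PACA */
      static int soft_enabled;

      /* A masked interrupt only records itself and returns. */
      static void masked_interrupt(unsigned char reason)
      {
              irq_happened |= reason;
      }

      /* Re-enabling replays whatever was recorded, then clears the mask. */
      static void arch_local_irq_restore_model(void)
      {
              soft_enabled = 1;
              if (irq_happened & PACA_IRQ_DEC)
                      printf("replaying decrementer interrupt\n");
              if (irq_happened & PACA_IRQ_DBELL)
                      printf("replaying doorbell interrupt\n");
              if (irq_happened & (PACA_IRQ_EE | PACA_IRQ_EE_EDGE))
                      printf("replaying external interrupt\n");
              irq_happened = 0;
      }

      int main(void)
      {
              soft_enabled = 0;                 /* local_irq_disable() */
              masked_interrupt(PACA_IRQ_DEC);   /* timer fires while masked */
              masked_interrupt(PACA_IRQ_DBELL); /* doorbell fires while masked */
              arch_local_irq_restore_model();   /* both are replayed here */
              return 0;
      }

      Compiled with a stock C compiler, the model prints the decrementer and
      doorbell replays that would happen at the interrupt-restore point.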
    • powerpc: Replace mfmsr instructions with load from PACA kernel_msr field · d9ada91a
      Benjamin Herrenschmidt authored
      On 64-bit, the mfmsr instruction can be quite slow, slower
      than loading a field from the cache-hot PACA, which happens
      to already contain the value we want in most cases.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      d9ada91a
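
      As a rough illustration of the trade-off (a hedged sketch, not the patch
      itself, which changes assembly such as ld rN,PACAKMSR(r13)): the 64-bit
      kernel already keeps the MSR value it wants in the cache-hot PACA, so a
      load can stand in for the serialising mfmsr.

      #include <asm/paca.h>
      #include <asm/reg.h>

      static inline unsigned long kernel_msr_slow(void)
      {
              return mfmsr();                 /* SPR-style read, slow */
      }

      static inline unsigned long kernel_msr_fast(void)
      {
              return get_paca()->kernel_msr;  /* cache-hot load from the PACA */
      }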
    • powerpc: Fix 64-bit BookE FP unavailable exceptions · 9424fabf
      Benjamin Herrenschmidt authored
      We were using CR0.EQ after EXCEPTION_COMMON, hoping it still
      contained whether we came from userspace or kernel space.
      
      However, under some circumstances, EXCEPTION_COMMON will
      call C code and clobber non-volatile registers, so we really
      need to re-load the previous MSR from the stackframe and
      re-test.
      
      While there, invert the condition to make the fast path more
      obvious, remove the BUG_OPCODE, which was a debugging leftover,
      and call .ret_from_except as we should.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      9424fabf
    • powerpc: Fix register clobbering when accumulating stolen time · 990118c8
      Benjamin Herrenschmidt authored
      When running under a hypervisor that supports stolen time accounting,
      we may call C code from the macro EXCEPTION_PROLOG_COMMON in the
      exception entry path, which clobbers CR0.
      
      However, the FPU and vector traps rely on CR0 indicating whether we
      are coming from userspace or kernel to decide what to do.
      
      So we need to restore that value after the C call.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      990118c8
    • powerpc/xmon: Add display of soft & hard irq states · 7ac21cd4
      Benjamin Herrenschmidt authored
      Also use local_paca instead of get_paca() to avoid getting into
      the smp_processor_id() debugging code from within the debugger.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      7ac21cd4
    • powerpc: Add support for page fault retry and fatal signals · 9be72573
      Benjamin Herrenschmidt authored
      Other architectures such as x86 and ARM have been growing
      new support for features like retrying page faults after
      dropping the mm semaphore to break contention, or being
      able to return from a stuck page fault when a SIGKILL is
      pending.
      
      This refactors our implementation of do_page_fault() to
      move the error handling out of line in a way similar to
      x86 and adds support for those two features.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      9be72573
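
      The shape of the change follows the pattern x86 and ARM use; below is a
      condensed, hedged sketch of the retry/SIGKILL handling against the mm API
      of that era, not the literal powerpc do_page_fault().

      #include <linux/mm.h>
      #include <linux/sched.h>

      static int fault_with_retry(struct mm_struct *mm, struct vm_area_struct *vma,
                                  unsigned long address)
      {
              unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
              int fault;

      retry:
              fault = handle_mm_fault(mm, vma, address, flags);

              /* A pending SIGKILL lets the task bail out of a stuck fault. */
              if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current))
                      return 0;

              if (fault & VM_FAULT_RETRY) {
                      /*
                       * handle_mm_fault() dropped mmap_sem to break contention;
                       * take it again and retry once without ALLOW_RETRY.
                       */
                      flags &= ~FAULT_FLAG_ALLOW_RETRY;
                      down_read(&mm->mmap_sem);
                      goto retry;
              }
              return fault;
      }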
    • powerpc: Disable interrupts in 64-bit kernel FP and vector faults · 9f2f79e3
      Benjamin Herrenschmidt authored
      If we get a floating point, altivec or vsx unavailable interrupt in
      the kernel, we trigger a kernel error. There is no point preserving
      the interrupt state; in fact, that can even make debugging harder
      as the processor state might change (we may even preempt) between
      taking the exception and landing in a debugger.
      
      So just make those three handlers disable interrupts unconditionally.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ---
      
      v2: On BookE only disable when hitting the kernel unavailable
          path, otherwise it will fail to restore softe as
          fast_exception_return doesn't do it.
      9f2f79e3
    • powerpc: Call do_page_fault() with interrupts off · a546498f
      Benjamin Herrenschmidt authored
      We currently turn interrupts back to their previous state before
      calling do_page_fault(). This can be annoying when debugging as
      a bad fault will potentially have lost some processor state before
      getting into the debugger.
      
      We also end up calling some generic code, such as
      notify_page_fault(), with interrupts enabled, which could be
      unexpected.
      
      This changes our code to behave more like other architectures,
      and makes the assembly entry code call into do_page_fault() with
      interrupts disabled. They are conditionally re-enabled from
      within do_page_fault() in the same spot x86 does it.
      
      While there, add the might_sleep() test in the case of a successful
      trylock of the mmap semaphore, again like x86.
      
      Also fix a bug in the existing assembly where r12 (_MSR) could get
      clobbered by C calls (the DTL accounting in the exception common
      macro and DISABLE_INTS) in some cases.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ---
      
      v2. Add the r12 clobber fix
      a546498f
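
      A hedged sketch of the two behaviours described above, patterned on the
      equivalent x86 logic; the real powerpc code tests the saved MSR through a
      small helper rather than open-coding the MSR_EE check.

      #include <linux/kernel.h>
      #include <linux/mm.h>
      #include <linux/irqflags.h>
      #include <asm/ptrace.h>
      #include <asm/reg.h>

      static void page_fault_prologue(struct pt_regs *regs, struct mm_struct *mm)
      {
              /*
               * The assembly entry code now calls in with interrupts disabled;
               * only turn them back on if the interrupted context had them on.
               */
              if (regs->msr & MSR_EE)
                      local_irq_enable();

              if (down_read_trylock(&mm->mmap_sem)) {
                      might_sleep();  /* same sanity check x86 performs */
                      /* ... normal fault handling ... */
              }
      }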
    • powerpc: Improve behaviour of irq tracing on 64-bit exception entry · 1b701179
      Benjamin Herrenschmidt authored
      Some exceptions would unconditionally disable interrupts on entry,
      which is fine, but calling lockdep every time not only adds more
      overhead than strictly needed, it also means we get quite a few
      "redundant" disables logged, which makes it hard to spot the really
      bad ones.
      
      So instead, split the macro used by the exception code into a
      normal one and a separate one used when CONFIG_TRACE_IRQFLAGS is
      enabled, and make the latter skip the tracing if interrupts were
      already disabled.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      1b701179
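
      The idea can be rendered in C even though the actual macros are assembly;
      a hedged sketch (helper name illustrative): only tell lockdep about the
      disable when tracing is configured and the interrupted context really had
      interrupts enabled.

      #include <linux/irqflags.h>
      #include <asm/ptrace.h>
      #include <asm/reg.h>

      static inline void exception_entry_trace_irqs_off(struct pt_regs *regs)
      {
      #ifdef CONFIG_TRACE_IRQFLAGS
              /* Skip "redundant" disables so the interesting ones stand out. */
              if (regs->msr & MSR_EE)
                      trace_hardirqs_off();
      #endif
      }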
    • powerpc: Improve 64-bit syscall entry/exit · 1421ae0b
      Benjamin Herrenschmidt authored
      We currently hard-enable interrupts unconditionally on syscall entry.
      This is unnecessary, as syscalls are expected to always be called
      with interrupts enabled, so let's remove the enabling (and the
      associated irq tracing) from the syscall entry path.
      
      While at it, add a WARN_ON for the case where that expectation is
      violated, but only when CONFIG_TRACE_IRQFLAGS is enabled (we don't
      want to add overhead to the fast path otherwise).
      
      Also, on Book3S, replace a few mfmsr instructions with loads of
      PACAMSR from the PACA, which should be faster and schedule better.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      1421ae0b
    • powerpc: Rework runlatch code · fe1952fc
      Benjamin Herrenschmidt authored
      This moves the inlines into system.h and changes the runlatch
      code to use the thread local flags (non-atomic) rather than
      the TIF flags (atomic) to keep track of the latch state.
      
      The code to turn it back on in an asynchronous interrupt is
      now simplified and partially inlined.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      fe1952fc
    • powerpc: Use the same interrupt prolog for perfmon as other interrupts · 7450f6f0
      Benjamin Herrenschmidt authored
      The perfmon interrupt is the sole user of a special variant of the
      interrupt prolog which differs from the one used by external and timer
      interrupts in that it saves the non-volatile GPRs and doesn't turn the
      runlatch on.
      
      The former is unnecessary and the latter is arguably incorrect, so
      let's clean that up by using the same prolog. While at it we rename
      that prolog to use the _ASYNC prefix.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      7450f6f0
    • powerpc: Remove legacy iSeries bits from assembly files · 4f8cf36f
      Benjamin Herrenschmidt authored
      This removes the various bits of assembly in the kernel entry,
      exception handling and SLB management code that were specific
      to running under the legacy iSeries hypervisor which is no
      longer supported.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      4f8cf36f
    • powerpc: clean up vio.c · b0787660
      Stephen Rothwell authored
      This cleans up vio.c after the removal of the legacy iSeries platform.
      It also removes some no longer referenced include files.
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      b0787660
  2. 07 March 2012, 7 commits
  3. 27 February 2012, 4 commits
  4. 23 February 2012, 14 commits
    • powerpc/perf: Move perf core & PMU code into a subdirectory · f2699491
      Michael Ellerman authored
      The perf code has grown a lot since it started, and is big enough to
      warrant its own subdirectory. For reference it's ~60% bigger than the
      oprofile code. It declutters the kernel directory, makes it simpler to
      grep for "just perf stuff", and allows us to shorten some filenames.
      
      While we're at it, make it more obvious that we have two implementations
      of the core perf logic. One for (roughly) Book3S CPUs, which was the
      original implementation, and the other for Freescale embedded CPUs.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      f2699491
    • fadump: Remove the phyp assisted dump code. · 12d92992
      Mahesh Salgaonkar authored
      Remove the phyp assisted dump implementation which is not in use.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      12d92992
    • fadump: Invalidate the fadump registration during machine shutdown. · 67b43b9d
      Mahesh Salgaonkar authored
      If a dump is active during system reboot, shutdown or halt, then invalidate
      the fadump registration, as it does not get invalidated automatically.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      67b43b9d
    • fadump: Invalidate registration and release reserved memory for general use. · b500afff
      Mahesh Salgaonkar authored
      This patch introduces a sysfs interface, '/sys/kernel/fadump_release_mem', to
      invalidate the last fadump registration, invalidate '/proc/vmcore', release
      the reserved memory for general use and re-register for a future kernel dump.
      Once the dump is copied to disk, unlike with phyp dump, the userspace tool
      can release all the memory reserved for the dump with a single operation:
      echo 1 to '/sys/kernel/fadump_release_mem'.
      
      The reserved memory region is released excluding the memory required for a
      future kernel dump registration. Therefore, unlike kdump, fadump does not
      need a second reboot to get the system back to its production
      configuration.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      b500afff
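
      A hedged sketch of what such a "write 1 to trigger" sysfs handler typically
      looks like; the handler and helper names here are illustrative, not
      necessarily the ones used in fadump.c.

      #include <linux/errno.h>
      #include <linux/kobject.h>
      #include <linux/sysfs.h>

      static void fadump_invalidate_and_release(void);   /* hypothetical helper */

      static ssize_t fadump_release_memory_store(struct kobject *kobj,
                                                 struct kobj_attribute *attr,
                                                 const char *buf, size_t count)
      {
              if (buf[0] != '1')
                      return -EINVAL;

              /*
               * Invalidate the previous registration, hand the reserved region
               * back to the page allocator and re-register for a future dump.
               */
              fadump_invalidate_and_release();
              return count;
      }

      static struct kobj_attribute release_attr =
              __ATTR(fadump_release_mem, 0200, NULL, fadump_release_memory_store);

      Such an attribute would then appear under /sys/kernel once registered with
      sysfs_create_file(kernel_kobj, &release_attr.attr).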
    • fadump: Add PT_NOTE program header for vmcoreinfo · d34c5f26
      Mahesh Salgaonkar authored
      Introduce a PT_NOTE program header that points to the physical address of
      the vmcoreinfo_note buffer declared in kernel/kexec.c. The vmcoreinfo
      note buffer is populated during crash_fadump() at the time of a system
      crash.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      d34c5f26
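
      A hedged sketch of filling such a header. paddr_vmcoreinfo_note() and
      vmcoreinfo_max_size are the existing kexec-side symbols; the exact field
      choices are illustrative rather than the literal fadump code.

      #include <linux/elf.h>
      #include <linux/kexec.h>

      static void fill_vmcoreinfo_phdr(Elf64_Phdr *phdr)
      {
              phdr->p_type   = PT_NOTE;
              phdr->p_flags  = 0;
              phdr->p_vaddr  = 0;
              phdr->p_align  = 0;
              /* Physical address of the note buffer declared in kernel/kexec.c. */
              phdr->p_offset = paddr_vmcoreinfo_note();
              phdr->p_paddr  = phdr->p_offset;
              phdr->p_filesz = vmcoreinfo_max_size;
              phdr->p_memsz  = vmcoreinfo_max_size;
      }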
    • fadump: Convert firmware-assisted cpu state dump data into elf notes. · ebaeb5ae
      Mahesh Salgaonkar authored
      When registered for firmware-assisted dump on powerpc, the firmware
      preserves the registers of the active CPUs during a system crash. This
      patch reads the CPU register data stored in the firmware-assisted dump
      format (except for the crashing CPU), converts it into ELF notes and
      updates the PT_NOTE program header accordingly. The exact register state
      of the crashing CPU is saved to the fadump crash info structure in the
      scratch area during crash_fadump() and read during the second kernel boot.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      ebaeb5ae
    • fadump: Initialize elfcore header and add PT_LOAD program headers. · 2df173d9
      Mahesh Salgaonkar authored
      Build the crash memory range list by traversing system memory in the first
      kernel before we register for firmware-assisted dump. After successful dump
      registration, initialize the elfcore header and populate the PT_LOAD program
      headers with the crash memory ranges. The elfcore header is saved in the
      scratch area within the reserved memory. The scratch area starts at the end
      of the memory reserved for saving RMR region contents. The scratch area
      contains the fadump crash info structure, which holds a magic number for
      fadump validation and the physical address where the elfcore header can be
      found. This structure will also be used to pass some important crash info
      to the second kernel, which will help the second kernel populate the ELF
      core header with correct data before it gets exported through /proc/vmcore.
      Since the firmware preserves the entire partition memory at the time of
      crash, the contents of the scratch area are preserved until the second
      kernel boots.
      
      Since the memory dump exported through /proc/vmcore is in ELF format,
      similar to kdump, it lets us reuse the kdump infrastructure for dump capture
      and filtering. Unlike phyp dump, the userspace tool does not need to consult
      any sysfs interface while reading /proc/vmcore.
      
      NOTE: The current design does not address the possibility of introducing
      additional fields (in future) to this structure without affecting
      compatibility. It is on the TODO list to come up with a better approach to
      address this.
      
      Reserved dump area start => +-------------------------------------+
                                  |  CPU state dump data                |
                                  +-------------------------------------+
                                  |  HPTE region data                   |
                                  +-------------------------------------+
                                  |  RMR region data                    |
      Scratch area start       => +-------------------------------------+
                                  |  fadump crash info structure {      |
                            |     magic number                    |
                           +------|---- elfcorehdr_addr                 |
                           |      |  }                                  |
                           +----> +-------------------------------------+
                                  |  ELF core header                    |
      Reserved dump area end   => +-------------------------------------+
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      2df173d9
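
      The PT_LOAD side is conceptually a loop over the crash memory ranges; a
      hedged sketch follows. The range structure and the identity
      physical/offset mapping shown here are assumptions for illustration, not
      the literal fadump code.

      #include <linux/elf.h>
      #include <linux/types.h>

      struct crash_mem_range {                /* illustrative layout */
              unsigned long long base;
              unsigned long long size;
      };

      static void fill_pt_load_headers(Elf64_Ehdr *elf, Elf64_Phdr *phdr,
                                       const struct crash_mem_range *range,
                                       int nr_ranges)
      {
              int i;

              for (i = 0; i < nr_ranges; i++, phdr++) {
                      phdr->p_type   = PT_LOAD;
                      phdr->p_flags  = PF_R | PF_W | PF_X;
                      phdr->p_paddr  = range[i].base;
                      /* /proc/vmcore reads straight from preserved memory. */
                      phdr->p_offset = range[i].base;
                      phdr->p_vaddr  = 0;
                      phdr->p_filesz = range[i].size;
                      phdr->p_memsz  = range[i].size;
                      phdr->p_align  = 0;
                      elf->e_phnum++;
              }
      }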
    • fadump: Register for firmware assisted dump. · 3ccc00a7
      Mahesh Salgaonkar authored
      On 2012-02-20 11:02:51 Mon, Paul Mackerras wrote:
      > On Thu, Feb 16, 2012 at 04:44:30PM +0530, Mahesh J Salgaonkar wrote:
      >
      > If I have read the code correctly, we are going to get this printk on
      > non-pSeries machines or on older pSeries machines, even if the user
      > has not put the fadump=on option on the kernel command line.  The
      > printk will be annoying since there is no actual error condition.  It
      > seems to me that the condition for the printk should include
      > fw_dump.fadump_enabled.  In other words you should probably add
      >
      > 	if (!fw_dump.fadump_enabled)
      > 		return 0;
      >
      > at the beginning of the function.
      
      Hi Paul,
      
      Thanks for pointing it out. Please find the updated patch below.
      
      The existing patches above this one (4/10 through 10/10) apply cleanly
      on top of this update.
      
      Thanks,
      -Mahesh.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      3ccc00a7
    • fadump: Reserve the memory for firmware assisted dump. · eb39c880
      Mahesh Salgaonkar authored
      Reserve the memory during early boot to preserve CPU state data, HPTE region
      data and RMA (real mode area) region data in case of a kernel crash. At the
      time of the crash, powerpc firmware will store the CPU state data and HPTE
      region data, and move the RMA region data to the reserved memory area.
      
      If the firmware-assisted dump fails to reserve the memory, then fallback
      to existing kexec-based kdump.
      
      Most of the memory reservation code has been adapted from the
      phyp-assisted dump implementation written by Linas Vepstas and
      Manish Ahuja.
      
      This patch also introduces a config option, CONFIG_FA_DUMP, for the
      firmware-assisted dump feature on the PowerPC (ppc64) architecture.
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      eb39c880
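
      A hedged sketch of the early reservation with the kdump fallback mentioned
      above; the size calculation, helper name and placement at the end of DRAM
      are illustrative assumptions.

      #include <linux/kernel.h>
      #include <linux/memblock.h>

      static unsigned long fadump_calculate_reserve_size(void);   /* hypothetical */

      static int fadump_reserve_mem(void)
      {
              unsigned long base, size;

              size = fadump_calculate_reserve_size();
              base = memblock_end_of_DRAM() - size;

              if (memblock_reserve(base, size)) {
                      /* Reservation failed: fall back to kexec-based kdump. */
                      return 0;
              }

              pr_info("Reserved %luMB at 0x%lx for firmware-assisted dump\n",
                      size >> 20, base);
              return 1;
      }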
    • powerpc/mpic: Remove duplicate MPIC_WANTS_RESET flag · e55d7f73
      Kyle Moffett authored
      There are two separate flags controlling whether or not the MPIC is
      reset during initialization, which is completely unnecessary, and only
      one of them can be specified in the device tree.
      
      Also, most platforms in-tree right now do actually want to reset the
      MPIC during initialization anyway, which means lots of duplicate code
      passing the MPIC_WANTS_RESET flag.
      
      Fix all of the callers which currently do not pass the MPIC_WANTS_RESET
      flag to pass the MPIC_NO_RESET flag, then remove the MPIC_WANTS_RESET
      flag and make the code reset the MPIC by default.
      Signed-off-by: Kyle Moffett <Kyle.D.Moffett@boeing.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      e55d7f73
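
      The resulting logic is easiest to read as "reset unless told not to"; a
      hedged sketch of the check (flag names per the commit, surrounding code
      illustrative only).

      #include <asm/mpic.h>

      static void mpic_maybe_reset(struct mpic *mpic)
      {
              if (!(mpic->flags & MPIC_NO_RESET)) {
                      /* was: if (mpic->flags & MPIC_WANTS_RESET) */
                      /* ... perform the register-level MPIC reset ... */
              }
      }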
    • powerpc/mpic: Add "last-interrupt-source" property to override hardware · c1b8d45d
      Kyle Moffett authored
      The Freescale PowerQUICC-III-compatible (mpc85xx/mpc86xx) MPICs do not
      correctly report the number of hardware interrupt sources, so software
      needs to override the detected value with 256.
      
      To avoid needing to write custom board-specific code to detect that
      scenario, allow it to be easily overridden in the device-tree.
      Signed-off-by: Kyle Moffett <Kyle.D.Moffett@boeing.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      c1b8d45d
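
      A hedged sketch of reading such a device-tree override: the property name
      is per the commit, while the "+ 1" (assuming the property names the last
      valid source number) and the surrounding code are illustrative.

      #include <linux/of.h>

      static unsigned int mpic_source_count(struct device_node *node,
                                            unsigned int detected)
      {
              u32 last;

              /* e.g. PowerQUICC-III parts want 256 sources (last source 255). */
              if (!of_property_read_u32(node, "last-interrupt-source", &last))
                      return last + 1;

              return detected;
      }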
    • powerpc/mpic: Remove MPIC_BROKEN_FRR_NIRQS and duplicate irq_count · 5019609f
      Kyle Moffett authored
      The mpic->irq_count variable is only used as a software error-checking
      limit to determine whether or not an IRQ number is valid.  In board code
      which does not manually specify an IRQ count to mpic_alloc(), i.e. 0, it
      is automatically detected from the number of ISUs and the ISU size.
      
      In practice, all hardware ends up with irq_count == num_sources, so all
      of the runtime checks on mpic->irq_count should just check the value of
      mpic->num_sources instead.
      
      When platform hardware does not correctly report the number of IRQs,
      which only happens on the MPC85xx/MPC86xx, the MPIC_BROKEN_FRR_NIRQS
      flag is used to override the detected value of num_sources with the
      manual irq_count parameter.  Since there's no need to manually specify
      the number of IRQs except in this case, the extra flag can be eliminated
      and the test changed to "irq_count != 0".
      Signed-off-by: Kyle Moffett <Kyle.D.Moffett@boeing.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      5019609f
    • fsl/mpic: Create and document the "single-cpu-affinity" device-tree flag · 9ca163c8
      Kyle Moffett authored
      The Freescale MPIC (and perhaps others in the future) is incapable of
      routing non-IPI interrupts to more than one CPU at a time.  Currently
      all of the Freescale boards must pass the MPIC_SINGLE_DEST_CPU flag to
      mpic_alloc(), but that information should really be present in the
      device-tree.
      
      Older board code can't rely on the device-tree having the property set,
      but newer platforms won't need it manually specified in the code.
      
      [BenH: Remove unrelated changes, folded in a different patch]
      Signed-off-by: Kyle Moffett <Kyle.D.Moffett@boeing.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      9ca163c8
    • fsl/mpic: Document and use the "big-endian" device-tree flag · 98cca250
      Kyle Moffett authored
      The MPIC code checks for a "big-endian" property and sets the flag
      MPIC_BIG_ENDIAN if one is present, although prior to the "mpic->flags"
      fixup that would never have worked anyway.
      
      Unfortunately, even now that it works properly, the Freescale mpic
      device-node (the "PowerQUICC-III"-compatible one) does not specify it,
      so all of the board ports need to manually pass it to mpic_alloc().
      
      Document the flag and add it to the pq3 device tree.  Existing code will
      still need to pass the MPIC_BIG_ENDIAN flag because its dtb may not
      have this property, but new platforms shouldn't need to do so.
      Signed-off-by: Kyle Moffett <Kyle.D.Moffett@boeing.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      98cca250