1. 09 Jul 2010, 6 commits
    • powerpc/fsl-booke: Fix comments in mmu code that mention BATS · d10ac373
      Authored by Becky Bruce
      There are no BATs on BookE - we have the TLBCAM instead.  Also correct
      the page size information to include extended sizes.  We don't actually
      allow a 4G page size to be used, so comment on that as well.
      Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/pseries/eeh: Use for_each_pci_dev() · 6901c6cc
      Authored by Kulikov Vasiliy
      Use for_each_pci_dev() to simplify the code.
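
      As a rough illustration (not the eeh code itself): for_each_pci_dev(d)
      expands to a pci_get_device(PCI_ANY_ID, PCI_ANY_ID, d) loop, so the
      conversion replaces the open-coded while loop with the helper.

      #include <linux/kernel.h>
      #include <linux/pci.h>

      /* Minimal sketch: iterate over every PCI device in the system.  The
       * printout is illustrative; the eeh code walks the devices to set up
       * EEH state instead. */
      static void __init walk_pci_devices(void)
      {
      	struct pci_dev *dev = NULL;

      	for_each_pci_dev(dev)
      		pr_info("pci device: %s\n", pci_name(dev));
      }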
      Signed-off-by: Kulikov Vasiliy <segooon@gmail.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/pseries: Partition hibernation support · 32d8ad4e
      Authored by Brian King
      Enables support for HMC-initiated partition hibernation.  This is
      a firmware-assisted hibernation: the firmware handles writing the
      memory out to disk, along with other partition information, so we
      just mimic suspend-to-RAM.
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/pseries: Migration code reorganization / hibernation prep · 8fe93f8d
      Authored by Brian King
      Partition hibernation will use some of the same code as is
      currently used for Live Partition Migration.  This patch further
      abstracts that code so that code outside of rtas.c can use it.
      It also changes the error field in the suspend-me data structure
      to an atomic type, since it is set and checked on different CPUs
      without any barriers or locking.
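
      A minimal sketch of that pattern, assuming a current kernel's
      <linux/atomic.h>; the structure and helper names here are illustrative
      stand-ins for the rtas suspend-me data, not the real definitions.

      #include <linux/atomic.h>

      struct suspend_me_state {
      	atomic_t error;		/* set on one CPU, read on another */
      };

      /* atomic_set()/atomic_read() guarantee the store and load are not
       * torn, which is what the field needs when it is written and checked
       * on different CPUs without any other locking. */
      static void record_error(struct suspend_me_state *s, int rc)
      {
      	atomic_set(&s->error, rc);
      }

      static int collect_error(struct suspend_me_state *s)
      {
      	return atomic_read(&s->error);
      }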
      Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Clean up obsolete code relating to decrementer and timebase · c1aa687d
      Authored by Paul Mackerras
      Since the decrementer and timekeeping code was moved over to using
      the generic clockevents and timekeeping infrastructure, several
      variables and functions have become obsolete and effectively unused.
      This deletes them.
      
      In particular, wakeup_decrementer() is no longer needed since the
      generic code reprograms the decrementer as part of the process of
      resuming the timekeeping code, which happens during sysdev resume.
      Thus the wakeup_decrementer calls in the suspend_enter methods for
      52xx platforms have been removed.  The call in the powermac cpu
      frequency change code has been replaced by set_dec(1), which will
      cause a timer interrupt as soon as interrupts are enabled, and the
      generic code will then reprogram the decrementer with the correct
      value.
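
      A hedged sketch of the replacement in the powermac cpufreq path; the
      surrounding function name and sequencing here are illustrative, only
      the set_dec(1) idiom is the point.

      #include <linux/irqflags.h>
      #include <asm/time.h>

      /* Instead of calling the removed wakeup_decrementer(), arm the
       * decrementer with a tiny value; the interrupt is taken as soon as
       * interrupts are re-enabled, and the generic clockevents code then
       * reprograms the decrementer with the correct period. */
      static void kick_decrementer_after_freq_change(void)
      {
      	set_dec(1);
      	local_irq_enable();
      }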
      
      This also simplifies the generic_suspend_en/disable_irqs functions
      and makes them static since they are not referenced outside time.c.
      The preempt_enable/disable calls are removed because the generic
      code has disabled all but the boot cpu at the point where these
      functions are called, so we can't be moved to another cpu.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Rework VDSO gettimeofday to prevent time going backwards · 8fd63a9e
      Authored by Paul Mackerras
      Currently it is possible for userspace to see the result of
      gettimeofday() going backwards by 1 microsecond, assuming that
      userspace is using the gettimeofday() in the VDSO.  The VDSO
      gettimeofday() algorithm computes the time in "xsecs", which are
      units of 2^-20 seconds, or approximately 0.954 microseconds,
      using the algorithm
      
      	now = (timebase - tb_orig_stamp) * tb_to_xs + stamp_xsec
      
      and then converts the time in xsecs to seconds and microseconds.
      
      The kernel updates the tb_orig_stamp and stamp_xsec values every
      tick in update_vsyscall().  If the length of the tick is not an
      integer number of xsecs, then some precision is lost in converting
      the current time to xsecs.  For example, with CONFIG_HZ=1000, the
      tick is 1ms long, which is 1048.576 xsecs.  That means that
      stamp_xsec will advance by either 1048 or 1049 on each tick.
      With the right conditions, it is possible for userspace to get
      (timebase - tb_orig_stamp) * tb_to_xs being 1049 if the kernel is
      slightly late in updating the vdso_datapage, and then for stamp_xsec
      to advance by 1048 when the kernel does update it, and for userspace
      to then see (timebase - tb_orig_stamp) * tb_to_xs being zero due to
      integer truncation.  The result is that time appears to go backwards
      by 1 microsecond.
      
      To fix this we change the VDSO gettimeofday to use a new field in the
      VDSO datapage which stores the nanoseconds part of the time as a
      fractional number of seconds in a 0.32 binary fraction format.
      (Or put another way, as a 32-bit number in units of 0.23283 ns.)
      This is convenient because we can use the mulhwu instruction to
      convert it to either microseconds or nanoseconds.
      
      Since it turns out that computing the time of day using this new field
      is simpler than either using stamp_xsec (as gettimeofday does) or
      stamp_xtime.tv_nsec (as clock_gettime does), this converts both
      gettimeofday and clock_gettime to use the new field.  The existing
      __do_get_tspec function is converted to use the new field and take
      a parameter in r7 that indicates the desired resolution, 1,000,000
      for microseconds or 1,000,000,000 for nanoseconds.  The __do_get_xsec
      function is then unused and is deleted.
      
      The new algorithm is
      
      	now = ((timebase - tb_orig_stamp) << 12) * tb_to_xs
      		+ (stamp_xtime_seconds << 32) + stamp_sec_fraction
      
      with 'now' in units of 2^-32 seconds.  That is then converted to
      seconds and either microseconds or nanoseconds with
      
      	seconds = now >> 32
      	partseconds = ((now & 0xffffffff) * resolution) >> 32
      
      The 32-bit VDSO code also makes a further simplification: it ignores
      the bottom 32 bits of the tb_to_xs value, which is a 0.64 format binary
      fraction.  Doing so gets rid of 4 multiply instructions.  Assuming
      a timebase frequency of 1GHz or less and an update interval of no
      more than 10ms, the upper 32 bits of tb_to_xs will be at least
      4503599, so the error from ignoring the low 32 bits will be at most
      2.2ns, which is more than an order of magnitude less than the time
      taken to do gettimeofday or clock_gettime on our fastest processors,
      so there is no possibility of seeing inconsistent values due to this.
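
      A worked sketch of that computation in C (the real implementation is
      VDSO assembly; the structure and field names below only mirror the
      vdso_datapage fields, and the 32-bit-truncated tb_to_xs is used as in
      the 32-bit VDSO path).

      #include <stdint.h>

      struct vdso_snapshot {
      	uint64_t tb_orig_stamp;        /* timebase value at last update */
      	uint32_t tb_to_xs_hi;          /* top 32 bits of the 0.64 tb->xsec fraction */
      	uint32_t stamp_xtime_sec;      /* seconds at last update */
      	uint32_t stamp_sec_fraction;   /* 0.32 fraction of a second at last update */
      };

      /* resolution is 1000000 for microseconds or 1000000000 for nanoseconds. */
      static void vdso_style_gettime(const struct vdso_snapshot *s, uint64_t tb,
      			       uint32_t resolution, uint32_t *sec, uint32_t *frac)
      {
      	uint64_t delta = tb - s->tb_orig_stamp;

      	/* (delta << 12) * tb_to_xs gives time since the stamp in 2^-32 s
      	 * units; using only the top 32 bits of tb_to_xs loses at most
      	 * about 2.2 ns, as argued above. */
      	uint64_t since = (uint64_t)((((unsigned __int128)delta << 12) *
      				     s->tb_to_xs_hi) >> 32);

      	uint64_t now = ((uint64_t)s->stamp_xtime_sec << 32)
      		     + s->stamp_sec_fraction + since;

      	*sec  = now >> 32;
      	*frac = ((now & 0xffffffffULL) * resolution) >> 32;
      }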
      
      This also moves update_gtod() down next to its only caller, and makes
      update_vsyscall use the time passed in via the wall_time argument rather
      than accessing xtime directly.  At present, wall_time always points to
      xtime, but that could change in future.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  2. 08 Jul 2010, 10 commits
  3. 30 Jun 2010, 1 commit
    • powerpc, hw_breakpoint: Tell generic code we have no instruction breakpoints · d09ec738
      Authored by Paul Mackerras
      At present, hw_breakpoint_slots() returns 1 regardless of what
      type of breakpoint is specified in the type argument.  Since we
      don't define CONFIG_HAVE_MIXED_BREAKPOINTS_REGS, there are
      separate values for TYPE_INST and TYPE_DATA, and hw_breakpoint_slots()
      returns 1 for both, effectively advertising instruction breakpoint
      support which doesn't exist.
      
      This fixes it by making hw_breakpoint_slots return 1 for TYPE_DATA
      and 0 for TYPE_INST.  This moves hw_breakpoint_slots() from the
      powerpc hw_breakpoint.h to hw_breakpoint.c because the definitions
      of TYPE_INST and TYPE_DATA aren't available in <asm/hw_breakpoint.h>.
      They are defined in <linux/hw_breakpoint.h> but we can't include
      that header in <asm/hw_breakpoint.h>, and nor can we rely on
      <linux/hw_breakpoint.h> being included before <asm/hw_breakpoint.h>.
      Since hw_breakpoint_slots() is only called at boot time, there is
      no performance impact from making it a real function rather than
      a static inline.
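
      In sketch form, the resulting function looks roughly like this (the
      real definition lives in arch/powerpc/kernel/hw_breakpoint.c, where
      TYPE_DATA from <linux/hw_breakpoint.h> is visible; HBP_NUM is the
      powerpc slot-count constant).

      #include <linux/hw_breakpoint.h>

      /* One DABR-backed data breakpoint slot, no instruction breakpoints. */
      int hw_breakpoint_slots(int type)
      {
      	if (type == TYPE_DATA)
      		return HBP_NUM;		/* a single DABR on Book3S-64 */
      	return 0;
      }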
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  4. 23 Jun 2010, 2 commits
  5. 22 Jun 2010, 5 commits
    • powerpc, hw_breakpoint: Discard extraneous interrupt due to accesses outside symbol length · e3e94084
      Authored by K.Prasad
      The requested breakpoint length can often be less than the fixed
      breakpoint length, i.e. the 8 bytes supported by PowerPC 64-bit
      server (Book III S) processors.  This could lead to extraneous
      interrupts resulting in false breakpoint notifications.  This
      detects and discards such interrupts for non-ptrace requests.
      We don't change ptrace behaviour, to avoid breaking compatibility.
      
      [Suggestion from Paul Mackerras <paulus@samba.org> to add a new flag in
      'struct arch_hw_breakpoint' to identify extraneous interrupts]
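
      A hedged sketch of the check this describes: the DABR matches any
      access in a naturally aligned 8-byte region, so a hit whose faulting
      data address falls outside the user-requested range is treated as
      extraneous and not reported.  The bp_addr/bp_len fields are from
      struct perf_event_attr; the helper name is illustrative.

      #include <linux/hw_breakpoint.h>
      #include <linux/perf_event.h>

      /* Return true if the access at 'dar' (the faulting data address)
       * lies outside the address range the user actually asked to watch. */
      static bool access_outside_requested_range(struct perf_event *bp,
      					   unsigned long dar)
      {
      	return dar < bp->attr.bp_addr ||
      	       dar - bp->attr.bp_addr >= bp->attr.bp_len;
      }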
      Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc, hw_breakpoint: Enable hw-breakpoints while handling intervening signals · 06532a67
      Authored by K.Prasad
      A signal delivered between a hw_breakpoint_handler() and the
      single_step_dabr_instruction() will not have the breakpoint active
      while the signal handler is running -- the signal delivery will
      set up a new MSR value which will not have MSR_SE set, so we
      won't get the single-step interrupt until and unless the signal
      handler returns (which it may never do).
      
      To fix this, we restore the breakpoint when delivering a signal --
      we clear the MSR_SE bit and set the DABR again.  If the signal
      handler returns, the DABR interrupt will occur again when the
      instruction that we were originally trying to single-step gets
      re-executed.
      
      [Paul Mackerras <paulus@samba.org> pointed out the need to do this.]
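
      Roughly the shape of the restore step described above, performed at
      signal delivery; the helper and parameter names are illustrative, and
      set_dabr() is the DABR-setting interface as it existed at the time.

      #include <asm/ptrace.h>
      #include <asm/reg.h>

      /* Stop single-stepping across the signal handler and re-arm the data
       * breakpoint, so the DABR interrupt recurs if the instruction we were
       * originally single-stepping is ever re-executed. */
      static void rearm_breakpoint_for_signal(struct pt_regs *regs,
      					unsigned long dabr_value)
      {
      	regs->msr &= ~MSR_SE;
      	set_dabr(dabr_value);
      }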
      Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc, hw_breakpoint: Handle concurrent alignment interrupts · 2538c2d0
      Authored by K.Prasad
      If an alignment interrupt occurs on an instruction that is being
      single-stepped, the alignment interrupt handler currently handles
      the single-step condition by unconditionally sending a SIGTRAP to
      the process.  Other synchronous interrupts that result in the
      instruction being emulated do likewise.
      
      With hw_breakpoint support, the hw_breakpoint code needs to be able
      to intercept these single-step events as well as those where the
      instruction executes normally and a trace interrupt happens.
      
      Fix this by making emulate_single_step() use the existing
      single_step_exception() function instead of calling _exception()
      directly.  We then make single_step_exception() use the abstracted
      clear_single_step() rather than clearing bits in the MSR image
      directly so that emulate_single_step() will continue to work
      correctly on Book 3E processors.
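
      The conversion itself is small; in sketch form (function names as in
      arch/powerpc/kernel/traps.c, presented as an illustration rather than
      the exact diff):

      /* Instead of raising SIGTRAP directly, reuse single_step_exception(),
       * which clears the single-step state via clear_single_step() and goes
       * through the normal debugger/hw-breakpoint notification path. */
      static void emulate_single_step(struct pt_regs *regs)
      {
      	if (single_stepping(regs))
      		single_step_exception(regs);
      }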
      Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc, hw_breakpoints: Implement hw_breakpoints for 64-bit server processors · 5aae8a53
      Authored by K.Prasad
      Implement perf-events based hw-breakpoint interfaces for PowerPC
      64-bit server (Book III S) processors.  This allows access to a
      given location to be used as an event that can be counted or
      profiled by the perf_events subsystem.
      
      This is done using the DABR (data breakpoint register), which can
      also be used for process debugging via ptrace.  When perf_event
      hw_breakpoint support is configured in, the perf_event subsystem
      manages the DABR and arbitrates access to it, and ptrace then
      creates a perf_event when it is requested to set a data breakpoint.
      
      [Adopted suggestions from Paul Mackerras <paulus@samba.org> to
      - emulate_step() all system-wide breakpoints and single-step only the
        per-task breakpoints
      - perform arch-specific cleanup before unregistration through
        arch_unregister_hw_breakpoint()
      ]
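
      From userspace, the new support can be exercised with an ordinary
      perf_event_open() hardware breakpoint; a minimal sketch (error handling
      omitted, the watched address and length are illustrative):

      #include <linux/hw_breakpoint.h>
      #include <linux/perf_event.h>
      #include <string.h>
      #include <sys/syscall.h>
      #include <unistd.h>

      static long watched;

      int main(void)
      {
      	struct perf_event_attr attr;
      	long long hits = 0;
      	int fd;

      	memset(&attr, 0, sizeof(attr));
      	attr.size = sizeof(attr);
      	attr.type = PERF_TYPE_BREAKPOINT;
      	attr.bp_type = HW_BREAKPOINT_W;		/* count writes */
      	attr.bp_addr = (unsigned long)&watched;
      	attr.bp_len = HW_BREAKPOINT_LEN_8;	/* DABR granularity */

      	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);

      	watched = 42;				/* should register one hit */

      	read(fd, &hits, sizeof(hits));
      	return hits == 1 ? 0 : 1;
      }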
      Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • powerpc: Emulate most Book I instructions in emulate_step() · 0016a4cf
      Authored by Paul Mackerras
      This extends the emulate_step() function to handle a large proportion
      of the Book I instructions implemented on current 64-bit server
      processors.  The aim is to handle all the load and store instructions
      used in the kernel, plus all of the instructions that appear between
      l[wd]arx and st[wd]cx., so this handles the Altivec/VMX lvx and stvx
      and the VSX lxvd2x and stxvd2x instructions (implemented in POWER7).
      
      The new code can emulate user mode instructions, and checks the
      effective address for a load or store if the saved state is for
      user mode.  It doesn't handle little-endian mode at present.
      
      For floating-point, Altivec/VMX and VSX instructions, it checks
      that the saved MSR has the enable bit for the relevant facility
      set, and if so, assumes that the FP/VMX/VSX registers contain
      valid state, and does loads or stores directly to/from the
      FP/VMX/VSX registers, using assembly helpers in ldstfp.S.
      
      Instructions supported now include:
      * Loads and stores, including some but not all VMX and VSX instructions,
        and lmw/stmw
      * Atomic loads and stores (l[dw]arx, st[dw]cx.)
      * Arithmetic instructions (add, subtract, multiply, divide, etc.)
      * Compare instructions
      * Rotate and mask instructions
      * Shift instructions
      * Logical instructions (and, or, xor, etc.)
      * Condition register logical instructions
      * mtcrf, cntlz[wd], exts[bhw]
      * isync, sync, lwsync, ptesync, eieio
      * Cache operations (dcbf, dcbst, dcbt, dcbtst)
      
      The overflow-checking arithmetic instructions are not included, but
      they do not appear to be used in C code anyway.
      
      This uses decimal values for the minor opcodes in the switch statements
      because that is what appears in the Power ISA specification, thus it is
      easier to check that they are correct if they are in decimal.
      
      If this is used to single-step an instruction where a data breakpoint
      interrupt occurred, then there is the possibility that the instruction
      is a lwarx or ldarx.  In that case we have to be careful not to lose the
      reservation until we get to the matching st[wd]cx., or we'll never make
      forward progress.  One alternative is to try to arrange that we can
      return from interrupts and handle data breakpoint interrupts without
      losing the reservation, which means not using any spinlocks, mutexes,
      or atomic ops (including bitops).  That seems rather fragile.  The
      other alternative is to emulate the larx/stcx and all the instructions
      in between.  This is why this commit adds support for a wide range
      of integer instructions.
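
      For reference, the kind of sequence this has to cope with is the
      standard powerpc atomic-update loop below: if the reservation taken by
      lwarx is lost before the matching stwcx., the store fails and the loop
      retries, so single-stepping it while taking locks or interrupts in
      between could spin forever.  A hedged sketch using GCC inline asm:

      static inline void atomic_style_inc(int *v)
      {
      	int tmp;

      	asm volatile(
      "1:	lwarx	%0,0,%2\n"	/* load word and take a reservation */
      "	addi	%0,%0,1\n"
      "	stwcx.	%0,0,%2\n"	/* store succeeds only if reservation held */
      "	bne-	1b"		/* lost the reservation: retry */
      	: "=&r" (tmp), "+m" (*v)
      	: "r" (v)
      	: "cc");
      }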
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  6. 16 Jun 2010, 2 commits
  7. 15 Jun 2010, 9 commits
  8. 12 Jun 2010, 1 commit
  9. 09 Jun 2010, 1 commit
  10. 07 Jun 2010, 1 commit
  11. 03 Jun 2010, 1 commit
  12. 02 Jun 2010, 1 commit