1. 14 Oct 2011, 1 commit
  2. 12 Oct 2011, 1 commit
  3. 23 Sep 2011, 1 commit
  4. 20 Sep 2011, 4 commits
  5. 26 Aug 2011, 1 commit
  6. 12 Aug 2011, 1 commit
  7. 05 Aug 2011, 1 commit
  8. 27 Jul 2011, 1 commit
  9. 13 Jul 2011, 1 commit
  10. 12 Jul 2011, 3 commits
    • powerpc: rename ppc_pci_*_flags to pci_*_flags · 0e47ff1c
      Authored by Rob Herring
      This renames the pci flags functions and enums in preparation for creating
      a generic version in asm-generic/pci-bridge.h. The following search and
      replace is done:
      
      s/ppc_pci_/pci_/
      s/PPC_PCI_/PCI_/
      
      Direct accesses to ppc_pci_flag variable are replaced with helper
      functions.
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      0e47ff1c
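      As a minimal sketch of the kind of helper accessors that replace direct
      accesses to a flags variable, as described above (the names and placement
      here are assumptions for illustration, not necessarily the exact code this
      commit introduces):

          /* Sketch: accessor helpers hiding a single PCI flags word, so callers
           * never touch the variable directly.  Names are illustrative. */
          static unsigned int pci_flags;

          static inline void pci_set_flags(int flags)
          {
                  pci_flags = flags;
          }

          static inline void pci_add_flags(int flags)
          {
                  pci_flags |= flags;
          }

          static inline int pci_has_flag(int flag)
          {
                  return pci_flags & flag;
          }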
    • powerpc/4xx: Add check_link to struct ppc4xx_pciex_hwops · 112d1fe9
      Authored by Tony Breeds
      All current pcie controllers unconditionally use SDR to check the link and
      poll for reset.  Refactor the code to include device reset in the
      port_init_hw() op and add a new check_link() op.
      
      This will make room for new controllers that do not use SDR for these
      operations.
      
      Tested on 460ex.
      Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
      112d1fe9
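      The refactoring described here follows the usual kernel pattern of a
      per-controller ops table.  A rough sketch of the shape (types and field
      names are simplified stand-ins, not the actual 4xx definitions):

          /* Sketch: hw ops with link checking as a hook, so controllers that
           * do not use SDR can supply their own routine. */
          struct pciex_port;      /* stand-in for struct ppc4xx_pciex_port */

          struct pciex_hwops {
                  int (*core_init)(void);
                  int (*port_init_hw)(struct pciex_port *port); /* now also resets */
                  int (*check_link)(struct pciex_port *port);   /* new op */
          };

          static int sdr_check_link(struct pciex_port *port)
          {
                  /* SDR-based link polling would go here */
                  return 0;
          }

          static const struct pciex_hwops sdr_hwops = {
                  .check_link = sdr_check_link,
          };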
    • KVM: PPC: Allow book3s_hv guests to use SMT processor modes · 371fefd6
      Authored by Paul Mackerras
      This lifts the restriction that book3s_hv guests can only run one
      hardware thread per core, and allows them to use up to 4 threads
      per core on POWER7.  The host still has to run single-threaded.
      
      This capability is advertised to qemu through a new KVM_CAP_PPC_SMT
      capability.  The return value of the ioctl querying this capability
      is the number of vcpus per virtual CPU core (vcore), currently 4.
      
      To use this, the host kernel should be booted with all threads
      active, and then all the secondary threads should be offlined.
      This will put the secondary threads into nap mode.  KVM will then
      wake them from nap mode and use them for running guest code (while
      they are still offline).  To wake the secondary threads, we send
      them an IPI using a new xics_wake_cpu() function, implemented in
      arch/powerpc/sysdev/xics/icp-native.c.  In other words, at this stage
      we assume that the platform has a XICS interrupt controller and
      we are using icp-native.c to drive it.  Since the woken thread will
      need to acknowledge and clear the IPI, we also export the base
      physical address of the XICS registers using kvmppc_set_xics_phys()
      for use in the low-level KVM book3s code.
      
      When a vcpu is created, it is assigned to a virtual CPU core.
      The vcore number is obtained by dividing the vcpu number by the
      number of threads per core in the host.  This number is exported
      to userspace via the KVM_CAP_PPC_SMT capability.  If qemu wishes
      to run the guest in single-threaded mode, it should make all vcpu
      numbers be multiples of the number of threads per core.
      
      We distinguish three states of a vcpu: runnable (i.e., ready to execute
      the guest), blocked (that is, idle), and busy in host.  We currently
      implement a policy that the vcore can run only when all its threads
      are runnable or blocked.  This way, if a vcpu needs to execute elsewhere
      in the kernel or in qemu, it can do so without being starved of CPU
      by the other vcpus.
      
      When a vcore starts to run, it executes in the context of one of the
      vcpu threads.  The other vcpu threads all go to sleep and stay asleep
      until something happens requiring the vcpu thread to return to qemu,
      or to wake up to run the vcore (this can happen when another vcpu
      thread goes from busy in host state to blocked).
      
      It can happen that a vcpu goes from blocked to runnable state (e.g.
      because of an interrupt), and the vcore it belongs to is already
      running.  In that case it can start to run immediately as long as
      none of the vcpus in the vcore have started to exit the guest.
      We send the next free thread in the vcore an IPI to get it to start
      to execute the guest.  It synchronizes with the other threads via
      the vcore->entry_exit_count field to make sure that it doesn't go
      into the guest if the other vcpus are exiting by the time that it
      is ready to actually enter the guest.
      
      Note that there is no fixed relationship between the hardware thread
      number and the vcpu number.  Hardware threads are assigned to vcpus
      as they become runnable, so we will always use the lower-numbered
      hardware threads in preference to higher-numbered threads if not all
      the vcpus in the vcore are runnable, regardless of which vcpus are
      runnable.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      371fefd6
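      Since the commit explains that userspace learns the threads-per-vcore count
      through KVM_CAP_PPC_SMT and should space vcpu numbers accordingly, here is a
      hedged sketch of how a VMM might use that value.  Only the KVM_CHECK_EXTENSION
      ioctl and the capability name come from the commit; plan_vcpu_ids and its
      output are purely illustrative:

          /* Sketch: query KVM_CAP_PPC_SMT and lay out vcpu ids so each group of
           * guest threads maps onto one host vcore.  Hypothetical helper with
           * minimal error handling. */
          #include <stdio.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          static void plan_vcpu_ids(int kvm_fd, int guest_cores, int guest_threads)
          {
                  int stride = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_SMT);

                  if (stride <= 0)
                          stride = 1;     /* capability absent: no SMT packing */

                  for (int c = 0; c < guest_cores; c++)
                          for (int t = 0; t < guest_threads; t++)
                                  /* a single-threaded guest ends up with vcpu ids
                                   * that are multiples of the host stride */
                                  printf("core %d thread %d -> vcpu id %d\n",
                                         c, t, c * stride + t);
          }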
  11. 29 Jun 2011, 1 commit
  12. 27 Jun 2011, 3 commits
  13. 23 Jun 2011, 2 commits
  14. 22 Jun 2011, 1 commit
    • powerpc/e500: fix breakage with fsl_rio_mcheck_exception · 82a9a480
      Authored by Scott Wood
      The wrong MCSR bit was being used on e500mc.  MCSR_BUS_RBERR only exists
      on e500v1/v2.  Use MCSR_LD on e500mc, and remove all MCSR checking
      in fsl_rio_mcheck_exception, as we no longer call that function
      if the appropriate bit in MCSR is not set.
      
      If RIO support was enabled at compile-time, but was never probed, just
      return from fsl_rio_mcheck_exception rather than dereference a NULL
      pointer.
      
      TODO: There is still a remaining, though comparatively minor, issue in
      that this recovery mechanism will falsely engage if there's an unrelated
      MCSR_LD event at the same time as a RIO error.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      82a9a480
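      A minimal sketch of the early-return guard the commit describes, with a
      placeholder for the probe-time pointer (the variable name and surrounding
      details are assumptions, not the actual fsl_rio.c code):

          /* Kernel context sketch: bail out if RIO was built in but never
           * probed, instead of dereferencing a NULL pointer; the caller only
           * invokes this when the relevant MCSR bit (MCSR_LD on e500mc) is set. */
          #include <asm/ptrace.h>

          static void *rio_regs_win;      /* hypothetical probe-time mapping */

          int fsl_rio_mcheck_exception(struct pt_regs *regs)
          {
                  if (!rio_regs_win)
                          return 0;       /* not probed: not handled here */

                  /* ... attempt recovery and adjust regs as needed ... */
                  return 1;               /* handled */
          }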
  15. 20 Jun 2011, 1 commit
  16. 10 Jun 2011, 1 commit
  17. 03 Jun 2011, 1 commit
  18. 26 May 2011, 1 commit
  19. 20 May 2011, 3 commits
  20. 19 May 2011, 11 commits