1. 10 September 2009, 7 commits
  2. 16 June 2009, 1 commit
    • powerpc: Add configurable -Werror for arch/powerpc · ba55bd74
      Committed by Michael Ellerman
      Add the option to build the code under arch/powerpc with -Werror.
      
      The intention is to make it harder for people to inadvertently introduce
      warnings in the arch/powerpc code. It needs to be configurable so that
      if a warning is introduced, people can easily work around it while it's
      being fixed.
      
      The option is a negative, i.e. don't enable -Werror, so that it will be
      turned on for allyes and allmodconfig builds.
      
      The default is n, in the hope that developers will build with -Werror.
      That will probably lead to some build breaks; I am prepared to be flamed.
      
      It's not enabled for math-emu, which is a steaming pile of warnings.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
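The mechanics described above can be sketched as a negative Kconfig option plus a Makefile guard. The option name, file locations, and help text here are assumptions for illustration, not the exact patch:

```makefile
# Sketch only: names and placement are assumptions, not the actual commit.
# arch/powerpc/Kconfig.debug
#   config PPC_DISABLE_WERROR
#   	bool "Don't build arch/powerpc code with -Werror"
#   	default n
#   	help
#   	  Disable -Werror so a newly introduced warning can be worked
#   	  around while it is being fixed.

# arch/powerpc/Makefile: -Werror stays on unless the negative option is set,
# so allyesconfig/allmodconfig (which answer y to PPC_DISABLE_WERROR) still
# exercise the -Werror build path as described above.
ifeq ($(CONFIG_PPC_DISABLE_WERROR),)
subdir-ccflags-y := -Werror
endif
```

Because the option is phrased as a negative with default n, a plain `make` builds with -Werror, while anyone bitten by a new warning can flip the option instead of patching the Makefile.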
  3. 10 June 2009, 1 commit
  4. 24 March 2009, 22 commits
  5. 15 February 2009, 1 commit
  6. 31 December 2008, 8 commits
    • ca9edaee
    • KVM: ppc: mostly cosmetic updates to the exit timing accounting code · 7b701591
      Committed by Hollis Blanchard
      The only significant changes were to kvmppc_exit_timing_write() and
      kvmppc_exit_timing_show(), both of which were dramatically simplified.
      Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: ppc: Implement in-kernel exit timing statistics · 73e75b41
      Committed by Hollis Blanchard
      Existing KVM statistics are either plain counters (kvm_stat) reported for
      KVM generally, or trace-based approaches like kvm_trace.
      For KVM on powerpc we needed to track the timings of the different exit
      types. While this could be achieved by parsing data created with a kvm_trace
      extension, that adds too much overhead (at least on embedded PowerPC), slowing
      down the workloads we wanted to measure.
      
      Therefore this patch adds an in-kernel exit timing statistic to the powerpc kvm
      code. The statistic is available per VM and per vcpu under the kvm debugfs
      directory. As its overhead is low but not zero, it can be enabled via a
      .config entry and is off by default.
      
      Since this patch touched all the powerpc kvm_stat code anyway, that code is now
      merged and simplified together with the exit timing statistic code (and still
      works with exit timing disabled in .config).
      Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
      Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: ppc: save and restore guest mappings on context switch · c5fbdffb
      Committed by Hollis Blanchard
      Store shadow TLB entries in memory, but only use them on host context switch
      (instead of on every guest entry). This improves performance for most workloads
      on 440 by reducing the guest TLB miss rate.
      Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: ppc: directly insert shadow mappings into the hardware TLB · 7924bd41
      Committed by Hollis Blanchard
      Formerly, we maintained a per-vcpu shadow TLB, and on every entry to the
      guest we would load this array into the hardware TLB. This consumed 1280 bytes
      of memory (64 entries of 16 bytes each, plus a struct page pointer apiece), and
      also required some assembly to loop over the array on every entry.
      
      Instead of saving a copy in memory, we can just store shadow mappings directly
      into the hardware TLB, accepting that the host kernel will clobber these as
      part of the normal 440 TLB round robin. With this approach we need less than
      half the memory, and we have decreased the exit handling time for all guest
      exits, at the cost of an increased number of TLB misses because the host
      overwrites some guest entries.
      
      These savings will be increased on processors with larger TLBs or which
      implement intelligent flush instructions like tlbivax (which will avoid the
      need to walk arrays in software).
      
      In addition to that and to the code simplification, we have a greater chance of
      leaving other host userspace mappings in the TLB, instead of forcing all
      subsequent tasks to re-fault all their mappings.
      Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: ppc: support large host pages · 89168618
      Committed by Hollis Blanchard
      KVM on 440 has always been able to handle large guest mappings with 4K host
      pages -- we must, since the guest kernel uses 256MB mappings.
      
      This patch makes KVM work when the host has large pages too (tested with 64K).
      Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: ppc: fix userspace mapping invalidation on context switch · fe4e771d
      Committed by Hollis Blanchard
      We used to defer invalidating userspace TLB entries until jumping out of the
      kernel. This was causing MMU weirdness most easily triggered by using a pipe in
      the guest, e.g. "dmesg | tail". I believe the problem was that after the guest
      kernel changed the PID (part of context switch), the old process's mappings
      were still present, and so copy_to_user() on the "return to new process" path
      ended up using stale mappings.
      
      Testing with large pages (64K) exposed the problem, probably because with 4K
      pages, pressure on the TLB faulted all process A's mappings out before the
      guest kernel could insert any for process B.
      Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
    • KVM: ppc: use prefetchable mappings for guest memory · df9b856c
      Committed by Hollis Blanchard
      Bare metal Linux on 440 can "overmap" RAM in the kernel linear map, so that it
      can use large (256MB) mappings even if memory isn't a multiple of 256MB. To
      prevent the hardware prefetcher from loading from an invalid physical address
      through that mapping, it's marked Guarded.
      
      However, KVM must ensure that all guest mappings are backed by real physical
      RAM (since a deliberate access through a guarded mapping could still cause a
      machine check). Accordingly, we don't need to make our mappings guarded, so
      let's allow prefetching as the designers intended.
      
      Curiously, this patch didn't affect performance at all in the quick test I
      tried, but it's clearly the right thing to do anyway and may improve other
      workloads.
      Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>