1. 28 February 2013, 1 commit
    • hlist: drop the node parameter from iterators · b67bfe0d
      Sasha Levin committed
      I'm not sure why, but the hlist for-each-entry iterators were conceived
      differently from the list ones, which look like this:
      
              list_for_each_entry(pos, head, member)
      
      The hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need the extra pos parameter? I'm not quite sure. Not only
      do they not really need it, it also prevents the iterator from looking
      exactly like the list iterator, which is unfortunate.
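
      As a concrete illustration, here is a minimal sketch of a call site
      after the conversion (struct item and find_item() are hypothetical;
      the iterator is the post-patch one from linux/list.h):

              #include <linux/list.h>

              struct item {
                      int key;
                      struct hlist_node node;
              };

              /* hypothetical lookup over a populated hlist */
              static struct item *find_item(struct hlist_head *head, int key)
              {
                      struct item *it;

                      /* before this patch a scratch node was required:
                       *     struct hlist_node *pos;
                       *     hlist_for_each_entry(it, pos, head, node)
                       */
                      hlist_for_each_entry(it, head, node) {
                              if (it->key == key)
                                      return it;
                      }
                      return NULL;
              }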
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h (a sketch of the
         result follows this list).
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small number of places were using the 'node' parameter; these
         were modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
         properly, so those had to be fixed up manually.
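
      For reference, the reworked iterator in linux/list.h takes roughly
      this shape (a sketch of the hlist_entry_safe() approach; consult the
      header for the authoritative definition):

              #define hlist_entry_safe(ptr, type, member) \
                      ({ typeof(ptr) ____ptr = (ptr); \
                         ____ptr ? hlist_entry(____ptr, type, member) : NULL; \
                      })

              /* 'pos' is now the object cursor itself, as in
               * list_for_each_entry(); no separate node cursor is needed */
              #define hlist_for_each_entry(pos, head, member) \
                      for (pos = hlist_entry_safe((head)->first, typeof(*(pos)), member); \
                           pos; \
                           pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))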
      
      The semantic patch, which is mostly the work of Peter Senna Tschudin,
      is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue,
      hlist_for_each_entry_from, hlist_for_each_entry_rcu,
      hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh,
      for_each_busy_worker, ax25_uid_for_each, ax25_for_each,
      inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each,
      sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound,
      hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu,
      nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each,
      nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp,
      for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
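
      For completeness: a rule like the one above is applied with Coccinelle's
      spatch tool, e.g. spatch --sp-file drop_node.cocci --in-place --dir .
      (drop_node.cocci being a hypothetical file name for the rule).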
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2. 05 February 2013, 2 commits
    • KVM: set_memory_region: Disallow changing read-only attribute later · 75d61fbc
      Takuya Yoshikawa committed
      As Xiao pointed out, there are a few problems with changing the
      read-only attribute of an existing memory slot:
       - kvm_arch_commit_memory_region() write protects the memory slot only
         for GET_DIRTY_LOG when modifying the flags.
       - FNAME(sync_page) uses the old spte value to set a new one without
         checking the KVM_MEM_READONLY flag.

      Since we flush all shadow pages when creating a new slot, the simplest
      fix is to disallow such problematic flag changes; this is safe because
      no one is doing such things.
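
      A minimal sketch of the resulting check in __kvm_set_memory_region()
      (abridged; 'old' and 'new' are the old and requested struct
      kvm_memory_slot values):

              /* reject changing the userspace address, the size, or the
               * KVM_MEM_READONLY flag of an existing slot */
              if ((new.userspace_addr != old.userspace_addr) ||
                  (npages != old.npages) ||
                  ((new.flags ^ old.flags) & KVM_MEM_READONLY))
                      goto out;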
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    • KVM: set_memory_region: Identify the requested change explicitly · f64c0398
      Takuya Yoshikawa committed
      KVM_SET_USER_MEMORY_REGION forces __kvm_set_memory_region() to identify
      what kind of change is being requested by checking the arguments.  The
      current code does this checking at various points and the conditions
      used there are not easy to understand at first glance.
      
      This patch consolidates these checks and introduces an enum to name the
      possible changes to clean up the code.
      
      Although this does not introduce any functional changes, there is one
      change which optimizes the code a bit: if we have nothing to change, the
      new code returns 0 immediately.
      
      Note that the return value for this case cannot be changed since QEMU
      relies on it: we noticed this when we changed it to -EINVAL and got a
      section mismatch error at the final stage of live migration.
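
      A sketch of the resulting shape (the enum names below match this patch;
      the classification logic is abridged):

              enum kvm_mr_change {
                      KVM_MR_CREATE,
                      KVM_MR_DELETE,
                      KVM_MR_MOVE,
                      KVM_MR_FLAGS_ONLY,
              };

              /* classify the request once, up front */
              if (npages) {
                      if (!old.npages)
                              change = KVM_MR_CREATE;
                      else if (new.base_gfn != old.base_gfn)
                              change = KVM_MR_MOVE;
                      else if (new.flags != old.flags)
                              change = KVM_MR_FLAGS_ONLY;
                      else
                              return 0; /* nothing to change; QEMU relies on 0 */
              } else {
                      change = KVM_MR_DELETE;
              }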
      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
3. 29 January 2013, 2 commits
    • kvm: Handle yield_to failure return code for potential undercommit case · c45c528e
      Raghavendra K T committed
      yield_to() returns -ESRCH when the run queues of both the source and
      the target of yield_to() have length one. When we see three successive
      failures of yield_to() we assume we are in a potential undercommit case
      and abort from the PLE handler.
      The assumption is backed by the low probability of a wrong decision
      even in worst-case scenarios, such as an average run queue length
      between 1 and 2.
      
      More detail on the rationale behind using three tries: if p is the
      probability of finding a run queue of length one on a particular cpu,
      and we make n tries, then the probability of exiting the PLE handler is
      p^(n+1) [because we would have come across one source with rq length 1
      and n target cpu rqs with length 1]. At 1.5x overcommit p is roughly
      1/2, which gives:

      num tries:         probability of aborting ple handler (1.5x overcommit)
       1                 1/4
       2                 1/8
       3                 1/16
      
      We can increase this probability with more tries, but the problem is
      the overhead.
      Also, if we have tried three times, we have already iterated over 3
      good eligible vcpus along with many non-eligible candidates. In the
      worst case, if we iterate over all the vcpus, 1x performance drops and
      overcommit performance takes a hit as well.

      Note that we do not update the last boosted vcpu in failure cases.
      Thanks to Avi for raising the question of aborting after the first
      yield_to() failure.
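
      A minimal sketch of the abort logic, following the shape of
      kvm_vcpu_on_spin() (the eligibility checks are elided and variable
      names are illustrative):

              int try = 3;    /* tolerated successive -ESRCH failures */
              int yielded;

              kvm_for_each_vcpu(i, vcpu, kvm) {
                      /* ... skip ineligible vcpus ... */
                      yielded = kvm_vcpu_yield_to(vcpu);
                      if (yielded > 0) {
                              kvm->last_boosted_vcpu = i;  /* success only */
                              break;
                      } else if (yielded < 0 && !--try) {
                              /* three -ESRCH results: run queues look
                               * short, assume undercommit and abort */
                              break;
                      }
              }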
      Reviewed-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Tested-by: Chegu Vinod <chegu_vinod@hp.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
    • x86, apicv: add virtual interrupt delivery support · c7c9c56c
      Yang Zhang committed
      Virtual interrupt delivery avoids the need for KVM to inject vAPIC
      interrupts manually; that part is fully taken care of by the hardware.
      This needs some special awareness in the existing interrupt injection
      path (a sketch follows the list):

      - for a pending interrupt, instead of direct injection, we may need to
        update architecture-specific indicators before resuming the guest.

      - a pending interrupt that is masked by the ISR should also be
        considered in the above update action, since the hardware will decide
        when to inject it at the right time. Currently, has_interrupt and
        get_interrupt only return a valid vector from the injection point of
        view.
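
      A hedged sketch of the resulting injection decision; every helper name
      below is illustrative, not the exact callback added by this patch:

              /* illustrative names only */
              if (virtual_interrupt_delivery_enabled(vcpu)) {
                      /* record the highest pending vector in the hardware
                       * indicators; the CPU injects it at the right time,
                       * honoring ISR masking on its own */
                      int vec = apic_find_highest_irr(vcpu);
                      update_hw_irr_indicator(vcpu, vec);
              } else if (cpu_has_injectable_intr(vcpu)) {
                      /* legacy path: software injects on VM entry */
                      inject_pending_interrupt(vcpu);
              }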
      Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Kevin Tian <kevin.tian@intel.com>
      Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>