1. Oct 29, 2013 (2 commits)
  2. Oct 28, 2013 (1 commit)
  3. Oct 26, 2013 (1 commit)
  4. Oct 23, 2013 (1 commit)
  5. Oct 16, 2013 (5 commits)
    • sched/wait: Introduce prepare_to_wait_event() · c2d81644
      Committed by Oleg Nesterov
      Add the new helper, prepare_to_wait_event() which should only be used
      by ___wait_event().
      
      prepare_to_wait_event() returns -ERESTARTSYS if signal_pending_state()
      is true; otherwise it does prepare_to_wait/exclusive.  This allows us to
      uninline the signal-pending checks in the wait_event*() macros.
      
      Also, it can initialize wait->private/func. We do not care if they were
      already initialized; the values are the same. This also shaves a couple
      of insns from the inlined code.
      
      This obviously makes the prepare_*() path a little bit slower, but we are
      likely going to sleep anyway, so I think it makes sense to shrink .text:
      
                     text    data      bss      dec     hex  filename
                  ===================================================
         before:  5126092 2959248 10117120 18202460 115bf5c   vmlinux
          after:  5124618 2955152 10117120 18196890 115a99a   vmlinux
      
      on my build.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20131007161824.GA29757@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
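      Based on the description above, the new helper has roughly the following
      shape (a minimal C sketch of the idea, not necessarily the exact merged
      code):

          long prepare_to_wait_event(wait_queue_head_t *q, wait_queue_t *wait, int state)
          {
                  unsigned long flags;

                  /* The signal check lives here, so wait_event*() need not inline it. */
                  if (signal_pending_state(state, current))
                          return -ERESTARTSYS;

                  /* Harmless if already initialized; the values are the same. */
                  wait->private = current;
                  wait->func = autoremove_wake_function;

                  spin_lock_irqsave(&q->lock, flags);
                  if (list_empty(&wait->task_list)) {
                          if (wait->flags & WQ_FLAG_EXCLUSIVE)
                                  __add_wait_queue_tail(q, wait);
                          else
                                  __add_wait_queue(q, wait);
                  }
                  set_current_state(state);
                  spin_unlock_irqrestore(&q->lock, flags);

                  return 0;
          }

      ___wait_event() can then abort its loop on a negative return value instead
      of open-coding the signal_pending_state() test at every call site.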
    • sched/wait: Add ___wait_cond_timeout() to wait_event*_timeout() too · 8922915b
      Committed by Oleg Nesterov
      Commit 4c663cfc ("wait: fix false timeouts when using
      wait_event_timeout()") introduced the additional condition checks
      after a timeout but only in the "slow" __wait*() paths.
      
      wait_event_timeout(wq, CONDITION, 0) still returns 0 if CONDITION
      is already true and we do not call __wait*().
      
      Now that we have ___wait_cond_timeout() we can use it instead to
      ensure that __ret will be properly updated.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20131007183106.GA10973@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
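      For reference, the check relied on here looks roughly like the sketch
      below: ___wait_cond_timeout() folds "condition already true" into a
      non-zero __ret, and wait_event_timeout() uses it on the fast path before
      falling into the slow __wait*() path (sketch of the idea; __ret is the
      local introduced by the enclosing macro):

          #define ___wait_cond_timeout(condition)                           \
          ({                                                                \
                  bool __cond = (condition);                                \
                  if (__cond && !__ret)                                     \
                          __ret = 1;  /* report success, not timeout */     \
                  __cond || !__ret;                                         \
          })

          #define wait_event_timeout(wq, condition, timeout)                \
          ({                                                                \
                  long __ret = timeout;                                     \
                  if (!___wait_cond_timeout(condition))                     \
                          __ret = __wait_event_timeout(wq, condition, timeout); \
                  __ret;                                                    \
          })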
    • sched: Remove get_online_cpus() usage · 6acce3ef
      Committed by Peter Zijlstra
      Remove get_online_cpus() usage from the scheduler; there are four sites
      that use it:
      
       - sched_init_smp(); where it's completely superfluous since we're in
         'early' boot and there simply cannot be any hotplugging.
      
       - sched_getaffinity(); we already take a raw spinlock to protect the
         task's cpus_allowed mask; this disables preemption and therefore
         also stabilizes cpu_online_mask, as that is modified using
         stop_machine. However, switch to the active mask for symmetry with
         sched_setaffinity()/set_cpus_allowed_ptr(). We guarantee active
         mask stability by inserting sync_rcu/sched() into _cpu_down.
      
       - sched_setaffinity(); we don't appear to need get_online_cpus()
         either; there are two sites where hotplug appears relevant:
          * cpuset_cpus_allowed(); for the !cpuset case we use possible_mask;
            for the cpuset case we hold task_lock, which is a spinlock and
            thus for mainline disables preemption (might cause pain on RT).
          * set_cpus_allowed_ptr(); Holds all scheduler locks and thus has
            preemption properly disabled; also it already deals with hotplug
            races explicitly where it releases them.
      
       - migrate_swap(); we can make stop_two_cpus() do the heavy lifting for
         us with a little trickery. By adding a sync_sched/rcu() after the
         CPU_DOWN_PREPARE notifier we can provide preempt/rcu guarantees for
         cpu_active_mask. Use these to validate that both our cpus are active
         when queueing the stop work before we queue the stop_machine works
         for take_cpu_down().
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Link: http://lkml.kernel.org/r/20131011123820.GV3081@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
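      For the migrate_swap() case, the active-mask check described in the last
      bullet is conceptually the following (sketch only; the argument struct
      layout and exact placement are illustrative):

          struct migration_swap_arg arg = {
                  .src_task = p, .src_cpu = task_cpu(p),
                  .dst_task = t, .dst_cpu = task_cpu(t),
          };

          /*
           * The sync_sched/rcu() added after CPU_DOWN_PREPARE lets us trust
           * cpu_active_mask here without get_online_cpus(); the stopper
           * callback re-checks everything under proper locks anyway.
           */
          if (!cpu_active(arg.src_cpu) || !cpu_active(arg.dst_cpu))
                  return -EAGAIN;

          return stop_two_cpus(arg.dst_cpu, arg.src_cpu, migrate_swap_stop, &arg);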
    • sched: Fix race in migrate_swap_stop() · 74602315
      Committed by Peter Zijlstra
      There is a subtle race in migrate_swap, when task P, on CPU A, decides to swap
      places with task T, on CPU B.
      
      Task P:
        - call migrate_swap
      Task T:
        - go to sleep, removing itself from the runqueue
      Task P:
        - double lock the runqueues on CPU A & B
      Task T:
        - get woken up, place itself on the runqueue of CPU C
      Task P:
        - see that task T is on a runqueue, and pretend to remove it
          from the runqueue on CPU B
      
      Now CPUs B & C both have corrupted scheduler data structures.
      
      This patch fixes it, by holding the pi_lock for both of the tasks
      involved in the migrate swap. This prevents task T from waking up,
      and placing itself onto another runqueue, until after migrate_swap
      has released all locks.
      
      This means that, when migrate_swap checks, task T will be either
      on the runqueue where it was originally seen, or not on any
      runqueue at all. migrate_swap deals correctly with both of those cases.
      Tested-by: Joe Mario <jmario@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: hannes@cmpxchg.org
      Cc: aarcange@redhat.com
      Cc: srikar@linux.vnet.ibm.com
      Cc: tglx@linutronix.de
      Cc: hpa@zytor.com
      Link: http://lkml.kernel.org/r/20131010181722.GO13848@laptop.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
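      The resulting locking looks roughly like this (sketch; treat helper names
      other than pi_lock and the double_rq_lock()/double_rq_unlock() pair as
      illustrative):

          static int migrate_swap_stop(void *data)
          {
                  struct migration_swap_arg *arg = data;
                  struct rq *src_rq = cpu_rq(arg->src_cpu);
                  struct rq *dst_rq = cpu_rq(arg->dst_cpu);
                  int ret = -EAGAIN;

                  /*
                   * Hold both tasks' pi_locks as well as both runqueue locks,
                   * so neither task can be woken and placed on a third
                   * runqueue while the swap is in flight.
                   */
                  double_raw_lock(&arg->src_task->pi_lock,
                                  &arg->dst_task->pi_lock);
                  double_rq_lock(src_rq, dst_rq);

                  /* ... verify both tasks are still where we saw them,
                   *     then perform the swap and set ret = 0 ... */

                  double_rq_unlock(src_rq, dst_rq);
                  raw_spin_unlock(&arg->dst_task->pi_lock);
                  raw_spin_unlock(&arg->src_task->pi_lock);
                  return ret;
          }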
    • sched/rt: Add missing rmb() · 7c3f2ab7
      Committed by Peter Zijlstra
      While discussing the proposed SCHED_DEADLINE patches which in parts
      mimic the existing FIFO code it was noticed that the wmb in
      rt_set_overloaded() didn't have a matching barrier.
      
      The only site using rt_overloaded() to test the rto_count is
      pull_rt_task(), and we should issue a matching rmb there before
      assuming an rto_mask bit is set.
      
      Without that smp_rmb() in there we could actually miss seeing the
      rto_mask bit.
      
      Also, change to using smp_[wr]mb(), even though this is SMP only code;
      memory barriers without smp_ always make me think they're against
      hardware of some sort.
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: vincent.guittot@linaro.org
      Cc: luca.abeni@unitn.it
      Cc: bruce.ashfield@windriver.com
      Cc: dhaval.giani@gmail.com
      Cc: rostedt@goodmis.org
      Cc: hgu1972@gmail.com
      Cc: oleg@redhat.com
      Cc: fweisbec@gmail.com
      Cc: darren@dvhart.com
      Cc: johan.eker@ericsson.com
      Cc: p.faure@akatech.ch
      Cc: paulmck@linux.vnet.ibm.com
      Cc: raistlin@linux.it
      Cc: claudio@evidence.eu.com
      Cc: insop.song@gmail.com
      Cc: michael@amarulasolutions.com
      Cc: liming.wang@windriver.com
      Cc: fchecconi@gmail.com
      Cc: jkacur@redhat.com
      Cc: tommaso.cucinotta@sssup.it
      Cc: Juri Lelli <juri.lelli@gmail.com>
      Cc: harald.gustafsson@ericsson.com
      Cc: nicola.manica@disi.unitn.it
      Cc: tglx@linutronix.de
      Link: http://lkml.kernel.org/r/20131015103507.GF10651@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
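      In sketch form, the barrier pairing is the following (comments added
      here; the commit's rt_set_overloaded() corresponds to the
      rt_set_overload() helper):

          static inline void rt_set_overload(struct rq *rq)
          {
                  if (!rq->online)
                          return;

                  cpumask_set_cpu(rq->cpu, rq->rd->rto_mask);
                  /*
                   * Publish the rto_mask bit before bumping rto_count;
                   * pairs with the smp_rmb() in pull_rt_task().
                   */
                  smp_wmb();
                  atomic_inc(&rq->rd->rto_count);
          }

          static int pull_rt_task(struct rq *this_rq)
          {
                  if (likely(!rt_overloaded(this_rq)))
                          return 0;

                  /*
                   * Match the smp_wmb() above: if we observed a non-zero
                   * rto_count we must also observe the rto_mask bit.
                   */
                  smp_rmb();

                  /* ... scan rto_mask and try to pull an RT task ... */
                  return 0;   /* sketch: real code returns whether a task was pulled */
          }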
  6. Oct 14, 2013 (1 commit)
  7. Oct 13, 2013 (1 commit)
  8. Oct 11, 2013 (14 commits)
  9. Oct 10, 2013 (4 commits)
    • kvm: ppc: booke: check range page invalidation progress on page setup · 40fde70d
      Committed by Bharat Bhushan
      When the MM code is invalidating a range of pages, it calls the KVM
      kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
      kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
      However, the Linux PTEs for the range being flushed are still valid at
      that point.  We are not supposed to establish any new references to pages
      in the range until the ...range_end() notifier gets called.
      The PPC-specific KVM code doesn't get any explicit notification of that;
      instead, we are supposed to use mmu_notifier_retry() to test whether we
      are or have been inside a range flush notifier pair while we have been
      referencing a page.
      
      This patch calls mmu_notifier_retry() while mapping the guest page, to
      ensure we are not referencing a page while a range invalidation is in
      progress.
      
      This call is made inside a region locked with kvm->mmu_lock, which is
      the same lock taken by the KVM MMU notifier functions, thus ensuring
      that no new notification can proceed while we are in the locked region.
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Acked-by: Alexander Graf <agraf@suse.de>
      [Backported to 3.12 - Paolo]
      Reviewed-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
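      The mmu_notifier_retry() pattern referred to above generally looks like
      this (generic sketch with a hypothetical function name, not the
      booke-specific code):

          static int map_guest_page_sketch(struct kvm *kvm, gfn_t gfn)
          {
                  unsigned long mmu_seq;

                  /* Sample the notifier sequence count before translating gfn. */
                  mmu_seq = kvm->mmu_notifier_seq;
                  smp_rmb();

                  /* ... look up and reference the host page for gfn ... */

                  spin_lock(&kvm->mmu_lock);
                  if (mmu_notifier_retry(kvm, mmu_seq)) {
                          /*
                           * A range invalidation started after we sampled
                           * mmu_seq; do not install a possibly stale mapping.
                           */
                          spin_unlock(&kvm->mmu_lock);
                          return -EAGAIN;         /* caller retries */
                  }

                  /* ... install the guest TLB entry / mapping ... */
                  spin_unlock(&kvm->mmu_lock);
                  return 0;
          }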
    • KVM: PPC: Book3S HV: Fix typo in saving DSCR · cfc86025
      Committed by Paul Mackerras
      This fixes a typo in the code that saves the guest DSCR (Data Stream
      Control Register) into the kvm_vcpu_arch struct on guest exit.  The
      effect of the typo was that the DSCR value was saved in the wrong place,
      so changes to the DSCR by the guest didn't persist across guest exit
      and entry, and some host kernel memory got corrupted.
      
      Cc: stable@vger.kernel.org [v3.1+]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: fix shadow on EPT · d0d538b9
      Committed by Gleb Natapov
      Commit 72f85795 broke shadow on EPT. This patch reverts it and fixes PAE
      on nEPT (which the reverted commit fixed) in another way.
      
      Shadow on EPT is now broken because while L1 builds a shadow page table
      for L2 (which is PAE while L2 is in real mode) it never loads L2's
      GUEST_PDPTR[0-3].  They do not need to be loaded, because without nested
      virtualization the HW does this during guest entry if EPT is disabled;
      but in our case L0 emulates L2's vmentry while EPT is enabled, so we
      cannot rely on vmcs12->guest_pdptr[0-3] to contain up-to-date values
      and need to re-read the PDPTEs from L2 memory. This is what kvm_set_cr3()
      does, but by clearing the cache bits during L2 vmentry we drop the values
      that kvm_set_cr3() read from memory.
      
      So why does the same code not work for PAE on nEPT? kvm_set_cr3()
      reads the PDPTEs into vcpu->arch.walk_mmu->pdptrs[]. walk_mmu points to
      vcpu->arch.nested_mmu while a nested guest is running, but ept_load_pdptrs()
      uses vcpu->arch.mmu, which contains incorrect values. Fix that by using
      walk_mmu in ept_(load|save)_pdptrs.
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Tested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
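      The load side of the fix is roughly the following (sketch following the
      description above; field and VMCS names as in the 3.12-era VMX code, and
      the body is illustrative rather than a verbatim copy):

          static void ept_load_pdptrs(struct kvm_vcpu *vcpu)
          {
                  /*
                   * walk_mmu points at the MMU that actually holds the PDPTEs
                   * kvm_set_cr3() read from L2 memory while a nested guest runs.
                   */
                  struct kvm_mmu *mmu = vcpu->arch.walk_mmu;

                  if (!test_bit(VCPU_EXREG_PDPTR,
                                (unsigned long *)&vcpu->arch.regs_dirty))
                          return;

                  if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) {
                          vmcs_write64(GUEST_PDPTR0, mmu->pdptrs[0]);
                          vmcs_write64(GUEST_PDPTR1, mmu->pdptrs[1]);
                          vmcs_write64(GUEST_PDPTR2, mmu->pdptrs[2]);
                          vmcs_write64(GUEST_PDPTR3, mmu->pdptrs[3]);
                  }
          }

      ept_save_pdptrs() changes symmetrically, writing the VMCS values back
      into walk_mmu->pdptrs[] instead of vcpu->arch.mmu.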
    • hwmon: (applesmc) Always read until end of data · 25f2bd7f
      Committed by Henrik Rydberg
      The crash reported and investigated in commit 5f4513 turned out to be
      caused by a change to the read interface on newer (2012) SMCs.
      
      Tests by Chris show that simply reading the data valid line is enough
      for the problem to go away. Additional tests show that the newer SMCs
      no longer wait for the number of requested bytes, but start sending
      data right away.  Apparently the number of bytes to read is no longer
      specified as before, but instead found out by reading until end of
      data. Failure to read until end of data confuses the state machine,
      which eventually causes the crash.
      
      As a remedy, assuming bit0 is the read valid line, make sure there is
      nothing more to read before leaving the read function.
      
      Tested to resolve the original problem, and runtested on MBA3,1,
      MBP4,1, MBP8,2, MBP10,1, MBP10,2. The patch seems to have no effect on
      machines before 2012.
      Tested-by: Chris Murphy <chris@cmurf.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Henrik Rydberg <rydberg@euromail.se>
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
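      In sketch form, the remedy is a drain loop after the payload has been
      read (bit 0 as the "data valid" line follows the description above; the
      port and wait constants are the driver's defines, used here on that
      assumption):

          /*
           * Keep reading while the SMC still signals valid data, discarding
           * the extra bytes, so its state machine returns to idle before we
           * leave the read function.
           */
          while (inb(APPLESMC_CMD_PORT) & 0x01) {
                  udelay(APPLESMC_MIN_WAIT);
                  inb(APPLESMC_DATA_PORT);        /* discard */
          }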
  10. Oct 9, 2013 (10 commits)