1. 24 Mar 2011, 9 commits
  2. 23 Mar 2011, 15 commits
  3. 22 Mar 2011, 1 commit
    • Prevent rt_sigqueueinfo and rt_tgsigqueueinfo from spoofing the signal code · da48524e
      Committed by Julien Tinnes
      Userland should be able to trust the pid and uid of the sender of a
      signal if the si_code is SI_TKILL.
      
      Unfortunately, the kernel has historically allowed sigqueueinfo() to
      send any si_code at all (as long as it was negative - to distinguish it
      from kernel-generated signals like SIGILL etc), so it could spoof a
      SI_TKILL with incorrect siginfo values.
      
      Happily, it looks like glibc has always set si_code to the appropriate
      SI_QUEUE, so there is probably no actual user code that ever uses
      anything but the appropriate SI_QUEUE flag.
      
      So just tighten the check for si_code (we used to allow any negative
      value), and add a (one-time) warning in case there are binaries out
      there that might depend on using other si_code values.  A sketch of
      the tightened check follows this entry.
      Signed-off-by: Julien Tinnes <jln@google.com>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      da48524e
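      Below is a minimal, hedged sketch of the tightened check described
      above.  The helper name check_queued_si_code() is hypothetical (in the
      kernel the check sits inline in the rt_sigqueueinfo/rt_tgsigqueueinfo
      paths in kernel/signal.c), and the exact error value and warning text
      may differ from the real patch:

      static int check_queued_si_code(const siginfo_t *info)
      {
              /*
               * Userspace may only queue SI_QUEUE, which is what glibc's
               * sigqueue() has always used.  Anything else, including the
               * previously accepted negative codes, is rejected, with a
               * one-time warning so legacy users can be spotted.
               */
              if (info->si_code != SI_QUEUE) {
                      WARN_ONCE(info->si_code < 0,
                                "sigqueueinfo: rejected si_code %d\n",
                                info->si_code);
                      return -EPERM;  /* exact errno in the real patch may differ */
              }
              return 0;
      }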
  4. 18 Mar 2011, 6 commits
    • trace, filters: Initialize the match variable in process_ops() properly · 1ef1d1c2
      Committed by Ingo Molnar
      Make sure the 'match' variable always has a defined value (a
      simplified sketch follows this entry).
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1ef1d1c2
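      A hedged, simplified illustration of the class of bug being fixed.
      The function below is hypothetical (the real process_ops() lives in
      the trace event filter code); it only shows why 'match' needs a
      defined starting value when the loop may assign nothing:

      static int process_ops_sketch(const int *ops, int nr_ops, int val)
      {
              int match = 0;  /* the fix: always start from a known value */
              int i;

              for (i = 0; i < nr_ops; i++) {
                      match = (ops[i] == val);
                      if (match)
                              break;
              }
              /* without the initializer, nr_ops == 0 would return garbage */
              return match;
      }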
    • smp_call_function_interrupt: use typedef and %pf · c8def554
      Committed by Milton Miller
      Use the newly added smp_call_func_t in smp_call_function_interrupt for
      the func variable, and make the comment above the WARN more assertive
      and explicit.  Also, func is a function pointer and does not need an
      offset, so use %pf rather than %pS (illustrated in the sketch after
      this entry).
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c8def554
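      A small, hedged sketch of both points above.  smp_call_func_t is the
      real kernel typedef (void (*)(void *info)); the function and message
      below are illustrative, not the exact comment the patch adjusts:

      #include <linux/kernel.h>
      #include <linux/smp.h>          /* smp_call_func_t */

      static void report_pending_func(smp_call_func_t func)
      {
              /*
               * func is a plain function pointer, so %pf prints just the
               * symbol name; %pS would tack on a meaningless +0x0/0x...
               * offset.
               */
              printk(KERN_WARNING "csd: %pf is still pending\n", func);
      }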
    • smp_call_function_many: handle concurrent clearing of mask · 723aae25
      Committed by Milton Miller
      Mike Galbraith reported finding a lockup ("perma-spin bug") where the
      cpumask passed to smp_call_function_many was cleared by other cpu(s)
      while a cpu was preparing its call_data block, leaving no cpu to clear
      the last ref and unlock the block.
      
      Having cpus clear their bit asynchronously could be useful on a mask of
      cpus that might have a translation context, or cpus that need a push to
      complete an rcu window.
      
      Instead of adding a BUG_ON and requiring yet another cpumask copy, just
      detect the race and handle it (a sketch follows this entry).
      
      Note: arch_send_call_function_ipi_mask must still handle an empty
      cpumask because the data block is globally visible before that arch
      callback is made.  And (obviously) there are no guarantees about which
      cpus are notified if the mask is changed during the call; only cpus
      that were online and had their mask bit set during the whole call are
      guaranteed to be called.
      Reported-by: Mike Galbraith <efault@gmx.de>
      Reported-by: Jan Beulich <JBeulich@novell.com>
      Acked-by: Jan Beulich <jbeulich@novell.com>
      Cc: stable@kernel.org
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      723aae25
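      A hedged sketch of the detection described above, close in spirit to
      the fix in smp_call_function_many() but not a verbatim copy;
      csd_unlock() stands in for whatever releases the per-cpu call data:

      /* inside smp_call_function_many(), after latching the caller's mask */
      cpumask_and(data->cpumask, mask, cpu_online_mask);
      cpumask_clear_cpu(smp_processor_id(), data->cpumask);
      refs = cpumask_weight(data->cpumask);

      /* some callers race with other cpus changing the passed mask */
      if (unlikely(!refs)) {
              csd_unlock(&data->csd);         /* release the call data */
              return;                         /* nothing left to notify */
      }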
    • call_function_many: add missing ordering · 45a57919
      Committed by Milton Miller
      Paul McKenney's review pointed out two problems with the barriers in the
      2.6.38 update to the smp call function many code.
      
      First, a barrier that would force the func and info members of data to
      be visible before their consumption in the interrupt handler was
      missing.  This can be solved by adding an smp_wmb between setting the
      func and info members and setting the cpumask; this will pair with the
      existing and required smp_rmb ordering the cpumask read before the
      read of refs.  This placement avoids the need for a second smp_rmb in
      the interrupt handler, which would be executed on each of the N cpus
      executing the call request.  (I had thought this barrier was present,
      but it was not.)
      
      Second, the previous write to refs (establishing the zero that the
      interrupt handler was testing for from all cpus) was performed by a
      third-party cpu.  This would invoke transitivity, which, as a recent
      or concurrent addition to memory-barriers.txt now explicitly states,
      would require a full smp_mb().
      
      However, we know the cpumask will only be set by one cpu (the data
      owner), and any previous iteration of the mask would have been cleared
      by the reading cpu.  By redundantly writing refs to 0 on the owning
      cpu before the smp_wmb, the write to refs will follow the same path as
      the writes that set the cpumask, which in turn allows us to keep the
      barrier in the interrupt handler an smp_rmb instead of promoting it to
      an smp_mb (which would be executed by N cpus for each of the possible
      M elements on the list).  The resulting pairing is sketched after this
      entry.
      
      I moved and expanded the comment about our (ab)use of the rcu list
      primitives for the concurrent walk earlier in this function.  I
      considered moving the first two paragraphs to the queue list head and
      lock, but felt it would have been too disconnected from the code.
      
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: stable@kernel.org (2.6.32 and later)
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      45a57919
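      The pairing argued above, reduced to a hedged sketch with simplified
      field names (not the verbatim 2.6.38 code).  The sender publishes
      func, info and the redundant refs = 0 before any cpumask bit; the
      receiver's existing smp_rmb() between reading the mask and reading
      refs then suffices:

      /* sender side, in smp_call_function_many() */
      data->csd.func = func;
      data->csd.info = info;
      atomic_set(&data->refs, 0);     /* redundant write on the owning cpu */
      smp_wmb();                      /* order func/info/refs before the mask */
      cpumask_and(data->cpumask, mask, cpu_online_mask);

      /* receiver side, inside the rcu list walk in the interrupt handler */
      if (!cpumask_test_cpu(smp_processor_id(), data->cpumask))
              continue;               /* not for us, or not published yet */
      smp_rmb();                      /* pairs with the smp_wmb() above */
      if (!atomic_read(&data->refs))
              continue;               /* entry not yet ready this round */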
    • call_function_many: fix list delete vs add race · e6cd1e07
      Committed by Milton Miller
      Peter pointed out there was nothing preventing the list_del_rcu in
      smp_call_function_interrupt from running before the list_add_rcu in
      smp_call_function_many.
      
      Fix this by not setting refs until we have taken the lock for the
      list.  Take advantage of the wmb in list_add_rcu to avoid an explicit
      additional barrier (sketched after this entry).
      
      I tried to force this race with a udelay before the lock & list_add
      and by mixing all 64 online cpus with just 3 random cpus in the mask,
      but was unsuccessful.  Still, inspection shows a valid race, and the
      fix is an extension of the existing protection window in the current
      code.
      
      Cc: stable@kernel.org (v2.6.32 and later)
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Milton Miller <miltonm@bga.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e6cd1e07
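      A hedged sketch of the fix described above (close to, but not
      verbatim, the kernel/smp.c code): refs only becomes non-zero after the
      entry is on the list and while the list lock is held, and the wmb
      implied by list_add_rcu() orders the earlier setup writes before the
      refs store:

      raw_spin_lock_irqsave(&call_function.lock, flags);
      /* publish the entry before any cpu can observe a non-zero refs */
      list_add_rcu(&data->csd.list, &call_function.queue);
      /*
       * rely on the wmb in list_add_rcu: an interrupt handler that sees
       * refs != 0 also sees a fully set up entry, so it can never
       * list_del_rcu() an entry that was not yet list_add_rcu()'d.
       */
      atomic_set(&data->refs, refs);
      raw_spin_unlock_irqrestore(&call_function.lock, flags);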
    • export pid symbols needed for kvm_vcpu_on_spin · 77c100c8
      Committed by Rik van Riel
      Export the symbols required for a race-free kvm_vcpu_on_spin.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      77c100c8
  5. 17 Mar 2011, 5 commits
  6. 16 Mar 2011, 4 commits