1. 01 October 2007, 8 commits
  2. 30 September 2007, 3 commits
    • fix console change race exposed by CFS · a64314e6
      Authored by Jan Lübbe
      The new behaviour of CFS exposes a race which occurs if a switch is
      requested when vt_mode.mode is VT_PROCESS.
      
      The process with vc->vt_pid is signaled before vc->vt_newvt is set.
      This causes the switch to fail when triggered by the monitoring process,
      because the target is still -1.
      
      [ If the signal sending fails, the subsequent "reset_vc(vc)" will then
        reset vt_newvt to -1, so this works for that case too.   - Linus ]
      Signed-off-by: Jan Lübbe <jluebbe@lasnet.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a64314e6
    • Merge branch 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6 · ed7fdff5
      Authored by Linus Torvalds
      * 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6:
        mv643xx_eth: Check ETH_INT_CAUSE_STATE bit
      ed7fdff5
    • i386: remove bogus comment about memory barrier · 4827bbb0
      Authored by Nick Piggin
      The comment being removed by this patch is incorrect and misleading.
      
      In the following situation:
      
      	1. load  ...
      	2. store 1 -> X
      	3. wmb
      	4. rmb
      	5. load  a <- Y
      	6. store ...
      
      4 will only ensure ordering of 1 with 5.
      3 will only ensure ordering of 2 with 6.
      
      Further, a CPU with strictly in-order stores will still only provide that
      2 and 6 are ordered (effectively, it is the same as a weakly ordered CPU
      with wmb after every store).
      
      In all cases, 5 may still be executed before 2 is visible to other CPUs!
      
      The additional piece of the puzzle that mb() provides is the store/load
      ordering, which fundamentally cannot be achieved with any combination of
      rmb()s and wmb()s.
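      
      As an added illustration (not part of the original message): a minimal
      C11/pthreads sketch of the classic store-buffering pattern.  The release
      fences stand in for wmb()-style store/store ordering; even so, both
      threads may read 0, and only a full barrier (a seq_cst fence, i.e. mfence
      on x86, or smp_mb() in the kernel) rules that outcome out:
      
      	#include <pthread.h>
      	#include <stdatomic.h>
      	#include <stdio.h>
      	
      	atomic_int X, Y;
      	int r0, r1;
      	
      	void *t0(void *arg)
      	{
      		(void)arg;
      		atomic_store_explicit(&X, 1, memory_order_relaxed);
      		/* "wmb"-like: orders the store above against later stores,
      		   but NOT against the load below */
      		atomic_thread_fence(memory_order_release);
      		r0 = atomic_load_explicit(&Y, memory_order_relaxed);
      		return NULL;
      	}
      	
      	void *t1(void *arg)
      	{
      		(void)arg;
      		atomic_store_explicit(&Y, 1, memory_order_relaxed);
      		atomic_thread_fence(memory_order_release);
      		r1 = atomic_load_explicit(&X, memory_order_relaxed);
      		return NULL;
      	}
      	
      	int main(void)
      	{
      		for (int i = 0; i < 100000; i++) {
      			pthread_t a, b;
      			atomic_store(&X, 0);
      			atomic_store(&Y, 0);
      			pthread_create(&a, NULL, t0, NULL);
      			pthread_create(&b, NULL, t1, NULL);
      			pthread_join(a, NULL);
      			pthread_join(b, NULL);
      			if (r0 == 0 && r1 == 0)  /* possible without a full barrier */
      				printf("store/load reordering observed, i=%d\n", i);
      		}
      		return 0;
      	}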
      
      This can be an unexpected result if one expected any sort of global ordering
      guarantee from barriers (eg. that the barriers themselves are sequentially
      consistent with other types of barriers).  However, sfence or lfence barriers
      need only provide a partial ordering of memory operations -- consider
      that wmb may be implemented as nothing more than inserting a special barrier
      entry in the store queue, or, in the case of x86, it can be a noop as the store
      queue is in order.  And an rmb may be implemented as a directive to prevent
      subsequent loads only so long as there are no previous outstanding loads (while
      there could be stores still in store queues).
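      
      For concreteness, illustrative i386 forms of the three barriers (an
      assumption for illustration only, not the kernel's actual macro
      definitions, which vary with CPU features and kernel version):
      
      	/* Illustrative i386 barrier macros -- not copied from the kernel. */
      	#define example_wmb() __asm__ __volatile__("sfence" ::: "memory") /* store/store only */
      	#define example_rmb() __asm__ __volatile__("lfence" ::: "memory") /* load/load only   */
      	#define example_mb()  __asm__ __volatile__("mfence" ::: "memory") /* full: store/load too */
      	/* A locked RMW is also a full barrier on x86 (i386 stack pointer shown): */
      	#define example_mb_alt() __asm__ __volatile__("lock; addl $0,0(%%esp)" ::: "memory")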
      
      I can actually see the occasional load/store being reordered around lfence on
      my core2.  That doesn't prove my above assertions, but it does show the comment
      is wrong (unless my program itself is wrong -- I can send it out on request).
      
      So:
         mb() and smp_mb() have always required, and always will require, a full
         mfence or lock-prefixed instruction on x86.  And we should remove this comment.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Paul McKenney <paulmck@us.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4827bbb0
  3. 29 September 2007, 14 commits
  4. 28 September 2007, 14 commits
  5. 27 September 2007, 1 commit