1. 30 Jun 2010, 2 commits
    • compiler-gcc.h: gcc-4.5 needs noclone and noinline on __naked functions · 9c695203
      Authored by Mikael Pettersson
      A __naked function is defined in C but with a body completely implemented
      by asm(), including any prologue and epilogue.  These asm() bodies expect
      standard calling conventions for parameter passing.  Older GCCs implement
      that correctly, but 4.[56] currently do not; see GCC PR44290.  In the
      Linux kernel this breaks ARM, causing most arch/arm/mm/copypage-*.c
      modules to get miscompiled, resulting in kernel crashes during bootup.
      
      Part of the kernel fix is to augment the __naked function attribute to
      also imply noinline and noclone (see the sketch below).  This patch
      implements that, and it has been verified to fix boot failures with
      gcc-4.5-compiled 2.6.34 and 2.6.35-rc1 kernels.  The patch is a no-op
      with older GCCs.
      Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
      Signed-off-by: Khem Raj <raj.khem@gmail.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
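      A minimal sketch of the idea, assuming standard GCC version macros; the
      real kernel change spreads this across compiler-gcc.h and compiler-gcc4.h:

          #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5)
          /* gcc >= 4.5 understands the noclone attribute */
          #define __noclone __attribute__((__noclone__))
          #else
          #define __noclone /* no-op on older compilers */
          #endif

          /* A __naked body is pure asm(); inlining or cloning it would break
           * its hand-written argument handling, so forbid both. */
          #define __naked __attribute__((naked)) __attribute__((noinline)) __noclone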
    • fs: fix superblock iteration race · 57439f87
      Authored by npiggin@suse.de
      list_for_each_entry_safe is not suitable to protect against concurrent
      modification of the list: commit 6754af64 introduced a race in sb
      walking.

      list_for_each_entry can use the trick of pinning the current entry in
      the list before we drop and retake the lock, because it follows
      cur->next only after the loop body.  list_for_each_entry_safe, however,
      saves n = cur->next before entering the loop body, so when the lock is
      dropped, n may be deleted (see the sketch below).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: Frank Mayhar <fmayhar@google.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
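      A rough expansion of the two iterators, reduced to plain struct
      list_head walking ('head' here is a hypothetical lock-protected list):

          #include <linux/list.h>

          void walk(struct list_head *head)
          {
                  struct list_head *pos, *n;

                  /* list_for_each re-reads pos->next on every iteration. */
                  for (pos = head->next; pos != head; pos = pos->next) {
                          /* Pin 'pos' (e.g. take a reference), drop the lock,
                           * block, retake the lock: 'pos' stays valid, so
                           * following pos->next afterwards is still safe. */
                  }

                  /* list_for_each_safe caches n = pos->next up front. */
                  for (pos = head->next, n = pos->next; pos != head;
                       pos = n, n = pos->next) {
                          /* Nothing pins 'n'.  If the lock is dropped in the
                           * body and another CPU deletes and frees 'n', the
                           * 'pos = n' step walks into freed memory. */
                  }
          }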
  2. 24 Jun 2010, 1 commit
  3. 22 Jun 2010, 1 commit
  4. 14 Jun 2010, 1 commit
  5. 12 Jun 2010, 4 commits
  6. 11 Jun 2010, 2 commits
    • writeback: simplify and split bdi_start_writeback · c5444198
      Authored by Christoph Hellwig
      bdi_start_writeback now never gets a superblock passed, so we can just
      remove that case.  And to further untangle the code and flatten the
      call stack, split it into two trivial helpers for its two callers (see
      the sketch below).
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
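      A sketch of the resulting shape, assuming the helper names implied by
      the summary; the common helper's body is elided:

          static void __bdi_start_writeback(struct backing_dev_info *bdi,
                                            long nr_pages)
          {
                  /* ... allocate and queue a struct wb_writeback_work ... */
          }

          /* Caller 1: write out a fixed number of pages. */
          void bdi_start_writeback(struct backing_dev_info *bdi, long nr_pages)
          {
                  __bdi_start_writeback(bdi, nr_pages);
          }

          /* Caller 2: run background writeback until the dirty thresholds
           * are met; LONG_MAX effectively means "no page budget". */
          void bdi_start_background_writeback(struct backing_dev_info *bdi)
          {
                  __bdi_start_writeback(bdi, LONG_MAX);
          }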
    • net: deliver skbs on inactive slaves to exact matches · 597a264b
      Authored by John Fastabend
      Currently, the accelerated receive path for VLANs will
      drop packets if the real device is an inactive slave and
      is not one of the special pkts tested for in
      skb_bond_should_drop().  This behavior is different from
      the non-accelerated path and for pkts over a bonded vlan.
      
      For example,
      
      vlanx -> bond0 -> ethx
      
      will be dropped in the vlan path and not delivered to any
      packet handlers at all.  However,
      
      bond0 -> vlanx -> ethx
      
      and
      
      bond0 -> ethx
      
      will be delivered to handlers that match the exact dev,
      because the VLAN path checks the real_dev, which is not a
      slave, and netif_receive_skb() doesn't drop frames but only
      delivers them to exact matches.

      This patch adds a sk_buff flag which is used for tagging
      skbs that would previously have been dropped and allows the
      skb to continue on to netif_receive_skb() (see the sketch
      after this entry).  Here we add logic to check for the
      deliver_no_wcard flag and, if it is set, deliver only to
      handlers that match exactly.  This makes both paths above
      consistent and gives pkt handlers a way to identify skbs
      that come from inactive slaves.  Without this patch, in
      some configurations skbs will be delivered to handlers with
      exact matches and in others be dropped outright in the vlan
      path.
      
      I have tested the following 4 configurations in failover modes
      and load balancing modes.
      
      # bond0 -> ethx
      
      # vlanx -> bond0 -> ethx
      
      # bond0 -> vlanx -> ethx
      
      # bond0 -> ethx
                  |
        vlanx -> --
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
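      A condensed sketch of the receive-path check (variable names follow
      the __netif_receive_skb() code of that era and are assumptions here):

          /* The new single-bit skb flag tags frames that used to be dropped. */
          if (skb->deliver_no_wcard)
                  null_or_orig = orig_dev;   /* deliver to exact matches only */
          else if (master && skb_bond_should_drop(skb, master)) {
                  skb->deliver_no_wcard = 1; /* frame from an inactive slave */
                  null_or_orig = orig_dev;
          }
          /* Handlers are then skipped unless their device is NULL-or-exact,
           * so wildcard handlers never see tagged frames. */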
  7. 10 Jun 2010, 1 commit
  8. 09 Jun 2010, 4 commits
    • misc: Fix allocation 'borrowed' by vhost_net · 79907d89
      Authored by Alan Cox
      Device number 10,233 is officially allocated to /dev/kmview, which is
      shipping in the Ubuntu and Debian distributions.  vhost_net seems to
      have borrowed it without making a proper request, and this causes
      regressions in the other distributions.

      vhost_net can use a dynamic minor, so use that instead (see the sketch
      below).  Also update the file with a comment to try and avoid future
      misunderstandings.
      
      Cc: stable@kernel.org
      Signed-off-by: Alan Cox <device@lanana.org>
      [ We should have caught this before 2.6.34 got released.  - Linus ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
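      The shape of the fix, as a sketch (the fops name is assumed):

          #include <linux/miscdevice.h>

          static struct miscdevice vhost_net_misc = {
                  .minor = MISC_DYNAMIC_MINOR,  /* was the borrowed 10,233 */
                  .name  = "vhost-net",
                  .fops  = &vhost_net_fops,
          };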
    • writeback: pay attention to wbc->nr_to_write in write_cache_pages · 0b564927
      Authored by Dave Chinner
      If a filesystem writes more than one page in ->writepage, write_cache_pages
      fails to notice this and continues to attempt writeback even when
      wbc->nr_to_write has gone negative - this trace was captured from XFS:
      
          wbc_writeback_start: towrt=1024
          wbc_writepage: towrt=1024
          wbc_writepage: towrt=0
          wbc_writepage: towrt=-1
          wbc_writepage: towrt=-5
          wbc_writepage: towrt=-21
          wbc_writepage: towrt=-85
      
      This has adverse effects on filesystem writeback behaviour.
      write_cache_pages() needs to terminate after a certain number of pages
      are written, not after a certain number of calls to ->writepage are
      made (see the sketch below).  This is a regression introduced by
      17bc6c30 ("vfs: Add no_nrwrite_index_update writeback control flag"),
      but it cannot be reverted directly due to subsequent bug fixes that
      have gone in on top of it.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
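      A condensed sketch of the corrected termination logic; the loop
      structure is simplified and next_dirty_page() is a hypothetical
      stand-in for the real pagevec lookup:

          while ((page = next_dirty_page(mapping, wbc)) != NULL) {
                  ret = (*writepage)(page, wbc, data);
                  if (ret)
                          break;
                  /* ->writepage may have written (and accounted for) many
                   * pages, e.g. XFS page clustering, so test the remaining
                   * budget itself rather than counting calls. */
                  if (wbc->nr_to_write <= 0) {
                          done = 1;
                          break;
                  }
          }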
    • tracing: Fix null pointer deref with SEND_SIG_FORCED · b9b76dfa
      Authored by Oleg Nesterov
      BUG: unable to handle kernel NULL pointer dereference at
      	0000000000000006
      IP: [<ffffffff8107bd37>] ftrace_raw_event_signal_generate+0x87/0x140
      
      TP_STORE_SIGINFO() forgets about SEND_SIG_FORCED; fix that (see the
      sketch below).

      We should probably export is_si_special() and change TP_STORE_SIGINFO()
      to use it in the longer term.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Roland McGrath <roland@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: 2.6.33.x-2.6.34.x <stable@kernel.org>
      LKML-Reference: <20100603213409.GA8307@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
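      Roughly the shape of the fix: treat SEND_SIG_FORCED like the other
      special siginfo values instead of dereferencing it (a sketch, not the
      verbatim macro):

          #define TP_STORE_SIGINFO(__entry, info)                 \
                  do {                                            \
                          if (info == SEND_SIG_NOINFO ||          \
                              info == SEND_SIG_FORCED) {          \
                                  __entry->errno = 0;             \
                                  __entry->code  = SI_USER;       \
                          } else if (info == SEND_SIG_PRIV) {     \
                                  __entry->errno = 0;             \
                                  __entry->code  = SI_KERNEL;     \
                          } else {                                \
                                  __entry->errno = info->si_errno;\
                                  __entry->code  = info->si_code; \
                          }                                       \
                  } while (0)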
    • sched: Fix PROVE_RCU vs cpu_cgroup · dc61b1d6
      Authored by Peter Zijlstra
      PROVE_RCU has a few issues with the cpu_cgroup because the scheduler
      typically holds rq->lock around the css rcu derefs but the generic
      cgroup code doesn't (and can't) know about that lock.
      
      Provide means to add extra checks to the css dereference and use that
      in the scheduler to annotate its users.
      
      The addition of rq->lock to these checks is correct because the
      cgroup_subsys::attach() method takes the rq->lock for each task it
      moves; therefore, by holding that lock, we ensure the task is pinned
      to the current cgroup and the RCU dereference is valid (see the
      sketch below).

      That leaves one genuine race in __sched_setscheduler(), where we used
      task_group() without holding any of the required locks and thus raced
      with the cgroup code.  Solve this by moving the check under the
      appropriate lock.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
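      A sketch of the annotated dereference in the scheduler (helper names
      follow the pattern described above and may differ in detail):

          static inline struct task_group *task_group(struct task_struct *p)
          {
                  struct cgroup_subsys_state *css;

                  /* Legal under rcu_read_lock() OR the task's rq->lock; the
                   * extra lockdep condition is handed to the RCU checker. */
                  css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
                                  lockdep_is_held(&task_rq(p)->lock));
                  return container_of(css, struct task_group, css);
          }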
  9. 08 Jun 2010, 2 commits
  10. 05 Jun 2010, 6 commits
  11. 03 Jun 2010, 5 commits
  12. 02 Jun 2010, 1 commit
  13. 01 Jun 2010, 10 commits