1. 12 Nov 2012, 1 commit
  2. 11 Nov 2012, 1 commit
  3. 10 Nov 2012, 1 commit
  4. 09 Nov 2012, 1 commit
  5. 05 Nov 2012, 1 commit
  6. 01 Nov 2012, 1 commit
  7. 31 Oct 2012, 1 commit
  8. 27 Oct 2012, 5 commits
  9. 26 Oct 2012, 1 commit
  10. 25 Oct 2012, 2 commits
  11. 24 Oct 2012, 1 commit
  12. 18 Oct 2012, 2 commits
  13. 17 Oct 2012, 1 commit
  14. 15 Oct 2012, 2 commits
  15. 09 Oct 2012, 8 commits
  16. 06 Oct 2012, 4 commits
  17. 05 Oct 2012, 7 commits
    • ipv6: release reference of ip6_null_entry's dst entry in __ip6_del_rt · 6825a26c
      Committed by Gao feng
      Since we hold the dst entry before calling __ip6_del_rt, we should
      also call dst_release rather than only returning -ENOENT when the
      rt6_info is ip6_null_entry (see the sketch below).

      And since we already hold the dst entry, it is safe to call
      dst_release outside the read-write lock.
      Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
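      For illustration, a minimal sketch of the error-path pattern this fix
      describes, assuming an "out" label and the usual __ip6_del_rt shape (a
      sketch under those assumptions, not the verbatim patch): leave through a
      common exit so the dst reference held by the caller is always dropped,
      and drop it only after the table lock has been released.

          /* Assumed shape of the fix, not the exact patch. */
          write_lock_bh(&table->tb6_lock);
          if (rt == net->ipv6.ip6_null_entry) {
                  err = -ENOENT;
                  goto out;       /* previously: return -ENOENT, leaking the ref */
          }
          /* ... normal deletion path sets err ... */
      out:
          write_unlock_bh(&table->tb6_lock);
          dst_release(&rt->dst);  /* release the reference held by the caller */
          return err;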
    • Remove noisy printks from llcp_sock_connect · 32418cfe
      Committed by Dave Jones
      Validation of userspace input shouldn't trigger dmesg spamming.
      Signed-off-by: Dave Jones <davej@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'fixes-for-3.7' of git://gitorious.org/linux-can/linux-can · 19d4e663
      Committed by David S. Miller
      Marc Kleine-Budde says:
      
      ====================
      Here are three patches for the v3.7 release cycle. Two patches by Peter Senna
      Tschudin fix the return values in the error handling paths of the sja1000
      peak pci and pcmcia drivers, and one patch by myself fixes a compile
      breakage of the mpc5xxx_can mscan driver due to a section conflict.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: prevent dropped connections due to rcvbuf overflow · e57edf6b
      Committed by Erik Hugne
      When large buffers are sent over connected TIPC sockets, it is
      likely that the sk_backlog will fill up on the receiver side, but
      the TIPC flow control mechanism is unaware of this, since it is
      based on message count.
      
      When this occurs, the sender will receive a TIPC_ERR_OVERLOAD message
      and drop its side of the connection, leaving it stale on the
      receiver end.
      
      By increasing sk_rcvbuf to a 'worst case' value, we avoid the
      overload caused by a full backlog queue, and flow control will
      work properly.

      This worst-case value is the maximum TIPC message size times the
      flow control window, doubled because a sender will transmit up to
      twice the window size before a port is marked congested, and
      doubled again to account for the sk_buff and other overheads
      (see the sketch at the end of this entry).
      Signed-off-by: Erik Hugne <erik.hugne@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
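      As a rough illustration of the sizing described above, a sketch of the
      receive-buffer computation (the constant names TIPC_MAX_USER_MSG_SIZE and
      TIPC_FLOW_CONTROL_WIN are assumptions here, not necessarily the exact
      identifiers used by the patch):

          /*
           * worst case = max message size
           *            * flow control window
           *            * 2  (a sender may send up to twice the window)
           *            * 2  (sk_buff and other per-buffer overhead)
           */
          sk->sk_rcvbuf = TIPC_MAX_USER_MSG_SIZE * TIPC_FLOW_CONTROL_WIN * 2 * 2;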
    • silence some noisy printks in irda · 09689581
      Committed by Dave Jones
      Fuzzing causes these printks to spew constantly.
      Changing them to DEBUG statements is consistent with other usage in the file,
      and makes them disappear when CONFIG_IRDA_DEBUG is disabled.
      Signed-off-by: Dave Jones <davej@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
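      A minimal sketch of the kind of change described above (the message text
      and debug level are invented for illustration; the point is that
      IRDA_DEBUG statements compile away when CONFIG_IRDA_DEBUG is disabled):

          /* Before (assumed shape): logs unconditionally, so fuzzed or
           * malformed userspace input fills dmesg. */
          printk(KERN_NOTICE "%s: bad user-supplied parameter\n", __func__);

          /* After: only emitted when CONFIG_IRDA_DEBUG is enabled. */
          IRDA_DEBUG(1, "%s: bad user-supplied parameter\n", __func__);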
    • team: set qdisc_tx_busylock to avoid LOCKDEP splat · b3c581d5
      Committed by Eric Dumazet
      If a qdisc is installed on a team device, it's possible to get a
      lockdep splat under stress, because a nested dev_queue_xmit() can
      take the busylock a second time (on a different device, so it is a
      false positive).
      
      Avoid this problem by using a distinct lock_class_key for team
      devices (a sketch of the pattern follows below).
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Jiri Pirko <jpirko@redhat.com>
      Acked-by: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: David S. Miller <davem@davemloft.net>
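      A minimal sketch of the pattern, assuming a setup helper named
      team_set_lockdep_class (the function and key names are illustrative, not
      necessarily those in the patch); the bonding fix below applies the same
      idea:

          static struct lock_class_key team_tx_busylock_key;

          /* Called while setting up the team net_device. */
          static void team_set_lockdep_class(struct net_device *dev)
          {
                  /* A nested dev_queue_xmit() through a stacked device takes
                   * the busylock of a different device; giving team devices
                   * their own class keeps lockdep from reporting it as
                   * recursive locking on the same key. */
                  dev->qdisc_tx_busylock = &team_tx_busylock_key;
          }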
    • bonding: set qdisc_tx_busylock to avoid LOCKDEP splat · 49ee4920
      Committed by Eric Dumazet
      If a qdisc is installed on a bonding device, it's possible to get
      the following lockdep splat under stress:
      
       =============================================
       [ INFO: possible recursive locking detected ]
       3.6.0+ #211 Not tainted
       ---------------------------------------------
       ping/4876 is trying to acquire lock:
        (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+.-...}, at: [<ffffffff8157a191>] dev_queue_xmit+0xe1/0x830
      
       but task is already holding lock:
        (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+.-...}, at: [<ffffffff8157a191>] dev_queue_xmit+0xe1/0x830
      
       other info that might help us debug this:
        Possible unsafe locking scenario:
      
              CPU0
              ----
         lock(dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
         lock(dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
      
        *** DEADLOCK ***
      
        May be due to missing lock nesting notation
      
       6 locks held by ping/4876:
        #0:  (sk_lock-AF_INET){+.+.+.}, at: [<ffffffff815e5030>] raw_sendmsg+0x600/0xc30
        #1:  (rcu_read_lock_bh){.+....}, at: [<ffffffff815ba4bd>] ip_finish_output+0x12d/0x870
        #2:  (rcu_read_lock_bh){.+....}, at: [<ffffffff8157a0b0>] dev_queue_xmit+0x0/0x830
        #3:  (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+.-...}, at: [<ffffffff8157a191>] dev_queue_xmit+0xe1/0x830
        #4:  (&bond->lock){++.?..}, at: [<ffffffffa02128c1>] bond_start_xmit+0x31/0x4b0 [bonding]
        #5:  (rcu_read_lock_bh){.+....}, at: [<ffffffff8157a0b0>] dev_queue_xmit+0x0/0x830
      
       stack backtrace:
       Pid: 4876, comm: ping Not tainted 3.6.0+ #211
       Call Trace:
        [<ffffffff810a0145>] __lock_acquire+0x715/0x1b80
        [<ffffffff810a256b>] ? mark_held_locks+0x9b/0x100
        [<ffffffff810a1bf2>] lock_acquire+0x92/0x1d0
        [<ffffffff8157a191>] ? dev_queue_xmit+0xe1/0x830
        [<ffffffff81726b7c>] _raw_spin_lock+0x3c/0x50
        [<ffffffff8157a191>] ? dev_queue_xmit+0xe1/0x830
        [<ffffffff8106264d>] ? rcu_read_lock_bh_held+0x5d/0x90
        [<ffffffff8157a191>] dev_queue_xmit+0xe1/0x830
        [<ffffffff8157a0b0>] ? netdev_pick_tx+0x570/0x570
        [<ffffffffa0212a6a>] bond_start_xmit+0x1da/0x4b0 [bonding]
        [<ffffffff815796d0>] dev_hard_start_xmit+0x240/0x6b0
        [<ffffffff81597c6e>] sch_direct_xmit+0xfe/0x2a0
        [<ffffffff8157a249>] dev_queue_xmit+0x199/0x830
        [<ffffffff8157a0b0>] ? netdev_pick_tx+0x570/0x570
        [<ffffffff815ba96f>] ip_finish_output+0x5df/0x870
        [<ffffffff815ba4bd>] ? ip_finish_output+0x12d/0x870
        [<ffffffff815bb964>] ip_output+0x54/0xf0
        [<ffffffff815bad48>] ip_local_out+0x28/0x90
        [<ffffffff815bc444>] ip_send_skb+0x14/0x50
        [<ffffffff815bc4b2>] ip_push_pending_frames+0x32/0x40
        [<ffffffff815e536a>] raw_sendmsg+0x93a/0xc30
        [<ffffffff8128d570>] ? selinux_file_send_sigiotask+0x1f0/0x1f0
        [<ffffffff8109ddb4>] ? __lock_is_held+0x54/0x80
        [<ffffffff815f6730>] ? inet_recvmsg+0x220/0x220
        [<ffffffff8109ddb4>] ? __lock_is_held+0x54/0x80
        [<ffffffff815f6855>] inet_sendmsg+0x125/0x240
        [<ffffffff815f6730>] ? inet_recvmsg+0x220/0x220
        [<ffffffff8155cddb>] sock_sendmsg+0xab/0xe0
        [<ffffffff810a1650>] ? lock_release_non_nested+0xa0/0x2e0
        [<ffffffff810a1650>] ? lock_release_non_nested+0xa0/0x2e0
        [<ffffffff8155d18c>] __sys_sendmsg+0x37c/0x390
        [<ffffffff81195b2a>] ? fsnotify+0x2ca/0x7e0
        [<ffffffff811958e8>] ? fsnotify+0x88/0x7e0
        [<ffffffff81361f36>] ? put_ldisc+0x56/0xd0
        [<ffffffff8116f98a>] ? fget_light+0x3da/0x510
        [<ffffffff8155f6c4>] sys_sendmsg+0x44/0x80
        [<ffffffff8172fc22>] system_call_fastpath+0x16/0x1b
      
      Avoid this problem by using a distinct lock_class_key for bonding
      devices, as in the team fix above.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Jay Vosburgh <fubar@us.ibm.com>
      Cc: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>