1. 21 Jul 2010 (1 commit)
  2. 20 Jul 2010 (2 commits)
  3. 19 Jul 2010 (1 commit)
    • IPv6: fix CoA check in RH2 input handler (mip6_rthdr_input()) · d9a9dc66
      Committed by Arnaud Ebalard
      The input handler for the Type 2 Routing Header (mip6_rthdr_input())
      checks whether the CoA in the packet matches the CoA in the XFRM state.
      
      The current check is buggy: it compares the address in the Type 2
      Routing Header, i.e. the HoA, against the expected CoA in the state.
      The comparison should instead be made against the address in the
      destination field of the IPv6 header (a sketch of the corrected check
      follows this entry).
      
      The bug remained unnoticed because the main (and possibly only current)
      user of the code (UMIP MIPv6 Daemon) initializes the XFRM state with the
      unspecified address, i.e. explicitly allows everything.
      
      Yoshifuji-san, can you ack that one?
      Signed-off-by: Arnaud Ebalard <arno@natisbad.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d9a9dc66
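      A minimal sketch of the corrected check, modelled on net/ipv6/mip6.c but
      illustrative rather than the verbatim patch: the CoA configured in the
      XFRM state is matched against the IPv6 destination address instead of
      the address carried in the RH2.

        static int mip6_rthdr_input(struct xfrm_state *x, struct sk_buff *skb)
        {
                const struct ipv6hdr *iph = ipv6_hdr(skb);
                struct rt2_hdr *rt2 = (struct rt2_hdr *)skb->data;
                int err = rt2->rt_hdr.nexthdr;

                spin_lock(&x->lock);
                /* Compare the expected CoA against the IPv6 destination
                 * address, not against rt2->addr (which carries the HoA).
                 * An unspecified CoA in the state still means "accept any
                 * address". */
                if (!ipv6_addr_equal(&iph->daddr, (struct in6_addr *)x->coaddr) &&
                    !ipv6_addr_any((struct in6_addr *)x->coaddr))
                        err = -ENOENT;
                spin_unlock(&x->lock);

                return err;
        }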
  4. 16 Jul 2010 (1 commit)
  5. 15 Jul 2010 (5 commits)
  6. 13 Jul 2010 (2 commits)
  7. 09 Jul 2010 (4 commits)
  8. 08 Jul 2010 (1 commit)
  9. 06 Jul 2010 (1 commit)
  10. 05 Jul 2010 (1 commit)
  11. 03 Jul 2010 (1 commit)
    • net: decreasing real_num_tx_queues needs to flush qdisc · f0796d5c
      Committed by John Fastabend
      Reducing real_num_tx_queues needs to flush the qdisc; otherwise
      skbs with a queue_mapping greater than or equal to the new
      real_num_tx_queues can be sent to the underlying driver.
      
      The flow for this is,
      
      dev_queue_xmit()
      	dev_pick_tx()
      		skb_tx_hash()  => hash using real_num_tx_queues
      		skb_set_queue_mapping()
      	...
      	qdisc_enqueue_root() => enqueue skb on txq from hash
      ...
      dev->real_num_tx_queues -= n
      ...
      sch_direct_xmit()
      	dev_hard_start_xmit()
      		ndo_start_xmit(skb,dev) => skb queue set with old hash
      
      skbs are enqueued on the qdisc with skb->queue_mapping set such that
      0 <= queue_mapping < real_num_tx_queues.  When the driver
      decreases real_num_tx_queues, skbs may be dequeued from the
      qdisc with a queue_mapping greater than or equal to the new
      real_num_tx_queues.
      
      This fixes a case in ixgbe where this was occurring with DCB
      and FCoE. Because the driver uses queue_mapping to map
      skbs to tx descriptor rings, it can end up mapping skbs to
      rings that no longer exist (a sketch of the flush follows this entry).
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Tested-by: Ross Brattain <ross.b.brattain@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f0796d5c
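      A minimal sketch of the idea, with illustrative helper names rather
      than the exact ones from the patch: before real_num_tx_queues is
      lowered, the qdiscs attached to the tx queues that are going away are
      reset, so no enqueued skb with an out-of-range queue_mapping can still
      be handed to the driver.

        /* Illustrative helper: reset the qdisc of every tx queue with index
         * >= new_txq before shrinking dev->real_num_tx_queues. */
        static void reset_tx_qdiscs_above(struct net_device *dev,
                                          unsigned int new_txq)
        {
                unsigned int i;

                for (i = new_txq; i < dev->num_tx_queues; i++) {
                        struct Qdisc *qdisc = netdev_get_tx_queue(dev, i)->qdisc;

                        if (qdisc) {
                                spin_lock_bh(qdisc_lock(qdisc));
                                qdisc_reset(qdisc);
                                spin_unlock_bh(qdisc_lock(qdisc));
                        }
                }
        }

        /* Driver side, e.g. on a DCB/FCoE reconfiguration: */
        reset_tx_qdiscs_above(dev, new_txq);
        dev->real_num_tx_queues = new_txq;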
  12. 02 Jul 2010 (1 commit)
  13. 29 Jun 2010 (2 commits)
  14. 26 Jun 2010 (1 commit)
  15. 25 Jun 2010 (2 commits)
  16. 23 Jun 2010 (1 commit)
  17. 22 Jun 2010 (1 commit)
  18. 18 Jun 2010 (1 commit)
  19. 17 Jun 2010 (2 commits)
  20. 16 Jun 2010 (1 commit)
  21. 14 Jun 2010 (2 commits)
  22. 11 Jun 2010 (3 commits)
    • pktgen: Fix accuracy of inter-packet delay. · 07a0f0f0
      Committed by Daniel Turull
      This patch corrects a bug in the delay logic of pktgen.
      It makes sure the inter-packet interval is accurate (a pacing sketch
      follows this entry).
      Signed-off-by: Daniel Turull <daniel.turull@gmail.com>
      Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      07a0f0f0
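      The message does not spell out the mechanism, so as a general
      illustration only (not the pktgen code): accurate inter-packet spacing
      is usually achieved by pacing against an absolute next-transmit
      deadline rather than sleeping a fixed delay after each send, which
      would let per-packet overhead accumulate. A small user-space C sketch
      of that technique:

        #include <stdio.h>
        #include <time.h>

        #define NSEC_PER_SEC 1000000000L

        int main(void)
        {
                const long delay_ns = 1000000;  /* 1 ms between packets */
                struct timespec next;
                int i;

                clock_gettime(CLOCK_MONOTONIC, &next);
                for (i = 0; i < 10; i++) {
                        printf("send packet %d\n", i);  /* transmit goes here */

                        /* Advance an absolute deadline; the time spent
                         * transmitting no longer stretches the interval. */
                        next.tv_nsec += delay_ns;
                        while (next.tv_nsec >= NSEC_PER_SEC) {
                                next.tv_nsec -= NSEC_PER_SEC;
                                next.tv_sec++;
                        }
                        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                                        &next, NULL);
                }
                return 0;
        }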
    • pkt_sched: gen_estimator: add a new lock · ae638c47
      Committed by Eric Dumazet
      gen_kill_estimator() / gen_new_estimator() are not always called with
      RTNL held.
      
      net/netfilter/xt_RATEEST.c is one user of this API that does not hold
      RTNL, so random corruption can occur when "tc" and "iptables" race.
      
      Add a new fine-grained lock instead of trying to use RTNL in netfilter
      (see the sketch after this entry).
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ae638c47
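      A minimal sketch of the approach, with illustrative names (est_lock,
      est_list, the helpers) rather than the identifiers from the patch: a
      single fine-grained spinlock serializes estimator insertion and
      removal, so gen_new_estimator()/gen_kill_estimator() no longer depend
      on every caller holding RTNL.

        /* Illustrative: one dedicated lock protects the shared estimator
         * structures so that callers without RTNL (e.g. xt_RATEEST) cannot
         * race with tc. */
        static DEFINE_SPINLOCK(est_lock);

        static void est_insert(struct gen_estimator *est)
        {
                spin_lock_bh(&est_lock);
                list_add_rcu(&est->list, &est_list);  /* shared state update */
                spin_unlock_bh(&est_lock);
        }

        static void est_remove(struct gen_estimator *est)
        {
                spin_lock_bh(&est_lock);
                list_del_rcu(&est->list);
                spin_unlock_bh(&est_lock);
        }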
    • net: deliver skbs on inactive slaves to exact matches · 597a264b
      Committed by John Fastabend
      Currently, the accelerated receive path for VLANs will
      drop packets if the real device is an inactive slave and
      is not one of the special pkts tested for in
      skb_bond_should_drop().  This behavior differs from the
      non-accelerated path and from the handling of pkts over a bonded vlan.
      
      For example,
      
      vlanx -> bond0 -> ethx
      
      will be dropped in the vlan path and not delivered to any
      packet handlers at all.  However,
      
      bond0 -> vlanx -> ethx
      
      and
      
      bond0 -> ethx
      
      will be delivered to handlers that match the exact dev,
      because the VLAN path checks real_dev, which is not a
      slave, and netif_receive_skb() doesn't drop frames but only
      delivers them to exact matches.
      
      This patch adds an sk_buff flag which is used for tagging
      skbs that would previously have been dropped and allows the
      skb to continue to netif_receive_skb().  Here we add
      logic to check for the deliver_no_wcard flag: if it
      is set, only deliver to handlers that match exactly.  This
      makes both paths above consistent and gives packet handlers
      a way to identify skbs that come from inactive slaves.
      Without this patch, in some configurations skbs will be
      delivered to handlers with exact matches and in others
      be dropped outright in the vlan path (a simplified sketch of the
      check follows this entry).
      
      I have tested the following 4 configurations in failover modes
      and load balancing modes.
      
      # bond0 -> ethx
      
      # vlanx -> bond0 -> ethx
      
      # bond0 -> vlanx -> ethx
      
      # bond0 -> ethx
                  |
        vlanx -> --
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      597a264b
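      A condensed sketch of the exact-match delivery check (the
      deliver_no_wcard flag is the one added by the patch; the loop itself is
      simplified from the receive path, not verbatim): when the flag is set,
      wildcard handlers are skipped and only handlers bound to the exact
      receiving device see the skb.

        /* Condensed illustration of the protocol-handler delivery loop. */
        list_for_each_entry_rcu(ptype, &ptype_all, list) {
                /* Wildcard handlers register with ptype->dev == NULL.  Skip
                 * them when the skb was tagged on an inactive bonding
                 * slave. */
                if (skb->deliver_no_wcard && ptype->dev != skb->dev)
                        continue;
                if (!ptype->dev || ptype->dev == skb->dev)
                        deliver_skb(skb, ptype, orig_dev);
        }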
  23. 10 Jun 2010 (3 commits)