1. 17 Jun 2010, 6 commits
  2. 16 Jun 2010, 8 commits
  3. 15 Jun 2010, 5 commits
  4. 14 Jun 2010, 5 commits
  5. 13 Jun 2010, 1 commit
    • enic: fix pci_alloc_consistent argument · d49aba84
      Committed by Randy Dunlap
      Fix build warning on i386 (32-bit) with 32-bit dma_addr_t:
      
      drivers/net/enic/vnic_dev.c: In function 'vnic_dev_init_prov':
      drivers/net/enic/vnic_dev.c:716: warning: passing argument 3 of 'pci_alloc_consistent' from incompatible pointer type
      include/asm-generic/pci-dma-compat.h:16: note: expected 'dma_addr_t *' but argument is of type 'u64 *'
      
      Now builds without warnings on i386 and on x86_64; a sketch of the pattern follows this entry.
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Scott Feldman <scofeldm@cisco.com>
      Cc: Vasanthy Kolluri <vkolluri@cisco.com>
      Cc: Roopa Prabhu <roprabhu@cisco.com>
      Acked-by: Scott Feldman <scofeldm@cisco.com>
      d49aba84
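      The warning comes from passing a u64 * where pci_alloc_consistent() expects a
      dma_addr_t *, which is only 32 bits wide on i386 without PAE. A minimal sketch of
      the pattern, assuming a hypothetical device structure (the real vnic_dev code
      differs in detail): allocate through a local dma_addr_t and widen it explicitly.

      #include <linux/pci.h>

      /* Hypothetical example structure; the real vnic_dev layout differs. */
      struct example_dev {
              struct pci_dev *pdev;
              void *prov_buf;
              u64 prov_pa;            /* stored as u64 for the firmware interface */
      };

      static int example_alloc_prov(struct example_dev *edev, size_t len)
      {
              dma_addr_t prov_pa;     /* matches pci_alloc_consistent()'s parameter type */

              edev->prov_buf = pci_alloc_consistent(edev->pdev, len, &prov_pa);
              if (!edev->prov_buf)
                      return -ENOMEM;

              /* Widen explicitly instead of passing &edev->prov_pa (a u64 *) directly. */
              edev->prov_pa = prov_pa;
              return 0;
      }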
  6. 12 Jun 2010, 2 commits
  7. 11 Jun 2010, 4 commits
    • net8139: fix a race at the end of NAPI · 349124a0
      Committed by Figo.zhang
      Fix a race at the end of NAPI completion processing: __napi_complete() should be
      called before re-enabling the device interrupt (see the sketch after this entry).
      Signed-off-by: Figo.zhang <figo1802@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      349124a0
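      The ordering matters because an interrupt that fires after the chip is re-armed but
      before NAPI is marked complete can try to schedule a poll that is silently lost. A
      minimal sketch of the ordering, with a hypothetical driver structure and stubbed
      register helpers (not the actual 8139 code):

      #include <linux/netdevice.h>
      #include <linux/interrupt.h>

      struct example_priv {
              struct napi_struct napi;
              void __iomem *ioaddr;
      };

      /* Hypothetical hardware helpers, stubbed for illustration. */
      static void example_enable_irq(struct example_priv *priv) { /* write IntrMask here */ }
      static int example_rx(struct example_priv *priv, int budget) { return 0; }

      static int example_poll(struct napi_struct *napi, int budget)
      {
              struct example_priv *priv = container_of(napi, struct example_priv, napi);
              int work_done = example_rx(priv, budget);

              if (work_done < budget) {
                      unsigned long flags;

                      local_irq_save(flags);
                      /*
                       * Mark NAPI complete *before* re-enabling the interrupt.
                       * If the order were reversed, an interrupt arriving in the
                       * window could call napi_schedule() while NAPI still looks
                       * scheduled, and that event would be lost.
                       */
                      __napi_complete(napi);
                      example_enable_irq(priv);
                      local_irq_restore(flags);
              }

              return work_done;
      }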
    • pktgen: Fix accuracy of inter-packet delay. · 07a0f0f0
      Committed by Daniel Turull
      This patch corrects a bug in pktgen's delay handling and makes the inter-packet
      interval accurate (a general illustration of the idea follows this entry).
      Signed-off-by: Daniel Turull <daniel.turull@gmail.com>
      Signed-off-by: Robert Olsson <robert.olsson@its.uu.se>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      07a0f0f0
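      The commit message does not spell out the change, but the usual way to keep an
      inter-packet interval accurate is to pace transmissions against an absolute
      deadline advanced from the previous deadline, rather than sleeping a relative
      delay after each send, so per-packet overhead does not accumulate as drift. A
      small userspace C illustration of that general idea (not the actual pktgen patch):

      #include <stdio.h>
      #include <time.h>

      #define DELAY_NS 1000000L               /* 1 ms between packets */

      /* Advance the deadline from the previous deadline, not from "now". */
      static void advance(struct timespec *t, long ns)
      {
              t->tv_nsec += ns;
              while (t->tv_nsec >= 1000000000L) {
                      t->tv_nsec -= 1000000000L;
                      t->tv_sec++;
              }
      }

      int main(void)
      {
              struct timespec next;
              int i;

              clock_gettime(CLOCK_MONOTONIC, &next);
              for (i = 0; i < 10; i++) {
                      advance(&next, DELAY_NS);
                      /* Sleep until the absolute deadline, then "send" the packet. */
                      clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                      printf("packet %d sent\n", i);
              }
              return 0;
      }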
    • pkt_sched: gen_estimator: add a new lock · ae638c47
      Committed by Eric Dumazet
      gen_kill_estimator() / gen_new_estimator() are not always called with
      RTNL held.
      
      net/netfilter/xt_RATEEST.c is one user of this API that does not hold RTNL, so
      random corruption can occur between "tc" and "iptables".
      
      Add a new fine-grained lock instead of trying to take RTNL in netfilter; a sketch
      of the approach follows this entry.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ae638c47
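      The idea is to serialize estimator registration and removal with a dedicated
      spinlock, so that callers which do not hold RTNL (such as xt_RATEEST) are still
      safe against concurrent "tc" operations. A minimal sketch with hypothetical names
      (the real gen_estimator internals differ and also use RCU on the reader side):

      #include <linux/spinlock.h>
      #include <linux/list.h>

      static DEFINE_SPINLOCK(example_est_lock);       /* dedicated lock, independent of RTNL */
      static LIST_HEAD(example_est_list);

      struct example_estimator {
              struct list_head list;
              /* rate/interval state would live here */
      };

      static void example_new_estimator(struct example_estimator *e)
      {
              spin_lock_bh(&example_est_lock);        /* _bh: the list is also walked from a timer */
              list_add(&e->list, &example_est_list);
              spin_unlock_bh(&example_est_lock);
      }

      static void example_kill_estimator(struct example_estimator *e)
      {
              spin_lock_bh(&example_est_lock);
              list_del(&e->list);
              spin_unlock_bh(&example_est_lock);
      }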
    • net: deliver skbs on inactive slaves to exact matches · 597a264b
      Committed by John Fastabend
      Currently, the accelerated receive path for VLANs drops packets if the real
      device is an inactive slave and the packet is not one of the special types
      tested for in skb_bond_should_drop().  This behavior differs from the
      non-accelerated path and from packets received over a bonded VLAN.
      
      For example,
      
      vlanx -> bond0 -> ethx
      
      will be dropped in the vlan path and not delivered to any
      packet handlers at all.  However,
      
      bond0 -> vlanx -> ethx
      
      and
      
      bond0 -> ethx
      
      will be delivered to handlers that match the exact dev,
      because the VLAN path checks real_dev, which is not a
      slave, and netif_receive_skb() doesn't drop frames but only
      delivers them to exact matches.
      
      This patch adds a sk_buff flag that tags skbs which would previously have been
      dropped and allows them to continue on to netif_receive_skb().  There we add
      logic to check for the deliver_no_wcard flag and, if it is set, deliver only to
      handlers that match exactly (a simplified sketch follows this entry).  This
      makes both paths above consistent and gives packet handlers a way to identify
      skbs that come from inactive slaves.  Without this patch, in some configurations
      skbs are delivered to handlers with exact matches and in others are dropped
      outright in the VLAN path.
      
      I have tested the following four configurations in failover and load-balancing
      modes.
      
      # bond0 -> ethx
      
      # vlanx -> bond0 -> ethx
      
      # bond0 -> vlanx -> ethx
      
      # bond0 -> ethx
                  |
        vlanx -> --
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      597a264b
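      A simplified illustration of the exact-match rule described above; this is not
      the real __netif_receive_skb() logic, and the helper name is hypothetical, but
      deliver_no_wcard is the flag the patch adds to struct sk_buff:

      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      /* Decide whether a protocol handler should see this skb. */
      static bool example_should_deliver(const struct sk_buff *skb,
                                         const struct packet_type *ptype)
      {
              /* Tagged skbs (received on inactive slaves) go only to exact matches. */
              if (skb->deliver_no_wcard)
                      return ptype->dev == skb->dev;

              /* Otherwise wildcard handlers (ptype->dev == NULL) also match. */
              return !ptype->dev || ptype->dev == skb->dev;
      }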
  8. 10 Jun 2010, 8 commits
  9. 09 Jun 2010, 1 commit
    • ipvs: Add missing locking during connection table hashing and unhashing · aea9d711
      Committed by Sven Wegener
      The code that hashes and unhashes connections from the connection table
      is missing locking of the connection being modified, which opens up a
      race condition that results in memory corruption when hit.
      
      Here is what happens in pretty verbose form:
      
      CPU 0					CPU 1
      ------------				------------
      An active connection is terminated and
      we schedule ip_vs_conn_expire() on this
      CPU to expire this connection.
      
      					IRQ assignment is changed to this CPU,
      					but the expire timer stays scheduled on
      					the other CPU.
      
      					New connection from same ip:port comes
      					in right before the timer expires, we
      					find the inactive connection in our
      					connection table and get a reference to
      					it. We properly lock the connection in
      					tcp_state_transition() and read the
      					connection flags in set_tcp_state().
      
      ip_vs_conn_expire() gets called, we
      unhash the connection from our
      connection table and remove the hashed
      flag in ip_vs_conn_unhash(), without
      proper locking!
      
      					While still holding proper locks we
      					write the connection flags in
      					set_tcp_state() and this sets the hashed
      					flag again.
      
      ip_vs_conn_expire() fails to expire the
      connection, because the other CPU has
      incremented the reference count. We try
      to re-insert the connection into our
      connection table, but this fails in
      ip_vs_conn_hash(), because the hashed
      flag has been set by the other CPU. We
      re-schedule execution of
      ip_vs_conn_expire(). Now this connection
      has the hashed flag set, but isn't
      actually hashed in our connection table
      and has a dangling list_head.
      
      					We drop the reference we held on the
      					connection and schedule the expire timer
      					for timing out the connection on this
      					CPU. Further packets won't be able to
      					find this connection in our connection
      					table.
      
      					ip_vs_conn_expire() gets called again,
      					we think it's already hashed, but the
      					list_head is dangling and while removing
      					the connection from our connection table
      					we write to the memory location where
      					this list_head points to.
      
      The result will probably be a kernel oops at some other point in time.
      
      This race condition is pretty subtle, but it can be triggered remotely.
      It needs the IRQ assignment change or another circumstance where packets
      coming from the same ip:port for the same service are being processed on
      different CPUs. And it involves hitting the exact time at which
      ip_vs_conn_expire() gets called. It can be avoided by making sure that
      all packets from one connection are always processed on the same CPU and
      can be made harder to exploit by changing the connection timeouts to
      some custom values. A simplified sketch of the missing per-connection locking
      follows this entry.
      Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
      Cc: stable@kernel.org
      Acked-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: Patrick McHardy <kaber@trash.net>
      aea9d711
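      A minimal sketch of the missing locking, with simplified names (the real
      ip_vs_conn_hash()/ip_vs_conn_unhash() also hold the connection table lock, which
      is omitted here): the point is that the "hashed" flag must be updated under the
      connection's own lock so it cannot race with set_tcp_state() writing the flags
      word on another CPU.

      #include <linux/spinlock.h>
      #include <linux/list.h>

      #define EXAMPLE_F_HASHED 0x0040        /* hypothetical flag bit */

      struct example_conn {
              struct list_head c_list;       /* hash chain */
              spinlock_t lock;               /* protects flags and other per-conn state */
              unsigned int flags;
      };

      static void example_conn_hash(struct example_conn *cp, struct list_head *bucket)
      {
              spin_lock(&cp->lock);          /* this per-connection locking was missing */
              if (!(cp->flags & EXAMPLE_F_HASHED)) {
                      list_add(&cp->c_list, bucket);
                      cp->flags |= EXAMPLE_F_HASHED;
              }
              spin_unlock(&cp->lock);
      }

      static void example_conn_unhash(struct example_conn *cp)
      {
              spin_lock(&cp->lock);
              if (cp->flags & EXAMPLE_F_HASHED) {
                      list_del(&cp->c_list);
                      cp->flags &= ~EXAMPLE_F_HASHED;
              }
              spin_unlock(&cp->lock);
      }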