1. 20 Jan 2011, 1 commit
  2. 19 Jan 2011, 2 commits
  3. 14 Jan 2011, 1 commit
    • net: remove dev_txq_stats_fold() · 1ac9ad13
      Committed by Eric Dumazet
      After recent changes (percpu stats on vlan/tunnels...), we no longer need
      the per-struct-netdev_queue tx_bytes/tx_packets/tx_dropped counters.
      
      The only remaining users are ixgbe, sch_teql, gianfar & macvlan:
      
      1) ixgbe can be converted to use existing tx_ring counters.
      
      2) macvlan incremented txq->tx_dropped; it can use the
      dev->stats.tx_dropped counter instead.
      
      3) sch_teql: almost revert ab35cd4b (Use net_device internal stats).
          Now that we have ndo_get_stats64(), use it, even for "unsigned long"
      fields (no need to bring back a struct net_device_stats).
      
      4) gianfar adds a stats structure per tx queue to hold
      tx_bytes/tx_packets (see the sketch below).
      
      This also removes a lockdep warning (and a possible lockup) in the rndis
      gadget, which calls dev_get_stats() from hard-IRQ context.
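      
      As an illustration of conversion 4), here is a minimal sketch of
      per-TX-queue counters folded into ndo_get_stats64(). The names
      (struct foo_txq_stats, struct foo_priv, foo_get_stats64) are
      hypothetical, not the actual gianfar code; the ndo_get_stats64()
      signature is the one used by kernels of this era:
      
        #include <linux/netdevice.h>
        
        struct foo_txq_stats {
            unsigned long tx_packets;
            unsigned long tx_bytes;
        };
        
        /* Hypothetical driver private data: one stats struct per TX queue. */
        struct foo_priv {
            struct foo_txq_stats txq_stats[8];
        };
        
        static struct rtnl_link_stats64 *foo_get_stats64(struct net_device *dev,
                                                         struct rtnl_link_stats64 *stats)
        {
            struct foo_priv *priv = netdev_priv(dev);
            unsigned int i;
        
            /* Fold the per-queue counters into the device-wide stats. */
            for (i = 0; i < dev->real_num_tx_queues; i++) {
                stats->tx_packets += priv->txq_stats[i].tx_packets;
                stats->tx_bytes   += priv->txq_stats[i].tx_bytes;
            }
            /* The macvlan-style drop count now lives in dev->stats. */
            stats->tx_dropped = dev->stats.tx_dropped;
            return stats;
        }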
      
      Ref: http://www.spinics.net/lists/netdev/msg149202.html
      Reported-by: Neil Jones <neiljay@gmail.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      CC: Alexander Duyck <alexander.h.duyck@intel.com>
      CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      CC: Sandeep Gopalpet <sandeep.kumar@freescale.com>
      CC: Michal Nazarewicz <mina86@mina86.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 13 Jan 2011, 1 commit
  5. 11 Jan 2011, 2 commits
  6. 10 Jan 2011, 8 commits
  7. 07 Jan 2011, 1 commit
  8. 24 Dec 2010, 1 commit
    • Revert "ipv4: Allow configuring subnets as local addresses" · e0584649
      Committed by David S. Miller
      This reverts commit 4465b469.
      
      Conflicts:
      
      	net/ipv4/fib_frontend.c
      
      As reported by Ben Greear, this causes regressions:
      
      > Change 4465b469 caused rules
      > to stop matching the input device properly because
      > FLOWI_FLAG_MATCH_ANY_IIF is always set in ip_dev_find().
      >
      > This breaks rules such as:
      >
      > ip rule add pref 512 lookup local
      > ip rule del pref 0 lookup local
      > ip link set eth2 up
      > ip -4 addr add 172.16.0.102/24 broadcast 172.16.0.255 dev eth2
      > ip rule add to 172.16.0.102 iif eth2 lookup local pref 10
      > ip rule add iif eth2 lookup 10001 pref 20
      > ip route add 172.16.0.0/24 dev eth2 table 10001
      > ip route add unreachable 0/0 table 10001
      >
      > If you had a second interface 'eth0' that was on a different
      > subnet, pinging a system on that interface would fail:
      >
      >   [root@ct503-60 ~]# ping 192.168.100.1
      >   connect: Invalid argument
      Reported-by: Ben Greear <greearb@candelatech.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 22 Dec 2010, 2 commits
    • filter: optimize accesses to ancillary data · 12b16dad
      Committed by Eric Dumazet
      We can translate pseudo-load instructions at filter-check time into
      dedicated instructions, to speed up filtering and avoid one switch().
      libpcap currently uses SKF_AD_PROTOCOL, but custom filters probably use
      other ancillary accesses.
      
      Note: I assume that ancillary data is always accessed with
      BPF_LD|BPF_?|BPF_ABS instructions, not with BPF_LD|BPF_?|BPF_IND ones
      (offset given by the K constant, not by K + the X register).
      
      On x86_64, this saves a few bytes of text:
      
      # size net/core/filter.o.*
         text	   data	    bss	    dec	    hex	filename
         4864	      0	      0	   4864	   1300	net/core/filter.o.new
         4944	      0	      0	   4944	   1350	net/core/filter.o.old
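      
      A simplified sketch of the idea (not the exact patch): at
      sk_chk_filter() time, a pseudo load whose K constant points at
      ancillary data is rewritten to a dedicated internal code, so
      sk_run_filter() no longer needs an extra switch(). The BPF_S_ANC_*
      names follow the internal codes this commit introduces; validation of
      the surrounding program is omitted:
      
        #include <linux/filter.h>
        
        /* Sketch: rewrite one ancillary pseudo-load at filter-check time. */
        static void translate_ancillary(struct sock_filter *ftest)
        {
            if (ftest->code == (BPF_LD | BPF_W | BPF_ABS)) {
                switch (ftest->k) {
                case SKF_AD_OFF + SKF_AD_PROTOCOL:
                    ftest->code = BPF_S_ANC_PROTOCOL;
                    break;
                case SKF_AD_OFF + SKF_AD_IFINDEX:
                    ftest->code = BPF_S_ANC_IFINDEX;
                    break;
                /* ...one case per ancillary offset... */
                }
            }
        }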
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: timestamp cloned packet in dev_queue_xmit_nit · 70978182
      Committed by Eric Dumazet
      On Friday, 17 December 2010 at 10:26 +0100, Eric Dumazet wrote:
      
      >
      > I think we can add this after Changli's latest patch:
      >
      > He does one skb_clone() before calling the sniffers.
      > We could set the timestamp on this clone, instead of on the original skb.
      >
      > Problem solved.
      >
      
      [PATCH net-next-2.6] net: timestamp cloned packet in dev_queue_xmit_nit
      
      Now that we make one clone of the skb whenever at least one sniffer might
      take the packet, we can also do the skb timestamping on the clone and
      leave the original packet unchanged.
      
      This is a generalization of commit 8caf1539 (net: sch_netem: Fix an
      inconsistency in ingress netem timestamps.)
      
      This way, we can get a good idea of when packets are delivered to our
      stack (tcpdump -i ifb0), while a tcpdump on the original device gives
      timestamps right before ingress.
      
      This also speeds up our stack by avoiding timestamps when they are not needed.
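      
      The core of the change, roughly (a minimal sketch of the new
      dev_queue_xmit_nit() behaviour in net/core/dev.c, with the tap
      delivery loop and rarely-used paths trimmed; net_timestamp_set() is
      the timestamping helper local to dev.c):
      
        static void dev_queue_xmit_nit_sketch(struct sk_buff *skb,
                                              struct net_device *dev)
        {
            struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);
        
            if (!skb2)
                return;
            net_timestamp_set(skb2);    /* stamp the clone, not the original */
            /* ...deliver skb2 to each sniffer registered in ptype_all... */
        }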
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Changli Gao <xiaosuo@gmail.com>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Jarek Poplawski <jarkao2@gmail.com>
      Acked-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 21 Dec 2010, 1 commit
  11. 20 Dec 2010, 2 commits
  12. 17 Dec 2010, 5 commits
  13. 15 Dec 2010, 1 commit
    • workqueue: convert cancel_rearming_delayed_work[queue]() users to cancel_delayed_work_sync() · afe2c511
      Committed by Tejun Heo
      cancel_rearming_delayed_work[queue]() was superseded by
      cancel_delayed_work_sync() quite some time ago.  Convert all the
      in-kernel users.  The conversions are completely equivalent and
      trivial.
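      
      A minimal before/after illustration (hypothetical driver code, not
      taken from the patch itself):
      
        #include <linux/workqueue.h>
        
        static struct delayed_work foo_work;    /* hypothetical rearming work */
        
        static void foo_teardown(void)
        {
            /* before: cancel_rearming_delayed_work(&foo_work); */
            cancel_delayed_work_sync(&foo_work);
        }
      
      cancel_delayed_work_sync() also waits for a callback that is already
      running, which is what makes it safe for self-rearming works.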
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "David S. Miller" <davem@davemloft.net>
      Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
      Acked-by: Evgeniy Polyakov <zbr@ioremap.net>
      Cc: Jeff Garzik <jgarzik@pobox.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Mauro Carvalho Chehab <mchehab@infradead.org>
      Cc: netdev@vger.kernel.org
      Cc: Anton Vorontsov <cbou@mail.ru>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: "J. Bruce Fields" <bfields@fieldses.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Alex Elder <aelder@sgi.com>
      Cc: xfs-masters@oss.sgi.com
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: netfilter-devel@vger.kernel.org
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: linux-nfs@vger.kernel.org
  14. 11 Dec 2010, 3 commits
  15. 10 Dec 2010, 2 commits
    • filter: use size of fetched data in __load_pointer() · 4bc65dd8
      Committed by Eric Dumazet
      __load_pointer() checks that the data we fetch from the skb lies within
      the head portion, but assumes we fetch one byte, when we may fetch up
      to four.
      
      This won't crash, because we have extra bytes (struct skb_shared_info)
      after the head, but it can read uninitialized bytes.
      
      Fix this by using the size of the fetched data (1, 2, or 4 bytes) in
      the test.
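      
      The fixed check, roughly (a simplified sketch of __load_pointer() in
      net/core/filter.c, where size is the width of the load):
      
        static void *__load_pointer(const struct sk_buff *skb, int k,
                                    unsigned int size)
        {
            u8 *ptr = NULL;
        
            if (k >= SKF_NET_OFF)
                ptr = skb_network_header(skb) + k - SKF_NET_OFF;
            else if (k >= SKF_LL_OFF)
                ptr = skb_mac_header(skb) + k - SKF_LL_OFF;
        
            /* was a one-byte check; now account for the full access size */
            if (ptr >= skb->head && ptr + size <= skb_tail_pointer(skb))
                return ptr;
            return NULL;
        }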
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: optimize INET input path further · 68835aba
      Committed by Eric Dumazet
      Follow-up to commit b178bb3d (net: reorder struct sock fields).
      
      Optimize the INET input path a bit further, by:
      
      1) moving sk_refcnt close to sk_lock.
      
      This reduces the number of dirtied cache lines by one on 64-bit arches
      (with a 64-byte cache line size).
      
      2) moving inet_daddr & inet_rcv_saddr to the beginning of struct sock
      
      (same cache line as hash / family / bound_dev_if / nulls_node)
      
      This reduces the number of cache lines accessed in lookups by one, and
      doesn't increase the size of inet and timewait socks.
      inet and tw sockets now share the same placeholder for these fields.
      
      Before patch :
      
      offsetof(struct sock, sk_refcnt) = 0x10
      offsetof(struct sock, sk_lock) = 0x40
      offsetof(struct sock, sk_receive_queue) = 0x60
      offsetof(struct inet_sock, inet_daddr) = 0x270
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x274
      
      After patch :
      
      offsetof(struct sock, sk_refcnt) = 0x44
      offsetof(struct sock, sk_lock) = 0x48
      offsetof(struct sock, sk_receive_queue) = 0x68
      offsetof(struct inet_sock, inet_daddr) = 0x0
      offsetof(struct inet_sock, inet_rcv_saddr) = 0x4
      
      compute_score() (udp or tcp) now uses a single cache line per ignored
      item, instead of two.
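      
      One way to pin such layout assumptions down at build time (a
      hypothetical check, not part of the patch; SAME_CACHE_LINE is an
      illustrative macro):
      
        #include <net/sock.h>
        #include <net/inet_sock.h>
        
        /* Fail the build if the hot fields drift onto different 64-byte
         * cache lines, or if inet_daddr leaves offset 0. */
        #define SAME_CACHE_LINE(a, b)  (((a) >> 6) == ((b) >> 6))
        
        static inline void sock_layout_check(void)
        {
            BUILD_BUG_ON(!SAME_CACHE_LINE(offsetof(struct sock, sk_refcnt),
                                          offsetof(struct sock, sk_lock)));
            BUILD_BUG_ON(offsetof(struct inet_sock, inet_daddr) != 0);
        }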
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 09 Dec 2010, 4 commits
  17. 08 Dec 2010, 1 commit
  18. 07 Dec 2010, 2 commits