1. 02 Jan 2013, 30 commits
  2. 30 Dec 2012, 6 commits
    • team: add ethtool support · 7f51c587
      Flavio Leitner committed
      This patch adds a few ethtool operations to the team driver.
      Signed-off-by: Flavio Leitner <fbl@redhat.com>
      Acked-by: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • veth: extend device features · 8093315a
      Eric Dumazet committed
      veth is lacking most modern facilities, like SG, checksums, and TSO.

      It makes sense to extend dev->features to get them; otherwise GRO
      aggregation is defeated by a forced segmentation.
      Reported-by: Andrew Vagin <avagin@parallels.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Michał Mirosław <mirq-linux@rere.qmqm.pl>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • veth: reduce stat overhead · 2681128f
      Eric Dumazet committed
      veth stats are a bit bloated. There is no need to account both
      transmit and receive stats, since they are perfectly symmetric.

      Also use a per-device atomic64_t for the dropped counter, as it
      should never be used in the fast path.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • team: implement carrier change · 4cafe373
      Flavio Leitner committed
      The user space teamd daemon may need to control the
      master's carrier state depending on the selected mode.
      Signed-off-by: Flavio Leitner <fbl@redhat.com>
      Acked-by: Jiri Pirko <jiri@resnulli.us>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: respect RFC2863 operational state · 576eb625
      Stephen Hemminger committed
      Bridge link detection should follow the operational state
      of the lower device rather than the carrier bit. This allows devices
      like tunnels, which are controlled by a userspace control plane, to
      work with bridge STP link management.
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Reviewed-by: Flavio Leitner <fbl@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: return -EINVAL if BPF_S_ANC* operation is not supported · aa1113d9
      Daniel Borkmann committed
      Currently, we return -EINVAL for malformed or wrong BPF filters.
      However, this is not done for BPF_S_ANC* operations, which makes it
      harder to detect whether they are actually supported by the BPF
      machine. Therefore, we should also return -EINVAL if K is within
      the SKF_AD_OFF universe and the ancillary operation did not match.

      Why exactly is this needed? If tools such as libpcap/tcpdump want to
      make use of new ancillary operations (like filtering VLANs in kernel
      space), there is currently no sane way to test whether a given
      BPF_S_ANC* op is present, since no error is returned. This patch
      makes life easier on that front and allows proper usage by user
      space applications.

      There was concern that this patch might break userland. Short answer:
      yes and no. Long answer: it will "break" only code that calls ...

        { BPF_LD | BPF_(W|H|B) | BPF_ABS, 0, 0, <K> },

      ... where <K> is in [0xfffff000, 0xffffffff] _and_ <K> is *not* an
      ancillary operation. And here comes the BUT: assuming some *old* code
      has such an instruction with <K> in [0xfffff000, 0xffffffff] and does
      not know about ancillary operations, it already gets unexpected and
      unwanted behavior today: since the introduction of ancillary
      operations, the BPF machine no longer returns 0 after a failed
      load_pointer() (as it did before), but instead loads something into
      the accumulator and continues with the next instruction. Thus, such
      user space code was already broken by introducing ancillary
      operations into the BPF machine per se. Code that does such a direct
      load, e.g. "load word at packet offset 0xffffffff into the
      accumulator" ("ld [0xffffffff]"), is quite broken, isn't it? The
      whole assumption behind ancillary operations is that no one
      intentionally issues "ld [0xffffffff]" and expects a word to be
      loaded from that packet offset. Hence, we can safely use this
      feature-testing patch to facilitate application development. From
      this patch onwards, we have *for sure* a check for whether current
      or future BPF_S_ANC* ops are supported by the kernel. Patch was
      tested on x86_64.

      (Thanks to Eric for the previous review.)

      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Reported-by: Ani Sinha <ani@aristanetworks.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 29 Dec 2012, 4 commits
    • skbuff: make __kmalloc_reserve static · 61c5e88a
      Stephen Hemminger committed
      Sparse detected a case where this local function should be static.
      It may even allow some compiler optimizations.
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: make proc_tcp_fastopen_key static · bb717d76
      Stephen Hemminger committed
      Detected by sparse.
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sctp: make sctp_addr_wq_timeout_handler static · bd2a13e2
      Stephen Hemminger committed
      Fix a sparse warning about a local function that should be static.
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: use per task frag allocator in skb_append_datato_frags · b2111724
      Eric Dumazet committed
      Use the new per task frag allocator in skb_append_datato_frags(),
      to reduce the number of frags and page allocator overhead.

      Tested:
       ifconfig lo mtu 16436
       perf record netperf -t UDP_STREAM ; perf report
      
      before :
       Throughput: 32928 Mbit/s
          51.79%  netperf  [kernel.kallsyms]  [k] copy_user_generic_string
           5.98%  netperf  [kernel.kallsyms]  [k] __alloc_pages_nodemask
           5.58%  netperf  [kernel.kallsyms]  [k] get_page_from_freelist
           5.01%  netperf  [kernel.kallsyms]  [k] __rmqueue
           3.74%  netperf  [kernel.kallsyms]  [k] skb_append_datato_frags
           1.87%  netperf  [kernel.kallsyms]  [k] prep_new_page
           1.42%  netperf  [kernel.kallsyms]  [k] next_zones_zonelist
           1.28%  netperf  [kernel.kallsyms]  [k] __inc_zone_state
           1.26%  netperf  [kernel.kallsyms]  [k] alloc_pages_current
           0.78%  netperf  [kernel.kallsyms]  [k] sock_alloc_send_pskb
           0.74%  netperf  [kernel.kallsyms]  [k] udp_sendmsg
           0.72%  netperf  [kernel.kallsyms]  [k] zone_watermark_ok
           0.68%  netperf  [kernel.kallsyms]  [k] __cpuset_node_allowed_softwall
           0.67%  netperf  [kernel.kallsyms]  [k] fib_table_lookup
           0.60%  netperf  [kernel.kallsyms]  [k] memcpy_fromiovecend
           0.55%  netperf  [kernel.kallsyms]  [k] __udp4_lib_lookup
      
       after:
        Throughput: 47185 Mbit/s
           61.74%  netperf  [kernel.kallsyms]  [k] copy_user_generic_string
            2.07%  netperf  [kernel.kallsyms]  [k] prep_new_page
            1.98%  netperf  [kernel.kallsyms]  [k] skb_append_datato_frags
            1.02%  netperf  [kernel.kallsyms]  [k] sock_alloc_send_pskb
            0.97%  netperf  [kernel.kallsyms]  [k] enqueue_task_fair
            0.97%  netperf  [kernel.kallsyms]  [k] udp_sendmsg
            0.91%  netperf  [kernel.kallsyms]  [k] __ip_route_output_key
            0.88%  netperf  [kernel.kallsyms]  [k] __netif_receive_skb
            0.87%  netperf  [kernel.kallsyms]  [k] fib_table_lookup
            0.85%  netperf  [kernel.kallsyms]  [k] resched_task
            0.78%  netperf  [kernel.kallsyms]  [k] __udp4_lib_lookup
            0.77%  netperf  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>