1. 02 October 2014 (2 commits)
  2. 30 September 2014 (4 commits)
    • net: sched: enable per cpu qstats · b0ab6f92
      Authored by John Fastabend
      After the previous patches simplifying qstats, the qstats can be
      made per cpu, using a packed union in the Qdisc struct.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
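      A minimal standalone C sketch of that union, with simplified
      stand-in types (struct qdisc_sketch, qstats_read() and the plain
      NR_CPUS array in place of real per-cpu storage are all hypothetical):

        #include <stdio.h>

        #define NR_CPUS 4

        /* Simplified stand-in; the kernel's real definition differs. */
        struct gnet_stats_queue {
                unsigned int backlog;
                unsigned int drops;
                unsigned int overlimits;
        };

        /* The packed-union idea: a qdisc either keeps one inline copy of
         * its queue stats (protected by the qdisc lock) or, when it runs
         * lockless, a pointer to per-cpu copies, without growing the
         * struct. */
        struct qdisc_sketch {
                int cpu_stats;                  /* stand-in for TCQ_F_CPUSTATS */
                union {
                        struct gnet_stats_queue qstats;
                        struct gnet_stats_queue *cpu_qstats;  /* kernel: __percpu */
                };
        };

        /* Fold the per-cpu copies into one aggregate before dumping. */
        static struct gnet_stats_queue qstats_read(const struct qdisc_sketch *q)
        {
                struct gnet_stats_queue sum = {0};

                if (!q->cpu_stats)
                        return q->qstats;
                for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                        sum.backlog    += q->cpu_qstats[cpu].backlog;
                        sum.drops      += q->cpu_qstats[cpu].drops;
                        sum.overlimits += q->cpu_qstats[cpu].overlimits;
                }
                return sum;
        }

        int main(void)
        {
                struct gnet_stats_queue percpu[NR_CPUS] = { {0, 1, 0}, {0, 2, 0} };
                struct qdisc_sketch q = { .cpu_stats = 1, .cpu_qstats = percpu };

                printf("drops=%u\n", qstats_read(&q).drops);    /* drops=3 */
                return 0;
        }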
    • net: sched: restrict use of qstats qlen · 64015853
      Authored by John Fastabend
      This removes the use of the qstats->qlen variable from the
      classifiers and makes it an explicit argument to
      gnet_stats_copy_queue().

      The qlen represents the qdisc queue length and is packed into
      the qstats at the last moment before passing to user space. By
      handling it explicitly we avoid, in the per-cpu stats case, having
      to figure out which per_cpu variable to put it in.
      
      It would probably be best to remove it from qstats completely,
      but qstats is a user-space ABI and can't be broken. A future
      patch could introduce an internal-only qstats structure, which
      would avoid having to allocate an additional u32 variable in the
      Qdisc struct. This would keep the qstats struct at 128 bits
      instead of 128+32.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
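      A standalone C sketch of the calling convention described above
      (copy_queue_stats() and its types are hypothetical simplifications
      of gnet_stats_copy_queue()):

        #include <stdio.h>

        /* Simplified stand-in for the dumped queue stats. */
        struct gnet_stats_queue {
                unsigned int qlen;
                unsigned int drops;
        };

        /* qlen is no longer read out of the stats structure but passed
         * explicitly by the caller and folded into the copy at the last
         * moment, just before it goes to user space; per-cpu callers
         * therefore never need to pick a per_cpu slot to store it in. */
        static void copy_queue_stats(struct gnet_stats_queue *dst,
                                     const struct gnet_stats_queue *src,
                                     unsigned int qlen)
        {
                *dst = *src;
                dst->qlen = qlen;       /* packed in here, not stored per cpu */
        }

        int main(void)
        {
                struct gnet_stats_queue internal = { .qlen = 0, .drops = 7 };
                struct gnet_stats_queue out;

                copy_queue_stats(&out, &internal, 42);
                printf("qlen=%u drops=%u\n", out.qlen, out.drops);
                return 0;
        }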
    • net: sched: make bstats per cpu and estimator RCU safe · 22e0f8b9
      Authored by John Fastabend
      In order to run qdiscs without locking, statistics and estimators
      need to be handled correctly.
      
      To resolve this for bstats, make the statistics per cpu. And
      because this is only needed for qdiscs that run without locks,
      which will not be the case for most qdiscs in the near future,
      only create per-cpu stats when a qdisc sets the TCQ_F_CPUSTATS
      flag.
      
      Next, because the estimators use the bstats to calculate packets
      per second and bytes per second, the estimator code paths are
      updated to use the per-cpu statistics.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
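      A standalone C sketch of the two halves, with hypothetical names
      (bstats_cpu_update(), bstats_snapshot()) and a plain array in place
      of real per-cpu storage:

        #include <stdint.h>
        #include <stdio.h>

        #define NR_CPUS 4

        struct gnet_stats_basic {       /* simplified */
                uint64_t bytes;
                uint32_t packets;
        };

        /* On the xmit path each cpu bumps only its own counters, so no
         * qdisc lock is needed for accounting (kernel code would use
         * this_cpu_ptr() here). */
        static void bstats_cpu_update(struct gnet_stats_basic *per_cpu,
                                      int cpu, unsigned int pkt_len)
        {
                per_cpu[cpu].bytes += pkt_len;
                per_cpu[cpu].packets++;
        }

        /* The estimator has to sum all cpus to get the totals it derives
         * packets/sec and bytes/sec from. */
        static struct gnet_stats_basic
        bstats_snapshot(const struct gnet_stats_basic *pc)
        {
                struct gnet_stats_basic sum = {0, 0};

                for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                        sum.bytes   += pc[cpu].bytes;
                        sum.packets += pc[cpu].packets;
                }
                return sum;
        }

        int main(void)
        {
                struct gnet_stats_basic pc[NR_CPUS] = {{0}};

                bstats_cpu_update(pc, 0, 1500);
                bstats_cpu_update(pc, 2, 60);
                struct gnet_stats_basic s = bstats_snapshot(pc);
                printf("bytes=%llu packets=%u\n",
                       (unsigned long long)s.bytes, (unsigned)s.packets);
                return 0;
        }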
    • net: reorganize sk_buff for faster __copy_skb_header() · b1937227
      Authored by Eric Dumazet
      With the proliferation of bit fields in sk_buff, __copy_skb_header()
      became quite expensive, showing up as the most expensive function
      in a GSO workload.

      __copy_skb_header() performance is also critical for non-GSO TCP
      operations, as it is used from skb_clone().
      
      This patch carefully moves all the fields that were not copied
      into a separate zone: cloned, nohdr, fclone, peeked, head_frag,
      xmit_more.

      Then I moved all the remaining copied fields into a section
      delimited by headers_start[0]/headers_end[0] so that we can use a
      single memcpy() call, inlined by the compiler using long word
      loads/stores.

      I also tried to keep all copies in the natural order of sk_buff,
      to help hardware prefetching.
      
      I made sure sk_buff size did not change.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
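      A standalone GNU C sketch of the delimiter trick (struct skb_sketch
      and copy_header_zone() are hypothetical stand-ins for sk_buff and
      __copy_skb_header(); zero-length arrays mark the copied zone):

        #include <stddef.h>
        #include <stdio.h>
        #include <string.h>

        /* Fields that must NOT be copied on clone sit before
         * headers_start; everything between the two zero-length marker
         * arrays is duplicated with a single memcpy(). */
        struct skb_sketch {
                int cloned;                     /* private to each clone */
                char headers_start[0];          /* zero-size marker (GNU C) */
                unsigned short protocol;
                unsigned int hash;
                unsigned int priority;
                char headers_end[0];
        };

        static void copy_header_zone(struct skb_sketch *new_skb,
                                     const struct skb_sketch *old_skb)
        {
                memcpy(new_skb->headers_start, old_skb->headers_start,
                       offsetof(struct skb_sketch, headers_end) -
                       offsetof(struct skb_sketch, headers_start));
        }

        int main(void)
        {
                struct skb_sketch old_skb = { .cloned = 0, .protocol = 0x0800,
                                              .hash = 123, .priority = 6 };
                struct skb_sketch new_skb = { .cloned = 1 };

                copy_header_zone(&new_skb, &old_skb);
                /* cloned stays 1: it lives outside the copied zone */
                printf("cloned=%d hash=%u\n", new_skb.cloned, new_skb.hash);
                return 0;
        }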
  3. 27 September 2014 (4 commits)
  4. 26 September 2014 (1 commit)
  5. 23 September 2014 (1 commit)
  6. 20 September 2014 (1 commit)
  7. 16 September 2014 (2 commits)
  8. 14 September 2014 (5 commits)
  9. 13 September 2014 (2 commits)
  10. 11 September 2014 (1 commit)
  11. 10 September 2014 (2 commits)
  12. 09 September 2014 (1 commit)
  13. 06 September 2014 (8 commits)
    • net: Add function for parsing the header length out of linear ethernet frames · 56193d1b
      Authored by Alexander Duyck
      This patch updates some of the flow_dissector API so that it can be
      used to parse the length of ethernet buffers stored in fragments.
      Most of the changes needed were to __skb_get_poff, as it had to be
      updated to accept a linear buffer instead of an skb.

      I have split __skb_get_poff into two functions. The first,
      skb_get_poff, retains the functionality of the original
      __skb_get_poff. The other, __skb_get_poff, now relates to
      skb_get_poff much as __skb_flow_dissect relates to
      skb_flow_dissect: it provides the same functionality but works
      with just a data buffer and hlen instead of needing an skb.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
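      A standalone C sketch of the split described above, with
      hypothetical names (__get_poff()/get_poff()) and a trivial stand-in
      for the actual dissection logic:

        #include <stdio.h>
        #include <string.h>

        /* Minimal stand-in for an skb with linear header data. */
        struct skb_sketch {
                const unsigned char *data;
                unsigned int hlen;      /* bytes available linearly */
        };

        /* Worker: needs only a raw buffer plus its length, so callers
         * holding an ethernet frame in a fragment can use it without
         * building an skb. (Toy logic: "payload offset" here is just a
         * fixed 14-byte ethernet header; the real __skb_get_poff
         * dissects the flow.) */
        static unsigned int __get_poff(const void *data, unsigned int hlen)
        {
                (void)data;     /* a real dissector would parse headers here */
                return hlen >= 14 ? 14 : 0;
        }

        /* skb-facing wrapper, mirroring how skb_get_poff keeps the
         * original interface and delegates to the __ variant. */
        static unsigned int get_poff(const struct skb_sketch *skb)
        {
                return __get_poff(skb->data, skb->hlen);
        }

        int main(void)
        {
                unsigned char frame[64];

                memset(frame, 0, sizeof(frame));
                struct skb_sketch skb = { .data = frame, .hlen = sizeof(frame) };
                printf("poff=%u\n", get_poff(&skb));
                return 0;
        }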
    • net: merge cases where sock_efree and sock_edemux are the same function · 82eabd9e
      Authored by Alexander Duyck
      Since sock_efree and sock_edemux are essentially the same code for
      non-TCP sockets, and for the case where CONFIG_INET is not defined,
      we can combine the code or replace the call to sock_edemux in
      several spots. As a result we can avoid a bit of unnecessary code
      and code duplication.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net-timestamp: Make the clone operation stand-alone from phy timestamping · 62bccb8c
      Authored by Alexander Duyck
      The phy timestamping takes a different path than regular
      timestamping does, in that it creates a clone first so that the
      packets needing to be timestamped can be placed in a queue, or the
      context block can be used.

      In order to support these use cases I am pulling the core of the
      code out so it can be used in other drivers beyond just phy devices.

      In addition I have added a destructor named sock_efree, which is
      meant to provide a simple way of dropping the reference to skb
      exceptions that aren't part of either the receive or send windows
      for the socket, and I have removed some duplication in spots where
      this destructor can be used in place of sock_edemux.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
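      A toy standalone C model of what such a destructor boils down to
      (all names here, e.g. sock_efree_sketch(), are hypothetical
      stand-ins):

        #include <stdio.h>

        /* Toy refcounted socket and skb with a destructor hook. */
        struct sock_sketch { int refcnt; };

        struct skb_sketch {
                struct sock_sketch *sk;
                void (*destructor)(struct skb_sketch *skb);
        };

        static void sock_put(struct sock_sketch *sk)
        {
                if (--sk->refcnt == 0)
                        printf("socket freed\n");
        }

        /* The clone holds a reference on the socket so it stays alive
         * while the skb sits in the timestamping queue; freeing the skb
         * just drops that reference, with none of the protocol-specific
         * early-demux work of sock_edemux. */
        static void sock_efree_sketch(struct skb_sketch *skb)
        {
                sock_put(skb->sk);
        }

        static void kfree_skb_sketch(struct skb_sketch *skb)
        {
                if (skb->destructor)
                        skb->destructor(skb);
        }

        int main(void)
        {
                struct sock_sketch sk = { .refcnt = 1 };
                struct skb_sketch clone = { .sk = &sk,
                                            .destructor = sock_efree_sketch };

                kfree_skb_sketch(&clone);       /* prints "socket freed" */
                return 0;
        }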
    • net-timestamp: Merge shared code between phy and regular timestamping · 37846ef0
      Authored by Alexander Duyck
      This change merges the shared bits that exist between skb_tx_tstamp
      and skb_complete_tx_timestamp. By doing this we can keep the two
      from diverging; there were already changes pushed into
      skb_tx_tstamp that hadn't made it into the other function.

      In addition this resolves the issue that skb_complete_tx_timestamp
      was included in linux/skbuff.h even though it was only compiled in
      if phy timestamping was enabled.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: treewide: Fix typo found in DocBook/networking.xml · e793c0f7
      Authored by Masanari Iida
      This patch fixes spelling typos found in DocBook/networking.xml.
      Because networking.xml is generated from comments in the source,
      I had to fix the typos in those comments.
      Signed-off-by: Masanari Iida <standby24x7@gmail.com>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ethtool: Add generic options for tunables · f0db9b07
      Authored by Govindarajulu Varadarajan
      This patch adds new ethtool commands, ETHTOOL_GTUNABLE &
      ETHTOOL_STUNABLE, for getting and setting tunable values in the
      driver.

      Add get_tunable and set_tunable to ethtool_ops. A driver implements
      these functions to get and set a tunable value.
      Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
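      A standalone C sketch of the ops-table shape this describes (all
      types and names here are hypothetical simplifications of the real
      ethtool_ops / ethtool_tunable interface):

        #include <stdio.h>
        #include <string.h>

        enum tunable_id { TUNABLE_RX_COPYBREAK };

        struct tunable_req {
                enum tunable_id id;
                unsigned int len;       /* size of the value buffer */
        };

        struct dev_sketch {
                unsigned int rx_copybreak;
        };

        /* Two driver callbacks, one to read and one to write a tunable. */
        struct ethtool_ops_sketch {
                int (*get_tunable)(struct dev_sketch *dev,
                                   const struct tunable_req *t, void *data);
                int (*set_tunable)(struct dev_sketch *dev,
                                   const struct tunable_req *t,
                                   const void *data);
        };

        static int my_get_tunable(struct dev_sketch *dev,
                                  const struct tunable_req *t, void *data)
        {
                if (t->id != TUNABLE_RX_COPYBREAK ||
                    t->len != sizeof(unsigned int))
                        return -1;      /* kernel code: -EOPNOTSUPP/-EINVAL */
                memcpy(data, &dev->rx_copybreak, t->len);
                return 0;
        }

        static int my_set_tunable(struct dev_sketch *dev,
                                  const struct tunable_req *t, const void *data)
        {
                if (t->id != TUNABLE_RX_COPYBREAK ||
                    t->len != sizeof(unsigned int))
                        return -1;
                memcpy(&dev->rx_copybreak, data, t->len);
                return 0;
        }

        int main(void)
        {
                struct dev_sketch dev = { .rx_copybreak = 256 };
                struct ethtool_ops_sketch ops = { my_get_tunable, my_set_tunable };
                struct tunable_req t = { TUNABLE_RX_COPYBREAK,
                                         sizeof(unsigned int) };
                unsigned int val = 128;

                ops.set_tunable(&dev, &t, &val);
                ops.get_tunable(&dev, &t, &val);
                printf("rx_copybreak=%u\n", val);
                return 0;
        }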
    • dev_ioctl: remove dev_load() CAP_SYS_MODULE message · e020836d
      Authored by Daniel Borkmann
      Marcel reported seeing the following message when autoloading is
      triggered while adding an nlmon device:
      
        Loading kernel module for a network device with
        CAP_SYS_MODULE (deprecated). Use CAP_NET_ADMIN and alias
        netdev-nlmon instead.
      
      This false positive happens despite having the correct capabilities
      set, e.g. when issuing `ip link del dev nlmon` more than once on a
      valid device with the name nlmon, but Marcel has also seen it at
      creation time when no nlmon module is previously compiled in or
      loaded as a module and the device name equals a link type name
      (e.g. nlmon, vxlan, team).
      
      Stephen says:
      
        The netdev module alias is a holdover from the past. For normal
        devices, people used to create an alias eth0 and point it to the
        type of network device used; that was back in the bad old ISA
        days before real discovery.

        Also, the tunnels create a module alias for the control device,
        and ip used to use this to autoload the tunnel device.

        The message is bogus and should just be removed; I also see it
        in a couple of other cases where tap devices are renamed for
        other uses.
      
      As mentioned in 8909c9ad ("net: don't allow CAP_NET_ADMIN
      to load non-netdev kernel modules"), we nevertheless might still
      want to leave the old autoloading behaviour in place, as removing
      it could break old scripts. So for now, let's just remove the log
      message as Stephen suggests.
      
      Reference: http://thread.gmane.org/gmane.linux.kernel/1105168
      Reported-by: Marcel Holtmann <marcel@holtmann.org>
      Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Vasiliy Kulikov <segoon@openwall.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
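      A standalone C sketch, from memory, of the autoload order this
      discussion is about (capable() and request_module() here are stubs;
      the real logic lives in dev_load()):

        #include <stdio.h>

        enum cap { CAP_NET_ADMIN_STUB, CAP_SYS_MODULE_STUB };

        static int capable(enum cap c)
        {
                return c == CAP_NET_ADMIN_STUB; /* pretend: net admin only */
        }

        static int request_module(const char *prefix, const char *name)
        {
                printf("request_module: %s%s\n", prefix, name);
                return 0;                       /* 0 = load attempted */
        }

        /* Try the netdev-qualified alias under CAP_NET_ADMIN first, then
         * fall back to the bare name under CAP_SYS_MODULE. The patch
         * removes only the deprecation warning that used to be printed
         * on the fallback path, not the fallback itself. */
        static void dev_load_sketch(const char *name)
        {
                int no_module = 1;

                if (no_module && capable(CAP_NET_ADMIN_STUB))
                        no_module = request_module("netdev-", name);
                if (no_module && capable(CAP_SYS_MODULE_STUB))
                        request_module("", name);
        }

        int main(void)
        {
                dev_load_sketch("nlmon");
                return 0;
        }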
    • net: bpf: make eBPF interpreter images read-only · 60a3b225
      Authored by Daniel Borkmann
      With eBPF being extended further and its exposure to user space on
      the way, hardening the memory range the interpreter uses to steer
      its command flow seems appropriate. This patch moves the
      to-be-interpreted bytecode to read-only pages.
      
      In case we execute a corrupted BPF interpreter image for some
      reason, e.g. caused by an attacker who got past a verifier stage,
      it would provide not only arbitrary read/write memory access but
      arbitrary function calls as well. After setting up the BPF
      interpreter image, its contents do not change until destruction
      time, thus we can set the image up on pages made immutable in
      order to mitigate modifications to that code. The idea is derived
      from commit 314beb9b ("x86: bpf_jit_comp: secure bpf jit
      against spraying attacks").
      
      This is possible because bpf_prog is not part of sk_filter anymore.
      After setup, bpf_prog cannot be altered during its lifetime. This
      prevents any modifications to the entire bpf_prog structure
      (incl. the function/JIT image pointer).
      
      Every eBPF program (including migrated classic BPF) has to call
      bpf_prog_select_runtime() to select either the interpreter or a
      JIT image as a last setup step, and all of them are freed via
      bpf_prog_free(), including non-JIT. Therefore, we can easily
      integrate this into the eBPF lifetime, and since we directly
      allocate a bpf_prog, we incur no performance penalty.
      
      Tested with seccomp and the test_bpf test suite in JIT/non-JIT
      mode, and with manual inspection of kernel_page_tables. Brad
      Spengler proposed the same idea via Twitter during development of
      this patch.
      
      Joint work with Hannes Frederic Sowa.
      Suggested-by: Brad Spengler <spender@grsecurity.net>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
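      A user-space analogy of the hardening step, using mmap()/mprotect()
      where the kernel patch uses its own page-protection primitives
      (set_memory_ro() per the referenced JIT-spraying commit):

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
                long page = sysconf(_SC_PAGESIZE);
                unsigned char *image = mmap(NULL, page,
                                            PROT_READ | PROT_WRITE,
                                            MAP_PRIVATE | MAP_ANONYMOUS,
                                            -1, 0);
                if (image == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }

                /* Build the "interpreter image" while still writable. */
                memset(image, 0x95, page);      /* pretend: eBPF bytecode */

                /* Mirrors "bpf_prog_select_runtime() is the last setup
                 * step": nothing may write the image after this point. */
                if (mprotect(image, page, PROT_READ)) {
                        perror("mprotect");
                        return 1;
                }
                /* image[0] = 0; */     /* would now die with SIGSEGV */

                printf("image sealed read-only at %p\n", (void *)image);
                return 0;
        }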
  14. 04 September 2014 (1 commit)
    • qdisc: validate frames going through the direct_xmit path · 1f59533f
      Authored by Jesper Dangaard Brouer
      In commit 50cbe9ab ("net: Validate xmit SKBs right when we
      pull them out of the qdisc") the validation code was moved out of
      dev_hard_start_xmit and into dequeue_skb.
      
      However, this overlooked the fact that we do not always enqueue
      the skb onto a qdisc. The first situation is when the qdisc has
      the TCQ_F_CAN_BYPASS flag set and is empty. The second situation
      is when there is no qdisc on the device, which is a common case
      for software devices.
      
      Originally spotted, and the initial patch written, by Alexander
      Duyck. As a result of the bug, Alex was seeing issues trying to
      connect to a vhost_net interface after commit 50cbe9ab was applied.
      
      Added a call to validate_xmit_skb() in __dev_xmit_skb(), in the
      code path for qdiscs with the TCQ_F_CAN_BYPASS flag, and in
      __dev_queue_xmit() when there is no qdisc.
      
      Also handle the error situation where dev_hard_start_xmit() can
      return an skb list yet does not return dev_xmit_complete(rc) and
      falls through to the kfree_skb(); in that situation it should call
      kfree_skb_list() instead.
      
      Fixes: 50cbe9ab ("net: Validate xmit SKBs right when we pull them out of the qdisc")
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
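      A toy standalone C sketch of the two fixes (all names hypothetical;
      validate_xmit() and kfree_skb_list_sketch() stand in for
      validate_xmit_skb() and kfree_skb_list()):

        #include <stdio.h>

        struct skb_sketch {
                struct skb_sketch *next;        /* segments may form a list */
                int valid;
        };

        /* Both bypass paths must call this themselves now, because their
         * skbs never pass through dequeue_skb(). */
        static struct skb_sketch *validate_xmit(struct skb_sketch *skb)
        {
                return skb && skb->valid ? skb : NULL;
        }

        /* On a failed hard xmit the skb may have been segmented into a
         * list, so freeing just the head would leak the rest. */
        static void kfree_skb_list_sketch(struct skb_sketch *skb)
        {
                while (skb) {
                        struct skb_sketch *next = skb->next;
                        printf("freeing segment\n");
                        skb = next;
                }
        }

        int main(void)
        {
                struct skb_sketch seg2 = { NULL, 1 };
                struct skb_sketch skb  = { &seg2, 1 };

                /* Bypass path: validate explicitly. */
                if (!validate_xmit(&skb))
                        return 0;

                /* Pretend dev_hard_start_xmit() failed and handed back
                 * the whole (possibly segmented) list: free it all. */
                kfree_skb_list_sketch(&skb);
                return 0;
        }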
  15. 03 September 2014 (4 commits)
  16. 02 September 2014 (1 commit)
    • sock: deduplicate errqueue dequeue · 364a9e93
      Authored by Willem de Bruijn
      sk->sk_error_queue is dequeued in four locations. All share the
      exact same logic. Deduplicate.
      
      Also collapse the two critical sections for dequeue (at the top of
      the recv handler) and signal (at the bottom).
      
      This moves signal generation for the next packet forward, which should
      be harmless.
      
      It also changes the behavior if the recv handler exits early with an
      error. Previously, a signal for follow-up packets on the errqueue
      would then not be scheduled. The new behavior, to always signal, is
      arguably a bug fix.
      
      For rxrpc, the change causes the same function to be called repeatedly
      for each queued packet (because the recv handler == sk_error_report).
      It is likely that all packets will fail for the same reason (e.g.,
      memory exhaustion).
      
      This code runs without sk_lock held, so it is not safe to trust
      that sk->sk_err is immutable in between releasing q->lock and the
      subsequent test. Introduce a local int err just to avoid this
      potential race.
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
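      A toy standalone C sketch of the shared dequeue helper and the err
      snapshot (names hypothetical; a pthread mutex stands in for the
      error-queue lock):

        #include <pthread.h>
        #include <stdio.h>

        struct skb_sketch { struct skb_sketch *next; };

        struct errqueue {
                pthread_mutex_t lock;
                struct skb_sketch *head;
                int sk_err;             /* stand-in for sk->sk_err */
        };

        /* One shared helper replacing the four open-coded copies: pop
         * under the lock and snapshot the error into a local int there,
         * so the caller never tests a field that can change after the
         * lock is dropped. */
        static struct skb_sketch *errqueue_dequeue(struct errqueue *q, int *err)
        {
                int more;

                pthread_mutex_lock(&q->lock);
                struct skb_sketch *skb = q->head;
                if (skb)
                        q->head = skb->next;
                more = q->head != NULL;
                *err = q->sk_err;       /* snapshot while still serialized */
                pthread_mutex_unlock(&q->lock);

                /* "Always signal": follow-up packets get announced even
                 * if the recv handler later exits early with an error. */
                if (skb && more)
                        printf("signal: follow-up packet queued\n");
                return skb;
        }

        int main(void)
        {
                struct skb_sketch b = { NULL }, a = { &b };
                struct errqueue q = { PTHREAD_MUTEX_INITIALIZER, &a, 0 };
                int err;

                while (errqueue_dequeue(&q, &err))
                        ;
                return 0;
        }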