1. 10 Nov 2016, 18 commits
  2. 08 Nov 2016, 7 commits
    • sock: do not set sk_err in sock_dequeue_err_skb · f5f99309
      Authored by Soheil Hassas Yeganeh
      Do not set sk_err when dequeuing errors from the error queue.
      Doing so results in:
      a) Bugs: By overwriting existing sk_err values, it possibly
         hides legitimate errors. It is also incorrect when local
         errors are queued with ip_local_error. That happens in the
         context of a system call, which already returns the error
         code.
      b) Inconsistent behavior: When there are pending errors on
         the error queue, sk_err is sometimes 0 (e.g., for
         the first timestamp on the error queue) and sometimes
         set to an error code (after dequeuing the first
         timestamp).
      c) Suboptimality: Setting sk_err to ENOMSG on simple
         TX timestamps can abort parallel reads and writes.
      
      Removing this line doesn't break userspace. This is because
      userspace code cannot rely on sk_err for detecting whether
      there is something on the error queue. Except for ICMP messages
      received for UDP and RAW, sk_err is not set at enqueue time,
      and as a result sk_err can be 0 while there are plenty of
      errors on the error queue.
      
      For ICMP packets in UDP and RAW, sk_err is set when they are
      enqueued on the error queue, but that does not result in aborting
      reads and writes. For such cases, sk_err is only readable via
      getsockopt(SO_ERROR) which will reset the value of sk_err on
      its own. More importantly, prior to this patch,
      recvmsg(MSG_ERRQUEUE) has a race on setting sk_err (i.e.,
      sk_err is set by sock_dequeue_err_skb without atomic ops or
      locks) which can store 0 in sk_err even when we have ICMP
      messages pending. Removing this line from sock_dequeue_err_skb
      eliminates that race.
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • qdisc: catch misconfig of attaching qdisc to tx_queue_len zero device · 84c46dd8
      Authored by Jesper Dangaard Brouer
      It is a clear misconfiguration to attach a qdisc to a device with
      tx_queue_len zero, because some qdisc's (namely, pfifo, bfifo, gred,
      htb, plug and sfb) inherit/copy this value as their queue length.
      
      Why should the kernel catch such a misconfiguration?  Because prior to
      introducing the IFF_NO_QUEUE device flag, userspace found a loophole
      in the qdisc config system that allowed them to achieve the equivalent
      of IFF_NO_QUEUE, which is to remove the qdisc code path entirely from
      a device.  The loophole on older kernels is setting tx_queue_len=0,
      *prior* to device qdisc init (the config time is significant, simply
      setting tx_queue_len=0 doesn't trigger the loophole).
      
      This loophole is currently used by Docker[1] to get better performance
      and scalability out of the veth device.  The Docker developers were
      warned[1] that they needed to adjust the tx_queue_len if ever
       attaching a qdisc.  The OpenShift project didn't remember this warning
       and attached a qdisc; this was caught and fixed in [2].
      
      [1] https://github.com/docker/libcontainer/pull/193
      [2] https://github.com/openshift/origin/pull/11126
      
       Instead of fixing every userspace program that used this loophole and
       forgot to reset the tx_queue_len prior to attaching a qdisc, let's
       catch the misconfiguration on the kernel side.
       Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/qdisc: IFF_NO_QUEUE drivers should use consistent TX queue len · 11597084
      Authored by Jesper Dangaard Brouer
       The IFF_NO_QUEUE flag marks virtual device drivers that don't need a
       default qdisc attached, given they will be backed by a physical
       device that already has a qdisc attached for pushback.
      
       It is still supported to attach a qdisc to an IFF_NO_QUEUE device, as
       this can be useful for various policy reasons (e.g. bandwidth-limiting
       containers).  For this to work, tx_queue_len needs to have a sane
       value, because some qdiscs inherit/copy the tx_queue_len
       (namely pfifo, bfifo, gred, htb, plug and sfb).
      
      Commit a813104d ("IFF_NO_QUEUE: Fix for drivers not calling
      ether_setup()") caught situations where some drivers didn't initialize
      tx_queue_len.  The problem with the commit was choosing 1 as the
      fallback value.
      
       A qdisc queue length of 1 causes more harm than good, because it
       creates hard-to-debug situations for userspace.  It gives userspace
       a false sense of a working config after attaching a qdisc: low-volume
       traffic that doesn't activate the qdisc policy (like ping) works,
       while traffic that needs shaping, for example, cannot reach the
       configured policy levels, given the queue length is too small.
      
       This patch changes the value to DEFAULT_TX_QUEUE_LEN, given that other
       IFF_NO_QUEUE devices (those that call ether_setup()) also use this value.
      
      Fixes: a813104d ("IFF_NO_QUEUE: Fix for drivers not calling ether_setup()")
       Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: make default TX queue length a defined constant · d0a81f67
      Authored by Jesper Dangaard Brouer
       The default TX queue length of Ethernet devices has been a magic
       constant of 1000 ever since the initial git import.
      
       Looking back in historical trees[1][2], the value used to be 100,
       with the same comment "Ethernet wants good queues".  The commit[3]
       that changed this from 100 to 1000 didn't describe why, but from
       conversations with Robert Olsson it seems it happened when Ethernet
       devices went from 100Mbit/s to 1Gbit/s: because the link speed
       increased by 10x, the queue size was adjusted likewise.  This value
       later caused much heartache for the bufferbloat community.
      
      This patch merely moves the value into a defined constant.
      
      [1] https://git.kernel.org/cgit/linux/kernel/git/davem/netdev-vger-cvs.git/
      [2] https://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/
       [3] https://git.kernel.org/tglx/history/c/98921832c232
       Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • udp: do fwd memory scheduling on dequeue · 7c13f97f
      Authored by Paolo Abeni
      A new argument is added to __skb_recv_datagram to provide
      an explicit skb destructor, invoked under the receive queue
      lock.
       The UDP protocol uses this argument to perform memory
       reclaiming on dequeue, so that it no longer sets
       skb->destructor.
       Instead, explicit memory reclaiming is performed at close() time and
       when skbs are removed from the receive queue.
       In-kernel UDP protocol users now need to call an
       skb_recv_udp() variant instead of skb_recv_datagram() to
       properly perform memory accounting on dequeue.
       
       Overall, this allows acquiring the receive queue lock
       only once per dequeue.
      
       Tested using pktgen with random src ports and 64-byte packets at
       wire speed on a 10G link as the sender, with udp_sink as the receiver,
       using an L4-tuple rxhash to stress the contention, and one or more
       udp_sink instances with reuseport.
      
      nr sinks	vanilla		patched
      1		440		560
      3		2150		2300
      6		3650		3800
      9		4450		4600
      12		6250		6450
      
      v1 -> v2:
       - do rmem and allocated memory scheduling under the receive lock
       - do bulk scheduling in first_packet_length() and in udp_destruct_sock()
        - avoid the typedef for the dequeue callback
       Suggested-by: Eric Dumazet <edumazet@google.com>
       Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
       Signed-off-by: Paolo Abeni <pabeni@redhat.com>
       Acked-by: Eric Dumazet <edumazet@google.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/sock: add an explicit sk argument for ip_cmsg_recv_offset() · ad959036
      Authored by Paolo Abeni
       So that we can use it even after orphaning the skbuff.
       Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
       Signed-off-by: Paolo Abeni <pabeni@redhat.com>
       Signed-off-by: Eric Dumazet <edumazet@google.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Update raw socket bind to consider l3 domain · cd2c0f45
      Authored by David Ahern
      Binding a raw socket to a local address fails if the socket is bound
      to an L3 domain:
      
          $ vrf-test  -s -l 10.100.1.2 -R -I red
          error binding socket: 99: Cannot assign requested address
      
       Update raw_bind to consider whether sk_bound_dev_if is bound to an L3
       domain and use inet_addr_type_table to look up the address.
       Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 05 Nov 2016, 3 commits
    • net: inet: Support UID-based routing in IP protocols. · e2d118a1
      Authored by Lorenzo Colitti
      - Use the UID in routing lookups made by protocol connect() and
        sendmsg() functions.
      - Make sure that routing lookups triggered by incoming packets
        (e.g., Path MTU discovery) take the UID of the socket into
        account.
      - For packets not associated with a userspace socket, (e.g., ping
        replies) use UID 0 inside the user namespace corresponding to
        the network namespace the socket belongs to. This allows
         all namespaces to apply routing and iptables rules to
         kernel-originated traffic in that namespace by matching UID 0.
        This is better than using the UID of the kernel socket that is
        sending the traffic, because the UID of kernel sockets created
        at namespace creation time (e.g., the per-processor ICMP and
        TCP sockets) is the UID of the user that created the socket,
        which might not be mapped in the namespace.
      
      Tested: compiles allnoconfig, allyesconfig, allmodconfig
       Tested: https://android-review.googlesource.com/253302
       Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: core: add UID to flows, rules, and routes · 622ec2c9
      Authored by Lorenzo Colitti
       - Define a new FIB rule attribute, FRA_UID_RANGE, to describe a
         range of UIDs.
      - Define a RTA_UID attribute for per-UID route lookups and dumps.
      - Support passing these attributes to and from userspace via
        rtnetlink. The value INVALID_UID indicates no UID was
        specified.
      - Add a UID field to the flow structures.
       Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: core: Add a UID field to struct sock. · 86741ec2
      Authored by Lorenzo Colitti
      Protocol sockets (struct sock) don't have UIDs, but most of the
      time, they map 1:1 to userspace sockets (struct socket) which do.
      
      Various operations such as the iptables xt_owner match need
      access to the "UID of a socket", and do so by following the
      backpointer to the struct socket. This involves taking
      sk_callback_lock and doesn't work when there is no socket
      because userspace has already called close().
      
      Simplify this by adding a sk_uid field to struct sock whose value
      matches the UID of the corresponding struct socket. The semantics
      are as follows:
      
      1. Whenever sk_socket is non-null: sk_uid is the same as the UID
         in sk_socket, i.e., matches the return value of sock_i_uid.
         Specifically, the UID is set when userspace calls socket(),
         fchown(), or accept().
      2. When sk_socket is NULL, sk_uid is defined as follows:
         - For a socket that no longer has a sk_socket because
           userspace has called close(): the previous UID.
         - For a cloned socket (e.g., an incoming connection that is
           established but on which userspace has not yet called
           accept): the UID of the socket it was cloned from.
         - For a socket that has never had an sk_socket: UID 0 inside
           the user namespace corresponding to the network namespace
           the socket belongs to.
      
      Kernel sockets created by sock_create_kern are a special case
      of #1 and sk_uid is the user that created them. For kernel
      sockets created at network namespace creation time, such as the
      per-processor ICMP and TCP sockets, this is the user that created
      the network namespace.
       Signed-off-by: Lorenzo Colitti <lorenzo@google.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 04 Nov 2016, 6 commits
  5. 03 Nov 2016, 6 commits
    • ila: Fix crash caused by rhashtable changes · 1913540a
      Authored by Tom Herbert
       commit ca26893f ("rhashtable: Add rhlist interface")
       added a field to rhashtable_iter, so its length grew to 56 bytes,
       exceeding the size of args in netlink_callback (which is
       48 bytes).  The netlink diag dump function already allocates
       an iter structure and stores a pointer to it in the args of
       netlink_callback.  ila_xlat also uses rhashtable_iter but
       still puts it directly in the arg block.  Now that the size of
       rhashtable_iter has increased, we overwrite beyond the structure.
       The next field happens to be the cb_mutex pointer in netlink_sock,
       hence the crash.
       
       The fix is to allocate the rhashtable_iter and save it as a
       pointer in args.
      
      Tested:
      
        modprobe ila
        ./ip ila add loc 3333:0:0:0 loc_match 2222:0:0:1,
        ./ip ila list  # NO crash now
       Signed-off-by: Tom Herbert <tom@herbertland.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: ip, diag -- Adjust raw_abort to use unlocked __udp_disconnect · 3de864f8
      Authored by Cyrill Gorcunov
       While preparing patches for killing raw sockets via the
       diag netlink interface, I noticed that my runs got stuck:
      
       | [root@pcs7 ~]# cat /proc/`pidof ss`/stack
       | [<ffffffff816d1a76>] __lock_sock+0x80/0xc4
       | [<ffffffff816d206a>] lock_sock_nested+0x47/0x95
       | [<ffffffff8179ded6>] udp_disconnect+0x19/0x33
       | [<ffffffff8179b517>] raw_abort+0x33/0x42
       | [<ffffffff81702322>] sock_diag_destroy+0x4d/0x52
      
      which has not been the case before. I narrowed it down to the commit
      
       | commit 286c72de
       | Author: Eric Dumazet <edumazet@google.com>
       | Date:   Thu Oct 20 09:39:40 2016 -0700
       |
       |     udp: must lock the socket in udp_disconnect()
      
       where we started locking the socket for a different reason.
       
       So raw_abort escaped the renaming; fix it to use the unlocked
       __udp_disconnect instead.
      
      Fixes: 286c72de ("udp: must lock the socket in udp_disconnect()")
      CC: David S. Miller <davem@davemloft.net>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: David Ahern <dsa@cumulusnetworks.com>
      CC: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      CC: James Morris <jmorris@namei.org>
      CC: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
      CC: Patrick McHardy <kaber@trash.net>
      CC: Andrey Vagin <avagin@openvz.org>
      CC: Stephen Hemminger <stephen@networkplumber.org>
       Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
       Acked-by: Eric Dumazet <edumazet@google.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: enhance tcp collapsing · 2331ccc5
      Authored by Eric Dumazet
      As Ilya Lesokhin suggested, we can collapse two skbs at retransmit
      time even if the skb at the right has fragments.
      
       We simply have to use the more generic skb_copy_bits() instead of
       skb_copy_from_linear_data() in tcp_collapse_retrans().
       
       We also need to guard this skb_copy_bits() call in case there is
       nothing to copy; otherwise skb_put() could panic if the left skb has frags.
      
      Tested:
      
      Used following packetdrill test
      
      // Establish a connection.
      0.000 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
         +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
         +0 bind(3, ..., ...) = 0
         +0 listen(3, 1) = 0
      
         +0 < S 0:0(0) win 32792 <mss 1460,sackOK,nop,nop,nop,wscale 8>
         +0 > S. 0:0(0) ack 1 <mss 1460,nop,nop,sackOK,nop,wscale 8>
      +.100 < . 1:1(0) ack 1 win 257
         +0 accept(3, ..., ...) = 4
      
         +0 setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
         +0 write(4, ..., 200) = 200
         +0 > P. 1:201(200) ack 1
      +.001 write(4, ..., 200) = 200
         +0 > P. 201:401(200) ack 1
      +.001 write(4, ..., 200) = 200
         +0 > P. 401:601(200) ack 1
      +.001 write(4, ..., 200) = 200
         +0 > P. 601:801(200) ack 1
      +.001 write(4, ..., 200) = 200
         +0 > P. 801:1001(200) ack 1
      +.001 write(4, ..., 100) = 100
         +0 > P. 1001:1101(100) ack 1
      +.001 write(4, ..., 100) = 100
         +0 > P. 1101:1201(100) ack 1
      +.001 write(4, ..., 100) = 100
         +0 > P. 1201:1301(100) ack 1
      +.001 write(4, ..., 100) = 100
         +0 > P. 1301:1401(100) ack 1
      
      +.100 < . 1:1(0) ack 1 win 257 <nop,nop,sack 1001:1401>
      // Check that TCP collapse works :
         +0 > P. 1:1001(1000) ack 1
       Reported-by: Ilya Lesokhin <ilyal@mellanox.com>
       Signed-off-by: Eric Dumazet <edumazet@google.com>
       Cc: Neal Cardwell <ncardwell@google.com>
       Cc: Yuchung Cheng <ycheng@google.com>
       Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
       Acked-by: Neal Cardwell <ncardwell@google.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • sctp: clean up sctp_packet_transmit · e4ff952a
      Authored by Xin Long
       After adding sctp gso, sctp_packet_transmit is quite a big function now.
       
       This patch extracts the code that packs the packet into sctp_packet_pack
       from sctp_packet_transmit, adds some comments, and simplifies the error
       path by freeing the auth chunk when freeing the packet chunk_list in the
       out path, and by freeing the head skb early if packing the packet fails.
       Signed-off-by: Xin Long <lucien.xin@gmail.com>
       Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
       Acked-by: Neil Horman <nhorman@tuxdriver.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/sched: cls_flower: merge filter delete/destroy common code · 13fa876e
      Authored by Roi Dayan
       Move common code from fl_delete and fl_destroy to __fl_delete.
       Signed-off-by: Roi Dayan <roid@mellanox.com>
       Acked-by: Jiri Pirko <jiri@mellanox.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/sched: cls_flower: add missing unbind call when destroying flows · a1a8f7fe
      Authored by Roi Dayan
       tcf_unbind was called in fl_delete but was missing in fl_destroy when
       force-deleting flows.
      
      Fixes: 77b9900e ('tc: introduce Flower classifier')
       Signed-off-by: Roi Dayan <roid@mellanox.com>
       Reviewed-by: Jiri Pirko <jiri@mellanox.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>