1. 03 Dec 2017, 2 commits
    • net: dsa: remove trans argument from vlan ops · 80e02360
      Vivien Didelot authored
      The DSA switch VLAN ops pass the switchdev_trans structure down to the
      drivers, but none of the drivers use it, and they are not supposed to
      anyway.
      
      Remove the trans argument from VLAN prepare and add operations.
      
      At the same time, fix the following checkpatch warning:
      
          WARNING: line over 80 characters
          #74: FILE: drivers/net/dsa/dsa_loop.c:177:
          +				      const struct switchdev_obj_port_vlan *vlan)
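
      For context, the affected callback signatures in struct dsa_switch_ops
      change as sketched below (a condensed excerpt reconstructed from the
      description above, not the full header):

          /* Before: the unused transaction object was threaded through. */
          int  (*port_vlan_prepare)(struct dsa_switch *ds, int port,
                                    const struct switchdev_obj_port_vlan *vlan,
                                    struct switchdev_trans *trans);

          /* After: prepare and add take only the switch, port and VLAN. */
          int  (*port_vlan_prepare)(struct dsa_switch *ds, int port,
                                    const struct switchdev_obj_port_vlan *vlan);
          void (*port_vlan_add)(struct dsa_switch *ds, int port,
                                const struct switchdev_obj_port_vlan *vlan);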
      Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      80e02360
    • openvswitch: do not propagate headroom updates to internal port · 183dea58
      Paolo Abeni authored
      After commit 3a927bc7 ("ovs: propagate per dp max headroom to
      all vports"), the needed_headroom of the internal vport is updated
      according to the max headroom needed in its datapath.
      
      That avoids the pskb_expand_head() costs when sending/forwarding
      packets towards tunnel devices, at least for some scenarios.
      
      We still require such copy when using the ovs-preferred configuration
      for vxlan tunnels:
      
          br_int
        /       \
      tap      vxlan
                 (remote_ip:X)
      
      br_phy
           \
          NIC
      
      where the route towards the IP 'X' is via 'br_phy'.
      
      When forwarding traffic from the tap towards the vxlan device, we
      will call pskb_expand_head() in vxlan_build_skb() because
      br_phy->needed_headroom is equal to tun->needed_headroom.
      
      With this change we avoid updating the internal vport's needed_headroom,
      so that in the above scenario no head copy is needed, giving a 5%
      performance improvement in a UDP throughput test.
      
      As a trade-off, packets sent from the internal port towards a tunnel
      device will now experience the head copy overhead. The rationale is
      that the latter use-case is less relevant performance-wise.
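
      A hypothetical sketch of the resulting update logic (helper and field
      names here are illustrative, not the literal patch): when the datapath
      max headroom changes, every vport except the internal one gets the new
      value.

          /* Sketch: propagate the datapath max headroom to all vports,
           * skipping the internal port so it keeps its default headroom. */
          static void dp_set_rx_headroom_sketch(struct datapath *dp, int new_hr)
          {
                  struct vport *vport;
                  int i;

                  for (i = 0; i < DP_VPORT_HASH_BUCKETS; i++) {
                          hlist_for_each_entry_rcu(vport, &dp->ports[i],
                                                   dp_hash_node) {
                                  if (vport->ops->type == OVS_VPORT_TYPE_INTERNAL)
                                          continue; /* the behavioral change */
                                  netdev_set_rx_headroom(vport->dev, new_hr);
                          }
                  }
          }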
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Pravin B Shelar <pshelar@ovn.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      183dea58
  2. 02 Dec 2017, 26 commits
  3. 01 Dec 2017, 6 commits
    • Merge branch 'macb-rx-packet-filtering' · 201c78e0
      David S. Miller authored
      Rafal Ozieblo says:
      
      ====================
      Receive packets filtering for macb driver
      
      This patch series adds support for receive packet filtering to the
      Cadence GEM driver. Packets can be redirected to different hardware
      queues based on source IP, destination IP, source port, or destination
      port. To enable filtering, support for RX queueing was added as well.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      201c78e0
    • net: macb: Added support for RX filtering · ae8223de
      Rafal Ozieblo authored
      This patch adds support for filtering received packets into different
      hardware queues (aka ntuple filtering).
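
      As a usage illustration (not part of the patch), such filters are
      installed through the standard ethtool ntuple interface. A minimal
      user-space sketch steering UDP packets with destination port 4789 to
      RX queue 1 might look like this (interface name and port number are
      arbitrary examples):

          #include <string.h>
          #include <sys/ioctl.h>
          #include <net/if.h>
          #include <netinet/in.h>
          #include <linux/ethtool.h>
          #include <linux/sockios.h>

          /* fd is any socket, e.g. socket(AF_INET, SOCK_DGRAM, 0). */
          int install_rule(int fd, const char *ifname)
          {
                  struct ethtool_rxnfc nfc;
                  struct ifreq ifr;

                  memset(&nfc, 0, sizeof(nfc));
                  nfc.cmd = ETHTOOL_SRXCLSRLINS;
                  nfc.fs.flow_type = UDP_V4_FLOW;
                  nfc.fs.h_u.udp_ip4_spec.pdst = htons(4789);
                  nfc.fs.m_u.udp_ip4_spec.pdst = htons(0xffff); /* match all bits */
                  nfc.fs.ring_cookie = 1;           /* target RX queue */
                  nfc.fs.location = RX_CLS_LOC_ANY; /* driver picks a slot */

                  memset(&ifr, 0, sizeof(ifr));
                  strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
                  ifr.ifr_data = (void *)&nfc;
                  return ioctl(fd, SIOCETHTOOL, &ifr);
          }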
      Signed-off-by: Rafal Ozieblo <rafalo@cadence.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ae8223de
    • net: macb: Added some queue statistics · 512286bb
      Rafal Ozieblo authored
      Added statistics per queue:
      - qX_rx_packets
      - qX_rx_bytes
      - qX_rx_dropped
      - qX_tx_packets
      - qX_tx_bytes
      - qX_tx_dropped
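
      The per-queue counter block plausibly looks like the sketch below (an
      assumed layout inferred from the statistics names above, not the
      literal patch):

          /* One counter block per hardware queue, exported via
           * 'ethtool -S' with the qX_ prefix. */
          struct queue_stats {
                  u64     rx_packets;
                  u64     rx_bytes;
                  u64     rx_dropped;
                  u64     tx_packets;
                  u64     tx_bytes;
                  u64     tx_dropped;
          };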
      Signed-off-by: Rafal Ozieblo <rafalo@cadence.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      512286bb
    • net: macb: Added support for many RX queues · ae1f2a56
      Rafal Ozieblo authored
      To enable packet reception on different RX queues, some configuration
      has to be performed. This patch checks how many hardware queues the
      GEM supports and initializes them.
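
      A hypothetical sketch of the discovery step (register and helper names
      are illustrative): queue 0 always exists, and a design-configuration
      register advertises which additional queues this GEM instance
      implements.

          static void probe_rx_queues_sketch(struct macb *bp)
          {
                  unsigned int q, queue_mask;

                  /* bit N set => hardware queue N is implemented */
                  queue_mask = 0x1 | (gem_readl(bp, DCFG6) & 0xff);

                  for (q = 0; q < MACB_MAX_QUEUES; q++) {
                          if (!(queue_mask & BIT(q)))
                                  continue;
                          /* allocate the RX ring, program the queue's ring
                           * base register and enable its interrupts */
                  }
          }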
      Signed-off-by: Rafal Ozieblo <rafalo@cadence.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ae1f2a56
    • vmxnet3: increase default rx ring sizes · 7475908f
      Shrikrishna Khare authored
      There are several reasons for increasing the receive ring sizes:
      
      1. The original ring size of 256 was chosen about 10 years ago when
      vmxnet3 was first created. At that time, 10Gbps Ethernet was not prevalent
      and servers were dominated by 1Gbps Ethernet. Now 10Gbps is commonplace,
      and higher bandwidth links -- 25Gbps, 40Gbps, 50Gbps -- are starting
      to appear. 256 Rx ring entries are simply not enough to keep up with
      higher link speeds when there is a burst of network frames coming from
      these high speed links. Even with full MTU size frames, the entries are
      consumed in a short time. It is also more common to have a mix of frame
      sizes, and a bi-modal distribution of frame sizes is more likely, so the
      average frame size is not close to the full MTU. If we consider an
      average frame size of 800B, a burst of 1024 frames takes ~0.65 ms to
      arrive at 10Gbps, while a burst of 256 frames takes only ~0.16 ms (see
      the arithmetic spelled out after this list). At 25Gbps or 40Gbps, these
      times shrink accordingly.
      
      2. On a hypervisor where there are many VMs and the CPU is overcommitted,
      i.e. the number of VCPUs exceeds the number of PCPUs, each PCPU is
      in effect time-shared between multiple VMs/VCPUs. The time granularity at
      which this multiplexing occurs is typically coarser than between processes
      on a guest OS. Trying to time-slice more finely is not efficient; for
      example, the memory cache may barely be warmed up when a switch from one
      VM to another occurs. This CPU overcommit adds delay before the driver
      in a VM can service incoming packets. Whether the CPU is overcommitted
      really depends on customer workloads. For certain situations, such as
      desktop-VM workloads and product testing setups, it is very common.
      Consolidation and sharing are what drive the efficiency of a customer
      setup for such workloads. In these situations, the raw network bandwidth
      may not be very high, but the intervals during which a VM is not running
      can be relatively long.
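
      For reference, the burst arrival times quoted in point 1 follow
      directly from the stated 800B average frame size:

          1024 frames * 800 B * 8 bit/B = 6.55 Mbit -> 6.55 Mbit / 10 Gbit/s ~= 0.65 ms
           256 frames * 800 B * 8 bit/B = 1.64 Mbit -> 1.64 Mbit / 10 Gbit/s ~= 0.16 ms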
      Signed-off-by: Shrikrishna Khare <skhare@vmware.com>
      Acked-by: Jin Heo <heoj@vmware.com>
      Acked-by: Guolin Yang <gyang@vmware.com>
      Acked-by: Boon Ang <bang@vmware.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7475908f
    • net: dsa: bcm_sf2: Utilize b53_get_tag_protocol() · 9f66816a
      Florian Fainelli authored
      Utilize the much more capable b53_get_tag_protocol(), which takes care
      of all Broadcom switch specifics to resolve which ports can have
      Broadcom tags enabled.
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9f66816a
  4. 30 Nov 2017, 6 commits
    • net/reuseport: drop legacy code · e94a62f5
      Paolo Abeni authored
      Since commit e32ea7e7 ("soreuseport: fast reuseport UDP socket
      selection") and commit c125e80b ("soreuseport: fast reuseport
      TCP socket selection") the relevant reuseport socket matching the current
      packet is selected by the reuseport_select_sock() call. The only
      exceptions are invalid BPF filters/filters returning out-of-range
      indices.
      In the latter case the code implicitly falls back to hash-based
      demultiplexing, but instead of selecting the socket inside the
      reuseport_select_sock() function, it relies on the hash selection
      logic introduced with the early soreuseport implementation.
      
      With this patch, when a BPF filter returns a bad socket index value,
      we fall back to hash-based selection inside the reuseport_select_sock()
      body, so that we can drop some duplicated code in the ipv4 and ipv6
      stacks.

      This also allows a faster lookup in the above scenario and, in a later
      patch, will let us avoid computing the hash value for successful
      BPF-based demultiplexing.
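
      A condensed sketch of the resulting selection flow (based on the
      reuseport_select_sock() signature in net/core/sock_reuseport.c, with
      locking and error paths trimmed):

          struct sock *reuseport_select_sock(struct sock *sk, u32 hash,
                                             struct sk_buff *skb, int hdr_len)
          {
                  struct sock_reuseport *reuse;
                  struct sock *sk2 = NULL;
                  u16 socks;

                  reuse = rcu_dereference(sk->sk_reuseport_cb);
                  socks = reuse->num_socks;

                  if (reuse->prog && skb)
                          sk2 = run_bpf(reuse, socks, reuse->prog, skb, hdr_len);

                  /* No BPF filter, or it returned an out-of-range index:
                   * fall back to hash-based selection here instead of in
                   * the ipv4/ipv6 callers. */
                  if (!sk2)
                          sk2 = reuse->socks[reciprocal_scale(hash, socks)];

                  return sk2;
          }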
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Acked-by: Craig Gallek <kraig@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e94a62f5
    • Documentation: net: dsa: Cut set_addr() documentation · 0fc66ddf
      Linus Walleij authored
      This is not supported anymore; devices needing a MAC address just
      assign one at random. It is just a driver peculiarity.
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0fc66ddf
    • Merge branch 'net-dst_entry-shrink' · 3d8068c5
      David S. Miller authored
      David Miller says:
      
      ====================
      net: Significantly shrink the size of routes.
      
      Through a combination of several things, our route structures are
      larger than they need to be.
      
      Mostly this stems from having members in dst_entry which are only used
      by one class of routes.  So the majority of the work in this series is
      about "un-commoning" these members and pushing them into the type
      specific structures.
      
      Unfortunately, IPSEC needed the most surgery.  The majority of the
      changes here had to do with bundle creation and management.
      
      The other issue is the refcount alignment in dst_entry.  Once we get
      rid of the not-so-common members, it really opens the door to removing
      that alignment entirely.
      
      I think the new layout looks really nice, so I'll reproduce it here:
      
      	struct net_device       *dev;
      	struct  dst_ops	        *ops;
      	unsigned long		_metrics;
      	unsigned long           expires;
      	struct xfrm_state	*xfrm;
      	int			(*input)(struct sk_buff *);
      	int			(*output)(struct net *net, struct sock *sk, struct sk_buff *skb);
      	unsigned short		flags;
      	short			obsolete;
      	unsigned short		header_len;
      	unsigned short		trailer_len;
      	atomic_t		__refcnt;
      	int			__use;
      	unsigned long		lastuse;
      	struct lwtunnel_state   *lwtstate;
      	struct rcu_head		rcu_head;
      	short			error;
      	short			__pad;
      	__u32			tclassid;
      
      (This is for 64-bit, on 32-bit the __refcnt comes at the very end)
      
      So, the good news:
      
      1) struct dst_entry shrinks from 160 to 112 bytes.
      
      2) struct rtable shrinks from 216 to 168 bytes.
      
      3) struct rt6_info shrinks from 384 to 320 bytes.
      
      Enjoy.
      
      v2:
      	Collapse some patches logically based upon feedback.
      	Fix the strange patch #7.
      
      v3:	xfrm_dst_path() needs inline keyword
      	Properly align __refcnt on 32-bit.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3d8068c5
    • net: Remove dst->next · 7149f813
      David Miller authored
      There are no more users.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      7149f813
    • xfrm: Stop using dst->next in bundle construction. · 5492093d
      David Miller authored
      While building ipsec bundles, blocks of xfrm dsts are linked together
      using dst->next, from bottom to top.

      The only things this is used for are initializing the pmtu values of the
      xfrm stack and updating the mtu values at xfrm_bundle_ok() time.
      
      The bundle pmtu entries must be processed in this order so that pmtu
      values lower in the stack of routes can propagate up to the higher
      ones.
      
      Avoid using dst->next by simply maintaining an array of dst pointers
      as we already do for the xfrm_state objects when building the bundle.
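
      A sketch of the array-based pmtu walk (assuming the bundle array and
      helper names used elsewhere in this series; not the literal patch):

          /* Walk the bundle bottom-up so that pmtu values lower in the
           * stack propagate to the levels above. */
          static void xfrm_init_pmtu_sketch(struct xfrm_dst **bundle, int nr)
          {
                  while (nr--) {
                          struct dst_entry *dst = &bundle[nr]->u.dst;
                          u32 pmtu;

                          pmtu = dst_mtu(xfrm_dst_child(dst));
                          pmtu = xfrm_state_mtu(dst->xfrm, pmtu);
                          dst_metric_set(dst, RTAX_MTU, pmtu);
                  }
          }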
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      5492093d
    • net: Rearrange dst_entry layout to avoid useless padding. · 8b207e73
      David Miller authored
      We have padding to try and align the refcount on a separate cache
      line.  But after several simplifications the padding has increased
      substantially.
      
      So now it's easy to change the layout to get rid of the padding
      entirely.
      
      We group the write-heavy __refcnt and __use with less frequently used
      items such as the rcu_head and the error code.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      8b207e73