1. 02 April 2017, 9 commits
    • net: mpls: Increase max number of labels for lwt encap · 1511009c
      David Ahern committed
      Allow users to push down more labels per MPLS encap. As in the LSR case,
      move the label array to the end of mpls_iptunnel_encap and allocate it
      based on the number of labels for the route (see the sketch at the end of
      this entry).
      
      For consistency with the LSR case, re-use the same maximum number of
      labels.
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
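      A minimal sketch of the flexible-array pattern the commit describes; the
      field names are simplified assumptions, not the exact kernel struct:

          #include <linux/types.h>
          #include <linux/slab.h>

          /* The label array sits at the end of the struct so each encap can
           * be allocated with exactly the number of labels its route needs. */
          struct mpls_iptunnel_encap {
                  u8  labels;     /* number of entries in label[] */
                  u32 label[0];   /* 0-sized array, sized at allocation time */
          };

          static struct mpls_iptunnel_encap *encap_alloc(u8 n_labels)
          {
                  struct mpls_iptunnel_encap *en;

                  en = kzalloc(sizeof(*en) + n_labels * sizeof(u32),
                               GFP_KERNEL);
                  if (en)
                          en->labels = n_labels;
                  return en;
          }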
    • net: mpls: bump maximum number of labels · a4ac8c98
      David Ahern committed
      Allow users to push down more labels per MPLS route. With the previous
      patches, no memory allocations are based on MAX_NEW_LABELS; the limit
      is only used to keep userspace in check.
      
      At this point MAX_NEW_LABELS is only used for mpls_route_config (copying
      route data from userspace) and for scanning nexthops to find the maximum
      number of labels across the route spec (sketched below).
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
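      A sketch of the userspace-limit check described above; the helper, the
      struct, and the limit value shown are illustrative stand-ins, not the
      kernel code:

          #include <linux/types.h>
          #include <linux/errno.h>

          #define MAX_NEW_LABELS 30   /* only bounds what userspace may ask for */

          struct nh_spec {
                  u8 n_labels;        /* labels requested for this nexthop */
          };

          /* Reject over-limit input and report the per-route maximum, which
           * is what allocations are actually sized from. */
          static int max_labels_across_nexthops(const struct nh_spec *nhs,
                                                int num_nh)
          {
                  int i, max_labels = 0;

                  for (i = 0; i < num_nh; i++) {
                          if (nhs[i].n_labels > MAX_NEW_LABELS)
                                  return -EINVAL;
                          if (nhs[i].n_labels > max_labels)
                                  max_labels = nhs[i].n_labels;
                  }
                  return max_labels;
          }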
    • net: mpls: Limit memory allocation for mpls_route · df1c6316
      David Ahern committed
      Limit the memory allocation size for mpls_route to 4096 bytes (see the
      sketch below).
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
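      A minimal sketch of the guard this implies; the helper name and the exact
      placement of the check are assumptions, not the kernel code:

          #include <linux/slab.h>
          #include <linux/err.h>

          #define MAX_MPLS_ROUTE_MEM 4096  /* one page keeps the alloc order-0 */

          /* Refuse route configs whose backing allocation would exceed 4K. */
          static void *mpls_rt_alloc_checked(size_t size)
          {
                  if (size > MAX_MPLS_ROUTE_MEM)
                          return ERR_PTR(-EINVAL);
                  return kzalloc(size, GFP_KERNEL);
          }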
    • net: mpls: change mpls_route layout · 59b20966
      David Ahern committed
      Move labels to the end of mpls_nh as a 0-sized array, and within mpls_route
      move each nexthop's via after its mpls_nh. The new layout becomes:
      
         +----------------------+
         | mpls_route           |
         +----------------------+
         | mpls_nh 0            |
         +----------------------+
         | alignment padding    |   4 bytes for odd number of labels; 0 for even
         +----------------------+
         | via[rt_max_alen] 0   |
         +----------------------+
         | alignment padding    |   via's aligned on sizeof(unsigned long)
         +----------------------+
         | ...                  |
         +----------------------+
         | mpls_nh n-1          |
         +----------------------+
         | via[rt_max_alen] n-1 |
         +----------------------+
      
      Memory allocated for nexthop + via is constant across all nexthops and
      their via. It is based on the maximum number of labels across all nexthops
      and the maximum via length. The size is saved in the mpls_route as
      rt_nh_size. Accessing a nexthop becomes rt->rt_nh + index * rt->rt_nh_size.
      
      The offset of the via address from a nexthop is saved as rt_via_offset,
      so given an mpls_nh pointer the via for that hop is simply
      nh + rt->rt_via_offset (see the accessor sketch at the end of this entry).
      
      With prior code, memory allocated per mpls_route with 1 nexthop:
           via is an ethernet address - 64 bytes
           via is an ipv4 address     - 64
           via is an ipv6 address     - 72
      
      With this patch set, memory allocated per mpls_route with 1 nexthop and
      1 or 2 labels:
           via is an ethernet address - 56 bytes
           via is an ipv4 address     - 56
           via is an ipv6 address     - 64
      
      The 8-byte reduction is due to the previous patch; the change introduced
      by this patch has no impact on the size of allocations for 1 or 2 labels.
      
      The performance impact of this change was examined using network
      namespaces with veth pairs connecting the namespaces. ns0 injects the
      packet into the label-switched path using an lwt route with encap mpls.
      ns1 adds 1 or 2 labels depending on the test, and ns2 (and ns3 for the
      2-label test) pops a label and forwards. ns3 (or ns4 for the 2-label
      test) is the destination. A similar series of namespaces is used for the
      2-nexthop test.
      
      The intent is to measure changes in latency (overhead in manipulating the
      packet) in the forwarding path. Tests used netperf with UDP_RR.
      
      IPv4:                     current   patches
         1 label, 1 nexthop      29908     30115
         2 label, 1 nexthop      29071     29612
         1 label, 2 nexthop      29582     29776
         2 label, 2 nexthop      29086     29149
      
      IPv6:                     current   patches
         1 label, 1 nexthop      24502     24960
         2 label, 1 nexthop      24041     24407
         1 label, 2 nexthop      23795     23899
         2 label, 2 nexthop      23074     22959
      
      In short, the change ranges from no effect to a modest increase in
      performance. This is expected, since this patch has no real impact on
      routes with 1 or 2 labels (the current limit) and 1 or 2 nexthops.
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
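      An illustrative sketch of the accessors described above, assuming structs
      trimmed down to only the fields the commit message mentions:

          #include <linux/types.h>

          struct mpls_nh {
                  /* ... per-nexthop fields ... */
                  u8  nh_labels;
                  u32 nh_label[0];      /* 0-sized label array at the end */
                  /* via bytes follow at rt_via_offset from the slot start */
          };

          struct mpls_route {
                  u8             rt_nhn;          /* number of nexthops */
                  u8             rt_max_alen;     /* longest via address */
                  unsigned short rt_nh_size;      /* bytes per nexthop+via slot */
                  unsigned short rt_via_offset;   /* via offset within a slot */
                  struct mpls_nh rt_nh[0];        /* rt_nhn fixed-size slots */
          };

          /* Each nexthop lives at a fixed stride from the first slot. */
          static inline struct mpls_nh *mpls_nexthop(struct mpls_route *rt,
                                                     int index)
          {
                  return (struct mpls_nh *)((u8 *)rt->rt_nh +
                                            index * rt->rt_nh_size);
          }

          /* The via address sits rt_via_offset bytes into the slot. */
          static inline u8 *mpls_nh_via(struct mpls_route *rt,
                                        struct mpls_nh *nh)
          {
                  return (u8 *)nh + rt->rt_via_offset;
          }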
    • net: mpls: Convert number of nexthops to u8 · 77ef013a
      David Ahern committed
      The number of nexthops and the number of alive nexthops are tracked using
      an unsigned int. A route should never have more than 255 nexthops, so
      convert both to u8 (see the sketch below). Update all references and
      intermediate variables to consistently use u8 as well.
      
      Shrinks the size of mpls_route from 32 bytes to 24 bytes with a 2-byte
      hole before the nexthops.
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
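      A sketch of the shrunken layout; the surrounding field names are
      assumptions, chosen so the arithmetic matches the sizes quoted above
      (a 16-byte rcu_head plus six u8 fields, padded to 24 bytes):

          #include <linux/types.h>
          #include <linux/rcupdate.h>

          struct mpls_route {
                  struct rcu_head rt_rcu;          /* 16 bytes on 64-bit */
                  u8              rt_protocol;
                  u8              rt_payload_type;
                  u8              rt_max_alen;
                  u8              rt_ttl_propagate;
                  u8              rt_nhn;          /* was unsigned int */
                  u8              rt_nhn_alive;    /* was unsigned int */
                  /* 2-byte hole; nexthop slots are appended past the end,
                   * as in the previous entry */
          };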
    • net: mpls: rt_nhn_alive and nh_flags should be accessed using READ_ONCE · 39eb8cd1
      David Ahern committed
      The number of alive nexthops for a route (rt->rt_nhn_alive) and the flags
      for a nexthop (nh->nh_flags) are modified by netdev event handlers. The
      event handlers run with rtnl_lock held, so updates are always done with
      the lock held. The packet path accesses the fields under the RCU read
      lock. Since those fields can change at any moment in the packet path,
      both fields should be read using READ_ONCE, and updates to both fields
      should use WRITE_ONCE.

      Update mpls_select_multipath (packet path) and mpls_ifdown and mpls_ifup
      (event handlers) accordingly (see the sketch below).
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
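      A sketch of the access pattern, assuming the illustrative structs from
      the earlier entries plus an unsigned int nh_flags field on mpls_nh; the
      helper names are invented, and the RTNH_F_* flags come from
      <linux/rtnetlink.h>:

          #include <linux/types.h>
          #include <linux/compiler.h>
          #include <linux/rtnetlink.h>

          /* Packet path, under rcu_read_lock(): lockless readers use
           * READ_ONCE so each field is loaded once and never re-fetched. */
          static bool nh_is_usable(struct mpls_route *rt, struct mpls_nh *nh)
          {
                  unsigned int flags = READ_ONCE(nh->nh_flags);

                  return READ_ONCE(rt->rt_nhn_alive) > 0 &&
                         !(flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN));
          }

          /* Netdev event handler, under rtnl_lock: stores are paired with
           * WRITE_ONCE so lockless readers never see torn values. */
          static void nh_mark_dead(struct mpls_route *rt, struct mpls_nh *nh,
                                   u8 alive)
          {
                  WRITE_ONCE(nh->nh_flags, nh->nh_flags | RTNH_F_DEAD);
                  WRITE_ONCE(rt->rt_nhn_alive, alive);
          }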
    • net: dsa: fix build error with devlink build as module · 768bfa2a
      Tobias Regnery committed
      After commit 96567d5d ("net: dsa: dsa2: Add basic support of devlink")
      I see the following link errors with CONFIG_NET_DSA=y and CONFIG_NET_DEVLINK=m:
      
      net/built-in.o: In function `dsa_register_switch':
      (.text+0xe226b): undefined reference to `devlink_alloc'
      net/built-in.o: In function `dsa_register_switch':
      (.text+0xe2284): undefined reference to `devlink_register'
      net/built-in.o: In function `dsa_register_switch':
      (.text+0xe243e): undefined reference to `devlink_port_register'
      net/built-in.o: In function `dsa_register_switch':
      (.text+0xe24e1): undefined reference to `devlink_port_register'
      net/built-in.o: In function `dsa_register_switch':
      (.text+0xe24fa): undefined reference to `devlink_port_type_eth_set'
      net/built-in.o: In function `dsa_dst_unapply.part.8':
      dsa2.c:(.text.unlikely+0x345): undefined reference to `devlink_port_unregister'
      dsa2.c:(.text.unlikely+0x36c): undefined reference to `devlink_port_unregister'
      dsa2.c:(.text.unlikely+0x38e): undefined reference to `devlink_port_unregister'
      dsa2.c:(.text.unlikely+0x3f2): undefined reference to `devlink_unregister'
      dsa2.c:(.text.unlikely+0x3fb): undefined reference to `devlink_free'
      
      Fix this by adding a dependency on MAY_USE_DEVLINK, so that CONFIG_NET_DSA
      is switched to being built as a module when CONFIG_NET_DEVLINK=m (see the
      Kconfig sketch below).
      
      Fixes: 96567d5d ("net: dsa: dsa2: Add basic support of devlink")
      Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
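      A Kconfig sketch of the kind of change this describes; the shape of the
      MAY_USE_DEVLINK helper shown here is an assumption: a tristate that
      evaluates to m when NET_DEVLINK=m, forcing anything that depends on it
      to be modular as well:

          # net/Kconfig: helper symbol (assumed shape)
          config MAY_USE_DEVLINK
                  tristate
                  default m if NET_DEVLINK=m
                  default y if NET_DEVLINK=y || NET_DEVLINK=n

          # net/dsa/Kconfig: the fix adds the dependency
          config NET_DSA
                  tristate "Distributed Switch Architecture"
                  depends on MAY_USE_DEVLINK
                  # ... existing dependencies and selects ...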
    • bpf: introduce BPF_PROG_TEST_RUN command · 1cf1cae9
      Alexei Starovoitov committed
      Development and testing of networking bpf programs is quite cumbersome.
      Despite the availability of user space bpf interpreters, the kernel is
      the ultimate authority and execution environment.
      Current test frameworks for TC include creation of netns, veth,
      qdiscs and use of various packet generators just to test the
      functionality of a bpf program. XDP testing is even more complicated,
      since qemu needs to be started with gro/gso disabled and a precise queue
      configuration, the xdp program transferred from host into guest,
      attached to virtio/eth0, and traffic generated from the host
      while capturing the results from the guest.

      Moreover, analyzing performance bottlenecks in an XDP program is
      impossible in a virtio environment, since the cost of running the
      program is tiny compared to the overhead of virtio packet processing,
      so performance testing can only be done on a physical nic
      with another server generating traffic.

      Furthermore, ongoing changes to the user space control plane of
      production applications cannot be run on the test servers, leaving bpf
      programs stubbed out for testing.

      Last but not least, upstream llvm changes are validated by the bpf
      backend testsuite, which has no ability to test the generated code.

      To improve this situation, introduce the BPF_PROG_TEST_RUN command
      to test and performance-benchmark bpf programs (see the usage sketch
      below).
      
      Joint work with Daniel Borkmann.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
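      A minimal user-space sketch of driving the new command through the
      bpf(2) syscall; the attr.test field names follow the uapi this commit
      adds, but the wrapper itself is invented, and prog_fd is assumed to
      hold an already-loaded program:

          #include <linux/bpf.h>
          #include <sys/syscall.h>
          #include <unistd.h>
          #include <string.h>

          static int prog_test_run(int prog_fd, void *data, __u32 size,
                                   void *data_out, __u32 *size_out,
                                   __u32 repeat, __u32 *retval,
                                   __u32 *duration)
          {
                  union bpf_attr attr;
                  int err;

                  memset(&attr, 0, sizeof(attr));
                  attr.test.prog_fd = prog_fd;
                  attr.test.data_in = (unsigned long)data;  /* input packet */
                  attr.test.data_size_in = size;
                  attr.test.data_out = (unsigned long)data_out;
                  attr.test.repeat = repeat;   /* loop count for benchmarking */

                  err = syscall(__NR_bpf, BPF_PROG_TEST_RUN, &attr,
                                sizeof(attr));
                  if (!err) {
                          if (retval)
                                  *retval = attr.test.retval;
                          if (duration)        /* per-run time, as reported */
                                  *duration = attr.test.duration;
                          if (size_out)
                                  *size_out = attr.test.data_size_out;
                  }
                  return err;
          }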
    • net: dsa: add cross-chip bridging operations · 40ef2c93
      Vivien Didelot committed
      Introduce crosschip_bridge_{join,leave} operations in the dsa_switch_ops
      structure, which can be used by switches supporting interconnection (see
      the sketch below).
      Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
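      A sketch of what such ops plausibly look like; the exact parameter list
      is an assumption based on the description (switch index within the
      fabric, port number, and the bridge net_device):

          #include <linux/netdevice.h>

          struct dsa_switch;

          struct dsa_switch_ops {
                  /* ... existing ops ... */

                  /* Cross-chip bridging: notify every switch in the fabric
                   * when a port on switch sw_index joins or leaves bridge br,
                   * so the other chips can program forwarding toward it. */
                  int  (*crosschip_bridge_join)(struct dsa_switch *ds,
                                                int sw_index, int port,
                                                struct net_device *br);
                  void (*crosschip_bridge_leave)(struct dsa_switch *ds,
                                                 int sw_index, int port,
                                                 struct net_device *br);
          };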
  2. 31 March 2017, 4 commits
  3. 30 March 2017, 3 commits
  4. 29 March 2017, 14 commits
  5. 28 March 2017, 3 commits
  6. 25 March 2017, 7 commits