1. Feb 16, 2018: 2 commits
    • net/ipv4: Remove fib table id from rtable · 68e813aa
      Authored by David Ahern
      Remove rt_table_id from rtable. It was added so that getroute could return
      the table id that was hit in the lookup. With the changes for fibmatch, the
      table id can be extracted from the fib_info returned in the fib_result,
      so it no longer needs to be stored in rtable directly.
      Signed-off-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tun: Add ioctl() SIOCGSKNS cmd to allow obtaining net ns of tun device · f2780d6d
      Authored by Kirill Tkhai
      This patch adds the ability to obtain a tun device's net namespace fd
      in the same way we already allow it for sockets.
      
      Socket ioctl numbers do not intersect with the tun-specific ones, and
      SIOCSIFHWADDR is already used in the tun code. So the SIOCGSKNS number
      is chosen rather than a custom-made one for this functionality.
      
      Note that open_related_ns() uses plain get_net_ns(), and that is safe
      (the net cannot already be dead at this moment):

        The tun socket is allocated via sk_alloc() with a zero last argument (kern = 0).
        So each live socket increments net::count, and the socket is definitely
        alive for the duration of the ioctl syscall.
      
      Also, a common variable net is introduced, allowing a small cleanup
      in TUNSETIFF.
      Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
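      The socket side of this mechanism can be sketched as follows. This is a
      minimal demo, not the patch itself: it calls SIOCGSKNS (available since
      Linux 4.9, declared in linux/sockios.h) on a plain UDP socket, which is
      the pre-existing path the commit extends; per the commit, the same ioctl
      number now also works on a tun fd.

      ```c
      /* Minimal sketch: obtain the net-namespace fd of a socket via the
       * SIOCGSKNS ioctl.  With this patch, the same ioctl also works on a
       * tun device fd (which needs privileges, so a UDP socket is used
       * here for the demo). */
      #include <assert.h>
      #include <stdio.h>
      #include <sys/ioctl.h>
      #include <sys/socket.h>
      #include <unistd.h>
      #include <linux/sockios.h>   /* SIOCGSKNS, since Linux 4.9 */

      int main(void)
      {
          int sk = socket(AF_INET, SOCK_DGRAM, 0);
          assert(sk >= 0);

          /* Returns a new fd referring to the net namespace of the socket. */
          int nsfd = ioctl(sk, SIOCGSKNS);
          if (nsfd < 0) {
              perror("SIOCGSKNS");  /* e.g. ENOTTY on pre-4.9 kernels */
          } else {
              printf("netns fd: %d\n", nsfd);
              close(nsfd);
          }
          close(sk);
          return 0;
      }
      ```

      The returned fd references the nsfs inode for the socket's namespace and
      can be passed to setns(2), exactly as with the existing socket path.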
  2. Feb 15, 2018: 19 commits
  3. Feb 14, 2018: 17 commits
  4. Feb 13, 2018: 2 commits
    • net: Convert loopback_net_ops · 9a4d105d
      Authored by Kirill Tkhai
      These pernet_operations have only an init() method. It allocates
      memory for the net_device, calls register_netdev() and assigns
      net::loopback_dev.
      
      register_netdev() may be used without additional locks, as it is
      synchronized on rtnl_lock(). There are many examples of this
      function being called directly from ioctl().
      
      The only difference, compared to ioctl(), is that the net is not
      completely alive at this moment. But it looks like there is no way
      for parallel pernet_operations to dereference the net_device, as
      most of the struct net_device lists it is linked into belong to the
      net, and the net itself is not yet linked.
      
      The exceptions are net_device::unreg_list, ::close_list and
      ::todo_list, which are used for unregistration, and ::link_watch_list,
      via which a net_device may be linked into global lists.
      
      Unregistration of loopback_dev obviously cannot happen while
      loopback_net_init() is executing, as the net is alive. It occurs
      in default_device_ops, which currently requires net_mutex and thus
      behaves as a barrier at the moment; this will be addressed in the
      next patch.
      
      As for link_watch_list, it seems there is no way for loopback_dev,
      at registration time, to be linked into lweventlist and become
      visible to other pernet_operations.
      Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Acked-by: Andrei Vagin <avagin@virtuozzo.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • i40e/i40evf: Add support for new mechanism of updating adaptive ITR · a0073a4b
      Authored by Alexander Duyck
      This patch replaces the existing mechanism for determining the correct
      value to program for adaptive ITR with yet another new and more
      complicated approach.
      
      The basic idea, from a 30,000-foot view, is that this new approach pushes
      the Rx interrupt moderation so that by default it starts in low latency
      and is gradually pushed up into a higher-latency setup as long as doing so
      increases the number of packets processed; if the number of packets per
      interrupt drops to between 4 and 1, we will reset and base our ITR purely
      on the size of the packets being received. For Tx we leave it floating at
      a high interrupt delay and do not pull it down unless we start processing
      more than 112 packets per interrupt. If we start exceeding that, we will
      cut our interrupt rates in half until we are back below 112.
      
      The side effect of these patches is that we will be processing more
      packets per interrupt. This is both a good and a bad thing: it means we
      will not be blocking processing in cases such as pktgen and XDP, but we
      will also consume a bit more CPU in cases such as network throughput
      tests using netperf.
      
      One delta versus the ixgbe version of these changes is that I have made
      the interrupt moderation a bit more aggressive when we are in bulk mode,
      by moving our "goldilocks zone" up from 48-96 to 56-112. The main
      motivation is that we need to update less frequently, and we have more
      fine-grained control due to the separate Tx and Rx ITR times.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
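      The Tx-side rule described above can be sketched as a toy function. This
      is illustrative only, not the driver code: the 112-packet threshold comes
      from the commit message, while the function name, the delay floor, and
      the reading of "cut our interrupt rates in half" as halving the ITR delay
      (so interrupts fire twice as often and the per-interrupt packet count
      falls back below the threshold) are assumptions here.

      ```c
      /* Illustrative sketch, not driver code: the Tx-side adjustment rule
       * the commit describes.  Only the 112-packet threshold is taken from
       * the commit message; everything else is hypothetical. */
      #include <assert.h>

      #define TX_BULK_THRESHOLD 112  /* packets per interrupt (from the commit) */
      #define ITR_MIN_USECS       2  /* hypothetical floor on the delay */

      /* While more than 112 packets are handled in a single interrupt,
       * halve the interrupt-throttle delay so interrupts fire twice as
       * often; otherwise leave the delay floating where it is. */
      static unsigned int tx_adjust_itr(unsigned int itr_usecs, unsigned int pkts)
      {
          if (pkts > TX_BULK_THRESHOLD && itr_usecs / 2 >= ITR_MIN_USECS)
              itr_usecs /= 2;  /* double the interrupt rate */
          return itr_usecs;
      }

      int main(void)
      {
          assert(tx_adjust_itr(80, 200) == 40);  /* over threshold: delay halved */
          assert(tx_adjust_itr(80, 100) == 80);  /* under threshold: unchanged */
          return 0;
      }
      ```

      The real driver applies this per interrupt vector and re-evaluates on
      each interrupt, so the delay converges over several interrupts rather
      than in one step.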