1. 13 May 2015, 15 commits
  2. 12 May 2015, 5 commits
    •
      net: Add skb_free_frag to replace use of put_page in freeing skb->head · 181edb2b
      Committed by Alexander Duyck
      This change adds a function called skb_free_frag which is meant to
      complement the function netdev_alloc_frag.  The general idea is to enable a
      more lightweight version of page freeing, since we don't actually need all
      the overhead of put_page, and we don't quite fit the model of __free_pages.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      181edb2b
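      The helper itself is tiny. A minimal sketch of such a wrapper, assuming the
      __free_page_frag() helper from the companion mm patch below:

      /* Sketch only, not the verbatim patch: a thin wrapper so skb code
       * doesn't open-code fragment freeing. */
      static inline void skb_free_frag(void *addr)
      {
              __free_page_frag(addr);
      }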
    •
      mm/net: Rename and move page fragment handling from net/ to mm/ · b63ae8ca
      Committed by Alexander Duyck
      This change moves the __alloc_page_frag functionality out of the networking
      stack and into the page allocation portion of mm.  The idea is to help make
      this maintainable by placing it with other page allocation functions.
      
      Since we are moving it from skbuff.c to page_alloc.c I have also renamed
      the basic defines and structure from netdev_alloc_cache to page_frag_cache
      to reflect that this is now part of a different kernel subsystem.
      
      I have also added a simple __free_page_frag function which can handle
      freeing the frags based on the skb->head pointer.  The model for this is
      based off of __free_pages since we don't actually need to deal with all of
      the cases that put_page handles.  I incorporated the virt_to_head_page call
      and compound_order into the function as it actually allows for a significant
      size reduction by reducing code duplication.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b63ae8ca
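      A sketch of what the freeing helper described above could look like, folding
      virt_to_head_page() and compound_order() into one place (reconstructed from
      the description, not the verbatim patch):

      /* Free a page fragment given only its virtual address (e.g. the
       * skb->head pointer).  Modeled on __free_pages; skips the extra
       * cases that put_page has to handle. */
      void __free_page_frag(void *addr)
      {
              struct page *page = virt_to_head_page(addr);

              if (unlikely(put_page_testzero(page)))
                      __free_pages_ok(page, compound_order(page));
      }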
    •
      net: Store virtual address instead of page in netdev_alloc_cache · 0e392508
      Committed by Alexander Duyck
      This change makes it so that we store the virtual address of the page
      in the netdev_alloc_cache instead of the page pointer.  The idea behind
      this is to avoid multiple calls to page_address since the virtual address
      is required for every access, but the page pointer is only needed at
      allocation or reset of the page.
      
      While I was at it I also reordered the netdev_alloc_cache structure a bit
      so that the size is always 16 bytes by dropping size in the case where
      PAGE_SIZE is greater than or equal to 32KB.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0e392508
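      A rough sketch of the reordered cache layout (the macro name guarding the
      conditional is an assumption):

      struct netdev_alloc_cache {
              void *va;               /* virtual address of the page */
      #if (PAGE_SIZE < NETDEV_FRAG_PAGE_MAX_SIZE)
              __u16 offset;
              __u16 size;
      #else
              __u32 offset;           /* size is implied by PAGE_SIZE */
      #endif
              unsigned int pagecnt_bias;
              bool pfmemalloc;        /* added by the next commit below */
      };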
    •
      net: Use cached copy of pfmemalloc to avoid accessing page · 9451980a
      Committed by Alexander Duyck
      While testing I found that the check for pfmemalloc in build_skb was
      rather expensive.  I found the issue to be two-fold.  First we have to get
      from the virtual address to the head page, and that comes at a cost of
      something like 11 cycles.  Then there is the cost of reading pfmemalloc out
      of the head page, which can be cache cold due to the fact that
      put_page_testzero is likely invalidating the cache-line on one or more
      CPUs, as the fragments can be shared.
      
      To avoid this extra expense I have added a pfmemalloc member to the
      netdev_alloc_cache.  I then pushed pieces of __alloc_rx_skb into
      __napi_alloc_skb and __netdev_alloc_skb so that I could rewrite them to
      make use of the cached pfmemalloc value.  The result is that my perf traces
      show a reduction from 9.28% overhead to 3.7% for the code covered by
      build_skb, __alloc_rx_skb, and __napi_alloc_skb when performing a test with
      the packet being dropped instead of being handed to napi_gro_receive.
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9451980a
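      A sketch of how the cached flag could be consumed in the rewritten
      allocation paths (the wrapper shown here is hypothetical; only the
      pfmemalloc handling is the point):

      static struct sk_buff *nc_build_skb(struct netdev_alloc_cache *nc,
                                          void *data, unsigned int len)
      {
              struct sk_buff *skb = __build_skb(data, len);

              if (likely(skb)) {
                      /* use the cached copy; avoids virt_to_head_page()
                       * plus a possibly cache-cold read of the head page */
                      if (nc->pfmemalloc)
                              skb->pfmemalloc = 1;
                      skb->head_frag = 1;
              }
              return skb;
      }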
    •
      net: sched: deprecate enqueue_root() · b396cca6
      Committed by Eric Dumazet
      The only remaining enqueue_root() user is netem, and it looks unnecessary:
      
      qdisc_skb_cb(skb)->pkt_len is preserved across an skb_clone().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b396cca6
  3. 11 May 2015, 9 commits
    •
      net: sched: further simplify handle_ing · d2788d34
      Committed by Daniel Borkmann
      Ingress qdisc has no other purpose than calling into tc_classify()
      that executes attached classifier(s) and action(s).
      
      It has a 1:1 relationship to dev->ingress_queue. After commit
      087c1a60 ("net: sched: run ingress qdisc without locks") removed
      the central ingress lock, one major contention point is gone.
      
      The extra indirection layers, however, are not necessary for calling
      into ingress qdisc. pktgen calling locally into netif_receive_skb()
      with a dummy u32 classifier, single CPU result on a Supermicro X10SLM-F,
      Xeon E3-1240: before ~21.1 Mpps, after patch ~22.9 Mpps.
      
      We can redirect the private classifier list to the netdev directly,
      without changing any classifier API bits (!) and execute on that from
      handle_ing() side. The __QDISC_STATE_DEACTIVATE test can be removed,
      ingress qdisc doesn't have a queue and thus dev_deactivate_queue()
      is also not applicable, ingress_cl_list provides similar behaviour.
      In other words, ingress qdisc acts like TCQ_F_BUILTIN qdisc.
      
      One next possible step is the removal of the dev's ingress (dummy)
      netdev_queue, and to only have the list member in the netdevice
      itself.
      
      Note, the filter chain is RCU protected and individual filter elements
      are kfree'd by the sched subsystem after an RCU grace period. The RCU
      read lock is held by __netif_receive_skb_core().
      
      Joint work with Alexei Starovoitov.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d2788d34
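      A condensed sketch of the resulting ingress path (signature simplified;
      action handling abbreviated):

      static inline struct sk_buff *handle_ing(struct sk_buff *skb)
      {
              /* RCU read lock is already held by __netif_receive_skb_core() */
              struct tcf_proto *cl = rcu_dereference_bh(skb->dev->ingress_cl_list);
              struct tcf_result cl_res;

              if (!cl)        /* no classifier attached: nothing to do */
                      return skb;

              qdisc_skb_cb(skb)->pkt_len = skb->len;

              switch (tc_classify(skb, cl, &cl_res)) {
              case TC_ACT_SHOT:
              case TC_ACT_STOLEN:
                      kfree_skb(skb);
                      return NULL;
              default:
                      return skb;
              }
      }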
    •
      net: sched: consolidate handle_ing and ing_filter · c9e99fd0
      Committed by Daniel Borkmann
      Given that quite some code has been removed from ing_filter(), we can just
      consolidate that function into handle_ing() and get rid of a few
      instructions at the same time.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c9e99fd0
    •
      net: kill sk_change_net and sk_release_kernel · affb9792
      Committed by Eric W. Biederman
      These functions are no longer needed and no longer used, so kill them.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      affb9792
    •
      netlink: Create kernel netlink sockets in the proper network namespace · 13d3078e
      Committed by Eric W. Biederman
      Utilize the new functionality of sk_alloc so that nothing needs to be
      done to suppress the reference counting on kernel sockets.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      13d3078e
    •
      net: Modify sk_alloc to not reference count the netns of kernel sockets. · 26abe143
      Committed by Eric W. Biederman
      Now that sk_alloc knows when a kernel socket is being allocated, modify
      it to not reference count the network namespace of kernel sockets.
      
      Keep track of whether a socket needs reference counting by adding a flag
      to struct sock called sk_net_refcnt.
      
      Update all of the callers of sock_create_kern to stop using
      sk_change_net and sk_release_kernel, as those hacks are no longer
      needed to avoid reference counting a kernel socket.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      26abe143
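      The heart of the change inside sk_alloc() might look roughly like this
      (a sketch of the idea, not the full diff):

      sk->sk_net_refcnt = kern ? 0 : 1;
      if (likely(sk->sk_net_refcnt))
              get_net(net);   /* user sockets pin their namespace */
      sock_net_set(sk, net);  /* kernel sockets just record it */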
    •
      net: Pass kern from net_proto_family.create to sk_alloc · 11aa9c28
      Committed by Eric W. Biederman
      In preparation for changing how struct net is refcounted
      on kernel sockets, pass the knowledge that we are creating
      a kernel socket from sock_create_kern through to sk_alloc.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      11aa9c28
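      After this change the allocator's signature carries the extra flag; roughly:

      /* kern is non-zero when called on behalf of sock_create_kern() */
      struct sock *sk_alloc(struct net *net, int family, gfp_t priority,
                            struct proto *prot, int kern);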
    •
      net: Add a struct net parameter to sock_create_kern · eeb1bd5c
      Committed by Eric W. Biederman
      This is long overdue, and is part of cleaning up how we allocate kernel
      sockets that don't reference count struct net.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      eeb1bd5c
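      The resulting signature, for reference:

      /* net: the namespace the kernel socket should be created in */
      int sock_create_kern(struct net *net, int family, int type,
                           int protocol, struct socket **res);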
    •
      tun: Utilize the normal socket network namespace refcounting. · 140e807d
      Committed by Eric W. Biederman
      There is no need for tun to do the weird network namespace refcounting.
      The existing network namespace refcounting in tfile has almost exactly
      the same lifetime.  So rewrite the code to use the struct sock network
      namespace refcounting and remove the unnecessary hand-rolled network
      namespace refcounting and the unnecessary tfile->net.
      
      This change allows the tun code to directly call sock_put, bypassing
      sock_release and making SOCK_EXTERNALLY_ALLOCATED unnecessary.
      
      Remove the now unnecessary tun_release so that if anything tries to use
      the sock_release code path the kernel will oops, and let us know about
      the bug.
      
      The macvtap code already uses its internal socket this way.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      140e807d
    •
      codel: add ce_threshold attribute · 80ba92fa
      Committed by Eric Dumazet
      For DCTCP or similar ECN based deployments on fabrics with shallow
      buffers, hosts are responsible for a good part of the buffering.
      
      This patch adds an optional ce_threshold to codel & fq_codel qdiscs,
      so that DCTCP can have feedback from queuing in the host.
      
      A DCTCP-enabled egress port simply has a queue occupancy threshold
      above which ECT packets get a CE mark.
      
      In codel language this translates to a sojourn time, so that one doesn't
      have to worry about bytes or bandwidth, but about delays.
      
      This makes the host an active participant in the health of the whole
      network.
      
      This also helps when experimenting with DCTCP in a setup without a
      DCTCP-compliant fabric.
      
      In the following example, ce_threshold is set to 1ms, and we can see from
      'ldelay xxx us' that TCP is not trying to go around the 5ms codel
      target.
      
      The queue has more capacity to absorb inelastic bursts (say from UDP
      traffic), as queues are maintained at an optimal level.
      
      lpaa23:~# ./tc -s -d qd sh dev eth1
      qdisc mq 1: dev eth1 root
       Sent 87910654696 bytes 58065331 pkt (dropped 0, overlimits 0 requeues 42961)
       backlog 3108242b 364p requeues 42961
      qdisc codel 8063: dev eth1 parent 1:1 limit 1000p target 5.0ms ce_threshold 1.0ms interval 100.0ms
       Sent 7363778701 bytes 4863809 pkt (dropped 0, overlimits 0 requeues 5503)
       rate 2348Mbit 193919pps backlog 255866b 46p requeues 5503
        count 0 lastcount 0 ldelay 1.0ms drop_next 0us
        maxpacket 68130 ecn_mark 0 drop_overlimit 0 ce_mark 72384
      qdisc codel 8064: dev eth1 parent 1:2 limit 1000p target 5.0ms ce_threshold 1.0ms interval 100.0ms
       Sent 7636486190 bytes 5043942 pkt (dropped 0, overlimits 0 requeues 5186)
       rate 2319Mbit 191538pps backlog 207418b 64p requeues 5186
        count 0 lastcount 0 ldelay 694us drop_next 0us
        maxpacket 68130 ecn_mark 0 drop_overlimit 0 ce_mark 69873
      qdisc codel 8065: dev eth1 parent 1:3 limit 1000p target 5.0ms ce_threshold 1.0ms interval 100.0ms
       Sent 11569360142 bytes 7641602 pkt (dropped 0, overlimits 0 requeues 5554)
       rate 3041Mbit 251096pps backlog 210446b 59p requeues 5554
        count 0 lastcount 0 ldelay 889us drop_next 0us
        maxpacket 68130 ecn_mark 0 drop_overlimit 0 ce_mark 37780
      ...
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Florian Westphal <fw@strlen.de>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Glenn Judd <glenn.judd@morganstanley.com>
      Cc: Nandita Dukkipati <nanditad@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      80ba92fa
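      In the dequeue path this boils down to one extra check; a sketch using the
      codel helpers (exact placement inside codel_dequeue() hedged):

      /* Once sojourn time exceeds ce_threshold, set CE on ECT packets
       * and count the mark, independently of the codel target logic. */
      if (skb && codel_time_after(now - codel_get_enqueue_time(skb),
                                  params->ce_threshold) &&
          INET_ECN_set_ce(skb))
              stats->ce_mark++;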
  4. 10 May 2015, 11 commits
    •
      pktgen: introduce xmit_mode '<start_xmit|netif_receive>' · 62f64aed
      Committed by Alexei Starovoitov
      Introduce xmit_mode 'netif_receive' for pktgen which generates the
      packets using familiar pktgen commands, but feeds them into
      netif_receive_skb() instead of ndo_start_xmit().
      
      The default mode is called 'start_xmit'.
      
      It is designed to test netif_receive_skb and ingress qdisc
      performance only. Make sure to understand how it works before
      using it for other rx benchmarking.
      
      Sample script 'pktgen.sh':
      #!/bin/bash
      function pgset() {
        local result
      
        echo $1 > $PGDEV
      
        result=`cat $PGDEV | fgrep "Result: OK:"`
        if [ "$result" = "" ]; then
          cat $PGDEV | fgrep Result:
        fi
      }
      
      [ -z "$1" ] && echo "Usage: $0 DEV" && exit 1
      ETH=$1
      
      PGDEV=/proc/net/pktgen/kpktgend_0
      pgset "rem_device_all"
      pgset "add_device $ETH"
      
      PGDEV=/proc/net/pktgen/$ETH
      pgset "xmit_mode netif_receive"
      pgset "pkt_size 60"
      pgset "dst 198.18.0.1"
      pgset "dst_mac 90:e2:ba:ff:ff:ff"
      pgset "count 10000000"
      pgset "burst 32"
      
      PGDEV=/proc/net/pktgen/pgctrl
      echo "Running... ctrl^C to stop"
      pgset "start"
      echo "Done"
      cat /proc/net/pktgen/$ETH
      
      Usage:
      $ sudo ./pktgen.sh eth2
      ...
      Result: OK: 232376(c232372+d3) usec, 10000000 (60byte,0frags)
        43033682pps 20656Mb/sec (20656167360bps) errors: 10000000
      
      Raw netif_receive_skb speed should be ~43 million packets
      per second on a 3.7GHz x86, and 'perf report' should look like:
        37.69%  kpktgend_0   [kernel.vmlinux]  [k] __netif_receive_skb_core
        25.81%  kpktgend_0   [kernel.vmlinux]  [k] kfree_skb
         7.22%  kpktgend_0   [kernel.vmlinux]  [k] ip_rcv
         5.68%  kpktgend_0   [pktgen]          [k] pktgen_thread_worker
      
      If fib_table_lookup is seen on top, it means the skb was processed
      by the stack. To benchmark netif_receive_skb only, make sure
      that the 'dst_mac' of your pktgen script is different from the
      receiving device's MAC, so the packet will be dropped by ip_rcv.
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      62f64aed
    •
      pktgen: adjust flag NO_TIMESTAMP to be more pktgen compliant · f1f00d8f
      Committed by Jesper Dangaard Brouer
      Allow flag NO_TIMESTAMP to turn timestamping on again, like other flags,
      with a negation of the flag like !NO_TIMESTAMP.
      
      Also document the option flag NO_TIMESTAMP.
      
      Fixes: afb84b62 ("pktgen: add flag NO_TIMESTAMP to disable timestamping")
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f1f00d8f
    •
      netlink: allow to listen "all" netns · 59324cf3
      Committed by Nicolas Dichtel
      More accurately, listen to all netns that have a nsid assigned in the netns
      where the netlink socket is opened.
      For this purpose, a netlink socket option is added:
      NETLINK_LISTEN_ALL_NSID. When this option is set on a netlink socket, the
      socket will receive netlink notifications from all netns that have a nsid
      assigned in the netns where the socket has been opened. The nsid is sent
      to userland via ancillary data.
      
      With this patch, a daemon needs only one socket to listen to many netns.
      is useful when the number of netns is high.
      
      Because 0 is a valid value for a nsid, the field nsid_is_set indicates
      whether the field nsid is valid or not. skb->cb is initialized to 0 on skb
      allocation, thus we are sure that we will never send a nsid of 0 to
      userland by mistake.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      59324cf3
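      From userland this is a plain setsockopt, with the nsid then arriving as
      ancillary data on each notification; a hedged sketch:

      #include <string.h>
      #include <sys/socket.h>
      #include <linux/netlink.h>

      static int listen_all_nsid(int fd)
      {
              int on = 1;

              /* subscribe to notifications from all netns that have a
               * nsid assigned in this socket's netns */
              return setsockopt(fd, SOL_NETLINK, NETLINK_LISTEN_ALL_NSID,
                                &on, sizeof(on));
      }

      static int msg_nsid(struct msghdr *msg)
      {
              struct cmsghdr *cmsg;
              int nsid = -1;  /* -1: no nsid attached */

              for (cmsg = CMSG_FIRSTHDR(msg); cmsg; cmsg = CMSG_NXTHDR(msg, cmsg))
                      if (cmsg->cmsg_level == SOL_NETLINK &&
                          cmsg->cmsg_type == NETLINK_LISTEN_ALL_NSID)
                              memcpy(&nsid, CMSG_DATA(cmsg), sizeof(nsid));
              return nsid;
      }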
    •
      netlink: rename private flags and states · cc3a572f
      Committed by Nicolas Dichtel
      These flags and states have the same prefix (NETLINK_) as the netlink
      socket options. To avoid confusion, and to be able to name a flag like a
      socket option, let's use another prefix: NETLINK_[S|F]_.
      
      Note: a comment has been fixed; it was referring to the
      NETLINK_RECV_NO_ENOBUFS socket option instead of NETLINK_NO_ENOBUFS.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cc3a572f
    •
      netns: use a spin_lock to protect nsid management · 95f38411
      Committed by Nicolas Dichtel
      Before this patch, nsids were protected by the rtnl lock. The goal of this
      patch is to be able to find a nsid without needing to hold the rtnl lock.
      
      The next patch will introduce a netlink socket option to listen to all
      netns that have a nsid assigned in the netns where the socket is opened.
      Thus, it's important to call rtnl_net_notifyid() outside the spinlock, to
      avoid a recursive lock (nsids are notified via rtnl). This was the main
      reason for the previous patch.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      95f38411
    •
      netns: notify new nsid outside __peernet2id() · 3138dbf8
      Committed by Nicolas Dichtel
      There is no functional change with this patch. It will ease the refactoring
      of the locking system that protects nsids and the support of the netlink
      socket option NETLINK_LISTEN_ALL_NSID.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3138dbf8
    •
      netns: rename peernet2id() to peernet2id_alloc() · 7a0877d4
      Committed by Nicolas Dichtel
      In a following commit, a new function will be introduced to only look up
      a nsid (no allocation if the nsid doesn't exist). To avoid confusion, the
      existing function is renamed.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7a0877d4
    •
      netns: always provide the id to rtnl_net_fill() · cab3c8ec
      Committed by Nicolas Dichtel
      The goal of this commit is to prepare the rework of the locking that
      protects nsids.
      After this patch, rtnl_net_notifyid() will no longer call __peernet2id(),
      i.e. no idr_* operation happens in this function.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cab3c8ec
    •
      netns: returns always an id in __peernet2id() · 109582af
      Committed by Nicolas Dichtel
      All callers of this function expect a nsid, not an error.
      Thus, return NETNSA_NSID_NOT_ASSIGNED in case of error so that callers
      don't have to convert the error to NETNSA_NSID_NOT_ASSIGNED.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      109582af
    •
      tcp: set SOCK_NOSPACE under memory pressure · 790ba456
      Committed by Jason Baron
      Under tcp memory pressure, calling epoll_wait() in edge triggered
      mode after -EAGAIN can result in an indefinite hang in epoll_wait(),
      even when there is sufficient memory available to continue making
      progress. The problem is that when __sk_mem_schedule() returns 0
      under memory pressure, we do not set the SOCK_NOSPACE flag in the
      tcp write paths (tcp_sendmsg() or do_tcp_sendpages()). Then, since
      SOCK_NOSPACE is used to trigger wakeups when incoming acks create
      sufficient new space in the write queue, all outstanding packets
      are acked, but we never wake up with the EPOLLOUT that we are
      expecting from epoll_wait().
      
      This issue is currently limited to epoll() when used in edge trigger
      mode, since tcp_poll() does in fact currently set SOCK_NOSPACE.
      This is sufficient for poll()/select() and epoll() in level trigger
      mode. However, in edge trigger mode, epoll() is relying on the write
      path to set SOCK_NOSPACE. EPOLL(7) says that in edge-trigger mode we
      can only call epoll_wait() after read/write return -EAGAIN. Thus, in
      the case of the socket write, we are relying on the fact that
      tcp_sendmsg()/network write paths are going to issue a wakeup for
      us at some point in the future when we get -EAGAIN.
      
      Normally, epoll() edge trigger works fine when we've exceeded the
      sk->sndbuf because in that case we do set SOCK_NOSPACE. However, when
      we return -EAGAIN from the write path because we are over the tcp memory
      limits and not because we are over the sndbuf, we are never going to get
      another wakeup.
      
      I can reproduce this issue using SO_SNDBUF, since __sk_mem_schedule()
      will return 0 (failure) more readily with SO_SNDBUF set:
      
      1) create socket and set SO_SNDBUF to N
      2) add socket as edge trigger
      3) write to socket and block in epoll on -EAGAIN
      4) cause tcp mem pressure via: echo "<small val>" > net.ipv4.tcp_mem
      
      The fix here is simply to set SOCK_NOSPACE in sk_stream_wait_memory()
      when the socket is non-blocking. Note that SOCK_NOSPACE, in addition
      to waking up outstanding waiters, is also used to expand the size of
      the sk->sndbuf. However, we will not expand it by setting it in this
      case, because tcp_should_expand_sndbuf() ensures that no expansion
      occurs when we are under tcp memory pressure.
      
      Note that we could still hang if sk->sk_wmem_queue is 0 when we get
      the -EAGAIN. In this case the SOCK_NOSPACE bit will not help, since we
      are waiting for an event that will never happen. I believe
      that this case is harder to hit (and I did not hit it in my testing),
      in that over the tcp 'soft' memory limits, we continue to guarantee a
      minimum write buffer size. Perhaps we could return -ENOSPC in this
      case, or maybe we simply issue a wakeup in this case, such that we
      keep retrying the write. Note that this case is not specific to
      epoll() ET, but rather would affect blocking sockets as well. So I
      view this patch as bringing epoll() edge-trigger into sync with the
      current poll()/select()/epoll() level trigger and blocking sockets
      behavior.
      Signed-off-by: Jason Baron <jbaron@akamai.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      790ba456
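      The fix itself is small; a sketch of the non-blocking exit path in
      sk_stream_wait_memory() (label and surrounding control flow hedged):

      if (!*timeo_p) {
              /* non-blocking: flag NOSPACE before returning -EAGAIN so
               * that an incoming ack freeing write queue space generates
               * the wakeup epoll ET is waiting for */
              set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
              err = -EAGAIN;
              goto out;
      }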
    •
      seccomp, filter: add and use bpf_prog_create_from_user from seccomp · ac67eb2c
      Committed by Daniel Borkmann
      Seccomp has always been a special candidate when it comes to preparation
      of its filters in seccomp_prepare_filter(). Due to the extra checks and
      filter rewrite it partially duplicates code and has BPF internals exposed.
      
      This patch adds a generic API inside the BPF code that seccomp can use
      and thus keep its filter preparation code minimal and better maintainable.
      The other side-effect is that now classic JITs can add seccomp support as
      well by only providing a BPF_LDX | BPF_W | BPF_ABS translation.
      
      Tested with seccomp and BPF test suites.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Nicolas Schichan <nschichan@freebox.fr>
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ac67eb2c
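      The new API amounts to one entry point plus a callback type through which
      seccomp supplies its extra classic-filter checks; signatures sketched from
      this description:

      /* callback run on the classic filter prior to conversion, letting
       * the caller (seccomp) validate/rewrite it */
      typedef int (*bpf_aux_classic_check_t)(struct sock_filter *filter,
                                             unsigned int flen);

      int bpf_prog_create_from_user(struct bpf_prog **pfp,
                                    struct sock_fprog *fprog,
                                    bpf_aux_classic_check_t trans);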