1. 08 May 2018, 1 commit
    • net: core: rework basic flow dissection helper · 72a338bc
      Authored by Paolo Abeni
      When the core networking code needs to detect the transport offset in a
      given packet and parse it explicitly, a full-blown flow_keys struct is
      used for storage.
      This patch introduces a smaller keys store, reworks the basic flow
      dissection helper to use it, and applies this new helper where possible,
      namely in skb_probe_transport_header(). The flow dissector data
      structures involved are renamed to match the new role more closely.
      
      The above gives a ~50% performance improvement in micro benchmarks around
      skb_probe_transport_header() and ~30% around eth_get_headlen(), mostly due
      to the smaller memset. A small but measurable improvement also shows up
      in macro benchmarks.
      
      v1 -> v2: use the new helper in eth_get_headlen() and skb_get_poff(),
        as per DaveM suggestion
      Suggested-by: David Miller <davem@davemloft.net>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      72a338bc
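      After this change, callers that only need the control/basic keys can use
      the slimmer struct instead of a full flow_keys. A minimal sketch of the
      intended usage, assuming the post-patch names flow_keys_basic and
      skb_flow_dissect_flow_keys_basic(); treat the exact argument list as an
      approximation of the in-tree helper:

          /* Sketch: probe the transport header with the smaller key store.
           * struct flow_keys_basic holds only the control and basic keys,
           * so the memset dominating the fast path is much smaller.
           */
          static void probe_transport_header_sketch(struct sk_buff *skb)
          {
                  struct flow_keys_basic keys;

                  if (!skb_flow_dissect_flow_keys_basic(skb, &keys, NULL,
                                                        0, 0, 0, 0))
                          return;

                  /* keys.control.thoff now holds the transport offset. */
                  skb_set_transport_header(skb, keys.control.thoff);
          }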
  2. 01 May 2018, 1 commit
  3. 27 Apr 2018, 1 commit
    • udp: add udp gso · ee80d1eb
      Authored by Willem de Bruijn
      Implement generic segmentation offload support for udp datagrams. A
      follow-up patch adds support to the protocol stack to generate such
      packets.
      
      UDP GSO is not UFO. UFO fragments a single large datagram. GSO splits
      a large payload into a number of discrete UDP datagrams.
      
      The implementation adds a GSO type SKB_GSO_UDP_L4 to differentiate it
      from UFO (SKB_GSO_UDP).
      
      IPPROTO_UDPLITE is excluded, as that protocol has no gso handler
      registered.
      
      [ Export __udp_gso_segment for ipv6. -DaveM ]
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ee80d1eb
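      The follow-up patches in this series exposed the feature to userspace as
      the UDP_SEGMENT socket option. A hedged sketch of how an application
      hands the stack one large buffer to segment (UDP_SEGMENT comes from the
      linux/udp.h UAPI header; error handling trimmed):

          #include <sys/socket.h>
          #include <netinet/in.h>
          #include <linux/udp.h>          /* UDP_SEGMENT */

          /* Send one large buffer; the stack emits 1400-byte datagrams. */
          static ssize_t send_udp_gso(int fd, const struct sockaddr_in *dst,
                                      const char *buf, size_t len)
          {
                  int gso_size = 1400;

                  /* IPPROTO_UDP doubles as the SOL_UDP level value. */
                  if (setsockopt(fd, IPPROTO_UDP, UDP_SEGMENT,
                                 &gso_size, sizeof(gso_size)))
                          return -1;

                  return sendto(fd, buf, len, 0,
                                (const struct sockaddr *)dst, sizeof(*dst));
          }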
  4. 20 Apr 2018, 1 commit
    • net: pskb_trim_rcsum() and CHECKSUM_COMPLETE are friends · 88078d98
      Authored by Eric Dumazet
      After working on IP defragmentation lately, I found that some large
      packets defeat the CHECKSUM_COMPLETE optimization because the NIC adds
      zero padding on the last (small) fragment.
      
      While removing the padding with pskb_trim_rcsum(), we set skb->ip_summed
      to CHECKSUM_NONE, forcing a full csum validation, even if all prior
      fragments had CHECKSUM_COMPLETE set.
      
      We can instead compute the checksum of the part we are trimming,
      usually smaller than the part we keep.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      88078d98
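      The trick is plain checksum arithmetic: subtract the checksum of the
      bytes being trimmed from skb->csum instead of throwing the whole sum
      away. A hedged sketch of the idea, simplified from what the in-tree
      change does:

          /* Sketch: keep CHECKSUM_COMPLETE valid across a tail trim by
           * removing the trimmed bytes' contribution from skb->csum.
           */
          static int trim_rcsum_sketch(struct sk_buff *skb, unsigned int len)
          {
                  if (skb->ip_summed == CHECKSUM_COMPLETE) {
                          __wsum delta = skb_checksum(skb, len,
                                                      skb->len - len, 0);

                          skb->csum = csum_sub(skb->csum, delta);
                  }
                  return pskb_trim(skb, len);
          }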
  5. 01 Apr 2018, 1 commit
    • inet: frags: get rid of ipfrag_skb_cb/FRAG_CB · bf663371
      Authored by Eric Dumazet
      ip_defrag uses skb->cb[] to store the fragment offset, and unfortunately
      this integer is currently in a different cache line than skb->next,
      meaning that we use two cache lines per skb when finding the insertion point.
      
      By aliasing skb->ip_defrag_offset and skb->dev, we pack all the fields
      in a single cache line and save precious memory bandwidth.
      
      Note that after the fast path added by Changli Gao in commit
      d6bebca9 ("fragment: add fast path for in-order fragments")
      this change won't help the fast path, since we still need
      to access prev->len (2nd cache line), but it will show great
      benefits when the slow path is entered, since we perform
      a linear scan of a potentially long list.
      
      Also note that this potentially long list is an attack vector; we might
      eventually consider using an rb-tree there as well.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bf663371
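      The aliasing is just a union inside struct sk_buff: while an skb sits in
      a fragment queue, skb->dev is unused, so the defrag offset can reuse its
      word and share a cache line with skb->next/prev. A hedged, simplified
      rendering of the layout:

          /* Sketch of the field aliasing, simplified from struct sk_buff. */
          struct sk_buff_sketch {
                  struct sk_buff_sketch   *next;
                  struct sk_buff_sketch   *prev;
                  union {
                          struct net_device       *dev;
                          int                     ip_defrag_offset;
                  };
                  /* ... remainder of sk_buff ... */
          };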
  6. 05 Mar 2018, 2 commits
  7. 04 Mar 2018, 1 commit
  8. 17 Feb 2018, 2 commits
  9. 01 Feb 2018, 1 commit
  10. 06 Dec 2017, 1 commit
  11. 30 Nov 2017, 1 commit
  12. 28 Nov 2017, 1 commit
  13. 24 Nov 2017, 1 commit
    • net: accept UFO datagrams from tuntap and packet · 0c19f846
      Authored by Willem de Bruijn
      Tuntap and similar devices can inject GSO packets. Accept type
      VIRTIO_NET_HDR_GSO_UDP, even though the kernel no longer generates UFO
      natively.
      
      Processes are expected to use feature negotiation such as TUNSETOFFLOAD
      to detect supported offload types and refrain from injecting other
      packets. This process breaks down with live migration: guest kernels
      do not renegotiate flags, so destination hosts need to expose all
      features that the source host does.
      
      Partially revert the UFO removal from 182e0b6b~1..d9d30adf.
      This patch introduces nearly(*) no new code to simplify verification.
      It brings back verbatim tuntap UFO negotiation, VIRTIO_NET_HDR_GSO_UDP
      insertion and software UFO segmentation.
      
      It does not reinstate protocol stack support, hardware offload
      (NETIF_F_UFO), SKB_GSO_UDP tunneling in SKB_GSO_SOFTWARE or reception
      of VIRTIO_NET_HDR_GSO_UDP packets in tuntap.
      
      To support SKB_GSO_UDP reappearing in the stack, also reinstate
      logic in act_csum and openvswitch. Achieve equivalence with v4.13 HEAD
      by squashing in commit 93991221 ("net: skb_needs_check() removes
      CHECKSUM_UNNECESSARY check for tx.") and reverting commit 8d63bee6
      ("net: avoid skb_warn_bad_offload false positives on UFO").
      
      (*) To avoid having to bring back skb_shinfo(skb)->ip6_frag_id,
      ipv6_proxy_select_ident is changed to return a __be32 and this is
      assigned directly to the frag_hdr. Also, SKB_GSO_UDP is inserted
      at the end of the enum to minimize code churn.
      
      Tested
        Booted a v4.13 guest kernel with QEMU. On a host kernel before this
        patch `ethtool -k eth0` shows UFO disabled. After the patch, it is
        enabled, same as on a v4.13 host kernel.
      
        A UFO packet sent from the guest appears on the tap device:
          host:
            nc -u -l -p 8000 &
            tcpdump -n -i tap0
      
          guest:
            dd if=/dev/zero of=payload.txt bs=1 count=2000
            nc -u 192.16.1.1 8000 < payload.txt
      
        Direct tap to tap transmission of VIRTIO_NET_HDR_GSO_UDP succeeds,
        packets arriving fragmented:
      
          ./with_tap_pair.sh ./tap_send_ufo tap0 tap1
          (from https://github.com/wdebruij/kerneltools/tree/master/tests)
      
      Changes
        v1 -> v2
          - simplified set_offload change (review comment)
          - documented test procedure
      
      Link: http://lkml.kernel.org/r/<CAF=yD-LuUeDuL9YWPJD9ykOZ0QCjNeznPDr6whqZ9NGMNF12Mw@mail.gmail.com>
      Fixes: fb652fdf ("macvlan/macvtap: Remove NETIF_F_UFO advertisement.")
      Reported-by: Michal Kubecek <mkubecek@suse.cz>
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Acked-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0c19f846
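      The core of the reinstated path is the translation from the
      guest-supplied virtio header GSO type to an skb GSO type when a packet
      enters via tuntap; roughly, and simplified from virtio_net_hdr_to_skb():

          /* Sketch: map the virtio GSO type onto skb_shared_info gso_type,
           * including the VIRTIO_NET_HDR_GSO_UDP case this patch brings back.
           */
          static int vnet_gso_type_sketch(const struct virtio_net_hdr *hdr,
                                          unsigned short *gso_type)
          {
                  switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
                  case VIRTIO_NET_HDR_GSO_TCPV4:
                          *gso_type = SKB_GSO_TCPV4;
                          break;
                  case VIRTIO_NET_HDR_GSO_TCPV6:
                          *gso_type = SKB_GSO_TCPV6;
                          break;
                  case VIRTIO_NET_HDR_GSO_UDP:    /* reinstated here */
                          *gso_type = SKB_GSO_UDP;
                          break;
                  default:
                          return -EINVAL;
                  }
                  return 0;
          }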
  14. 16 Nov 2017, 2 commits
    • mm: remove __GFP_COLD · 453f85d4
      Authored by Mel Gorman
      As the page free path makes no distinction between cache hot and cold
      pages, there is no real useful ordering of pages in the free list that
      allocation requests can take advantage of.  Judging from the users of
      __GFP_COLD, it is likely that a number of them are the result of copying
      other sites instead of actually measuring the impact.  Remove the
      __GFP_COLD parameter which simplifies a number of paths in the page
      allocator.
      
      This is potentially controversial but bear in mind that the size of the
      per-cpu pagelists versus modern cache sizes means that the whole per-cpu
      list can often fit in the L3 cache.  Hence, there is only a potential
      benefit for microbenchmarks that alloc/free pages in a tight loop.  It's
      even worse when THP is taken into account which has little or no chance
      of getting a cache-hot page as the per-cpu list is bypassed and the
      zeroing of multiple pages will thrash the cache anyway.
      
      The truncate microbenchmarks are not shown as this patch affects the
      allocation path and not the free path.  A page fault microbenchmark was
      tested but it showed no significant difference, which is not surprising
      given that the __GFP_COLD branches are a minuscule percentage of the
      fault path.
      
      Link: http://lkml.kernel.org/r/20171018075952.10627-9-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      453f85d4
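      For call sites the conversion is purely mechanical: drop the flag from
      the gfp mask. A hedged before/after sketch of a typical networking
      allocation:

          #include <linux/gfp.h>

          /* Sketch: the only change at a call site is the vanished hint. */
          static struct page *rx_page_alloc_sketch(void)
          {
                  /* was: return alloc_page(GFP_ATOMIC | __GFP_COLD); */
                  return alloc_page(GFP_ATOMIC);
          }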
    • kmemcheck: remove annotations · 49502766
      Authored by Levin, Alexander (Sasha Levin)
      Patch series "kmemcheck: kill kmemcheck", v2.
      
      As discussed at LSF/MM, kill kmemcheck.
      
      KASan is a replacement that is able to work without the limitations of
      kmemcheck (single CPU, slow).  KASan is already upstream.
      
      We are also not aware of any users of kmemcheck (or users who don't
      consider KASan as a suitable replacement).
      
      The only objection was that, since KASAN wasn't supported by all GCC
      versions provided by distros at that time, we should hold off for 2
      years and try again.
      
      Now that 2 years have passed, and all distros provide gcc that supports
      KASAN, kill kmemcheck again for the very same reasons.
      
      This patch (of 4):
      
      Remove kmemcheck annotations, and calls to kmemcheck from the kernel.
      
      [alexander.levin@verizon.com: correctly remove kmemcheck call from dma_map_sg_attrs]
        Link: http://lkml.kernel.org/r/20171012192151.26531-1-alexander.levin@verizon.com
      Link: http://lkml.kernel.org/r/20171007030159.22241-2-alexander.levin@verizon.com
      Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Hansen <devtimhansen@gmail.com>
      Cc: Vegard Nossum <vegardno@ifi.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      49502766
  15. 11 Nov 2017, 1 commit
  16. 04 Nov 2017, 1 commit
  17. 07 Oct 2017, 1 commit
  18. 06 Oct 2017, 1 commit
    • tcp: new list for sent but unacked skbs for RACK recovery · e2080072
      Authored by Eric Dumazet
      This patch adds a new queue (list) that tracks the sent but not yet
      acked or SACKed skbs for a TCP connection. The list is chronologically
      ordered by skb->skb_mstamp (the head is the oldest sent skb).
      
      This list will be used to optimize TCP RACK recovery, which checks
      an skb's timestamp to judge if it has been lost and needs to be
      retransmitted. Since the TCP write queue is ordered by sequence instead
      of send time, RACK has to scan over the write queue to catch all
      eligible packets to detect lost retransmissions, and it iterates through
      SACKed skbs repeatedly.
      
      Special care for rare events:
      1. TCP repair fakes skb transmission, so the send queue needs adjusting
      2. SACK reneging would require re-inserting SACKed skbs into the
         send queue. For now I believe it's not worth the complexity to
         make RACK work perfectly on SACK reneging, so we do nothing here.
      3. Fast Open: currently for non-TFO, send-queue correctly queues
         the pure SYN packet. For TFO which queues a pure SYN and
         then a data packet, send-queue only queues the data packet but
         not the pure SYN due to the structure of TFO code. This is okay
         because the SYN receiver would never respond with a SACK on a
         missing SYN (i.e. SYN is never fast-retransmitted by SACK/RACK).
      
      In order to not grow sk_buff, we use a union for the new list and the
      _skb_refdst/destructor fields. This is a bit complicated because
      we need to make sure _skb_refdst and destructor are properly zeroed
      before the skb is cloned/copied at transmit, and before being freed.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e2080072
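      The union mentioned above overlays the time-ordered list anchor on
      fields that are dead while the skb waits on the send queue; a hedged,
      simplified rendering of the sk_buff excerpt:

          /* Sketch: an skb on the TCP send queue owns no dst and has no
           * destructor, so those words can host the list_head that keeps
           * skbs in send-time order for RACK.
           */
          union {
                  struct {
                          unsigned long   _skb_refdst;
                          void            (*destructor)(struct sk_buff *skb);
                  };
                  struct list_head        tcp_tsorted_anchor;
          };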
  19. 04 Oct 2017, 1 commit
    • skbuff: Add the offload_mr_fwd_mark field · abf4bb6b
      Authored by Yotam Gigi
      Similarly to the offload_fwd_mark field, the offload_mr_fwd_mark field is
      used to allow partial offloading of MFC multicast routes.
      
      Switchdev drivers can offload MFC multicast routes to the hardware by
      registering to the FIB notification chain. When one of the route output
      interfaces is not offload-able, i.e. has a different parent ID, the route
      cannot be fully offloaded by the hardware. Examples of non-offload-able
      devices are a management NIC, a dummy device, a pimreg device, etc.
      
      Similar problem exists in the bridge module, as one bridge can hold
      interfaces with different parent IDs. At the bridge, the problem is solved
      by the offload_fwd_mark skb field.
      
      Currently, when a route cannot go through full offload, the only solution
      for a switchdev driver is not to offload it at all and let the packet go
      through the slow path.
      
      Using the offload_mr_fwd_mark field, a driver can indicate that a packet
      was already forwarded by hardware to all the devices with the same parent
      ID as the input device. Further patches in this patch-set are going to
      enhance ipmr to skip multicast forwarding to devices with the same parent
      ID if a packet is marked with that field.
      
      The reason why the already existing "offload_fwd_mark" bit cannot be used
      is that a switchdev driver would want to make the distinction between a
      packet that has already gone through L2 forwarding but did not go through
      multicast forwarding, and a packet that has already gone through both L2
      and multicast forwarding.
      
      For example: when a packet is ingressing from a switchport enslaved to a
      bridge, which is configured with multicast forwarding, the following
      scenarios are possible:
       - The packet can be trapped to the CPU due to an exception during
         multicast forwarding (for example, an MTU error). In that case, it had
         already gone through L2 forwarding in the hardware, thus a switchdev
         want to set the skb->offload_fwd_mark and not the
         skb->offload_mr_fwd_mark.
       - The packet can also be trapped due to a pimreg/dummy device used as one
         of the output interfaces. In that case, it can go through both L2 and
         (partial) multicast forwarding inside the hardware, thus a switchdev
         driver would want to set both the skb->offload_fwd_mark and
         skb->offload_mr_fwd_mark.
      Signed-off-by: Yotam Gigi <yotamg@mellanox.com>
      Reviewed-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      abf4bb6b
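      The consumer side, added by later patches in the set, boils down to one
      test in the ipmr forwarding path. A hedged sketch of the idea;
      dev_shares_parent_id() is an illustrative placeholder, not an in-tree
      helper:

          /* Sketch: skip software multicast forwarding to an output device
           * that shares a parent (switch) ID with the ingress device, since
           * the mark says hardware already delivered the packet there.
           */
          static bool mr_fwd_already_offloaded(const struct sk_buff *skb,
                                               const struct net_device *out)
          {
                  return skb->offload_mr_fwd_mark &&
                         dev_shares_parent_id(skb->dev, out);
          }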
  20. 27 Sep 2017, 1 commit
    • bpf: add meta pointer for direct access · de8f3a83
      Authored by Daniel Borkmann
      This work enables generic transfer of metadata from XDP into skb. The
      basic idea is that we can make use of the fact that the resulting skb
      must be linear and already comes with a larger headroom for supporting
      bpf_xdp_adjust_head(), which mangles xdp->data. Here, we base our work
      on a similar principle and introduce a small helper bpf_xdp_adjust_meta()
      for adjusting a new pointer called xdp->data_meta. Thus, the packet has
      a flexible and programmable room for meta data, followed by the actual
      packet data. struct xdp_buff is therefore laid out such that we first
      point to data_hard_start, then data_meta directly prepended to data,
      followed by data_end marking the end of the packet. bpf_xdp_adjust_head()
      takes into account whether we have meta data already prepended and, if
      so, memmove()s this along with the given offset, provided there's enough
      room.
      
      xdp->data_meta is optional and programs are not required to use it. The
      rationale is that when we process the packet in XDP (e.g. as DoS filter),
      we can push further meta data along with it for the XDP_PASS case, and
      give the guarantee that a clsact ingress BPF program on the same device
      can pick this up for further post-processing. Since we work with skb
      there, we can also set skb->mark, skb->priority or other skb meta data
      out of BPF; having this scratch space generic and programmable
      allows for more flexibility than defining a direct 1:1 transfer of
      potentially new XDP members into skb (it's also more efficient as we
      don't need to initialize/handle each such new member). The facility
      also works together with GRO aggregation. The scratch space at the head
      of the packet can be a multiple of 4 bytes, up to 32 bytes large. Drivers
      not yet supporting xdp->data_meta can simply be set up with xdp->data_meta
      equal to xdp->data + 1, as bpf_xdp_adjust_meta() will detect this and bail
      out, such that the subsequent match against xdp->data for later access is
      guaranteed to fail.
      
      The verifier treats xdp->data_meta/xdp->data the same way as we treat
      xdp->data/xdp->data_end pointer comparisons. The requirement for doing
      the compare against xdp->data is that it hasn't been modified from its
      original address we got from ctx access. It may have a range marking
      already from prior successful xdp->data/xdp->data_end pointer comparisons
      though.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      de8f3a83
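      A minimal XDP program using the new helper might look like the sketch
      below: reserve four bytes of metadata, bounds-check the pointers for the
      verifier, and stash a value that a clsact program can later read:

          #include <linux/bpf.h>
          #include <bpf/bpf_helpers.h>

          SEC("xdp")
          int xdp_tag_packet(struct xdp_md *ctx)
          {
                  /* Grow the metadata area by 4 bytes (negative delta). */
                  if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(__u32)))
                          return XDP_PASS;  /* driver may lack data_meta */

                  void *data      = (void *)(long)ctx->data;
                  void *data_meta = (void *)(long)ctx->data_meta;
                  __u32 *tag = data_meta;

                  /* Verifier-mandated check: meta must end before data. */
                  if ((void *)(tag + 1) > data)
                          return XDP_PASS;

                  *tag = 0x42;    /* scratch value for clsact to read */
                  return XDP_PASS;
          }

          char _license[] SEC("license") = "GPL";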
  21. 23 Sep 2017, 1 commit
  22. 20 Sep 2017, 1 commit
    • net: sk_buff rbnode reorg · bffa72cf
      Authored by Eric Dumazet
      skb->rbnode shares space with skb->next, skb->prev and skb->tstamp
      
      Current uses (TCP receive ofo queue and netem) need to save/restore
      tstamp, while skb->dev is either NULL (TCP) or a constant for a given
      queue (netem).
      
      Since we plan on using an RB tree for the TCP retransmit queue to speed
      up SACK processing with large BDP, this patch exchanges skb->dev and
      skb->tstamp.
      
      This saves some overhead in both TCP and netem.
      
      v2: removes the swtstamp field from struct tcp_skb_cb
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Soheil Hassas Yeganeh <soheil@google.com>
      Cc: Wei Wang <weiwan@google.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      bffa72cf
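      After the swap, rbnode overlays next/prev/dev rather than
      next/prev/tstamp, so rbtree users (the TCP ofo queue, netem) keep
      skb->tstamp intact and no longer save/restore it. A hedged, simplified
      excerpt of the resulting layout:

          /* Sketch: rbnode shares storage with the list pointers and dev;
           * tstamp now lives outside the union.
           */
          union {
                  struct {
                          struct sk_buff          *next;
                          struct sk_buff          *prev;
                          struct net_device       *dev;
                  };
                  struct rb_node          rbnode;
          };
          ktime_t                 tstamp;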
  23. 08 Sep 2017, 1 commit
  24. 02 Sep 2017, 1 commit
  25. 24 Aug 2017, 1 commit
  26. 10 Aug 2017, 1 commit
  27. 04 Aug 2017, 5 commits
    • sock: ulimit on MSG_ZEROCOPY pages · a91dbff5
      Authored by Willem de Bruijn
      Bound the number of pages that a user may pin.
      
      Follow the lead of perf tools and maintain a per-user bound on
      memory-locked pages; see commit 789f90fc ("perf_counter: per user mlock
      gift").
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a91dbff5
    • sock: MSG_ZEROCOPY notification coalescing · 4ab6c99d
      Authored by Willem de Bruijn
      In the simple case, each sendmsg() call generates data and eventually
      a zerocopy ready notification N, where N indicates the Nth successful
      invocation of sendmsg() with the MSG_ZEROCOPY flag on this socket.
      
      TCP and corked sockets can cause send() calls to append new data to an
      existing sk_buff and, thus, ubuf_info. In that case the notification
      must hold a range. Modify ubuf_info to store an inclusive range [N..N+m]
      and add skb_zerocopy_realloc() to optionally extend an existing range.
      
      Also coalesce notifications in this common case: if a notification
      [1, 1] is about to be queued while [0, 0] is the queue tail, just extend
      the tail of the queue to read [0, 1].
      
      Coalescing is limited to a few TSO frames worth of data to bound
      notification latency.
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4ab6c99d
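      On the userspace side, a coalesced notification arrives as one
      sock_extended_err whose ee_info/ee_data fields carry the inclusive
      [lo, hi] range; a hedged sketch of draining the error queue:

          #include <linux/errqueue.h>
          #include <stdio.h>
          #include <sys/socket.h>

          /* Read one (possibly coalesced) zerocopy completion range. */
          static void read_zerocopy_notification(int fd)
          {
                  char control[128];
                  struct msghdr msg = {
                          .msg_control    = control,
                          .msg_controllen = sizeof(control),
                  };
                  struct cmsghdr *cm;
                  struct sock_extended_err *serr;

                  if (recvmsg(fd, &msg, MSG_ERRQUEUE) == -1)
                          return;

                  for (cm = CMSG_FIRSTHDR(&msg); cm;
                       cm = CMSG_NXTHDR(&msg, cm)) {
                          serr = (struct sock_extended_err *)CMSG_DATA(cm);
                          if (serr->ee_origin != SO_EE_ORIGIN_ZEROCOPY)
                                  continue;
                          /* Completed sends whose buffers may be reused. */
                          printf("zerocopy done: [%u, %u]\n",
                                 serr->ee_info, serr->ee_data);
                  }
          }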
    • sock: enable MSG_ZEROCOPY · 1f8b977a
      Authored by Willem de Bruijn
      Prepare the datapath for refcounted ubuf_info. Clone ubuf_info with
      skb_zerocopy_clone() wherever needed due to skb split, merge, resize
      or clone.
      
      Split skb_orphan_frags into two variants. The split, merge, .. paths
      support reference counted zerocopy buffers, so do not do a deep copy.
      Add skb_orphan_frags_rx for paths that may loop packets to receive
      sockets. That is not allowed, as it may cause unbounded latency.
      Deep copy all zerocopy buffers, ref-counted or not, in this path.
      
      The exact locations to modify were chosen by exhaustively searching
      through all code that might modify skb_frag references and/or the
      SKBTX_DEV_ZEROCOPY tx_flags bit.
      
      The changes err on the safe side, in two ways.
      
      (1) legacy ubuf_info paths virtio and tap are not modified. They keep
          a 1:1 ubuf_info to sk_buff relationship. Calls to skb_orphan_frags
          still call skb_copy_ubufs and thus copy frags in this case.
      
      (2) not all copies deep in the stack are addressed yet. skb_shift,
          skb_split and skb_try_coalesce can be refined to avoid copying.
          These are not in the hot path and this patch is hairy enough as
          is, so that is left for future refinement.
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1f8b977a
    • sock: add MSG_ZEROCOPY · 52267790
      Authored by Willem de Bruijn
      The kernel supports zerocopy sendmsg in virtio and tap. Expand the
      infrastructure to support other socket types. Introduce a completion
      notification channel over the socket error queue. Notifications are
      returned with ee_origin SO_EE_ORIGIN_ZEROCOPY. ee_errno is 0 to avoid
      blocking the send/recv path on receiving notifications.
      
      Add reference counting, to support the skb split, merge, resize and
      clone operations possible with SOCK_STREAM and other socket types.
      
      The patch does not yet modify any datapaths.
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      52267790
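      From userspace the feature is opt-in per socket and again per send call;
      a minimal hedged sketch (constants copied from the kernel UAPI headers
      in case libc is older):

          #include <sys/socket.h>

          #ifndef SO_ZEROCOPY
          #define SO_ZEROCOPY 60          /* asm-generic/socket.h */
          #endif
          #ifndef MSG_ZEROCOPY
          #define MSG_ZEROCOPY 0x4000000  /* linux/socket.h */
          #endif

          /* Opt in once, then request zerocopy per send. The buffer must
           * stay untouched until the completion arrives on the errqueue.
           */
          static ssize_t send_zerocopy(int fd, const void *buf, size_t len)
          {
                  int one = 1;

                  if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY,
                                 &one, sizeof(one)))
                          return -1;

                  return send(fd, buf, len, MSG_ZEROCOPY);
          }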
    • sock: skb_copy_ubufs support for compound pages · 3ece7826
      Authored by Willem de Bruijn
      Refine skb_copy_ubufs to support compound pages. With upcoming TCP
      zerocopy sendmsg, such fragments may appear.
      
      The existing code replaces each page one for one. Splitting each
      compound page into an independent number of regular pages can result
      in exceeding the MAX_SKB_FRAGS limit if data is not exactly page aligned.
      
      Instead, fill all destination pages but the last to PAGE_SIZE.
      Split the existing alloc + copy loop into separate stages:
      1. compute the byte length and the minimum number of pages to store it
      2. allocate
      3. copy, filling each page except the last to PAGE_SIZE bytes
      4. update skb frag array
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3ece7826
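      Stage 1 is simple arithmetic once the total frag byte length is known;
      a hedged sketch:

          /* Sketch of stage 1: total frag bytes, then the minimum page
           * count when every destination page but the last is filled to
           * PAGE_SIZE.
           */
          static unsigned int ubuf_pages_needed(const struct sk_buff *skb)
          {
                  unsigned int i, bytes = 0;

                  for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
                          bytes += skb_frag_size(&skb_shinfo(skb)->frags[i]);

                  return DIV_ROUND_UP(bytes, PAGE_SIZE);
          }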
  28. 03 Aug 2017, 1 commit
  29. 02 Aug 2017, 2 commits
    • net: add skb_frag_foreach_page and use with kmap_atomic · c613c209
      Authored by Willem de Bruijn
      Skb frags may contain compound pages. Various operations map frags
      temporarily using kmap_atomic, but this function works on single
      pages, not whole compound pages. The distinction is only relevant
      for high mem pages that require temporary mappings.
      
      Introduce a looping mechanism that, for compound highmem pages, maps
      one page at a time; behavior on other pages is unchanged.
      Use the loop in the kmap_atomic callers in net/core/skbuff.c.
      
      Verified by triggering skb_copy_bits with
      
          tcpdump -n -c 100 -i ${DEV} -w /dev/null &
          netperf -t TCP_STREAM -H ${HOST}
      
        and by triggering __skb_checksum with
      
          ethtool -K ${DEV} tx off
      
        Repeated the tests with looping on a non-highmem platform
        (x86_64) by making skb_frag_must_loop always return true.
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c613c209
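      The new helper is a macro that walks a frag one single page at a time;
      a hedged sketch of a copy loop in the style of the converted
      skb_copy_bits() (the frag->page_offset access matches skb_frag_t of
      that era):

          /* Sketch: copy 'len' bytes out of a possibly compound, possibly
           * highmem frag, mapping only one single page at a time.
           */
          static void copy_frag_sketch(const skb_frag_t *frag, char *to,
                                       unsigned int off, unsigned int len)
          {
                  struct page *p;
                  unsigned int p_off, p_len, copied;
                  void *vaddr;

                  skb_frag_foreach_page(frag, frag->page_offset + off, len,
                                        p, p_off, p_len, copied) {
                          vaddr = kmap_atomic(p);
                          memcpy(to + copied, vaddr + p_off, p_len);
                          kunmap_atomic(vaddr);
                  }
          }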
    • skbuff: Function to send an skbuf on a socket · 20bf50de
      Authored by Tom Herbert
      Add skb_send_sock to send an skbuff on a socket within the kernel.
      Arguments include an offset so that an skbuff might be sent in multiple
      calls (e.g. when the send buffer limit is hit).
      Signed-off-by: Tom Herbert <tom@quantonium.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      20bf50de
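      Callers track how much of the skb has already gone out and resume from
      that offset; a hedged sketch, assuming the signature
      int skb_send_sock(struct sock *sk, struct sk_buff *skb, int offset,
      int len) added here:

          /* Sketch: push an entire skb to a kernel socket, resuming after
           * partial sends (e.g. when the send buffer limit is hit).
           */
          static int send_whole_skb_sketch(struct sock *sk,
                                           struct sk_buff *skb)
          {
                  int offset = 0, remaining = skb->len;

                  while (remaining > 0) {
                          int sent = skb_send_sock(sk, skb, offset,
                                                   remaining);

                          if (sent < 0)
                                  return sent;    /* caller may retry */
                          offset += sent;
                          remaining -= sent;
                  }
                  return 0;
          }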
  30. 18 Jul 2017, 2 commits
  31. 03 Jul 2017, 1 commit