1. 20 Feb 2016 (6 commits)
  2. 19 Feb 2016 (9 commits)
    • qed: Lay infrastructure for vlan filtering offload · 3f9b4a69
      Authored by Yuval Mintz
      Today, interfaces work in vlan-promisc mode, but once vlan
      filtering offload is supported, we'll need a method to control it
      directly [e.g., when setting the device to PROMISC, or when
      running out of vlan credits].
      
      This adds the necessary API for the L2 client to manually choose whether
      to accept all vlans or only those for which filters were configured.
      Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
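      The choice the new API exposes can be sketched as a tiny model. All names below (vlan_accept_mode, l2_vport, vport_update_vlan_mode) are illustrative stand-ins, not the actual qed driver symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model only: the real qed API names differ. */
enum vlan_accept_mode {
	VLAN_ACCEPT_FILTERED,	/* only vlans with a configured filter */
	VLAN_ACCEPT_ANY,	/* vlan-promiscuous */
};

struct l2_vport {
	enum vlan_accept_mode mode;
	int vlan_credits;	/* remaining hardware filter slots */
};

/* Fall back to accepting any vlan when promisc is requested or when
 * no filter credits remain -- the two cases the commit message names. */
static void vport_update_vlan_mode(struct l2_vport *vp, bool want_promisc)
{
	if (want_promisc || vp->vlan_credits == 0)
		vp->mode = VLAN_ACCEPT_ANY;
	else
		vp->mode = VLAN_ACCEPT_FILTERED;
}
```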
    • net: Optimize local checksum offload · 9e74a6da
      Authored by Alexander Duyck
      This patch takes advantage of several assumptions we can make about the
      headers of the frame in order to reduce overall processing overhead for
      computing the outer header checksum.
      
      First we can assume the entire header is in the region pointed to by
      skb->head as this is what csum_start is based on.
      
      Second, as a result of our first assumption, we can just call csum_partial
      instead of making a call to skb_checksum which would end up having to
      configure things so that we could walk through the frags list.
      Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
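      The point of both assumptions is that a plain linear sum suffices. Below is a toy, portable analogue of csum_partial()/csum_fold() over a contiguous buffer; the kernel's versions are arch-optimized, and these names and the byte-at-a-time form are just for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sum the 16-bit big-endian words of a contiguous buffer into a 32-bit
 * accumulator.  This only works on linear data -- which is exactly why
 * the patch can use it: csum_start guarantees the header lives in
 * skb->head, so no frag-list walking (the expensive part of
 * skb_checksum) is needed. */
static uint32_t csum_partial_linear(const uint8_t *buf, size_t len, uint32_t sum)
{
	while (len > 1) {
		sum += (uint32_t)(buf[0] << 8 | buf[1]);
		buf += 2;
		len -= 2;
	}
	if (len)			/* trailing odd byte */
		sum += (uint32_t)(buf[0] << 8);
	return sum;
}

/* Fold the accumulator down to the final 16-bit ones'-complement checksum. */
static uint16_t csum_fold16(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

For example, the classic 20-byte IPv4 header 45 00 00 3c 1c 46 40 00 40 06 00 00 ac 10 0a 63 ac 10 0a 0c (checksum field zeroed) folds to 0xb1e6.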
    • ipv6: Annotate change of locking mechanism for np->opt · e550785c
      Authored by Benjamin Poirier
      This follows up commit 45f6fad8 ("ipv6: add complete rcu protection around
      np->opt"), which added mixed rcu/refcount protection to np->opt.
      
      Given the current implementation of rcu_pointer_handoff(), this has no
      effect at runtime.
      Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • iptunnel: scrub packet in iptunnel_pull_header · 7f290c94
      Authored by Jiri Benc
      Part of skb_scrub_packet was open-coded in iptunnel_pull_header. Let it call
      skb_scrub_packet directly instead.
      Signed-off-by: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vxlan: tun_id is 64bit, not 32bit · 07dabf20
      Authored by Jiri Benc
      The tun_id field in struct ip_tunnel_key is __be64, not __be32. We need to
      convert the vni to tun_id correctly.
      
      Fixes: 54bfd872 ("vxlan: keep flags and vni in network byte order")
      Reported-by: Paolo Abeni <pabeni@redhat.com>
      Tested-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Jiri Benc <jbenc@redhat.com>
      Acked-by: Thadeu Lima de Souza Cascardo <cascardo@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
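      The pitfall here is that widening a 32-bit network-byte-order value into a 64-bit one is not a plain integer cast on little-endian hosts: the wire bytes must land in the low-order half of the 64-bit field. A portable byte-level sketch follows; the kernel instead uses an endian-conditional shift, and the helper name below is made up:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Place the four wire bytes of a big-endian 32-bit key into the last
 * four bytes of a big-endian 64-bit field, i.e. its low-order 32 bits.
 * Working on bytes keeps this correct on any host endianness; casting
 * the integer directly would only be right on big-endian machines. */
static uint64_t key32_to_tun_id64(uint32_t be32_key)
{
	uint64_t be64 = 0;

	memcpy((uint8_t *)&be64 + 4, &be32_key, sizeof(be32_key));
	return be64;
}
```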
    • nfnetlink: Revert "nfnetlink: add support for memory mapped netlink" · c5b0db32
      Authored by Florian Westphal
      This reverts commit 3ab1f683 ("nfnetlink: add support for memory mapped
      netlink").
      
      Like previous commits in the series, remove wrappers that are not needed
      after mmapped netlink removal.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • nfnetlink: remove nfnetlink_alloc_skb · 905f0a73
      Authored by Florian Westphal
      Following mmapped netlink removal this code can be simplified by
      removing the alloc wrapper.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Revert "genl: Add genlmsg_new_unicast() for unicast message allocation" · 263ea090
      Authored by Florian Westphal
      This reverts commit bb9b18fb ("genl: Add genlmsg_new_unicast() for
      unicast message allocation").
      
      Nothing wrong with it; it's no longer needed since it was only for
      mmapped netlink support.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netlink: remove mmapped netlink support · d1b4c689
      Authored by Florian Westphal
      mmapped netlink has a number of unresolved issues:
      
      - TX zerocopy support had to be disabled more than a year ago via
        commit 4682a035 ("netlink: Always copy on mmap TX.")
        because the content of the mmapped area can change after netlink
        attribute validation but before message processing.
      
      - RX support was implemented mainly to speed up nfqueue dumping packet
        payload to userspace.  However, since commit ae08ce00
        ("netfilter: nfnetlink_queue: zero copy support") we avoid one copy
        with the socket-based interface too (via the skb_zerocopy helper).
      
      The other problem is that skbs attached to a mmapped netlink socket
      behave differently from normal skbs:
      
      - they don't have a shinfo area, so all functions that use skb_shinfo()
      (e.g. skb_clone) cannot be used.
      
      - reserving headroom prevents userspace from seeing the content, as it
      expects the message to start at skb->head.
      See for instance
      commit aa3a0220 ("netlink: not trim skb for mmaped socket when dump").
      
      - skbs handed e.g. to netlink_ack must have a non-NULL skb->sk, else we
      crash because it needs the sk to check if a tx ring is attached.
      
      This is not obvious and leads to non-intuitive bug fixes, such as commit
      7c7bdf35 ("netfilter: nfnetlink: use original skbuff when acking batches").
      
      mmapped netlink also didn't play nicely with the skb_zerocopy helper
      used by nfqueue and openvswitch.  Daniel Borkmann fixed this via
      commit 6bb0fef4 ("netlink, mmap: fix edge-case leakages in nf queue
      zero-copy"), but at the cost of also needing to provide the remaining
      length to the allocation function.
      
      nfqueue also has problems when used with mmapped rx netlink:
      - mmapped netlink doesn't allow use of nfqueue batch verdict messages.
        The problem is that in the mmap case, allocation time also determines
        the order in which frames are seen by userspace (A allocating before
        B means A is located in an earlier ring slot, but B might still get
        a lower sequence number than A, since the seqno is decided later).
        To fix this we would need to extend the spinlocked region to also
        cover allocation and message setup, which isn't desirable.
      - nfqueue can now be configured to queue large (GSO) skbs to userspace.
        Queuing GSO packets is faster than forcing software segmentation in
        the kernel, so this is a desirable option.  However, with a mmap-based
        ring one has to use 64kb per ring-slot element, or else mmap falls
        back to the socket path (NL_MMAP_STATUS_COPY) for all large packets.
      
      To use the mmap interface, userspace not only has to probe for mmap netlink
      support, it also has to implement a recv/socket receive path in order to
      handle messages that exceed the size of an rx ring element.
      
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Ken-ichirou MATSUZAWA <chamaken@gmail.com>
      Cc: Pablo Neira Ayuso <pablo@netfilter.org>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
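      With the ring gone, userspace always takes the plain socket path and sizes its buffers with the standard NLMSG_* arithmetic. Below is a portable restatement of that arithmetic, mirroring the documented <linux/netlink.h> macros but renamed and reimplemented here so the sketch stands alone:

```c
#include <assert.h>

/* 4-byte alignment and a 16-byte nlmsghdr, as documented for netlink.
 * The NL_* names are local stand-ins for the NLMSG_* macros. */
#define NL_ALIGNTO	4u
#define NL_ALIGN(len)	(((len) + NL_ALIGNTO - 1) & ~(NL_ALIGNTO - 1))
#define NL_HDRLEN	NL_ALIGN(16u)		/* sizeof(struct nlmsghdr) */
#define NL_LENGTH(pay)	((pay) + NL_HDRLEN)	/* header + payload */
#define NL_SPACE(pay)	NL_ALIGN(NL_LENGTH(pay))/* padded slot size */
```

For example, a 5-byte payload gives a message length of 21 bytes, padded to a 24-byte slot.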
  3. 18 Feb 2016 (6 commits)
  4. 17 Feb 2016 (16 commits)
  5. 12 Feb 2016 (3 commits)