1. 15 Sep 2016 (1 commit)
  2. 12 Sep 2016 (3 commits)
  3. 11 Sep 2016 (12 commits)
  4. 09 Sep 2016 (1 commit)
    • tcp: use an RB tree for ooo receive queue · 9f5afeae
      Authored by Yaogong Wang
      Over the years, TCP BDP has increased by several orders of magnitude,
      and some people are considering reaching the 2 Gbytes limit.
      
      Even with the current window scale limit of 14, ~1 Gbytes maps to
      ~740,000 MSS.
      
      In the presence of packet losses (or reorders), TCP stores incoming
      packets in an out-of-order queue, and the number of skbs sitting there
      waiting for the missing packets to be received can be in the 10^5 range.
      
      Most packets are appended to the tail of this queue, and when
      packets can finally be transferred to receive queue, we scan the queue
      from its head.
      
      However, in the presence of heavy losses, we might have to find an
      arbitrary point in this queue, involving a linear scan for every
      incoming packet and throwing away cpu caches.
      
      This patch converts it to an RB tree, to get bounded latencies.
      
      Yaogong wrote a preliminary patch about 2 years ago.
      Eric did the rebase, added ofo_last_skb cache, polishing and tests.
      
      Tested with the network dropping between 1 and 10 % of packets, with
      good success (about a 30 % increase of throughput in stress tests).
      
      Next step would be to also use an RB tree for the write queue at sender
      side ;)
      Signed-off-by: Yaogong Wang <wygivan@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Acked-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9f5afeae
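      A minimal sketch of the rbtree insertion idiom such a conversion relies
      on, keyed by the TCP sequence number so an arbitrary insertion point is
      found in O(log n) instead of a linear list walk. The helper name
      ooo_rb_insert() is illustrative; the real patch additionally coalesces
      overlapping segments and keeps an ooo_last_skb tail cache:

          #include <linux/rbtree.h>
          #include <linux/skbuff.h>
          #include <net/tcp.h>

          /* Insert an out-of-order segment, ordered by its start sequence.
           * Overlap handling and duplicate checks are omitted in this sketch.
           */
          static void ooo_rb_insert(struct rb_root *root, struct sk_buff *skb)
          {
              struct rb_node **p = &root->rb_node, *parent = NULL;
              u32 seq = TCP_SKB_CB(skb)->seq;

              while (*p) {
                  struct sk_buff *cur;

                  parent = *p;
                  cur = rb_entry(parent, struct sk_buff, rbnode);
                  if (before(seq, TCP_SKB_CB(cur)->seq))
                      p = &parent->rb_left;
                  else
                      p = &parent->rb_right;
              }
              rb_link_node(&skb->rbnode, parent, p);
              rb_insert_color(&skb->rbnode, root);
          }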
  5. 08 Sep 2016 (2 commits)
    • rxrpc: Rewrite the data and ack handling code · 248f219c
      Authored by David Howells
      Rewrite the data and ack handling code such that:
      
       (1) Parsing of received ACK and ABORT packets and the distribution and the
           filing of DATA packets happens entirely within the data_ready context
           called from the UDP socket.  This allows us to process and discard ACK
           and ABORT packets much more quickly (they're no longer stashed on a
           queue for a background thread to process).
      
       (2) We avoid calling skb_clone(), pskb_pull() and pskb_trim().  We instead
           keep track of the offset and length of the content of each packet in
           the sk_buff metadata.  This means we don't do any allocation in the
           receive path.
      
       (3) Jumbo DATA packet parsing is now done in data_ready context.  Rather
           than cloning the packet once for each subpacket and pulling/trimming
           it, we file the packet multiple times with an annotation for each
           indicating which subpacket is there.  From that we can directly
           calculate the offset and length.
      
       (4) A call's receive queue can be accessed without taking locks (memory
           barriers do have to be used, though).
      
       (5) Incoming calls are set up from preallocated resources and immediately
           made live.  They can then have packets queued upon them and ACKs
           generated.  If insufficient resources exist, DATA packet #1 is given a
           BUSY reply and other DATA packets are discarded.
      
       (6) sk_buffs no longer take a ref on their parent call.
      
      To make this work, the following changes are made:
      
       (1) Each call's receive buffer is now a circular buffer of sk_buff
           pointers (rxtx_buffer) rather than a number of sk_buff_heads spread
           between the call and the socket.  This permits each sk_buff to be in
           the buffer multiple times.  The receive buffer is reused for the
           transmit buffer.
      
       (2) A circular buffer of annotations (rxtx_annotations) is kept parallel
           to the data buffer.  Transmission phase annotations indicate whether a
           buffered packet has been ACK'd or not and whether it needs
           retransmission.
      
           Receive phase annotations indicate whether a slot holds a whole packet
           or a jumbo subpacket and, if the latter, which subpacket.  They also
           note whether the packet has been decrypted in place.
      
       (3) DATA packet window tracking is much simplified.  Each phase has just
           two numbers representing the window (rx_hard_ack/rx_top and
           tx_hard_ack/tx_top).
      
           The hard_ack number is the sequence number just before the base of
           the window, representing the last packet the other side says it has
           consumed.  hard_ack starts from 0 and the first packet is sequence
           number 1.
      
           The top number is the sequence number of the highest-numbered packet
           residing in the buffer.  Packets between hard_ack+1 and top are
           soft-ACK'd to indicate they've been received, but not yet consumed.
      
           Four macros, before(), before_eq(), after() and after_eq() are added
           to compare sequence numbers within the window.  This allows for the
           top of the window to wrap when the hard-ack sequence number gets close
           to the limit.
      
           Two flags, RXRPC_CALL_RX_LAST and RXRPC_CALL_TX_LAST, are added also
           to indicate when rx_top and tx_top point at the packets with the
           LAST_PACKET bit set, indicating the end of the phase.
      
       (4) Calls are queued on the socket 'receive queue' rather than packets.
           This means that we don't have to invent dummy packets to queue to
           indicate abnormal/terminal states and we don't have to keep metadata
           packets (such as ABORTs) around.
      
       (5) The offset and length of a (sub)packet's content are now passed to
           the verify_packet security op.  This is currently expected to decrypt
           the packet in place and validate it.
      
           However, there's now nowhere to store the revised offset and length of
           the actual data within the decrypted blob (there may be a header and
           padding to skip) because an sk_buff may represent multiple packets, so
           a locate_data security op is added to retrieve these details from the
           sk_buff content when needed.
      
       (6) recvmsg() now has to handle jumbo subpackets, where each subpacket is
           individually secured and needs to be individually decrypted.  The code
           to do this is broken out into rxrpc_recvmsg_data() and shared with the
           kernel API.  It now iterates over the call's receive buffer rather
           than walking the socket receive queue.
      
      Additional changes:
      
       (1) The timers are condensed to a single timer that is set for the soonest
           of three timeouts (delayed ACK generation, DATA retransmission and
           call lifespan).
      
       (2) Transmission of ACK and ABORT packets is effected immediately from
           process-context socket ops/kernel API calls that cause them instead of
           them being punted off to a background work item.  The data_ready
           handler still has to defer to the background, though.
      
       (3) A shutdown op is added to the AF_RXRPC socket so that the AFS
           filesystem can shut down the socket and flush its own work items
           before closing the socket to deal with any in-progress service calls.
      
      Future additional changes that will need to be considered:
      
       (1) Make sure that a call doesn't hog the front of the queue by receiving
           data from the network as fast as userspace is consuming it to the
           exclusion of other calls.
      
       (2) Transmit delayed ACKs from within recvmsg() when we've consumed
           sufficiently more packets to avoid the background work item needing to
           run.
      Signed-off-by: David Howells <dhowells@redhat.com>
      248f219c
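      The before()/before_eq()/after()/after_eq() comparisons mentioned in
      point (3) only work if they are wrap-safe. A self-contained sketch of
      that style of comparison (names here are illustrative, not the rxrpc
      internals), relying on the signed result of an unsigned 32-bit
      subtraction:

          #include <stdint.h>
          #include <stdbool.h>
          #include <stdio.h>

          /* Wrap-safe window comparisons: correct as long as the two sequence
           * numbers are less than 2^31 apart, which holds for a bounded window.
           */
          static bool seq_before(uint32_t a, uint32_t b)    { return (int32_t)(a - b) < 0; }
          static bool seq_before_eq(uint32_t a, uint32_t b) { return (int32_t)(a - b) <= 0; }
          static bool seq_after(uint32_t a, uint32_t b)     { return (int32_t)(a - b) > 0; }
          static bool seq_after_eq(uint32_t a, uint32_t b)  { return (int32_t)(a - b) >= 0; }

          int main(void)
          {
              uint32_t hard_ack = 0xfffffff0u;   /* window base close to wrap */
              uint32_t top      = hard_ack + 20; /* wraps past zero           */

              /* A slot in a power-of-two ring buffer is simply seq & (size - 1). */
              printf("top after hard_ack: %d\n", seq_after(top, hard_ack));          /* 1 */
              printf("hard_ack+1 in window: %d\n",
                     seq_after(hard_ack + 1, hard_ack) &&
                     seq_before_eq(hard_ack + 1, top));                              /* 1 */
              return 0;
          }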
    • rxrpc: Preallocate peers, conns and calls for incoming service requests · 00e90712
      Authored by David Howells
      Make it possible for the data_ready handler called from the UDP transport
      socket to completely instantiate an rxrpc_call structure and make it
      immediately live by preallocating all the memory it might need.  The idea
      is to cut out the background thread usage as much as possible.
      
      [Note that the preallocated structs are not actually used in this patch -
       that will be done in a future patch.]
      
      If insufficient resources are available in the preallocation buffers, it
      will be possible to discard the DATA packet in the data_ready handler or
      schedule a BUSY packet without the need to schedule an attempt at
      allocation in a background thread.
      
      To this end:
      
       (1) Preallocate rxrpc_peer, rxrpc_connection and rxrpc_call structs to a
           maximum number each of the listen backlog size.  The backlog size is
           limited to a maximum of 32.  Only this many of each can be in the
           preallocation buffer.
      
       (2) For userspace sockets, the preallocation is charged initially by
           listen() and will be recharged by accepting or rejecting pending
           new incoming calls.
      
       (3) For kernel services, charging, recharging and discharging of the
           preallocation buffers is handled manually.  Two notifier callbacks
           have to be provided before kernel_listen() is invoked:
      
           (a) An indication that a new call has been instantiated.  This can be
           	 used to trigger background recharging.
      
           (b) An indication that a call is being discarded.  This is used when
           	 the socket is being released.
      
           A function, rxrpc_kernel_charge_accept(), is called by the kernel
           service to preallocate a single call.  It should be passed the user ID
           to be used for that call and a callback to associate the rxrpc call
           with the kernel service's side of the ID.
      
       (4) Discard the preallocation when the socket is closed.
      
       (5) Temporarily bump the refcount on the call allocated in
           rxrpc_incoming_call() so that rxrpc_release_call() can ditch the
           preallocation ref on service calls unconditionally.  This will no
           longer be necessary once the preallocation is used.
      
      Note that this does not yet control the number of active service calls on a
      client - that will come in a later patch.
      
      A future development would be to provide a setsockopt() call that allows a
      userspace server to manually charge the preallocation buffer.  This would
      allow user call IDs to be provided in advance and the awkward manual accept
      stage to be bypassed.
      Signed-off-by: David Howells <dhowells@redhat.com>
      00e90712
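      A self-contained sketch of the kind of fixed-size preallocation ring the
      entry above describes: user context charges entries up to the backlog
      limit, and the packet-receive path consumes them without allocating.
      The names and struct layout are illustrative, not the rxrpc ones:

          #include <stdlib.h>
          #include <stdio.h>

          #define BACKLOG_MAX 32   /* power of two, matching the 32-entry cap */

          struct prealloc_ring {
              void *slot[BACKLOG_MAX];
              unsigned int head;   /* next slot to charge (fill)   */
              unsigned int tail;   /* next slot to consume (drain) */
          };

          /* Charge one preallocated object; fails when the ring is full. */
          static int ring_charge(struct prealloc_ring *r, void *obj)
          {
              if (r->head - r->tail >= BACKLOG_MAX)
                  return -1;                       /* backlog full */
              r->slot[r->head++ & (BACKLOG_MAX - 1)] = obj;
              return 0;
          }

          /* Consume one object in the receive path; NULL means "send BUSY". */
          static void *ring_consume(struct prealloc_ring *r)
          {
              if (r->tail == r->head)
                  return NULL;                     /* nothing preallocated */
              return r->slot[r->tail++ & (BACKLOG_MAX - 1)];
          }

          int main(void)
          {
              struct prealloc_ring r = { .head = 0, .tail = 0 };

              ring_charge(&r, malloc(64));         /* done from listen()/accept */
              void *call = ring_consume(&r);       /* done from data_ready      */
              printf("%s\n", call ? "call instantiated" : "reply BUSY");
              free(call);
              return 0;
          }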
  6. 07 Sep 2016 (1 commit)
  7. 03 Sep 2016 (1 commit)
  8. 02 Sep 2016 (4 commits)
    • net: dsa: remove ds_to_priv · 04bed143
      Authored by Vivien Didelot
      Access the priv member of the dsa_switch structure directly, instead of
      having an unnecessary helper.
      Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      04bed143
    • rtnetlink: fdb dump: optimize by saving last interface markers · d297653d
      Authored by Roopa Prabhu
      fdb dumps spanning multiple skbs currently restart from the first
      interface again for every skb. This results in unnecessary
      iterations over the already visited interfaces and their fdb
      entries. In large-scale setups, we have seen this slow down fdb
      dumps considerably. On a system with 30k macs we see fdb dumps
      spanning more than 300 skbs.
      
      To fix the problem, this patch replaces the existing single fdb
      marker with three markers: netdev hash entries, netdevs and fdb
      index to continue where we left off instead of restarting from the
      first netdev. This is consistent with link dumps.
      
      In the process of fixing the performance issue, this patch also
      re-implements the fix done by
      commit 472681d5 ("net: ndo_fdb_dump should report -EMSGSIZE to rtnl_fdb_dump")
      (with an internal fix from Wilson Kok) in the following ways:
      - change ndo_fdb_dump handlers to return an error code instead
      of the last fdb index
      - use cb->args strictly for dump frag markers and not error codes.
      This is consistent with other dump functions.
      
      The results below were taken on a system with 1000 netdevs
      and 35085 fdb entries:
      before patch:
      $time bridge fdb show | wc -l
      15065
      
      real    1m11.791s
      user    0m0.070s
      sys 1m8.395s
      
      (existing code does not return all macs)
      
      after patch:
      $time bridge fdb show | wc -l
      35085
      
      real    0m2.017s
      user    0m0.113s
      sys 0m1.942s
      Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
      Signed-off-by: Wilson Kok <wkok@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d297653d
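      A self-contained sketch of the multi-marker resume pattern the entry
      above describes: the dump records how far it got (hash bucket, device
      index within the bucket, fdb index within the device), and the next
      pass skips everything before those markers instead of restarting from
      the first netdev. The structures are illustrative stand-ins for
      cb->args and the real fdb tables:

          #include <stdio.h>

          #define BUCKETS  4
          #define DEVS     3
          #define FDBS     5
          #define BUDGET   7      /* entries that fit into one "skb" */

          struct dump_state { int bucket, dev, fdb; };   /* resume markers */

          /* Fill one message; returns 1 if there is more to dump. */
          static int fdb_dump_one_msg(struct dump_state *st)
          {
              int emitted = 0;

              for (int b = st->bucket; b < BUCKETS; b++) {
                  for (int d = (b == st->bucket ? st->dev : 0); d < DEVS; d++) {
                      for (int f = (b == st->bucket && d == st->dev ? st->fdb : 0);
                           f < FDBS; f++) {
                          if (emitted == BUDGET) {
                              /* message full: remember where to resume */
                              st->bucket = b; st->dev = d; st->fdb = f;
                              return 1;
                          }
                          printf("entry bucket=%d dev=%d fdb=%d\n", b, d, f);
                          emitted++;
                      }
                  }
              }
              return 0;    /* done */
          }

          int main(void)
          {
              struct dump_state st = { 0, 0, 0 };
              int msgs = 0;

              while (fdb_dump_one_msg(&st))
                  msgs++;
              printf("dump finished in %d messages\n", msgs + 1);
              return 0;
          }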
    • rps: flow_dissector: Add the const for the parameter of flow_keys_have_l4 · 66fdd05e
      Authored by Gao Feng
      Add const to the parameter of flow_keys_have_l4 for readability.
      Signed-off-by: Gao Feng <fgao@ikuai8.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      66fdd05e
    • rxrpc: Don't expose skbs to in-kernel users [ver #2] · d001648e
      Authored by David Howells
      Don't expose skbs to in-kernel users, such as the AFS filesystem, but
      instead provide a notification hook that indicates that a call needs
      attention and another that indicates that there's a new call to be
      collected.
      
      This makes the following possibilities more achievable:
      
       (1) Call refcounting can be made simpler if skbs don't hold refs to calls.
      
       (2) skbs referring to non-data events will be able to be freed much sooner
           rather than being queued for AFS to pick up as rxrpc_kernel_recv_data
           will be able to consult the call state.
      
       (3) We can shortcut the receive phase when a call is remotely aborted
           because we don't have to go through all the packets to get to the one
           cancelling the operation.
      
       (4) It makes it easier to do encryption/decryption directly between AFS's
           buffers and sk_buffs.
      
       (5) Encryption/decryption can more easily be done in the AFS's thread
           contexts - usually that of the userspace process that issued a syscall
           - rather than in one of rxrpc's background threads on a workqueue.
      
       (6) AFS will be able to wait synchronously on a call inside AF_RXRPC.
      
      To make this work, the following interface function has been added:
      
           int rxrpc_kernel_recv_data(
      		struct socket *sock, struct rxrpc_call *call,
      		void *buffer, size_t bufsize, size_t *_offset,
      		bool want_more, u32 *_abort_code);
      
      This is the recvmsg equivalent.  It allows the caller to find out about the
      state of a specific call and to transfer received data into a buffer
      piecemeal.
      
      afs_extract_data() and rxrpc_kernel_recv_data() now do all the extraction
      logic between them.  They don't wait synchronously yet because the socket
      lock needs to be dealt with.
      
      Five interface functions have been removed:
      
          rxrpc_kernel_is_data_last()
          rxrpc_kernel_get_abort_code()
          rxrpc_kernel_get_error_number()
          rxrpc_kernel_free_skb()
          rxrpc_kernel_data_consumed()
      
      As a temporary hack, sk_buffs going to an in-kernel call are queued on the
      rxrpc_call struct (->knlrecv_queue) rather than being handed over to the
      in-kernel user.  To process the queue internally, a temporary function,
      temp_deliver_data() has been added.  This will be replaced with common code
      between the rxrpc_recvmsg() path and the kernel_rxrpc_recv_data() path in a
      future patch.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d001648e
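      A hedged caller-side sketch of how a kernel service might use the new
      rxrpc_kernel_recv_data() interface quoted above. Only the signature
      comes from the entry; the error handling, the meaning of the return
      value and the surrounding state are assumptions:

          #include <linux/net.h>
          #include <linux/printk.h>
          #include <net/af_rxrpc.h>

          /* Pull up to buflen bytes of reply data from an rxrpc call.
           * want_more=false: we do not expect data beyond this buffer.
           * Return-value conventions (how "more data" vs "done" is signalled)
           * are not spelled out in the changelog and are not asserted here.
           */
          static int example_read_reply(struct socket *sock, struct rxrpc_call *call,
                                        void *buf, size_t buflen)
          {
              size_t offset = 0;       /* filled-in length / resume offset (assumed) */
              u32 abort_code = 0;      /* presumably set if the peer aborted (assumed) */
              int ret;

              ret = rxrpc_kernel_recv_data(sock, call, buf, buflen, &offset,
                                           false, &abort_code);
              if (ret < 0)
                  pr_warn("rxrpc recv failed: %d (abort %u)\n", ret, abort_code);
              return ret;
          }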
  9. 01 Sep 2016 (1 commit)
  10. 31 Aug 2016 (1 commit)
    • net: lwtunnel: Handle fragmentation · 14972cbd
      Authored by Roopa Prabhu
      Today the mpls iptunnel lwtunnel_output redirect expects the tunnel
      output function to handle fragmentation. This is OK but can be
      avoided if we do not do the mpls output redirect too early,
      i.e. we could wait until ip fragmentation is done and then call
      mpls output for each ip fragment.
      
      To make this work we need:
      1) the lwtunnel state to carry the encap headroom
      2) to do the redirect to the encap output handler on the ip fragment
      (essentially doing the output redirect after fragmentation)
      
      This patch adds the tunnel headroom in lwtstate to make sure we
      account for tunnel data in mtu calculations during fragmentation,
      and adds a new xmit redirect handler to redirect to the lwtunnel
      xmit func after ip fragmentation.
      
      This includes IPV6 and some mtu fixes and testing from David Ahern.
      Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
      Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      14972cbd
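      A self-contained sketch of the two ideas above: subtract the encap
      headroom from the path mtu before fragmenting, then run the encap step
      once per fragment. All names (lwt_state, fragment_and_send) are
      illustrative, not the kernel helpers:

          #include <stdio.h>

          struct lwt_state {
              unsigned int headroom;                    /* bytes the encap will add */
              void (*xmit)(unsigned int payload_len);   /* per-fragment encap+send  */
          };

          static void mpls_like_xmit(unsigned int payload_len)
          {
              printf("encap + send fragment of %u bytes\n", payload_len);
          }

          /* Fragment to (mtu - headroom) so each fragment still fits the link
           * after the tunnel header is pushed, then redirect each fragment to
           * the encap xmit handler.
           */
          static void fragment_and_send(unsigned int pkt_len, unsigned int mtu,
                                        const struct lwt_state *lwt)
          {
              unsigned int frag_mtu = mtu - lwt->headroom;

              while (pkt_len > 0) {
                  unsigned int frag = pkt_len > frag_mtu ? frag_mtu : pkt_len;

                  lwt->xmit(frag);
                  pkt_len -= frag;
              }
          }

          int main(void)
          {
              struct lwt_state lwt = { .headroom = 8, .xmit = mpls_like_xmit };

              fragment_and_send(3000, 1500, &lwt);   /* 2 full + 1 small fragment */
              return 0;
          }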
  11. 30 Aug 2016 (6 commits)
    • rxrpc: Pass struct socket * to more rxrpc kernel interface functions · 4de48af6
      Authored by David Howells
      Pass struct socket * to more rxrpc kernel interface functions.  They
      should start from this rather than the socket pointer in the rxrpc_call
      struct if they need to access the socket.
      
      I have left:
      
      	rxrpc_kernel_is_data_last()
      	rxrpc_kernel_get_abort_code()
      	rxrpc_kernel_get_error_number()
      	rxrpc_kernel_free_skb()
      	rxrpc_kernel_data_consumed()
      
      unmodified as they're all about to be removed (and, in any case, don't
      touch the socket).
      Signed-off-by: David Howells <dhowells@redhat.com>
      4de48af6
    • rxrpc: Provide a way for AFS to ask for the peer address of a call · 8324f0bc
      Authored by David Howells
      Provide a function so that kernel users, such as AFS, can ask for the peer
      address of a call:
      
         void rxrpc_kernel_get_peer(struct rxrpc_call *call,
      			      struct sockaddr_rxrpc *_srx);
      
      In the future the kernel service won't get sk_buffs to look inside.
      Further, this allows us to hide any canonicalisation inside AF_RXRPC for
      when IPv6 support is added.
      
      Also propagate this through to afs_find_server() and issue a warning if we
      can't handle the address family yet.
      Signed-off-by: David Howells <dhowells@redhat.com>
      8324f0bc
    • netfilter: log: Check param to avoid overflow in nf_log_set · 779994fa
      Authored by Gao Feng
      nf_log_set is an interface function, so it should do strict sanity
      checking of its parameters. Convert the return value of nf_log_set from
      void to int. When the pf is invalid, return -EOPNOTSUPP.
      Signed-off-by: Gao Feng <fgao@ikuai8.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      779994fa
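      A minimal sketch of the shape of such a check, with an assumed
      per-family logger table whose size bounds the valid pf values; the
      table name and bound are illustrative, not the netfilter code:

          #include <errno.h>

          #define LOGGER_FAMILIES 13            /* illustrative table size */

          static const void *loggers[LOGGER_FAMILIES];

          /* Reject out-of-range protocol families instead of indexing past
           * the logger array; callers now get an int back instead of void.
           */
          static int example_log_set(unsigned int pf, const void *logger)
          {
              if (pf >= LOGGER_FAMILIES)
                  return -EOPNOTSUPP;
              loggers[pf] = logger;
              return 0;
          }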
    • netfilter: remove __nf_ct_kill_acct helper · ad66713f
      Authored by Florian Westphal
      After timer removal this just calls nf_ct_delete, so remove the
      __-prefixed version and make nf_ct_kill a shorthand for nf_ct_delete.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      ad66713f
    • netfilter: conntrack: get rid of conntrack timer · f330a7fd
      Authored by Florian Westphal
      With stats enabled this eats 80 bytes on x86_64 per nf_conn entry, as
      Eric Dumazet pointed out during netfilter workshop 2016.
      
      Eric also says: "Another reason was the fact that Thomas was about to
      change max timer range [..]" (500462a9, 'timers: Switch to
      a non-cascading wheel').
      
      Remove the timer and use a 32-bit jiffies value containing the timestamp
      until which the entry is valid.
      
      During conntrack lookup, even before doing the tuple comparison, check
      the timeout value and evict the entry in case it is too old.
      
      The dying bit is used as a synchronization point to avoid races where
      multiple cpus try to evict the same entry.
      
      Because lookup is always lockless, we need to bump the refcnt once
      when we evict, else we could try to evict an already-dead entry that
      is being recycled.
      
      This is the standard/expected way when conntrack entries are destroyed.
      
      Followup patches will introduce garbage collection via a work queue
      and further places where we can reap obsoleted entries (e.g. during
      netlink dumps); this is needed to keep expired conntracks from hanging
      around for too long when the lookup rate is low after a busy period.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      f330a7fd
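      A self-contained sketch of the timestamp-based expiry test such a
      scheme relies on: the entry stores the 32-bit jiffies value at which it
      stops being valid, and a signed comparison of the difference stays
      correct across jiffies wrap-around. Names are illustrative, not the
      conntrack helpers:

          #include <stdint.h>
          #include <stdbool.h>
          #include <stdio.h>

          struct conn_entry {
              uint32_t timeout;   /* jiffies value until which the entry is valid */
          };

          /* Expired if "now" is at or past the stored timeout; the signed cast
           * keeps this correct even when the 32-bit jiffies counter wraps.
           */
          static bool entry_expired(const struct conn_entry *e, uint32_t now)
          {
              return (int32_t)(e->timeout - now) <= 0;
          }

          int main(void)
          {
              struct conn_entry e = { .timeout = 0x00000010u }; /* just past wrap */
              uint32_t before_wrap = 0xfffffff0u;
              uint32_t after_timeout = 0x00000020u;

              printf("expired before wrap: %d\n", entry_expired(&e, before_wrap));   /* 0 */
              printf("expired later:       %d\n", entry_expired(&e, after_timeout)); /* 1 */
              return 0;
          }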
    • netfilter: don't rely on DYING bit to detect when destroy event was sent · 616b14b4
      Authored by Florian Westphal
      The reliable event delivery mode currently (ab)uses the DYING bit to
      detect which entries on the dying list have to be skipped when
      re-delivering events from the ecache worker.
      
      Currently, when we delete the conntrack from the main table we only set
      this bit if we could also deliver the netlink destroy event to userspace.
      
      If we fail, we move it to the dying list; the ecache worker will
      reattempt event delivery for all confirmed conntracks on the dying list
      that do not have the DYING bit set.
      
      Once the timer is gone, we can no longer use if (del_timer()) to detect
      when we 'stole' the reference count owned by the timer/hash entry, so
      we need some other way to avoid racing with other cpus.
      
      Pablo suggested adding a marker in the ecache extension that skips
      entries that have been unhashed from the main table but are still
      waiting for the last reference count to be dropped (e.g. because one
      skb waiting on an nfqueue verdict still holds a reference).
      
      We do this by adding a tristate.
      If we fail to deliver the destroy event, make a note of this in the
      ecache extension.  The worker can then skip all entries that are in
      a different state.  Either they never delivered a destroy event,
      e.g. because the netlink backend was not loaded, or redelivery took
      place already.
      
      Once the conntrack timer is removed we will be able to replace the
      del_timer() test with test_and_set_bit(DYING, &ct->status) to avoid
      racing with another cpu that tries to evict the same conntrack.
      
      Because DYING will then be set right before we report the destroy event,
      we can no longer skip event reporting when the dying bit is set.
      Suggested-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      616b14b4
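      A self-contained sketch of the tristate idea: the ecache extension
      records whether a destroy event still needs redelivery, and the worker
      only touches entries in that one state. The enum and names are
      illustrative, not the exact conntrack definitions:

          #include <stdio.h>

          /* Tristate kept in the per-entry ecache extension. */
          enum destroy_event_state {
              DESTROY_EVENT_UNSENT,    /* never attempted (e.g. no netlink listener) */
              DESTROY_EVENT_FAILED,    /* delivery failed: worker must retry         */
              DESTROY_EVENT_SENT,      /* delivered (or redelivered) already         */
          };

          struct dying_entry {
              const char *name;
              enum destroy_event_state state;
          };

          int main(void)
          {
              struct dying_entry list[] = {
                  { "ct-a", DESTROY_EVENT_FAILED },
                  { "ct-b", DESTROY_EVENT_SENT },
                  { "ct-c", DESTROY_EVENT_UNSENT },
              };

              /* ecache worker pass: only retry entries whose delivery failed */
              for (unsigned int i = 0; i < sizeof(list) / sizeof(list[0]); i++) {
                  if (list[i].state != DESTROY_EVENT_FAILED)
                      continue;
                  printf("redelivering destroy event for %s\n", list[i].name);
                  list[i].state = DESTROY_EVENT_SENT;
              }
              return 0;
          }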
  12. 29 Aug 2016 (4 commits)
    • tcp: add tcp_add_backlog() · c9c33212
      Authored by Eric Dumazet
      When TCP operates in lossy environments (between 1 and 10 % packet
      losses), many SACK blocks can be exchanged, and I noticed we could
      drop them on busy senders if these SACK blocks have to be queued
      into the socket backlog.
      
      While the main cause is the poor performance of RACK/SACK processing,
      we can try to avoid these drops of valuable information that can lead to
      spurious timeouts and retransmits.
      
      The cause of the drops is skb->truesize overestimation, caused by:
      
      - drivers allocating ~2048 (or more) bytes as a fragment to hold an
        Ethernet frame.
      
      - various pskb_may_pull() calls bringing the headers into skb->head
        might have pulled all the frame content, but skb->truesize could
        not be lowered, as the stack has no idea of each fragment's truesize.
      
      The backlog drops are also more visible on bidirectional flows, since
      their sk_rmem_alloc can be quite big.
      
      Let's add some room for the backlog, as only the socket owner
      can selectively take action to lower memory needs, like collapsing
      receive queues or partial ofo pruning.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c9c33212
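      A self-contained sketch of the admission idea described above: give the
      backlog some extra headroom beyond the receive/send buffer budget,
      since only the socket owner can later collapse queues or prune the ooo
      queue to reclaim memory. The limit formula and field names are
      illustrative, not the exact tcp_add_backlog() code:

          #include <stdbool.h>
          #include <stdio.h>

          struct sock_budget {
              unsigned int rcvbuf;        /* receive buffer budget          */
              unsigned int sndbuf;        /* send buffer budget             */
              unsigned int rmem_alloc;    /* memory already in recv queues  */
              unsigned int backlog_len;   /* memory already in the backlog  */
          };

          /* Accept the packet into the backlog unless even the padded limit
           * is exceeded; 64 KB of headroom absorbs truesize overestimation.
           */
          static bool backlog_admit(struct sock_budget *sk, unsigned int truesize)
          {
              unsigned int limit = sk->rcvbuf + sk->sndbuf + 64 * 1024;

              if (sk->rmem_alloc + sk->backlog_len + truesize > limit)
                  return false;                  /* drop */
              sk->backlog_len += truesize;
              return true;
          }

          int main(void)
          {
              struct sock_budget sk = { .rcvbuf = 87380, .sndbuf = 87380,
                                        .rmem_alloc = 150000, .backlog_len = 80000 };

              printf("admit 2048-byte skb: %d\n", backlog_admit(&sk, 2048));
              return 0;
          }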
    • kcm: Remove TCP specific references from kcm and strparser · 96a59083
      Authored by Tom Herbert
      kcm and strparser need to work with any type of stream socket, not just
      TCP. Eliminate references to TCP and call the generic proto_ops
      read_sock and peek_len functions instead. Also, in strp_init, check that
      the socket supports the read_sock and peek_len proto_ops.
      Signed-off-by: Tom Herbert <tom@herbertland.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      96a59083
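      A minimal kernel-style sketch of the capability check mentioned above;
      the exact placement inside strp_init() and the error code returned are
      assumptions:

          #include <linux/errno.h>
          #include <linux/net.h>

          /* Refuse sockets whose protocol does not provide the two generic
           * proto_ops that strparser now relies on instead of TCP-specific
           * helpers.
           */
          static int example_check_stream_ops(struct socket *sock)
          {
              if (!sock->ops->read_sock || !sock->ops->peek_len)
                  return -EOPNOTSUPP;    /* assumed error code */
              return 0;
          }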
    • tcp: Set read_sock and peek_len proto_ops · 32035585
      Authored by Tom Herbert
      In inet_stream_ops we set read_sock to tcp_read_sock and peek_len to
      tcp_peek_len (which is just a stub function that calls tcp_inq).
      Signed-off-by: Tom Herbert <tom@herbertland.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      32035585
    • net: Add read_sock proto_op · 0294b625
      Authored by Tom Herbert
      Add a new function to the proto_ops structure. This includes moving the
      typedef for sk_read_actor into net.h and removing the definition from
      tcp.h.
      Signed-off-by: Tom Herbert <tom@herbertland.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0294b625
  13. 27 Aug 2016 (2 commits)
    • bridge: switchdev: Add forward mark support for stacked devices · 6bc506b4
      Authored by Ido Schimmel
      switchdev_port_fwd_mark_set() is used to set the 'offload_fwd_mark' of
      port netdevs so that packets being flooded by the device won't be
      flooded twice.
      
      It works by assigning a unique identifier (the ifindex of the first
      bridge port) to bridge ports sharing the same parent ID. This prevents
      packets from being flooded twice by the same switch, but will flood
      packets through bridge ports belonging to a different switch.
      
      This method is problematic when stacked devices are taken into account,
      such as VLANs. In such cases, a physical port netdev can have upper
      devices being members in two different bridges, thus requiring two
      different 'offload_fwd_mark's to be configured on the port netdev, which
      is impossible.
      
      The main problem is that packet and netdev marking is performed at the
      physical netdev level, whereas flooding occurs between bridge ports,
      which are not necessarily port netdevs.
      
      Instead, packet and netdev marking should really be done in the bridge
      driver with the switch driver only telling it which packets it already
      forwarded. The bridge driver will mark such packets using the mark
      assigned to the ingress bridge port and will prevent the packet from
      being forwarded through any bridge port sharing the same mark (i.e.
      having the same parent ID).
      
      Remove the current switchdev 'offload_fwd_mark' implementation and
      instead implement the proposed method. In addition, make rocker - the
      sole user of the mark - use the proposed method.
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6bc506b4
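      A self-contained sketch of the proposed marking scheme: each bridge
      port carries the mark of its switch group, the ingress mark is stamped
      on packets the hardware already forwarded, and egress is suppressed on
      ports sharing that mark. Structure and function names are illustrative,
      not the bridge/rocker code:

          #include <stdbool.h>
          #include <stdio.h>

          struct br_port {
              const char *name;
              int fwd_mark;        /* shared by ports behind the same switch ASIC */
          };

          struct pkt {
              bool hw_forwarded;   /* device says it already flooded this packet  */
              int ingress_fwd_mark;
          };

          /* Software flooding skips ports the hardware has already covered. */
          static bool should_flood(const struct pkt *p, const struct br_port *out)
          {
              if (p->hw_forwarded && p->ingress_fwd_mark == out->fwd_mark)
                  return false;
              return true;
          }

          int main(void)
          {
              struct br_port same_asic  = { "sw1p2", 1 };
              struct br_port other_asic = { "sw2p1", 2 };
              struct pkt p = { .hw_forwarded = true, .ingress_fwd_mark = 1 };

              printf("flood to %s: %d\n", same_asic.name,  should_flood(&p, &same_asic));  /* 0 */
              printf("flood to %s: %d\n", other_asic.name, should_flood(&p, &other_asic)); /* 1 */
              return 0;
          }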
    • devlink: remove unused priv_size · 2a313cdf
      Authored by Ivan Vecera
      Remove unused and useless priv_size member from struct devlink_ops.
      
      Cc: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: Ivan Vecera <ivecera@redhat.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2a313cdf
  14. 26 Aug 2016 (1 commit)
    • netfilter: nf_tables: honor NLM_F_EXCL flag in set element insertion · c016c7e4
      Authored by Pablo Neira Ayuso
      If the NLM_F_EXCL flag is set, then new elements that clash with an
      existing one return EEXIST. If you try to add an element whose
      data area differs from what we have, this returns EBUSY. If no
      flag is specified at all, this returns success to userspace.
      
      This patch also updates the set insert operation so we can fetch the
      existing element that clashes with the one you want to add; we need
      this to make sure the element data doesn't differ.
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      c016c7e4
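      A self-contained sketch of the decision table described above for an
      insert that hits an existing element; NLM_F_EXCL is the real netlink
      flag, while the element comparison and return plumbing are illustrative:

          #include <errno.h>
          #include <stdbool.h>
          #include <stdio.h>

          #define NLM_F_EXCL 0x200   /* netlink: refuse to replace existing */

          /* Outcome when the new element clashes with an existing one. */
          static int clash_result(unsigned int nlm_flags, bool data_differs)
          {
              if (nlm_flags & NLM_F_EXCL)
                  return -EEXIST;        /* caller insisted on exclusivity    */
              if (data_differs)
                  return -EBUSY;         /* same key, different data area     */
              return 0;                  /* identical element: report success */
          }

          int main(void)
          {
              printf("%d %d %d\n",
                     clash_result(NLM_F_EXCL, false),   /* -EEXIST */
                     clash_result(0, true),             /* -EBUSY  */
                     clash_result(0, false));           /* 0       */
              return 0;
          }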