1. 06 July 2016, 2 commits
    • rxrpc: Check the source of a packet to a client conn · 689f4c64
      Committed by David Howells
      When looking up a client connection to which to route a packet, we need to
      check that the packet came from the correct source so that a peer can't try
      to muck around with another peer's connection.
      Signed-off-by: David Howells <dhowells@redhat.com>
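      A minimal, hypothetical C sketch of the kind of check described above (not the
      actual rxrpc code; the structure and function names are illustrative only):
      before routing a packet to a client connection, the connection's stored peer
      address is compared against the packet's source address.

          #include <stdbool.h>
          #include <arpa/inet.h>

          /* Simplified, hypothetical stand-ins for the kernel structures. */
          struct client_conn {
              struct sockaddr_in peer;    /* peer this connection was made to */
              unsigned int cid;           /* connection ID */
          };

          /*
           * Accept the packet for this connection only if it came from the
           * connection's own peer, so another peer cannot interfere with a
           * connection that is not its own.
           */
          static bool packet_source_matches(const struct client_conn *conn,
                                            const struct sockaddr_in *src)
          {
              return src->sin_family == conn->peer.sin_family &&
                     src->sin_port == conn->peer.sin_port &&
                     src->sin_addr.s_addr == conn->peer.sin_addr.s_addr;
          }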
    • rxrpc: Fix some sparse errors · 88b99d0b
      Committed by David Howells
      Fix the following sparse errors:
      
      ../net/rxrpc/conn_object.c:77:17: warning: incorrect type in assignment (different base types)
      ../net/rxrpc/conn_object.c:77:17:    expected restricted __be32 [usertype] call_id
      ../net/rxrpc/conn_object.c:77:17:    got unsigned int [unsigned] [usertype] call_id
      ../net/rxrpc/conn_object.c:84:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:86:26: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:357:15: warning: incorrect type in assignment (different base types)
      ../net/rxrpc/conn_object.c:357:15:    expected restricted __be32 [usertype] epoch
      ../net/rxrpc/conn_object.c:357:15:    got unsigned int [unsigned] [usertype] epoch
      ../net/rxrpc/conn_object.c:369:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:371:26: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:411:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:413:26: warning: restricted __be32 degrades to integer
      Signed-off-by: David Howells <dhowells@redhat.com>
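      The warnings above come from sparse's endianness checking: a host-order
      unsigned int is assigned to (or compared with) a field annotated __be32 without
      an explicit conversion. A small userspace illustration of the same issue and its
      fix, assuming the usual htonl()/ntohl() conventions (this is not the rxrpc code;
      the struct and field names are made up):

          #include <stdint.h>
          #include <stdio.h>
          #include <arpa/inet.h>

          /* In the kernel, __be32 is a sparse-annotated type; a plain typedef is
           * used here purely for illustration. */
          typedef uint32_t be32;

          struct wire_header {
              be32 call_id;    /* big-endian on the wire */
              be32 epoch;      /* big-endian on the wire */
          };

          int main(void)
          {
              uint32_t call_id = 0x1234;    /* host-order value */
              struct wire_header hdr;

              /* Wrong (what sparse flags): hdr.call_id = call_id; */
              /* Right: convert explicitly at the host/wire boundary. */
              hdr.call_id = htonl(call_id);
              hdr.epoch = htonl(2016);

              /* Comparisons likewise need a conversion on one side. */
              if (ntohl(hdr.call_id) == call_id)
                  printf("call_id round-trips correctly\n");
              return 0;
          }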
  2. 01 July 2016, 1 commit
    • rxrpc: Fix processing of authenticated/encrypted jumbo packets · ac5d2683
      Committed by David Howells
      When a jumbo packet is being split up and processed, the crypto checksum
      for each split-out packet is in the jumbo header and needs placing in the
      reconstructed packet header.
      
      When the code was changed to keep the stored copy of the packet header in
      host byte order, this reconstruction was missed.
      
      Found with sparse with CF=-D__CHECK_ENDIAN__:
      
          ../net/rxrpc/input.c:479:33: warning: incorrect type in assignment (different base types)
          ../net/rxrpc/input.c:479:33:    expected unsigned short [unsigned] [usertype] _rsvd
          ../net/rxrpc/input.c:479:33:    got restricted __be16 [addressable] [usertype] _rsvd
      
      Fixes: 0d12f8a4 ("rxrpc: Keep the skb private record of the Rx header in host byte order")
      Signed-off-by: David Howells <dhowells@redhat.com>
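      A small userspace sketch of the byte-order issue described above, using made-up
      structure layouts (not the real rxrpc headers): the per-packet checksum carried
      in the jumbo header is big-endian on the wire, while the stored copy of the
      packet header is kept in host byte order, so the value has to be converted when
      it is copied into the reconstructed header.

          #include <stdint.h>
          #include <stdio.h>
          #include <arpa/inet.h>

          struct jumbo_header {          /* hypothetical wire-format header */
              uint8_t  flags;
              uint8_t  pad;
              uint16_t cksum;            /* checksum for the next packet, big-endian */
          };

          struct host_header {           /* hypothetical host-order copy */
              uint16_t cksum;
          };

          int main(void)
          {
              struct jumbo_header jhdr = { .cksum = htons(0xabcd) };
              struct host_header hdr;

              /* The fix: convert from wire order into the host-order copy instead
               * of assigning the big-endian value directly. */
              hdr.cksum = ntohs(jhdr.cksum);

              printf("reconstructed checksum: 0x%04x\n", hdr.cksum);
              return 0;
          }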
  3. 27 June 2016, 32 commits
  4. 26 June 2016, 5 commits
    • Merge branch 'net-sched-bulk-dequeue' · e83e5bb1
      Committed by David S. Miller
      Eric Dumazet says:
      
      ====================
      net_sched: bulk dequeue and deferred drops
      
      The first patch adds an additional parameter to the ->enqueue() qdisc
      method so that drops can be done outside of the critical section
      (after locks are released).
      
      Then fq_codel gets a small optimization to reduce the number of
      cache-line misses during a drop event
      (possibly accumulating hundreds of packets to be freed).
      
      A small htb change exports the backlog in class dumps.
      
      Final patch adds bulk dequeue to qdiscs that were lacking this feature.
      
      This series brings a nice qdisc performance increase (more than 80 %
      in some cases).
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net_sched: generalize bulk dequeue · 4d202a0d
      Committed by Eric Dumazet
      When qdisc bulk dequeue was added in linux-3.18 (commit
      5772e9a3 "qdisc: bulk dequeue support for qdiscs
      with TCQ_F_ONETXQUEUE"), it was constrained to some
      specific qdiscs.
      
      With some extra care, we can extend this to all qdiscs,
      so that typical traffic shaping solutions can benefit from
      small batches (8 packets in this patch).
      
      For example, HTB is often used on a multi-queue device,
      and bonding/team are multi-queue devices...
      
      The idea is to bulk-dequeue packets that map to the same transmit queue.
      
      This brings a 35 to 80 % performance increase in an HTB setup
      under pressure on a bonding setup:
      
      1) NUMA node contention :   610,000 pps -> 1,110,000 pps
      2) No node contention   : 1,380,000 pps -> 1,930,000 pps
      
      Now we should work to add batches on the enqueue() side ;)
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: John Fastabend <john.r.fastabend@intel.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Florian Westphal <fw@strlen.de>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
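      A minimal userspace sketch of the bulk-dequeue idea, with hypothetical types
      standing in for struct sk_buff and the qdisc: keep pulling packets while they
      map to the same transmit queue as the first one, up to a small batch limit
      (8 packets in this patch), and hand the whole chain to the driver at once.

          #include <stddef.h>

          #define BULK_LIMIT 8    /* batch size used in the patch */

          struct pkt {                /* hypothetical stand-in for sk_buff */
              struct pkt *next;
              int txq;                /* transmit queue this packet maps to */
          };

          struct queue {
              struct pkt *head;
          };

          static struct pkt *queue_peek(struct queue *q)
          {
              return q->head;
          }

          static struct pkt *queue_dequeue(struct queue *q)
          {
              struct pkt *p = q->head;

              if (p)
                  q->head = p->next;
              return p;
          }

          /*
           * Dequeue up to BULK_LIMIT packets that all map to the same transmit
           * queue, chained together so they can be sent as one batch.
           */
          static struct pkt *bulk_dequeue(struct queue *q)
          {
              struct pkt *head = queue_dequeue(q);
              struct pkt *tail = head;
              int count = 1;

              if (!head)
                  return NULL;

              while (count < BULK_LIMIT) {
                  struct pkt *next = queue_peek(q);

                  /* Stop as soon as the next packet targets a different txq. */
                  if (!next || next->txq != head->txq)
                      break;

                  tail->next = queue_dequeue(q);
                  tail = tail->next;
                  count++;
              }
              tail->next = NULL;
              return head;
          }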
    • net_sched: sch_htb: export class backlog in dumps · 338ed9b4
      Committed by Eric Dumazet
      We already get the child qdisc qlen; we can also get its backlog
      so that class dumps can report it.
      
      Also replace qstats with a single drop counter, and move it to a
      separate cache line so that drops do not dirty useful cache lines.
      
      Tested:
      
      $ tc -s cl sh dev eth0
      class htb 1:1 root leaf 3: prio 0 rate 1Gbit ceil 1Gbit burst 500000b cburst 500000b
       Sent 2183346912 bytes 9021815 pkt (dropped 2340774, overlimits 0 requeues 0)
       rate 1001Mbit 517543pps backlog 120758b 499p requeues 0
       lended: 9021770 borrowed: 0 giants: 0
       tokens: 9 ctokens: 9
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
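      A minimal C11 sketch of the cache-line point above, with hypothetical field
      names: counters that are only touched on drops are placed on their own cache
      line, so a burst of drops does not dirty the line holding the fields used on
      every enqueue/dequeue (64 bytes is assumed as the cache-line size here).

          #include <stdalign.h>
          #include <stdint.h>

          #define CACHELINE_SIZE 64    /* assumption; typical on x86 */

          struct class_stats {
              /* Hot fields, touched on every enqueue/dequeue. */
              uint32_t qlen;       /* packets currently queued */
              uint32_t backlog;    /* bytes currently queued */

              /* Cold field, only written when packets are dropped; aligning it
               * to its own cache line keeps drop bursts from dirtying the hot
               * line above. */
              alignas(CACHELINE_SIZE) uint64_t drops;
          };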
    • net_sched: fq_codel: cache skb->truesize into skb->cb · 008830bc
      Committed by Eric Dumazet
      Now that we defer skb drops, it makes sense to keep a copy of
      skb->truesize in struct codel_skb_cb to avoid one cache-line miss
      per dropped skb in fq_codel_drop(), reducing latencies a bit further.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
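      A minimal sketch of the idea, with made-up names (not the actual fq_codel
      code): skb->truesize is copied into the per-packet control block while the
      skb is already hot at enqueue time, so the drop path can charge the memory
      usage from the cb area without touching the skb again.

          #include <stdint.h>

          struct fake_skb {              /* hypothetical stand-in for sk_buff */
              uint32_t truesize;         /* true memory footprint of the packet */
              char cb[48];               /* per-layer scratch space, like skb->cb[] */
          };

          struct codel_cb {              /* what the patch conceptually adds */
              uint32_t mem_usage;        /* cached copy of skb->truesize */
          };

          static struct codel_cb *get_codel_cb(struct fake_skb *skb)
          {
              return (struct codel_cb *)skb->cb;
          }

          /* Enqueue path: cache the truesize while the skb is already in cache. */
          static void cache_truesize(struct fake_skb *skb)
          {
              get_codel_cb(skb)->mem_usage = skb->truesize;
          }

          /* Drop path: read the cached value, avoiding a miss on the skb itself. */
          static uint32_t drop_charge(struct fake_skb *skb)
          {
              return get_codel_cb(skb)->mem_usage;
          }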
    • net_sched: drop packets after root qdisc lock is released · 520ac30f
      Committed by Eric Dumazet
      Qdisc performance suffers when packets are dropped at enqueue() time
      because the drops (kfree_skb()) are done while the qdisc lock is held,
      delaying a dequeue() from draining the queue.
      
      Nominal throughput can be reduced by 50 % when this happens, at a time
      when we would like the dequeue() to proceed as fast as possible.
      
      Even FQ is vulnerable to this problem, although one of FQ's goals was
      to provide some flow isolation.
      
      This patch adds a 'struct sk_buff **to_free' parameter to all
      qdisc->enqueue() methods and to the qdisc_drop() helper.
      
      I measured a performance increase of up to 12 %, but this patch is a
      prerequisite so that future batching in enqueue() can fly.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
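      A minimal userspace sketch of the deferred-drop pattern described above, with a
      pthread mutex standing in for the root qdisc lock and made-up types (this is
      not the kernel API itself): enqueue() no longer frees rejected packets under
      the lock; it chains them onto a caller-supplied to_free list, and the caller
      frees the whole chain only after the lock has been released.

          #include <pthread.h>
          #include <stdlib.h>

          struct pkt {                   /* hypothetical stand-in for sk_buff */
              struct pkt *next;
              char payload[256];
          };

          struct queue {
              struct pkt *head, *tail;
              unsigned int len, limit;
          };

          /* Analogue of qdisc_drop(skb, sch, &to_free) after this patch: chain the
           * packet for later freeing instead of freeing it under the lock. */
          static void defer_drop(struct pkt *p, struct pkt **to_free)
          {
              p->next = *to_free;
              *to_free = p;
          }

          /* enqueue() runs under the lock and stays cheap: no free() in here. */
          static int enqueue(struct queue *q, struct pkt *p, struct pkt **to_free)
          {
              if (q->len >= q->limit) {
                  defer_drop(p, to_free);
                  return -1;
              }
              p->next = NULL;
              if (q->tail)
                  q->tail->next = p;
              else
                  q->head = p;
              q->tail = p;
              q->len++;
              return 0;
          }

          /* Caller: take the lock, enqueue, drop the lock, then free the drops. */
          static void xmit_one(struct queue *q, pthread_mutex_t *lock, struct pkt *p)
          {
              struct pkt *to_free = NULL;

              pthread_mutex_lock(lock);
              enqueue(q, p, &to_free);
              pthread_mutex_unlock(lock);

              /* The expensive frees happen outside the critical section. */
              while (to_free) {
                  struct pkt *next = to_free->next;

                  free(to_free);
                  to_free = next;
              }
          }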