1. 21 Oct 2017 (9 commits)
  2. 20 Oct 2017 (2 commits)
  3. 19 Oct 2017 (1 commit)
  4. 18 Oct 2017 (1 commit)
    • net: sched: Convert timers to use timer_setup() · cdeabbb8
      Authored by Kees Cook
      In preparation for unconditionally passing the struct timer_list pointer to
      all timer callbacks, switch to using the new timer_setup() and from_timer()
      to pass the timer pointer explicitly. Add pointer back to Qdisc.
      
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Cc: Cong Wang <xiyou.wangcong@gmail.com>
      Cc: Jiri Pirko <jiri@resnulli.us>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: netdev@vger.kernel.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cdeabbb8
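      The from_timer() helper named in this commit is built on the kernel's
      container_of() pattern: the callback receives a pointer to the embedded
      timer and recovers the enclosing object from it. A minimal userspace
      sketch of that pattern follows; the struct names and the stripped-down
      timer_list here are illustrative stand-ins, not the kernel's real types.

      ```c
      #include <assert.h>
      #include <stddef.h>
      #include <stdio.h>

      /* Stand-in for the kernel's struct timer_list (the real one holds
       * expiry time, flags, and list linkage as well). */
      struct timer_list {
              void (*function)(struct timer_list *t);
      };

      /* container_of(): recover the enclosing structure from a pointer to
       * one of its members, via offsetof(). */
      #define container_of(ptr, type, member) \
              ((type *)((char *)(ptr) - offsetof(type, member)))

      /* from_timer() is container_of() specialized for timer callbacks. */
      #define from_timer(var, timer_ptr, field) \
              container_of(timer_ptr, typeof(*var), field)

      /* Illustrative qdisc-like owner embedding its timer. */
      struct my_qdisc {
              int id;
              struct timer_list timer;
      };

      static int fired_id;

      /* New-style callback: takes the timer pointer, finds its owner. */
      static void my_callback(struct timer_list *t)
      {
              struct my_qdisc *q = from_timer(q, t, timer);

              fired_id = q->id;
      }

      int main(void)
      {
              struct my_qdisc q = { .id = 42 };

              /* timer_setup() in the kernel would register the callback;
               * here we wire it up and fire it by hand. */
              q.timer.function = my_callback;
              q.timer.function(&q.timer);

              assert(fired_id == 42);
              printf("ok\n");
              return 0;
      }
      ```

      The point of the conversion is that the callback no longer receives an
      opaque unsigned long argument; the timer pointer itself is enough to
      reach the owning object, which is why the commit adds a pointer back to
      the Qdisc.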
  5. 17 Oct 2017 (8 commits)
  6. 15 Oct 2017 (1 commit)
  7. 14 Oct 2017 (1 commit)
    • mqprio: Introduce new hardware offload mode and shaper in mqprio · 4e8b86c0
      Authored by Amritha Nambiar
      mqprio currently supports two offload types, selected through the
      'hw' option: 0 (no offload) and 1 (offload only the TCs). When
      offload is enabled with 'hw' set to 1, the default offload mode is
      'dcb', where only the TC values are offloaded to the device. This
      patch introduces a new hardware offload mode called 'channel' (with
      'hw' set to 1) which makes full use of the mqprio options: the TCs,
      the queue configurations, and the QoS parameters for the TCs. The
      mode is selected through a new netlink attribute, 'mode', which
      takes values such as 'dcb' (the default) and 'channel'. The
      'channel' mode also supports QoS attributes for a traffic class,
      such as minimum and maximum bandwidth rate limits.
      
      This patch enables configuring additional HW shaper attributes
      associated with a traffic class. Currently a shaper for bandwidth
      rate limiting is supported; it takes minimum and maximum bandwidth
      rates, which are offloaded to the hardware in 'channel' mode. The
      user provides the min and max bandwidth rate limits along with the
      TCs and the queue configurations when creating the mqprio qdisc.
      The interface can be extended to support new HW shapers in the
      future through the 'shaper' attribute.
      
      The patch introduces a new data structure, 'tc_mqprio_qopt_offload',
      for offloading mqprio queue options; it is shared between the kernel
      and the device driver and contains a copy of the existing data
      structure for mqprio queue options. The new structure can be
      extended when adding traffic-class attributes such as mode, shaper,
      and shaper parameters (bandwidth rate limits). The existing data
      structure for mqprio queue options continues to be shared between
      the kernel and userspace.
      
      Example:
        queues 4@0 4@4 hw 1 mode channel shaper bw_rlimit\
        min_rate 1Gbit 2Gbit max_rate 4Gbit 5Gbit
      
      To dump the bandwidth rates:
      
      qdisc mqprio 804a: root  tc 2 map 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0
                   queues:(0:3) (4:7)
                   mode:channel
                   shaper:bw_rlimit   min_rate:1Gbit 2Gbit   max_rate:4Gbit 5Gbit
      Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      4e8b86c0
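      The offload structure this commit describes can be sketched in
      userspace C. This is a sketch, not the authoritative kernel layout:
      the array sizes, the flags the real structure may carry, and the exact
      field order are assumptions here, and the rates in the example mirror
      the commit's sample command (queues 4@0 4@4, min_rate 1Gbit 2Gbit,
      max_rate 4Gbit 5Gbit).

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Queue/TC limits assumed to mirror the kernel's qdisc options. */
      #define TC_QOPT_MAX_QUEUE 16
      #define TC_QOPT_BITMASK   15

      /* Existing structure shared between kernel and userspace,
       * reproduced here for illustration. */
      struct tc_mqprio_qopt {
              uint8_t  num_tc;
              uint8_t  prio_tc_map[TC_QOPT_BITMASK + 1];
              uint8_t  hw;
              uint16_t count[TC_QOPT_MAX_QUEUE];
              uint16_t offset[TC_QOPT_MAX_QUEUE];
      };

      /* Sketch of the new structure shared between kernel and driver;
       * the real one may carry additional fields (e.g. flags). */
      struct tc_mqprio_qopt_offload {
              struct tc_mqprio_qopt qopt;  /* copy of the uapi options */
              uint16_t mode;               /* e.g. dcb or channel */
              uint16_t shaper;             /* e.g. bw_rlimit */
              uint64_t min_rate[TC_QOPT_MAX_QUEUE];  /* bits per second */
              uint64_t max_rate[TC_QOPT_MAX_QUEUE];
      };

      static struct tc_mqprio_qopt_offload cfg;

      int main(void)
      {
              /* queues 4@0 4@4 hw 1, per-TC min/max bandwidth rates */
              cfg.qopt.num_tc = 2;
              cfg.qopt.hw = 1;
              cfg.qopt.count[0] = 4;  cfg.qopt.offset[0] = 0;
              cfg.qopt.count[1] = 4;  cfg.qopt.offset[1] = 4;
              cfg.min_rate[0] = 1000000000ULL;
              cfg.min_rate[1] = 2000000000ULL;
              cfg.max_rate[0] = 4000000000ULL;
              cfg.max_rate[1] = 5000000000ULL;

              for (int tc = 0; tc < cfg.qopt.num_tc; tc++)
                      printf("tc%d queues %u@%u min %llu max %llu\n", tc,
                             (unsigned)cfg.qopt.count[tc],
                             (unsigned)cfg.qopt.offset[tc],
                             (unsigned long long)cfg.min_rate[tc],
                             (unsigned long long)cfg.max_rate[tc]);
              return 0;
      }
      ```

      Because tc_mqprio_qopt is embedded as the first member, a driver that
      only understands the old options can still read them unchanged, while
      new attributes (mode, shaper, rates) extend the structure behind it.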
  8. 13 Oct 2017 (5 commits)
  9. 12 Oct 2017 (4 commits)
  10. 07 Oct 2017 (1 commit)
  11. 03 Oct 2017 (1 commit)
  12. 01 Oct 2017 (1 commit)
  13. 29 Sep 2017 (3 commits)
  14. 27 Sep 2017 (1 commit)
  15. 26 Sep 2017 (1 commit)
    • sch_netem: faster rb tree removal · 3aa605f2
      Authored by Eric Dumazet
      While running TCP tests involving netem storing millions of packets,
      I had the idea to speed up tfifo_reset() and did experiments.
      
      I tried the rbtree_postorder_for_each_entry_safe() method that is
      used in skb_rbtree_purge() but discovered it was slower than the
      current tfifo_reset() method.
      
      I measured the time taken to release skbs at three occupation
      levels (10^4, 10^5 and 10^6 skbs) with three methods:
      
      1) (current 'naive' method)
      
      	while ((p = rb_first(&q->t_root))) {
      		struct sk_buff *skb = netem_rb_to_skb(p);
      
      		rb_erase(p, &q->t_root);
      		rtnl_kfree_skbs(skb, skb);
      	}
      
      2) Use rb_next() instead of rb_first() in the loop:
      
      	p = rb_first(&q->t_root);
      	while (p) {
      		struct sk_buff *skb = netem_rb_to_skb(p);
      
      		p = rb_next(p);
      		rb_erase(&skb->rbnode, &q->t_root);
      		rtnl_kfree_skbs(skb, skb);
      	}
      
      3) "optimized" method using rbtree_postorder_for_each_entry_safe()
      
      	struct sk_buff *skb, *next;
      
      	rbtree_postorder_for_each_entry_safe(skb, next,
      					     &q->t_root, rbnode) {
      		rtnl_kfree_skbs(skb, skb);
      	}
      	q->t_root = RB_ROOT;
      
      Results:
      
      method_1:while (rb_first()) rb_erase() 10000 skbs in 690378 ns (69 ns per skb)
      method_2:rb_first; while (p) { p = rb_next(p); ...}  10000 skbs in 541846 ns (54 ns per skb)
      method_3:rbtree_postorder_for_each_entry_safe() 10000 skbs in 868307 ns (86 ns per skb)
      
      method_1:while (rb_first()) rb_erase() 99996 skbs in 7804021 ns (78 ns per skb)
      method_2:rb_first; while (p) { p = rb_next(p); ...}  100000 skbs in 5942456 ns (59 ns per skb)
      method_3:rbtree_postorder_for_each_entry_safe() 100000 skbs in 11584940 ns (115 ns per skb)
      
      method_1:while (rb_first()) rb_erase() 1000000 skbs in 108577838 ns (108 ns per skb)
      method_2:rb_first; while (p) { p = rb_next(p); ...}  1000000 skbs in 82619635 ns (82 ns per skb)
      method_3:rbtree_postorder_for_each_entry_safe() 1000000 skbs in 127328743 ns (127 ns per skb)
      
      Method 2) is simply faster, probably because it maintains a smaller
      working set.
      
      Note that this is the method we use in tcp_ofo_queue() already.
      
      I will also change skb_rbtree_purge() in a second patch.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3aa605f2
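      The discipline behind method 2, fetch the successor before the current
      node is erased and freed, so the cursor never touches freed memory and
      the walk never restarts from the root, applies to any intrusive
      structure. A minimal userspace sketch with a toy linked list (not
      kernel code; it shows only the advance-then-erase pattern, not the
      rbtree cost profile measured above):

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* Toy singly linked list standing in for the tree of queued skbs. */
      struct node {
              int key;
              struct node *next;
      };

      static struct node *head;

      static void push(int key)
      {
              struct node *n = malloc(sizeof(*n));

              n->key = key;
              n->next = head;
              head = n;
      }

      /* Teardown in the style of method 2: advance the cursor to the
       * successor BEFORE releasing the current element. */
      static int purge(void)
      {
              struct node *p = head;
              int freed = 0;

              while (p) {
                      struct node *next = p->next;  /* advance first */

                      free(p);                      /* then release */
                      freed++;
                      p = next;
              }
              head = NULL;
              return freed;
      }

      int main(void)
      {
              for (int i = 0; i < 1000; i++)
                      push(i);
              assert(purge() == 1000);
              assert(head == NULL);
              printf("ok\n");
              return 0;
      }
      ```

      In the rbtree case the same idea saves more than safety: rb_first()
      re-descends from the root on every iteration, while rb_next() from the
      current node is cheap, which is what the measurements above show.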