1. 07 Jun 2017, 1 commit
  2. 18 May 2017, 2 commits
  3. 09 May 2017, 1 commit
    • treewide: use kv[mz]alloc* rather than opencoded variants · 752ade68
      Committed by Michal Hocko
      There are many code paths opencoding kvmalloc.  Let's use the helper
      instead.  The main difference from kvmalloc is that those users usually
      do not consider all the aspects of the memory allocator.  E.g.
      allocation requests <= 32kB (with 4kB pages) basically never fail and
      invoke the OOM killer to satisfy the allocation.  This sounds too
      disruptive for something that has a reasonable fallback - vmalloc.
      On the other hand, those requests might fall back to vmalloc even when
      the memory allocator would have succeeded after several more
      reclaim/compaction attempts.  There is no guarantee something like that
      happens, though.
      
      This patch converts many of those places to kv[mz]alloc* helpers because
      they are more conservative.
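      
      A hedged sketch of what such a conversion typically looks like ('tbl'
      and 'size' are illustrative names, not taken from a specific hunk in
      this patch):
      
          /* before: opencoded fallback, easy to get the GFP details wrong */
          tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
          if (!tbl)
                  tbl = vzalloc(size);
      
          /* after: the helper owns the fallback policy */
          tbl = kvzalloc(size, GFP_KERNEL);
      
          /* either allocation is released the same way */
          kvfree(tbl);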
      
      Link: http://lkml.kernel.org/r/20170306103327.2766-2-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> # Xen bits
      Acked-by: Kees Cook <keescook@chromium.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Andreas Dilger <andreas.dilger@intel.com> # Lustre
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> # KVM/s390
      Acked-by: Dan Williams <dan.j.williams@intel.com> # nvdim
      Acked-by: David Sterba <dsterba@suse.com> # btrfs
      Acked-by: Ilya Dryomov <idryomov@gmail.com> # Ceph
      Acked-by: Tariq Toukan <tariqt@mellanox.com> # mlx4
      Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx5
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Anton Vorontsov <anton@enomsg.org>
      Cc: Colin Cross <ccross@android.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Ben Skeggs <bskeggs@redhat.com>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Cc: Santosh Raspatur <santosh@chelsio.com>
      Cc: Hariprasad S <hariprasad@chelsio.com>
      Cc: Yishai Hadas <yishaih@mellanox.com>
      Cc: Oleg Drokin <oleg.drokin@intel.com>
      Cc: "Yan, Zheng" <zyan@redhat.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      752ade68
  4. 25 Mar 2017, 1 commit
  5. 12 Feb 2017, 1 commit
    • net_sched: fix error recovery at qdisc creation · 87b60cfa
      Committed by Eric Dumazet
      Dmitry reported use-after-free bugs in the qdisc code [1].
      
      The problem here is that ops->init() can return an error.
      
      qdisc_create_dflt() then calls ops->destroy(),
      while qdisc_create() does _not_ call it.
      
      Four qdiscs chose to call their own ops->destroy(), assuming their
      caller would not.
      
      This patch makes sure qdisc_create() calls ops->destroy()
      and fixes the four qdiscs to avoid a double free.
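      
      A hedged sketch of the fixed error path in qdisc_create() (simplified;
      the surrounding bookkeeping and the err_out label are illustrative):
      
          err = ops->init(sch, tca[TCA_OPTIONS]);
          if (err != 0) {
                  /* undo whatever a partially failed init left behind,
                   * mirroring what qdisc_create_dflt() already did */
                  if (ops->destroy)
                          ops->destroy(sch);
                  goto err_out;
          }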
      
      [1]
      BUG: KASAN: use-after-free in mq_destroy+0x242/0x290 net/sched/sch_mq.c:33 at addr ffff8801d415d440
      Read of size 8 by task syz-executor2/5030
      CPU: 0 PID: 5030 Comm: syz-executor2 Not tainted 4.3.5-smp-DEV #119
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
       0000000000000046 ffff8801b435b870 ffffffff81bbbed4 ffff8801db000400
       ffff8801d415d440 ffff8801d415dc40 ffff8801c4988510 ffff8801b435b898
       ffffffff816682b1 ffff8801b435b928 ffff8801d415d440 ffff8801c49880c0
      Call Trace:
       [<ffffffff81bbbed4>] __dump_stack lib/dump_stack.c:15 [inline]
       [<ffffffff81bbbed4>] dump_stack+0x6c/0x98 lib/dump_stack.c:51
       [<ffffffff816682b1>] kasan_object_err+0x21/0x70 mm/kasan/report.c:158
       [<ffffffff81668524>] print_address_description mm/kasan/report.c:196 [inline]
       [<ffffffff81668524>] kasan_report_error+0x1b4/0x4b0 mm/kasan/report.c:285
       [<ffffffff81668953>] kasan_report mm/kasan/report.c:305 [inline]
       [<ffffffff81668953>] __asan_report_load8_noabort+0x43/0x50 mm/kasan/report.c:326
       [<ffffffff82527b02>] mq_destroy+0x242/0x290 net/sched/sch_mq.c:33
       [<ffffffff82524bdd>] qdisc_destroy+0x12d/0x290 net/sched/sch_generic.c:953
       [<ffffffff82524e30>] qdisc_create_dflt+0xf0/0x120 net/sched/sch_generic.c:848
       [<ffffffff8252550d>] attach_default_qdiscs net/sched/sch_generic.c:1029 [inline]
       [<ffffffff8252550d>] dev_activate+0x6ad/0x880 net/sched/sch_generic.c:1064
       [<ffffffff824b1db1>] __dev_open+0x221/0x320 net/core/dev.c:1403
       [<ffffffff824b24ce>] __dev_change_flags+0x15e/0x3e0 net/core/dev.c:6858
       [<ffffffff824b27de>] dev_change_flags+0x8e/0x140 net/core/dev.c:6926
       [<ffffffff824f5bf6>] dev_ifsioc+0x446/0x890 net/core/dev_ioctl.c:260
       [<ffffffff824f61fa>] dev_ioctl+0x1ba/0xb80 net/core/dev_ioctl.c:546
       [<ffffffff82430509>] sock_do_ioctl+0x99/0xb0 net/socket.c:879
       [<ffffffff82430d30>] sock_ioctl+0x2a0/0x390 net/socket.c:958
       [<ffffffff816f3b68>] vfs_ioctl fs/ioctl.c:44 [inline]
       [<ffffffff816f3b68>] do_vfs_ioctl+0x8a8/0xe50 fs/ioctl.c:611
       [<ffffffff816f41a4>] SYSC_ioctl fs/ioctl.c:626 [inline]
       [<ffffffff816f41a4>] SyS_ioctl+0x94/0xc0 fs/ioctl.c:617
       [<ffffffff8123e357>] entry_SYSCALL_64_fastpath+0x12/0x17
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      87b60cfa
  6. 11 Feb 2017, 1 commit
  7. 26 Jun 2016, 1 commit
    • net_sched: drop packets after root qdisc lock is released · 520ac30f
      Committed by Eric Dumazet
      Qdisc performance suffers when packets are dropped at enqueue()
      time because drops (kfree_skb()) are done while the qdisc lock is held,
      delaying a dequeue() from draining the queue.
      
      Nominal throughput can be reduced by 50% when this happens,
      at a time when we would like the dequeue() to proceed as fast as possible.
      
      Even FQ is vulnerable to this problem, even though one of FQ's goals
      was to provide some flow isolation.
      
      This patch adds a 'struct sk_buff **to_free' parameter to all
      qdisc->enqueue() implementations and to the qdisc_drop() helper.
      
      I measured a performance increase of up to 12%, but this patch
      is mainly a prerequisite so that future batching in enqueue() can fly.
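      
      A hedged sketch of the new calling convention on the transmit path
      (simplified; locking and return-code handling are trimmed):
      
          struct sk_buff *to_free = NULL;
          spinlock_t *root_lock = qdisc_lock(q);
      
          spin_lock(root_lock);
          rc = q->enqueue(skb, q, &to_free);   /* drops only chain skbs onto to_free */
          spin_unlock(root_lock);
      
          if (unlikely(to_free))
                  kfree_skb_list(to_free);     /* the actual freeing runs without the lock */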
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      520ac30f
  8. 16 Jun 2016, 1 commit
  9. 09 Jun 2016, 1 commit
  10. 01 Mar 2016, 1 commit
  11. 28 Aug 2015, 1 commit
    • net: sched: consolidate tc_classify{,_compat} · 3b3ae880
      Committed by Daniel Borkmann
      For classifiers invoked via tc_classify(), we always need an
      extra function call into tc_classify_compat(), as both are
      exported as symbols and tc_classify() itself doesn't do much except
      handle reclassifications when tp->classify() returned
      TC_ACT_RECLASSIFY.
      
      CBQ and ATM are the only qdiscs that call directly into tc_classify_compat();
      all others use tc_classify(). When tc actions are configured
      out of the kernel, tc_classify() effectively does nothing besides
      delegating.
      
      We can spare this layer and consolidate both functions. With pktgen on a
      single CPU constantly pushing skbs directly into the netif_receive_skb()
      path and a dummy classifier attached to the ingress qdisc, throughput
      improves slightly from 22.3 Mpps to 23.1 Mpps.
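      
      A hedged sketch of the consolidated classifier walk (simplified; the
      reclassification cap of 4 is the usual guard value, and details may
      differ from the merged function):
      
          const struct tcf_proto *orig_tp = tp;
          int limit = 0;
      
      reclassify:
          for (; tp; tp = rcu_dereference_bh(tp->next)) {
                  int err = tp->classify(skb, tp, res);
      
                  if (err >= 0) {
                          if (unlikely(err == TC_ACT_RECLASSIFY)) {
                                  if (++limit > 4)     /* avoid reclassification loops */
                                          return TC_ACT_SHOT;
                                  tp = orig_tp;        /* restart from the first classifier */
                                  goto reclassify;
                          }
                          return err;
                  }
          }
          return TC_ACT_UNSPEC;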
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3b3ae880
  12. 16 Jul 2015, 1 commit
  13. 04 May 2015, 1 commit
  14. 30 Sep 2014, 3 commits
  15. 14 Sep 2014, 2 commits
  16. 10 Sep 2014, 1 commit
  17. 05 Jun 2014, 1 commit
  18. 15 Jan 2014, 1 commit
  19. 11 Dec 2013, 1 commit
  20. 02 Apr 2012, 1 commit
  21. 16 Mar 2012, 1 commit
    • sch_sfq: revert dont put new flow at the end of flows · cc34eb67
      Committed by Eric Dumazet
      This reverts commit d47a0ac7 (sch_sfq: dont put new flow at the end of
      flows)
      
      As Jesper found out, the patch sounded great but has bad side effects.
      
      Under stress, pushing new flows to the front of the queue can prevent
      old flows from making any progress. Packets can stay in the SFQ queue
      for an unlimited amount of time.
      
      It's possible to add heuristics to limit this problem, but this would
      add complexity outside of SFQ's scope.
      
      A more sensible answer to Dave Taht's concerns (he reported the issue I
      tried to solve in the original commit) is probably to use a qdisc
      hierarchy so that high-priority packets don't enter a potentially
      crowded SFQ qdisc.
      Reported-by: Jesper Dangaard Brouer <jdb@comx.dk>
      Cc: Dave Taht <dave.taht@gmail.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cc34eb67
  22. 10 Feb 2012, 1 commit
  23. 07 Feb 2012, 1 commit
  24. 13 Jan 2012, 1 commit
    • net_sched: sfq: add optional RED on top of SFQ · ddecf0f4
      Committed by Eric Dumazet
      Adds an optional Random Early Detection on each SFQ flow queue.
      
      Traditional SFQ limits the count of packets, while RED also permits
      controlling the number of bytes per flow, and adds ECN capability as well.
      
      1) We don't handle idle time management in this RED implementation,
      since each 'new flow' begins with a null qavg. We really want to address
      backlogged flows.
      
      2) If headdrop is selected, we try to ECN-mark the first packet of the
      flow instead of the currently enqueued packet (see the sketch after the
      examples below). This gives faster feedback for TCP flows compared to
      traditional RED [marking the last packet in the queue].
      
      Example of use:
      
      tc qdisc add dev $DEV parent 1:1 handle 10: est 1sec 4sec sfq \
      	limit 3000 headdrop flows 512 divisor 16384 \
      	redflowlimit 100000 min 8000 max 60000 probability 0.20 ecn
      
      qdisc sfq 10: parent 1:1 limit 3000p quantum 1514b depth 127 headdrop
      flows 512/16384 divisor 16384
       ewma 6 min 8000b max 60000b probability 0.2 ecn
       prob_mark 0 prob_mark_head 4876 prob_drop 6131
       forced_mark 0 forced_mark_head 0 forced_drop 0
       Sent 1175211782 bytes 777537 pkt (dropped 6131, overlimits 11007
      requeues 0)
       rate 99483Kbit 8219pps backlog 689392b 456p requeues 0
      
      In this test, with 64 netperf TCP_STREAM sessions, 50% of them using
      ECN-enabled flows, we can see that the number of CE-marked packets is
      smaller than the number of drops (for non-ECN flows).
      
      If the same test is run without RED, we can see that the backlog is
      much bigger.
      
      qdisc sfq 10: parent 1:1 limit 3000p quantum 1514b depth 127 headdrop
      flows 512/16384 divisor 16384
       Sent 1148683617 bytes 795006 pkt (dropped 0, overlimits 0 requeues 0)
       rate 98429Kbit 8521pps backlog 1221290b 841p requeues 0
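      
      A hedged sketch of point (2) above (simplified; 'slot_head_skb' is an
      illustrative helper name, and the real code keeps separate forced/prob
      counters and falls back to dropping when marking is not possible):
      
          switch (red_action(&q->red_parms, &slot->vars, slot->vars.qavg)) {
          case RED_PROB_MARK:
                  if (sfq_headdrop(q) && INET_ECN_set_ce(slot_head_skb(slot)))
                          q->stats.prob_mark_head++;  /* CE reaches the sender one queue earlier */
                  else if (INET_ECN_set_ce(skb))
                          q->stats.prob_mark++;       /* fall back to marking the new packet */
                  break;
          }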
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Stephen Hemminger <shemminger@vyatta.com>
      CC: Dave Taht <dave.taht@gmail.com>
      Tested-by: Dave Taht <dave.taht@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ddecf0f4
  25. 06 Jan 2012, 1 commit
    • net_sched: sfq: extend limits · 18cb8098
      Committed by Eric Dumazet
      SFQ as implemented in Linux is very limited, with at most 127 flows
      and a limit of 127 packets. [ So if 127 flows are active, we have one
      packet per flow. ]
      
      This patch brings the following features to SFQ to cope with modern needs.
      
      - Ability to specify a smaller per-flow limit of in-flight packets
          (default value being 127 packets)
      
      - Ability to have up to 65408 active flows (instead of 127)
      
      - Ability to have head drops instead of tail drops
        (to drop old packets from a flow)
      
      Example of use: no more than 20 packets per flow, max 8000 flows, max
      20000 packets in the SFQ qdisc, hash table of 65536 slots.
      
      tc qdisc add ... sfq \
              flows 8000 \
              depth 20 \
              headdrop \
              limit 20000 \
              divisor 65536
      
      RAM usage:
      
      2 bytes per hash table entry (instead of the previous 1 byte/entry);
      32 bytes per flow on 64-bit arches, instead of 384 for QFQ, so a much
      better cache hit ratio.
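      
      A hedged sketch of the data layout behind those numbers (field names
      simplified; the real per-flow struct has a couple more members but
      still fits in 32 bytes on 64-bit):
      
          typedef u16 sfq_index;            /* was an 8-bit index: hash entries grow to 2 bytes */
      
          struct sfq_slot {                 /* one per active flow, kept small for cache hits */
                  struct sk_buff *skblist_next;
                  struct sk_buff *skblist_prev;
                  sfq_index       qlen;     /* packets queued in this flow */
                  sfq_index       next;     /* next slot in the round-robin chain */
                  u16             hash;     /* position in the hash table */
                  short           allot;    /* byte credit for this round */
          };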
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Dave Taht <dave.taht@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      18cb8098
  26. 05 Jan 2012, 2 commits
  27. 04 Jan 2012, 1 commit
  28. 22 Dec 2011, 1 commit
    • sch_sfq: rehash queues in perturb timer · 225d9b89
      Committed by Eric Dumazet
      A known Out Of Order (OOO) problem hurts SFQ when the timer changes the
      perturbation value, since all new packets delivered to SFQ enqueue might
      end up in different slots than previous in-flight packets.
      
      With round-robin delivery, we can thus deliver packets in a different
      order.
      
      Since SFQ is limited to a small amount of in-flight packets, we can
      rehash packets so that this OOO problem is fixed.
      
      This rehashing is performed only if the internal flow classifier is in
      use.
      
      We now store the "struct flow_keys" in skb->cb[] so that we don't call
      skb_flow_dissect() again while rehashing.
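      
      A hedged sketch of the cached-dissection idea (simplified; the real
      hash also mixes in the IP protocol, and the cb[] accessor carries the
      usual qdisc cb boilerplate):
      
          struct sfq_skb_cb {
                  struct flow_keys keys;   /* filled once, at enqueue time */
          };
      
          /* enqueue: dissect once and remember the result in skb->cb[] */
          skb_flow_dissect(skb, &sfq_skb_cb(skb)->keys);
      
          /* perturb timer: recompute the slot from the cached keys only */
          const struct flow_keys *keys = &sfq_skb_cb(skb)->keys;
          unsigned int hash = jhash_3words((__force u32)keys->dst,
                                           (__force u32)keys->src,
                                           (__force u32)keys->ports,
                                           q->perturbation) & (q->divisor - 1);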
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      225d9b89
  29. 30 Nov 2011, 1 commit
  30. 01 Aug 2011, 1 commit
  31. 22 Jun 2011, 1 commit
  32. 26 May 2011, 1 commit
  33. 24 May 2011, 1 commit
    • sch_sfq: avoid giving spurious NET_XMIT_CN signals · 8efa8854
      Committed by Eric Dumazet
      While chasing a possible net_sched bug, I found that IP fragments have
      little chance to pass through a congested SFQ qdisc:
      
      - Say the SFQ qdisc is full because one flow is non-responsive.
      - ip_fragment() wants to send two fragments belonging to an idle flow.
      - sfq_enqueue() queues the first packet, but sees the queue limit reached:
      - sfq_enqueue() drops one packet from the 'big consumer', and returns
      NET_XMIT_CN.
      - ip_fragment() cancels the remaining fragments.
      
      This patch restores fairness, making sure we return NET_XMIT_CN only if
      we dropped a packet from the same flow.
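      
      A hedged sketch of the fixed tail of sfq_enqueue() (simplified; 'slot'
      is the flow the new packet was queued into):
      
          if (++sch->q.qlen <= q->limit)
                  return NET_XMIT_SUCCESS;
      
          qlen = slot->qlen;                 /* this flow's depth before the eviction */
          sfq_drop(sch);                     /* evicts a packet from the fattest flow */
      
          /* signal congestion only if the drop actually hit this flow */
          return qlen != slot->qlen ? NET_XMIT_CN : NET_XMIT_SUCCESS;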
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McHardy <kaber@trash.net>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      CC: Jamal Hadi Salim <hadi@cyberus.ca>
      CC: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8efa8854
  34. 23 Apr 2011, 1 commit
  35. 03 Feb 2011, 1 commit