1. 12 December 2017: 12 commits
  2. 11 December 2017: 4 commits
  3. 09 December 2017: 13 commits
  4. 08 December 2017: 3 commits
  5. 07 December 2017: 2 commits
  6. 06 December 2017: 6 commits
    • drm: safely free connectors from connector_iter · a703c550
      Committed by Daniel Vetter
      In
      
      commit 613051da
      Author: Daniel Vetter <daniel.vetter@ffwll.ch>
      Date:   Wed Dec 14 00:08:06 2016 +0100
      
          drm: locking&new iterators for connector_list
      
      we went to extreme lengths to make sure connector iteration works
      in any context, without introducing any additional locking requirements.
      This worked, except for a small fumble in the implementation:
      
      When we actually race with a concurrent connector unplug event, and
      our temporary connector reference turns out to be the final one,
      everything breaks: we call the connector release function from
      whatever context we happen to be in, which can be an irq/atomic
      context, and connector freeing grabs all kinds of locks that must
      not be taken there.
      
      Fix this by creating a specially safe put function for connector_iter,
      which (in this rare case) punts the cleanup to a worker; a sketch of
      the idea follows at the end of this entry.
      Reported-by: Ben Widawsky <ben@bwidawsk.net>
      Cc: Ben Widawsky <ben@bwidawsk.net>
      Fixes: 613051da ("drm: locking&new iterators for connector_list")
      Cc: Dave Airlie <airlied@gmail.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Sean Paul <seanpaul@chromium.org>
      Cc: <stable@vger.kernel.org> # v4.11+
      Reviewed-by: Dave Airlie <airlied@gmail.com>
      Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20171204204818.24745-1-daniel.vetter@ffwll.ch
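      The deferred-free approach described above can be sketched roughly as
      follows. This is a minimal sketch, not the actual drm_connector code:
      struct my_connector and the my_connector_* functions are hypothetical
      stand-ins for the kref/worker plumbing the DRM core keeps internally.

      #include <linux/kernel.h>
      #include <linux/kref.h>
      #include <linux/slab.h>
      #include <linux/workqueue.h>

      struct my_connector {
              struct kref refcount;
              struct work_struct free_work;
              /* ... driver state ... */
      };

      static void my_connector_free_work_fn(struct work_struct *work)
      {
              struct my_connector *conn =
                      container_of(work, struct my_connector, free_work);

              /* The worker runs in process context: sleeping and taking
               * mutexes is fine here.
               */
              kfree(conn);
      }

      static void my_connector_release(struct kref *kref)
      {
              struct my_connector *conn =
                      container_of(kref, struct my_connector, refcount);

              /* This may run in irq/atomic context when the iterator held
               * the final reference, so punt the real cleanup to a worker
               * instead of freeing (and taking locks) right here.
               */
              INIT_WORK(&conn->free_work, my_connector_free_work_fn);
              schedule_work(&conn->free_work);
      }

      /* What the iterator calls instead of the plain put. */
      static void my_connector_put_safe(struct my_connector *conn)
      {
              kref_put(&conn->refcount, my_connector_release);
      }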
    • net_sched: remove unused parameter from act cleanup ops · 9a63b255
      Committed by Cong Wang
      No one actually uses it.
      
      Cc: Jiri Pirko <jiri@mellanox.com>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: remove hlist_nulls_add_tail_rcu() · d7efc6c1
      Committed by Eric Dumazet
      Alexander Potapenko reported a use of uninitialized memory [1].
      
      This happens when inserting a request socket into TCP ehash,
      in __sk_nulls_add_node_rcu(), since sk_reuseport is not initialized.
      
      The bug was added by commit d894ba18 ("soreuseport: fix ordering for
      mixed v4/v6 sockets").
      
      Note that d296ba60 ("soreuseport: Resolve merge conflict for v4/v6
      ordering fix") missed the opportunity to get rid of
      hlist_nulls_add_tail_rcu():
      
      Neither UDP sockets nor TCP/DCCP listeners use
      __sk_nulls_add_node_rcu() for their hash insertion any more.
      
      Since all other sockets have a unique 4-tuple, the reuseport status
      has no special meaning for them, so we can always use
      hlist_nulls_add_head_rcu() and save a few cycles/instructions (see
      the sketch after this entry).
      
      [1]
      
      ==================================================================
      BUG: KMSAN: use of uninitialized memory in inet_ehash_insert+0xd40/0x1050
      CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.13.0+ #3288
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
      Call Trace:
       <IRQ>
       __dump_stack lib/dump_stack.c:16
       dump_stack+0x185/0x1d0 lib/dump_stack.c:52
       kmsan_report+0x13f/0x1c0 mm/kmsan/kmsan.c:1016
       __msan_warning_32+0x69/0xb0 mm/kmsan/kmsan_instr.c:766
       __sk_nulls_add_node_rcu ./include/net/sock.h:684
       inet_ehash_insert+0xd40/0x1050 net/ipv4/inet_hashtables.c:413
       reqsk_queue_hash_req net/ipv4/inet_connection_sock.c:754
       inet_csk_reqsk_queue_hash_add+0x1cc/0x300 net/ipv4/inet_connection_sock.c:765
       tcp_conn_request+0x31e7/0x36f0 net/ipv4/tcp_input.c:6414
       tcp_v4_conn_request+0x16d/0x220 net/ipv4/tcp_ipv4.c:1314
       tcp_rcv_state_process+0x42a/0x7210 net/ipv4/tcp_input.c:5917
       tcp_v4_do_rcv+0xa6a/0xcd0 net/ipv4/tcp_ipv4.c:1483
       tcp_v4_rcv+0x3de0/0x4ab0 net/ipv4/tcp_ipv4.c:1763
       ip_local_deliver_finish+0x6bb/0xcb0 net/ipv4/ip_input.c:216
       NF_HOOK ./include/linux/netfilter.h:248
       ip_local_deliver+0x3fa/0x480 net/ipv4/ip_input.c:257
       dst_input ./include/net/dst.h:477
       ip_rcv_finish+0x6fb/0x1540 net/ipv4/ip_input.c:397
       NF_HOOK ./include/linux/netfilter.h:248
       ip_rcv+0x10f6/0x15c0 net/ipv4/ip_input.c:488
       __netif_receive_skb_core+0x36f6/0x3f60 net/core/dev.c:4298
       __netif_receive_skb net/core/dev.c:4336
       netif_receive_skb_internal+0x63c/0x19c0 net/core/dev.c:4497
       napi_skb_finish net/core/dev.c:4858
       napi_gro_receive+0x629/0xa50 net/core/dev.c:4889
       e1000_receive_skb drivers/net/ethernet/intel/e1000/e1000_main.c:4018
       e1000_clean_rx_irq+0x1492/0x1d30 drivers/net/ethernet/intel/e1000/e1000_main.c:4474
       e1000_clean+0x43aa/0x5970 drivers/net/ethernet/intel/e1000/e1000_main.c:3819
       napi_poll net/core/dev.c:5500
       net_rx_action+0x73c/0x1820 net/core/dev.c:5566
       __do_softirq+0x4b4/0x8dd kernel/softirq.c:284
       invoke_softirq kernel/softirq.c:364
       irq_exit+0x203/0x240 kernel/softirq.c:405
       exiting_irq+0xe/0x10 ./arch/x86/include/asm/apic.h:638
       do_IRQ+0x15e/0x1a0 arch/x86/kernel/irq.c:263
       common_interrupt+0x86/0x86
      
      Fixes: d894ba18 ("soreuseport: fix ordering for mixed v4/v6 sockets")
      Fixes: d296ba60 ("soreuseport: Resolve merge conflict for v4/v6 ordering fix")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Alexander Potapenko <glider@google.com>
      Acked-by: Craig Gallek <kraig@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
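      After the change, the ehash insert helper no longer needs to look at
      sk->sk_reuseport at all. A rough sketch of the simplified helper, not
      the verbatim patch:

      /* include/net/sock.h (sketch): every socket is now inserted at the
       * head of its nulls-hash chain, so the possibly uninitialized
       * sk->sk_reuseport field of a request socket is never read here.
       */
      static inline void __sk_nulls_add_node_rcu(struct sock *sk,
                                                 struct hlist_nulls_head *list)
      {
              hlist_nulls_add_head_rcu(&sk->sk_nulls_node, list);
      }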
    • net: dsa: return per-port upstream port · 07073c79
      Committed by Vivien Didelot
      The current dsa_upstream_port() helper still assumes a single CPU port
      for the whole switch fabric. This is no longer correct: every port in
      the fabric now has its own dedicated CPU port, and thus its own
      upstream port.
      
      Add a port argument to the dsa_upstream_port() helper and fetch that
      port's CPU port instead of the deprecated fabric-wide CPU port. A CPU
      or unused port has no dedicated CPU port, so return the port itself in
      that case (see the sketch after this entry).
      
      At the same time, change the return type from u8 to unsigned int, since
      there is no need to limit its size here.
      Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
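      A sketch of the reworked helper along the lines described above,
      assuming the dsa_port layout of that time (ds->ports[], dp->cpu_dp,
      dsa_towards_port()); illustrative only, not a verbatim copy of the
      patch:

      static inline unsigned int dsa_upstream_port(struct dsa_switch *ds,
                                                   int port)
      {
              struct dsa_port *cpu_dp = ds->ports[port].cpu_dp;

              /* CPU and unused ports have no dedicated CPU port; they are
               * their own upstream port.
               */
              if (!cpu_dp)
                      return port;

              /* Otherwise, head towards this port's dedicated CPU port. */
              return dsa_towards_port(ds, cpu_dp->ds->index, cpu_dp->index);
      }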
    • net: sched: sch_api: fix code style issues · 0ac4bd68
      Committed by Alexander Aring
      This patch fixes checkpatch issues in the sched API file ahead of
      upcoming patches. It changes NULL-pointer checks to the preferred
      form, removes unnecessary brackets, adds variable names to prototype
      parameters, and keeps lines within the 80-character limit (an
      illustrative example follows after this entry).
      
      Cc: David Ahern <dsahern@gmail.com>
      Signed-off-by: Alexander Aring <aring@mojatatu.com>
      Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
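      Purely illustrative (these are not lines from the patch; qdisc_example,
      example_before and example_after are made-up names), the kinds of
      cleanups checkpatch asks for here look like this:

      #include <linux/errno.h>
      #include <linux/netlink.h>

      struct Qdisc;

      /* Before: "== NULL" check, braces around a single statement, and an
       * unnamed parameter in the prototype.
       */
      int qdisc_example(struct Qdisc *, struct nlattr **);

      static int example_before(struct nlattr *opt)
      {
              if (opt == NULL) {
                      return -EINVAL;
              }
              return 0;
      }

      /* After: "!ptr" check, braces dropped for the single statement,
       * prototype parameters named, lines kept within 80 columns.
       */
      int qdisc_example(struct Qdisc *sch, struct nlattr **tca);

      static int example_after(struct nlattr *opt)
      {
              if (!opt)
                      return -EINVAL;
              return 0;
      }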
    • net_sched: get rid of rcu_barrier() in tcf_block_put_ext() · efbf7897
      Committed by Cong Wang
      Both Eric and Paolo noticed the rcu_barrier() we use in
      tcf_block_put_ext() could be a performance bottleneck when
      we have a lot of tc classes.
      
      Paolo provided the following to demonstrate the issue:
      
      tc qdisc add dev lo root htb
      for I in `seq 1 1000`; do
              tc class add dev lo parent 1: classid 1:$I htb rate 100kbit
              tc qdisc add dev lo parent 1:$I handle $((I + 1)): htb
              for J in `seq 1 10`; do
                      tc filter add dev lo parent $((I + 1)): u32 match ip src 1.1.1.$J
              done
      done
      time tc qdisc del dev lo root
      
      real    0m54.764s
      user    0m0.023s
      sys     0m0.000s
      
      The rcu_barrier() there ensures we free the block only after all chains
      are gone, that is, it queues tcf_block_put_final() at the tail of the
      workqueue. We can achieve the same ordering by refcounting the tcf
      block instead: the block is freed only when the last chain in it is
      gone. This also simplifies the code (a sketch of the idea follows
      after this entry).
      
      Paolo reported after this patch we get:
      
      real    0m0.017s
      user    0m0.000s
      sys     0m0.017s
      Tested-by: Paolo Abeni <pabeni@redhat.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Jiri Pirko <jiri@mellanox.com>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
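      A minimal sketch of the refcounting idea, using hypothetical
      my_block/my_chain structures rather than the actual tcf_block code.
      The point is only that the block's memory is released by whichever
      chain goes away last, which is the ordering rcu_barrier() used to
      guarantee:

      #include <linux/list.h>
      #include <linux/slab.h>

      struct my_block {
              struct list_head chain_list;
              unsigned int refcnt;            /* one reference per chain */
      };

      struct my_chain {
              struct list_head list;
              struct my_block *block;
      };

      /* Assumes RTNL-style serialization, so a plain counter is enough. */
      static void my_block_put(struct my_block *block)
      {
              if (--block->refcnt == 0)
                      kfree(block);           /* last chain is gone */
      }

      static struct my_chain *my_chain_create(struct my_block *block)
      {
              struct my_chain *chain = kzalloc(sizeof(*chain), GFP_KERNEL);

              if (!chain)
                      return NULL;
              chain->block = block;
              list_add_tail(&chain->list, &block->chain_list);
              block->refcnt++;                /* chain keeps the block alive */
              return chain;
      }

      static void my_chain_destroy(struct my_chain *chain)
      {
              struct my_block *block = chain->block;

              list_del(&chain->list);
              kfree(chain);
              my_block_put(block);            /* may free the block here */
      }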