1. 04 Apr 2017: 1 commit
  2. 02 Apr 2017: 7 commits
• l2tp: take a reference on sessions used in genetlink handlers · 2777e2ab
  Authored by Guillaume Nault
      Callers of l2tp_nl_session_find() need to hold a reference on the
      returned session since there's no guarantee that it isn't going to
      disappear from under them.
      
      Relying on the fact that no l2tp netlink message may be processed
      concurrently isn't enough: sessions can be deleted by other means
      (e.g. by closing the PPPOL2TP socket of a ppp pseudowire).
      
      l2tp_nl_cmd_session_delete() is a bit special: it runs a callback
      function that may require a previous call to session->ref(). In
      particular, for ppp pseudowires, the callback is l2tp_session_delete(),
      which then calls pppol2tp_session_close() and dereferences the PPPOL2TP
socket. The socket might already be gone by the time
l2tp_session_delete() calls session->ref(), so the reference has to be
taken during the session lookup. This requires passing the do_ref
variable down to l2tp_session_get() and l2tp_session_get_by_ifname().
      
      Since all callers have to be updated, l2tp_session_find_by_ifname() and
      l2tp_nl_session_find() are renamed to reflect their new behaviour.
      
      Fixes: 309795f4 ("l2tp: Add netlink control API for L2TP")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
      2777e2ab
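
The pattern described above, never handing out a session pointer without first bumping its refcount under the lookup lock, is generic. Below is a minimal, self-contained userspace C sketch of that lookup-and-hold idiom; the names (session_get_by_ifname, session_put, the list layout) are hypothetical stand-ins for illustration, not the kernel's l2tp API:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct session {
        struct session *next;
        char ifname[16];
        int refcnt;                 /* protected by list_lock */
    };

    static struct session *sessions;
    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Look up a session and take a reference while still holding the
     * lookup lock, so the session cannot vanish between lookup and use. */
    static struct session *session_get_by_ifname(const char *ifname)
    {
        struct session *s;

        pthread_mutex_lock(&list_lock);
        for (s = sessions; s; s = s->next) {
            if (!strcmp(s->ifname, ifname)) {
                s->refcnt++;        /* hold it before dropping the lock */
                break;
            }
        }
        pthread_mutex_unlock(&list_lock);
        return s;                   /* NULL if not found */
    }

    static void session_put(struct session *s)
    {
        int free_it;

        pthread_mutex_lock(&list_lock);
        free_it = (--s->refcnt == 0);
        pthread_mutex_unlock(&list_lock);
        if (free_it)
            free(s);
    }

    int main(void)
    {
        struct session *s = calloc(1, sizeof(*s));

        strcpy(s->ifname, "l2tpeth0");
        s->refcnt = 1;              /* reference held by the list itself */
        sessions = s;

        struct session *found = session_get_by_ifname("l2tpeth0");
        if (found) {
            printf("using %s (refcnt now %d)\n", found->ifname, found->refcnt);
            session_put(found);     /* drop the reference when done */
        }
        return 0;
    }
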
• l2tp: hold session while sending creation notifications · 5e6a9e5a
  Authored by Guillaume Nault
      l2tp_session_find() doesn't take any reference on the returned session.
      Therefore, the session may disappear while sending the notification.
      
      Use l2tp_session_get() instead and decrement session's refcount once
      the notification is sent.
      
      Fixes: 33f72e6f ("l2tp : multicast notification to the registered listeners")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
      5e6a9e5a
• l2tp: fix duplicate session creation · dbdbc73b
  Authored by Guillaume Nault
      l2tp_session_create() relies on its caller for checking for duplicate
      sessions. This is racy since a session can be concurrently inserted
      after the caller's verification.
      
      Fix this by letting l2tp_session_create() verify sessions uniqueness
      upon insertion. Callers need to be adapted to check for
      l2tp_session_create()'s return code instead of calling
      l2tp_session_find().
      
      pppol2tp_connect() is a bit special because it has to work on existing
      sessions (if they're not connected) or to create a new session if none
      is found. When acting on a preexisting session, a reference must be
      held or it could go away on us. So we have to use l2tp_session_get()
      instead of l2tp_session_find() and drop the reference before exiting.
      
      Fixes: d9e31d17 ("l2tp: Add L2TP ethernet pseudowire support")
      Fixes: fd558d18 ("l2tp: Split pppol2tp patch into separate l2tp and ppp parts")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
      dbdbc73b
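
A minimal userspace C sketch of the check-at-insertion-time idea above: uniqueness is verified under the same lock that performs the insertion, so no window exists for a concurrent duplicate. The names (session_create, the flat list) are illustrative only, not the l2tp code:

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct session {
        struct session *next;
        unsigned int id;
    };

    static struct session *sessions;
    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Create-and-insert in one critical section: the duplicate check and
     * the insertion cannot be separated by a concurrent insert. */
    static int session_create(unsigned int id)
    {
        struct session *s, *new = malloc(sizeof(*new));

        if (!new)
            return -ENOMEM;
        new->id = id;

        pthread_mutex_lock(&list_lock);
        for (s = sessions; s; s = s->next) {
            if (s->id == id) {
                pthread_mutex_unlock(&list_lock);
                free(new);
                return -EEXIST;     /* caller checks the return code */
            }
        }
        new->next = sessions;
        sessions = new;
        pthread_mutex_unlock(&list_lock);
        return 0;
    }

    int main(void)
    {
        printf("first create:  %d\n", session_create(1));  /* 0 */
        printf("second create: %d\n", session_create(1));  /* -EEXIST */
        return 0;
    }
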
• l2tp: ensure session can't get removed during pppol2tp_session_ioctl() · 57377d63
  Authored by Guillaume Nault
      Holding a reference on session is required before calling
      pppol2tp_session_ioctl(). The session could get freed while processing the
      ioctl otherwise. Since pppol2tp_session_ioctl() uses the session's socket,
      we also need to take a reference on it in l2tp_session_get().
      
      Fixes: fd558d18 ("l2tp: Split pppol2tp patch into separate l2tp and ppp parts")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
      57377d63
• l2tp: fix race in l2tp_recv_common() · 61b9a047
  Authored by Guillaume Nault
      Taking a reference on sessions in l2tp_recv_common() is racy; this
      has to be done by the callers.
      
      To this end, a new function is required (l2tp_session_get()) to
      atomically lookup a session and take a reference on it. Callers then
      have to manually drop this reference.
      
      Fixes: fd558d18 ("l2tp: Split pppol2tp patch into separate l2tp and ppp parts")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
      61b9a047
• sctp: use right in and out stream cnt · afe89962
  Authored by Xin Long
Since stream reconf was added to sctp, the real counts of in/out
streams are no longer c.sinit_max_instreams and c.sinit_num_ostreams.

This patch replaces them with stream->in/outcnt.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      afe89962
• openvswitch: Fix ovs_flow_key_update() · 6f56f618
  Authored by Yi-Hung Wei
      ovs_flow_key_update() is called when the flow key is invalid, and it is
      used to update and revalidate the flow key. Commit 329f45bc
      ("openvswitch: add mac_proto field to the flow key") introduces mac_proto
      field to flow key and use it to determine whether the flow key is valid.
      However, the commit does not update the code path in ovs_flow_key_update()
      to revalidate the flow key which may cause BUG_ON() on execute_recirc().
      This patch addresses the aforementioned issue.
      
      Fixes: 329f45bc ("openvswitch: add mac_proto field to the flow key")
Signed-off-by: Yi-Hung Wei <yihung.wei@gmail.com>
Acked-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      6f56f618
3. 31 Mar 2017: 4 commits
  4. 30 Mar 2017: 2 commits
  5. 29 Mar 2017: 5 commits
• mac80211: unconditionally start new netdev queues with iTXQ support · 7d65f829
  Authored by Johannes Berg
When internal mac80211 TXQs are supported, netdev queues must
always start out started, even when driver queues are stopped
      while the interface is added. This is necessary because with the
      internal TXQ support netdev queues are never stopped and packet
      scheduling/dropping is done in mac80211.
      
      Cc: stable@vger.kernel.org # 4.9+
      Fixes: 80a83cfc ("mac80211: skip netdev queue control with software queuing")
Reported-and-tested-by: Sven Eckelmann <sven.eckelmann@openmesh.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      7d65f829
• netfilter: nfnetlink_queue: fix secctx memory leak · 77c1c03c
  Authored by Liping Zhang
      We must call security_release_secctx to free the memory returned by
      security_secid_to_secctx, otherwise memory may be leaked forever.
      
      Fixes: ef493bd9 ("netfilter: nfnetlink_queue: add security context information")
Signed-off-by: Liping Zhang <zlpnobody@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      77c1c03c
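
The leak class fixed here is the usual "acquire without a matching release on every exit path" problem. Here is a small, self-contained C sketch of the goto-based cleanup style that keeps such pairs balanced; get_secctx/put_secctx are stand-ins for illustration, not the kernel LSM API:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for an allocation that must be paired with a release. */
    static char *get_secctx(void)      { return strdup("system_u:object_r:t"); }
    static void  put_secctx(char *ctx) { free(ctx); }

    static int build_message(int fail_later)
    {
        char *ctx;
        int err = 0;

        ctx = get_secctx();
        if (!ctx)
            return -1;

        if (fail_later) {            /* any later failure...            */
            err = -1;
            goto out;                /* ...must still run the release   */
        }

        printf("packed ctx: %s\n", ctx);
    out:
        put_secctx(ctx);             /* released on success and on error */
        return err;
    }

    int main(void)
    {
        build_message(0);
        build_message(1);            /* no leak even on the failure path */
        return 0;
    }
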
• cfg80211: check rdev resume callback only for registered wiphy · b3ef5520
  Authored by Arend Van Spriel
      We got the following use-after-free KASAN report:
      
       BUG: KASAN: use-after-free in wiphy_resume+0x591/0x5a0 [cfg80211]
      	 at addr ffff8803fc244090
       Read of size 8 by task kworker/u16:24/2587
       CPU: 6 PID: 2587 Comm: kworker/u16:24 Tainted: G    B 4.9.13-debug+
       Hardware name: Dell Inc. XPS 15 9550/0N7TVV, BIOS 1.2.19 12/22/2016
       Workqueue: events_unbound async_run_entry_fn
        ffff880425d4f9d8 ffffffffaeedb541 ffff88042b80ef00 ffff8803fc244088
        ffff880425d4fa00 ffffffffae84d7a1 ffff880425d4fa98 ffff8803fc244080
        ffff88042b80ef00 ffff880425d4fa88 ffffffffae84da3a ffffffffc141f7d9
       Call Trace:
        [<ffffffffaeedb541>] dump_stack+0x85/0xc4
        [<ffffffffae84d7a1>] kasan_object_err+0x21/0x70
        [<ffffffffae84da3a>] kasan_report_error+0x1fa/0x500
        [<ffffffffc141f7d9>] ? cfg80211_bss_age+0x39/0xc0 [cfg80211]
        [<ffffffffc141f83a>] ? cfg80211_bss_age+0x9a/0xc0 [cfg80211]
        [<ffffffffae48d46d>] ? trace_hardirqs_on+0xd/0x10
        [<ffffffffc13fb1c0>] ? wiphy_suspend+0xc70/0xc70 [cfg80211]
        [<ffffffffae84def1>] __asan_report_load8_noabort+0x61/0x70
        [<ffffffffc13fb100>] ? wiphy_suspend+0xbb0/0xc70 [cfg80211]
        [<ffffffffc13fb751>] ? wiphy_resume+0x591/0x5a0 [cfg80211]
        [<ffffffffc13fb751>] wiphy_resume+0x591/0x5a0 [cfg80211]
        [<ffffffffc13fb1c0>] ? wiphy_suspend+0xc70/0xc70 [cfg80211]
        [<ffffffffaf3b206e>] dpm_run_callback+0x6e/0x4f0
        [<ffffffffaf3b31b2>] device_resume+0x1c2/0x670
        [<ffffffffaf3b367d>] async_resume+0x1d/0x50
        [<ffffffffae3ee84e>] async_run_entry_fn+0xfe/0x610
        [<ffffffffae3d0666>] process_one_work+0x716/0x1a50
        [<ffffffffae3d05c9>] ? process_one_work+0x679/0x1a50
        [<ffffffffafdd7b6d>] ? _raw_spin_unlock_irq+0x3d/0x60
        [<ffffffffae3cff50>] ? pwq_dec_nr_in_flight+0x2b0/0x2b0
        [<ffffffffae3d1a80>] worker_thread+0xe0/0x1460
        [<ffffffffae3d19a0>] ? process_one_work+0x1a50/0x1a50
        [<ffffffffae3e54c2>] kthread+0x222/0x2e0
        [<ffffffffae3e52a0>] ? kthread_park+0x80/0x80
        [<ffffffffae3e52a0>] ? kthread_park+0x80/0x80
        [<ffffffffae3e52a0>] ? kthread_park+0x80/0x80
        [<ffffffffafdd86aa>] ret_from_fork+0x2a/0x40
       Object at ffff8803fc244088, in cache kmalloc-1024 size: 1024
       Allocated:
       PID = 71
        save_stack_trace+0x1b/0x20
        save_stack+0x46/0xd0
        kasan_kmalloc+0xad/0xe0
        kasan_slab_alloc+0x12/0x20
        __kmalloc_track_caller+0x134/0x360
        kmemdup+0x20/0x50
        brcmf_cfg80211_attach+0x10b/0x3a90 [brcmfmac]
        brcmf_bus_start+0x19a/0x9a0 [brcmfmac]
        brcmf_pcie_setup+0x1f1a/0x3680 [brcmfmac]
        brcmf_fw_request_nvram_done+0x44c/0x11b0 [brcmfmac]
        request_firmware_work_func+0x135/0x280
        process_one_work+0x716/0x1a50
        worker_thread+0xe0/0x1460
        kthread+0x222/0x2e0
        ret_from_fork+0x2a/0x40
       Freed:
       PID = 2568
        save_stack_trace+0x1b/0x20
        save_stack+0x46/0xd0
        kasan_slab_free+0x71/0xb0
        kfree+0xe8/0x2e0
        brcmf_cfg80211_detach+0x62/0xf0 [brcmfmac]
        brcmf_detach+0x14a/0x2b0 [brcmfmac]
        brcmf_pcie_remove+0x140/0x5d0 [brcmfmac]
        brcmf_pcie_pm_leave_D3+0x198/0x2e0 [brcmfmac]
        pci_pm_resume+0x186/0x220
        dpm_run_callback+0x6e/0x4f0
        device_resume+0x1c2/0x670
        async_resume+0x1d/0x50
        async_run_entry_fn+0xfe/0x610
        process_one_work+0x716/0x1a50
        worker_thread+0xe0/0x1460
        kthread+0x222/0x2e0
        ret_from_fork+0x2a/0x40
       Memory state around the buggy address:
        ffff8803fc243f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
        ffff8803fc244000: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
       >ffff8803fc244080: fc fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                ^
        ffff8803fc244100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
        ffff8803fc244180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      
      What is happening is that brcmf_pcie_resume() detects a device that
      is no longer responsive and it decides to unbind resulting in a
      wiphy_unregister() and wiphy_free() call. Now the wiphy instance
      remains allocated, because PM needs to call wiphy_resume() for it.
      However, brcmfmac already does a kfree() for the struct
      cfg80211_registered_device::ops field. Change the checks in
      wiphy_resume() to only access the struct cfg80211_registered_device::ops
      if the wiphy instance is still registered at this time.
      
      Cc: stable@vger.kernel.org # 4.10.x, 4.9.x
Reported-by: Daniel J Blueman <daniel@quora.org>
Reviewed-by: Hante Meuleman <hante.meuleman@broadcom.com>
Reviewed-by: Pieter-Paul Giesberts <pieter-paul.giesberts@broadcom.com>
Reviewed-by: Franky Lin <franky.lin@broadcom.com>
Signed-off-by: Arend van Spriel <arend.vanspriel@broadcom.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      b3ef5520
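
The fix boils down to: the resume path may run against an object that is still allocated but already unregistered, so any pointer that unregistration invalidates must be guarded by the registration state. A tiny C sketch of that guard follows; the struct names and fields are hypothetical, only meant to mirror the registered/ops relationship described above:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct dev_ops { void (*resume)(void); };

    struct device_obj {
        bool registered;             /* cleared at unregister time */
        const struct dev_ops *ops;   /* may be freed once unregistered */
    };

    static void do_resume(void) { puts("driver resume callback"); }

    static void device_resume(struct device_obj *d)
    {
        /* Only touch ops while the object is still registered; after
         * unregistration the ops memory may already be gone. */
        if (d->registered && d->ops && d->ops->resume)
            d->ops->resume();
    }

    int main(void)
    {
        struct dev_ops *ops = malloc(sizeof(*ops));
        struct device_obj dev = { .registered = true, .ops = ops };

        ops->resume = do_resume;
        device_resume(&dev);         /* callback runs */

        dev.registered = false;      /* simulate unregister + driver teardown */
        free(ops);
        device_resume(&dev);         /* safely skipped, no use-after-free */
        return 0;
    }
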
• openvswitch: Fix refcount leak on force commit. · b768b16d
  Authored by Jarno Rajahalme
      The reference count held for skb needs to be released when the skb's
      nfct pointer is cleared regardless of if nf_ct_delete() is called or
      not.
      
Failing to release the skb's reference count led to deferred conntrack
      cleanup spinning forever within nf_conntrack_cleanup_net_list() when
      cleaning up a network namespace:
      
         kworker/u16:0-19025 [004] 45981067.173642: sched_switch: kworker/u16:0:19025 [120] R ==> rcu_preempt:7 [120]
         kworker/u16:0-19025 [004] 45981067.173651: kernel_stack: <stack trace>
      => ___preempt_schedule (ffffffffa001ed36)
      => _raw_spin_unlock_bh (ffffffffa0713290)
      => nf_ct_iterate_cleanup (ffffffffc00a4454)
      => nf_conntrack_cleanup_net_list (ffffffffc00a5e1e)
      => nf_conntrack_pernet_exit (ffffffffc00a63dd)
      => ops_exit_list.isra.1 (ffffffffa06075f3)
      => cleanup_net (ffffffffa0607df0)
      => process_one_work (ffffffffa0084c31)
      => worker_thread (ffffffffa008592b)
      => kthread (ffffffffa008bee2)
      => ret_from_fork (ffffffffa071b67c)
      
      Fixes: dd41d33f ("openvswitch: Add force commit.")
Reported-by: Yang Song <yangsong@vmware.com>
Signed-off-by: Jarno Rajahalme <jarno@ovn.org>
Acked-by: Joe Stringer <joe@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
      b768b16d
• sctp: change to save MSG_MORE flag into assoc · f9ba3501
  Authored by Xin Long
      David Laight noticed the support for MSG_MORE with datamsg->force_delay
      didn't really work as we expected, as the first msg with MSG_MORE set
      would always block the following chunks' dequeuing.
      
This patch rewrites it by saving the MSG_MORE flag into the assoc, as
David Laight suggested.
      
      asoc->force_delay is used to save MSG_MORE flag before a msg is sent.
      All chunks in queue would not be sent out if asoc->force_delay is set
      by the msg with MSG_MORE flag, until a new msg without MSG_MORE flag
      clears asoc->force_delay.
      
Note that this change does not affect flushes generated by other
triggers, like asoc->state != ESTABLISHED, queue size > pmtu, etc.
      
      v1->v2:
  Do not clear asoc->force_delay after sending a msg with the MSG_MORE flag.
      
      Fixes: 4ea0c32f ("sctp: add support for MSG_MORE")
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: David Laight <david.laight@aculab.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      f9ba3501
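
A self-contained C sketch of the corking behaviour described above: a per-association "force_delay" flag remembers the last send's MSG_MORE, and queued data is only flushed once a send without the flag arrives. This only illustrates the idea; the names and the MSG_MORE_FLAG constant are stand-ins, not the sctp code:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define MSG_MORE_FLAG 1

    struct assoc {
        bool force_delay;            /* last msg had MSG_MORE set */
        char queue[256];
        size_t qlen;
    };

    static void flush(struct assoc *a)
    {
        if (!a->qlen)
            return;
        printf("flush: \"%.*s\"\n", (int)a->qlen, a->queue);
        a->qlen = 0;
    }

    static void assoc_send(struct assoc *a, const char *data, int flags)
    {
        size_t len = strlen(data);

        memcpy(a->queue + a->qlen, data, len);
        a->qlen += len;

        /* Remember the flag of this message; only a message *without*
         * MSG_MORE clears it and lets the queue drain. */
        a->force_delay = (flags & MSG_MORE_FLAG);
        if (!a->force_delay)
            flush(a);
    }

    int main(void)
    {
        struct assoc a = { 0 };

        assoc_send(&a, "part1 ", MSG_MORE_FLAG);   /* held back */
        assoc_send(&a, "part2 ", MSG_MORE_FLAG);   /* still held back */
        assoc_send(&a, "end",    0);               /* flushes everything */
        return 0;
    }
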
6. 28 Mar 2017: 1 commit
• net: ipconfig: fix ic_close_devs() use-after-free · ffefb6f4
  Authored by Mark Rutland
      Our chosen ic_dev may be anywhere in our list of ic_devs, and we may
      free it before attempting to close others. When we compare d->dev and
      ic_dev->dev, we're potentially dereferencing memory returned to the
      allocator. This causes KASAN to scream for each subsequent ic_dev we
      check.
      
      As there's a 1-1 mapping between ic_devs and netdevs, we can instead
      compare d and ic_dev directly, which implicitly handles the !ic_dev
      case, and avoids the use-after-free. The ic_dev pointer may be stale,
      but we will not dereference it.
      
      Original splat:
      
      [    6.487446] ==================================================================
      [    6.494693] BUG: KASAN: use-after-free in ic_close_devs+0xc4/0x154 at addr ffff800367efa708
      [    6.503013] Read of size 8 by task swapper/0/1
      [    6.507452] CPU: 5 PID: 1 Comm: swapper/0 Not tainted 4.11.0-rc3-00002-gda42158 #8
      [    6.514993] Hardware name: AppliedMicro Mustang/Mustang, BIOS 3.05.05-beta_rc Jan 27 2016
      [    6.523138] Call trace:
      [    6.525590] [<ffff200008094778>] dump_backtrace+0x0/0x570
      [    6.530976] [<ffff200008094d08>] show_stack+0x20/0x30
      [    6.536017] [<ffff200008bee928>] dump_stack+0x120/0x188
      [    6.541231] [<ffff20000856d5e4>] kasan_object_err+0x24/0xa0
      [    6.546790] [<ffff20000856d924>] kasan_report_error+0x244/0x738
      [    6.552695] [<ffff20000856dfec>] __asan_report_load8_noabort+0x54/0x80
      [    6.559204] [<ffff20000aae86ac>] ic_close_devs+0xc4/0x154
      [    6.564590] [<ffff20000aaedbac>] ip_auto_config+0x2ed4/0x2f1c
      [    6.570321] [<ffff200008084b04>] do_one_initcall+0xcc/0x370
      [    6.575882] [<ffff20000aa31de8>] kernel_init_freeable+0x5f8/0x6c4
      [    6.581959] [<ffff20000a16df00>] kernel_init+0x18/0x190
      [    6.587171] [<ffff200008084710>] ret_from_fork+0x10/0x40
      [    6.592468] Object at ffff800367efa700, in cache kmalloc-128 size: 128
      [    6.598969] Allocated:
      [    6.601324] PID = 1
      [    6.603427]  save_stack_trace_tsk+0x0/0x418
      [    6.607603]  save_stack_trace+0x20/0x30
      [    6.611430]  kasan_kmalloc+0xd8/0x188
      [    6.615087]  ip_auto_config+0x8c4/0x2f1c
      [    6.619002]  do_one_initcall+0xcc/0x370
      [    6.622832]  kernel_init_freeable+0x5f8/0x6c4
      [    6.627178]  kernel_init+0x18/0x190
      [    6.630660]  ret_from_fork+0x10/0x40
      [    6.634223] Freed:
      [    6.636233] PID = 1
      [    6.638334]  save_stack_trace_tsk+0x0/0x418
      [    6.642510]  save_stack_trace+0x20/0x30
      [    6.646337]  kasan_slab_free+0x88/0x178
      [    6.650167]  kfree+0xb8/0x478
      [    6.653131]  ic_close_devs+0x130/0x154
      [    6.656875]  ip_auto_config+0x2ed4/0x2f1c
      [    6.660875]  do_one_initcall+0xcc/0x370
      [    6.664705]  kernel_init_freeable+0x5f8/0x6c4
      [    6.669051]  kernel_init+0x18/0x190
      [    6.672534]  ret_from_fork+0x10/0x40
      [    6.676098] Memory state around the buggy address:
      [    6.680880]  ffff800367efa600: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      [    6.688078]  ffff800367efa680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
      [    6.695276] >ffff800367efa700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [    6.702469]                       ^
      [    6.705952]  ffff800367efa780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
      [    6.713149]  ffff800367efa800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
      [    6.720343] ==================================================================
      [    6.727536] Disabling lock debugging due to kernel taint
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
      Cc: James Morris <jmorris@namei.org>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
      ffefb6f4
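
The fix replaces a dereference of a possibly-freed pointer (ic_dev->dev) with a plain pointer comparison (d == ic_dev), which stays safe even when the pointee is gone. A minimal C illustration of that difference, with made-up structure names standing in for the ipconfig ones:

    #include <stdio.h>
    #include <stdlib.h>

    struct netdev   { char name[16]; };
    struct ic_entry {
        struct ic_entry *next;
        struct netdev *dev;
    };

    int main(void)
    {
        struct ic_entry *list = NULL, *chosen = NULL;

        for (int i = 0; i < 3; i++) {
            struct ic_entry *e = calloc(1, sizeof(*e));
            e->dev = calloc(1, sizeof(*e->dev));
            snprintf(e->dev->name, sizeof(e->dev->name), "eth%d", i);
            e->next = list;
            list = e;
            if (i == 1)
                chosen = e;          /* our "ic_dev" */
        }

        struct netdev *keep = chosen->dev;   /* the device we keep open */

        for (struct ic_entry *e = list, *next; e; e = next) {
            next = e->next;
            /* Safe: compare the entry pointers themselves.  Dereferencing
             * 'chosen' here (e->dev == chosen->dev) would read freed memory
             * once 'chosen' has been released earlier in this loop. */
            if (e != chosen) {
                printf("closing %s\n", e->dev->name);
                free(e->dev);
            }
            free(e);
        }
        printf("kept %s open\n", keep->name);
        free(keep);
        return 0;
    }
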
7. 27 Mar 2017: 4 commits
• netfilter: nf_nat_snmp: Fix panic when snmp_trap_helper fails to register · 75c689dc
  Authored by Gao Feng
      In the commit 93557f53 ("netfilter: nf_conntrack: nf_conntrack snmp
      helper"), the snmp_helper is replaced by nf_nat_snmp_hook. So the
snmp_helper is never registered. But the code still tries to unregister
the snmp_helper, which could cause a panic.
      
      Now remove the useless snmp_helper and the unregister call in the
      error handler.
      
      Fixes: 93557f53 ("netfilter: nf_conntrack: nf_conntrack snmp helper")
Signed-off-by: Gao Feng <fgao@ikuai8.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      75c689dc
• netfilter: nf_ct_ext: fix possible panic after nf_ct_extend_unregister · 9c3f3794
  Authored by Liping Zhang
      If one cpu is doing nf_ct_extend_unregister while another cpu is doing
      __nf_ct_ext_add_length, then we may hit BUG_ON(t == NULL). Moreover,
there's no synchronize_rcu invocation after setting nf_ct_ext_types[id]
to NULL, so it's possible that we may access an invalid pointer.
      
      But actually, most of the ct extends are built-in, so the problem listed
      above will not happen. However, there are two exceptions: NF_CT_EXT_NAT
      and NF_CT_EXT_SYNPROXY.
      
      For _EXT_NAT, the panic will not happen, since adding the nat extend and
unregistering the nat extend are located in the same file (nf_nat_core.c),
      this means that after the nat module is removed, we cannot add the nat
      extend too.
      
      For _EXT_SYNPROXY, synproxy extend may be added by init_conntrack, while
      synproxy extend unregister will be done by synproxy_core_exit. So after
      nf_synproxy_core.ko is removed, we may still try to add the synproxy
      extend, then kernel panic may happen.
      
      I know it's very hard to reproduce this issue, but I can play a tricky
      game to make it happen very easily :)
      
      Step 1. Enable SYNPROXY for tcp dport 1234 at FORWARD hook:
        # iptables -I FORWARD -p tcp --dport 1234 -j SYNPROXY
      Step 2. Queue the syn packet to the userspace at raw table OUTPUT hook.
              Also note, in the userspace we only add a 20s' delay, then
              reinject the syn packet to the kernel:
        # iptables -t raw -I OUTPUT -p tcp --syn -j NFQUEUE --queue-num 1
      Step 3. Using "nc 2.2.2.2 1234" to connect the server.
      Step 4. Now remove the nf_synproxy_core.ko quickly:
        # iptables -F FORWARD
        # rmmod ipt_SYNPROXY
        # rmmod nf_synproxy_core
      Step 5. After 20s' delay, the syn packet is reinjected to the kernel.
      
      Now you will see the panic like this:
        kernel BUG at net/netfilter/nf_conntrack_extend.c:91!
        Call Trace:
         ? __nf_ct_ext_add_length+0x53/0x3c0 [nf_conntrack]
         init_conntrack+0x12b/0x600 [nf_conntrack]
         nf_conntrack_in+0x4cc/0x580 [nf_conntrack]
         ipv4_conntrack_local+0x48/0x50 [nf_conntrack_ipv4]
         nf_reinject+0x104/0x270
         nfqnl_recv_verdict+0x3e1/0x5f9 [nfnetlink_queue]
         ? nfqnl_recv_verdict+0x5/0x5f9 [nfnetlink_queue]
         ? nla_parse+0xa0/0x100
         nfnetlink_rcv_msg+0x175/0x6a9 [nfnetlink]
         [...]
      
      One possible solution is to make NF_CT_EXT_SYNPROXY extend built-in, i.e.
      introduce nf_conntrack_synproxy.c and only do ct extend register and
      unregister in it, similar to nf_conntrack_timeout.c.
      
But having such an obscure restriction on nf_ct_extend_unregister is not a
good idea, so we should invoke synchronize_rcu after setting nf_ct_ext_types
to NULL, and check for a NULL pointer in __nf_ct_ext_add_length. Then
it will be easier to add new ct extends in the future.
      
Last, we use kfree_rcu to free nf_ct_ext, so rcu_barrier() is no longer
necessary; remove it too.
Signed-off-by: Liping Zhang <zlpnobody@gmail.com>
Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      9c3f3794
• netfilter: nfnl_cthelper: fix a race when walk the nf_ct_helper_hash table · 83d90219
  Authored by Liping Zhang
      The nf_ct_helper_hash table is protected by nf_ct_helper_mutex, while
      nfct_helper operation is protected by nfnl_lock(NFNL_SUBSYS_CTHELPER).
      So it's possible that one CPU is walking the nf_ct_helper_hash for
      cthelper add/get/del, another cpu is doing nf_conntrack_helpers_unregister
at the same time. This is dangerous, and may cause a use-after-free error.
      
Note, the delete operation will flush all cthelpers added via nfnetlink,
so using rcu for protection is not easy.
      
      Now introduce a dummy list to record all the cthelpers added via
      nfnetlink, then we can walk the dummy list instead of walking the
nf_ct_helper_hash. Also, keep nfnl_cthelper_dump_table unchanged, as it
may be invoked without nfnl_lock(NFNL_SUBSYS_CTHELPER) held.
Signed-off-by: Liping Zhang <zlpnobody@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      83d90219
• netfilter: invoke synchronize_rcu after set the _hook_ to NULL · 3b7dabf0
  Authored by Liping Zhang
      Otherwise, another CPU may access the invalid pointer. For example:
          CPU0                CPU1
           -              rcu_read_lock();
           -              pfunc = _hook_;
        _hook_ = NULL;          -
        mod unload              -
           -                 pfunc(); // invalid, panic
           -             rcu_read_unlock();
      
So we must call synchronize_rcu() to wait for the rcu readers to finish.
      
Also note, in nf_nat_snmp_basic_fini, synchronize_rcu() will be invoked
by the later nf_conntrack_helper_unregister, but I'm inclined to add an
explicit synchronize_rcu after setting the nf_nat_snmp_hook to NULL.
Depending on such obscure assumptions is not a good idea.
      
Last, in nfnetlink_cttimeout, we use kfree_rcu to free the timeout object,
so in cttimeout_exit, invoking rcu_barrier() is not necessary at all;
remove it too.
Signed-off-by: Liping Zhang <zlpnobody@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      3b7dabf0
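
For reference, here is roughly what the unpublish-then-wait discipline looks like as a standalone program using userspace RCU (liburcu). This is a hedged sketch under the assumption that liburcu is installed and linked with -lurcu; it is not kernel code, and hook_fn/real_hook are made-up names:

    /* build (assumption): cc hook_demo.c -lurcu */
    #include <stdio.h>
    #include <urcu.h>

    static void real_hook(void) { puts("hook called"); }

    static void (*hook_fn)(void) = real_hook;   /* the published _hook_ */

    static void call_hook(void)
    {
        void (*fn)(void);

        rcu_read_lock();
        fn = rcu_dereference(hook_fn);
        if (fn)
            fn();                    /* safe: provider can't be gone yet */
        rcu_read_unlock();
    }

    int main(void)
    {
        rcu_register_thread();

        call_hook();

        /* Unload path: unpublish the hook, then wait for every reader
         * that may still hold the old pointer before freeing or
         * unmapping anything it points into. */
        rcu_assign_pointer(hook_fn, NULL);
        synchronize_rcu();
        /* ...now it is safe to tear down whatever real_hook() used */

        call_hook();                 /* sees NULL, does nothing */

        rcu_unregister_thread();
        return 0;
    }
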
8. 25 Mar 2017: 4 commits
  9. 24 Mar 2017: 1 commit
  10. 23 Mar 2017: 9 commits
• inet: frag: release spinlock before calling icmp_send() · ec4fbd64
  Authored by Eric Dumazet
      Dmitry reported a lockdep splat [1] (false positive) that we can fix
      by releasing the spinlock before calling icmp_send() from ip_expire()
      
      This is a false positive because sending an ICMP message can not
      possibly re-enter the IP frag engine.
      
      [1]
      [ INFO: possible circular locking dependency detected ]
      4.10.0+ #29 Not tainted
      -------------------------------------------------------
      modprobe/12392 is trying to acquire lock:
       (_xmit_ETHER#2){+.-...}, at: [<ffffffff837a8182>] spin_lock
      include/linux/spinlock.h:299 [inline]
       (_xmit_ETHER#2){+.-...}, at: [<ffffffff837a8182>] __netif_tx_lock
      include/linux/netdevice.h:3486 [inline]
       (_xmit_ETHER#2){+.-...}, at: [<ffffffff837a8182>]
      sch_direct_xmit+0x282/0x6d0 net/sched/sch_generic.c:180
      
      but task is already holding lock:
       (&(&q->lock)->rlock){+.-...}, at: [<ffffffff8389a4d1>] spin_lock
      include/linux/spinlock.h:299 [inline]
       (&(&q->lock)->rlock){+.-...}, at: [<ffffffff8389a4d1>]
      ip_expire+0x51/0x6c0 net/ipv4/ip_fragment.c:201
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #1 (&(&q->lock)->rlock){+.-...}:
             validate_chain kernel/locking/lockdep.c:2267 [inline]
             __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3340
             lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3755
             __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
             _raw_spin_lock+0x33/0x50 kernel/locking/spinlock.c:151
             spin_lock include/linux/spinlock.h:299 [inline]
             ip_defrag+0x3a2/0x4130 net/ipv4/ip_fragment.c:669
             ip_check_defrag+0x4e3/0x8b0 net/ipv4/ip_fragment.c:713
             packet_rcv_fanout+0x282/0x800 net/packet/af_packet.c:1459
             deliver_skb net/core/dev.c:1834 [inline]
             dev_queue_xmit_nit+0x294/0xa90 net/core/dev.c:1890
             xmit_one net/core/dev.c:2903 [inline]
             dev_hard_start_xmit+0x16b/0xab0 net/core/dev.c:2923
             sch_direct_xmit+0x31f/0x6d0 net/sched/sch_generic.c:182
             __dev_xmit_skb net/core/dev.c:3092 [inline]
             __dev_queue_xmit+0x13e5/0x1e60 net/core/dev.c:3358
             dev_queue_xmit+0x17/0x20 net/core/dev.c:3423
             neigh_resolve_output+0x6b9/0xb10 net/core/neighbour.c:1308
             neigh_output include/net/neighbour.h:478 [inline]
             ip_finish_output2+0x8b8/0x15a0 net/ipv4/ip_output.c:228
             ip_do_fragment+0x1d93/0x2720 net/ipv4/ip_output.c:672
             ip_fragment.constprop.54+0x145/0x200 net/ipv4/ip_output.c:545
             ip_finish_output+0x82d/0xe10 net/ipv4/ip_output.c:314
             NF_HOOK_COND include/linux/netfilter.h:246 [inline]
             ip_output+0x1f0/0x7a0 net/ipv4/ip_output.c:404
             dst_output include/net/dst.h:486 [inline]
             ip_local_out+0x95/0x170 net/ipv4/ip_output.c:124
             ip_send_skb+0x3c/0xc0 net/ipv4/ip_output.c:1492
             ip_push_pending_frames+0x64/0x80 net/ipv4/ip_output.c:1512
             raw_sendmsg+0x26de/0x3a00 net/ipv4/raw.c:655
             inet_sendmsg+0x164/0x5b0 net/ipv4/af_inet.c:761
             sock_sendmsg_nosec net/socket.c:633 [inline]
             sock_sendmsg+0xca/0x110 net/socket.c:643
             ___sys_sendmsg+0x4a3/0x9f0 net/socket.c:1985
             __sys_sendmmsg+0x25c/0x750 net/socket.c:2075
             SYSC_sendmmsg net/socket.c:2106 [inline]
             SyS_sendmmsg+0x35/0x60 net/socket.c:2101
             do_syscall_64+0x2e8/0x930 arch/x86/entry/common.c:281
             return_from_SYSCALL_64+0x0/0x7a
      
      -> #0 (_xmit_ETHER#2){+.-...}:
             check_prev_add kernel/locking/lockdep.c:1830 [inline]
             check_prevs_add+0xa8f/0x19f0 kernel/locking/lockdep.c:1940
             validate_chain kernel/locking/lockdep.c:2267 [inline]
             __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3340
             lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3755
             __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
             _raw_spin_lock+0x33/0x50 kernel/locking/spinlock.c:151
             spin_lock include/linux/spinlock.h:299 [inline]
             __netif_tx_lock include/linux/netdevice.h:3486 [inline]
             sch_direct_xmit+0x282/0x6d0 net/sched/sch_generic.c:180
             __dev_xmit_skb net/core/dev.c:3092 [inline]
             __dev_queue_xmit+0x13e5/0x1e60 net/core/dev.c:3358
             dev_queue_xmit+0x17/0x20 net/core/dev.c:3423
             neigh_hh_output include/net/neighbour.h:468 [inline]
             neigh_output include/net/neighbour.h:476 [inline]
             ip_finish_output2+0xf6c/0x15a0 net/ipv4/ip_output.c:228
             ip_finish_output+0xa29/0xe10 net/ipv4/ip_output.c:316
             NF_HOOK_COND include/linux/netfilter.h:246 [inline]
             ip_output+0x1f0/0x7a0 net/ipv4/ip_output.c:404
             dst_output include/net/dst.h:486 [inline]
             ip_local_out+0x95/0x170 net/ipv4/ip_output.c:124
             ip_send_skb+0x3c/0xc0 net/ipv4/ip_output.c:1492
             ip_push_pending_frames+0x64/0x80 net/ipv4/ip_output.c:1512
             icmp_push_reply+0x372/0x4d0 net/ipv4/icmp.c:394
             icmp_send+0x156c/0x1c80 net/ipv4/icmp.c:754
             ip_expire+0x40e/0x6c0 net/ipv4/ip_fragment.c:239
             call_timer_fn+0x241/0x820 kernel/time/timer.c:1268
             expire_timers kernel/time/timer.c:1307 [inline]
             __run_timers+0x960/0xcf0 kernel/time/timer.c:1601
             run_timer_softirq+0x21/0x80 kernel/time/timer.c:1614
             __do_softirq+0x31f/0xbe7 kernel/softirq.c:284
             invoke_softirq kernel/softirq.c:364 [inline]
             irq_exit+0x1cc/0x200 kernel/softirq.c:405
             exiting_irq arch/x86/include/asm/apic.h:657 [inline]
             smp_apic_timer_interrupt+0x76/0xa0 arch/x86/kernel/apic/apic.c:962
             apic_timer_interrupt+0x93/0xa0 arch/x86/entry/entry_64.S:707
             __read_once_size include/linux/compiler.h:254 [inline]
             atomic_read arch/x86/include/asm/atomic.h:26 [inline]
             rcu_dynticks_curr_cpu_in_eqs kernel/rcu/tree.c:350 [inline]
             __rcu_is_watching kernel/rcu/tree.c:1133 [inline]
             rcu_is_watching+0x83/0x110 kernel/rcu/tree.c:1147
             rcu_read_lock_held+0x87/0xc0 kernel/rcu/update.c:293
             radix_tree_deref_slot include/linux/radix-tree.h:238 [inline]
             filemap_map_pages+0x6d4/0x1570 mm/filemap.c:2335
             do_fault_around mm/memory.c:3231 [inline]
             do_read_fault mm/memory.c:3265 [inline]
             do_fault+0xbd5/0x2080 mm/memory.c:3370
             handle_pte_fault mm/memory.c:3600 [inline]
             __handle_mm_fault+0x1062/0x2cb0 mm/memory.c:3714
             handle_mm_fault+0x1e2/0x480 mm/memory.c:3751
             __do_page_fault+0x4f6/0xb60 arch/x86/mm/fault.c:1397
             do_page_fault+0x54/0x70 arch/x86/mm/fault.c:1460
             page_fault+0x28/0x30 arch/x86/entry/entry_64.S:1011
      
      other info that might help us debug this:
      
       Possible unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(&(&q->lock)->rlock);
                                     lock(_xmit_ETHER#2);
                                     lock(&(&q->lock)->rlock);
        lock(_xmit_ETHER#2);
      
       *** DEADLOCK ***
      
      10 locks held by modprobe/12392:
       #0:  (&mm->mmap_sem){++++++}, at: [<ffffffff81329758>]
      __do_page_fault+0x2b8/0xb60 arch/x86/mm/fault.c:1336
       #1:  (rcu_read_lock){......}, at: [<ffffffff8188cab6>]
      filemap_map_pages+0x1e6/0x1570 mm/filemap.c:2324
       #2:  (&(ptlock_ptr(page))->rlock#2){+.+...}, at: [<ffffffff81984a78>]
      spin_lock include/linux/spinlock.h:299 [inline]
       #2:  (&(ptlock_ptr(page))->rlock#2){+.+...}, at: [<ffffffff81984a78>]
      pte_alloc_one_map mm/memory.c:2944 [inline]
       #2:  (&(ptlock_ptr(page))->rlock#2){+.+...}, at: [<ffffffff81984a78>]
      alloc_set_pte+0x13b8/0x1b90 mm/memory.c:3072
       #3:  (((&q->timer))){+.-...}, at: [<ffffffff81627e72>]
      lockdep_copy_map include/linux/lockdep.h:175 [inline]
       #3:  (((&q->timer))){+.-...}, at: [<ffffffff81627e72>]
      call_timer_fn+0x1c2/0x820 kernel/time/timer.c:1258
       #4:  (&(&q->lock)->rlock){+.-...}, at: [<ffffffff8389a4d1>] spin_lock
      include/linux/spinlock.h:299 [inline]
       #4:  (&(&q->lock)->rlock){+.-...}, at: [<ffffffff8389a4d1>]
      ip_expire+0x51/0x6c0 net/ipv4/ip_fragment.c:201
       #5:  (rcu_read_lock){......}, at: [<ffffffff8389a633>]
      ip_expire+0x1b3/0x6c0 net/ipv4/ip_fragment.c:216
       #6:  (slock-AF_INET){+.-...}, at: [<ffffffff839b3313>] spin_trylock
      include/linux/spinlock.h:309 [inline]
       #6:  (slock-AF_INET){+.-...}, at: [<ffffffff839b3313>] icmp_xmit_lock
      net/ipv4/icmp.c:219 [inline]
       #6:  (slock-AF_INET){+.-...}, at: [<ffffffff839b3313>]
      icmp_send+0x803/0x1c80 net/ipv4/icmp.c:681
       #7:  (rcu_read_lock_bh){......}, at: [<ffffffff838ab9a1>]
      ip_finish_output2+0x2c1/0x15a0 net/ipv4/ip_output.c:198
       #8:  (rcu_read_lock_bh){......}, at: [<ffffffff836d1dee>]
      __dev_queue_xmit+0x23e/0x1e60 net/core/dev.c:3324
       #9:  (dev->qdisc_running_key ?: &qdisc_running_key){+.....}, at:
      [<ffffffff836d3a27>] dev_queue_xmit+0x17/0x20 net/core/dev.c:3423
      
      stack backtrace:
      CPU: 0 PID: 12392 Comm: modprobe Not tainted 4.10.0+ #29
      Hardware name: Google Google Compute Engine/Google Compute Engine,
      BIOS Google 01/01/2011
      Call Trace:
       <IRQ>
       __dump_stack lib/dump_stack.c:16 [inline]
       dump_stack+0x2ee/0x3ef lib/dump_stack.c:52
       print_circular_bug+0x307/0x3b0 kernel/locking/lockdep.c:1204
       check_prev_add kernel/locking/lockdep.c:1830 [inline]
       check_prevs_add+0xa8f/0x19f0 kernel/locking/lockdep.c:1940
       validate_chain kernel/locking/lockdep.c:2267 [inline]
       __lock_acquire+0x2149/0x3430 kernel/locking/lockdep.c:3340
       lock_acquire+0x2a1/0x630 kernel/locking/lockdep.c:3755
       __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
       _raw_spin_lock+0x33/0x50 kernel/locking/spinlock.c:151
       spin_lock include/linux/spinlock.h:299 [inline]
       __netif_tx_lock include/linux/netdevice.h:3486 [inline]
       sch_direct_xmit+0x282/0x6d0 net/sched/sch_generic.c:180
       __dev_xmit_skb net/core/dev.c:3092 [inline]
       __dev_queue_xmit+0x13e5/0x1e60 net/core/dev.c:3358
       dev_queue_xmit+0x17/0x20 net/core/dev.c:3423
       neigh_hh_output include/net/neighbour.h:468 [inline]
       neigh_output include/net/neighbour.h:476 [inline]
       ip_finish_output2+0xf6c/0x15a0 net/ipv4/ip_output.c:228
       ip_finish_output+0xa29/0xe10 net/ipv4/ip_output.c:316
       NF_HOOK_COND include/linux/netfilter.h:246 [inline]
       ip_output+0x1f0/0x7a0 net/ipv4/ip_output.c:404
       dst_output include/net/dst.h:486 [inline]
       ip_local_out+0x95/0x170 net/ipv4/ip_output.c:124
       ip_send_skb+0x3c/0xc0 net/ipv4/ip_output.c:1492
       ip_push_pending_frames+0x64/0x80 net/ipv4/ip_output.c:1512
       icmp_push_reply+0x372/0x4d0 net/ipv4/icmp.c:394
       icmp_send+0x156c/0x1c80 net/ipv4/icmp.c:754
       ip_expire+0x40e/0x6c0 net/ipv4/ip_fragment.c:239
       call_timer_fn+0x241/0x820 kernel/time/timer.c:1268
       expire_timers kernel/time/timer.c:1307 [inline]
       __run_timers+0x960/0xcf0 kernel/time/timer.c:1601
       run_timer_softirq+0x21/0x80 kernel/time/timer.c:1614
       __do_softirq+0x31f/0xbe7 kernel/softirq.c:284
       invoke_softirq kernel/softirq.c:364 [inline]
       irq_exit+0x1cc/0x200 kernel/softirq.c:405
       exiting_irq arch/x86/include/asm/apic.h:657 [inline]
       smp_apic_timer_interrupt+0x76/0xa0 arch/x86/kernel/apic/apic.c:962
       apic_timer_interrupt+0x93/0xa0 arch/x86/entry/entry_64.S:707
      RIP: 0010:__read_once_size include/linux/compiler.h:254 [inline]
      RIP: 0010:atomic_read arch/x86/include/asm/atomic.h:26 [inline]
      RIP: 0010:rcu_dynticks_curr_cpu_in_eqs kernel/rcu/tree.c:350 [inline]
      RIP: 0010:__rcu_is_watching kernel/rcu/tree.c:1133 [inline]
      RIP: 0010:rcu_is_watching+0x83/0x110 kernel/rcu/tree.c:1147
      RSP: 0000:ffff8801c391f120 EFLAGS: 00000a03 ORIG_RAX: ffffffffffffff10
      RAX: dffffc0000000000 RBX: ffff8801c391f148 RCX: 0000000000000000
      RDX: 0000000000000000 RSI: 000055edd4374000 RDI: ffff8801dbe1ae0c
      RBP: ffff8801c391f1a0 R08: 0000000000000002 R09: 0000000000000000
      R10: dffffc0000000000 R11: 0000000000000002 R12: 1ffff10038723e25
      R13: ffff8801dbe1ae00 R14: ffff8801c391f680 R15: dffffc0000000000
       </IRQ>
       rcu_read_lock_held+0x87/0xc0 kernel/rcu/update.c:293
       radix_tree_deref_slot include/linux/radix-tree.h:238 [inline]
       filemap_map_pages+0x6d4/0x1570 mm/filemap.c:2335
       do_fault_around mm/memory.c:3231 [inline]
       do_read_fault mm/memory.c:3265 [inline]
       do_fault+0xbd5/0x2080 mm/memory.c:3370
       handle_pte_fault mm/memory.c:3600 [inline]
       __handle_mm_fault+0x1062/0x2cb0 mm/memory.c:3714
       handle_mm_fault+0x1e2/0x480 mm/memory.c:3751
       __do_page_fault+0x4f6/0xb60 arch/x86/mm/fault.c:1397
       do_page_fault+0x54/0x70 arch/x86/mm/fault.c:1460
       page_fault+0x28/0x30 arch/x86/entry/entry_64.S:1011
      RIP: 0033:0x7f83172f2786
      RSP: 002b:00007fffe859ae80 EFLAGS: 00010293
      RAX: 000055edd4373040 RBX: 00007f83175111c8 RCX: 000055edd4373238
      RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00007f8317510970
      RBP: 00007fffe859afd0 R08: 0000000000000009 R09: 0000000000000000
      R10: 0000000000000064 R11: 0000000000000000 R12: 000055edd4373040
      R13: 0000000000000000 R14: 00007fffe859afe8 R15: 0000000000000000
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      ec4fbd64
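
The shape of the fix is generic: copy or reference whatever you need from the protected structure, drop the spinlock, and only then call into a path that acquires unrelated locks. A small pthread-based C sketch of that reordering; the lock names and send_icmp_time_exceeded() are stand-ins, not the kernel functions:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    static pthread_mutex_t frag_queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t tx_lock = PTHREAD_MUTEX_INITIALIZER;

    struct frag_queue { char head_pkt[32]; };

    /* Stand-in for icmp_send(): internally takes the transmit lock. */
    static void send_icmp_time_exceeded(const char *pkt)
    {
        pthread_mutex_lock(&tx_lock);
        printf("ICMP time exceeded for %s\n", pkt);
        pthread_mutex_unlock(&tx_lock);
    }

    static void frag_expire(struct frag_queue *q)
    {
        char head_copy[32];

        pthread_mutex_lock(&frag_queue_lock);
        /* Take what we need out of the queue while it is protected... */
        strcpy(head_copy, q->head_pkt);
        pthread_mutex_unlock(&frag_queue_lock);

        /* ...and call out only after the queue lock is released, so no
         * frag_queue_lock -> tx_lock ordering is ever created. */
        send_icmp_time_exceeded(head_copy);
    }

    int main(void)
    {
        struct frag_queue q = { .head_pkt = "frag#1" };

        frag_expire(&q);
        return 0;
    }
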
• tcp: initialize icsk_ack.lrcvtime at session start time · 15bb7745
  Authored by Eric Dumazet
      icsk_ack.lrcvtime has a 0 value at socket creation time.
      
tcpi_last_data_recv can have a bogus value if no payload is ever received.
      
      This patch initializes icsk_ack.lrcvtime for active sessions
      in tcp_finish_connect(), and for passive sessions in
      tcp_create_openreq_child()
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      15bb7745
• genetlink: fix counting regression on ctrl_dumpfamily() · 1d2a6a5e
  Authored by Stanislaw Gruszka
      Commit 2ae0f17d ("genetlink: use idr to track families") replaced
      
      	if (++n < fams_to_skip)
      		continue;
      into:
      
      	if (n++ < fams_to_skip)
      		continue;
      
This subtle change causes a retried ctrl_dumpfamily() call to omit the
family that failed ctrl_fill_info() on the previous call, because
cb->args[0] = n also counts the family that failed ctrl_fill_info().

Fix the problem, and avoid confusion in the future, by decreasing the n
counter when ctrl_fill_info() fails.

The user-visible problem caused by this bug is a failure to access some
genetlink family, e.g. nl80211. However, the problem is reproducible
only if the number of registered genetlink families is big enough to
cause a second call of ctrl_dumpfamily().
      
      Cc: Xose Vazquez Perez <xose.vazquez@gmail.com>
      Cc: Larry Finger <Larry.Finger@lwfinger.net>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Fixes: 2ae0f17d ("genetlink: use idr to track families")
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
      1d2a6a5e
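
A compact C sketch of the resumable-dump bookkeeping the fix restores: the skip counter must not advance past an entry whose fill failed, otherwise the retry silently drops that entry. This is an illustration only; fill_info() and the fixed family count are made up:

    #include <stdbool.h>
    #include <stdio.h>

    #define NFAMS 5

    /* Pretend the 4th family doesn't fit in the first dump buffer. */
    static bool fill_info(int idx, bool first_pass)
    {
        return !(first_pass && idx == 3);
    }

    /* Returns the resume point to store in cb->args[0]. */
    static int dump(int fams_to_skip, bool first_pass)
    {
        int n = 0;

        for (int idx = 0; idx < NFAMS; idx++) {
            if (n++ < fams_to_skip)
                continue;
            if (!fill_info(idx, first_pass)) {
                n--;        /* the fix: don't count the family we failed on */
                break;
            }
            printf("dumped family %d\n", idx);
        }
        return n;
    }

    int main(void)
    {
        int resume = dump(0, true);    /* families 0..2, fails on 3 */
        dump(resume, false);           /* retry must start at family 3 */
        return 0;
    }
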
• socket, bpf: fix sk_filter use after free in sk_clone_lock · a97e50cc
  Authored by Daniel Borkmann
      In sk_clone_lock(), we create a new socket and inherit most of the
      parent's members via sock_copy() which memcpy()'s various sections.
      Now, in case the parent socket had a BPF socket filter attached,
      then newsk->sk_filter points to the same instance as the original
      sk->sk_filter.
      
      sk_filter_charge() is then called on the newsk->sk_filter to take a
      reference and should that fail due to hitting max optmem, we bail
      out and release the newsk instance.
      
      The issue is that commit 278571ba ("net: filter: simplify socket
      charging") wrongly combined the dismantle path with the failure path
      of xfrm_sk_clone_policy(). This means, even when charging failed, we
      call sk_free_unlock_clone() on the newsk, which then still points to
      the same sk_filter as the original sk.
      
      Thus, sk_free_unlock_clone() calls into __sk_destruct() eventually
      where it tests for present sk_filter and calls sk_filter_uncharge()
      on it, which potentially lets sk_omem_alloc wrap around and releases
      the eBPF prog and sk_filter structure from the (still intact) parent.
      
      Fix it by making sure that when sk_filter_charge() failed, we reset
      newsk->sk_filter back to NULL before passing to sk_free_unlock_clone(),
so that we don't mess with the parent's sk_filter.
      
      Only if xfrm_sk_clone_policy() fails, we did reach the point where
      either the parent's filter was NULL and as a result newsk's as well
      or where we previously had a successful sk_filter_charge(), thus for
      that case, we do need sk_filter_uncharge() to release the prior taken
      reference on sk_filter.
      
      Fixes: 278571ba ("net: filter: simplify socket charging")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
      a97e50cc
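
A distilled C sketch of the fix: when the clone fails to take its own reference on the shared filter, the clone's copied pointer must be cleared before the generic free path runs, so the destructor cannot drop a reference the clone never owned. All names (filter_charge, sock_destruct, sock_clone) are illustrative, not the kernel API:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct filter { int refcnt; };
    struct sock   { struct filter *filter; };

    static int filter_charge(struct filter *f, int allow)
    {
        if (!allow)
            return -1;               /* e.g. optmem limit hit */
        f->refcnt++;
        return 0;
    }

    static void sock_destruct(struct sock *sk)
    {
        /* Generic teardown: drops a filter reference if one is present. */
        if (sk->filter && --sk->filter->refcnt == 0)
            free(sk->filter);
    }

    static struct sock *sock_clone(struct sock *parent, int charge_ok)
    {
        struct sock *newsk = malloc(sizeof(*newsk));

        if (!newsk)
            return NULL;
        memcpy(newsk, parent, sizeof(*newsk));   /* copies sk->filter too */
        if (filter_charge(newsk->filter, charge_ok)) {
            /* The fix: forget the parent's filter before freeing the
             * clone, otherwise sock_destruct() would uncharge/free the
             * parent's still-referenced filter. */
            newsk->filter = NULL;
            sock_destruct(newsk);
            free(newsk);
            return NULL;
        }
        return newsk;
    }

    int main(void)
    {
        struct filter *f = calloc(1, sizeof(*f));
        struct sock parent = { .filter = f };

        f->refcnt = 1;                           /* parent's reference */
        if (!sock_clone(&parent, 0))
            printf("clone failed, parent refcnt still %d\n", f->refcnt);
        return 0;
    }
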
• ipv4: provide stronger user input validation in nl_fib_input() · c64c0b3c
  Authored by Eric Dumazet
Alexander reported a KMSAN splat caused by reads of the uninitialized
field (tb_id_in) of a user-provided struct fib_result_nl.

It turns out nl_fib_input()'s sanity tests on user input are a bit
wrong:

A user can pretend nlh->nlmsg_len is big enough, but provide a
too-small buffer at sendmsg() time.
Reported-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      c64c0b3c
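
The underlying rule is simply "never trust a length field from userspace without checking it against the payload you actually received". A minimal C sketch of that check; the struct and field names are placeholders and do not reflect the fib netlink layout:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct request {
        uint32_t tb_id_in;
        uint32_t flags;
    };

    /* 'buf'/'len' is what sendmsg() actually delivered; a header may claim
     * a bigger size, but only the real payload length can be trusted. */
    static int handle_request(const void *buf, size_t len)
    {
        struct request req;

        if (len < sizeof(req))
            return -1;               /* reject: too small to be valid */
        memcpy(&req, buf, sizeof(req));
        printf("table id %u\n", req.tb_id_in);
        return 0;
    }

    int main(void)
    {
        struct request good = { .tb_id_in = 254 };
        char tiny[2] = { 0 };

        handle_request(&good, sizeof(good));   /* accepted */
        if (handle_request(tiny, sizeof(tiny)) < 0)
            puts("short message rejected");
        return 0;
    }
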
• ipv6: make sure to initialize sockc.tsflags before first use · d515684d
  Authored by Alexander Potapenko
      In the case udp_sk(sk)->pending is AF_INET6, udpv6_sendmsg() would
      jump to do_append_data, skipping the initialization of sockc.tsflags.
      Fix the problem by moving sockc.tsflags initialization earlier.
      
      The bug was detected with KMSAN.
      
      Fixes: c14ac945 ("sock: enable timestamping using control messages")
Signed-off-by: Alexander Potapenko <glider@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      d515684d
• tipc: fix nametbl deadlock at tipc_nametbl_unsubscribe · 557d054c
  Authored by Ying Xue
Until now, tipc_nametbl_unsubscribe() is called at subscription
reference-count cleanup. Usually that cleanup runs at subscription
timeout, at subscription cancel, or at subscriber delete.

We have ignored the possibility of it being called from other
locations, which causes a deadlock as we try to grab
tn->nametbl_lock while already holding it.
      
         CPU1:                             CPU2:
      ----------                     ----------------
      tipc_nametbl_publish
      spin_lock_bh(&tn->nametbl_lock)
      tipc_nametbl_insert_publ
      tipc_nameseq_insert_publ
      tipc_subscrp_report_overlap
      tipc_subscrp_get
      tipc_subscrp_send_event
                                   tipc_close_conn
                                   tipc_subscrb_release_cb
                                   tipc_subscrb_delete
                                   tipc_subscrp_put
      tipc_subscrp_put
      tipc_subscrp_kref_release
      tipc_nametbl_unsubscribe
      spin_lock_bh(&tn->nametbl_lock)
      <<grab nametbl_lock again>>
      
         CPU1:                              CPU2:
      ----------                     ----------------
      tipc_nametbl_stop
      spin_lock_bh(&tn->nametbl_lock)
      tipc_purge_publications
      tipc_nameseq_remove_publ
      tipc_subscrp_report_overlap
      tipc_subscrp_get
      tipc_subscrp_send_event
                                   tipc_close_conn
                                   tipc_subscrb_release_cb
                                   tipc_subscrb_delete
                                   tipc_subscrp_put
      tipc_subscrp_put
      tipc_subscrp_kref_release
      tipc_nametbl_unsubscribe
      spin_lock_bh(&tn->nametbl_lock)
      <<grab nametbl_lock again>>
      
      In this commit, we advance the calling of tipc_nametbl_unsubscribe()
      from the refcount cleanup to the intended callers.
      
      Fixes: d094c4d5 ("tipc: add subscription refcount to avoid invalid delete")
Reported-by: John Thompson <thompa.atl@gmail.com>
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      557d054c
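
The general lesson is that a refcount release callback must not take a non-recursive lock that any of its possible callers may already hold; the table cleanup belongs in the explicit callers instead. A small pthread-based C sketch of the fixed arrangement, with made-up names standing in for the tipc ones:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_mutex_t nametbl_lock = PTHREAD_MUTEX_INITIALIZER;

    struct subscription { int refcnt; };

    /* Release callback: frees memory only.  It must NOT take nametbl_lock,
     * because the final put() can happen while a caller already holds it. */
    static void subscr_release(struct subscription *s)
    {
        free(s);
    }

    static void subscr_put(struct subscription *s)
    {
        if (--s->refcnt == 0)
            subscr_release(s);
    }

    /* The intended callers unsubscribe explicitly, at a point where taking
     * nametbl_lock is legal, instead of relying on the refcount callback. */
    static void subscr_cancel(struct subscription *s)
    {
        pthread_mutex_lock(&nametbl_lock);
        printf("removed subscription from name table\n");
        pthread_mutex_unlock(&nametbl_lock);

        subscr_put(s);               /* may free; never re-takes the lock */
    }

    int main(void)
    {
        struct subscription *s = calloc(1, sizeof(*s));

        s->refcnt = 1;
        subscr_cancel(s);
        return 0;
    }
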
• sctp: remove useless err from sctp_association_init · 58194778
  Authored by Xin Long
This patch removes the unnecessary temporary variable 'err' from
      sctp_association_init.
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      58194778
• cgroup, net_cls: iterate the fds of only the tasks which are being migrated · a05d4fd9
  Authored by Tejun Heo
      The net_cls controller controls the classid field of each socket which
is associated with the cgroup.  Because the classid is a per-socket
      attribute, when a task migrates to another cgroup or the configured
      classid of the cgroup changes, the controller needs to walk all
      sockets and update the classid value, which was implemented by
      3b13758f ("cgroups: Allow dynamically changing net_classid").
      
      While the approach is not scalable, migrating tasks which have a lot
of fds attached to them is rare and the cost is borne by the ones
      initiating the operations.  However, for simplicity, both the
      migration and classid config change paths call update_classid() which
      scans all fds of all tasks in the target css.  This is an overkill for
      the migration path which only needs to cover a much smaller subset of
      tasks which are actually getting migrated in.
      
      On cgroup v1, this can lead to unexpected scalability issues when one
      tries to migrate a task or process into a net_cls cgroup which already
contains a lot of fds.  Even if the migration target doesn't have many
      to get scanned, update_classid() ends up scanning all fds in the
      target cgroup which can be extremely numerous.
      
      Unfortunately, on cgroup v2 which doesn't use net_cls, the problem is
      even worse.  Before bfc2cf6f ("cgroup: call subsys->*attach() only
      for subsystems which are actually affected by migration"), cgroup core
      would call the ->css_attach callback even for controllers which don't
      see actual migration to a different css.
      
      As net_cls is always disabled but still mounted on cgroup v2, whenever
      a process is migrated on the cgroup v2 hierarchy, net_cls sees
      identity migration from root to root and cgroup core used to call
      ->css_attach callback for those.  The net_cls ->css_attach ends up
calling update_classid() on the root net_cls css, to which all
processes on the system belong, as the controller isn't used.  This
      makes any cgroup v2 migration O(total_number_of_fds_on_the_system)
      which is horrible and easily leads to noticeable stalls triggering RCU
      stall warnings and so on.
      
      The worst symptom is already fixed in upstream by bfc2cf6f
      ("cgroup: call subsys->*attach() only for subsystems which are
      actually affected by migration"); however, backporting that commit is
      too invasive and we want to avoid other cases too.
      
      This patch updates net_cls's cgrp_attach() to iterate fds of only the
      processes which are actually getting migrated.  This removes the
      surprising migration cost which is dependent on the total number of
      fds in the target cgroup.  As this leaves write_classid() the only
      user of update_classid(), open-code the helper into write_classid().
Reported-by: David Goode <dgoode@fb.com>
      Fixes: 3b13758f ("cgroups: Allow dynamically changing net_classid")
      Cc: stable@vger.kernel.org # v4.4+
      Cc: Nina Schiff <ninasc@fb.com>
      Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
      a05d4fd9
11. 22 Mar 2017: 2 commits