1. 30 Oct 2014, 4 commits
    • inet: frags: remove the WARN_ON from inet_evict_bucket · d70127e8
      Nikolay Aleksandrov authored
      The WARN_ON in inet_evict_bucket can be triggered by a valid case:
      inet_frag_kill and inet_evict_bucket can be running in parallel on the
      same queue, which means that at least one more ref has been added by a
      previous inet_frag_find call, but inet_frag_kill can delete the timer
      before inet_evict_bucket, which causes the WARN_ON() there to trigger
      since we'll have refcnt != 1. This case is valid because the queue is
      being "killed" for some reason (removed from the chain list and its
      timer deleted), so it will get destroyed in the end by whichever
      inet_frag_put() call brings the refcnt to 0, i.e. the refcounting is
      still correct.
      
      CC: Florian Westphal <fw@strlen.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McLean <chutzpah@gentoo.org>
      
      Fixes: b13d3cbf ("inet: frag: move eviction of queues to work queue")
      Reported-by: Patrick McLean <chutzpah@gentoo.org>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d70127e8
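      A hedged sketch of the eviction path described above (simplified and
      partly assumed, not the exact net/ipv4/inet_fragment.c code), showing
      why the removed WARN_ON was wrong:

      /* inside inet_evict_bucket(), scanning one hash bucket under
       * hb->chain_lock (helper names follow the commit text) */
      if (!del_timer(&fq->timer)) {
          /* the timer is firing right now: hold a reference, wait for it,
           * then drop our reference and rescan the bucket */
          atomic_inc(&fq->refcnt);
          spin_unlock(&hb->chain_lock);
          del_timer_sync(&fq->timer);
          /* The removed check lived here:
           *     WARN_ON(atomic_read(&fq->refcnt) != 1);
           * A parallel inet_frag_kill() (reached via an earlier
           * inet_frag_find() that took its own reference) may have deleted
           * the timer first, so refcnt != 1 is a perfectly valid state.
           * The queue is freed by whichever inet_frag_put() drops the
           * count to zero. */
          inet_frag_put(fq, f);
          goto evict_again;
      }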
    • inet: frags: fix a race between inet_evict_bucket and inet_frag_kill · 65ba1f1e
      Nikolay Aleksandrov authored
      When the evictor is running it adds some chosen frags to a local list,
      to be evicted once the chain lock has been released, but at the same
      time the frag_queue timers can be firing for some of the same queues
      and may call inet_frag_kill, which will wait on the chain lock and
      will then delete the queue from the wrong list, since it was added to
      the eviction one. The fix is simple: check whether the queue has the
      evict flag set, under the chain lock, before deleting it. This is safe
      because the evict flag is set only under that lock, and having the
      flag set also means that the queue has been detached from the chain
      list, so there is no need to delete it again.
      An important note to make is that we're safe w.r.t refcnt because
      inet_frag_kill and inet_evict_bucket will sync on the del_timer operation
      where only one of the two can succeed (or if the timer is executing -
      none of them), the cases are:
      1. inet_frag_kill succeeds in del_timer
       - then the timer ref is removed, but inet_evict_bucket will not add
         this queue to its expire list but will restart eviction in that chain
      2. inet_evict_bucket succeeds in del_timer
       - then the timer ref is kept until the evictor "expires" the queue, but
         inet_frag_kill will remove the initial ref and will set
         INET_FRAG_COMPLETE which will make the frag_expire fn just to remove
         its ref.
      In the end all of the queue users will do an inet_frag_put and the one
      that reaches 0 will free it. The refcount balance should be okay.
      
      CC: Florian Westphal <fw@strlen.de>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Patrick McLean <chutzpah@gentoo.org>
      
      Fixes: b13d3cbf ("inet: frag: move eviction of queues to work queue")
      Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Reported-by: Patrick McLean <chutzpah@gentoo.org>
      Tested-by: Patrick McLean <chutzpah@gentoo.org>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Reviewed-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      65ba1f1e
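      A hedged sketch of the fix on the inet_frag_kill() side (simplified;
      the helper and flag names here are assumptions based on the commit
      text): the evict flag is tested under the chain lock, and the queue is
      only unhashed when the evictor has not already detached it.

      static void fq_unlink(struct inet_frag_queue *fq, struct inet_frags *f)
      {
          struct inet_frag_bucket *hb = get_frag_bucket_locked(fq, f);

          /* The evictor sets the evicted flag, and moves the queue to its
           * private expire list, only while holding the chain lock, so the
           * test below is race-free.  If the flag is set, the queue is no
           * longer on the chain and must not be deleted from it again. */
          if (!(fq->flags & INET_FRAG_EVICTED))
              hlist_del(&fq->list);
          spin_unlock(&hb->chain_lock);
      }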
    • ipv6: notify userspace when we added or changed an ipv6 token · b2ed64a9
      Lubomir Rintel authored
      NetworkManager might want to know that it changed when the router advertisement
      arrives.
      Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Daniel Borkmann <dborkman@redhat.com>
      Acked-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b2ed64a9
    • sch_pie: schedule the timer after all init succeed · d5610902
      WANG Cong authored
      Cc: Vijay Subramanian <vijaynsu@cisco.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      d5610902
  2. 29 Oct 2014, 1 commit
  3. 28 Oct 2014, 2 commits
  4. 27 Oct 2014, 1 commit
  5. 26 Oct 2014, 1 commit
    • tcp: md5: do not use alloc_percpu() · 349ce993
      Eric Dumazet authored
      percpu tcp_md5sig_pool contains memory blobs that ultimately
      go through sg_set_buf().
      
      -> sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));
      
      This requires that the whole area is in a physically contiguous portion
      of memory, and that @buf is not backed by vmalloc().
      
      Given that alloc_percpu() can use vmalloc() areas, this does not
      fit the requirements.
      
      Replace alloc_percpu() with a static DEFINE_PER_CPU(): tcp_md5sig_pool
      is small anyway, so there is no gain in allocating it dynamically.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Fixes: 765cf997 ("tcp: md5: remove one indirection level in tcp_md5sig_pool")
      Reported-by: Crestez Dan Leonard <cdleonard@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      349ce993
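      A hedged sketch of the change (illustrative; the exact declarations in
      net/ipv4/tcp.c may differ): a statically defined per-cpu variable lives
      in the statically mapped per-cpu area, so its address is valid for
      virt_to_page()/sg_set_buf(), unlike a possibly vmalloc()-backed
      alloc_percpu() region.

      /* Before (conceptually): pool = alloc_percpu(struct tcp_md5sig_pool);
       * the returned addresses may be vmalloc()-backed, which breaks
       * sg_set_page(virt_to_page(buf), ...).
       *
       * After: */
      static DEFINE_PER_CPU(struct tcp_md5sig_pool, tcp_md5sig_pool);

      /* callers then grab this CPU's instance directly (sketch, the
       * function name here is made up): */
      static struct tcp_md5sig_pool *tcp_md5sig_pool_get_sketch(void)
      {
          return this_cpu_ptr(&tcp_md5sig_pool);
      }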
  6. 24 Oct 2014, 4 commits
  7. 23 Oct 2014, 3 commits
  8. 22 Oct 2014, 7 commits
    • netfilter: nf_tables: check for NULL in nf_tables_newchain pcpu stats allocation · c123bb71
      Sabrina Dubroca authored
      alloc_percpu returns NULL on failure, not a negative error code.
      
      Fixes: ff3cd7b3 ("netfilter: nf_tables: refactor chain statistic routines")
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      c123bb71
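      A hedged sketch of the corrected error check (the unwinding around it
      in nf_tables_newchain() is omitted): alloc_percpu() reports failure
      with NULL, never with an ERR_PTR()-encoded errno, so IS_ERR() can
      never catch it.

      struct nft_stats __percpu *stats;

      stats = alloc_percpu(struct nft_stats);
      if (stats == NULL)      /* was: if (IS_ERR(stats)) return PTR_ERR(stats); */
          return -ENOMEM;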
    • netfilter: ipset: off by one in ip_set_nfnl_get_byindex() · 0f9f5e1b
      Dan Carpenter authored
      The ->ip_set_list[] array is initialized in ip_set_net_init() and it
      has ->ip_set_max elements, so this check should be >= instead of >;
      otherwise we are off by one.
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Acked-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      0f9f5e1b
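      A hedged sketch of the bound check (illustrative; the real
      ip_set_nfnl_get_byindex() does more): an array with ->ip_set_max
      elements has valid indices 0 .. ip_set_max - 1, so index == ip_set_max
      must be rejected as well.

      /* 'inst' stands for the per-netns ipset instance (assumed name) */
      if (index >= inst->ip_set_max)      /* was: index > inst->ip_set_max */
          return IPSET_INVALID_ID;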
    • netfilter: nf_conntrack: allow server to become a client in TW handling · e37ad9fd
      Marcelo Leitner authored
      When a port that was used to listen for inbound connections gets closed
      and reused for outgoing connections (like rsh ends up doing for its
      stderr flow), we may currently reject the SYN/ACK packet for the new
      connection, because the tcp_conntracks state table forbids a port from
      becoming a client while there is still a TIME_WAIT entry for it.
      
      As TCP may expire the TIME_WAIT socket in 60s while conntrack's timeout
      for it is 120s, there is a ~60s window in which the application can end
      up opening a port that conntrack will end up blocking.
      
      This patch fixes this by simply allowing such a state transition: if we
      see a SYN, in TIME_WAIT state, in the REPLY direction, move the entry
      to sSS. Note that the rest of the code already handles this situation,
      more specifically in tcp_packet(), first switch clause.
      Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
      Acked-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      e37ad9fd
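      A simplified model of the transition described above (not the actual
      tcp_conntracks table in nf_conntrack_proto_tcp.c): a SYN seen in the
      REPLY direction while the entry is in TIME_WAIT now restarts tracking
      as a fresh connection instead of being treated as a forbidden
      transition.

      enum tcp_ct_state { sSS /* SYN_SENT */, sES /* ESTABLISHED */,
                          sTW /* TIME_WAIT */, sIV /* invalid */ };

      /* conceptual handling of a SYN in the REPLY direction; only the sTW
       * case changes with this fix, everything else is simplified away */
      static enum tcp_ct_state reply_syn_transition(enum tcp_ct_state cur)
      {
          if (cur == sTW)
              return sSS;   /* previously forbidden, which blocked port
                             * reuse for up to ~60s */
          return sIV;
      }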
    • net: sched: initialize bstats syncp · 7c1c97d5
      Sabrina Dubroca authored
      Use netdev_alloc_pcpu_stats to allocate percpu stats and initialize syncp.
      
      Fixes: 22e0f8b9 ("net: sched: make bstats per cpu and estimator RCU safe")
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Acked-by: Cong Wang <cwang@twopensource.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7c1c97d5
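      A hedged sketch of the difference (fragment; the surrounding qdisc
      allocation code is omitted): netdev_alloc_pcpu_stats() both allocates
      the per-cpu stats and runs u64_stats_init() on each CPU's syncp, which
      a bare alloc_percpu() does not.

      /* before: syncp left uninitialized */
      sch->cpu_bstats = alloc_percpu(struct gnet_stats_basic_cpu);

      /* after: allocation plus per-CPU u64_stats_init(&stats->syncp) */
      sch->cpu_bstats = netdev_alloc_pcpu_stats(struct gnet_stats_basic_cpu);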
    • netlink: Re-add locking to netlink_lookup() and seq walker · 78fd1d0a
      Thomas Graf authored
      The synchronize_rcu() in netlink_release() introduces unacceptable
      latency. Reintroduce minimal locking around the lookup so we can drop
      the synchronize_rcu() until socket destruction has been RCUfied.
      
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Reported-by: Steinar H. Gunderson <sgunderson@bigfoot.com>
      Reported-and-tested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      78fd1d0a
    • tipc: fix lockdep warning when intra-node messages are delivered · 1a194c2d
      Ying Xue authored
      When running the tipcTC&tipcTS test suite, the lockdep "unsafe locking
      scenario" report below is triggered:
      
      [ 1109.997854]
      [ 1109.997988] =================================
      [ 1109.998290] [ INFO: inconsistent lock state ]
      [ 1109.998575] 3.17.0-rc1+ #113 Not tainted
      [ 1109.998762] ---------------------------------
      [ 1109.998762] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
      [ 1109.998762] swapper/7/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
      [ 1109.998762]  (slock-AF_TIPC){+.?...}, at: [<ffffffffa0011969>] tipc_sk_rcv+0x49/0x2b0 [tipc]
      [ 1109.998762] {SOFTIRQ-ON-W} state was registered at:
      [ 1109.998762]   [<ffffffff810a4770>] __lock_acquire+0x6a0/0x1d80
      [ 1109.998762]   [<ffffffff810a6555>] lock_acquire+0x95/0x1e0
      [ 1109.998762]   [<ffffffff81a2d1ce>] _raw_spin_lock+0x3e/0x80
      [ 1109.998762]   [<ffffffffa0011969>] tipc_sk_rcv+0x49/0x2b0 [tipc]
      [ 1109.998762]   [<ffffffffa0004fe8>] tipc_link_xmit+0xa8/0xc0 [tipc]
      [ 1109.998762]   [<ffffffffa000ec6f>] tipc_sendmsg+0x15f/0x550 [tipc]
      [ 1109.998762]   [<ffffffffa000f165>] tipc_connect+0x105/0x140 [tipc]
      [ 1109.998762]   [<ffffffff817676ee>] SYSC_connect+0xae/0xc0
      [ 1109.998762]   [<ffffffff81767b7e>] SyS_connect+0xe/0x10
      [ 1109.998762]   [<ffffffff817a9788>] compat_SyS_socketcall+0xb8/0x200
      [ 1109.998762]   [<ffffffff81a306e5>] sysenter_dispatch+0x7/0x1f
      [ 1109.998762] irq event stamp: 241060
      [ 1109.998762] hardirqs last  enabled at (241060): [<ffffffff8105a4ad>] __local_bh_enable_ip+0x6d/0xd0
      [ 1109.998762] hardirqs last disabled at (241059): [<ffffffff8105a46f>] __local_bh_enable_ip+0x2f/0xd0
      [ 1109.998762] softirqs last  enabled at (241020): [<ffffffff81059a52>] _local_bh_enable+0x22/0x50
      [ 1109.998762] softirqs last disabled at (241021): [<ffffffff8105a626>] irq_exit+0x96/0xc0
      [ 1109.998762]
      [ 1109.998762] other info that might help us debug this:
      [ 1109.998762]  Possible unsafe locking scenario:
      [ 1109.998762]
      [ 1109.998762]        CPU0
      [ 1109.998762]        ----
      [ 1109.998762]   lock(slock-AF_TIPC);
      [ 1109.998762]   <Interrupt>
      [ 1109.998762]     lock(slock-AF_TIPC);
      [ 1109.998762]
      [ 1109.998762]  *** DEADLOCK ***
      [ 1109.998762]
      [ 1109.998762] 2 locks held by swapper/7/0:
      [ 1109.998762]  #0:  (rcu_read_lock){......}, at: [<ffffffff81782dc9>] __netif_receive_skb_core+0x69/0xb70
      [ 1109.998762]  #1:  (rcu_read_lock){......}, at: [<ffffffffa0001c90>] tipc_l2_rcv_msg+0x40/0x260 [tipc]
      [ 1109.998762]
      [ 1109.998762] stack backtrace:
      [ 1109.998762] CPU: 7 PID: 0 Comm: swapper/7 Not tainted 3.17.0-rc1+ #113
      [ 1109.998762] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
      [ 1109.998762]  ffffffff82745830 ffff880016c03828 ffffffff81a209eb 0000000000000007
      [ 1109.998762]  ffff880017b3cac0 ffff880016c03888 ffffffff81a1c5ef 0000000000000001
      [ 1109.998762]  ffff880000000001 ffff880000000000 ffffffff81012d4f 0000000000000000
      [ 1109.998762] Call Trace:
      [ 1109.998762]  <IRQ>  [<ffffffff81a209eb>] dump_stack+0x4e/0x68
      [ 1109.998762]  [<ffffffff81a1c5ef>] print_usage_bug+0x1f1/0x202
      [ 1109.998762]  [<ffffffff81012d4f>] ? save_stack_trace+0x2f/0x50
      [ 1109.998762]  [<ffffffff810a406c>] mark_lock+0x28c/0x2f0
      [ 1109.998762]  [<ffffffff810a3440>] ? print_irq_inversion_bug.part.46+0x1f0/0x1f0
      [ 1109.998762]  [<ffffffff810a467d>] __lock_acquire+0x5ad/0x1d80
      [ 1109.998762]  [<ffffffff810a70dd>] ? trace_hardirqs_on+0xd/0x10
      [ 1109.998762]  [<ffffffff8108ace8>] ? sched_clock_cpu+0x98/0xc0
      [ 1109.998762]  [<ffffffff8108ad2b>] ? local_clock+0x1b/0x30
      [ 1109.998762]  [<ffffffff810a10dc>] ? lock_release_holdtime.part.29+0x1c/0x1a0
      [ 1109.998762]  [<ffffffff8108aa05>] ? sched_clock_local+0x25/0x90
      [ 1109.998762]  [<ffffffffa000dec0>] ? tipc_sk_get+0x60/0x80 [tipc]
      [ 1109.998762]  [<ffffffff810a6555>] lock_acquire+0x95/0x1e0
      [ 1109.998762]  [<ffffffffa0011969>] ? tipc_sk_rcv+0x49/0x2b0 [tipc]
      [ 1109.998762]  [<ffffffff810a6fb6>] ? trace_hardirqs_on_caller+0xa6/0x1c0
      [ 1109.998762]  [<ffffffff81a2d1ce>] _raw_spin_lock+0x3e/0x80
      [ 1109.998762]  [<ffffffffa0011969>] ? tipc_sk_rcv+0x49/0x2b0 [tipc]
      [ 1109.998762]  [<ffffffffa000dec0>] ? tipc_sk_get+0x60/0x80 [tipc]
      [ 1109.998762]  [<ffffffffa0011969>] tipc_sk_rcv+0x49/0x2b0 [tipc]
      [ 1109.998762]  [<ffffffffa00076bd>] tipc_rcv+0x5ed/0x960 [tipc]
      [ 1109.998762]  [<ffffffffa0001d1c>] tipc_l2_rcv_msg+0xcc/0x260 [tipc]
      [ 1109.998762]  [<ffffffffa0001c90>] ? tipc_l2_rcv_msg+0x40/0x260 [tipc]
      [ 1109.998762]  [<ffffffff81783345>] __netif_receive_skb_core+0x5e5/0xb70
      [ 1109.998762]  [<ffffffff81782dc9>] ? __netif_receive_skb_core+0x69/0xb70
      [ 1109.998762]  [<ffffffff81784eb9>] ? dev_gro_receive+0x259/0x4e0
      [ 1109.998762]  [<ffffffff817838f6>] __netif_receive_skb+0x26/0x70
      [ 1109.998762]  [<ffffffff81783acd>] netif_receive_skb_internal+0x2d/0x1f0
      [ 1109.998762]  [<ffffffff81785518>] napi_gro_receive+0xd8/0x240
      [ 1109.998762]  [<ffffffff815bf854>] e1000_clean_rx_irq+0x2c4/0x530
      [ 1109.998762]  [<ffffffff815c1a46>] e1000_clean+0x266/0x9c0
      [ 1109.998762]  [<ffffffff8108ad2b>] ? local_clock+0x1b/0x30
      [ 1109.998762]  [<ffffffff8108aa05>] ? sched_clock_local+0x25/0x90
      [ 1109.998762]  [<ffffffff817842b1>] net_rx_action+0x141/0x310
      [ 1109.998762]  [<ffffffff810bd710>] ? handle_fasteoi_irq+0xe0/0x150
      [ 1109.998762]  [<ffffffff81059fa6>] __do_softirq+0x116/0x4d0
      [ 1109.998762]  [<ffffffff8105a626>] irq_exit+0x96/0xc0
      [ 1109.998762]  [<ffffffff81a30d07>] do_IRQ+0x67/0x110
      [ 1109.998762]  [<ffffffff81a2ee2f>] common_interrupt+0x6f/0x6f
      [ 1109.998762]  <EOI>  [<ffffffff8100d2b7>] ? default_idle+0x37/0x250
      [ 1109.998762]  [<ffffffff8100d2b5>] ? default_idle+0x35/0x250
      [ 1109.998762]  [<ffffffff8100dd1f>] arch_cpu_idle+0xf/0x20
      [ 1109.998762]  [<ffffffff810999fd>] cpu_startup_entry+0x27d/0x4d0
      [ 1109.998762]  [<ffffffff81034c78>] start_secondary+0x188/0x1f0
      
      When intra-node messages are delivered from one process to another,
      tipc_link_xmit() doesn't disable BH before it directly calls
      tipc_sk_rcv() in process context to forward messages to the destination
      socket. Meanwhile, if messages delivered by a remote node arrive at the
      node and their destination is the same socket, tipc_sk_rcv() running in
      process context may be interrupted by tipc_sk_rcv() running in BH
      context. As a result, the latter cannot obtain the socket lock, as the
      lock is held by the former; the former, however, has no chance to run
      because the latter now owns the CPU, so a deadlock happens. To avoid
      this, BH should always be disabled in tipc_sk_rcv().
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Reviewed-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1a194c2d
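      A hedged sketch of the locking pattern the fix calls for (simplified;
      the real tipc_sk_rcv() does much more): taking the socket spinlock with
      the _bh variant keeps the softirq receive path from interrupting a
      process-context holder of the same lock on the same CPU.

      /* process ctx: tipc_link_xmit() -> tipc_sk_rcv()
       * softirq ctx: driver RX -> tipc_rcv() -> tipc_sk_rcv()
       *
       * With a plain spin_lock() the softirq can fire while process context
       * holds the lock on the same CPU and then spin forever.  Disabling BH
       * for the critical section closes that window: */
      spin_lock_bh(&sk->sk_lock.slock);
      /* ... deliver the message to the destination socket ... */
      spin_unlock_bh(&sk->sk_lock.slock);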
    • tipc: fix a potential deadlock · 7b8613e0
      Ying Xue authored
      Lock dependency analysis detected the possible unsafe locking scenario
      below:
      
                 CPU0                          CPU1
      T0:  tipc_named_rcv()                tipc_rcv()
      T1:  [grab nametble write lock]*     [grab node lock]*
      T2:  tipc_update_nametbl()           tipc_node_link_up()
      T3:  tipc_nodesub_subscribe()        tipc_nametbl_publish()
      T4:  [grab node lock]*               [grab nametble write lock]*
      
      The opposite order of taking the nametbl write lock and the node lock
      on the two paths above may result in a deadlock. If we move the name
      table update that follows a link state change out of the node lock, the
      reverse order of lock acquisition is eliminated, and with it the
      deadlock risk.
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7b8613e0
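      A generic sketch of the fix strategy (illustrative, not the actual TIPC
      code; lock names are taken from the scenario above): instead of nesting
      the two locks in opposite orders on the two paths, the work that needs
      the second lock is done only after the first one has been dropped, so
      no path ever holds both at once.

      /* deadlock-prone: CPU0 takes the nametbl write lock then the node
       * lock, while CPU1 takes the node lock then the nametbl write lock */

      /* fix pattern on the tipc_rcv()/link-up path: */
      spin_lock_bh(&node->lock);
      /* ... update link state, note that a publication is needed ... */
      spin_unlock_bh(&node->lock);

      write_lock_bh(&nametbl_lock);
      /* ... publish to the name table outside the node lock ... */
      write_unlock_bh(&nametbl_lock);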
  9. 21 Oct 2014, 3 commits
    • net: core: handle encapsulation offloads when computing segment lengths · f993bc25
      Florian Westphal authored
      If ->encapsulation is set we have to use inner_tcp_hdrlen and add the
      size of the inner network headers too.
      
      This is 'mostly harmless': tbf might send an skb that is slightly over
      quota, or drop an skb even though it would have fit.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f993bc25
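      A hedged sketch of the segment-length computation described above
      (based on the commit text; the real helper in net/core/skbuff.c may
      differ in detail): for encapsulated skbs the inner TCP header length
      plus the gap between the outer and inner transport headers is used.

      static unsigned int gso_transport_seglen_sketch(const struct sk_buff *skb)
      {
          const struct skb_shared_info *shinfo = skb_shinfo(skb);
          unsigned int thlen = 0;

          if (skb->encapsulation) {
              /* inner network + inner transport headers are part of every
               * segment the GSO layer will emit */
              thlen = skb_inner_transport_header(skb) - skb_transport_header(skb);
              if (shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))
                  thlen += inner_tcp_hdrlen(skb);
          } else if (shinfo->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) {
              thlen = tcp_hdrlen(skb);
          }
          /* UDP-style GSO (UFO) is omitted in this sketch */
          return thlen + shinfo->gso_size;
      }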
    • net: make skb_gso_segment error handling more robust · 330966e5
      Florian Westphal authored
      skb_gso_segment has three possible return values:
      1. a pointer to the first segmented skb
      2. an errno value (IS_ERR())
      3. NULL.  This can happen when GSO is used for header verification.
      
      However, several callers currently test IS_ERR instead of IS_ERR_OR_NULL
      and would oops when NULL is returned.
      
      Note that these call sites should never actually see such a NULL return
      value; all callers mask out the GSO bits in the feature argument.
      
      However, there have been issues with some protocol handlers erroneously
      not respecting the specified feature mask in some cases.
      
      It is preferable to get 'have to turn off hw offloading, else slow' reports
      rather than 'kernel crashes'.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      330966e5
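      A hedged sketch of the caller-side pattern made robust (generic; the
      individual call sites and their error paths differ): NULL is treated
      like an error instead of being dereferenced.

      struct sk_buff *segs;

      /* callers mask out the GSO bits, so NULL "should" never come back */
      segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
      if (IS_ERR_OR_NULL(segs)) {         /* was: if (IS_ERR(segs)) */
          kfree_skb(skb);
          return -ENOMEM;                 /* or propagate PTR_ERR(segs)
                                           * when it really is an ERR_PTR */
      }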
    • net: gso: use feature flag argument in all protocol gso handlers · 1e16aa3d
      Florian Westphal authored
      skb_gso_segment() has a 'features' argument representing offload features
      available to the output path.
      
      A few handlers, e.g. GRE, instead re-fetch the features of skb->dev and
      use those instead of the provided ones when handling
      encapsulation/tunnels.
      
      Depending on dev->hw_enc_features of the output device, skb_gso_segment()
      can then return NULL even when the caller has disabled all GSO feature
      bits, as segmentation of the inner header assumes the device will take
      care of it.
      
      This affects e.g. the tbf scheduler, which will silently drop
      GRE-encapsulated GSO skbs that did not fit the remaining token quota,
      as segmentation does not work when the device advertises the
      corresponding hw offload capabilities.
      
      Cc: Pravin B Shelar <pshelar@nicira.com>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1e16aa3d
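      A hedged sketch of the handler-side change (illustrative; the exact
      expression in the GRE/UDP tunnel gso handlers may differ): the inner
      segmentation is limited by the features the caller passed in, not only
      by what the output device advertises.

      /* before: the caller's restrictions are ignored, so a caller that
       * cleared all GSO bits can still get NULL back ("device will do it") */
      enc_features = skb->dev->hw_enc_features & netif_skb_features(skb);

      /* after: intersect with the 'features' argument of the handler */
      enc_features = skb->dev->hw_enc_features & features;

      segs = skb_mac_gso_segment(skb, enc_features);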
  10. 20 Oct 2014, 2 commits
    • mac80211: minstrels: fix buffer overflow in HT debugfs rc_stats · 11b2357d
      Karl Beldan authored
      At the moment an HT rc_stats line is 106 chars. Multiplied by
      8 (MCS_GROUP_RATES) * 3 (SS) * 2 (GI) * 2 (BW) + CCK (4), i.e. roughly
      x100, this is well above the 8192 - sizeof(*ms) currently allocated.
      
      Fix this by squeezing the output as follows (not that we're short on
      memory, but this also improves readability and range; the new format
      adds one more digit to *ok/*cum and ok/cum):
      
      - Before (HT) (106 ch):
      type           rate     throughput  ewma prob   this prob  retry   this succ/attempt   success    attempts
      CCK/LP          5.5M           0.0        0.0         0.0      0              0(  0)         0           0
      HT20/LGI ABCDP MCS0            0.0        0.0         0.0      1              0(  0)         0           0
      - After (75 ch):
      type           rate     tpt eprob *prob ret  *ok(*cum)        ok(      cum)
      CCK/LP          5.5M    0.0   0.0   0.0   0    0(   0)         0(        0)
      HT20/LGI ABCDP MCS0     0.0   0.0   0.0   1    0(   0)         0(        0)
      
      - Align non-HT format Before (non-HT) (83 ch):
      rate      throughput  ewma prob  this prob  this succ/attempt   success    attempts
      ABCDP  6         0.0        0.0        0.0             0(  0)         0           0
            54         0.0        0.0        0.0             0(  0)         0           0
      - After (61 ch):
      rate          tpt eprob *prob  *ok(*cum)        ok(      cum)
      ABCDP  1      0.0   0.0   0.0    0(   0)         0(        0)
            54      0.0   0.0   0.0    0(   0)         0(        0)
      
      *This also adds dynamic checks for overflow, lowers the size of the
      non-HT request (allowing > 30 entries) and replaces the buddy-rounded
      allocations (s/sizeof(*ms) + 8192/8192).
      Signed-off-by: Karl Beldan <karl.beldan@rivierawaves.com>
      Acked-by: Felix Fietkau <nbd@openwrt.org>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      11b2357d
    • Net: DSA: Fix checking for get_phy_flags function · 228b16cb
      Andrew Lunn authored
      The check for the presence or absence of the optional switch function
      get_phy_flags() called the function, rather than checking whether the
      pointer is NULL. This causes a dereference of a NULL pointer on all
      switch chips except the sf2, the only switch to implement this call.
      Signed-off-by: Andrew Lunn <andrew@lunn.ch>
      Fixes: 6819563e ("net: dsa: allow switch drivers to specify phy_device::dev_flags")
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      228b16cb
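      A hedged sketch of the fix described above (member and variable names
      are assumptions based on the commit subject, not a verbatim excerpt of
      net/dsa): the optional callback is tested for NULL before being called.

      u32 phy_flags = 0;

      /* the buggy "presence check" called the function itself, so every
       * driver without get_phy_flags() dereferenced a NULL pointer */
      if (ds->drv->get_phy_flags)
          phy_flags = ds->drv->get_phy_flags(ds, p->port);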
  11. 19 Oct 2014, 3 commits
  12. 18 Oct 2014, 9 commits