1. 15 December 2012 — 4 commits
    • inet: Fix kmemleak in tcp_v4/6_syn_recv_sock and dccp_v4/6_request_recv_sock · e337e24d
      Authored by Christoph Paasch
      If, in either of the above functions, inet_csk_route_child_sock() or
      __inet_inherit_port() fails, the newsk will not be freed:
      
      unreferenced object 0xffff88022e8a92c0 (size 1592):
        comm "softirq", pid 0, jiffies 4294946244 (age 726.160s)
        hex dump (first 32 bytes):
          0a 01 01 01 0a 01 01 02 00 00 00 00 a7 cc 16 00  ................
          02 00 03 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
        backtrace:
          [<ffffffff8153d190>] kmemleak_alloc+0x21/0x3e
          [<ffffffff810ab3e7>] kmem_cache_alloc+0xb5/0xc5
          [<ffffffff8149b65b>] sk_prot_alloc.isra.53+0x2b/0xcd
          [<ffffffff8149b784>] sk_clone_lock+0x16/0x21e
          [<ffffffff814d711a>] inet_csk_clone_lock+0x10/0x7b
          [<ffffffff814ebbc3>] tcp_create_openreq_child+0x21/0x481
          [<ffffffff814e8fa5>] tcp_v4_syn_recv_sock+0x3a/0x23b
          [<ffffffff814ec5ba>] tcp_check_req+0x29f/0x416
          [<ffffffff814e8e10>] tcp_v4_do_rcv+0x161/0x2bc
          [<ffffffff814eb917>] tcp_v4_rcv+0x6c9/0x701
          [<ffffffff814cea9f>] ip_local_deliver_finish+0x70/0xc4
          [<ffffffff814cec20>] ip_local_deliver+0x4e/0x7f
          [<ffffffff814ce9f8>] ip_rcv_finish+0x1fc/0x233
          [<ffffffff814cee68>] ip_rcv+0x217/0x267
          [<ffffffff814a7bbe>] __netif_receive_skb+0x49e/0x553
          [<ffffffff814a7cc3>] netif_receive_skb+0x50/0x82
      
      This happens because sk_clone_lock() initializes sk_refcnt to 2, so a
      single sock_put() is not enough to free the memory. Additionally, state
      such as xfrm, memcg, and cookie_values may already have been initialized,
      and it has to be freed properly.
      
      This is fixed by forcing a call to tcp_done(), which ends up in
      inet_csk_destroy_sock() doing the final sock_put(). tcp_done() is
      necessary because it performs all the cleanup of xfrm, memcg,
      cookie_values, and so on.
      
      Before calling tcp_done(), we have to mark the socket SOCK_DEAD, to
      force it into inet_csk_destroy_sock(). To avoid the warning in
      inet_csk_destroy_sock(), inet_num has to be set to 0.
      And as inet_csk_destroy_sock() decrements orphan_count, we first have
      to increment it.
      
      Calling tcp_done() allows us to remove the calls to
      tcp_clear_xmit_timer() and tcp_cleanup_congestion_control().
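      
      A minimal sketch of the resulting error path (the helper name is
      hypothetical; the sequence follows the description above):
      
      	/* sk_clone_lock() left newsk locked with sk_refcnt == 2 */
      	static void prepare_forced_close(struct sock *sk)
      	{
      		bh_unlock_sock(sk);
      		sock_put(sk);		/* drop the extra reference */
      
      		/* let tcp_done() reach inet_csk_destroy_sock() */
      		sock_set_flag(sk, SOCK_DEAD);
      		percpu_counter_inc(sk->sk_prot->orphan_count);
      		inet_sk(sk)->inet_num = 0;
      	}
      
      	/* in tcp_v4_syn_recv_sock(), when inet_csk_route_child_sock()
      	 * or __inet_inherit_port() fails:
      	 */
      	prepare_forced_close(newsk);
      	tcp_done(newsk);	/* full cleanup + final sock_put() */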
      
      A similar approach is taken for dccp by calling dccp_done().
      
      This bug has been in the kernel since commit 093d2823 ("tproxy: fix hash
      locking issue when using port redirection in __inet_inherit_port()"),
      and thus since version 2.6.37.
      Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: Change skb->data before using icmpv6_notify() to propagate redirect · 093d04d4
      Authored by Duan Jiong
      In ndisc_redirect_rcv(), skb->data points to the transport header, but
      icmpv6_notify() needs skb->data to point to the inner IP packet. So
      before using icmpv6_notify() to propagate the redirect, move skb->data
      to point to the inner IP packet that triggered the sending of the
      Redirect, and introduce struct rd_msg to make this easy.
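      
      A hedged sketch (the Redirect layout follows RFC 4861; details
      illustrative):
      
      	struct rd_msg {
      		struct icmp6hdr	icmph;
      		struct in6_addr	target;
      		struct in6_addr	dest;
      		__u8		opt[0];
      	};
      
      	/* in ndisc_redirect_rcv(), once the redirected-header option
      	 * has been located, advance skb->data to the inner IP packet
      	 * before notifying:
      	 */
      	hdr = (u8 *)ndopts.nd_opts_rh;
      	hdr += 8;	/* skip option type/length and reserved bytes */
      	if (!pskb_pull(skb, hdr - skb_transport_header(skb)))
      		return;
      
      	icmpv6_notify(skb, NDISC_REDIRECT, 0, 0);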
      Signed-off-by: Duan Jiong <djduanjiong@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mac802154: fix destruction ordering for ieee802154 devices · 1e9f9545
      Authored by Konstantin Khlebnikov
      mutex_destroy() must be called before wpan_phy_free(), because the latter
      puts the last reference and frees the memory. Caught as overwritten
      poison in kmalloc-2048.
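      
      Sketch of the corrected ordering (names assumed from mac802154):
      
      	void ieee802154_free_device(struct ieee802154_dev *dev)
      	{
      		struct mac802154_priv *priv = mac802154_to_priv(dev);
      
      		/* destroy the mutex while its memory is still valid */
      		mutex_destroy(&priv->slaves_mtx);
      		/* now drop the last reference and free the phy */
      		wpan_phy_free(priv->phy);
      	}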
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Alexander Smirnov <alex.bluesman.smirnov@gmail.com>
      Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: linux-zigbee-devel@lists.sourceforge.net
      Cc: netdev@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: remove temporary variable for MLDv2 maximum response code computation · 8fa45a70
      Authored by Ang Way Chuang
      As suggested by Stephen Hemminger, this removes the temporary variable
      introduced in commit eca2a43b
      ("bridge: fix icmpv6 endian bug and other sparse warnings").
      Signed-off-by: Ang Way Chuang <wcang@sfc.wide.ad.jp>
      Acked-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 14 December 2012 — 3 commits
  3. 13 December 2012 — 3 commits
  4. 12 December 2012 — 5 commits
    • pkt_sched: avoid requeues if possible · 1abbe139
      Authored by Eric Dumazet
      With BQL being deployed, the following behavior becomes more likely:
      
      We dequeue a packet from the qdisc in dequeue_skb(), then realize in
      sch_direct_xmit() that the target tx queue is in XOFF state, and we
      have to park the skb in gso_skb for later.
      
      This shows up in the stats (tc -s qdisc dev eth0) as requeues.
      
      The problem with these requeues is that high-priority packets cannot be
      dequeued as long as this (possibly low-priority and big TSO) packet is
      not removed from gso_skb.
      
      At 1 Gbps, a full-size TSO packet represents 500 us of extra latency.
      
      In some cases, we know that all packets dequeued from a qdisc are
      destined for a particular, known txq:
      
      - if the device is not multiqueue
      - for all MQ/MQPRIO slave qdiscs
      
      This patch introduces a new qdisc flag, TCQ_F_ONETXQUEUE, to mark
      this capability, so that dequeue_skb() is allowed to dequeue a packet
      only if the associated txq is not stopped.
      
      This indeed reduces latencies for high-priority packets (and improves
      fairness with sfq/fq_codel), and almost eliminates qdisc 'requeues'.
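      
      A simplified sketch of the dequeue_skb() change (field and helper
      names assumed from net/sched/sch_generic.c of that era):
      
      	static inline struct sk_buff *dequeue_skb(struct Qdisc *q)
      	{
      		struct sk_buff *skb = q->gso_skb;
      
      		if (unlikely(skb)) {
      			/* existing requeue handling, kept as-is */
      			q->gso_skb = NULL;
      			q->q.qlen--;
      		} else if (!(q->flags & TCQ_F_ONETXQUEUE) ||
      			   !netif_xmit_frozen_or_stopped(q->dev_queue)) {
      			/* pull a new packet only if the single known
      			 * txq can take it right now
      			 */
      			skb = q->dequeue(q);
      		}
      		return skb;
      	}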
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Cc: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: gro: avoid double copy in skb_gro_receive() · 75be4372
      Authored by Eric Dumazet
      __copy_skb_header(nskb, p) already copied p->cb[], no need to copy
      it again.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: fix seq check in br_mdb_dump() · 2ce297fc
      Authored by Cong Wang
      To handle rehashing, introduce a global variable 'br_mdb_rehash_seq'
      that is incremented on every rehash, and assign
      net->dev_base_seq + br_mdb_rehash_seq to cb->seq.
      
      In theory cb->seq could wrap to zero, but this is not easy to fix,
      as net->dev_base_seq is not visible inside br_mdb_rehash().
      In practice, this is rare.
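      
      A sketch of the scheme (placement illustrative):
      
      	static unsigned int br_mdb_rehash_seq;
      
      	/* in br_mdb_rehash(), once the new table is published: */
      	br_mdb_rehash_seq++;
      
      	/* in br_mdb_dump(), before walking the hash table: */
      	cb->seq = net->dev_base_seq + br_mdb_rehash_seq;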
      
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Cong Wang <amwang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: remove obsolete simple_strto<foo> · a71258d7
      Authored by Abhijit Pawar
      This patch removes the redundant occurrences of simple_strto<foo>.
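      
      For reference, the usual replacement is the kstrto<foo> family, which
      actually reports conversion errors (illustrative):
      
      	unsigned long val;
      	int err;
      
      	/* obsolete: silently ignores trailing garbage */
      	val = simple_strtoul(buf, NULL, 10);
      
      	/* preferred: returns 0 on success, -EINVAL/-ERANGE on error */
      	err = kstrtoul(buf, 10, &val);
      	if (err)
      		return err;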
      Signed-off-by: Abhijit Pawar <abhi.c.pawar@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: gro: dev_gro_receive() cleanup · 89c5fa33
      Authored by Eric Dumazet
      __napi_gro_receive() is inlined from two call sites for no good reason.
      
      Let's move the prep stuff into a function of its own, called only
      if/when needed. This saves 300 bytes on x86:
      
      # size net/core/dev.o.after net/core/dev.o.before
         text	   data	    bss	    dec	    hex	filename
        51968	   1238	   1040	  54246	   d3e6	net/core/dev.o.before
        51664	   1238	   1040	  53942	   d2b6	net/core/dev.o.after
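      
      Shape of the refactoring (a hedged sketch; the helper name matches
      the pattern described, the body is illustrative):
      
      	static void gro_list_prepare(struct napi_struct *napi,
      				     struct sk_buff *skb)
      	{
      		struct sk_buff *p;
      
      		/* precompute per-flow comparison state once, instead
      		 * of inlining it at both __napi_gro_receive() callers
      		 */
      		for (p = napi->gro_list; p; p = p->next) {
      			unsigned long diffs;
      
      			diffs = (unsigned long)p->dev ^
      				(unsigned long)skb->dev;
      			diffs |= compare_ether_header(skb_mac_header(p),
      					skb_gro_mac_header(skb));
      			NAPI_GRO_CB(p)->same_flow = !diffs;
      		}
      	}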
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 11 December 2012 — 6 commits
  6. 10 December 2012 — 4 commits
    • inet_diag: validate port comparison byte code to prevent unsafe reads · 5e1f5420
      Authored by Neal Cardwell
      Add logic to verify that a port comparison byte code operation
      actually has the second inet_diag_bc_op from which we read the port
      for such operations.
      
      Previously the code blindly referenced op[1] without first checking
      whether a second inet_diag_bc_op struct could fit there. So a
      malicious user could make the kernel read 4 bytes beyond the end of
      the bytecode array by claiming to have a whole port comparison byte
      code (2 inet_diag_bc_op structs) when in fact the bytecode was not
      long enough to hold both.
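      
      A sketch of the added check (assuming the validator style used in
      net/ipv4/inet_diag.c):
      
      	static bool valid_port_comparison(const struct inet_diag_bc_op *op,
      					  int len, int *min_len)
      	{
      		/* the port lives in a second inet_diag_bc_op: make
      		 * sure the bytecode is long enough to contain it
      		 */
      		*min_len += sizeof(struct inet_diag_bc_op);
      		return len >= *min_len;
      	}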
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • inet_diag: avoid unsafe and nonsensical prefix matches in inet_diag_bc_run() · f67caec9
      Authored by Neal Cardwell
      Add logic to check the address family of the user-supplied conditional
      against the address family of the connection entry. We no longer do
      prefix matching of addresses from different address families (AF_INET
      vs AF_INET6), except for the previously existing support for having an
      IPv4 prefix match an IPv4-mapped IPv6 address (which this commit
      maintains as-is).
      
      This change is needed for two reasons:
      
      (1) The addresses are different lengths, so comparing a 128-bit IPv6
      prefix match condition to a 32-bit IPv4 connection address can cause
      us to unwittingly walk off the end of the IPv4 address and read
      garbage or oops.
      
      (2) The IPv4 and IPv6 address spaces are semantically distinct, so a
      simple bit-wise comparison of the prefixes is not meaningful, and
      would lead to bogus results (except for the IPv4-mapped IPv6 case,
      which this commit maintains).
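      
      Simplified shape of the check in inet_diag_bc_run() (variable names
      illustrative):
      
      	if (cond->family != AF_UNSPEC &&
      	    cond->family != entry->family) {
      		bool v4mapped = entry->family == AF_INET6 &&
      			cond->family == AF_INET &&
      			ipv6_addr_v4mapped((const struct in6_addr *)addr);
      
      		if (!v4mapped)
      			yes = 0;	/* families differ: no match */
      		/* else: compare against the embedded IPv4 address,
      		 * preserving the pre-existing IPv4-mapped behavior
      		 */
      	}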
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • inet_diag: validate byte code to prevent oops in inet_diag_bc_run() · 405c0059
      Authored by Neal Cardwell
      Add logic to validate INET_DIAG_BC_S_COND and INET_DIAG_BC_D_COND
      operations.
      
      Previously we did not validate the inet_diag_hostcond, address family,
      address length, and prefix length. So a malicious user could make the
      kernel read beyond the end of the bytecode array by claiming to have a
      whole inet_diag_hostcond when the bytecode was not long enough to
      contain a whole inet_diag_hostcond of the given address family. Or
      they could make the kernel read up to about 27 bytes beyond the end of
      a connection address by passing a prefix length that exceeded the
      length of addresses of the given family.
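      
      A sketch of the validation (shape of the check; names assumed):
      
      	static bool valid_hostcond(const struct inet_diag_bc_op *op,
      				   int len, int *min_len)
      	{
      		const struct inet_diag_hostcond *cond;
      		int addr_len;
      
      		/* the op must be followed by a whole hostcond */
      		*min_len += sizeof(struct inet_diag_hostcond);
      		if (len < *min_len)
      			return false;
      		cond = (const struct inet_diag_hostcond *)(op + 1);
      
      		switch (cond->family) {
      		case AF_UNSPEC:
      			addr_len = 0;
      			break;
      		case AF_INET:
      			addr_len = sizeof(struct in_addr);
      			break;
      		case AF_INET6:
      			addr_len = sizeof(struct in6_addr);
      			break;
      		default:
      			return false;
      		}
      		/* prefix must fit the address, address the bytecode */
      		if (cond->prefix_len > 8 * addr_len)
      			return false;
      		*min_len += addr_len;
      		return len >= *min_len;
      	}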
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • inet_diag: fix oops for IPv4 AF_INET6 TCP SYN-RECV state · 1c95df85
      Authored by Neal Cardwell
      Fix inet_diag to be aware of the fact that AF_INET6 TCP connections
      instantiated for IPv4 traffic and in the SYN-RECV state were actually
      created with inet_reqsk_alloc(), instead of inet6_reqsk_alloc(). This
      means that for such connections inet6_rsk(req) returns a pointer to a
      random spot in memory up to roughly 64KB beyond the end of the
      request_sock.
      
      With this bug, for a server using AF_INET6 TCP sockets and serving
      IPv4 traffic, an inet_diag user like `ss state SYN-RECV` would lead to
      inet_diag_fill_req() causing an oops or the export to user space of 16
      bytes of kernel memory as a garbage IPv6 address, depending on where
      the garbage inet6_rsk(req) pointed.
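      
      Simplified sketch of the fix in inet_diag_fill_req() (field names
      assumed from that era's inet_request_sock):
      
      	if (r->idiag_family == AF_INET6 &&
      	    req->rsk_ops->family == AF_INET) {
      		/* allocated by inet_reqsk_alloc(): only the IPv4
      		 * part is valid, so report v4-mapped addresses
      		 * instead of dereferencing inet6_rsk(req)
      		 */
      		ipv6_addr_set_v4mapped(ireq->loc_addr,
      				(struct in6_addr *)r->id.idiag_src);
      		ipv6_addr_set_v4mapped(ireq->rmt_addr,
      				(struct in6_addr *)r->id.idiag_dst);
      	}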
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 09 December 2012 — 4 commits
  8. 08 December 2012 — 11 commits
    • tipc: refactor accept() code for improved readability · 0fef8f20
      Authored by Paul Gortmaker
      In TIPC's accept() routine, there is a large block of code relating
      to initialization of a new socket, all within an if condition checking
      if the allocation succeeded.
      
      Here, we simply invert the if check, so that the main execution
      path stays at the same indentation level, which improves readability.
      If the allocation fails, we jump to an already existing exit label.
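      
      The pattern, schematically:
      
      	/* before: success path buried inside the if */
      	if (new_sk != NULL) {
      		/* long block of new-socket initialization,
      		 * indented one extra level
      		 */
      	}
      	return res;
      
      	/* after: bail out early, keep the main path flat */
      	if (new_sk == NULL)
      		goto exit;
      	/* initialization at the top indentation level */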
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    • tipc: add lock nesting notation to quiet lockdep warning · 258f8667
      Authored by Ying Xue
      TIPC's accept() grabs the socket lock on a newly allocated socket
      while holding the socket lock on the old socket. But lockdep worries
      that this might be a recursive lock attempt:
      
        [ INFO: possible recursive locking detected ]
        ---------------------------------------------
        kworker/u:0/6 is trying to acquire lock:
        (sk_lock-AF_TIPC){+.+.+.}, at: [<c8c1226c>] accept+0x15c/0x310 [tipc]
      
        but task is already holding lock:
        (sk_lock-AF_TIPC){+.+.+.}, at: [<c8c12138>] accept+0x28/0x310 [tipc]
      
        other info that might help us debug this:
        Possible unsafe locking scenario:
      
                CPU0
                ----
                lock(sk_lock-AF_TIPC);
                lock(sk_lock-AF_TIPC);
      
                *** DEADLOCK ***
      
        May be due to missing lock nesting notation
        [...]
      
      Tell lockdep that this locking is safe by using lock_sock_nested().
      This is similar to what was done in commit 5131a184 for
      SCTP code ("SCTP: lock_sock_nested in sctp_sock_migrate").
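      
      Concretely (sketch):
      
      	/* in accept(): the listener's socket lock is already held,
      	 * so tell lockdep the nested lock is a different level
      	 */
      	lock_sock_nested(new_sk, SINGLE_DEPTH_NESTING);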
      
      Also note that this isn't something that is seen normally,
      as it was uncovered with some experimental work-in-progress
      code not yet ready for mainline.  So there is no need for stable
      backports or similar of this commit.
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    • tipc: eliminate connection setup for implied connect in recv_msg() · cbab3687
      Authored by Ying Xue
      As connection setup is now completed asynchronously in BH context,
      in the function filter_connect(), the corresponding code in recv_msg()
      becomes redundant.
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    • tipc: introduce non-blocking socket connect · 584d24b3
      Authored by Ying Xue
      TIPC has so far only supported blocking connect(), meaning that a call
      to connect() doesn't return until either the connection is fully
      established, or an error occurs. This has proved insufficient for many
      users, so we now introduce non-blocking connect(), analogous to how
      this is done in TCP and other protocols.
      
      With this feature, if a connection cannot be established instantly,
      connect() will return the error code -EINPROGRESS.
      If the user later calls connect() again, they will get either
      -EALREADY or -EISCONN, depending on whether the connection has been
      established yet.
      
      The user must have explicitly set the socket to non-blocking
      (SOCK_NONBLOCK or O_NONBLOCK, depending on the method used), so unless
      they had already set this for some reason (the socket would have
      remained blocking in current TIPC anyway), this change is completely
      backwards compatible.
      
      It is also now possible to call select() or poll() to wait for the
      completion of a connection.
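      
      A hedged userspace usage sketch (TIPC address setup omitted; error
      handling minimal):
      
      	#include <errno.h>
      	#include <poll.h>
      	#include <sys/socket.h>
      	#include <linux/tipc.h>
      
      	int tipc_connect_nonblock(const struct sockaddr_tipc *server)
      	{
      		int fd = socket(AF_TIPC, SOCK_STREAM | SOCK_NONBLOCK, 0);
      
      		if (fd < 0)
      			return -1;
      		if (connect(fd, (const struct sockaddr *)server,
      			    sizeof(*server)) < 0 &&
      		    errno == EINPROGRESS) {
      			/* wait for the handshake to complete */
      			struct pollfd pfd = { .fd = fd, .events = POLLOUT };
      
      			poll(&pfd, 1, 5000 /* ms */);
      			/* a second connect() now fails with
      			 * errno == EISCONN once established, or
      			 * EALREADY while still in progress
      			 */
      		}
      		return fd;
      	}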
      
      An effect of the above is that the actual completion of a connection
      may now be performed asynchronously, independent of the calls from
      user space. Therefore, we now execute this code in BH context, in
      the function filter_rcv(), which is executed upon reception of
      messages in the socket.
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      [PG: minor refactoring for improved connect/disconnect function names]
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    • tipc: consolidate connection-oriented message reception in one function · 7e6c131e
      Authored by Ying Xue
      Handling of connection-related message reception is currently scattered
      around at different places in the code. This makes it harder to verify
      that things are handled correctly in all possible scenarios.
      So we consolidate the existing processing of connection-oriented
      message reception in a single routine.  In the process, we convert the
      chain of if/else into a switch/case for improved readability.
      
      A cast on the socket_state in the switch is needed to avoid compile
      warnings on 32 bit, like "net/tipc/socket.c:1252:2: warning: case value
      ‘4294967295’ not in enumerated type".  This happens because the
      existing tipc code pseudo-extends the default Linux socket state
      values with:
      
      	#define SS_LISTENING    -1      /* socket is listening */
      	#define SS_READY        -2      /* socket is connectionless */
      
      It may make sense to add these as _positive_ values to the existing
      socket state enum list someday, instead of these pre-existing defines.
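      
      Hence the cast in the consolidated routine (sketch):
      
      	switch ((int)sock->state) {
      	case SS_READY:
      		/* connectionless sockets */
      		break;
      	case SS_LISTENING:
      		/* listening sockets */
      		break;
      	default:
      		/* SS_CONNECTED, SS_CONNECTING, ... */
      		break;
      	}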
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      [PG: add cast to fix warning; remove returns from middle of switch]
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    • tipc: standardize across connect/disconnect function naming · bc879117
      Authored by Paul Gortmaker
      Currently we have tipc_disconnect and tipc_disconnect_port.  It is
      not clear from the names alone what they do or how they differ.
      It turns out that tipc_disconnect just deals with the port locking
      and then calls tipc_disconnect_port, which does all the work.
      
      If we rename as follows: tipc_disconnect_port --> __tipc_disconnect,
      then we will be following typical Linux convention, where:
      
         __tipc_disconnect: "raw" function that does all the work.
      
         tipc_disconnect: wrapper that deals with locking and then calls
      		    the real core __tipc_disconnect function.
      
      With this, the difference is immediately evident, and locking
      violations are more apt to be spotted by chance while working on,
      or even just while reading the code.
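      
      The resulting wrapper/worker pattern, sketched:
      
      	static void __tipc_disconnect(struct tipc_port *p_ptr)
      	{
      		/* core disconnect work; caller holds the port lock */
      	}
      
      	int tipc_disconnect(u32 ref)
      	{
      		struct tipc_port *p_ptr = tipc_port_lock(ref);
      
      		if (!p_ptr)
      			return -EINVAL;
      		__tipc_disconnect(p_ptr);
      		tipc_port_unlock(p_ptr);
      		return 0;
      	}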
      
      On the connect side of things, we currently have only the single
      "tipc_connect2port" function.  It does both the locking at enter/exit
      and the core of the work.  Pending changes will make it desirable to
      have connect be a two-part locking wrapper + worker function,
      just like the disconnect already is.
      
      Here, we make the connect look just like the updated disconnect case,
      for the above reason, and for consistency.  In the process, we also
      get rid of the "2port" suffix that was on the original name, since
      it adds no descriptive value.
      
      On close examination, one might notice that the above connect
      changes implicitly move the call to tipc_link_get_max_pkt() to
      within the scope of the tipc_port_lock() protected region, when it
      was not previously.  We don't see any issues with this, and it is in
      keeping with __tipc_connect doing the work and tipc_connect just
      handling the locking.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    • tipc: change sk_receive_queue upper limit · e643df15
      Authored by Jon Maloy
      The sk_receive_queue upper limit for connectionless sockets has
      empirically turned out to be too low. When we double the current limit
      we get far fewer rejected messages and no noticeable negative
      side-effects.
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
    • net: gro: fix possible panic in skb_gro_receive() · c3c7c254
      Authored by Eric Dumazet
      Commit 2e71a6f8 ("net: gro: selective flush of packets") introduced
      a bug for skbs using frag_list. This part of the GRO stack is rarely
      used, as it requires skbs that do not use a page fragment for their
      skb->head.
      
      Most drivers do use a page fragment, but some of them use GFP_KERNEL
      allocations for the initial fill of their RX ring buffer.
      
      napi_gro_flush() overwrites skb->prev, which for these skbs was used
      to point to the last skb in the frag_list.
      
      Fix this using a separate field in struct napi_gro_cb to point to the
      last fragment.
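      
      Sketch of the fix (the new field per the changelog; merge path
      simplified):
      
      	struct napi_gro_cb {
      		/* ...existing fields... */
      
      		/* last frag_list fragment, for skb_gro_receive() */
      		struct sk_buff	*last;
      	};
      
      	/* when chaining a new skb onto p's frag_list: */
      	NAPI_GRO_CB(p)->last->next = skb;
      	NAPI_GRO_CB(p)->last = skb;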
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: bug fix Fast Open client retransmission · 93b174ad
      Authored by Yuchung Cheng
      If the SYN-ACK partially acks the SYN-data, the client retransmits the
      remaining data by tcp_retransmit_skb(). This increments loss recovery
      state variables like tp->retrans_out in the Open state. If loss recovery
      happens before the retransmission is acked, it triggers the WARN_ON
      check in tcp_fastretrans_alert(). For example: the client sends
      SYN-data, gets a SYN-ACK acking only the ISN, retransmits data, sends
      another 4 data packets and gets 3 dupacks.
      
      Since the retransmission is not caused by a network drop, it should not
      update the recovery state variables. Further, the server may return a
      smaller MSS than the cached MSS used for the SYN-data, so the
      retransmission needs a loop. Otherwise some data will not be
      retransmitted until a timeout or other loss recovery events.
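      
      A hedged sketch of the retransmit loop (helpers assumed from
      net/ipv4 of that era; exact placement illustrative):
      
      	if (data) { /* SYN-data was only partially acked */
      		/* refragment to the server's (possibly smaller) MSS
      		 * and retransmit every remaining segment, without
      		 * touching loss-recovery state such as retrans_out
      		 */
      		tcp_for_write_queue_from(data, sk) {
      			if (data == tcp_send_head(sk) ||
      			    __tcp_retransmit_skb(sk, data))
      				break;
      		}
      	}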
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bridge: export multicast database via netlink · ee07c6e7
      Authored by Cong Wang
      V5: fix two bugs pointed out by Thomas
          remove seq check for now, mark it as TODO
      
      V4: remove some useless #include
          some coding style fix
      
      V3: drop debugging printk's
          update selinux perm table as well
      
      V2: drop patch 1/2, export ifindex directly
          Redesign netlink attributes
          Improve netlink seq check
          Handle IPv6 addr as well
      
      This patch exports the bridge multicast database via the netlink
      message type RTM_GETMDB. It is similar to the fdb interface, but
      currently bridge-specific. We may need to support modifying the
      multicast database too (RTM_{ADD,DEL}MDB).
      
      (Thanks to Thomas for patient reviews)
      
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Cong Wang <amwang@redhat.com>
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tipc: eliminate aggregate sk_receive_queue limit · 9da3d475
      Authored by Ying Xue
      As a complement to the per-socket sk_receive_queue limit, TIPC keeps a
      global atomic counter for the sum of sk_receive_queue sizes across all
      TIPC sockets. When incremented, the counter is compared to an upper
      threshold value, and if this is reached, the message is rejected
      with error code TIPC_OVERLOAD.
      
      This check was originally meant to protect the node against
      buffer exhaustion and general CPU overload. However, all experience
      indicates that the feature is not only redundant on Linux, but even
      harmful. Users run into the limit very often, causing disturbances
      for their applications, while removing it seems to have no negative
      effects at all. We have also seen that overall performance is
      boosted significantly when this bottleneck is removed.
      
      Furthermore, we don't see any other network protocols maintaining
      such a mechanism, which strengthens our conviction that this
      control can be eliminated.
      
      As a result, the atomic variable tipc_queue_size is now unused
      and so it can be deleted.  There is a getsockopt call that used
      to allow reading it; we retain that but just return zero for
      maximum compatibility.
      Signed-off-by: Ying Xue <ying.xue@windriver.com>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      [PG: phase out tipc_queue_size as pointed out by Neil Horman]
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>