  1. 15 Jun 2016 (2 commits)
  2. 08 Jun 2016 (1 commit)
  3. 25 Nov 2015 (1 commit)
  4. 19 Oct 2015 (1 commit)
    • RDS: fix rds-ping deadlock over TCP transport · 7b4b0009
      Committed by santosh.shilimkar@oracle.com
      Sowmini found a hang with rds-ping while testing RDS over TCP. It's
      a corner case and doesn't always happen. The issue is not reproducible
      with the IB transport. It's clear from the dump below why we see it
      with RDS TCP.
      
       [<ffffffff8153b7e5>] do_tcp_setsockopt+0xb5/0x740
       [<ffffffff8153bec4>] tcp_setsockopt+0x24/0x30
       [<ffffffff814d57d4>] sock_common_setsockopt+0x14/0x20
       [<ffffffffa096071d>] rds_tcp_xmit_prepare+0x5d/0x70 [rds_tcp]
       [<ffffffffa093b5f7>] rds_send_xmit+0xd7/0x740 [rds]
       [<ffffffffa093bda2>] rds_send_pong+0x142/0x180 [rds]
       [<ffffffffa0939d34>] rds_recv_incoming+0x274/0x330 [rds]
       [<ffffffff810815ae>] ? ttwu_queue+0x11e/0x130
       [<ffffffff814dcacd>] ? skb_copy_bits+0x6d/0x2c0
       [<ffffffffa0960350>] rds_tcp_data_recv+0x2f0/0x3d0 [rds_tcp]
       [<ffffffff8153d836>] tcp_read_sock+0x96/0x1c0
       [<ffffffffa0960060>] ? rds_tcp_recv_init+0x40/0x40 [rds_tcp]
       [<ffffffff814d6a90>] ? sock_def_write_space+0xa0/0xa0
       [<ffffffffa09604d1>] rds_tcp_data_ready+0xa1/0xf0 [rds_tcp]
       [<ffffffff81545249>] tcp_data_queue+0x379/0x5b0
       [<ffffffffa0960cdb>] ? rds_tcp_write_space+0xbb/0x110 [rds_tcp]
       [<ffffffff81547fd2>] tcp_rcv_established+0x2e2/0x6e0
       [<ffffffff81552602>] tcp_v4_do_rcv+0x122/0x220
       [<ffffffff81553627>] tcp_v4_rcv+0x867/0x880
       [<ffffffff8152e0b3>] ip_local_deliver_finish+0xa3/0x220
      
      This happens because the rds_send_xmit() call chain wants to take
      the sock_lock, which is already held by tcp_v4_rcv() on its
      way to rds_tcp_data_ready(). The recursion comes from commit
      db6526dc ("RDS: use rds_send_xmit() state instead of
      RDS_LL_SEND_FULL"), which tried to opportunistically finish the
      send request in the same thread context.
      
      But because of the recursive lock hang described above with RDS TCP,
      the send work from rds_send_pong() needs to be deferred to a worker
      to avoid the lockup. Given that RDS ping is more of a connectivity
      test than a performance-critical path, this should be fine even for
      transports like IB.
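
      A minimal sketch of the deferral (queue_delayed_work() and the rds_wq
      workqueue are real; the c_send_w work-item name is an assumption, not
      quoted from the patch):

       /* Defer the transmit to the connection's send worker instead of
        * calling rds_send_xmit() inline, so the pong path never takes a
        * sock lock that the rx path may already hold.
        */
       queue_delayed_work(rds_wq, &conn->c_send_w, 1);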
      Reported-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      Acked-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 06 Oct 2015 (3 commits)
  6. 26 Aug 2015 (4 commits)
  7. 08 Aug 2015 (1 commit)
  8. 09 Apr 2015 (1 commit)
  9. 03 Mar 2015 (1 commit)
  10. 11 Dec 2014 (1 commit)
  11. 10 Dec 2014 (1 commit)
    • put iov_iter into msghdr · c0371da6
      Committed by Al Viro
      Note that the code _using_ ->msg_iter at that point will be very
      unhappy with anything other than unshifted iovec-backed iov_iter.
      We still need to convert users to proper primitives.
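
      For an in-kernel sender, filling the new field looks roughly like
      this (a hedged sketch; buf and len are placeholders):

       struct iovec iov = { .iov_base = buf, .iov_len = len };
       struct msghdr msg = { .msg_flags = MSG_DONTWAIT };

       /* The old ->msg_iov/->msg_iovlen pair is replaced: the payload is
        * now described by the iov_iter embedded in the msghdr. */
       iov_iter_init(&msg.msg_iter, WRITE, &iov, 1, len);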
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  12. 24 Nov 2014 (1 commit)
  13. 04 Oct 2014 (1 commit)
    • net/rds: fix possible double free on sock tear down · 593cbb3e
      Committed by Herton R. Krzesinski
      I got a report of a double free happening in the RDS slab cache. One
      suspicion was that maybe somewhere we were doing a sock_hold/sock_put
      on an already-freed sock. So I provided a kernel with the
      following change:
      
       static inline void sock_hold(struct sock *sk)
       {
      -       atomic_inc(&sk->sk_refcnt);
      +       if (!atomic_inc_not_zero(&sk->sk_refcnt))
      +               WARN(1, "Trying to hold sock already gone: %p (family: %hd)\n",
      +                       sk, sk->sk_family);
       }
      
      The warning successfully triggered:
      
      Trying to hold sock already gone: ffff81f6dda61280 (family: 21)
      WARNING: at include/net/sock.h:350 sock_hold()
      Call Trace:
      <IRQ>  [<ffffffff8adac135>] :rds:rds_send_remove_from_sock+0xf0/0x21b
      [<ffffffff8adad35c>] :rds:rds_send_drop_acked+0xbf/0xcf
      [<ffffffff8addf546>] :rds_rdma:rds_ib_recv_tasklet_fn+0x256/0x2dc
      [<ffffffff8009899a>] tasklet_action+0x8f/0x12b
      [<ffffffff800125a2>] __do_softirq+0x89/0x133
      [<ffffffff8005f30c>] call_softirq+0x1c/0x28
      [<ffffffff8006e644>] do_softirq+0x2c/0x7d
      [<ffffffff8006e4d4>] do_IRQ+0xee/0xf7
      [<ffffffff8005e625>] ret_from_intr+0x0/0xa
      <EOI>
      
      Looking at the call chain above, the only way I think this would be
      possible is if somewhere we already released the same socket->sock
      which is assigned to the rds_message at rds_send_remove_from_sock,
      which seems possible only after the tear down done in rds_release.

      rds_release properly calls rds_send_drop_to to drop the socket from
      any rds_message, and proper synchronization is in place to avoid
      races with rds_send_drop_acked/rds_send_remove_from_sock. However, I
      still see a very narrow window where we may touch an already-released
      sock: when rds_release races with rds_send_drop_acked, we check
      RDS_MSG_ON_CONN to avoid cleanup on the same rds_message, but in this
      specific case we don't clear rm->m_rs. We can then go on in
      rds_send_drop_to and, after it returns, the sock is freed by the last
      sock_put in rds_release while we are concurrently inside
      rds_send_remove_from_sock; at some point in its loop,
      rds_send_remove_from_sock then processes an rds_message whose
      rm->m_rs was never unset for the freed sock, and a sock_hold on a
      sock already gone in rds_release happens.
      
      This hopefully addresses the condition described above and avoids a
      double free on the "second last" sock_put. In addition, I removed the
      comment about socket destruction on top of rds_send_drop_acked: we
      call rds_send_drop_to in rds_release and things should be properly
      serialized there, so I can't see that comment being accurate.
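
      A minimal sketch of the idea (not the verbatim patch; the m_rs_lock
      spinlock name is an assumption): clear the message's back-pointer
      under a lock, so a concurrent rds_send_remove_from_sock() can never
      sock_hold() a sock that rds_release has already put:

       spin_lock_irqsave(&rm->m_rs_lock, flags);
       if (rm->m_rs == rs)
               rm->m_rs = NULL;  /* sever the sock link before it is freed */
       spin_unlock_irqrestore(&rm->m_rs_lock, flags);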
      Signed-off-by: Herton R. Krzesinski <herton@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 18 Apr 2014 (1 commit)
  15. 19 Jan 2014 (1 commit)
  16. 10 Oct 2012 (1 commit)
    • RDS: fix rds-ping spinlock recursion · 5175a5e7
      Committed by jeff.liu
      This is the revised patch for fixing rds-ping spinlock recursion
      according to Venkat's suggestions.
      
      The RDS ping/pong over TCP feature has been broken for years (2.6.39
      to 3.6.0), since we have to set TCP cork and call kernel_sendmsg()
      between ping and pong, both of which need to lock "struct sock *sk".
      However, this lock is already held before the rds_tcp_data_ready()
      callback is triggered. As a result, we always face spinlock
      recursion, which results in a system panic.
      
      Given that RDS ping is only used to test the connectivity and not for
      serious performance measurements, we can queue the pong transmit to
      rds_wq as a delayed response.
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      CC: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
      CC: David S. Miller <davem@davemloft.net>
      CC: James Morris <james.l.morris@oracle.com>
      Signed-off-by: Jie Liu <jeff.liu@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 21 Mar 2012 (1 commit)
  18. 01 Nov 2011 (2 commits)
  19. 17 Jun 2011 (1 commit)
  20. 31 Mar 2011 (1 commit)
  21. 31 Oct 2010 (1 commit)
    • RDS: Let rds_message_alloc_sgs() return NULL · d139ff09
      Committed by Andy Grover
      Even with the previous fix, we are still reading the iovecs once to
      determine the SGs needed, and then again later on. Preallocating
      space for sg lists as part of rds_message seemed like a good idea,
      but it might be better not to do this. While working to redo that
      code, this patch attempts to protect against userspace rewriting the
      rds_iovec array between the first and second accesses.

      The consequence of this would be either a too-small or too-large sg
      list array. Too large is not an issue. This patch changes all callers
      of message_alloc_sgs to handle running out of preallocated sgs and
      fail gracefully.
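
      A caller-side sketch of the graceful failure (the op_sg/nr_pages
      names are assumptions for illustration):

       op->op_sg = rds_message_alloc_sgs(rm, nr_pages);
       if (!op->op_sg)
               return -ENOMEM;  /* preallocated sg entries exhausted, e.g.
                                 * because userspace rewrote the iovec
                                 * between our two reads of it */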
      Signed-off-by: Andy Grover <andy.grover@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 21 Oct 2010 (1 commit)
  23. 09 Sep 2010 (11 commits)
    • RDS: Implement masked atomic operations · 20c72bd5
      Committed by Andy Grover
      Add two CMSGs for masked versions of cswp and fadd. The args struct
      is modified to use a union for the different atomic op types'
      arguments. Change IB to do masked atomic ops. The atomic op type in
      rds_message is similarly unionized.
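
      The unionized arguments look roughly like this (a sketch of the
      shape; the masked-variant field names m_cswp/m_fadd and the mask
      members are assumptions):

       struct rds_atomic_args {
               rds_rdma_cookie_t cookie;
               uint64_t local_addr;
               uint64_t remote_addr;
               union {
                       struct { uint64_t compare, swap; } cswp;
                       struct { uint64_t add; } fadd;
                       struct { uint64_t compare, swap,
                                compare_mask, swap_mask; } m_cswp;
                       struct { uint64_t add, nocarry_mask; } m_fadd;
               };
               uint64_t flags;
               uint64_t user_token;
       };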
      Signed-off-by: Andy Grover <andy.grover@oracle.com>
    • rds: fix rds_send_xmit() serialization · 0f4b1c7e
      Committed by Zach Brown
      rds_send_xmit() was changed to hold an interrupt masking spinlock instead of a
      mutex so that it could be called from the IB receive tasklet path.  This broke
      the TCP transport because its xmit method can block and masks and unmasks
      interrupts.
      
      This patch serializes callers to rds_send_xmit() with a simple bit instead of
      the current spinlock or previous mutex.  This enables rds_send_xmit() to be
      called from any context and to call functions which block.  Getting rid of the
      c_send_lock exposes the bare c_lock acquisitions which are changed to block
      interrupts.
      
      A waitqueue is added so that rds_conn_shutdown() can wait for callers to leave
      rds_send_xmit() before tearing down partial send state.  This lets us get rid
      of c_senders.
      
      rds_send_xmit() is changed to check the conn state after acquiring the
      RDS_IN_XMIT bit to resolve races with the shutdown path.  Previously both
      worked with the conn state and then the lock in the same order, allowing them
      to race and execute the paths concurrently.
      
      rds_send_reset() isn't racing with rds_send_xmit() now that rds_conn_shutdown()
      properly ensures that rds_send_xmit() can't start once the conn state has been
      changed.  We can remove its previous use of the spinlock.
      
      Finally, c_send_generation is redundant.  Callers can race to test the c_flags
      bit by simply retrying instead of racing to test the c_send_generation atomic.
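
      The serialization primitive itself is small enough to sketch
      (RDS_IN_XMIT and c_flags are named above; the c_waitq name and the
      barrier choice are assumptions):

       static int acquire_in_xmit(struct rds_connection *conn)
       {
               return test_and_set_bit(RDS_IN_XMIT, &conn->c_flags) == 0;
       }

       static void release_in_xmit(struct rds_connection *conn)
       {
               clear_bit(RDS_IN_XMIT, &conn->c_flags);
               smp_mb__after_clear_bit();  /* order the clear vs. the check */
               if (waitqueue_active(&conn->c_waitq))
                       wake_up_all(&conn->c_waitq);
       }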
      Signed-off-by: Zach Brown <zach.brown@oracle.com>
    • rds: remove unused rds_send_acked_before() · 671202f3
      Committed by Zach Brown
      rds_send_acked_before() wasn't blocking interrupts when acquiring
      c_lock from user context, but nothing calls it.  Rather than fix its
      use of c_lock, we just remove the function.
      Signed-off-by: Zach Brown <zach.brown@oracle.com>
    • RDS: introduce rds_conn_connect_if_down() · f3c6808d
      Committed by Zach Brown
      A few paths had the same block of code to queue a connection's
      connect work if it was in the right state.  Let's move this into a
      helper function.
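
      A sketch of the helper (the RDS_CONN_DOWN state check and the
      RDS_RECONNECT_PENDING flag are assumptions about the surrounding
      code):

       void rds_conn_connect_if_down(struct rds_connection *conn)
       {
               /* queue connect work only if the conn is down and no
                * reconnect is already pending */
               if (rds_conn_state(conn) == RDS_CONN_DOWN &&
                   !test_and_set_bit(RDS_RECONNECT_PENDING, &conn->c_flags))
                       queue_delayed_work(rds_wq, &conn->c_conn_w, 0);
       }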
      Signed-off-by: Zach Brown <zach.brown@oracle.com>
    • rds: Fix reference counting on the for xmit_atomic and xmit_rdma · 1cc2228c
      Committed by Chris Mason
      This makes sure we have the proper number of references in
      rds_ib_xmit_atomic and rds_ib_xmit_rdma.  We also consistently
      drop references the same way for all message types as the IOs end.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • rds: Fix RDMA message reference counting · c9e65383
      Committed by Chris Mason
      The RDS send_xmit code was trying to get fancy with message counting
      and was dropping the final reference on the RDMA messages too early.
      This resulted in memory corruption and oopsen.

      The fix here is to always add a ref as the parts of the message pass
      through rds_send_xmit, and always drop a ref as the parts of the
      message go through completion handling.
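
      Sketched with RDS's existing message refcount helpers (the exact
      hook points are assumptions):

       /* in rds_send_xmit(): take a reference for each rdma/atomic part
        * handed to the transport */
       rds_message_addref(rm);

       /* in the transport's completion handler: drop that reference once
        * the part's IO has ended */
       rds_message_put(rm);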
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • rds: don't let RDS shutdown a connection while senders are present · 7e3f2952
      Committed by Chris Mason
      This is the first in a long line of patches that tries to fix races
      between RDS connection shutdown and RDS traffic.
      
      Here we are maintaining a count of active senders to make sure
      the connection doesn't go away while they are using it.
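
      A sketch of the scheme (the shutdown-side wait loop is an
      assumption; c_senders is the counter that the serialization commit
      above later removes):

       /* the send path brackets its work with an active-sender count */
       atomic_inc(&conn->c_senders);
       /* ... transmit work on the connection ... */
       atomic_dec(&conn->c_senders);

       /* shutdown must not tear the connection down under a sender */
       while (atomic_read(&conn->c_senders))
               schedule_timeout_uninterruptible(1);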
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • RDS: Update comments in rds_send_xmit() · ce47f52f
      Committed by Andy Grover
      Update comments to reflect the changes in the previous commit.
      
      Keeping as separate commits due to different authorship.
      Signed-off-by: Andy Grover <andy.grover@oracle.com>
    • RDS: Use a generation counter to avoid rds_send_xmit loop · 9e29db0e
      Committed by Chris Mason
      rds_send_xmit is required to loop around after it releases the lock
      because someone else could have done a trylock, found someone working
      on the list, and backed off.
      
      But, once we drop our lock, it is possible that someone else does come
      in and make progress on the list.  We should detect this and not loop
      around if another process is actually working on the list.
      
      This patch adds a generation counter that is bumped every time we
      get the lock and do some send work.  If the retry notices someone else
      has bumped the generation counter, it does not need to loop around and
      continue working.
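
      Sketched (c_send_generation is the counter name used by the
      serialization commit above that later removes it; the retry shape is
      an assumption):

       int gen;

       /* bump the generation whenever we get the lock and do send work */
       gen = atomic_inc_return(&conn->c_send_generation);

       /* ... send work, then drop the lock ... */

       /* retry path: if someone else bumped the generation meanwhile,
        * they are making progress on the list; no need to loop around */
       if (atomic_read(&conn->c_send_generation) != gen)
               goto out;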
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      Signed-off-by: Andy Grover <andy.grover@oracle.com>
    • RDS: Get pong working again · acfcd4d4
      Committed by Andy Grover
      Call send_xmit() directly from pong().

      Set pongs as op_active.
      Signed-off-by: Andy Grover <andy.grover@oracle.com>
    • RDS: Remove send_quota from send_xmit() · fcc5450c
      Committed by Andy Grover
      The purpose of the send quota was really to give fairness
      when different connections were all using the same
      workq thread to send backlogged msgs -- they could only send
      so many before another connection could make progress.
      
      Now that each connection is pushing the backlog from its
      completion handler, they are all guaranteed to make progress
      and the quota isn't needed any longer.
      
      A thread *will* have to send all previously queued data, as well
      as any further msgs placed on the queue while c_send_lock
      was held. In a pathological case a single process can get
      roped into doing this for long periods while other threads
      get off free. But, since it can only do this until the transport
      reports full, this is a bounded scenario.
      Signed-off-by: Andy Grover <andy.grover@oracle.com>