1. 01 October 2015, 1 commit
    • RDS: Use per-bucket rw lock for bind hash-table · 9b9acde7
      Committed by Santosh Shilimkar
      A single global lock protecting a hash table with 1024 buckets
      isn't efficient, and it shows up on massive systems with truckloads
      of RDS sockets serving multiple databases. The perf data clearly
      highlights the contention on this rw lock under such workloads.
      
      When the contention gets bad enough, the code reaches a state where
      it decides to back off on the lock. So, with interrupts disabled, it
      sits and backs off on this lock acquisition. This makes the system
      sluggish and eventually all sorts of bad things happen.
      
      The simple fix is to move the lock into the hash bucket and use a
      per-bucket lock to improve scalability (see the sketch after this
      entry).
      Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
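      A minimal sketch of the per-bucket scheme, in kernel-style C. The
      identifiers below (bind_hash_bucket, hash_to_bucket, chain_lookup)
      are illustrative assumptions, not the exact names from net/rds/bind.c:

      #include <linux/spinlock.h>
      #include <linux/hash.h>
      #include <linux/list.h>

      #define BIND_HASH_SIZE 1024

      struct rds_sock;        /* from net/rds/rds.h */

      struct bind_hash_bucket {
              rwlock_t          lock;  /* protects only this bucket's chain */
              struct hlist_head chain;
      };

      static struct bind_hash_bucket bind_hash_table[BIND_HASH_SIZE];

      /* 2^10 == BIND_HASH_SIZE, so hash_32() indexes the table directly. */
      static struct bind_hash_bucket *hash_to_bucket(__be32 addr, __be16 port)
      {
              return &bind_hash_table[hash_32((__force u32)addr ^
                                              (__force u32)port, 10)];
      }

      static struct rds_sock *bind_lookup(__be32 addr, __be16 port)
      {
              struct bind_hash_bucket *b = hash_to_bucket(addr, port);
              struct rds_sock *rs;
              unsigned long flags;

              /* Readers on different buckets no longer contend with each
               * other, nor with writers touching other buckets. */
              read_lock_irqsave(&b->lock, flags);
              rs = chain_lookup(b, addr, port); /* hypothetical chain walk */
              read_unlock_irqrestore(&b->lock, flags);
              return rs;
      }

      Each bucket lock still needs rwlock_init() at table setup, and the
      bind/remove paths take write_lock_irqsave() on the same bucket lock,
      so writers only serialize against traffic on their own bucket.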
  2. 26 August 2015, 1 commit
  3. 08 August 2015, 1 commit
  4. 02 June 2015, 1 commit
    • rds: re-entry of rds_ib_xmit/rds_iw_xmit · d655a9fb
      Committed by Wengang Wang
      The BUG_ON at lines 452/453 is triggered in function rds_send_xmit:
      
       441                         while (ret) {
       442                                 tmp = min_t(int, ret, sg->length -
       443                                                       conn->c_xmit_data_off);
       444                                 conn->c_xmit_data_off += tmp;
       445                                 ret -= tmp;
       446                                 if (conn->c_xmit_data_off == sg->length) {
       447                                         conn->c_xmit_data_off = 0;
       448                                         sg++;
       449                                         conn->c_xmit_sg++;
       450                                         if (ret != 0 && conn->c_xmit_sg == rm->data.op_nents)
       451                                                 printk(KERN_ERR "conn %p rm %p sg %p ret %d\n", conn, rm, sg, ret);
       452                                         BUG_ON(ret != 0 &&
       453                                                conn->c_xmit_sg == rm->data.op_nents);
       454                                 }
       455                         }
      
      It is complaining that the total sent length is bigger than what we
      wanted to send.

      rds_ib_xmit() goes wrong on the second and later entries for the
      same rds_message, returning a wrong value.
      
      The sg and off that rds_send_xmit passes to rds_ib_xmit are based on
      scatterlist.offset/length, but rds_ib_xmit operates on
      scatterlist.dma_address/dma_length. When dma_length is larger than
      length there is a problem: on the 2nd and later entries of
      rds_ib_xmit for the same rds_message, at least one of the following
      two is wrong:

      1) the scatterlist entry to start with; the chosen one can be far
         beyond the correct one.
      2) the offset to start with within that scatterlist entry.
      
      Fix: add op_dmasg and op_dmaoff to the rm_data_op structure,
      indicating the scatterlist entry and the offset within it at which
      rds_ib_xmit should start. op_dmasg and op_dmaoff are initialized to
      zero when the message is DMA-mapped for the first time, and are
      advanced while filling send slots (see the sketch after this entry).

      The same applies to rds_iw_xmit too.
      Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
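      A minimal sketch of the cursor this fix introduces. The field names
      op_dmasg and op_dmaoff come from the patch; the surrounding send-slot
      code (scat, send, RDS_FRAG_SIZE) is simplified for illustration:

      struct rm_data_op {
              /* ... existing fields ... */
              unsigned int op_dmasg;   /* sg entry to resume from           */
              unsigned int op_dmaoff;  /* DMA byte offset within that entry */
      };

      /* When the message is DMA-mapped for the first time: */
      rm->data.op_dmasg = 0;
      rm->data.op_dmaoff = 0;

      /* When filling a send slot, walk in DMA terms (sg_dma_len), not in
       * sg->length terms, and persist the position in the message: */
      scat = &rm->data.op_sg[rm->data.op_dmasg];
      len = min_t(u32, RDS_FRAG_SIZE,
                  sg_dma_len(scat) - rm->data.op_dmaoff);
      send->s_sge[1].addr   = sg_dma_address(scat) + rm->data.op_dmaoff;
      send->s_sge[1].length = len;

      rm->data.op_dmaoff += len;
      if (rm->data.op_dmaoff == sg_dma_len(scat)) {
              rm->data.op_dmaoff = 0;
              rm->data.op_dmasg++;
      }

      Because the cursor lives in the rds_message itself instead of being
      recomputed from c_xmit_sg/c_xmit_data_off (which count sg->length
      bytes), a re-entry of rds_ib_xmit for the same rds_message resumes
      from the correct DMA position even when dma_length differs from
      length.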
  5. 01 June 2015, 2 commits
  6. 19 May 2015, 1 commit
  7. 09 April 2015, 1 commit
  8. 03 March 2015, 1 commit
  9. 24 November 2014, 2 commits
  10. 20 October 2013, 1 commit
  11. 20 March 2012, 1 commit
  12. 01 November 2011, 1 commit
  13. 20 January 2011, 1 commit
  14. 21 October 2010, 1 commit
  15. 09 September 2010, 24 commits