1. 26 Jan 2014, 1 commit
    • libceph: add ceph_kv{malloc,free}() and switch to them · eeb0bed5
      Committed by Ilya Dryomov
      Encapsulate kmalloc vs vmalloc memory allocation and freeing logic into
      two helpers, ceph_kvmalloc() and ceph_kvfree(), and switch to them.
      
      ceph_kvmalloc() uses kmalloc() for allocations of up to 8 pages;
      anything bigger is vmalloc()'ed with __GFP_HIGHMEM set (see the sketch
      after this entry).  This changes the existing behaviour:
      
      - for buffers (ceph_buffer_new()), from trying to kmalloc() everything
        and using vmalloc() just as a fallback
      
      - for messages (ceph_msg_new()), from going to vmalloc() for anything
        bigger than a page
      
      - for messages (ceph_msg_new()), from preventing vmalloc() from using
        high memory
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
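
      Taken purely from the description above, the two helpers amount to
      roughly the following sketch.  It is a reconstruction, not the verbatim
      commit: the fall-back to vmalloc() when an under-8-page kmalloc() fails,
      and the use of PAGE_ALLOC_COSTLY_ORDER (order 3, i.e. 8 pages) as the
      threshold, are assumptions consistent with the stated behaviour.

      #include <linux/mm.h>
      #include <linux/slab.h>
      #include <linux/vmalloc.h>

      void *ceph_kvmalloc(size_t size, gfp_t flags)
      {
              /* Up to 8 pages: try kmalloc() first, without the allocation-
               * failure warning, since we have a fallback. */
              if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
                      void *ptr = kmalloc(size, flags | __GFP_NOWARN);

                      if (ptr)
                              return ptr;
              }

              /* Anything bigger is vmalloc()'ed, and may use high memory. */
              return __vmalloc(size, flags | __GFP_HIGHMEM, PAGE_KERNEL);
      }

      void ceph_kvfree(const void *ptr)
      {
              /* Free with whichever allocator handed the memory out. */
              if (is_vmalloc_addr(ptr))
                      vfree(ptr);
              else
                      kfree(ptr);
      }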
  2. 14 Jan 2014, 1 commit
  3. 01 Jan 2014, 1 commit
  4. 02 May 2013, 29 commits
  5. 14 Feb 2013, 1 commit
  6. 03 Oct 2012, 1 commit
  7. 31 Jul 2012, 4 commits
    • libceph: clean up con flags · 4a861692
      Committed by Sage Weil
      Rename the flags with a CON_FLAG prefix, move the definitions into the
      .c file, and better document their meaning (see the sketch after this
      entry).
      Signed-off-by: Sage Weil <sage@inktank.com>
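
      Going by the commit message, the cleaned-up bits plausibly look like the
      sketch below.  The CON_FLAG prefix is from the commit; the particular
      flag names and comments are illustrative assumptions, not a copy of the
      file.

      /* connection flag bits, now private to the .c file (names illustrative) */
      #define CON_FLAG_LOSSYTX            0  /* we can close the channel or
                                              * drop messages on errors */
      #define CON_FLAG_KEEPALIVE_PENDING  1  /* we need to send a keepalive */
      #define CON_FLAG_WRITE_PENDING      2  /* we have data ready to send */
      #define CON_FLAG_SOCK_CLOSED        3  /* socket state changed to closed */
      #define CON_FLAG_BACKOFF            4  /* need to retry queuing delayed
                                              * work */

      These are bit numbers rather than masks, for use with set_bit(),
      test_bit(), and clear_bit() on the connection's flags word.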
    • libceph: replace connection state bits with states · 8dacc7da
      Committed by Sage Weil
      Use a simple set of 6 enumerated values (CON_STATE_*) for the socket
      states instead of the state bits.  All of the con->state checks are now
      under the protection of the con mutex, so this is safe.  It also
      simplifies many of the state checks, because we can check for anything
      other than the expected state instead of testing individual bits for
      each race we can think of (see the sketch after this entry).
      
      This appears to hold up well to stress testing both with and without socket
      failure injection on the server side.
      Signed-off-by: Sage Weil <sage@inktank.com>
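
      A sketch of what a six-value state set could look like.  The
      CON_STATE_* prefix and the count of six are from the commit message; the
      specific states and the transition comments are assumptions about a
      typical connect/negotiate/standby life cycle.

      /* ceph_connection state values (illustrative) */
      #define CON_STATE_CLOSED        1  /* -> PREOPEN */
      #define CON_STATE_PREOPEN       2  /* -> CONNECTING, CLOSED */
      #define CON_STATE_CONNECTING    3  /* -> NEGOTIATING, CLOSED */
      #define CON_STATE_NEGOTIATING   4  /* -> OPEN, STANDBY, CLOSED */
      #define CON_STATE_OPEN          5  /* -> STANDBY, CLOSED */
      #define CON_STATE_STANDBY       6  /* -> PREOPEN, CLOSED */

      With a single value in con->state, a handler running under the con
      mutex can reject every race with one comparison, e.g.
      "if (con->state != CON_STATE_CONNECTING) return;", instead of testing
      an open-ended collection of state bits.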
    • libceph: prevent the race of incoming work during teardown · a2a32584
      Committed by Guanjun He
      Add an atomic 'stopping' flag to struct ceph_messenger, set it to 1 in
      ceph_destroy_client(), and test it in ceph_data_ready(); if it is set,
      just return (see the sketch after this entry).
      Signed-off-by: Guanjun He <gjhe@suse.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
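
      A minimal sketch of that guard, assuming the 3.x-era two-argument
      sk_data_ready callback signature; the structure layout and the elided
      queueing code are placeholders.

      struct ceph_messenger {
              /* ... existing fields ... */
              atomic_t stopping;      /* nonzero once client teardown begins */
      };

      static void ceph_data_ready(struct sock *sk, int count_unused)
      {
              struct ceph_connection *con = sk->sk_user_data;

              /* Teardown has begun: ignore any incoming work. */
              if (atomic_read(&con->msgr->stopping))
                      return;

              /* ... otherwise queue the read work as before ... */
      }

      In ceph_destroy_client(), before the messenger is torn down:

              atomic_set(&client->msgr.stopping, 1);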
    • libceph: fix messenger retry · a16cb1f7
      Committed by Sage Weil
      In ancient times, the messenger could both initiate and accept connections.
      An artifact of that was data structures to store/process an incoming
      ceph_msg_connect request and send an outgoing ceph_msg_connect_reply.
      Sadly, the negotiation code was referencing those structures and ignoring
      important information (like the peer's connect_seq) from the correct ones.
      
      Among other things, this fixes tight reconnect loops where the server sends
      RETRY_SESSION and we (the client) retry with the same connect_seq as
      last time.  This bug is pretty easily triggered by injecting socket
      failures on the MDS and running some fs workload like
      workunits/direct_io/test_sync_io (see the sketch after this entry).
      Signed-off-by: Sage Weil <sage@inktank.com>
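
      The RETRY_SESSION leg of the fix plausibly reduces to taking the peer's
      sequence number from the reply that was actually received, rather than
      from the stale incoming-connection structures.  The field and helper
      names below (in_reply, prepare_write_connect()) are assumptions for
      illustration.

      /* in process_connect(), under the con mutex */
      case CEPH_MSGR_TAG_RETRY_SESSION:
              /*
               * The peer's connect_seq is ahead of ours: adopt the value from
               * the reply we received (in_reply), not from the unused
               * incoming-request structure, and reconnect with it.
               */
              con->connect_seq = le32_to_cpu(con->in_reply.connect_seq);
              prepare_write_connect(con);
              break;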
  8. 18 Jul 2012, 1 commit
    • libceph: fix messenger retry · 5bdca4e0
      Committed by Sage Weil
      In ancient times, the messenger could both initiate and accept connections.
      An artifact of that was data structures to store/process an incoming
      ceph_msg_connect request and send an outgoing ceph_msg_connect_reply.
      Sadly, the negotiation code was referencing those structures and ignoring
      important information (like the peer's connect_seq) from the correct ones.
      
      Among other things, this fixes tight reconnect loops where the server sends
      RETRY_SESSION and we (the client) retry with the same connect_seq as
      last time.  This bug is pretty easily triggered by injecting socket
      failures on the MDS and running some fs workload like
      workunits/direct_io/test_sync_io.
      Signed-off-by: Sage Weil <sage@inktank.com>
  9. 06 Jul 2012, 1 commit