1. 22 April 2015 (3 commits)
    • crush: straw2 bucket type with an efficient 64-bit crush_ln() · 958a2765
      Authored by Ilya Dryomov
      This is an improved straw bucket that correctly avoids any data movement
      between items A and B when neither A nor B's weights are changed.  Said
      differently, if we adjust the weight of item C (including adding it anew
      or removing it completely), we will only see inputs move to or from C,
      never between other items in the bucket.
      
      Notably, there is no intermediate scaling factor that needs to be
      calculated.  The mapping function is a simple function of the item weights.
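
      As a minimal illustrative sketch (not the actual bucket_straw2_choose()
      in net/ceph/crush/mapper.c; the struct fields are assumed from the crush
      headers), the per-item draw looks roughly like this:

          /*
           * Sketch only: each item gets an independent draw computed from a
           * hash and its own weight; the item with the largest draw wins.
           * Changing one item's weight therefore only moves inputs to or
           * from that item.
           */
          static int straw2_choose_sketch(struct crush_bucket_straw2 *bucket,
                                          int x, int r)
          {
              __s64 draw, high_draw = 0;
              int i, high = 0;

              for (i = 0; i < bucket->h.size; i++) {
                  __u32 w = bucket->item_weights[i];

                  if (w) {
                      /* 16-bit hash of (input, item, round) */
                      __u64 u = crush_hash32_3(bucket->h.hash, x,
                                               bucket->h.items[i], r) & 0xffff;
                      /* fixed-point log of the hash, shifted to be negative */
                      __s64 ln = crush_ln(u) - 0x1000000000000ll;
                      /* dividing the negative ln by a larger weight yields a
                       * value closer to zero, so heavier items get larger draws */
                      draw = div64_s64(ln, w);
                  } else {
                      draw = S64_MIN;   /* zero-weight items never win */
                  }

                  if (i == 0 || draw > high_draw) {
                      high = i;
                      high_draw = draw;
                  }
              }
              return bucket->h.items[high];
          }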
      
      The commits below were squashed into this one (mostly to avoid adding
      and then yanking ~6000 lines' worth of crush_ln_table):
      
      - crush: add a straw2 bucket type
      - crush: add crush_ln to calculate natural log efficiently
      - crush: improve straw2 adjustment slightly
      - crush: change crush_ln to provide 32 more digits
      - crush: fix crush_get_bucket_item_weight and bucket destroy for straw2
      - crush/mapper: fix divide-by-0 in straw2
        (with div64_s64() for draw = ln / w and INT64_MIN -> S64_MIN - need
         to create a proper compat.h in ceph.git)
      
      Reflects ceph.git commits 242293c908e923d474910f2b8203fa3b41eb5a53,
                                32a1ead92efcd351822d22a5fc37d159c65c1338,
                                6289912418c4a3597a11778bcf29ed5415117ad9,
                                35fcb04e2945717cf5cfe150b9fa89cb3d2303a1,
                                6445d9ee7290938de1e4ee9563912a6ab6d8ee5f,
                                b5921d55d16796e12d66ad2c4add7305f9ce2353.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • crush: ensuring at most num-rep osds are selected · 45002267
      Authored by Ilya Dryomov
      CRUSH temporary buffers are allocated according to the replica count
      configured by the user.  When the rule ends up selecting more final osds
      than there are replicas, the buffer overflows and causes a crash.  Now at
      most num-rep osds are selected, even if the rule would allow more.
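
      The shape of the fix, as a sketch (an assumption for illustration; the
      real change is in the crush_do_rule()/crush_choose_firstn() path of
      net/ceph/crush/mapper.c):

          /*
           * Sketch only: never choose more OSDs than the caller's result
           * buffer (sized for num-rep replicas) can hold, even if the rule
           * step asks for more.
           */
          if (numrep > result_max)
              numrep = result_max;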
      
      Reflects ceph.git commits 6b4d1aa99718e3b367496326c1e64551330fabc0,
                                234b066ba04976783d15ff2abc3e81b6cc06fb10.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • crush: drop unnecessary include from mapper.c · 9be6df21
      Authored by Ilya Dryomov
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
  2. 20 April 2015 (3 commits)
  3. 08 April 2015 (1 commit)
    • Revert "libceph: use memalloc flags for net IO" · 6d7fdb0a
      Authored by Ilya Dryomov
      This reverts commit 89baaa57.
      
      Dirty page throttling should be sufficient for us in the general case
      so there is no need to use __GFP_MEMALLOC - it would be needed only in
      the swap-over-rbd case, which we currently don't support.  (It would
      probably take approximately the commit that is being reverted to add
      that support, but we would also need the "swap" option to distinguish
      from the general case and make sure swap ceph_client-s aren't shared
      with anything else.)  See ceph-devel threads [1] and [2] for the
      details of why enabling pfmemalloc reserves for all cases is a bad
      thing.
      
      On top of potential system lockups related to drained emergency
      reserves, this turned out to cause ceph lockups when peers are on the
      same host and communicating over loopback: sk_filter() drops pfmemalloc
      skbs on the receiving side because the receiving loopback socket is not
      tagged with SOCK_MEMALLOC.
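
      The drop happens in the core networking receive path; paraphrased (not a
      verbatim quote of net/core/filter.c):

          /*
           * Paraphrased sk_filter() check: an skb allocated from pfmemalloc
           * reserves is only admitted to sockets that are themselves tagged
           * SOCK_MEMALLOC, so a plain loopback socket silently drops it.
           */
          if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
              return -ENOMEM;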
      
      [1] "SOCK_MEMALLOC vs loopback"
          http://www.spinics.net/lists/ceph-devel/msg22998.html
      [2] "[PATCH] libceph: don't set memalloc flags in loopback case"
          http://www.spinics.net/lists/ceph-devel/msg23392.html
      
      Conflicts:
      	net/ceph/messenger.c [ context: tcp_nodelay option ]
      
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Sage Weil <sage@redhat.com>
      Cc: stable@vger.kernel.org # 3.18+, needs backporting
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Acked-by: Mike Christie <michaelc@cs.wisc.edu>
      Acked-by: Mel Gorman <mgorman@suse.de>
  4. 19 February 2015 (5 commits)
  5. 12 February 2015 (1 commit)
  6. 09 January 2015 (1 commit)
  7. 18 December 2014 (7 commits)
  8. 14 November 2014 (4 commits)
    • libceph: change from BUG to WARN for __remove_osd() asserts · cc9f1f51
      Authored by Ilya Dryomov
      No reason to use BUG_ON for osd request list assertions.
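
      The change amounts to switching the assertion macro, roughly:

          /* before: a failed assertion panics the machine */
          BUG_ON(!list_empty(&osd->o_requests));
          BUG_ON(!list_empty(&osd->o_linger_requests));

          /* after: log a warning with a backtrace and keep going */
          WARN_ON(!list_empty(&osd->o_requests));
          WARN_ON(!list_empty(&osd->o_linger_requests));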
      Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
      Reviewed-by: Alex Elder <elder@linaro.org>
    • libceph: clear r_req_lru_item in __unregister_linger_request() · ba9d114e
      Authored by Ilya Dryomov
      kick_requests() can put linger requests on the notarget list.  This
      means we need to clear the much-overloaded req->r_req_lru_item in
      __unregister_linger_request() as well, or we get an assertion failure
      in ceph_osdc_release_request() - !list_empty(&req->r_req_lru_item).
      
      AFAICT the assumption was that registered linger requests cannot be on
      any of req->r_req_lru_item lists, but that's clearly not the case.
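
      A sketch of the resulting cleanup (the exact hunk is assumed; field names
      follow the libceph sources):

          /*
           * Sketch only: __unregister_linger_request() also unlinks the
           * request from whatever osdc list it may sit on via r_req_lru_item
           * (e.g. req_notarget after kick_requests()).
           */
          list_del_init(&req->r_linger_item);
          list_del_init(&req->r_req_lru_item);  /* may be on the notarget list */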
      Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
      Reviewed-by: Alex Elder <elder@linaro.org>
    • libceph: unlink from o_linger_requests when clearing r_osd · a390de02
      Authored by Ilya Dryomov
      Requests have to be unlinked from both osd->o_requests (normal
      requests) and osd->o_linger_requests (linger requests) lists when
      clearing req->r_osd.  Otherwise __unregister_linger_request() gets
      confused and we trip over a !list_empty(&osd->o_linger_requests)
      assert in __remove_osd().
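
      The shape of the fix, sketched under the assumption that the unlink
      happens where req->r_osd is cleared (field names are taken from the
      libceph sources of that era and may be approximate):

          /*
           * Sketch only: when dropping req->r_osd, take the request off both
           * per-osd lists, so __remove_osd()'s
           * !list_empty(&osd->o_linger_requests) assertion cannot trip later.
           */
          if (req->r_osd) {
              list_del_init(&req->r_osd_item);        /* osd->o_requests */
              list_del_init(&req->r_linger_osd_item); /* osd->o_linger_requests */
              req->r_osd = NULL;
          }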
      
      MON=1 OSD=1:
      
          # cat remove-osd.sh
          #!/bin/bash
          rbd create --size 1 test
          DEV=$(rbd map test)
          ceph osd out 0
          sleep 3
          rbd map dne/dne # obtain a new osdmap as a side effect
          rbd unmap $DEV & # will block
          sleep 3
          ceph osd in 0
      Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
      Reviewed-by: Alex Elder <elder@linaro.org>
    • libceph: do not crash on large auth tickets · aaef3170
      Authored by Ilya Dryomov
      Large auth tickets (greater than 32k, i.e. above the
      PAGE_ALLOC_COSTLY_ORDER limit) will have their buffers vmalloc'ed, which
      leads to the following crash in the crypto code:
      
      [   28.685082] BUG: unable to handle kernel paging request at ffffeb04000032c0
      [   28.686032] IP: [<ffffffff81392b42>] scatterwalk_pagedone+0x22/0x80
      [   28.686032] PGD 0
      [   28.688088] Oops: 0000 [#1] PREEMPT SMP
      [   28.688088] Modules linked in:
      [   28.688088] CPU: 0 PID: 878 Comm: kworker/0:2 Not tainted 3.17.0-vm+ #305
      [   28.688088] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
      [   28.688088] Workqueue: ceph-msgr con_work
      [   28.688088] task: ffff88011a7f9030 ti: ffff8800d903c000 task.ti: ffff8800d903c000
      [   28.688088] RIP: 0010:[<ffffffff81392b42>]  [<ffffffff81392b42>] scatterwalk_pagedone+0x22/0x80
      [   28.688088] RSP: 0018:ffff8800d903f688  EFLAGS: 00010286
      [   28.688088] RAX: ffffeb04000032c0 RBX: ffff8800d903f718 RCX: ffffeb04000032c0
      [   28.688088] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8800d903f750
      [   28.688088] RBP: ffff8800d903f688 R08: 00000000000007de R09: ffff8800d903f880
      [   28.688088] R10: 18df467c72d6257b R11: 0000000000000000 R12: 0000000000000010
      [   28.688088] R13: ffff8800d903f750 R14: ffff8800d903f8a0 R15: 0000000000000000
      [   28.688088] FS:  00007f50a41c7700(0000) GS:ffff88011fc00000(0000) knlGS:0000000000000000
      [   28.688088] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [   28.688088] CR2: ffffeb04000032c0 CR3: 00000000da3f3000 CR4: 00000000000006b0
      [   28.688088] Stack:
      [   28.688088]  ffff8800d903f698 ffffffff81392ca8 ffff8800d903f6e8 ffffffff81395d32
      [   28.688088]  ffff8800dac96000 ffff880000000000 ffff8800d903f980 ffff880119b7e020
      [   28.688088]  ffff880119b7e010 0000000000000000 0000000000000010 0000000000000010
      [   28.688088] Call Trace:
      [   28.688088]  [<ffffffff81392ca8>] scatterwalk_done+0x38/0x40
      [   28.688088]  [<ffffffff81392ca8>] scatterwalk_done+0x38/0x40
      [   28.688088]  [<ffffffff81395d32>] blkcipher_walk_done+0x182/0x220
      [   28.688088]  [<ffffffff813990bf>] crypto_cbc_encrypt+0x15f/0x180
      [   28.688088]  [<ffffffff81399780>] ? crypto_aes_set_key+0x30/0x30
      [   28.688088]  [<ffffffff8156c40c>] ceph_aes_encrypt2+0x29c/0x2e0
      [   28.688088]  [<ffffffff8156d2a3>] ceph_encrypt2+0x93/0xb0
      [   28.688088]  [<ffffffff8156d7da>] ceph_x_encrypt+0x4a/0x60
      [   28.688088]  [<ffffffff8155b39d>] ? ceph_buffer_new+0x5d/0xf0
      [   28.688088]  [<ffffffff8156e837>] ceph_x_build_authorizer.isra.6+0x297/0x360
      [   28.688088]  [<ffffffff8112089b>] ? kmem_cache_alloc_trace+0x11b/0x1c0
      [   28.688088]  [<ffffffff8156b496>] ? ceph_auth_create_authorizer+0x36/0x80
      [   28.688088]  [<ffffffff8156ed83>] ceph_x_create_authorizer+0x63/0xd0
      [   28.688088]  [<ffffffff8156b4b4>] ceph_auth_create_authorizer+0x54/0x80
      [   28.688088]  [<ffffffff8155f7c0>] get_authorizer+0x80/0xd0
      [   28.688088]  [<ffffffff81555a8b>] prepare_write_connect+0x18b/0x2b0
      [   28.688088]  [<ffffffff81559289>] try_read+0x1e59/0x1f10
      
      This is because we set up crypto scatterlists as if all buffers were
      kmalloc'ed.  Fix it.
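
      A minimal sketch of the distinction (an assumed helper for illustration,
      not the exact fix; the real change adds a scatterlist setup helper in
      net/ceph/crypto.c):

          /*
           * Sketch only: a vmalloc'ed buffer is not physically contiguous, so
           * its pages must be looked up with vmalloc_to_page() rather than
           * virt_to_page() before being added to a crypto scatterlist.
           */
          static struct page *crypto_buf_to_page_sketch(const void *buf)
          {
              if (is_vmalloc_addr(buf))
                  return vmalloc_to_page(buf);
              return virt_to_page(buf);
          }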
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
      Reviewed-by: Sage Weil <sage@redhat.com>
  9. 01 November 2014 (1 commit)
  10. 30 October 2014 (1 commit)
    • libceph: use memalloc flags for net IO · 89baaa57
      Authored by Mike Christie
      This patch has ceph's lib code use the memalloc flags.
      
      If the VM layer needs to write data out to free up memory to handle new
      allocation requests, the block layer must be able to make forward progress.
      To handle that requirement we use structs like mempools to reserve memory for
      objects like bios and requests.
      
      The problem is that when we send/receive block layer requests over the
      network, skb allocations can fail and the system can lock up.  To solve
      this, the memalloc-related flags were added.  NBD, iSCSI and NFS use
      these flags to tell the network/VM layer that it should use memory
      reserves to fulfill allocation requests for structs like skbs.
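
      Roughly, the flags are used like this (a sketch, not the exact libceph
      diff; the messenger's connect and worker paths are the natural spots):

          /*
           * Sketch only: tag the socket so its allocations may dip into the
           * emergency reserves, and run the network I/O with PF_MEMALLOC set.
           */
          sk_set_memalloc(sock->sk);            /* marks the socket SOCK_MEMALLOC */

          unsigned long pflags = current->flags;
          current->flags |= PF_MEMALLOC;        /* allocations here may use reserves */
          /* ... send/receive ceph messages ... */
          tsk_restore_flags(current, pflags, PF_MEMALLOC);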
      
      I am running ceph in a bunch of VMs on my laptop, so this patch has not
      been tested very harshly.
      Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
      Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
  11. 15 October 2014 (10 commits)
  12. 17 September 2014 (1 commit)
  13. 11 September 2014 (2 commits)