- 05 Jan 2018, 1 commit
-
-
Submitted by Mike Snitzer

This DM multipath NVMe bio-based support requires CONFIG_NVME_MULTIPATH to not be set. In the future hopefully NVMe multipath and DM multipath can co-exist more seamlessly. But as is, if CONFIG_NVME_MULTIPATH=Y then all the individual NVMe paths will remain hidden to upper layers and as such DM multipath will not be able to manage them. NVMe's native multipathing doesn't multipath namespaces across subsystems, though; so technically a user _could_ use CONFIG_NVME_MULTIPATH=Y and also use DM multipath to multipath across subsystems.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 03 Jan 2018, 1 commit
-
-
Submitted by Mike Snitzer

Moving the dm_bio_restore() to process_queued_bios() avoids doing that work in multipath_end_io_bio().

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 20 Dec 2017, 5 commits
-
-
Submitted by Mike Snitzer

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Mike Snitzer

All underlying members are initialized directly so the memset() calls are not needed. Also, initialize mpio->nr_bytes from the start since it never changes.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Mike Snitzer

A 'struct dm_bio_details *' isn't ever needed.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Mike Snitzer

Upper-level bio-based drivers that stack immediately on top of NVMe can leverage direct_make_request(). In addition, DM's NVMe bio-based support will initially only ever have one NVMe device that it submits IO to at a time, so no splitting is needed. Enhance DM core so that DM_TYPE_NVME_BIO_BASED's IO submission takes advantage of both of these characteristics.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
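A minimal sketch of the submission dispatch this enables, against the 4.15-era block API; the helper and its call site are illustrative, not the actual DM core code:

```c
#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Illustrative dispatch: a DM_TYPE_NVME_BIO_BASED clone maps to exactly one
 * underlying NVMe device and never needs splitting, so it can bypass the
 * re-entrant queuing in generic_make_request() and go straight to the
 * underlying queue via direct_make_request().
 */
static void example_submit_clone(struct bio *clone, bool nvme_bio_based)
{
	if (nvme_bio_based)
		direct_make_request(clone);	/* single device, no splitting */
	else
		generic_make_request(clone);	/* normal stacked-device path */
}
```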
-
Submitted by Mike Snitzer

If dm_table_determine_type() establishes DM_TYPE_NVME_BIO_BASED then none of the devices in the DM table support partial completions. Also, the table has a single immutable target that doesn't require DM core to split bios. This will enable adding NVMe optimizations to bio-based DM.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 18 Dec 2017, 1 commit
-
-
Submitted by Mike Snitzer

There is no apparent need to call generic_start_io_acct() until just before the IO is ready for submission. start_io_acct() is the proper place to do this accounting -- it is also where DM accounts for pending IO and, if enabled, starts dm-stats accounting. Replace start_io_acct()'s part_round_stats() with generic_start_io_acct(). This eliminates the need to take part_stat_lock() multiple times when starting an IO on bio-based devices.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 17 Dec 2017, 5 commits
-
-
Submitted by Mike Snitzer

'struct dm_io' already has the same pointer, so update all accesses from ci->md to ci->io->md.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Mike Snitzer

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Mike Snitzer

Eliminates the need for a separate mempool to allocate 'struct dm_io' objects from. As such, it saves an extra mempool allocation for each original bio that DM core is issued. This complicates the per-bio-data accessor functions by needing to conditionally add extra padding to get to a target's per-bio-data. But in the end this provides a decent performance improvement for all bio-based DM devices. On an NVMe-loop based testbed to a ramdisk (~3100 MB/s): bio-based DM linear performance improved by 2% (went from 2665 to 2777 MB/s).

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
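A sketch of the conditional-padding accessor this describes, assuming layouts like the ones below; the 'example_' structures and the inside_dm_io flag model the idea rather than quote the actual DM headers:

```c
#include <linux/bio.h>
#include <linux/kernel.h>

struct example_dm_io;

/* Per-clone state; the embedded 'struct bio' must stay the last member. */
struct example_target_io {
	struct example_dm_io *io;
	bool inside_dm_io;		/* true only for the clone embedded in dm_io */
	struct bio clone;
};

/* Front-end state for one original bio; 'tio' must stay the last member. */
struct example_dm_io {
	atomic_t io_count;
	struct bio *orig_bio;
	struct example_target_io tio;	/* first clone lives here, no extra mempool */
};

/*
 * A target's per-bio-data sits in memory just in front of the containing
 * structure.  How far in front depends on whether this clone is the one
 * embedded in dm_io, hence the conditional extra offset.
 */
static inline void *example_per_bio_data(struct bio *bio, size_t data_size)
{
	struct example_target_io *tio =
		container_of(bio, struct example_target_io, clone);

	if (!tio->inside_dm_io)
		return (char *)bio - offsetof(struct example_target_io, clone) - data_size;

	return (char *)bio - offsetof(struct example_target_io, clone)
			   - offsetof(struct example_dm_io, tio) - data_size;
}
```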
-
Submitted by Mike Snitzer

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Mike Snitzer

These CRUD comments have worn out their welcome. The code is what it is; over time it'll hopefully get better. But these comments serve no purpose whatsoever.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 14 Dec 2017, 15 commits
-
-
Submitted by Mike Snitzer

Set the DAX queue flag from the table's capabilities, rather than having DAX support be unique by setting it based on table type in dm_setup_md_queue().

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
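A sketch of what this amounts to, assuming (as I read it) that the DAX flag is now derived from table capabilities at the point where other queue restrictions are applied; the helper below is illustrative and uses the 4.15-era queue-flag API:

```c
#include <linux/blkdev.h>

/* Illustrative: set or clear QUEUE_FLAG_DAX from table capabilities. */
static void example_apply_dax_restriction(struct request_queue *q,
					  bool table_supports_dax)
{
	if (table_supports_dax)
		queue_flag_set_unlocked(QUEUE_FLAG_DAX, q);
	else
		queue_flag_clear_unlocked(QUEUE_FLAG_DAX, q);
}
```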
-
Submitted by Mike Snitzer

__send_changing_extent_only() must follow the same pattern that was established with commit "dm: ensure bio submission follows a depth-first tree walk". That is: submit the first bio up to the split boundary and then split the remainder for further submissions.

Suggested-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Mike Snitzer

alloc_multiple_bios() assumes it can allocate the requested number of bios, but until now there was no guarantee that the mempools would be accommodating.

Suggested-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Mike Snitzer

Now that all of DM has been revised and/or verified to no longer require the use of BIOSET_NEED_RESCUER, the dm_offload code may be removed.

Suggested-by: NeilBrown <neilb@suse.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Mike Snitzer

DM targets can request that multiple bios be sent to them by DM core (see: num_{flush,discard,write_same,write_zeroes}_bios). But until now these bios were allocated in an unsafe manner that could potentially exhaust the DM device's bioset -- in the face of multiple threads each trying to do multiple allocations from the same DM device's bioset. Fix __send_duplicate_bios() by using the new alloc_multiple_bios(). The allocation strategy used by alloc_multiple_bios() models that used by dm-crypt.c:crypt_alloc_buffer(). Neil Brown initially proposed this fix, but the implementation has been revised enough that it is inappropriate to attribute the entirety of it to him.

Suggested-by: NeilBrown <neilb@suse.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
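A sketch of the allocation strategy described (modeled on crypt_alloc_buffer()): a lock-free GFP_NOWAIT pass first and, if the bioset could not satisfy every request, a retry under a mutex with GFP_NOIO so only one thread at a time leans on the mempool reserve. The function and lock names are assumptions, not the actual DM core code:

```c
#include <linux/bio.h>
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_alloc_lock);

/*
 * Allocate 'num_bios' bios from 'bs', or none at all; never block while
 * holding partial results from the lock-free pass.
 */
static int example_alloc_multiple_bios(struct bio_list *blist, struct bio_set *bs,
				       unsigned int num_bios)
{
	struct bio *bio;
	int try;

	for (try = 0; try < 2; try++) {
		unsigned int nr;

		if (try)
			mutex_lock(&example_alloc_lock);	/* serialize the blocking pass */
		for (nr = 0; nr < num_bios; nr++) {
			bio = bio_alloc_bioset(try ? GFP_NOIO : GFP_NOWAIT, 0, bs);
			if (!bio)
				break;
			bio_list_add(blist, bio);
		}
		if (try)
			mutex_unlock(&example_alloc_lock);
		if (nr == num_bios)
			return 0;

		/* Partial success: give everything back before retrying or failing. */
		while ((bio = bio_list_pop(blist)))
			bio_put(bio);
	}
	return -ENOMEM;
}
```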
-
Submitted by NeilBrown

No DM target provides num_write_bios and none has since dm-cache's brief use in 2013. Having the possibility of num_write_bios > 1 complicates bio allocation. So remove the interface and assume there is only one bio needed. If a target ever needs more, it must provide a suitable bioset and allocate itself based on its particular needs.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by NeilBrown

A dm device can, in general, represent a tree of targets, each of which handles a sub-range of the range of blocks handled by the parent. The bio sequencing managed by generic_make_request() requires that bios are generated and handled in a depth-first manner. Each call to a make_request_fn() may submit bios to a single member device, and may submit bios for a reduced region of the same device as the make_request_fn. In particular, any bios submitted to member devices must be expected to be processed in order, so a later one must never wait for an earlier one. This ordering is usually achieved by using bio_split() to reduce a bio to a size that can be completely handled by one target, and resubmitting the remainder to the originating device. bio_queue_split() shows the canonical approach.

dm doesn't follow this approach, largely because it has needed to split bios since long before bio_split() was available. It currently can submit bios to separate targets within the one dm_make_request() call. Dependencies between these targets, as can happen with dm-snap, can cause deadlocks if either bio gets stuck behind the other in the queues managed by generic_make_request(). This requires the 'rescue' functionality provided by dm_offload_{start,end}.

Some of this requirement can be removed by changing the order of bio submission to follow the canonical approach. That is, if dm finds that it needs to split a bio, the remainder should be sent to generic_make_request() rather than being handled immediately. This delays the handling until the first part is completely processed, so the deadlock problems do not occur.

__split_and_process_bio() can be called both from dm_make_request() and from dm_wq_work(). When called from dm_wq_work() the current approach is perfectly satisfactory as each bio will be processed immediately. When called from dm_make_request(), current->bio_list will be non-NULL, and in this case it is best to create a separate "clone" bio for the remainder.

When we use bio_clone_bioset() to split off the front part of a bio and chain the two together and submit the remainder to generic_make_request(), it is important that the newly allocated bio is used as the head to be processed immediately, and the original bio gets "bio_advance()"d and sent to generic_make_request() as the remainder. Otherwise, if the newly allocated bio is used as the remainder, and if it then needs to be split again, then the next bio_clone_bioset() call will be made while holding a reference to a bio (the result of the first clone) from the same bioset. This can potentially exhaust the bioset mempool and result in a memory allocation deadlock.

Note that there is no race caused by reassigning cio.io->bio after already calling __map_bio(). This bio will only be dereferenced again after dec_pending() has found io->io_count to be zero, and this cannot happen before the dec_pending() call at the end of __split_and_process_bio().

To provide the clone bio when splitting, we use q->bio_split. This was previously being freed by bio-based dm to avoid having excess rescuer threads. As bio_split bio sets no longer create rescuer threads, there is little cost and much gain from restoring the q->bio_split bio set.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
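The canonical depth-first pattern referred to above, roughly as blk_queue_split() does it with the 4.15-era block API; 'max_sectors' stands in for whatever limit a single target can handle completely:

```c
#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Split off the front of 'bio' that one target can handle completely, hand
 * the remainder back to generic_make_request() for later (depth-first), and
 * process only the front piece now.
 */
static struct bio *example_depth_first_split(struct request_queue *q,
					     struct bio *bio, int max_sectors)
{
	if (bio_sectors(bio) > max_sectors) {
		struct bio *split = bio_split(bio, max_sectors, GFP_NOIO, q->bio_split);

		bio_chain(split, bio);		/* parent completes after the split piece */
		generic_make_request(bio);	/* requeue the remainder, handled later */
		bio = split;			/* continue with the front piece only */
	}
	return bio;
}
```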
-
Submitted by NeilBrown

The BIOSET_NEED_RESCUER flag is only needed when a make_request_fn might do two allocations from the one bioset, and the second one could block until the first bio completes. dm_io() is called from make_request_fn() context. The closest it comes to multiple allocations is in chunk_io() in dm-snap-persistent, but there the code uses a separate thread to avoid problems. So BIOSET_NEED_RESCUER is not needed.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
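Concretely, with the bioset API of that era this is the difference of one flag at bioset creation time; a sketch with placeholder pool size and front padding:

```c
#include <linux/bio.h>

static struct bio_set *example_create_bioset(void)
{
	/*
	 * Before this series a rescuer workqueue would have been requested:
	 *     bioset_create(128, 0, BIOSET_NEED_BVECS | BIOSET_NEED_RESCUER);
	 * With no second blocking allocation from the same bioset in
	 * make_request context, the rescuer (and its kthread) is unnecessary.
	 */
	return bioset_create(128, 0, BIOSET_NEED_BVECS);	/* sizes are placeholders */
}
```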
-
Submitted by NeilBrown

The BIOSET_NEED_RESCUER flag is only needed when a make_request_fn might do two allocations from the one bioset, and the second one could block until the first bio completes. dm-crypt does allocate from this bioset inside the dm make_request_fn, but does so using GFP_NOWAIT so that the allocation will not block. So BIOSET_NEED_RESCUER is not needed.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by NeilBrown

Clarify that dm_accept_partial_bio() isn't allowed for REQ_OP_ZONE_RESET bios.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
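The change is to the documentation above dm_accept_partial_bio(); paraphrased (not the verbatim kernel comment), the documented contract is along these lines:

```c
#include <linux/device-mapper.h>

/*
 * Paraphrased contract: a target may call dm_accept_partial_bio() only from
 * its map routine, to tell DM core it will process just the first n_sectors
 * of the bio; DM core then sends the remainder in a follow-up bio.  It is not
 * allowed for REQ_PREFLUSH bios and, per this clarification, not for
 * REQ_OP_ZONE_RESET bios either.
 */
void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors);
```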
-
Submitted by Heinz Mauelshagen

Cleanup; no functional change.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

No need to calculate the reshaping progress because mddev->curr_resync_completed holds it.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

During reshape, 'A' chars were reported in status rather than 'a'.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

In order to avoid partially redoing synchronization/recovery/reshape, the raid set got frozen until after all passed-in table line flags had been cleared. The related table reload sequence had to be followed precisely, or reshaping could lead to data corruption caused by the active mapping carrying on with a reshape when the inactive mapping had already retrieved a stale reshape position.

Harden this by retrieving the actual resync/recovery/reshape position during resume, whilst the active table is suspended, thus avoiding having to keep the raid set frozen altogether. This prevents superfluous redoing of an already resynchronized or recovered segment and, most importantly, the potential for redoing an already reshaped segment, which would cause data corruption.

Fixes: d39f0010 ("dm raid: fix raid_resume() to keep raid set frozen as needed")
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

Verifying the current raid set's redundancy based on retrieved superblock content has to use the superblock's raid level (e.g. raid0), not the constructor-requested one (e.g. raid10). Using the requested raid level of raid10 led to a "divide error" on raid0, which defines the number of data copies to divide by as zero. Also check for a bogus number of data copies.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 08 Dec 2017, 12 commits
-
-
Submitted by Mike Snitzer

Also update Documentation accordingly.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

Move raid_resume()'s setting of 'rw' and 'in_sync' to just prior to mddev_resume(). Also, remove the unused 'bitmap_loaded' member from "struct raid_set". No functional changes.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

Fix various sync state issues causing racy/bogus sync ratio, sync_action and health chars in dm_status() info output. The sync ratio could be N/N (i.e. 100%) shortly after raid set creation, i.e. when creating a new RaidLV or upconverting a linear LV to raid1, thus:

"0 2097152 raid raid1 2 Aa 2097152/2097152 recover 0 0 -"

instead of:

"0 2097152 raid raid1 2 Aa 0/2097152 idle 0 0 -"

The sync action could be non-idle when the MD thread was done with IO. Health chars could be 'A' when they should be 'a' for a short time before a resynchronization started.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

The raid_status() function passes the bool array_in_sync variable around to provide the synchronization state of the MD array. Replace it with a runtime flag. This avoids a pattern of having to pass discrete variables to various functions.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

The MD sync thread updates recovery flags providing the state of any running, idle, frozen, recovering, reshaping, ... activity it performs, and it updates those flags asynchronously with respect to dm processing raid_status(). To close that race window, take a single copy of the flags and pass it into its callees.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
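A sketch of the pattern: snapshot the MD recovery flags once at the top of the status call and pass that value to the helpers, so every decision within one call agrees with the others. The structures below are stand-ins, not dm-raid's actual types:

```c
#include <linux/kernel.h>

/* Stand-in for the MD state the sync thread updates asynchronously. */
struct example_mddev {
	unsigned long recovery;		/* MD_RECOVERY_* style bits */
};

/* Helpers decide purely from the snapshot they are handed. */
static bool example_array_in_sync(unsigned long recovery)
{
	return !(recovery & (1UL << 0));	/* placeholder "sync running" bit */
}

static void example_raid_status(struct example_mddev *mddev)
{
	unsigned long recovery = READ_ONCE(mddev->recovery);	/* one copy, read once */

	if (example_array_in_sync(recovery)) {
		/* report idle sync_action, 'A' health chars, N/N ratio, ... */
	} else {
		/* report in-flight ratio, lowercase health chars, ... */
	}
}
```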
-
Submitted by Heinz Mauelshagen

During a reshape request: if userspace reloads a "raid" table multiple times, resulting in multiple superblock reads, the raid set needs to stay frozen until all config changes (chunk size, layout, data_offset, delta_disks) have been stored in the superblocks and the respective flags cleared.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

Check all component data device sizes against the calculated size and reject the load if any device is too small. Otherwise, MD will fail the operation by accessing beyond the end of the data device. An example use-case is that a growing bitmap won't fit any more, and the MD runtime will report an error, when DM raid should catch this earlier.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
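A sketch of the kind of check this adds, assuming the required per-device data size has already been calculated; the helper and its types are illustrative, not the dm-raid code:

```c
#include <linux/errno.h>

/* Stand-in for one component device's view of its usable data area. */
struct example_component {
	unsigned long long data_sectors;	/* sectors available for data on this leg */
};

/* Reject the table load early instead of letting MD run off the end of a device. */
static int example_check_data_dev_sectors(const struct example_component *devs,
					  unsigned int nr_devs,
					  unsigned long long required_sectors)
{
	unsigned int i;

	for (i = 0; i < nr_devs; i++) {
		if (devs[i].data_sectors < required_sectors)
			return -EINVAL;	/* component data device too small */
	}
	return 0;
}
```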
-
Submitted by Heinz Mauelshagen

The raid set size is being revalidated unconditionally before a reshaping conversion is started. MD requires the size to be reduced only in case of a stripe-removing (i.e. shrinking) reshape, but not when growing, because the raid array has to stay small until after the growing reshape finishes. Fix by avoiding the size revalidation in preresume unless a shrinking reshape is requested.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

Pay attention to existing reshape space when deciding whether a raid set needs resizing. Otherwise we can hit "Can't resize a reshaping raid set" when a reshape is being requested.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

The md raid personalities call md_finish_reshape() at the end of a reshape conversion, which adjusts rdev->sectors. Correct/check rdev->sectors before initiating a reshape and raise the recovery pointer accordingly. Otherwise, the DM raid coordinated reshape will fail.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Submitted by Heinz Mauelshagen

md_stop_writes() is called in raid_presuspend(), causing deadlocks on bios submitted afterwards -- which happens on loaded raid sets with conversion requests. Fix this by moving md_stop_writes() to raid_postsuspend(). NOTE: when the recovery is frozen (MD_RECOVERY_FROZEN), writes haven't been started (or are already stopped), so don't stop them again. Also remove a superfluous readonly setting.

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
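A sketch of the resulting suspend ordering; the flag names are stand-ins, and the MD calls are shown as comments because they only make sense against dm-raid's real mddev state:

```c
#include <linux/types.h>

/* Stand-ins for the bits of raid_set state involved. */
struct example_raid_set {
	bool suspended;
	bool recovery_frozen;	/* mirrors MD_RECOVERY_FROZEN */
};

/*
 * Stop MD writes from postsuspend rather than presuspend, so it runs after
 * DM core has quiesced the device; skip the stop when recovery is frozen,
 * because writes were never started or are already stopped.
 */
static void example_raid_postsuspend(struct example_raid_set *rs)
{
	if (rs->suspended)
		return;
	rs->suspended = true;

	if (!rs->recovery_frozen) {
		/* md_stop_writes(&rs->md); -- quiesce the MD sync thread and writes */
	}
	/* mddev_suspend(&rs->md); -- then suspend the MD personality */
}
```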
-
Submitted by Suren Baghdasaryan

When the system is under memory pressure it is observed that the dm-bufio shrinker often reclaims only one buffer per scan. This change fixes the following two issues in the dm-bufio shrinker that cause this behavior:

1. The ((nr_to_scan - freed) <= retain_target) condition is used to terminate the slab scan process. This assumes that nr_to_scan is equal to the LRU size, which might not be correct because do_shrink_slab() in vmscan.c calculates nr_to_scan using multiple inputs. As a result, when nr_to_scan is less than retain_target (64) the scan will terminate after the first iteration, effectively reclaiming one buffer per scan and making scans very inefficient. This hurts vmscan performance, especially because the mutex is acquired/released every time dm_bufio_shrink_scan() is called. The new implementation uses the ((LRU size - freed) <= retain_target) condition for scan termination. The LRU size can be safely determined inside __scan() because this function is called after dm_bufio_lock().

2. do_shrink_slab() uses the value returned by dm_bufio_shrink_count() to determine the number of freeable objects in the slab. However, dm-bufio always retains retain_target buffers in its LRU and will terminate a scan when this mark is reached. Therefore, returning the entire LRU size from dm_bufio_shrink_count() is misleading because that does not represent the number of freeable objects the slab will reclaim during a scan. Returning (LRU size - retain_target) better represents the number of freeable objects in the slab. This way do_shrink_slab() returns 0 when (LRU size < retain_target) and vmscan will not try to scan this shrinker, avoiding scans that will not reclaim any memory.

Test: tested using an Android device running <AOSP>/system/extras/alloc-stress, which generates memory pressure and causes intensive shrinker scans.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
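A sketch of both fixes expressed as shrinker callbacks: the count callback reports only what a scan could actually free, and the scan loop terminates against the LRU size rather than nr_to_scan. The client structure and locking are modeled on the description above, not copied from dm-bufio:

```c
#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/shrinker.h>

struct example_bufio_client {
	struct shrinker shrinker;
	struct mutex lock;
	unsigned long lru_count;	/* buffers currently on the LRU */
	unsigned long retain_target;	/* buffers the client always keeps (e.g. 64) */
};

static unsigned long example_bufio_shrink_count(struct shrinker *shrink,
						struct shrink_control *sc)
{
	struct example_bufio_client *c =
		container_of(shrink, struct example_bufio_client, shrinker);
	unsigned long count = READ_ONCE(c->lru_count);
	unsigned long retain = READ_ONCE(c->retain_target);

	/* Only buffers above the retain mark are actually freeable. */
	return (count < retain) ? 0 : (count - retain);
}

static unsigned long example_bufio_shrink_scan(struct shrinker *shrink,
					       struct shrink_control *sc)
{
	struct example_bufio_client *c =
		container_of(shrink, struct example_bufio_client, shrinker);
	unsigned long freed = 0, count, nr = sc->nr_to_scan;

	if (!mutex_trylock(&c->lock))
		return SHRINK_STOP;

	count = c->lru_count;	/* safe: LRU size read under the client lock */
	while (nr-- && (count - freed) > c->retain_target) {
		/* evict one buffer from the tail of the LRU here */
		freed++;	/* sketch: assume one buffer reclaimed per step */
	}

	mutex_unlock(&c->lock);
	return freed;
}
```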
-