- 03 Jul 2012, 1 commit
-
-
By NeilBrown
Having the 'name' arg optional and defaulting to the current personality name is not necessary and leads to errors: when changing the level of an array we can end up using the name of the old level instead of the new one. So make it non-optional and always explicitly pass the name of the level that the array will be.
Reported-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
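A hedged illustration of what the change implies for callers. The helper name md_register_thread and the "raid1" string are assumptions inferred from the description, not taken from this log:

```c
/* Before, passing NULL let the thread name default to the *current*
 * personality - wrong across a level change. Now the caller must
 * always name the thread for the level the array will become. */
mddev->thread = md_register_thread(raid1d, mddev, "raid1");
```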
-
- 31 May 2012, 1 commit
-
-
By NeilBrown
The new merge_bvec_fn, which calls the corresponding function in subsidiary devices, requires that mddev->merge_check_needed be set if any child has a merge_bvec_fn. However we were only setting that when a device was hot-added, not when a device was present from the start. This bug was introduced in 3.4 so the patch is suitable for 3.4.y kernels. However there are conflicts in raid10.c so a separate patch will be needed for 3.4.y.
Cc: stable@vger.kernel.org
Reported-by: Sebastian Riemer <sebastian.riemer@profitbricks.com>
Signed-off-by: NeilBrown <neilb@suse.de>
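A minimal sketch of the missing initial-assembly case, assuming the field names (merge_check_needed, merge_bvec_fn) from the description:

```c
/* at array start-up, mirror what the hot-add path already does */
static int run(struct mddev *mddev)
{
	struct md_rdev *rdev;

	rdev_for_each(rdev, mddev) {
		struct request_queue *q = bdev_get_queue(rdev->bdev);

		/* if any member has a merge_bvec_fn, raid1's own
		 * merge_bvec_fn must be consulted on every request */
		if (q->merge_bvec_fn)
			mddev->merge_check_needed = 1;
	}
	/* ... rest of setup ... */
	return 0;
}
```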
-
- 22 May 2012, 3 commits
-
-
By Jonathan Brassow
A RAID1 device does not necessarily need a fullsync if the bitmap can be used instead. Similar to commit d6b212f4 in raid5.c, if a raid1 device can be brought back (i.e. from a transient failure) it shouldn't need a complete resync. Provided the bitmap is not too old, it will have recorded the areas of the disk that need recovery.
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
Now that bitmaps can be resized, we can allow an array to be resized while the bitmap is present. This only covers resizing that involves changing the effective size of member devices, not resizing that changes the number of devices.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By majianpeng
When attempting to fix a read error, it is acceptable to read from a device that is recovering, provided the recovery has got past the place we are reading from. This makes the test for "can we read from here" the same as the test in read_balance.
Signed-off-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
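A sketch of the relaxed test, with flag and field names (In_sync, Faulty, recovery_offset) assumed to match read_balance's logic as described:

```c
/* a recovering device may serve this read only if recovery has
 * already passed the region we want, mirroring read_balance */
if (rdev &&
    (test_bit(In_sync, &rdev->flags) ||
     (!test_bit(Faulty, &rdev->flags) &&
      rdev->recovery_offset >= sect + s))) {
	/* safe to read sectors [sect, sect + s) from rdev */
}
```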
-
- 21 May 2012, 1 commit
-
-
By NeilBrown
When reshaping we can avoid a costly intermediate backup by changing the 'start' address of the array on the device (if there is enough room). So as a first step, allow such a change to be requested through sysfs, and recorded in v1.x metadata. (As we didn't previously check that all 'pad' fields were zero, we need a new FEATURE flag for this. We (belatedly) check that all remaining 'pad' fields are zero to avoid a repeat of this.)
The new data offset must be requested separately for each device. This allows each to have a different change in the data offset. This is not likely to be used often, but as data_offset can be set per-device, new_data_offset should be too.
This patch also removes the 'acknowledged' arg to rdev_set_badblocks as it is never used and never will be. At the same time we add a new arg ('in_new') which is currently always zero but will be used more soon.
When a reshape finishes we will need to update the data_offset and rdev->sectors. So provide an exported function to do that.
Signed-off-by: NeilBrown <neilb@suse.de>
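A hedged sketch of the exported finish-reshape helper the last paragraph describes; the name and the exact arithmetic are assumptions based on the description:

```c
/* called by the personality when a reshape completes: fold the
 * offset change into data_offset and adjust the usable size */
void md_finish_reshape(struct mddev *mddev)
{
	struct md_rdev *rdev;

	rdev_for_each(rdev, mddev) {
		if (rdev->data_offset > rdev->new_data_offset)
			rdev->sectors += rdev->data_offset - rdev->new_data_offset;
		else
			rdev->sectors -= rdev->new_data_offset - rdev->data_offset;
		rdev->data_offset = rdev->new_data_offset;
	}
}
```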
-
- 12 Apr 2012, 1 commit
-
-
By majianpeng
If r1bio->sectors % 8 != 0, then the memcmp and a later memcpy will omit the last bio_vec. This is suitable for any stable kernel since 3.1, when bad-block management was introduced.
Cc: stable@vger.kernel.org
Signed-off-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
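The fix presumably rounds the page count up rather than down; a minimal sketch:

```c
/* '>>' truncates, dropping a trailing partial page whenever
 * r1_bio->sectors is not a multiple of PAGE_SIZE/512 (8 for 4K pages) */
int vcnt = r1_bio->sectors >> (PAGE_SHIFT - 9);            /* buggy  */
int vcnt_fixed = DIV_ROUND_UP(r1_bio->sectors, PAGE_SIZE >> 9);
```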
-
- 03 Apr 2012, 2 commits
-
-
By NeilBrown
When comparing two pages read from different legs of a mirror, only compare the bytes that were read, not the whole page. In most cases we read a whole page, but in some cases with bad blocks or odd-sized devices we might read fewer than that. This bug has been present "forever" but at worst it might cause a report of too many mismatches and generate a little bit of extra resync IO, so there is no need to back-port to -stable kernels.
Reported-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
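A sketch of the byte-limited comparison; the loop shape and the page-array names (ppages, spages) are assumptions:

```c
/* clamp each per-page comparison to the bytes actually read */
int sectors = r1_bio->sectors;

for (j = 0; j < vcnt; j++) {
	int len = min_t(int, sectors << 9, PAGE_SIZE);

	if (memcmp(page_address(ppages[j]),
		   page_address(spages[j]), len))
		break;
	sectors -= len >> 9;
}
```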
-
By majianpeng
Because rdev->nr_pending > 0, we cannot remove this disk. And in any case, we aren't holding rcu_read_lock().
Signed-off-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
-
- 02 Apr 2012, 1 commit
-
-
By majianpeng
Signed-off-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
-
- 19 Mar 2012, 3 commits
-
-
By NeilBrown
Currently we don't honour merge_bvec_fn in member devices, so if there is one, we force all requests to be single-page at most. This is not ideal. So create a raid1 merge_bvec_fn to check that function in children as well.
This introduces a small problem. There is no locking around calls to ->merge_bvec_fn and subsequent calls to ->make_request, so a device added between these could end up getting a request which violates its merge_bvec_fn. Currently the best we can do is synchronize_sched(). This will work provided no preemption happens. If there is preemption, we just have to hope that new devices are largely consistent with old devices.
Signed-off-by: NeilBrown <neilb@suse.de>
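A simplified sketch of such a merge_bvec_fn. It omits per-device data_offset adjustment and other details, and the struct/field names are assumptions:

```c
static int raid1_mergeable_bvec(struct request_queue *q,
				struct bvec_merge_data *bvm,
				struct bio_vec *biovec)
{
	struct mddev *mddev = q->queuedata;
	struct r1conf *conf = mddev->private;
	int max = biovec->bv_len;
	int disk;

	rcu_read_lock();
	for (disk = 0; disk < conf->raid_disks; disk++) {
		struct md_rdev *rdev =
			rcu_dereference(conf->mirrors[disk].rdev);

		if (rdev && !test_bit(Faulty, &rdev->flags)) {
			struct request_queue *dq =
				bdev_get_queue(rdev->bdev);

			/* every child gets a veto on the merge */
			if (dq->merge_bvec_fn)
				max = min(max,
					  dq->merge_bvec_fn(dq, bvm, biovec));
		}
	}
	rcu_read_unlock();
	return max;
}
```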
-
By NeilBrown
md.h has an 'rdev_for_each()' macro for iterating the rdevs in an mddev. However it uses the 'safe' version of list_for_each_entry, and so requires the extra variable, but doesn't include 'safe' in the name, which would be useful documentation. Consequently some places use this safe version without needing it, and many use an explicit list_for_each_entry. So:
- rename rdev_for_each to rdev_for_each_safe
- create a new rdev_for_each which uses the plain list_for_each_entry
- use the 'safe' version only where needed, and convert all other list_for_each_entry calls to use rdev_for_each.
Signed-off-by: NeilBrown <neilb@suse.de>
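A sketch of the resulting pair of iterators (compare md.h; the list head and member names are assumed):

```c
/* plain iterator - the common case, no removal while iterating */
#define rdev_for_each(rdev, mddev)				\
	list_for_each_entry(rdev, &((mddev)->disks), same_set)

/* 'safe' iterator - needs the extra 'tmp' cursor, but tolerates
 * removal of the current entry inside the loop body */
#define rdev_for_each_safe(rdev, tmp, mddev)			\
	list_for_each_entry_safe(rdev, tmp, &((mddev)->disks), same_set)
```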
-
By NeilBrown
If RAID1 or RAID10 is used under LVM or some other stacking block device, it is possible to enter a deadlock during resync or recovery. This can happen if the upper-level block device creates two requests to the RAID1 or RAID10. The first request gets processed, blocks recovery, and queues requests for the underlying devices in current->bio_list. A resync request then starts, which will wait for those requests and block new IO.
But then the second request to the RAID1/10 will be attempted and it cannot progress until the resync request completes, which cannot progress until the underlying device requests complete, which are on a queue behind that second request.
So allow that second request to proceed even though there is a resync request about to start. This is suitable for any -stable kernel.
Cc: stable@vger.kernel.org
Reported-by: Ray Morris <support@bettercgi.com>
Tested-by: Ray Morris <support@bettercgi.com>
Signed-off-by: NeilBrown <neilb@suse.de>
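A hedged sketch of the relaxed barrier wait this implies; the condition is reconstructed from the description, and the surrounding macro and locking details are simplified:

```c
/* in wait_barrier(): don't block a request whose submitter already
 * has dependent bios queued in current->bio_list - the resync is
 * waiting on those very bios, so blocking here would deadlock */
wait_event_lock_irq(conf->wait_barrier,
		    !conf->barrier ||
		    (conf->nr_pending &&
		     current->bio_list &&
		     !bio_list_empty(current->bio_list)),
		    conf->resync_lock);
```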
-
- 13 Feb 2012, 1 commit
-
-
By NeilBrown
Since we added 'replacement' capability, RAID1 can have twice as many devices as ->raid_disks indicates. So md_raid1_congested needs to check that many possible devices, not just ->raid_disks many.
Signed-off-by: NeilBrown <neilb@suse.de>
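A minimal sketch of the widened loop (structure and names assumed):

```c
static int md_raid1_congested(struct mddev *mddev, int bits)
{
	struct r1conf *conf = mddev->private;
	int i, ret = 0;

	rcu_read_lock();
	/* replacements live in the second half, so scan both halves */
	for (i = 0; i < conf->raid_disks * 2; i++) {
		struct md_rdev *rdev =
			rcu_dereference(conf->mirrors[i].rdev);

		if (rdev && !test_bit(Faulty, &rdev->flags)) {
			struct request_queue *q =
				bdev_get_queue(rdev->bdev);

			ret |= bdi_congested(&q->backing_dev_info, bits);
		}
	}
	rcu_read_unlock();
	return ret;
}
```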
-
- 11 Jan 2012, 1 commit
-
-
By NeilBrown
We normally try to avoid reading from write-mostly devices, but when we do we really have to check for bad blocks and be sure not to try reading them. With the current code, best_good_sectors might not get set and that causes zero-length read requests to be sent down, which is very confusing. This bug was introduced in commit d2eb35ac, so the patch is suitable for 3.1.x and 3.2.x.
Reported-and-tested-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Reported-and-tested-by: Art -kwaak- van Breemen <ard@telegraafnet.nl>
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: stable@vger.kernel.org
-
- 23 Dec 2011, 8 commits
-
-
By NeilBrown
Now that WantReplacement drives are replaced cleanly, mark a drive as want_replacement when we see a write error. It might get failed soon, in which case the WantReplacement flag is irrelevant, but if the write error is recorded in the bad block log, we still want to activate any spare that might be available.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
When attempting to add a spare to a RAID1 array, also consider adding it as a replacement for a want_replacement device.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
If a Replacement is seen, file it as such. If we see two replacements (or two normal devices) for the one slot, abort.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
When recovery completes, ->spare_active is called. This checks if the replacement is ready and, if so, fails the original.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
Replacement devices are stored at a different offset, so look there too.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
In RAID1, a replacement is much like a normal device, so we just double the size of the relevant arrays and look at all possible devices for reads and writes. This means that the array looks like it is now double the size in some sense - we need to be careful about that. In particular, when checking if the array is still degraded while creating a recovery request, we need to only consider the first 'half' - i.e. the real (non-replacement) devices.
Signed-off-by: NeilBrown <neilb@suse.de>
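A sketch of the indexing convention this implies (names assumed): slot i's replacement sits at raid_disks + i, and degraded accounting must ignore that second half:

```c
/* replacement for mirror i, if any */
struct md_rdev *repl = conf->mirrors[conf->raid_disks + i].rdev;

/* degraded count: only the real (first-half) devices matter */
for (i = 0; i < conf->raid_disks; i++)		/* NOT raid_disks * 2 */
	if (!conf->mirrors[i].rdev ||
	    !test_bit(In_sync, &conf->mirrors[i].rdev->flags))
		degraded++;
```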
-
By NeilBrown
In general mddev->raid_disks can change unexpectedly while conf->raid_disks will only change in a very controlled way. So change some uses of one to the other. The use of mddev->raid_disks will not cause actual problems, but this way is more consistent and safer in the long term.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
Soon an array will be able to have multiple devices with the same raid_disk number (an original and a replacement). So removing a device based on the number won't work. So pass the actual device handle instead.
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
-
- 01 Nov 2011, 1 commit
-
-
By Paul Gortmaker
A pending cleanup will mean that module.h won't be implicitly included everywhere anymore. Make sure the modular drivers in the md dir are actually calling out for <module.h> explicitly in advance.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- 26 Oct 2011, 1 commit
-
-
By NeilBrown
In 3.0 we changed the way recovery_disabled was handled so that instead of testing against zero, we test an mddev-> value against a conf-> value. Two problems:
1/ one place in raid1 was missed and still sets to '1'.
2/ We didn't explicitly set the conf-> value at array creation time. It defaulted to '0' just like the mddev value does, so they could appear equal and thus disable recovery. This did not affect normal 'md' as it calls bind_rdev_to_array, which changes the mddev value. However the dmraid interface doesn't call this and so doesn't change ->recovery_disabled; so at array start all recovery is incorrectly disabled.
So initialise the 'conf' value to one less than the mddev value, so they will only be the same when explicitly set that way.
Reported-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
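The fix itself is a one-liner in the conf setup path; a sketch (exact location assumed):

```c
/* start one below the mddev value: the two only become equal
 * when an error path deliberately copies one into the other */
conf->recovery_disabled = mddev->recovery_disabled - 1;
```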
-
- 24 Oct 2011, 1 commit
-
-
By Tao Ma
bio originally had the functionality to set the completion cpu, but it is broken. Christoph said that "This code is unused, and from all the discussions lately pretty obviously broken. The only thing keeping it serves is creating more confusion and possibly more bugs." And Jens replied with "We can kill bio_set_completion_cpu(). I'm fine with leaving cpu control to the request based drivers, they are the only ones that can toggle the setting anyway". So this patch removes all the work of controlling the completion cpu from a bio.
Cc: Shaohua Li <shaohua.li@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Tao Ma <boyu.mt@taobao.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 11 Oct 2011, 7 commits
-
-
By NeilBrown
RAID1 and RAID10 handle write requests by queuing them for handling by a separate thread. This is because when a write-intent bitmap is active we might need to update the bitmap first, so it is good to queue a lot of writes, then do one big bitmap update for them all.
However writeback requires devices to appear to be congested after a while so it can make some guesstimate of throughput. The infinite queue defeats that (note that RAID5 already has a finite queue so it doesn't suffer from this problem).
So impose a limit on the number of pending write requests. By default it is 1024, which seems to be generally suitable. Make it configurable via a module option just in case someone finds a regression.
Signed-off-by: NeilBrown <neilb@suse.de>
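A sketch of the tunable as described; the name max_queued_requests fits the description's intent but is an assumption here:

```c
/* cap on writes queued for the raid1d thread before the array
 * reports itself congested to writeback */
static int max_queued_requests = 1024;

module_param(max_queued_requests, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(max_queued_requests,
		 "queued write requests before raid1 signals congestion");
```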
-
By NeilBrown
"mdk" doesn't mean anything any more.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
Having mddev_t and 'struct mddev_s' is ugly and not preferred.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
The typedefs are just annoying. 'mdk' probably refers to 'md_k.h', which used to be an include file that defined this thing.
Signed-off-by: NeilBrown <neilb@suse.de>
-
- 07 Oct 2011, 3 commits
-
-
By NeilBrown
Being able to dynamically enable these makes them much more useful.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By NeilBrown
We know which device we just read from, so we don't need to search the bios to find out. Just use ->read_disk.
Signed-off-by: NeilBrown <neilb@suse.de>
-
By Namhyung Kim
When a normal-write or sync-read/write bio completes, we need to find out which disk number the bio belongs to. Factor that common code out into a separate function.
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
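A sketch of such a helper; the name find_bio_disk and the details are assumptions based on the description:

```c
/* map a completed bio back to its mirror index in the r1bio */
static int find_bio_disk(struct r1bio *r1_bio, struct bio *bio)
{
	struct r1conf *conf = r1_bio->mddev->private;
	int mirror;

	for (mirror = 0; mirror < conf->raid_disks; mirror++)
		if (r1_bio->bios[mirror] == bio)
			return mirror;
	BUG();		/* every completing bio belongs to some mirror */
	return -1;
}
```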
-
- 21 Sep 2011, 1 commit
-
-
By NeilBrown
Two related problems:
1/ some error paths call "md_unregister_thread(mddev->thread)" without subsequently clearing ->thread. A subsequent call to mddev_unlock will try to wake the thread, and crash.
2/ Most calls to md_wakeup_thread are protected against the thread disappearing either by:
- holding the ->mutex
- having an active request, so something else must be keeping the array active.
However mddev_unlock calls md_wakeup_thread after dropping the mutex and without any certainty of an active request, so the ->thread could theoretically disappear. So we need a spinlock to provide some protection.
So change md_unregister_thread to take a pointer to the thread pointer, and ensure that it always does the required locking and clears the pointer properly.
Reported-by: "Moshe Melnikov" <moshe@zadarastorage.com>
Signed-off-by: NeilBrown <neilb@suse.de>
cc: stable@kernel.org
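A sketch of the changed interface; the lock name and teardown details are assumptions:

```c
/* takes the *address* of the thread pointer so it can be cleared
 * under the lock, closing the race with md_wakeup_thread() */
void md_unregister_thread(struct md_thread **threadp)
{
	struct md_thread *thread = *threadp;

	if (!thread)
		return;

	spin_lock(&pers_lock);
	*threadp = NULL;		/* wakers now see no thread */
	spin_unlock(&pers_lock);

	kthread_stop(thread->tsk);
	kfree(thread);
}
```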
-
- 12 Sep 2011, 1 commit
-
-
By Christoph Hellwig
There is very little benefit in allowing a ->make_request instance to update the bio's device and sector and loop around it in __generic_make_request when we can achieve the same by calling generic_make_request from the driver and letting the loop in generic_make_request handle it. Note that various drivers got the return value from ->make_request and returned non-zero values for errors.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: NeilBrown <neilb@suse.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
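A hedged sketch of what the new contract looks like for a stacking driver; choose_lower_bdev and lower_offset are hypothetical stand-ins:

```c
/* before: remap the bio and return non-zero so __generic_make_request
 * loops around it. after: resubmit explicitly and return nothing. */
static void stacking_make_request(struct request_queue *q, struct bio *bio)
{
	bio->bi_bdev = choose_lower_bdev(bio);	/* hypothetical helper */
	bio->bi_sector += lower_offset;		/* hypothetical offset */
	generic_make_request(bio);		/* the core handles the loop */
}
```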
-
- 10 Sep 2011, 1 commit
-
-
By NeilBrown
A single request to RAID1 or RAID10 might result in multiple requests if there are known bad blocks that need to be avoided. To detect if we need to submit another write request we test:
if (sectors_handled < (bio->bi_size >> 9)) {
However this is after we call **_write_done(), so the 'bio' no longer belongs to us - the writes could have completed and the bio been freed. So move the **_write_done call until after the test against bio->bi_size.
This addresses https://bugzilla.kernel.org/show_bug.cgi?id=41862
Reported-by: Bruno Wolff III <bruno@wolff.to>
Tested-by: Bruno Wolff III <bruno@wolff.to>
Signed-off-by: NeilBrown <neilb@suse.de>
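A sketch of the reordering using raid1's names (assumed; raid10 is analogous):

```c
/* read the master bio's size *before* dropping our reference -
 * once _write_done runs, the bio may complete and be freed */
if (sectors_handled < (bio->bi_size >> 9)) {
	r1_bio_write_done(r1_bio);
	/* ... set up another r1_bio for the remaining sectors ... */
	goto retry_write;
}
r1_bio_write_done(r1_bio);
```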
-
- 28 Jul 2011, 1 commit
-
-
By NeilBrown
raid1d is too big, with several deep branches. So separate them out into their own functions.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
-