- 22 May 2012, 21 commits
-
-
Committed by NeilBrown
This number is more generally useful, and bytes-in-last-page is easily extracted from it. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
This new 'struct bitmap_storage' reflects the external storage of the bitmap. Having this clearly defined will make it easier to change the storage used while the array is active. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Most often we have the page number, not the page. And that is what the *_page_attr() functions really want. So change the arguments to take that number. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Instead of allocating pages in read_sb_page, read_page and bitmap_read_sb, allocate them all in bitmap_init_from_disk. Also replace the hack of calling "attach_page_buffers(page, NULL)" to ensure that free_buffer() won't complain, by putting a test for PagePrivate in free_buffer(). Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
An md bitmap comprises two parts: internal counting of active writes per 'chunk', and external storage of whether there are any active writes on each chunk. The second requires the first, but the first doesn't require the second. Not having backing storage means that the bitmap cannot expedite resync after a crash, but it still allows us to expedite the recovery of a recently-removed device. So: allow a bitmap to exist even if there is no backing device. In that case we default to 128M chunks. A particular value of this is that we can remove and re-add a bitmap (possibly of a different granularity) on a degraded array, and not lose the information needed to fast-recover the missing device. We don't actually activate these bitmaps yet - that will come in a later patch. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
If we are to allow bitmaps to be resized when the array is resized, we need to know how much space there is. So create an attribute to store this information and set appropriate defaults. It can be set more precisely via sysfs, or future metadata extensions may allow it to be recorded. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
There are two different 'pending' concepts in the handling of the write intent bitmap. Firstly, a 'page' from the bitmap (which contains PAGE_SIZE*8 bits) may have changes (bits cleared) that should be written in due course. There is no hurry for these; the page will transition from PENDING to NEEDWRITE and will then be written, though if it ever becomes DIRTY it will be written much sooner and PENDING will be cleared. Secondly, a page of counters - which contains PAGE_SIZE/2 counters, one for each bit - can usefully have a 'pending' flag which indicates whether any of the counters are low (2 or 1) and ready to be processed by bitmap_daemon_work(). If this flag is clear we can skip the whole page. These two concepts are currently combined in the bitmap-file flag. This causes a tighter connection between the counters and the bitmap file than I would like - as I want to add some flexibility to the bitmap file. So introduce a new flag for the page of counters, and rewrite bitmap_daemon_work() so that it handles the two different 'pending' concepts separately. This also allows us to clear BITMAP_PAGE_PENDING when we write out a dirty page, which may occasionally reduce the number of times we write a page. Signed-off-by: NeilBrown <neilb@suse.de>
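To make the separation concrete, here is a minimal user-space model of the two flags; the names and sizes are assumptions loosely modelled on the md code, not the kernel source:

    #include <stdbool.h>

    #define COUNTERS_PER_PAGE 2048   /* PAGE_SIZE/2 two-byte counters on x86 */

    struct counter_page {
        unsigned short count[COUNTERS_PER_PAGE];
        bool pending;                /* hint: some counter here is 1 or 2 */
    };

    /* One daemon pass: pages whose hint is clear are skipped wholesale. */
    static void daemon_work(struct counter_page *pages, int npages)
    {
        for (int p = 0; p < npages; p++) {
            if (!pages[p].pending)
                continue;                        /* cheap whole-page skip */
            pages[p].pending = false;
            for (int i = 0; i < COUNTERS_PER_PAGE; i++) {
                if (pages[p].count[i] == 2) {
                    pages[p].count[i] = 1;       /* age it; still pending */
                    pages[p].pending = true;
                } else if (pages[p].count[i] == 1) {
                    pages[p].count[i] = 0;
                    /* clearing the on-disk bit would mark the *file*
                     * page PENDING - a separate flag entirely */
                }
            }
        }
    }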
-
Committed by Shaohua Li
REQ_SYNC is ignored in the current raid5 code, but the block layer does use it to make policy decisions, for example in the I/O scheduler. This patch passes it through. Signed-off-by: Shaohua Li <shli@fusionio.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Shaohua Li
The two variables are useless. Signed-off-by: Shaohua Li <shli@fusionio.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by majianpeng
If the allocation of repl_bio fails, we currently don't free the 'bio' of the same dev. Reported by kmemleak. Signed-off-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by majianpeng
When attempting to fix a read error, it is acceptable to read from a device that is recovering, provided the recovery has got past the place we are reading from. This makes the test for "can we read from here" the same as the test in read_balance. Signed-off-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
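The test described reduces to a small predicate. A hedged sketch with model types (field names loosely follow md's struct md_rdev, but this is not the kernel source):

    struct rdev_model {
        int in_sync;                         /* fully synced member */
        unsigned long long recovery_offset;  /* sectors recovered so far */
    };

    /* Reading is safe from an in-sync device, or from a recovering one
     * whose recovery has already passed the region we want to read. */
    static int can_read_from(const struct rdev_model *rdev,
                             unsigned long long this_sector, int sectors)
    {
        return rdev->in_sync ||
               rdev->recovery_offset >= this_sector + sectors;
    }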
-
Committed by NeilBrown
This ensures that it is always freed - there were cases where we failed to free the page. Reported-by: majianpeng <majianpeng@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
dm-raid currently open-codes the freeing of some members of an rdev. It is more maintainable to have it call common code from md.c which does this for all call-sites. So rename free_disk_sb to md_rdev_clear, export it, and use it in dm-raid.c. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Jim Kukunas
Reorders the functions in raid6_algos, as well as the preference check, to reduce the number of functions tested on initialization. Also creates symmetry between choosing the gen_syndrome functions and choosing the recovery functions. Signed-off-by: Jim Kukunas <james.t.kukunas@linux.intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Jim Kukunas
Test each combination of recovery and syndrome generation functions. Signed-off-by: Jim Kukunas <james.t.kukunas@linux.intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Jim Kukunas
Add SSSE3 optimized recovery functions, as well as a system for selecting the most appropriate recovery functions to use. Originally-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Jim Kukunas <james.t.kukunas@linux.intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Jim Kukunas
<linux/module.h> drags in headers which are not visible to userspace, thus breaking the build for the test program. Signed-off-by: Jim Kukunas <james.t.kukunas@linux.intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Jim Kukunas
Optimize RAID5 xor checksumming by taking advantage of the 256-bit YMM registers introduced with AVX. Signed-off-by: Jim Kukunas <james.t.kukunas@linux.intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Jim Kukunas
With CONFIG_PREEMPT=y, we need to disable preemption while benchmarking RAID5 xor checksumming to ensure we're actually measuring what we think we're measuring. Signed-off-by: Jim Kukunas <james.t.kukunas@linux.intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Jim Kukunas
In the existing do_xor_speed(), there is no guarantee that we actually run do_2() for a full jiffy: we get the current jiffy, then run do_2() until the next jiffy. Instead, let's get the current jiffy, then wait until the next jiffy to start our test. Signed-off-by: Jim Kukunas <james.t.kukunas@linux.intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
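A sketch of the revised timing loop, in kernel style; tmpl, b1, b2 and BENCH_SIZE are assumed stand-ins for the real crypto/xor.c harness, and the preempt_disable() bracket reflects the preceding entry:

    unsigned long start, count = 0;

    preempt_disable();            /* per the preceding entry */
    start = jiffies;
    while (jiffies == start)
        cpu_relax();              /* burn until a fresh tick begins */
    start = jiffies;
    while (jiffies == start) {
        tmpl->do_2(BENCH_SIZE, b1, b2);   /* xor routine under test */
        count++;
    }
    preempt_enable();
    /* count now reflects one full jiffy of work, not a partial one */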
-
Committed by NeilBrown
A 'near' or 'offset' layout RAID10 array can be reshaped to a different 'near' or 'offset' layout, a different chunk size, and a different number of devices. However the number of copies cannot change. Unlike RAID5/6, we do not support having user-space backup data that is being relocated during a 'critical section'. Rather, the data_offset of each device must change so that when writing any block to a new location, it will not over-write any data that is still 'live'. This means that RAID10 reshape is not supportable on v0.90 metadata. The difference between the old data_offset and the new data_offset must be at least the larger of the chunksize multiplied by offset copies of each of the old and new layout (for 'near' mode, offset_copies == 1). A larger difference of around 64M seems useful for in-place reshapes, as more data can be moved between metadata updates. Very large differences (e.g. 512M) seem to slow the process down due to lots of long seeks (on oldish consumer-grade devices at least). Metadata needs to be updated whenever the place we are about to write to is considered - by the current metadata - to still contain data in the old layout. [unbalanced locking fix from Dan Carpenter <dan.carpenter@oracle.com>] Signed-off-by: NeilBrown <neilb@suse.de>
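The minimum required offset change described above could be computed along these lines (a sketch of the stated rule, not the kernel's exact expression):

    /* Sectors of offset change needed before any in-place write is safe. */
    static unsigned long long min_needed_offset_diff(
            unsigned long long old_chunk_sectors, int old_offset_copies,
            unsigned long long new_chunk_sectors, int new_offset_copies)
    {
        unsigned long long need_old = old_chunk_sectors * old_offset_copies;
        unsigned long long need_new = new_chunk_sectors * new_offset_copies;

        return need_old > need_new ? need_old : need_new;
    }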
-
- 21 May 2012, 10 commits
-
-
Committed by NeilBrown
We will soon be interpreting the layout (and chunk size etc.) from multiple places to support reshape, so split it out into a separate function. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
When RAID10 supports reshape it will need a 'previous' and a 'current' geometry, so introduce that here. Use the 'prev' geometry when before the reshape_position, and the current 'geo' when beyond it. At other times, use both as appropriate. For now, both are identical (and reshape_position is never set). When we use the 'prev' geometry, we must use the old data_offset. When we use the current (and a reshape is happening) we must use the new_data_offset. Signed-off-by: NeilBrown <neilb@suse.de>
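A minimal sketch of the selection rule, with simplified model types rather than the real r10conf:

    struct geom_model { int near_copies, far_copies; };

    struct conf_model {
        struct geom_model prev, geo;
        unsigned long long reshape_position;  /* "never set" for now */
    };

    static const struct geom_model *choose_geo(const struct conf_model *conf,
                                               unsigned long long sector)
    {
        if (sector < conf->reshape_position)
            return &conf->prev;   /* pair with the old data_offset */
        return &conf->geo;        /* pair with new_data_offset */
    }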
-
Committed by NeilBrown
Some resync-type operations need to act on the address space of the device, others on the address space of the array. This only affects RAID10, so it sets resync_max_sectors to the array size (it defaults to the device size), and that is currently used for resync only. However reshape of a RAID10 must be done against the array size, not the device size, so change the code to use resync_max_sectors for both the resync and the reshape cases. This does not affect RAID5 or RAID1, just RAID10. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Some code in raid1 and raid10 uses sync_page_io to read/write pages when responding to read errors. As we will shortly support changing data_offset for raid10, this function must understand new_data_offset. So add that understanding. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
We will shortly be adding reshape support for RAID10 which will require it having 2 concurrent geometries (before and after). To make that easier, collect most geometry fields into 'struct geom' and access them from there. Then we will more easily be able to add a second set of fields. Note that 'copies' is not in this struct and so cannot be changed. There is little need to change this number, and doing so is a lot more difficult as it requires reallocating more things. So leave it out for now. Signed-off-by: NeilBrown <neilb@suse.de>
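Roughly the shape of the collected structure (a hedged reconstruction from memory; see drivers/md/raid10.h for the authoritative definition):

    struct geom {
        int      raid_disks;
        int      near_copies;   /* copies laid out adjacently */
        int      far_copies;    /* copies in distant stripes */
        int      far_offset;    /* far copies offset by one stripe */
        sector_t stride;        /* distance between far copies */
        int      chunk_shift;
        sector_t chunk_mask;
    };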
-
Committed by NeilBrown
The important issue here is incorporating the difference in data_offset into calculations concerning when we might need to over-write data that is still thought to be valid. To this end we find the minimum offset difference across all devices and add that where appropriate. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
As there can now be two different data_offsets - an 'old' and a 'new' - we need to carefully choose between them. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
When reshaping we can avoid costly intermediate backup by changing the 'start' address of the array on the device (if there is enough room). So as a first step, allow such a change to be requested through sysfs, and recorded in v1.x metadata. (As we didn't previously check that all 'pad' fields were zero, we need a new FEATURE flag for this. We (belatedly) check that all remaining 'pad' fields are zero to avoid a repeat of this.) The new data offset must be requested separately for each device; this allows each to have a different change in the data offset. This is not likely to be used often, but as data_offset can be set per-device, new_data_offset should be too. This patch also removes the 'acknowledged' arg to rdev_set_badblocks, as it is never used and never will be. At the same time we add a new arg ('in_new') which is currently always zero but will be used more soon. When a reshape finishes we will need to update the data_offset and rdev->sectors, so provide an exported function to do that. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Currently a reshape operation always progresses from the start of the array to the end, unless the number of devices is being reduced, in which case it progresses in the opposite direction. To reverse a partial reshape which changes the number of devices you can stop the array and re-assemble with the raid-disks numbers reversed, and it will undo. However for a reshape that does not change the number of devices it is not possible to reverse the reshape in the middle - you have to wait until it completes. So add a 'reshape_direction' attribute which is either 'forwards' or 'backwards' and can be explicitly set when delta_disks is zero. This will become more important when we allow the data_offset to change in a reshape; then the explicit statement of what direction is being used will be more useful. This can be enabled in raid5 trivially, as it already supports reverse reshape and just needs to use a different trigger to request it. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Shaohua Li
A flush request is usually issued in the transaction commit code path, so using GFP_KERNEL to allocate memory for the flush request bio falls into the classic deadlock issue. This is suitable for any -stable kernel to which it applies, as it avoids a possible deadlock. Cc: stable@vger.kernel.org Signed-off-by: Shaohua Li <shli@fusionio.com> Signed-off-by: NeilBrown <neilb@suse.de>
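The fix pattern, sketched with the bio_alloc() signature of that era; the actual md patch may differ in detail:

    #include <linux/bio.h>

    /* GFP_KERNEL could recurse into reclaim, which may wait on the very
     * commit that is issuing this flush - hence GFP_NOIO. */
    static struct bio *alloc_flush_bio(void)
    {
        return bio_alloc(GFP_NOIO, 0);  /* 0 iovecs: a flush carries no data */
    }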
-
- 19 May 2012, 1 commit
-
-
Committed by NeilBrown
The old code was sector_div(stride, fc); the new code was sector_div(size, conf->near_copies). 'size' is right (the stride variable wasn't really needed), but 'fc' meant 'far_copies', and that is an important difference. Signed-off-by: NeilBrown <neilb@suse.de>
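Reconstructed as a before/after snippet (the surrounding calc_sectors-style context is assumed, not quoted verbatim):

    sector_div(size, conf->near_copies);  /* buggy: wrong copy count */
    sector_div(size, conf->far_copies);   /* fixed: 'fc' meant far_copies */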
-
- 17 May 2012, 2 commits
-
-
Committed by Jonathan Brassow
Use del_timer_sync to remove the timer before mddev_suspend finishes. We don't want a timer going off after mddev_suspend is called; this is especially true with device-mapper, since it can call the destructor function immediately following a suspend. This results in the removal (kfree) of the structures upon which the timer depends - resulting in a very ugly panic. Therefore, we add a del_timer_sync to mddev_suspend to prevent this. Cc: stable@vger.kernel.org Signed-off-by: NeilBrown <neilb@suse.de>
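A sketch of mddev_suspend() with the added call, approximating the 3.4-era function body rather than quoting it:

    void mddev_suspend(struct mddev *mddev)
    {
        mddev->suspended = 1;
        synchronize_rcu();
        wait_event(mddev->sb_wait, atomic_read(&mddev->active_io) == 0);
        mddev->pers->quiesce(mddev, 1);

        /* the added call: no timer may still be running (or about to
         * run) once suspend returns, as dm may then free our structures */
        del_timer_sync(&mddev->safemode_timer);
    }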
-
Committed by NeilBrown
raid10 stores dev_sectors in 'conf' separately from the one in 'mddev' because it can have a very significant effect on block addressing and so needs to be updated carefully. However raid10_resize isn't updating it at all! To update it correctly, we need to make sure it is a proper multiple of the chunk size, taking various details of the layout into account. This calculation is currently done in setup_conf, so split it out from there and call it from raid10_resize as well. Then set conf->dev_sectors properly. Signed-off-by: NeilBrown <neilb@suse.de>
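A simplified model of the split-out calculation; the rounding details of the real calc_sectors() are omitted and the field names are assumptions:

    struct r10conf_model {
        int raid_disks, near_copies, far_copies, copies, chunk_shift;
        unsigned long long dev_sectors;
    };

    static void calc_sectors(struct r10conf_model *conf,
                             unsigned long long size)
    {
        size >>= conf->chunk_shift;            /* sectors -> whole chunks */
        size /= conf->far_copies;
        size = size * conf->raid_disks / conf->near_copies;
        /* 'size' is now the usable chunks in the array; convert back to
         * per-device sectors, a clean multiple of the chunk size */
        conf->dev_sectors = (size * conf->copies / conf->raid_disks)
                                << conf->chunk_shift;
    }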
-
- 04 May 2012, 1 commit
-
-
Committed by NeilBrown
commit 61a0d80c "md/bitmap: discard CHUNK_BLOCK_SHIFT macro" replaced CHUNK_BLOCK_RATIO() with the same text that was replacing CHUNK_BLOCK_SHIFT() - which is clearly wrong. The result is that 'chunks' is often too small by 1, which can sometimes result in a crash (not sure how). So use the correct replacement, and get rid of CHUNK_BLOCK_RATIO, which is no longer used. Reported-by: Karl Newman <siliconfiend@gmail.com> Tested-by: Karl Newman <siliconfiend@gmail.com> Signed-off-by: NeilBrown <neilb@suse.de>
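The off-by-one, reconstructed with assumed variable names ('blocks' and 'shift' stand in for the real expressions): a plain shift truncates where the RATIO-based form rounded up.

    bitmap->chunks = blocks >> shift;                         /* buggy: truncates */
    bitmap->chunks = (blocks + (1ULL << shift) - 1) >> shift; /* rounds up */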
-
- 30 April 2012, 5 commits
-
-
Committed by Linus Torvalds
Merge tag 'pm-for-3.4-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm. Pull power management fixes from Rafael J. Wysocki: "Fix for an issue causing hibernation to hang on systems with highmem (which in practice means i386) due to broken memory management (a bug introduced in 3.2, so -stable material), plus a PM documentation update making the freezer documentation follow the code again after some recent updates." * tag 'pm-for-3.4-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: PM / Freezer / Docs: Update documentation about freezing of tasks PM / Hibernate: fix the number of pages used for hibernate/thaw buffering
-
Committed by Linus Torvalds
The autofs packet size has had a very unfortunate size problem on x86: because the alignment of 'u64' differs in 32-bit and 64-bit modes, and because the packet data was not 8-byte aligned, the size of the autofsv5 packet structure differed between 32-bit and 64-bit modes despite looking otherwise identical (300 vs 304 bytes respectively). We first fixed that up by making the 64-bit compat mode know about this problem in commit a32744d4 ("autofs: work around unhappy compat problem on x86-64"), and that made a 32-bit 'systemd' work happily on a 64-bit kernel because everything then worked the same way as on a 32-bit kernel. But it turned out that 'automount' had actually known and worked around this problem in user space, so fixing the kernel to do the proper 32-bit compatibility handling actually *broke* 32-bit automount on a 64-bit kernel, because it knew that the packet sizes were wrong and expected those incorrect sizes. As a result, we ended up reverting that compatibility mode fix, and thus breaking systemd again, in commit fcbf94b9. With both automount and systemd doing a single read() system call, and verifying that they get *exactly* the size they expect but using different sizes, it seemed that fixing one of them inevitably seemed to break the other. At one point, a patch I seriously considered applying from Michael Tokarev did a "strcmp()" to see if it was automount that was doing the operation. Ugly, ugly. However, a prettier solution exists now thanks to the packetized pipe mode. By marking the communication pipe as being packetized (by simply setting the O_DIRECT flag), we can always just write the bigger packet size, and if user-space does a smaller read, it will just get that partial end result and the extra alignment padding will simply be thrown away. This makes both automount and systemd happy, since they now get the size they asked for, and the kernel side of autofs simply no longer needs to care - it could pad out the packet arbitrarily. Of course, if there is some *other* user of autofs (please, please, please tell me it ain't so - and we haven't heard of any) that tries to read the packets with multiple writes, that other user will now be broken - the whole point of the packetized mode is that one system call gets exactly one packet, and you cannot read a packet in pieces. Tested-by: Michael Tokarev <mjt@tls.msk.ru> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Cc: David Miller <davem@davemloft.net> Cc: Ian Kent <raven@themaw.net> Cc: Thomas Meyer <thomas@m3y3r.de> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
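A minimal analogue of the alignment trap (not the real autofs structure): any struct whose u64 member forces a different alignment on i386 and x86-64 changes size between the two ABIs.

    #include <stdio.h>

    struct demo {
        unsigned long long token;  /* u64: 8-byte aligned on x86-64,
                                    * only 4-byte aligned on i386 */
        unsigned int tail;
    };

    int main(void)
    {
        /* 12 on i386, 16 on x86-64: the trailing padding differs because
         * the struct's own alignment follows its u64 member */
        printf("sizeof(struct demo) = %zu\n", sizeof(struct demo));
        return 0;
    }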
-
Committed by Marcos Paulo de Souza
The file Documentation/power/freezing-of-tasks.txt was still referencing the TIF_FREEZE flag, which was removed by commit d88e4cb6 (freezer: remove now unused TIF_FREEZE). This patch removes all the references to TIF_FREEZE that were left behind. Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Linus Torvalds
The actual internal pipe implementation is already really about individual packets (called "pipe buffers"), and this simply exposes that as a special packetized mode. When we are in the packetized mode (marked by O_DIRECT as suggested by Alan Cox), a write() on a pipe will not merge the new data with previous writes, so each write will get a pipe buffer of its own. The pipe buffer is then marked with the PIPE_BUF_FLAG_PACKET flag, which in turn will tell the reader side to break the read at that boundary (and throw away any partial packet contents that do not fit in the read buffer). End result: as long as you do writes less than PIPE_BUF in size (so that the pipe doesn't have to split them up), you can now treat the pipe as a packet interface, where each read() system call will read one packet at a time. You can just use a sufficiently big read buffer (PIPE_BUF is sufficient, since bigger than that doesn't guarantee atomicity anyway), and the return value of the read() will naturally give you the size of the packet. NOTE! We do not support zero-sized packets, and zero-sized reads and writes to a pipe continue to be no-ops. Also note that big packets will currently be split at write time, but that the size at which that happens is not really specified (except that it's bigger than PIPE_BUF). Currently that limit is the system page size, but we might want to explicitly support bigger packets some day. The main user for this is going to be the autofs packet interface, allowing us to stop having to care so deeply about exact packet sizes (which have had bugs with 32/64-bit compatibility modes). But user space can create packetized pipes with "pipe2(fd, O_DIRECT)", which will fail with an EINVAL on kernels that do not support this interface. Tested-by: Michael Tokarev <mjt@tls.msk.ru> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Cc: David Miller <davem@davemloft.net> Cc: Ian Kent <raven@themaw.net> Cc: Thomas Meyer <thomas@m3y3r.de> Cc: stable@kernel.org # needed for systemd/autofs interaction fix Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
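A user-space demonstration of the packetized mode described above; it requires a kernel with this feature and fails with EINVAL otherwise:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        char buf[PIPE_BUF];
        ssize_t n;

        if (pipe2(fd, O_DIRECT) < 0) {  /* EINVAL: kernel lacks support */
            perror("pipe2");
            return 1;
        }
        write(fd[1], "first", 5);
        write(fd[1], "second", 6);

        n = read(fd[0], buf, sizeof(buf));  /* one packet: 5 bytes, not 11 */
        printf("got %zd-byte packet: %.*s\n", n, (int)n, buf);
        return 0;
    }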
-