- 02 Mar, 2013 2 commits
-
-
By Joe Thornber
Add a target that allows a fast device such as an SSD to be used as a cache for a slower device such as a disk. A plug-in architecture was chosen so that the decisions about which data to migrate and when are delegated to interchangeable tunable policy modules. The first general purpose module we have developed, called "mq" (multiqueue), follows in the next patch. Other modules are under development.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Heinz Mauelshagen <mauelshagen@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
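As a usage sketch (not from the commit: the device paths, sizes, 512-sector block size and the 'default' policy alias are illustrative assumptions):

    # cache <metadata dev> <cache dev> <origin dev> <block size> \
    #       <#feature args> [features] <policy> <#policy args> [policy args]
    dmsetup create cached --table \
      "0 41943040 cache /dev/vg/meta /dev/vg/ssd /dev/vg/slow 512 0 default 0"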
-
By Alasdair G Kergon
Remove EXPERIMENTAL from all existing device-mapper targets.

Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- 13 Oct, 2012 1 commit
-
-
By Mike Snitzer
The bio prison code will be useful to other future DM targets so move it to a separate module.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- 02 Aug, 2012 1 commit
-
-
By NeilBrown
Now that DM_RAID supports raid10, it needs to select that code to ensure it is included.

Cc: Jonathan Brassow <jbrassow@redhat.com>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
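The fix amounts to one extra select line in drivers/md/Kconfig; a sketch of the resulting entry (prompt text and surrounding lines approximate, not quoted from the patch):

    config DM_RAID
           tristate "RAID 1/4/5/6/10 target"
           depends on BLK_DEV_DM
           select MD_RAID1
           select MD_RAID10
           select MD_RAID456
           ...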
-
- 27 Jul, 2012 1 commit
-
-
By Joe Thornber
Remove debug space map checker from dm persistent data.

The space map checker is a wrapper for other space maps that double checks the reference counts are correct. It holds all these reference counts in memory rather than on disk, so uses a lot of memory and is thus restricted to small pools.

As yet, this checker hasn't found any issues, but has caused a few of its own due to people turning it on by default with larger pools. Removing.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- 29 Mar, 2012 3 commits
-
-
By Mikulas Patocka
This device-mapper target creates a read-only device that transparently validates the data on one underlying device against a pre-generated tree of cryptographic checksums stored on a second device.

Two checksum device formats are supported: version 0 which is already shipping in Chromium OS and version 1 which incorporates some improvements.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
Signed-off-by: Will Drewry <wad@chromium.org>
Signed-off-by: Elly Jones <ellyjones@chromium.org>
Cc: Milan Broz <mbroz@redhat.com>
Cc: Olof Johansson <olofj@chromium.org>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
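A sketch of the table this target accepts (device names are invented and the digest/salt fields are placeholders, not values from the commit):

    # verity <version> <data dev> <hash dev> <data block size> <hash block size> \
    #        <#data blocks> <hash start block> <algorithm> <root digest> <salt>
    dmsetup create vroot --readonly --table \
      "0 2097152 verity 1 /dev/sda1 /dev/sda2 4096 4096 262144 1 sha256 <root-digest-hex> <salt-hex>"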
-
By Alasdair G Kergon
The dm raid module (using md) is becoming the preferred way of creating long-lived mirrors through userspace LVM so remove the EXPERIMENTAL tag.

Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
By Alasdair G Kergon
Drop EXPERIMENTAL tag from dm-uevent. It hasn't changed for a while and some userspace tools are relying upon it.

Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- 01 Nov, 2011 2 commits
-
-
By Joe Thornber
Initial EXPERIMENTAL implementation of device-mapper thin provisioning with snapshot support. The 'thin' target is used to create instances of the virtual devices that are hosted in the 'thin-pool' target. The thin-pool target provides data sharing among devices. This sharing is made possible using the persistent-data library in the previous patch.

The main highlight of this implementation, compared to the previous implementation of snapshots, is that it allows many virtual devices to be stored on the same data volume, simplifying administration and allowing sharing of data between volumes (thus reducing disk usage).

Another big feature is support for arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots ...). The previous implementation of snapshots did this by chaining together lookup tables, and so performance was O(depth). This new implementation uses a single data structure so we don't get this degradation with depth.

For further information and examples of how to use this, please read Documentation/device-mapper/thin-provisioning.txt

Signed-off-by: Joe Thornber <thornber@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
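A minimal usage sketch following the documentation referenced above (device names, sizes, the 128-sector block size and the low-water mark are illustrative assumptions):

    # thin-pool <metadata dev> <data dev> <data block size> <low water mark>
    dmsetup create pool --table \
      "0 20971520 thin-pool /dev/vg/meta /dev/vg/data 128 32768"
    # Create thin device #0 inside the pool, then activate it.
    dmsetup message /dev/mapper/pool 0 "create_thin 0"
    dmsetup create thin0 --table "0 2097152 thin /dev/mapper/pool 0"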
-
By Mikulas Patocka
The dm-bufio interface allows you to do cached I/O on devices, holding recently-read blocks in memory and performing delayed writes.

We don't use buffer cache or page cache already present in the kernel, because:
* we need to handle block sizes larger than a page
* we can't allocate memory to perform reads or we'd have deadlocks

Currently, when a cache is required, we limit its size to a fraction of available memory. Usage can be viewed and changed in /sys/module/dm_bufio/parameters/ . The first user is thin provisioning, but more dm users are planned.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
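For instance, to inspect the knobs and cap the cache (the max_cache_size_bytes parameter name is taken from the mainline dm-bufio module; the 256 MiB value is an arbitrary example):

    ls /sys/module/dm_bufio/parameters/
    echo $((256 * 1024 * 1024)) > /sys/module/dm_bufio/parameters/max_cache_size_bytes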
-
- 02 Aug, 2011 1 commit
-
-
By Jonathan Brassow
Add the ability to parse and use metadata devices to dm-raid. Although not strictly required, without the metadata devices, many features of RAID are unavailable. They are used to store a superblock and bitmap.

The role, or position in the array, of each device must be recorded in its superblock. This is to help with fault handling, array reshaping, and sanity checks. RAID 4/5/6 devices must be loaded in a specific order: in this way, the 'array_position' field helps validate the correctness of the mapping when it is loaded. It can be used during reshaping to identify which devices are added/removed. Fault handling is impossible without this field.

For example, when a device fails it is recorded in the superblock. If this is a RAID1 device and the offending device is removed from the array, there must be a way during subsequent array assembly to determine that the failed device was the one removed. This is done by correlating the 'array_position' field and the bit-field variable 'failed_devices'.

Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
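As a sketch, a table with metadata/data device pairs might look like this (device numbers are invented; the pair layout follows the dm-raid table format documented in the raid skeleton commit further down this log):

    0 1960893648 raid raid5_ls 1 2048 \
        3 8:17 8:18 8:33 8:34 8:49 8:50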
-
- 24 Mar, 2011 1 commit
-
-
By Josef Bacik
This target is the same as the linear target except that it returns I/O errors periodically. It's been found useful in simulating failing devices for testing purposes.

I needed a dm target to do some failure testing on btrfs's raid code, and Mike pointed me at this.

Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
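A sketch of how such a target is loaded (the device path and the 180s-up/1s-down cadence are invented for illustration):

    # flakey <dev path> <offset> <up interval> <down interval>
    dmsetup create flaky-disk --table "0 409600 flakey /dev/sdb1 0 180 1"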
-
- 14 Jan, 2011 1 commit
-
-
By NeilBrown
This patch is the skeleton for the DM target that will be the bridge from DM to MD (initially RAID456 and later RAID1). It provides a way to use device-mapper interfaces to the MD RAID456 drivers.

As with all device-mapper targets, the nominal public interfaces are the constructor (CTR) tables and the status outputs (both STATUSTYPE_INFO and STATUSTYPE_TABLE). The CTR table looks like the following:

1: <s> <l> raid \
2:      <raid_type> <#raid_params> <raid_params> \
3:      <#raid_devs> <meta_dev1> <dev1> .. <meta_devN> <devN>

Line 1 contains the standard first three arguments to any device-mapper target - the start, length, and target type fields. The target type in this case is "raid".

Line 2 contains the arguments that define the particular raid type/personality/level, the required arguments for that raid type, and any optional arguments. Possible raid types include: raid4, raid5_la, raid5_ls, raid5_rs, raid6_zr, raid6_nr, and raid6_nc. (again, raid1 is planned for the future.) The list of required and optional parameters is the same for all the current raid types. The required parameters are positional, while the optional parameters are given as key/value pairs. The possible parameters are as follows:

 <chunk_size>                       Chunk size in sectors.
 [[no]sync]                         Force/Prevent RAID initialization
 [rebuild <idx>]                    Rebuild the drive indicated by the index
 [daemon_sleep <ms>]                Time between bitmap daemon work to clear bits
 [min_recovery_rate <kB/sec/disk>]  Throttle RAID initialization
 [max_recovery_rate <kB/sec/disk>]  Throttle RAID initialization
 [max_write_behind <value>]         See '-write-behind=' (man mdadm)
 [stripe_cache <sectors>]           Stripe cache size for higher RAIDs

Line 3 contains the list of devices that compose the array in metadata/data device pairs. If the metadata is stored separately, a '-' is given for the metadata device position. If a drive has failed or is missing at creation time, a '-' can be given for both the metadata and data drives for a given position.

Examples:

# RAID4 - 4 data drives, 1 parity
# No metadata devices specified to hold superblock/bitmap info
# Chunk size of 1MiB
# (Lines separated for easy reading)
0 1960893648 raid \
        raid4 1 2048 \
        5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81

# RAID4 - 4 data drives, 1 parity (no metadata devices)
# Chunk size of 1MiB, force RAID initialization,
# min recovery rate at 20 kiB/sec/disk
0 1960893648 raid \
        raid4 4 2048 min_recovery_rate 20 sync \
        5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81

Performing a 'dmsetup table' should display the CTR table used to construct the mapping (with possible reordering of optional parameters). Performing a 'dmsetup status' will yield information on the state and health of the array. The output is as follows:

1: <s> <l> raid \
2:      <raid_type> <#devices> <1 health char for each dev> <resync_ratio>

Line 1 is standard DM output. Line 2 is best shown by example:

        0 1960893648 raid raid4 5 AAAAA 2/490221568

Here we can see the RAID type is raid4, there are 5 devices - all of which are 'A'live, and the array is 2/490221568 complete with recovery.

Cc: linux-raid@vger.kernel.org
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- 18 May, 2010 1 commit
-
-
By NeilBrown
RAID10 has been available for quite a while now and is quite well tested, so we can remove the EXPERIMENTAL designation.

Reported-by: Eric MSP Veith <eveith@wwweb-library.net>
Signed-off-by: NeilBrown <neilb@suse.de>
-
- 14 Dec, 2009 1 commit
-
-
By NeilBrown
Make it clear in the config message that MD_MULTIPATH is not under active development.

Cc: Oren Held <orenhe@il.ibm.com>
Signed-off-by: NeilBrown <neilb@suse.de>
-
- 30 Oct, 2009 1 commit
-
-
By David Woodhouse
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 29 Oct, 2009 1 commit
-
-
By David Woodhouse
We'll want to use these in btrfs too.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
-
- 30 Aug, 2009 3 commits
-
-
By Dan Williams
Now that the resources to handle stripe_head operations are allocated percpu it is possible for raid5d to distribute stripe handling over multiple cores. This conversion also adds a call to cond_resched() in the non-multicore case to prevent one core from getting monopolized for raid operations.

Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
By Dan Williams
[ Based on an original patch by Yuri Tikhonov ]

The raid_run_ops routine uses the asynchronous offload api and the stripe_operations member of a stripe_head to carry out xor+pq+copy operations asynchronously, outside the lock.

The operations performed by RAID-6 are the same as in the RAID-5 case except for no support of STRIPE_OP_PREXOR operations. All the others are supported:
STRIPE_OP_BIOFILL
 - copy data into request buffers to satisfy a read request
STRIPE_OP_COMPUTE_BLK
 - generate missing blocks (1 or 2) in the cache from the other blocks
STRIPE_OP_BIODRAIN
 - copy data out of request buffers to satisfy a write request
STRIPE_OP_RECONSTRUCT
 - recalculate parity for new data that has entered the cache
STRIPE_OP_CHECK
 - verify that the parity is correct

The flow is the same as in the RAID-5 case, and reuses some routines, namely:
1/ ops_complete_postxor (renamed to ops_complete_reconstruct)
2/ ops_complete_compute (updated to set up to 2 targets uptodate)
3/ ops_run_check (renamed to ops_run_check_p for xor parity checks)

[neilb@suse.de: fixes to get it to pass mdadm regression suite]
Reviewed-by: Andre Noll <maan@systemlinux.org>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
By Dan Williams
Port drivers/md/raid6test/test.c to use the async raid6 recovery routines. This is meant as a unit test for raid6 acceleration drivers. In addition to the 16-drive test case this implements tests for the 4-disk and 5-disk special cases (dma devices can not generically handle less than 2 sources), and adds a test for the D+Q case.

Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 22 Jun, 2009 3 commits
-
-
By Jonathan Brassow
This patch contains a device-mapper mirror log module that forwards requests to userspace for processing. The structures used for communication between kernel and userspace are located in include/linux/dm-log-userspace.h. Due to the frequency, diversity, and 2-way communication nature of the exchanges between kernel and userspace, 'connector' was chosen as the interface for communication.

The first log implementations written in userspace - "clustered-disk" and "clustered-core" - support clustered shared storage. A userspace daemon (in the LVM2 source code repository) uses openAIS/corosync to process requests in an ordered fashion with the rest of the nodes in the cluster so as to prevent log state corruption. Other implementations with no association to LVM or openAIS/corosync, are certainly possible.

(Imagine if two machines are writing to the same region of a mirror. They would both mark the region dirty, but you need a cluster-aware entity that can handle properly marking the region clean when they are done. Otherwise, you might clear the region when the first machine is done, not the second.)

Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Cc: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
By Kiyoshi Ueda
This patch adds a service time oriented dynamic load balancer, dm-service-time, which selects the path with the shortest estimated service time for the incoming I/O.

The service time is estimated by dividing the in-flight I/O size by a performance value of each path. The performance value can be given as a table argument at the table loading time. If no performance value is given, all paths are considered equal.

Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
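A sketch of a multipath table selecting this balancer (device numbers and the relative-throughput values 50 and 100 are invented; each path takes a repeat count plus a relative throughput argument):

    # ... multipath <#features> <#hwh args> <#pgs> <next pg> \
    #     service-time <#selector args> <#paths> <#path args> \
    #     <path> <repeat count> <relative throughput> ...
    0 2097152 multipath 0 0 1 1 service-time 0 2 2 8:16 1 50 8:32 1 100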
-
By Kiyoshi Ueda
This patch adds a dynamic load balancer, dm-queue-length, which balances the number of in-flight I/Os across the paths.

The code is based on the patch posted by Stefan Bader:
https://www.redhat.com/archives/dm-devel/2005-October/msg00050.html

Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com>
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
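For comparison with the service-time example above, a queue-length table sketch (device numbers invented; each path takes a single repeat-count argument):

    0 2097152 multipath 0 0 1 1 queue-length 0 2 1 8:16 128 8:32 128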
-
- 31 Mar, 2009 2 commits
-
-
By NeilBrown
This was only needed when the code was experimental. Most of it is well tested now, so the option is no longer useful.

Signed-off-by: NeilBrown <neilb@suse.de>
-
By Dan Williams
Move the raid6 data processing routines into a standalone module (raid6_pq) to prepare them to be called from async_tx wrappers and other non-md drivers/modules. This precludes a circular dependency of raid456 needing the async modules for data processing while those modules in turn depend on raid456 for the base level synchronous raid6 routines.

To support this move:
1/ The exportable definitions in raid6.h move to include/linux/raid/pq.h
2/ The raid6_call, recovery calls, and table symbols are exported
3/ Extra #ifdef __KERNEL__ statements to enable the userspace raid6test to compile

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
-
- 12 Oct, 2008 2 commits
-
-
By Alan Jenkins
Signed-off-by: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
By Arjan van de Ven
RAID autodetect has the side effect of requiring synchronisation of all device drivers, which can make the boot several seconds longer (I've measured 7 on one of my laptops)... even for systems that don't have RAID setup for the root filesystem (the only FS where this matters).

This patch makes the default for autodetect a config option; either way the user can always override via the kernel command line.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: NeilBrown <neilb@suse.de>
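Concretely (the MD_AUTODETECT option and the raid= parameter exist in mainline; the bootloader line itself is illustrative):

    # Build-time default:
    CONFIG_MD_AUTODETECT=y
    # Boot-time override on the kernel command line:
    linux /vmlinuz root=/dev/sda1 raid=noautodetect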
-
- 15 Jul, 2008 1 commit
-
-
By Chandra Seetharaman
Do not automatically "select" SCSI_DH for dm-multipath. If SCSI_DH doesn't exist, just do not allow hardware handlers to be used.

Handle SCSI_DH being a module also. Make sure it doesn't allow DM_MULTIPATH to be compiled in when SCSI_DH is a module.

[jejb: added comment for Kconfig syntax]
Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Reported-by: Randy Dunlap <randy.dunlap@oracle.com>
Reported-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
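The Kconfig idiom that enforces this deserves spelling out: "depends on SCSI_DH || !SCSI_DH" evaluates to m when SCSI_DH=m, so DM_MULTIPATH can then be m but not y. A sketch of the resulting entry (the comment paraphrases the one jejb added):

    config DM_MULTIPATH
           tristate "Multipath target"
           depends on BLK_DEV_DM
           # nasty syntax but means make DM_MULTIPATH independent
           # of SCSI_DH if the latter isn't defined, but if
           # it is, DM_MULTIPATH must depend on it
           depends on SCSI_DH || !SCSI_DH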
-
- 05 Jun, 2008 2 commits
-
-
By Chandra Seetharaman
This patch removes the 3 hardware handlers that currently exist under dm as the functionality is moved to the SCSI layer in the earlier patches.

[jejb: removed more makefile hunks and rejection fixes]
Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Acked-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
-
By Chandra Seetharaman
This patch converts dm-mpath to use scsi device handlers instead of dm's hardware handlers. This patch does not add any new functionality. Old behaviors remain and userspace tools work as is, except that arguments supplied with the hardware handler are ignored.

One behavioral exception is: activation of a path is synchronous in this patch, opposed to the older behavior of being asynchronous (changed in patch 07: "scsi_dh: Add a single threaded workqueue for initializing a path").

Note: There is no need to get a reference for the device handler module (as it was done in the dm hardware handler case) here, as the reference is held when the device was first found. Instead we check and make sure that support for the specified device is present at table load time.

Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Acked-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
-
- 08 Feb, 2008 1 commit
-
-
By Alasdair G Kergon
Drop the EXPERIMENTAL tag from well-established device-mapper targets, so the newer ones stand out better.

Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- 21 Dec, 2007 1 commit
-
-
By Paul Mundt
With CONFIG_SCSI=n __scsi_print_sense() is never linked in:

drivers/built-in.o: In function `hp_sw_end_io':
dm-mpath-hp-sw.c:(.text+0x914f8): undefined reference to `__scsi_print_sense'

Caught with a randconfig on current git.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- 20 Oct, 2007 2 commits
-
-
By Mike Anderson
This patch adds a uevent skeleton to device-mapper.

Signed-off-by: Mike Anderson <andmike@linux.vnet.ibm.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
By Dave Wysochanski
This patch adds the most basic dm-multipath hardware support for the HP active/passive arrays.

Signed-off-by: Dave Wysochanski <dwysocha@redhat.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Acked-by: Chandra Seetharaman <sekharan@us.ibm.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- 25 Aug, 2007 1 commit
-
-
By Randy Dunlap
DM_MULTIPATH_RDAC uses SCSI API(s) and is for a SCSI device, so add SCSI to its depends on to prevent build errors.

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
[ Tested and verified by Chandra Seetharaman ]
Acked-by: Chandra Seetharaman <sekharan@us.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 18 Jul, 2007 1 commit
-
-
By Jan Engelhardt
Change Kconfig objects from "menu, config" into "menuconfig" so that the user can disable the whole feature without having to enter the menu first.

Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Acked-by: Neil Brown <neilb@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
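The shape of the change, sketched for drivers/md/Kconfig (entries abbreviated):

    # Before: a menu the user must enter to reach the gating option
    menu "Multi-device support (RAID and LVM)"
    config MD
           bool "Multiple devices driver support (RAID and LVM)"
    ...
    endmenu

    # After: the gating option is the menu itself
    menuconfig MD
           bool "Multiple devices driver support (RAID and LVM)"
    if MD
    ...
    endif # MD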
-
- 13 Jul, 2007 3 commits
-
-
By Dan Williams
The async_tx api provides methods for describing a chain of asynchronous bulk memory transfers/transforms with support for inter-transactional dependencies. It is implemented as a dmaengine client that smooths over the details of different hardware offload engine implementations. Code that is written to the api can optimize for asynchronous operation and the api will fit the chain of operations to the available offload resources.

  I imagine that any piece of ADMA hardware would register with the
  'async_*' subsystem, and a call to async_X would be routed as
  appropriate, or be run in-line. - Neil Brown

async_tx exploits the capabilities of struct dma_async_tx_descriptor to provide an api of the following general format:

struct dma_async_tx_descriptor *
async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
                  dma_async_tx_callback cb_fn, void *cb_param)
{
        struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
        struct dma_device *device = chan ? chan->device : NULL;
        int int_en = cb_fn ? 1 : 0;
        struct dma_async_tx_descriptor *tx = device ?
                device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

        if (tx) { /* run <operation> asynchronously */
                ...
                tx->tx_set_dest(addr, tx, index);
                ...
                tx->tx_set_src(addr, tx, index);
                ...
                async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
        } else { /* run <operation> synchronously */
                ...
                <operation>
                ...
                async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
        }

        return tx;
}

async_tx_find_channel() returns a capable channel from its pool. The channel pool is organized as a per-cpu array of channel pointers. The async_tx_rebalance() routine is tasked with managing these arrays. In the uniprocessor case async_tx_rebalance() tries to spread responsibility evenly over channels of similar capabilities. For example if there are two copy+xor channels, one will handle copy operations and the other will handle xor. In the SMP case async_tx_rebalance() attempts to spread the operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor channel0 while cpu1 gets copy channel 1 and xor channel 1. When a dependency is specified async_tx_find_channel defaults to keeping the operation on the same channel. A xor->copy->xor chain will stay on one channel if it supports both operation types, otherwise the transaction will transition between a copy and a xor resource.

Currently the raid5 implementation in the MD raid456 driver has been converted to the async_tx api. A driver for the offload engines on the Intel Xscale series of I/O processors, iop-adma, is provided in a later commit. With the iop-adma driver and async_tx, raid456 is able to offload copy, xor, and xor-zero-sum operations to hardware engines.

On iop342 tiobench showed higher throughput for sequential writes (20 - 30% improvement) and sequential reads to a degraded array (40 - 55% improvement). For the other cases performance was roughly equal, +/- a few percentage points. On a x86-smp platform the performance of the async_tx implementation (in synchronous mode) was also +/- a few percentage points of the original implementation. According to 'top' on iop342 CPU utilization drops from ~50% to ~15% during a 'resync' while the speed according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.

The tiobench command line used for testing was:
        tiobench --size 2048 --block 4096 --block 131072 --dir /mnt/raid --numruns 5
* iop342 had 1GB of memory available

Details:
* if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making async_tx_find_channel a static inline routine that always returns NULL
* when a callback is specified for a given transaction an interrupt will fire at operation completion time and the callback will occur in a tasklet. if the channel does not support interrupts then a live polling wait will be performed
* the api is written as a dmaengine client that requests all available channels
* In support of dependencies the api implicitly schedules channel-switch interrupts. The interrupt triggers the cleanup tasklet which causes pending operations to be scheduled on the next channel
* Xor engines treat an xor destination address differently than a software xor routine. To the software routine the destination address is an implied source, whereas engines treat it as a write-only destination. This patch modifies the xor_blocks routine to take an explicit destination address to mirror the hardware.

Changelog:
* fixed a leftover debug print
* don't allow callbacks in async_interrupt_cond
* fixed xor_block changes
* fixed usage of ASYNC_TX_XOR_DROP_DEST
* drop dma mapping methods, suggested by Chris Leech
* printk warning fixups from Andrew Morton
* don't use inline in C files, Adrian Bunk
* select the API when MD is enabled
* BUG_ON xor source counts <= 1
* implicitly handle hardware concerns like channel switching and interrupts, Neil Brown
* remove the per operation type list, and distribute operation capabilities evenly amongst the available channels
* simplify async_tx_find_channel to optimize the fast path
* introduce the channel_table_initialized flag to prevent early calls to the api
* reorganize the code to mimic crypto
* include mm.h as not all archs include it in dma-mapping.h
* make the Kconfig options non-user visible, Adrian Bunk
* move async_tx under crypto since it is meant as 'core' functionality, and the two may share algorithms in the future
* move large inline functions into c files
* checkpatch.pl fixes
* gpl v2 only correction

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Acked-By: NeilBrown <neilb@suse.de>
-
By Dan Williams
The async_tx api tries to use a dma engine for an operation, but will fall back to an optimized software routine otherwise. Xor support is implemented using the raid5 xor routines. For organizational purposes this routine is moved to a common area.

The following fixes are also made:
* rename xor_block => xor_blocks, suggested by Adrian Bunk
* ensure that xor.o initializes before md.o in the built-in case
* checkpatch.pl fixes
* mark calibrate_xor_blocks __init, Adrian Bunk

Cc: Adrian Bunk <bunk@stusta.de>
Cc: NeilBrown <neilb@suse.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
By Chandra Seetharaman
This patch supports LSI/Engenio devices in RDAC mode. Like dm-emc it requires userspace support. In your multipath.conf file you must have:

        path_checker            rdac
        hardware_handler        "1 rdac"
        prio_callout            "/sbin/mpath_prio_tpc /dev/%n"

And you also then must have an updated multipath tools release which has rdac support.

Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Signed-off-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 May, 2007 1 commit
-
-
By Heinz Mauelshagen
New device-mapper target that can delay I/O (for testing). Reads can be separated from writes, redirected to different underlying devices and delayed by differing amounts of time.

Signed-off-by: Heinz Mauelshagen <mauelshagen@redhat.com>
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
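A usage sketch (devices and the 500 ms/20 ms delays are invented; the optional second triple redirects and delays writes separately):

    # delay <read dev> <offset> <read delay ms> [<write dev> <offset> <write delay ms>]
    dmsetup create slow --table "0 409600 delay /dev/sdb1 0 500 /dev/sdc1 0 20"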
-