- 15 July 2007, 3 commits
-
-
Submitted by Patrick McHardy
The set_multicast_list function may be called without holding the rtnl mutex, resulting in races when changing the underlying device's promiscuous and allmulti state. Use the change_rx_mode hook, which is always invoked under the rtnl. Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Patrick McHardy
The method drivers currently use to synchronize multicast lists is not very pretty:
- walk the multicast list
- search for each entry on a copy of the previous list
- if new, add to the lower device
- walk the copy of the previous list
- search for each entry on the current list
- if removed, delete from the lower device
- copy the entire list
This patch adds a new field to struct dev_addr_list to store the synchronization state and adds two helper functions for synchronization and cleanup. Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
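A minimal sketch of how a stacked driver might use the new helpers. The commit text only says "two helper functions"; the names dev_mc_sync()/dev_mc_unsync() and the private-data layout are assumptions made for the example.

```c
#include <linux/netdevice.h>

/* Hypothetical private data for a stacked (VLAN-like) device. */
struct stacked_priv {
	struct net_device *lowerdev;
};

static void stacked_set_multicast_list(struct net_device *dev)
{
	struct stacked_priv *priv = netdev_priv(dev);

	/* Push only the delta of dev's mc_list down to the lower device. */
	dev_mc_sync(priv->lowerdev, dev);
}

static int stacked_stop(struct net_device *dev)
{
	struct stacked_priv *priv = netdev_priv(dev);

	/* Remove the addresses we previously added to the lower device. */
	dev_mc_unsync(priv->lowerdev, dev);
	return 0;
}
```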
-
Submitted by Patrick McHardy
Currently the set_multicast_list (and set_rx_mode) callbacks are responsible for configuring the device according to the IFF_PROMISC, IFF_MULTICAST and IFF_ALLMULTI flags and the mc_list (and uc_list in the case of set_rx_mode). These callbacks can be invoked from BH context without the rtnl_mutex by dev_mc_add/dev_mc_delete, which makes reading the device flags and the promiscuous/allmulti counts racy. For real hardware drivers that just commit all changes to the hardware this is not a real problem, since the stack guarantees to call them for every change, so at least the final call will not race and will commit the correct configuration to the hardware. For software devices that want to synchronize promiscuous and multicast state to an underlying device, however, this can cause corruption of the underlying device's flags or promisc/allmulti counts. When the software device is concurrently put in promiscuous or allmulti mode while set_multicast_list is invoked from bottom half context, the device might synchronize the change to the underlying device without holding the rtnl_mutex, which races with concurrent changes to the underlying device. Add a dev->change_rx_flags hook that is invoked when any of the flags that affect rx filtering change (under the rtnl_mutex), which allows drivers to perform synchronization immediately and only synchronize the address lists in set_multicast_list/set_rx_mode. Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
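A rough sketch of how a stacked driver could implement the new hook, reusing the hypothetical stacked_priv from the previous sketch. The (dev, change) prototype and the flag handling are assumptions; the commit only states that the hook runs under the rtnl_mutex when an rx-filtering flag changes.

```c
#include <linux/netdevice.h>
#include <linux/if.h>

static void stacked_change_rx_flags(struct net_device *dev, int change)
{
	struct stacked_priv *priv = netdev_priv(dev);

	/* Propagate promiscuous/allmulti changes while holding the rtnl. */
	if (change & IFF_PROMISC)
		dev_set_promiscuity(priv->lowerdev,
				    dev->flags & IFF_PROMISC ? 1 : -1);
	if (change & IFF_ALLMULTI)
		dev_set_allmulti(priv->lowerdev,
				 dev->flags & IFF_ALLMULTI ? 1 : -1);
}
```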
-
- 14 July 2007, 1 commit
-
-
Submitted by Ingo Molnar
Trivial cleanup: LOCAL_DISTANCE and REMOTE_DISTANCE are only used in topology.h and inside an #ifndef section - limit their existence to that #ifndef. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
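For context, a sketch of the shape this leaves include/linux/topology.h in: the fallback definitions only exist when an architecture does not provide its own node_distance(). The distance values shown are the conventional defaults and are an assumption here, not quoted from the patch.

```c
/* include/linux/topology.h (sketch) */
#ifndef node_distance
#define LOCAL_DISTANCE		10
#define REMOTE_DISTANCE		20
#define node_distance(from, to)	\
	((from) == (to) ? LOCAL_DISTANCE : REMOTE_DISTANCE)
#endif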
-
- 13 July 2007, 25 commits
-
-
Submitted by Dan Williams
Cc: John Magolan <john.magolan@unisys.com> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams
When a read bio is attached to the stripe and the corresponding block is marked R5_UPTODATE, then a read (biofill) operation is scheduled to copy the data from the stripe cache to the bio buffer. handle_stripe flags the blocks to be operated on with the R5_Wantfill flag. If new read requests arrive while raid5_run_ops is running they will not be handled until handle_stripe is scheduled to run again.
Changelog:
* cleanup to_read and to_fill accounting
* do not fail reads that have reached the cache
Signed-off-by: Dan Williams <dan.j.williams@intel.com> Acked-by: NeilBrown <neilb@suse.de>
-
Submitted by Dan Williams
handle_stripe will compute a block when a backing disk has failed, or when it determines it can save a disk read by computing the block from all the other up-to-date blocks. Previously a block would be computed under the lock and subsequent logic in handle_stripe could use the newly up-to-date block. With the raid5_run_ops implementation the compute operation is carried out at a later time, outside the lock. To preserve the old functionality we take advantage of the dependency chain feature of async_tx to flag the block as R5_Wantcompute and then let other parts of handle_stripe operate on the block as if it were up-to-date. raid5_run_ops guarantees that the block will be ready before it is used in another operation. However, this only works in cases where the compute and the dependent operation are scheduled at the same time. If a previous call to handle_stripe sets the R5_Wantcompute flag there is no facility to pass the async_tx dependency chain across successive calls to raid5_run_ops. The req_compute variable protects against this case.
Changelog:
* remove the req_compute BUG_ON
Signed-off-by: Dan Williams <dan.j.williams@intel.com> Acked-by: NeilBrown <neilb@suse.de>
-
Submitted by Dan Williams
When the raid acceleration work was proposed, Neil laid out the following attack plan:
1/ move the xor and copy operations outside spin_lock(&sh->lock)
2/ find/implement an asynchronous offload api
The raid5_run_ops routine uses the asynchronous offload api (async_tx) and the stripe_operations member of a stripe_head to carry out xor+copy operations asynchronously, outside the lock. To perform operations outside the lock a new set of state flags is needed to track new requests, in-flight requests, and completed requests. In this new model handle_stripe is tasked with scanning the stripe_head for work, updating the stripe_operations structure, and finally dropping the lock and calling raid5_run_ops for processing. The following flags outline the requests that handle_stripe can make of raid5_run_ops:
STRIPE_OP_BIOFILL - copy data into request buffers to satisfy a read request
STRIPE_OP_COMPUTE_BLK - generate a missing block in the cache from the other blocks
STRIPE_OP_PREXOR - subtract existing data as part of the read-modify-write process
STRIPE_OP_BIODRAIN - copy data out of request buffers to satisfy a write request
STRIPE_OP_POSTXOR - recalculate parity for new data that has entered the cache
STRIPE_OP_CHECK - verify that the parity is correct
STRIPE_OP_IO - submit i/o to the member disks (note this was already performed outside the stripe lock, but it made sense to add it as an operation type)
The flow is:
1/ handle_stripe sets STRIPE_OP_* in sh->ops.pending
2/ raid5_run_ops reads sh->ops.pending, sets sh->ops.ack, and submits the operation to the async_tx api
3/ async_tx triggers the completion callback routine to set sh->ops.complete and release the stripe
4/ handle_stripe runs again to finish the operation and optionally submit new operations that were previously blocked
Note this patch just defines raid5_run_ops; subsequent commits (one per major operation type) modify handle_stripe to take advantage of this routine.
Changelog:
* removed ops_complete_biodrain in favor of ops_complete_postxor and ops_complete_write
* removed the raid5_run_ops workqueue
* call bi_end_io for reads in ops_complete_biofill, saves a call to handle_stripe
* explicitly handle the 2-disk raid5 case (xor becomes memcpy), Neil Brown
* fix race between async engines and bi_end_io call for reads, Neil Brown
* remove unnecessary spin_lock from ops_complete_biofill
* remove test_and_set/test_and_clear BUG_ONs, Neil Brown
* remove explicit interrupt handling for channel switching, this feature was absorbed (i.e. it is now implicit) by the async_tx api
* use return_io in ops_complete_biofill
Signed-off-by: Dan Williams <dan.j.williams@intel.com> Acked-by: NeilBrown <neilb@suse.de>
-
Submitted by Dan Williams
handle_stripe5 and handle_stripe6 have very deep logic paths handling the various states of a stripe_head. By introducing the 'stripe_head_state' and 'r6_state' objects, large portions of the logic can be moved to sub-routines. 'struct stripe_head_state' consumes all of the automatic variables that previously stood alone in handle_stripe5,6. 'struct r6_state' contains the handle_stripe6 specific variables like p_failed and q_failed. One of the nice side effects of the 'stripe_head_state' change is that it allows for further reductions in code duplication between raid5 and raid6. The following new routines are shared between raid5 and raid6: handle_completed_write_requests, handle_requests_to_failed_array, and handle_stripe_expansion.
Changes:
* v2: fixed 'conf->raid_disk-1' for the raid6 'handle_stripe_expansion' path
* v3: removed the unused 'dirty' field from struct stripe_head_state
* v3: coalesced open coded bi_end_io routines into return_io()
Signed-off-by: Dan Williams <dan.j.williams@intel.com> Acked-by: NeilBrown <neilb@suse.de>
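A rough sketch of what such state objects might look like, purely to illustrate the idea of hoisting handle_stripe5/6's automatic variables into one structure. The field lists are assumed for the sketch, not taken from the patch.

```c
/* Illustrative only: per-call bookkeeping that handle_stripe5/6
 * previously kept in local variables. */
struct stripe_head_state {
	int syncing, expanding, expanded;
	int locked, uptodate, to_read, to_write, failed, written;
	int failed_num;
};

/* raid6 keeps its extra failure bookkeeping separately. */
struct r6_state {
	int p_failed, q_failed;
	int failed_num[2];
};
```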
-
Submitted by Dan Williams
The async_tx api provides methods for describing a chain of asynchronous bulk memory transfers/transforms with support for inter-transactional dependencies. It is implemented as a dmaengine client that smooths over the details of different hardware offload engine implementations. Code that is written to the api can optimize for asynchronous operation and the api will fit the chain of operations to the available offload resources.
"I imagine that any piece of ADMA hardware would register with the 'async_*' subsystem, and a call to async_X would be routed as appropriate, or be run in-line." - Neil Brown
async_tx exploits the capabilities of struct dma_async_tx_descriptor to provide an api of the following general format:

	struct dma_async_tx_descriptor *
	async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
			  dma_async_tx_callback cb_fn, void *cb_param)
	{
		struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
		struct dma_device *device = chan ? chan->device : NULL;
		int int_en = cb_fn ? 1 : 0;
		struct dma_async_tx_descriptor *tx = device ?
			device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

		if (tx) { /* run <operation> asynchronously */
			...
			tx->tx_set_dest(addr, tx, index);
			...
			tx->tx_set_src(addr, tx, index);
			...
			async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
		} else { /* run <operation> synchronously */
			...
			<operation>
			...
			async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
		}

		return tx;
	}

async_tx_find_channel() returns a capable channel from its pool. The channel pool is organized as a per-cpu array of channel pointers. The async_tx_rebalance() routine is tasked with managing these arrays. In the uniprocessor case async_tx_rebalance() tries to spread responsibility evenly over channels of similar capabilities. For example if there are two copy+xor channels, one will handle copy operations and the other will handle xor. In the SMP case async_tx_rebalance() attempts to spread the operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor channel0 while cpu1 gets copy channel1 and xor channel1. When a dependency is specified async_tx_find_channel defaults to keeping the operation on the same channel. A xor->copy->xor chain will stay on one channel if it supports both operation types, otherwise the transaction will transition between a copy and a xor resource.
Currently the raid5 implementation in the MD raid456 driver has been converted to the async_tx api. A driver for the offload engines on the Intel Xscale series of I/O processors, iop-adma, is provided in a later commit. With the iop-adma driver and async_tx, raid456 is able to offload copy, xor, and xor-zero-sum operations to hardware engines. On iop342 tiobench showed higher throughput for sequential writes (20 - 30% improvement) and sequential reads to a degraded array (40 - 55% improvement). For the other cases performance was roughly equal, +/- a few percentage points. On an x86-smp platform the performance of the async_tx implementation (in synchronous mode) was also within +/- a few percentage points of the original implementation. According to 'top' on iop342, CPU utilization drops from ~50% to ~15% during a 'resync' while the speed according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.
The tiobench command line used for testing was: tiobench --size 2048 --block 4096 --block 131072 --dir /mnt/raid --numruns 5
* iop342 had 1GB of memory available
Details:
* if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making async_tx_find_channel a static inline routine that always returns NULL
* when a callback is specified for a given transaction an interrupt will fire at operation completion time and the callback will occur in a tasklet; if the channel does not support interrupts then a live polling wait will be performed
* the api is written as a dmaengine client that requests all available channels
* in support of dependencies the api implicitly schedules channel-switch interrupts; the interrupt triggers the cleanup tasklet which causes pending operations to be scheduled on the next channel
* xor engines treat an xor destination address differently than a software xor routine: to the software routine the destination address is an implied source, whereas engines treat it as a write-only destination. This patch modifies the xor_blocks routine to take an explicit destination address to mirror the hardware.
Changelog:
* fixed a leftover debug print
* don't allow callbacks in async_interrupt_cond
* fixed xor_block changes
* fixed usage of ASYNC_TX_XOR_DROP_DEST
* drop dma mapping methods, suggested by Chris Leech
* printk warning fixups from Andrew Morton
* don't use inline in C files, Adrian Bunk
* select the API when MD is enabled
* BUG_ON xor source counts <= 1
* implicitly handle hardware concerns like channel switching and interrupts, Neil Brown
* remove the per operation type list, and distribute operation capabilities evenly amongst the available channels
* simplify async_tx_find_channel to optimize the fast path
* introduce the channel_table_initialized flag to prevent early calls to the api
* reorganize the code to mimic crypto
* include mm.h as not all archs include it in dma-mapping.h
* make the Kconfig options non-user visible, Adrian Bunk
* move async_tx under crypto since it is meant as 'core' functionality, and the two may share algorithms in the future
* move large inline functions into c files
* checkpatch.pl fixes
* gpl v2 only correction
Cc: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Acked-by: NeilBrown <neilb@suse.de>
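A hedged sketch of how a client might chain a copy into an xor with a completion callback, using async_memcpy()/async_xor() entry points of this era of the api. The exact signatures and flags are quoted from memory and should be treated as approximations, not as the patch's definitive interface.

```c
#include <linux/async_tx.h>

static void example_callback(void *param)
{
	pr_debug("async chain for %p completed\n", param);
}

/* Copy a page into the destination, then xor the sources into it, with
 * the xor made dependent on the copy via depend_tx. Flag subtleties
 * (e.g. how the destination is treated by the xor) are omitted. */
static void example_chain(struct page *dest, struct page *bio_page,
			  struct page **srcs, int src_cnt, size_t len)
{
	struct dma_async_tx_descriptor *tx;

	/* step 1: asynchronous copy (falls back to memcpy if no channel) */
	tx = async_memcpy(dest, bio_page, 0, 0, len, ASYNC_TX_ACK,
			  NULL, NULL, NULL);

	/* step 2: xor, chained after the copy */
	tx = async_xor(dest, srcs, 0, src_cnt, len, ASYNC_TX_ACK,
		       tx, example_callback, dest);
}
```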
-
Submitted by Dan Williams
The async_tx api tries to use a dma engine for an operation, but will fall back to an optimized software routine otherwise. Xor support is implemented using the raid5 xor routines. For organizational purposes this routine is moved to a common area. The following fixes are also made:
* rename xor_block => xor_blocks, suggested by Adrian Bunk
* ensure that xor.o initializes before md.o in the built-in case
* checkpatch.pl fixes
* mark calibrate_xor_blocks __init, Adrian Bunk
Cc: Adrian Bunk <bunk@stusta.de> Cc: NeilBrown <neilb@suse.de> Cc: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
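A minimal sketch of calling the renamed software routine. The (count, bytes, dest, srcs) ordering reflects the "explicit destination" change described above; treat the exact prototype and header location as assumptions.

```c
#include <linux/raid/xor.h>

/* dest ^= src1 ^ src2, done by the calibrated software xor routine */
static void sw_xor_example(void *dest, void *src1, void *src2,
			   unsigned int bytes)
{
	void *srcs[2] = { src1, src2 };

	xor_blocks(2, bytes, dest, srcs);
}
```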
-
Submitted by Dan Williams
The current implementation assumes that a channel will only be used by one client at a time. In order to enable channel sharing the dmaengine core is changed to a model where clients subscribe to channel-available events. Instead of tracking how many channels a client wants and how many it has received, the core just broadcasts the available channels and lets the clients optionally take a reference. The core learns about the clients' needs at dma_event_callback time. In support of multiple operation types, clients can specify a capability mask to only be notified of channels that satisfy a certain set of capabilities.
Changelog:
* removed DMA_TX_ARRAY_INIT, no longer needed
* dma_client_chan_free -> dma_chan_release: switch to global reference counting only at device unregistration time, before it was also happening at client unregistration time
* clients now return dma_state_client to dmaengine (ack, dup, nak)
* checkpatch.pl fixes
* fixup merge with git-ioat
Cc: Chris Leech <christopher.leech@intel.com> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Acked-by: David S. Miller <davem@davemloft.net>
-
Submitted by Dan Williams
The current dmaengine interface defines multiple routines per operation, i.e. dma_async_memcpy_buf_to_buf, dma_async_memcpy_buf_to_page etc. Adding more operation types (xor, crc, etc) to this model would result in an unmanageable number of method permutations.
"Are we really going to add a set of hooks for each DMA engine whizbang feature?" - Jeff Garzik
The descriptor creation process is refactored using the new common dma_async_tx_descriptor structure. Instead of per-driver do_<operation>_<dest>_to_<src> methods, drivers integrate dma_async_tx_descriptor into their private software descriptor and then define a 'prep' routine per operation. The prep routine allocates a descriptor and ensures that the tx_set_src, tx_set_dest, tx_submit routines are valid. Descriptor creation and submission becomes:

	struct dma_device *dev;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;

	tx = dev->device_prep_dma_<operation>(chan, len, int_flag)
	tx->tx_set_src(dma_addr_t, tx, index /* for multi-source ops */)
	tx->tx_set_dest(dma_addr_t, tx, index)
	tx->tx_submit(tx)

In addition to the refactoring, dma_async_tx_descriptor also lays the groundwork for defining cross-channel-operation dependencies, and a callback facility for asynchronous notification of operation completion.
Changelog:
* drop dma mapping methods, suggested by Chris Leech
* fix ioat_dma_dependency_added, also caught by Andrew Morton
* fix dma_sync_wait, change from Andrew Morton
* uninline large functions, change from Andrew Morton
* add tx->callback = NULL to dmaengine calls to interoperate with async_tx calls
* hookup ioat_tx_submit
* convert channel capabilities to a 'cpumask_t like' bitmap
* removed DMA_TX_ARRAY_INIT, no longer needed
* checkpatch.pl fixes
* make set_src, set_dest, and tx_submit descriptor specific methods
* fixup git-ioat merge
* move group_list and phys to dma_async_tx_descriptor
Cc: Jeff Garzik <jeff@garzik.org> Cc: Chris Leech <christopher.leech@intel.com> Signed-off-by: Shannon Nelson <shannon.nelson@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Acked-by: David S. Miller <davem@davemloft.net>
-
Submitted by David Brownell
This patch removes controller driver infrastructure which supported the now-removed usb_ep_{alloc,free}_buffer() calls. As can be seen, many of the implementations of this were broken to various degrees. Many didn't properly return dma-coherent mappings; those which did so were necessarily ugly because of bogosity in the underlying dma_free_coherent() calls ... which on many platforms can't be called from the same contexts (notably in_irq) from which their dma_alloc_coherent() sibling can be called. The main potential downside of removing this is that gadget drivers wouldn't have specific knowledge that the controller drivers have: endpoints that aren't dma-capable don't need any dma mappings at all. Signed-off-by: David Brownell <dbrownell@users.sourceforge.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Submitted by David Brownell
Remove usb_ep_{alloc,free}_buffer() calls, for small dma-coherent buffers. This patch just removes the interface and its users; later patches will remove controller driver support.
- This interface is invariably not implemented correctly in the controller drivers (e.g. using dma pools, a mechanism which post-dates the interface by several years).
- At this point no gadget driver really *needs* to use it. In current kernels, any driver that needs such a mechanism could allocate a dma pool itself.
Removing this interface is thus a simplification and improvement. Note that the gmidi.c driver had a bug in this area; fixed. Signed-off-by: David Brownell <dbrownell@users.sourceforge.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
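Along the lines suggested above, a hedged sketch of a gadget driver keeping its own dma pool instead of relying on the removed helpers. The pool name, buffer size, and alignment are invented for the example.

```c
#include <linux/device.h>
#include <linux/dmapool.h>
#include <linux/errno.h>

/* Hypothetical replacement for usb_ep_alloc_buffer(): a private pool of
 * small dma-coherent buffers owned by the gadget driver. */
static struct dma_pool *req_pool;

static int example_pool_init(struct device *dev)
{
	req_pool = dma_pool_create("gadget-req", dev,
				   64 /* size */, 8 /* align */, 0);
	return req_pool ? 0 : -ENOMEM;
}

static void *example_buf_alloc(dma_addr_t *dma)
{
	return dma_pool_alloc(req_pool, GFP_ATOMIC, dma);
}

static void example_buf_free(void *buf, dma_addr_t dma)
{
	dma_pool_free(req_pool, buf, dma);
}
```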
-
Submitted by Craig W. Nadler
USB_IAD: Adds support for USB Interface Association Descriptors. This patch adds support to the USB host stack for parsing, storing, and displaying Interface Association Descriptors. In /proc/bus/usb/devices, lines starting with A: show the fields in an IAD. In sysfs, if an interface on a USB device is referenced by an IAD, the following files will be added to the sysfs directory for that interface: iad_bFirstInterface, iad_bInterfaceCount, iad_bFunctionClass, iad_bFunctionSubClass, and iad_bFunctionProtocol. Signed-off-by: Craig W. Nadler <craig@nadler.us> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Submitted by Marcel Holtmann
USB: Add URB_FREE_BUFFER flag for freeing the transfer buffer. In some cases the driver does not need to keep track of the transfer buffer of an URB; it can simply be freed along with the URB itself when the reference count goes down to zero. The new flag URB_FREE_BUFFER enables this behavior. Signed-off-by: Marcel Holtmann <marcel@holtmann.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
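A hedged sketch of the fire-and-forget pattern the flag enables: the USB core frees the buffer when the last reference to the urb is dropped, so the driver does not track it. The endpoint number is a placeholder and the error handling is simplified.

```c
#include <linux/usb.h>
#include <linux/slab.h>

static void send_and_forget_complete(struct urb *urb)
{
	/* nothing to do: URB_FREE_BUFFER frees the buffer with the urb */
}

static int send_and_forget(struct usb_device *udev, const void *data, int len)
{
	struct urb *urb;
	void *buf;

	urb = usb_alloc_urb(0, GFP_KERNEL);
	if (!urb)
		return -ENOMEM;

	buf = kmemdup(data, len, GFP_KERNEL);
	if (!buf) {
		usb_free_urb(urb);
		return -ENOMEM;
	}

	usb_fill_bulk_urb(urb, udev,
			  usb_sndbulkpipe(udev, 2 /* placeholder endpoint */),
			  buf, len, send_and_forget_complete, NULL);
	urb->transfer_flags |= URB_FREE_BUFFER;	/* core owns buf from now on */

	if (usb_submit_urb(urb, GFP_KERNEL)) {
		usb_free_urb(urb);	/* also frees buf via URB_FREE_BUFFER */
		return -EIO;
	}
	usb_free_urb(urb);	/* drop our ref; the core holds one while in flight */
	return 0;
}
```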
-
Submitted by Alan Stern
This patch (as920) adds an extra level of protection to the USB-Persist facility. Now it will apply by default only to hubs; for all other devices the user must enable it explicitly by setting the power/persist device attribute. The disconnect_all_children() routine in hub.c has been removed and its code placed inline. This is the way it was originally as part of hub_pre_reset(); the revised usage in hub_reset_resume() is sufficiently different that the code can no longer be shared. Likewise, mark_children_for_reset() is now inline as part of hub_reset_resume(). The end result looks much cleaner than before. The sysfs interface is updated to add the new attribute file, and there are corresponding documentation updates. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Submitted by Alan Stern
This patch (as918) introduces a new USB driver method: reset_resume. It is called when a device needs to be reset as part of a resume procedure (whether because of a device quirk or because of the USB-Persist facility), thereby taking over a role formerly assigned to the post_reset method. As a consequence, post_reset no longer needs an argument indicating whether it is being called as part of a reset-resume. This separation of functions makes the code clearer. In addition, the pre_reset and post_reset method return types are changed; they now must return an error code. The return value is unused at present, but at some later time we may unbind drivers and re-probe if they encounter an error during reset handling. The existing pre_reset and post_reset methods in the usbhid, usb-storage, and hub drivers are updated to match the new requirements. For usbhid the post_reset routine is also used for reset_resume (duplicate method pointers); for the other drivers a new reset_resume routine is added. The change to hub.c looks bigger than it really is, because mark_children_for_reset_resume() gets moved down next to the new hub_reset_resume() routine. A minor change to usb-storage makes the usb_stor_report_bus_reset() routine acquire the host lock instead of requiring the caller to hold it already. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jiri Kosina <jkosina@suse.cz> CC: Matthew Dharm <mdharm-usb@one-eyed-alien.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
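A sketch of how an interface driver might wire up the callbacks after this change. The int return type and usb_interface argument follow the description above; the hookup itself is illustrative, not from the patch.

```c
#include <linux/usb.h>

static int example_pre_reset(struct usb_interface *intf)
{
	/* quiesce I/O before the port reset */
	return 0;
}

static int example_post_reset(struct usb_interface *intf)
{
	/* restore device state after an ordinary reset */
	return 0;
}

static int example_reset_resume(struct usb_interface *intf)
{
	/* restore state after a reset that replaced a normal resume */
	return example_post_reset(intf);
}

static struct usb_driver example_driver = {
	.name		= "example",
	.pre_reset	= example_pre_reset,
	.post_reset	= example_post_reset,
	.reset_resume	= example_reset_resume,
	/* .probe, .disconnect, .id_table omitted in this sketch */
};
```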
-
Submitted by Oliver Neukum
Introduction of usb_anchor and its methods. Signed-off-by: Oliver Neukum <oneukum@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
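A hedged sketch of the anchor idea: URBs in flight are tied to an anchor so they can be cancelled as a group on disconnect or suspend. The helper names used here (init_usb_anchor, usb_anchor_urb, usb_unanchor_urb, usb_kill_anchored_urbs) are the ones this infrastructure is commonly known by and are assumptions relative to this commit's text.

```c
#include <linux/usb.h>

struct example_dev {
	struct usb_anchor submitted;
};

static void example_init(struct example_dev *dev)
{
	init_usb_anchor(&dev->submitted);
}

static int example_submit(struct example_dev *dev, struct urb *urb)
{
	int ret;

	usb_anchor_urb(urb, &dev->submitted);
	ret = usb_submit_urb(urb, GFP_KERNEL);
	if (ret)
		usb_unanchor_urb(urb);
	return ret;
}

static void example_disconnect(struct example_dev *dev)
{
	/* cancel everything still tied to the anchor */
	usb_kill_anchored_urbs(&dev->submitted);
}
```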
-
Submitted by David Brownell
Make sure the gadgetfs userspace interface is properly exported:
- Move <linux/usb_gadgetfs.h> to <linux/usb/gadgetfs.h>;
- Export it using Kbuild;
- Add an #include guard;
- Correct some internal documentation;
- Update the struct layout so it's the same on 32/64 bit kernels.
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Submitted by Daniel Drake
Recently, the USB device matching code stopped matching generic interface matches against devices with vendor-specific device class values. Some drivers now need to explicitly match USB device IDs (in addition to generic interface info) to retain the same behaviour as before. This new macro, suggested by Alan Stern, makes the explicit device/interface matching a little simpler for those users. Signed-off-by: Daniel Drake <dsd@gentoo.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
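The commit text does not name the macro; assuming it is the USB_DEVICE_AND_INTERFACE_INFO() helper, an id table entry might look like the sketch below. The vendor, product, and class values are placeholders.

```c
#include <linux/module.h>
#include <linux/usb.h>

static const struct usb_device_id example_id_table[] = {
	{ USB_DEVICE_AND_INTERFACE_INFO(0x1234, 0x5678,	/* vendor, product */
					USB_CLASS_VENDOR_SPEC,	/* bInterfaceClass */
					0x01, 0x01) },		/* subclass, protocol */
	{ }	/* terminating entry */
};
MODULE_DEVICE_TABLE(usb, example_id_table);
```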
-
Submitted by Alan Stern
This patch (as888) adds a new USB device quirk for devices which are unable to resume correctly. By using the new code added for the USB-persist facility, it is a simple matter to reset these devices instead of resuming them. To get things kicked off, a quirk entry is added for the Philips PSC805. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Submitted by Alan Stern
This patch (as886) adds the controversial USB-persist facility, allowing USB devices to persist across a power loss during system suspend. The facility is controlled by a new Kconfig option (with appropriate warnings about the potential dangers); when the option is off the behavior will remain the same as it is now. But when the option is on, people will be able to use suspend-to-disk and keep their USB filesystems intact -- something particularly valuable for small machines where the root filesystem is on a USB device! Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Submitted by Oliver Neukum
This implements generic support for suspend/resume for usb serial. Signed-off-by: Oliver Neukum <oneukum@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Submitted by Jack Morgenstein
Signed-off-by: Dotan Barak <dotanb@mellanox.co.il> Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Submitted by Jack Morgenstein
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Roland Dreier <rolandd@cisco.com>
-
Submitted by H. Peter Anvin
This removes the old i386 setup code. This is done as a separate patch to avoid breaking git bisect, as some of the i386 code was also used by the old x86-64 code. Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by H. Peter Anvin
struct screen_info has unaligned members, so it needs to be packed. In the process, fix the naming of some of the members which don't belong in this structure but are part of it anyway. Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 12 July 2007, 11 commits
-
-
Submitted by Jean Delvare
This is a new I2C bus driver for the TAOS evaluation modules. Developed and tested on the TAOS TSL2550 EVM. Signed-off-by: Jean Delvare <khali@linux-fr.org>
-
Submitted by Henry Su
Add the SMBus device ID for the ATI SB700. Signed-off-by: Henry Su <Henry.su@amd.com> Signed-off-by: Jean Delvare <khali@linux-fr.org>
-
Submitted by Jean Delvare
Let the drivers specify how many bytes they want to read with i2c_smbus_read_i2c_block_data(). So far, the block count was hard-coded to I2C_SMBUS_BLOCK_MAX (32), which did not make much sense. Many driver authors complained about this before, and I believe it's about time to fix it. Right now, authors have to do technically stupid things, such as individual byte reads or full-fledged I2C messaging, to work around the problem. We do not want to encourage that. I even found that some bus drivers (e.g. i2c-amd8111) already implemented I2C block read the "right" way, that is, they didn't follow the old, broken standard. The fact that it was never noticed before just shows how little i2c_smbus_read_i2c_block_data() was used, which isn't that surprising given how broken its prototype was so far.
There are some obvious compatibility considerations:
* This changes the i2c_smbus_read_i2c_block_data() prototype. Users outside the kernel tree will notice at compilation time, and will have to update their code.
* User-space has access to i2c_smbus_xfer() directly using i2c-dev, so the changed expectations would affect tools such as i2cdump. In order to preserve binary compatibility, we give I2C_SMBUS_I2C_BLOCK_DATA a new numeric value, and define I2C_SMBUS_I2C_BLOCK_BROKEN with the old numeric value. When i2c-dev receives a transaction with the old value, it can convert it to the new format on the fly.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
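A hedged sketch of a client driver using the new prototype, with the driver asking for exactly the number of bytes it needs instead of always getting I2C_SMBUS_BLOCK_MAX. The (client, command, length, values) ordering is inferred from the description, and the register address is a placeholder.

```c
#include <linux/i2c.h>

static int example_read_block(struct i2c_client *client)
{
	u8 values[8];
	s32 count;

	count = i2c_smbus_read_i2c_block_data(client,
					      0x10 /* placeholder register */,
					      sizeof(values), values);
	if (count < 0)
		return count;	/* bus error */

	/* count is the number of bytes actually read */
	return count;
}
```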
-
Submitted by Mark M. Hoffman
Kill a sparse warning by un-nesting two container_of() calls. Signed-off-by: Mark M. Hoffman <mhoffman@lightlink.com> Signed-off-by: Jean Delvare <khali@linux-fr.org>
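For illustration only (the structures here are invented), this is the kind of transformation the one-liner describes:

```c
#include <linux/kernel.h>

struct inner { int x; };
struct middle { struct inner in; };
struct outer { struct middle mid; };

static struct outer *outer_from_inner(struct inner *p)
{
	/* nested form that can trip up sparse:
	 *   return container_of(container_of(p, struct middle, in),
	 *                       struct outer, mid);
	 * un-nested equivalent: */
	struct middle *m = container_of(p, struct middle, in);

	return container_of(m, struct outer, mid);
}
```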
-
Submitted by David Brownell
Generate I2C kerneldoc; fix various glitches and add "context" sections to that documentation. Most I2C and SMBus functions still have no kerneldoc. Let me suggest providing kerneldoc for all the i2c_smbus_*() functions as a small and mostly self-contained project for anyone so inclined. :) Signed-off-by: David Brownell <dbrownell@users.sourceforge.net> Signed-off-by: Jean Delvare <khali@linux-fr.org>
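A hedged example of the kerneldoc shape being asked for, including a "Context:" section. The function chosen and the wording are illustrative, not quoted from the patch.

```c
/**
 * i2c_smbus_read_byte_data - read one byte from a designated register
 * @client: handle to the slave device
 * @command: register address to read from
 *
 * Context: can sleep; must not be called from interrupt context.
 *
 * Returns the byte read on success, or a negative errno on failure.
 */
```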
-
Submitted by Eric Paris
Add a new security check on mmap operations to see if the user is attempting to mmap to a low area of the address space. The amount of space protected is indicated by the new proc tunable /proc/sys/vm/mmap_min_addr and defaults to 0, preserving existing behavior. This patch uses a new SELinux security class "memprotect". Policy already contains a number of allow rules like a_t self:process * (unconfined_t being one of them) which mean that putting this check in the process class (its best current fit) would make it useless, as all user processes, which we also want to protect against, would be allowed. By taking the memprotect name of the new class it will also make it possible for us to move some of the other memory protect permissions out of 'process' and into the new class next time we bump the policy version number (which I also think is a good future idea). Acked-by: Stephen Smalley <sds@tycho.nsa.gov> Acked-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Eric Paris <eparis@redhat.com> Signed-off-by: James Morris <jmorris@namei.org>
-
Submitted by Patrick McHardy
Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Patrick McHardy
The VLAN MAC address handling is broken in multiple ways. When the address differs when setting it, the real device is put in promiscuous mode twice, but never taken out again. Additionally it doesn't resync when the real device's address is changed, and it needlessly puts the real device in promiscuous mode when the vlan device is still down. Fix by moving address handling to vlan_dev_open/vlan_dev_stop and properly dealing with address changes in the device notifier. Also switch to dev_unicast_add (which needs the exact same handling). Since the set_mac_address handler is identical to the generic ethernet one with these changes, kill it and use ether_setup(). Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Olaf Kirch
Keep netpoll/poll_napi from messing with the poll_list. Only net_rx_action is allowed to manipulate the list. Signed-off-by: Olaf Kirch <olaf.kirch@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Zhang Rui
Well, first of all, I don't want to change so many files either.
What I do: add a new parameter "struct bin_attribute *" to the .read/.write methods for the sysfs binary attributes. In fact, only the four lines changed in fs/sysfs/bin.c and include/linux/sysfs.h do the real work. But I have to update all the files that use binary attributes to make them compatible with the new .read and .write methods. I'm not sure if I missed any. :(
Why I do this: for a sysfs attribute, we can get a pointer to the struct attribute in the .show/.store method, while we can't do this for the binary attributes. I don't know why this is different, but it does make the binary attributes less handy to use than the regular ones. So I think this patch is reasonable. :)
Who benefits from it: the patch that exposes ACPI tables in sysfs requires such an improvement. All the table binary attributes share the same .read method. The "struct bin_attribute *" parameter is used to get the table signature and instance number, which are used to distinguish different ACPI table binary attributes. Without this parameter, we would need to offer different .read methods for different ACPI table binary attributes. This is impossible, as there are various ACPI tables on different platforms and we don't know what they are until they are loaded.
Signed-off-by: Zhang Rui <rui.zhang@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
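A sketch of a binary-attribute read method after this change: the bin_attribute pointer identifies which attribute is being read, so one handler can serve many attributes. The parameter ordering is inferred from the description and should be treated as an assumption; the payload is a placeholder.

```c
#include <linux/kernel.h>
#include <linux/kobject.h>
#include <linux/string.h>
#include <linux/sysfs.h>

static ssize_t example_bin_read(struct kobject *kobj,
				struct bin_attribute *attr,
				char *buf, loff_t off, size_t count)
{
	/* attr->attr.name (or attr->private) tells us which table this is */
	pr_debug("read %zu bytes at %lld from %s\n",
		 count, (long long)off, attr->attr.name);
	memset(buf, 0, count);	/* placeholder payload */
	return count;
}

static struct bin_attribute example_attr = {
	.attr	= { .name = "example", .mode = 0444 },
	.size	= 4096,
	.read	= example_bin_read,
};
```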
-
Submitted by Tejun Heo
This patch makes dentries and inodes for sysfs directories reclaimable.
* sysfs_notify() is modified to walk the sysfs_dirent tree instead of the dentry tree.
* sysfs_update_file() and sysfs_chmod_file() use sysfs_get_dentry() to grab the victim dentry.
* sysfs_rename_dir() and sysfs_move_dir() grab all dentries using sysfs_get_dentry() on startup.
* Dentries for all shadowed directories are pinned in memory to serve as lookup start point.
Signed-off-by: Tejun Heo <htejun@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-