- 13 Jun 2013, 3 commits
-
-
Committed by Paul Gortmaker
The bit_spinlock functions are only used by the jbd_lock_bh_state functions (and friends) in jbd_common.h and are not directly used by the jbd.h or jbd2.h content itself. The jbd_common file is new as of commit 44606672 ("jdb/jbd2: factor out common functions from the jbd[2] header files"), but common (and isolated) headers were not considered for factoring at that time. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
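For context, the helpers in question look roughly like this (a sketch; BH_State comes from the jbd/jbd2 state-bit enum, not from buffer_head.h), which is why <linux/bit_spinlock.h> belongs in jbd_common.h rather than in jbd.h or jbd2.h:

```c
#include <linux/bit_spinlock.h>
#include <linux/buffer_head.h>

/* Take/release the per-buffer BH_State bit lock used by jbd/jbd2. */
static inline void jbd_lock_bh_state(struct buffer_head *bh)
{
	bit_spin_lock(BH_State, &bh->b_state);
}

static inline void jbd_unlock_bh_state(struct buffer_head *bh)
{
	bit_spin_unlock(BH_State, &bh->b_state);
}
```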
-
Committed by Dmitry Monakhov
An inode's data or non-journaled quota may be written without the journal, so we _must_ send a barrier at the end of ext4_sync_fs. It can be skipped, though, if the journal commit will do it for us. Also fix data integrity for nojournal mode. Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
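A simplified sketch of the resulting decision in ext4_sync_fs() (error handling, locking, and some details omitted; treat this as an illustration of the described logic, not the exact patch):

```c
static int ext4_sync_fs_sketch(struct super_block *sb, int wait)
{
	struct ext4_sb_info *sbi = EXT4_SB(sb);
	bool needs_barrier = false;
	tid_t target;

	if (sbi->s_journal) {
		target = jbd2_get_latest_transaction(sbi->s_journal);
		/* only flush ourselves if the journal commit won't do it */
		if (wait && (sbi->s_journal->j_flags & JBD2_BARRIER) &&
		    !jbd2_trans_will_send_data_barrier(sbi->s_journal, target))
			needs_barrier = true;
		if (jbd2_log_start_commit(sbi->s_journal, target) && wait)
			jbd2_log_wait_commit(sbi->s_journal, target);
	} else if (wait && test_opt(sb, BARRIER)) {
		/* nojournal mode: data went straight to disk, flush it */
		needs_barrier = true;
	}

	if (needs_barrier)
		return blkdev_issue_flush(sb->s_bdev, GFP_KERNEL, NULL);
	return 0;
}
```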
-
Committed by Dmitry Monakhov
The current implementation of jbd2_journal_force_commit() is suboptimal because it results in empty and useless commits, while callers just want to force and wait for any unfinished commits. We already have jbd2_journal_force_commit_nested(), which does exactly what we want, except that here we are guaranteed that we do not hold the journal transaction open. Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
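What callers actually want is the classic "kick the running transaction and wait for it" idiom, roughly as sketched below (the real code also handles the case where no transaction exists at all):

```c
static void force_and_wait_sketch(journal_t *journal)
{
	tid_t tid;

	/* kick any running/pending transaction ... */
	if (jbd2_journal_start_commit(journal, &tid))
		/* ... and wait until it has reached stable storage */
		jbd2_log_wait_commit(journal, tid);
}
```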
-
- 07 Jun 2013, 1 commit
-
-
Committed by Theodore Ts'o
Rename ext4_da_writepages() to ext4_writepages() and use it for all modes. We still need to iterate over all the pages in the case of data=journalling, but in the case of nodelalloc/data=ordered (which is what file systems mounted using ext3 backwards compatibility will use) this will allow us to use a much more efficient I/O submission path. Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
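Illustrative wiring only (not the full ext4 aops tables): after the rename, every mode points its .writepages entry at the same function.

```c
static const struct address_space_operations ext4_aops_sketch = {
	.writepage	= ext4_writepage,
	.writepages	= ext4_writepages,	/* formerly ext4_da_writepages() */
	.invalidatepage	= ext4_invalidatepage,
};
```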
-
- 05 Jun 2013, 8 commits
-
-
Committed by Jan Kara
There are two issues with the current writeback path in ext4. For one, we don't necessarily map complete pages when blocksize < pagesize and thus may not do any writeback in one iteration. We always map some blocks, though, so we will eventually finish mapping the page; but if writeback races with other operations on the file, forward progress is not really guaranteed. The second problem is that the current code structure makes it hard to associate all the bios for some range of pages with one io_end structure so that unwritten extents can be converted after all the bios are finished. This will be especially difficult later, when io_end will be associated with a reserved transaction handle. We restructure the writeback path into a relatively simple loop which first prepares an extent of pages, then maps one or more extents so that no page is partially mapped, and once a page is fully mapped it is submitted for IO. We keep all the mapping and IO submission information in the mpage_da_data structure to somewhat reduce stack usage. The resulting code is somewhat shorter than the old one and hopefully also easier to read. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
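The shape of the restructured loop, as described above, is roughly the following; the helper names here are hypothetical stand-ins, not the exact functions added by the patch:

```c
static int writepages_loop_sketch(handle_t *handle, struct mpage_da_data *mpd)
{
	bool done = false;
	int err = 0;

	while (!done && !err) {
		/* 1. collect the next run of dirty pages to write back */
		err = sketch_prepare_extent_to_map(mpd, &done);
		if (err || done)
			break;
		/* 2. map blocks until no page in the run is partially mapped,
		 *    submitting each fully mapped page for IO; the bios share
		 *    one io_end so unwritten extents convert after IO ends  */
		err = sketch_map_and_submit_extent(handle, mpd);
	}
	return err;
}
```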
-
Committed by Jan Kara
Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Jan Kara
In some cases we cannot start a transaction because of locking constraints, and passing a started transaction into those places is not handy either, because we could block the transaction commit for too long. Transaction reservation is designed to solve these issues. It reserves a handle with a given number of credits in the journal, and the handle can later be attached to the running transaction without blocking on commit or checkpointing. Reserved handles do not block transaction commit in any way; they only reduce the maximum size of the running transaction (because we always have to be prepared to accommodate a request for attaching a reserved handle). Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
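A conceptual usage sketch of the reservation idea; the helper names below are illustrative placeholders, not the exact jbd2 API added by this series:

```c
static void reserved_handle_sketch(journal_t *journal)
{
	handle_t *handle;

	/* reserve credits up front, without joining the running transaction;
	 * this only lowers the maximum size of future transactions */
	handle = sketch_journal_reserve(journal, 8 /* credits */);

	/* later, from a context that must not block on commit/checkpoint: */
	sketch_journal_start_reserved(handle);	/* attach to the running tx */
	/* ... dirty metadata under 'handle' ... */
	sketch_journal_stop(handle);
}
```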
-
Committed by Jan Kara
j_wait_logspace and j_wait_checkpoint are unused. Remove them. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Jan Kara
__jbd2_log_space_left() and jbd_space_needed() were kind of odd. jbd_space_needed() also accounted for the credits needed by the currently committing transaction, while it didn't account for the credits needed for control blocks. __jbd2_log_space_left() then accounted for control blocks as a fraction of free space. Since the results of these two functions are only ever compared against each other, this works correctly but is somewhat strange. Move the estimates so that jbd_space_needed() returns the number of blocks needed for a transaction including control blocks, and __jbd2_log_space_left() returns the free space in the journal (with the committing transaction already subtracted). Rename the functions to jbd2_log_space_left() and jbd2_space_needed() while we are changing them. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
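After this change the two estimates compare directly, roughly as in the sketch below (the real check sits in the transaction-start path under j_state_lock, with retry logic omitted here):

```c
static void ensure_log_space_sketch(journal_t *journal)
{
	/* not enough room in the journal: checkpoint to make room */
	if (jbd2_log_space_left(journal) < jbd2_space_needed(journal))
		__jbd2_log_wait_for_space(journal);
}
```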
-
Committed by Jan Kara
Currently, when we add a buffer to a transaction, we wait until the buffer is removed from the BJ_Shadow list (so that we prevent any changes to the buffer that is just being written to the journal). This can take unnecessarily long, as a lot happens between the time the buffer is submitted to the journal and the time when we remove the buffer from the BJ_Shadow list (e.g. we wait for all data buffers in the transaction, we issue a cache flush, etc.). This also creates a dependency of do_get_write_access() on transaction commit (namely waiting for data IO to complete) which we want to avoid when implementing transaction reservation. So we modify the commit code to set a new BH_Shadow flag when the temporary shadow buffer is created and to clear that flag once IO on that buffer is complete. This allows do_get_write_access() to wait only for the BH_Shadow bit and thus removes the dependency on data IO completion. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
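A sketch of the waiting side in do_get_write_access() after the change; this assumes buffer_shadow()/BH_Shadow helpers as added by the patch, and the sleeper callback name is an illustrative placeholder:

```c
/* simple bit-wait sleeper (placeholder name) */
static int sketch_sleep_on_shadow(void *word)
{
	io_schedule();
	return 0;
}

static void wait_for_shadow_sketch(struct buffer_head *bh)
{
	/* wait only for the shadow flag to clear, not for the whole
	 * data-IO phase of the transaction commit */
	if (buffer_shadow(bh))
		wait_on_bit(&bh->b_state, BH_Shadow, sketch_sleep_on_shadow,
			    TASK_UNINTERRUPTIBLE);
}
```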
-
Committed by Jan Kara
As with metadata buffers, log descriptor buffers don't really need the journal head either. So strip it and remove the BJ_LogCtl list. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
-
Committed by Jan Kara
When writing metadata to the journal, we create temporary buffer heads for that task. We also attach journal heads to these buffer heads, but the only purpose of the journal heads is to keep the buffers linked in the transaction's BJ_IO list. We remove the need for journal heads by reusing the buffer_head's b_assoc_buffers list for that purpose. Also, since the BJ_IO list is just a temporary list for transaction commit, we use a private list in jbd2_journal_commit_transaction() for that, thus removing the BJ_IO list from the transaction completely. Reviewed-by: Zheng Liu <wenqing.lz@taobao.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
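A sketch of the idea: a commit-local list, linked through b_assoc_buffers, replaces the per-transaction BJ_IO list (simplified; submission and error handling omitted):

```c
static void commit_io_bufs_sketch(struct buffer_head *wbuf)
{
	LIST_HEAD(io_bufs);

	/* when a temporary journal buffer is created for the commit: */
	get_bh(wbuf);
	list_add_tail(&wbuf->b_assoc_buffers, &io_bufs);

	/* ... submit the buffers ...; then wait for and release them: */
	while (!list_empty(&io_bufs)) {
		struct buffer_head *bh = list_first_entry(&io_bufs,
					struct buffer_head, b_assoc_buffers);

		wait_on_buffer(bh);
		list_del_init(&bh->b_assoc_buffers);
		put_bh(bh);
	}
}
```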
-
- 28 May 2013, 2 commits
-
-
Committed by Lukas Czerner
Currently punch hole is disabled on file systems with the bigalloc feature enabled. However, the recent changes in the punch hole patch should make it easier to support punching holes on bigalloc enabled file systems. This commit changes partial_cluster handling in ext4_remove_blocks(), ext4_ext_rm_leaf() and ext4_ext_remove_space(). Currently partial_cluster is of unsigned long long type and it makes sure that we will free the partial cluster if all extents have been released from that cluster. However, it has been specifically designed only for truncate. With punch hole we can be freeing just some extents in the cluster, leaving the rest untouched. So we have to make sure that we will notice a cluster which still has some extents. To do this I've changed partial_cluster to be of signed long long type. The only scenario where this could be a problem is when cluster_size == block size; however, in that case there would not be any partial clusters, so we're safe. For bigger clusters the signed type is enough. Now we use a negative value in partial_cluster to mark such a cluster used, hence we know that we must not free it even if all other extents have been freed from that cluster. This scenario can be described in a simple diagram:

|FFF...FF..FF.UUU|
   ^----------^
    punch hole

. - free space
| - cluster boundary
F - freed extent
U - used extent

Also update the respective tracepoints to use the signed long long type for partial_cluster. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
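A minimal sketch of the sign convention described above (the surrounding extent-removal logic is omitted and the free helper is a hypothetical placeholder; EXT4_B2C() converts a block number to its cluster number):

```c
static void partial_cluster_sketch(handle_t *handle, struct inode *inode,
				   struct ext4_sb_info *sbi, ext4_fsblk_t pblk)
{
	long long partial_cluster = 0;

	/* this cluster was partially freed but still carries used extents:
	 * remember it with a negative sign so we never free it by mistake */
	partial_cluster = -(long long)EXT4_B2C(sbi, pblk);

	/* only a positive value marks a cluster whose extents are all freed */
	if (partial_cluster > 0)
		sketch_free_cluster(handle, inode, partial_cluster);
}
```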
-
Committed by Lukas Czerner
Add an "end" variable. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Reviewed-by: Jan Kara <jack@suse.cz> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
-
- 22 May 2013, 4 commits
-
-
Committed by Lukas Czerner
The ->invalidatepage() aop now accepts a range to invalidate, so we can make use of it in journal_invalidatepage() and all of its users in the ext3 file system. Also update the ext3 trace point to print out the length argument. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Reviewed-by: Jan Kara <jack@suse.cz>
-
Committed by Lukas Czerner
The ->invalidatepage() aop now accepts a range to invalidate, so we can make use of it in all the ext4 invalidatepage routines. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Reviewed-by: Jan Kara <jack@suse.cz>
-
Committed by Lukas Czerner
invalidatepage now accepts a range to invalidate, and there are two file systems using jbd2 that also implement the punch hole feature and can benefit from this. We need to implement the same thing at the jbd2 layer in order to let those file systems take advantage of this functionality. This commit adds a length argument to jbd2_journal_invalidatepage() and updates all instances in ext4 and ocfs2. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Reviewed-by: Jan Kara <jack@suse.cz>
-
Committed by Lukas Czerner
Currently there is no way to truncate a partial page where the end truncate point is not at the end of the page. This is because it was not needed and the existing functionality was enough for the file system truncate operation to work properly. However, more file systems now support the punch hole feature, and they can benefit from mm supporting truncation of a page just up to a certain point. Specifically, with this functionality truncate_inode_pages_range() can be changed so it supports truncating a partial page at the end of the range (currently it will BUG_ON() if 'end' is not at the end of the page). This commit changes the invalidatepage() address space operation prototype to accept the range to be invalidated and updates all the instances of it. We also change block_invalidatepage() in the same way and actually make use of the new length argument, implementing range invalidation. Actual file system implementations will follow, except for the file systems where the changes are really simple and should not change the behaviour in any way. An implementation for truncate_page_range(), which will be able to accept page-unaligned ranges, will follow as well. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Hugh Dickins <hughd@google.com>
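For reference, this is the shape of the interface after the change (declaration excerpt only; the length of the range is passed alongside the starting offset):

```c
/* address_space_operations member after this change */
void (*invalidatepage)(struct page *page,
		       unsigned int offset, unsigned int length);

/* generic helper, now implementing real range invalidation */
void block_invalidatepage(struct page *page,
			  unsigned int offset, unsigned int length);
```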
-
- 15 May 2013, 3 commits
-
-
Committed by Joern Engel
It is possible for one thread to take se_sess->sess_cmd_lock in core_tmr_abort_task() before taking a reference count on se_cmd->cmd_kref, while another thread in target_put_sess_cmd() drops se_cmd->cmd_kref before taking se_sess->sess_cmd_lock. This introduces kref_put_spinlock_irqsave() and uses it in target_put_sess_cmd() to close the race window. Signed-off-by: Joern Engel <joern@logfs.org> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: <stable@vger.kernel.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
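A usage sketch of the new helper in target_put_sess_cmd(): the reference is dropped and, only when it reaches zero, the release callback runs with the lock held and interrupts disabled (the release function name here is a placeholder, not the exact target-core symbol):

```c
static void target_put_sess_cmd_sketch(struct se_session *se_sess,
				       struct se_cmd *se_cmd)
{
	/* drop the ref; on final put, release runs under sess_cmd_lock */
	kref_put_spinlock_irqsave(&se_cmd->cmd_kref, sketch_release_cmd,
				  &se_sess->sess_cmd_lock);
}
```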
-
Committed by Joern Engel
Signed-off-by: Joern Engel <joern@logfs.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Committed by John Stultz
Kay Sievers noted that the ALWAYS_USE_PERSISTENT_CLOCK config, which enables some minor compile-time optimization to avoid unnecessary code in mostly the suspend/resume path, could cause problems for userland. In particular, the dependency of RTC_HCTOSYS on !ALWAYS_USE_PERSISTENT_CLOCK, which avoids setting the time twice and simplifies suspend/resume, has the side effect of causing the /sys/class/rtc/rtcN/hctosys flag to always be zero, and this flag is commonly used by udev to set up the /dev/rtc symlink to /dev/rtcN, which can cause pain for older applications. While the udev rules could use some work to be less fragile, breaking userland should strongly be avoided. Additionally, the compile-time optimizations are fairly minor, and the code being optimized is likely to be reworked in the future, so let's revert this change. Reported-by: Kay Sievers <kay@vrfy.org> Signed-off-by: John Stultz <john.stultz@linaro.org> Cc: stable <stable@vger.kernel.org> #3.9 Cc: Feng Tang <feng.tang@intel.com> Cc: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Link: http://lkml.kernel.org/r/1366828376-18124-1-git-send-email-john.stultz@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 14 May 2013, 1 commit
-
-
Committed by Laurent Pinchart
Drive strength controls both sink and source currents; clarify the description accordingly. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
- 13 May 2013, 2 commits
-
-
Committed by Jan Kara
Commit ae4647fb (jbd2: reduce journal_head size) introduced a regression where we occasionally hit a panic in jbd2_journal_put_journal_head() because of a wrong b_jcount. The bug is caused by gcc making a 64-bit access to a 32-bit bitfield and thus clobbering b_jcount. At least for now, those 8 bytes saved in struct journal_head are not worth the trouble with gcc bitfield handling, so revert that part of the patch. Reported-by: EUNBONG SONG <eunb.song@samsung.com> Reported-by: Tony Luck <tony.luck@gmail.com> Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
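A generic illustration of the hazard (not the jbd2 struct itself): when a counter is packed as a bitfield, the compiler may read-modify-write the whole underlying word to update a neighbouring field, silently clobbering a concurrent update of the counter.

```c
struct packed_counters_example {
	unsigned jcount:16;	/* refcount, updated under lock A           */
	unsigned jlist:16;	/* list id, updated under a different lock B */
	/* updates to either field may touch the shared storage word */
};
```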
-
Committed by Dave Airlie
We don't use these anymore, so nuke them. Signed-off-by: Dave Airlie <airlied@redhat.com>
-
- 12 May 2013, 4 commits
-
-
Committed by Jan-Simon Möller
Fixes a warning during compilation with clang. [rjw: Subject and changelog] Signed-off-by: Jan-Simon Möller <dl9pf@gmx.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
Committed by Rafael J. Wysocki
The system suspend routine of the ACPI processor driver saves the BUS_MASTER_RLD register and its resume routine restores it. However, there can be only one such register in the system, and it really should be saved after non-boot CPUs have been offlined and restored before they are put back online during resume. For this reason, move the saving and restoration of BUS_MASTER_RLD to syscore suspend and syscore resume, respectively, and drop the no longer necessary suspend/resume callbacks from the ACPI processor driver. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
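A sketch of the general pattern being moved to: syscore ops run on one CPU with interrupts disabled, after non-boot CPUs have gone offline (callback names follow the driver but the bodies here are only placeholders):

```c
static int acpi_processor_suspend(void)
{
	/* save BUS_MASTER_RLD here */
	return 0;
}

static void acpi_processor_resume(void)
{
	/* restore BUS_MASTER_RLD here */
}

static struct syscore_ops acpi_processor_syscore_ops = {
	.suspend = acpi_processor_suspend,
	.resume  = acpi_processor_resume,
};

static int __init acpi_processor_syscore_init(void)
{
	register_syscore_ops(&acpi_processor_syscore_ops);
	return 0;
}
```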
-
Committed by Eric Dumazet
We have seen multiple NULL dereferences in __inet6_lookup_established(). After analysis, I found that inet6_sk() could be NULL while the check for sk_family == AF_INET6 was true. The bug was added in linux-2.6.29 when RCU lookups were introduced in the UDP and TCP stacks. Once an IPv6 socket using SLAB_DESTROY_BY_RCU is inserted in a hash table, we can no longer clear the pinet6 field. This patch extends the logic used in commit fcbdf09d ("net: fix nulls list corruptions in sk_prot_alloc"): the TCP/UDP/UDPLite IPv6 protocols provide their own .clear_sk() method to make sure we do not clear the pinet6 field. At socket clone time we do not really care, as cloning the parent's (non-NULL) pinet6 does not add a fatal race. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
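The rough shape of such a per-protocol .clear_sk() (a sketch based on the description above, not necessarily the literal patch): wipe the socket for reuse but leave pinet6 alone, since concurrent RCU lookups may still dereference it.

```c
static void tcp_v6_clear_sk_sketch(struct sock *sk, int size)
{
	struct inet_sock *inet = inet_sk(sk);

	/* clear everything up to pinet6 (preserving the nulls node) ... */
	sk_prot_clear_nulls(sk, offsetof(struct inet_sock, pinet6));

	/* ... and everything after it, leaving pinet6 itself intact */
	size -= offsetof(struct inet_sock, pinet6) + sizeof(inet->pinet6);
	memset(&inet->pinet6 + 1, 0, size);
}
```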
-
Committed by Rony Efraim
Make sure that the following steps are taken:
- drop packets sent by the VF with a vlan tag
- block packets with a vlan tag which are steered to the VF
- drop/block tagged packets when the policy is priority-tagged
- make sure VLAN stripping for received packets is set
- make sure the force UP bit for the VF QP is set
Use enum values for all the above instead of numerical bit offsets. Signed-off-by: Rony Efraim <ronye@mellanox.com> Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 May 2013, 10 commits
-
-
Committed by Mike Christie
This fixes a bug where the iscsi class/driver did not do a put_device when a sess/conn device was found. It also simplifies the interface by not having to pass in some arguments that were duplicated and did not need to be exported. Reported-by: Zhao Hongjiang <zhaohongjiang@huawei.com> Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Acked-by: Vikas Chaudhary <vikas.chaudhary@qlogic.com> Signed-off-by: James Bottomley <JBottomley@Parallels.com>
-
Committed by James Bottomley
These enums have been separate since the dawn of SAS, mainly because the latter is a protocol-only enum and the former includes additional state for libsas. The dichotomy causes endless confusion about which one you should use where and leads to pointless warnings like this:

drivers/scsi/mvsas/mv_sas.c: In function 'mvs_update_phyinfo':
drivers/scsi/mvsas/mv_sas.c:1162:34: warning: comparison between 'enum sas_device_type' and 'enum sas_dev_type' [-Wenum-compare]

Fix by eliminating one of them. The one kept is effectively the sas.h one, but call it sas_device_type and make sure the enums are all properly namespaced with the SAS_ prefix. Signed-off-by: James Bottomley <JBottomley@Parallels.com>
-
Committed by Alasdair G Kergon
Document iterate_devices in device-mapper.h. Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Committed by Chris Cummins
The intention here is to make the output of dmesg with full verbosity a bit easier for a human to parse. This commit transforms:

[drm:drm_ioctl], pid=699, cmd=0x6458, nr=0x58, dev 0xe200, auth=1
[drm:drm_ioctl], pid=699, cmd=0xc010645b, nr=0x5b, dev 0xe200, auth=1
[drm:drm_ioctl], pid=699, cmd=0xc0106461, nr=0x61, dev 0xe200, auth=1
[drm:drm_ioctl], pid=699, cmd=0xc01c64ae, nr=0xae, dev 0xe200, auth=1
[drm:drm_mode_addfb], [FB:32]
[drm:drm_ioctl], pid=699, cmd=0xc0106464, nr=0x64, dev 0xe200, auth=1
[drm:drm_vm_open_locked], 0x7fd9302fe000,0x00a00000
[drm:drm_ioctl], pid=699, cmd=0x400c645f, nr=0x5f, dev 0xe200, auth=1
[drm:drm_ioctl], pid=699, cmd=0xc00464af, nr=0xaf, dev 0xe200, auth=1
[drm:intel_crtc_set_config], [CRTC:3] [NOFB]

into:

[drm:drm_ioctl], pid=699, dev=0xe200, auth=1, I915_GEM_THROTTLE
[drm:drm_ioctl], pid=699, dev=0xe200, auth=1, I915_GEM_CREATE
[drm:drm_ioctl], pid=699, dev=0xe200, auth=1, I915_GEM_SET_TILING
[drm:drm_ioctl], pid=699, dev=0xe200, auth=1, IOCTL_MODE_ADDFB
[drm:drm_mode_addfb], [FB:32]
[drm:drm_ioctl], pid=699, dev=0xe200, auth=1, I915_GEM_MMAP_GTT
[drm:drm_vm_open_locked], 0x7fd9302fe000,0x00a00000
[drm:drm_ioctl], pid=699, dev=0xe200, auth=1, I915_GEM_SET_DOMAIN
[drm:drm_ioctl], pid=699, dev=0xe200, auth=1, DRM_IOCTL_MODE_RMFB
[drm:intel_crtc_set_config], [CRTC:3] [NOFB]

v2: drm_ioctls is now a constant (Ville Syrjälä)
Signed-off-by: Chris Cummins <christopher.e.cummins@intel.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Ville Syrjälä
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Ville Syrjälä
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Committed by Masami Hiramatsu
Modify the soft-mode flag only if there is no other soft-mode referrer (currently only the ftrace triggers), by using a reference counter in each ftrace_event_file. Without this fix, adding and removing several different enable/disable_event triggers on the same event clears the soft-mode bit from the ftrace_event_file. This also happens with a typo in the glob when setting triggers, e.g.:

# echo vfs_symlink:enable_event:net:netif_rx > set_ftrace_filter
# cat events/net/netif_rx/enable
0*
# echo typo_func:enable_event:net:netif_rx > set_ftrace_filter
# cat events/net/netif_rx/enable
0
# cat set_ftrace_filter
#### all functions enabled ####
vfs_symlink:enable_event:net:netif_rx:unlimited

As above, we still have a trigger, but soft-mode is gone. Link: http://lkml.kernel.org/r/20130509054429.30398.7464.stgit@mhiramat-M0-7522 Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: David Sharp <dhsharp@google.com> Cc: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com> Cc: Tom Zanussi <tom.zanussi@intel.com> Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Masami Hiramatsu
Fix a deadlock on ftrace_regex_lock which happens when setting an enable_event trigger on a dynamic kprobe event, as below.

----
sh-2.05b# echo p vfs_symlink > kprobe_events
sh-2.05b# echo vfs_symlink:enable_event:kprobes:p_vfs_symlink_0 > set_ftrace_filter

=============================================
[ INFO: possible recursive locking detected ]
3.9.0+ #35 Not tainted
---------------------------------------------
sh/72 is trying to acquire lock:
(ftrace_regex_lock){+.+.+.}, at: [<ffffffff810ba6c1>] ftrace_set_hash+0x81/0x1f0

but task is already holding lock:
(ftrace_regex_lock){+.+.+.}, at: [<ffffffff810b7cbd>] ftrace_regex_write.isra.29.part.30+0x3d/0x220

other info that might help us debug this:
Possible unsafe locking scenario:

CPU0
----
lock(ftrace_regex_lock);
lock(ftrace_regex_lock);

*** DEADLOCK ***
----

To fix this, this patch introduces a finer regex_lock for each ftrace_ops. ftrace_regex_lock is too big a lock: it protects all filter/notrace_hash operations, but it doesn't need to be a global lock now that multiple ftrace_ops are supported, because each ftrace_ops has its own filter/notrace_hash. Link: http://lkml.kernel.org/r/20130509054417.30398.84254.stgit@mhiramat-M0-7522 Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Tom Zanussi <tom.zanussi@intel.com> Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> [ Added initialization flag and automate mutex initialization for non ftrace.c ftrace_probes. ] Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Fabio Estevam
Since commit 657eee7d (media: coda: use genalloc API), the following build error happens with imx_v4_v5_defconfig:

drivers/built-in.o: In function 'coda_remove':
clk-composite.c:(.text+0x112180): undefined reference to 'gen_pool_free'
drivers/built-in.o: In function 'coda_probe':
clk-composite.c:(.text+0x112310): undefined reference to 'of_get_named_gen_pool'
clk-composite.c:(.text+0x1123f4): undefined reference to 'gen_pool_alloc'
clk-composite.c:(.text+0x11240c): undefined reference to 'gen_pool_virt_to_phys'
clk-composite.c:(.text+0x112458): undefined reference to 'dev_get_gen_pool'

Select GENERIC_ALLOCATOR and get rid of the custom IRAM_ALLOC. Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com> Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Olof Johansson <olof@lixom.net>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 09 May 2013, 2 commits
-
-
Committed by Josh Boyer
Protect the SIOCGCM* ioctl macros with parentheses. Reported-by: Paul Wouters <pwouters@redhat.com> Signed-off-by: Josh Boyer <jwboyer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
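An illustration of the fix pattern (macro name and value are placeholders, not the actual SIOCGCM* definitions):

```c
#include <linux/sockios.h>

/* before: the unparenthesized sum can associate wrongly inside expressions */
#define SIOCGCM_EXAMPLE_BAD	SIOCDEVPRIVATE + 1

/* after: parenthesized, safe wherever an expression is expected */
#define SIOCGCM_EXAMPLE		(SIOCDEVPRIVATE + 1)
```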
-
Committed by Dan Williams
Some drivers (sierra_net) need the status interrupt URB active even when the device is closed, because they receive custom indications from firmware. Add functions to refcount the status interrupt URB submit/kill operation so that sub-drivers and the generic driver don't fight over whether the status interrupt URB is active or not. A sub-driver can call usbnet_status_start() at any time, but the URB is only submitted the first time the function is called. Likewise, when the sub-driver is done with the URB, it calls usbnet_status_stop(), but the URB is only killed when all users have stopped it. The URB is still killed and re-submitted for suspend/resume, as before, with the same refcount it had at suspend. Signed-off-by: Dan Williams <dcbw@redhat.com> Acked-by: Oliver Neukum <oliver@neukum.org> Signed-off-by: David S. Miller <davem@davemloft.net>
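A sub-driver usage sketch of the refcounted pair described above (the exact signatures, in particular the gfp argument, are an assumption):

```c
static int sierra_net_bind_sketch(struct usbnet *dev)
{
	/* keep the status/interrupt URB alive even while the netdev is
	 * closed, so custom firmware indications keep arriving */
	return usbnet_status_start(dev, GFP_KERNEL);
}

static void sierra_net_unbind_sketch(struct usbnet *dev)
{
	/* drop our reference; the URB is only killed once the last user
	 * has stopped it */
	usbnet_status_stop(dev);
}
```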
-