- 04 Feb 2015, 12 commits
-
-
Committed by NeilBrown
Gather all the changes that can happen atomically and might be relevant to other code into one place. This will make it easier to refine the locking. Note that this puts quite a few things between mddev_detach() and ->free(). Enabling this was the point of some recent patches. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Now that the ->stop function only frees the private data, rename it accordingly. Also pass in the private pointer as an arg rather than using mddev->private. This flexibility will be useful in level_store(). Finally, don't clear ->private. It doesn't make sense to clear it, seeing that it isn't what we free, and it is no longer necessary to clear ->private (it was some time ago, before ->to_remove was introduced). Setting ->to_remove in ->free() is a bit of a wart, but not a big problem at the moment. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Each md personality has a 'stop' operation which does two things: 1/ it finalizes some aspects of the array to ensure nothing is accessing the ->private data; 2/ it frees the ->private data. All the steps in '1' can apply to all arrays and so can be performed in common code. This is useful because in the case where we change the personality which manages an array (in level_store()), it would be helpful to do step 1 early and step 2 later. So split the 'step 1' functionality out into a new mddev_detach(). Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
The use of 'rcu' to protect accesses to ->private_data, so that ->private_data could be updated, predates the introduction of mddev_suspend/mddev_resume. These are a cleaner mechanism for providing stability while swapping in a new ->private data - they are used by level_store() to support changing of raid levels. So get rid of the RCU stuff and just use mddev_suspend, mddev_resume. As these functions call ->quiesce(), we add an empty function for linear just like for raid0. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
There is no locking around calls to merge_bvec_fn(), so it is possible that calls which coincide with a level (or personality) change could go wrong. So create a central dispatch point for these functions and use rcu_read_lock(). If the array is suspended, reject any merge that can be rejected. If not, we know it is safe to call the function. Signed-off-by: NeilBrown <neilb@suse.de>
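A rough sketch of what such a central dispatch point can look like; the field and callback names (mddev->suspended, pers->merge_bvec_fn) and the exact merge_bvec prototype are assumptions based on this description, not the literal upstream code:

    /* Central merge_bvec dispatcher guarded by RCU (illustrative sketch). */
    static int mddev_merge_bvec(struct request_queue *q,
                                struct bvec_merge_data *bvm,
                                struct bio_vec *biovec)
    {
        struct mddev *mddev = q->queuedata;
        int ret = biovec->bv_len;               /* default: accept it all */

        rcu_read_lock();
        if (mddev->suspended) {
            /* Mid level-change: refuse the merge whenever refusal is
             * legal. A bio always holds at least one bvec, so an empty
             * bio (bvm->bi_size == 0) cannot be refused. */
            ret = bvm->bi_size == 0 ? biovec->bv_len : 0;
        } else if (mddev->pers && mddev->pers->merge_bvec_fn) {
            ret = mddev->pers->merge_bvec_fn(mddev, bvm, biovec);
        }
        rcu_read_unlock();
        return ret;
    }
-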
Committed by NeilBrown
There is currently no locking around calls to the 'congested' bdi function. If called at an awkward time, while an array is being converted from one level (or personality) to another, there is a tiny chance of running code in an unreferenced module, etc. So add a 'congested' function to the md_personality operations structure, and call it with appropriate locking from a central 'mddev_congested'. When the array personality is changing, the array will be 'suspended', so no IO is processed. If mddev_congested detects this, it simply reports that the array is congested, which is a safe guess. As mddev_suspend calls synchronize_rcu(), mddev_congested can avoid races by including the whole call inside an rcu_read_lock() region. This requires that the congested functions for all subordinate devices can be run under rcu_read_lock(). Fortunately this is the case. Signed-off-by: NeilBrown <neilb@suse.de>
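A condensed sketch of the central 'congested' dispatcher described here; the field names and the bdi callback signature are assumptions, not the exact upstream code:

    /* Central congested dispatcher (illustrative sketch). */
    static int mddev_congested(void *data, int bits)
    {
        struct mddev *mddev = data;
        int ret;

        rcu_read_lock();
        if (mddev->suspended)
            ret = 1;    /* array quiesced: "congested" is the safe guess */
        else if (mddev->pers && mddev->pers->congested)
            ret = mddev->pers->congested(mddev, bits);
        else
            ret = 0;
        rcu_read_unlock();
        return ret;
    }
-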
Committed by NeilBrown
This lock is used for (slightly) more than helping with writing superblocks, and it will soon be extended further. So the name is inappropriate. Also, the _irq variant hasn't been needed since 2.6.37 as it is never taken from interrupt or bh context. So:
- rename write_lock to lock
- document what it protects
- remove _irq ... except in md_flush_request() as there is no wait_event_lock() (with no _irq). This can be cleaned up after appropriate changes to wait.h.
Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
That last condition is unclear and over-cautious. There are two related issues here. If a partial write is destined for a missing device, then either RMW or RCW can work. We must read all the available blocks. Only then can the missing blocks be calculated, and then the parity update performed. If RMW is not an option, then there is a complication even without partial writes. If we would need to read a missing device to perform the reconstruction, then we must first read every block so the missing device data can be computed. This is the case for RAID6 (which currently does not support RMW) and for times when we don't trust the parity (after a crash) and so are in the process of resyncing it. So make these two cases more clear and separate, and perform the relevant tests more thoroughly. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Both the last two cases are only relevant if something has failed and something needs to be written (but not over-written), and if it is OK to pre-read blocks at this point. So factor out those tests and explain them. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
Some of the conditions in need_this_block have very straightforward motivation. Separate those out and document them. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
fetch_block() has a very large and hard-to-read 'if' condition. Separate it into its own function so that it can be made more readable. Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by Jes Sorensen
Commit 67f45548 introduced a call to md_wakeup_thread() when adding to the delayed_list. However, the md thread is woken up unconditionally just below. Remove the unnecessary wakeup call. Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
- 02 Feb 2015, 2 commits
-
-
Committed by NeilBrown
Commit 8eb23b9f ("sched: Debug nested sleeps") causes false-positive warnings in the RAID5 code. This annotation removes them and adds a comment explaining why there is no real problem. Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
Committed by NeilBrown
If a non-page-aligned write is destined for a device which is missing/faulty, we can deadlock. As the target device is missing, a read-modify-write cycle is not possible. As the write is not for a full page, a reconstruct-write cycle is not possible. This should be handled by logic in fetch_block() which notices there is a non-R5_OVERWRITE write to a missing device, and so loads all blocks. However, since commit 67f45548, that code requires STRIPE_PREREAD_ACTIVE before it will activate, and those circumstances never set STRIPE_PREREAD_ACTIVE. So: in handle_stripe_dirtying, if neither rmw nor rcw was possible, set STRIPE_DELAYED, which will cause STRIPE_PREREAD_ACTIVE to be set after a suitable delay. Fixes: 67f45548 Cc: stable@vger.kernel.org (v3.16+) Reported-by: Mikulas Patocka <mpatocka@redhat.com> Tested-by: Heinz Mauelshagen <heinzm@redhat.com> Signed-off-by: NeilBrown <neilb@suse.de>
-
- 28 Jan 2015, 1 commit
-
-
Committed by Andy Shevchenko
In the case when alloc_netdev fails we return NULL to the caller, but there is no check for NULL in the probe drivers. This patch changes NULL to an error pointer. The function description is amended to reflect what we may get returned. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
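The pattern, sketched with invented driver names (example_*); only ERR_PTR()/IS_ERR()/PTR_ERR() and alloc_etherdev() are real kernel helpers here:

    #include <linux/err.h>
    #include <linux/etherdevice.h>
    #include <linux/platform_device.h>

    struct example_priv { int id; };        /* placeholder private data */

    /* Allocation side: report the real reason instead of a bare NULL. */
    static struct net_device *example_create_netdev(void)
    {
        struct net_device *ndev = alloc_etherdev(sizeof(struct example_priv));

        if (!ndev)
            return ERR_PTR(-ENOMEM);
        return ndev;
    }

    /* Probe side: callers test with IS_ERR()/PTR_ERR(), no NULL check. */
    static int example_probe(struct platform_device *pdev)
    {
        struct net_device *ndev = example_create_netdev();

        if (IS_ERR(ndev))
            return PTR_ERR(ndev);
        /* ... set up and register ndev ... */
        return 0;
    }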
-
- 27 Jan 2015, 16 commits
-
-
With commit d75b1ade ("net: less interrupt masking in NAPI"), napi repoll is done only when work_done == budget. When we are in busy_poll we return 0 in napi_poll; we should return budget. Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
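Roughly, the polling rule looks like this; the helper names (example_*) are invented and only illustrate the "return budget instead of 0" point:

    /* NAPI poll under the post-d75b1ade contract (illustrative sketch). */
    static int example_napi_poll(struct napi_struct *napi, int budget)
    {
        struct example_queue *q = container_of(napi, struct example_queue, napi);
        int work_done;

        if (!example_napi_lock(q))
            return budget;      /* busy_poll owns the ring: ask for a repoll */

        work_done = example_clean_rx(q, budget);
        if (work_done < budget) {
            napi_complete(napi);
            example_enable_irq(q);
        }
        example_napi_unlock(q);
        return work_done;
    }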
-
Committed by Ben Hutchings
- Use the return value of dma_map_single(), rather than calling virt_to_page() separately
- Check for mapping failure
- Call dma_unmap_single() rather than dma_sync_single_for_cpu()
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ben Hutchings
dma_map_single() may fail if an IOMMU or swiotlb is in use, so we need to check for this. Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
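A minimal sketch of such a check when filling an RX descriptor; names other than dma_map_single()/dma_mapping_error() and the skb helpers are invented:

    /* Refill one RX descriptor, refusing to use an unmapped buffer. */
    static int example_fill_rx(struct example_priv *priv, int index)
    {
        struct sk_buff *skb = netdev_alloc_skb(priv->ndev, priv->rx_buf_size);
        dma_addr_t addr;

        if (!skb)
            return -ENOMEM;

        addr = dma_map_single(&priv->pdev->dev, skb->data, priv->rx_buf_size,
                              DMA_FROM_DEVICE);
        if (dma_mapping_error(&priv->pdev->dev, addr)) {
            /* IOMMU/swiotlb refused the mapping: never hand the
             * descriptor to the hardware with a bogus address. */
            dev_kfree_skb(skb);
            return -ENOMEM;
        }

        priv->rx_skbuff[index] = skb;
        priv->rx_ring[index].addr = addr;
        return 0;
    }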
-
Committed by Ben Hutchings
Currently we try to clear EDRRR and EDTRR and immediately continue to free buffers. This is unsafe because:
- In general, register writes are not serialised with DMA, so we still have to wait for DMA to complete somehow
- The R8A7790 (R-Car H2) manual states that the TX running flag cannot be cleared by writing to EDTRR
- The same manual states that clearing the RX running flag only stops RX DMA at the next packet boundary
I applied this patch to the driver to detect DMA writes to freed buffers:
> --- a/drivers/net/ethernet/renesas/sh_eth.c
> +++ b/drivers/net/ethernet/renesas/sh_eth.c
> @@ -1098,7 +1098,14 @@ static void sh_eth_ring_free(struct net_device *ndev)
>  	/* Free Rx skb ringbuffer */
>  	if (mdp->rx_skbuff) {
>  		for (i = 0; i < mdp->num_rx_ring; i++)
> +			memcpy(mdp->rx_skbuff[i]->data,
> +			       "Hello, world", 12);
> +		msleep(100);
> +		for (i = 0; i < mdp->num_rx_ring; i++) {
> +			WARN_ON(memcmp(mdp->rx_skbuff[i]->data,
> +				       "Hello, world", 12));
>  			dev_kfree_skb(mdp->rx_skbuff[i]);
> +		}
>  	}
>  	kfree(mdp->rx_skbuff);
>  	mdp->rx_skbuff = NULL;
then ran the loop:
while ethtool -G eth0 rx 128 ; ethtool -G eth0 rx 64; do echo -n .; done
and 'ping -f' toward the sh_eth port from another machine. The warning fired several times a minute. To fix these issues:
- Deactivate all TX descriptors rather than writing to EDTRR
- As there seems to be no way of telling when RX DMA is stopped, perform a soft reset to ensure that both DMA engines are stopped
- To reduce the possibility of the reset truncating a transmitted frame, disable egress and wait a reasonable time to reach a packet boundary before resetting
- Update statistics before resetting
(The 'reasonable time' does not allow for CS/CD in half-duplex mode, but half-duplex no longer seems reasonable!)
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ben Hutchings
If RX traffic is overflowing the FIFO or DMA ring, logging every time this happens just makes things worse. These errors are visible in the statistics anyway. Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ahmed S. Darwish
While in an ERROR_WARNING state, and receiving further bus error events with the error counters still in the ERROR_WARNING range of 97-127 inclusive, the state handling code erroneously reverts back to ERROR_ACTIVE. Per the CAN standard, only revert to ERROR_ACTIVE when the error counters are less than 96. Moreover, on certain Kvaser models, the BUS_ERROR flag is always set along with undefined bits in the M16C status register. Thus use bitwise operators instead of full equality for checking that register against bus errors. Signed-off-by: Ahmed S. Darwish <ahmed.darwish@valeo.com> Cc: linux-stable <stable@vger.kernel.org> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
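A sketch of both rules, assuming example_priv embeds a struct can_priv as 'can' and that M16C_STATE_BUS_ERROR is the driver-internal flag; the CAN_STATE_* constants and struct can_berr_counter are the standard kernel CAN definitions:

    /* Classify the channel state from the error counters, and test the
     * M16C status register bitwise rather than with full equality. */
    static void example_handle_error_state(struct example_priv *priv,
                                           u8 m16c_status,
                                           const struct can_berr_counter *bec)
    {
        if (m16c_status & M16C_STATE_BUS_ERROR)   /* not: m16c_status == ... */
            priv->can.can_stats.bus_error++;

        if (bec->txerr >= 128 || bec->rxerr >= 128)
            priv->can.state = CAN_STATE_ERROR_PASSIVE;
        else if (bec->txerr < 96 && bec->rxerr < 96)
            priv->can.state = CAN_STATE_ERROR_ACTIVE;   /* only below 96 */
        else
            priv->can.state = CAN_STATE_ERROR_WARNING;  /* 96..127 stays here */
    }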
-
Committed by Ahmed S. Darwish
On some x86 laptops, plugging a Kvaser device in again after an unplug makes the firmware always ignore the very first command. For such a case, provide some room for retries instead of completely exiting the driver init code. Signed-off-by: Ahmed S. Darwish <ahmed.darwish@valeo.com> Cc: linux-stable <stable@vger.kernel.org> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Committed by Ahmed S. Darwish
Send the expected argument to the URB completion handler: a CAN netdevice instead of the network interface private context `kvaser_usb_net_priv'. This was discovered by having some garbage in the kernel log in place of the netdevice names: can0 and can1. Signed-off-by: Ahmed S. Darwish <ahmed.darwish@valeo.com> Cc: linux-stable <stable@vger.kernel.org> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Committed by Ahmed S. Darwish
Upon receiving a hardware event with the BUS_RESET flag set, the driver kills all of its anchored URBs and resets all of its transmit URB contexts. Unfortunately it does so under the context of the URB completion handler `kvaser_usb_read_bulk_callback()', which is often called in an atomic context. While the device is flooded with many received error packets, usb_kill_urb() typically sleeps/reschedules till the transfer request of each killed URB in question completes, leading to the sleep-in-atomic bug. [3] In the v2 submission of the original driver patch [1], it was stated that the URB kill and tx-context reset was needed since we don't receive any tx acknowledgments later and thus such resources will be locked down forever. Fortunately this is no longer needed since an earlier bugfix in this patch series is now applied: all tx URB contexts are reset upon CAN channel close. [2] Moreover, a BUS_RESET is now treated _exactly_ like a BUS_OFF event, which is the recommended handling method advised by the device manufacturer.
[1] http://article.gmane.org/gmane.linux.network/239442 http://www.webcitation.org/6Vr2yagAQ
[2] can: kvaser_usb: Reset all URB tx contexts upon channel close 889b77f7
[3] Stacktrace:
<IRQ>
[<ffffffff8158de87>] dump_stack+0x45/0x57
[<ffffffff8158b60c>] __schedule_bug+0x41/0x4f
[<ffffffff815904b1>] __schedule+0x5f1/0x700
[<ffffffff8159360a>] ? _raw_spin_unlock_irqrestore+0xa/0x10
[<ffffffff81590684>] schedule+0x24/0x70
[<ffffffff8147d0a5>] usb_kill_urb+0x65/0xa0
[<ffffffff81077970>] ? prepare_to_wait_event+0x110/0x110
[<ffffffff8147d7d8>] usb_kill_anchored_urbs+0x48/0x80
[<ffffffffa01f4028>] kvaser_usb_unlink_tx_urbs+0x18/0x50 [kvaser_usb]
[<ffffffffa01f45d0>] kvaser_usb_rx_error+0xc0/0x400 [kvaser_usb]
[<ffffffff8108b14a>] ? vprintk_default+0x1a/0x20
[<ffffffffa01f5241>] kvaser_usb_read_bulk_callback+0x4c1/0x5f0 [kvaser_usb]
[<ffffffff8147a73e>] __usb_hcd_giveback_urb+0x5e/0xc0
[<ffffffff8147a8a1>] usb_hcd_giveback_urb+0x41/0x110
[<ffffffffa0008748>] finish_urb+0x98/0x180 [ohci_hcd]
[<ffffffff810cd1a7>] ? acct_account_cputime+0x17/0x20
[<ffffffff81069f65>] ? local_clock+0x15/0x30
[<ffffffffa000a36b>] ohci_work+0x1fb/0x5a0 [ohci_hcd]
[<ffffffff814fbb31>] ? process_backlog+0xb1/0x130
[<ffffffffa000cd5b>] ohci_irq+0xeb/0x270 [ohci_hcd]
[<ffffffff81479fc1>] usb_hcd_irq+0x21/0x30
[<ffffffff8108bfd3>] handle_irq_event_percpu+0x43/0x120
[<ffffffff8108c0ed>] handle_irq_event+0x3d/0x60
[<ffffffff8108ec84>] handle_fasteoi_irq+0x74/0x110
[<ffffffff81004dfd>] handle_irq+0x1d/0x30
[<ffffffff81004727>] do_IRQ+0x57/0x100
[<ffffffff8159482a>] common_interrupt+0x6a/0x6a
Signed-off-by: Ahmed S. Darwish <ahmed.darwish@valeo.com> Cc: linux-stable <stable@vger.kernel.org> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Committed by Ezequiel Garcia
Commit 69ad0dd7
  Author: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
  Date: Mon May 19 13:59:59 2014 -0300
  net: mv643xx_eth: Use dma_map_single() to map the skb fragments
caused a nasty regression by removing the support for highmem skb fragments. By using page_address() to get the address of a fragment's page, we are assuming a lowmem page. However, such an assumption is incorrect, as fragments can be in highmem pages, resulting in very nasty issues. This commit fixes this by using the skb_frag_dma_map() helper, which takes care of mapping the skb fragment properly. Additionally, the type of mapping is now tracked, so it can be unmapped using dma_unmap_page or dma_unmap_single when appropriate. This commit also fixes the error path in txq_init() to release the resources properly. Fixes: 69ad0dd7 ("net: mv643xx_eth: Use dma_map_single() to map the skb fragments") Reported-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ben Hutchings
In order to stop the RX path accessing the RX ring while it's being stopped or resized, we clear the interrupt mask (EESIPR) and then call free_irq() or synchronize_irq(). This is insufficient because the interrupt handler or NAPI poller may set EESIPR again after we clear it. Also, in sh_eth_set_ringparam() we currently don't disable NAPI polling at all. I could easily trigger a crash by running the loop:
while ethtool -G eth0 rx 128 && ethtool -G eth0 rx 64; do echo -n .; done
and 'ping -f' toward the sh_eth port from another machine. To fix this:
- Add a software flag (irq_enabled) to signal whether interrupts should be enabled
- In the interrupt handler, if the flag is clear then clear EESIPR and return
- In the NAPI poller, if the flag is clear then don't set EESIPR
- Set the flag before enabling interrupts in sh_eth_dev_init() and sh_eth_set_ringparam()
- Clear the flag and serialise with the interrupt and NAPI handlers before clearing EESIPR in sh_eth_close() and sh_eth_set_ringparam()
After this, I could run the loop for 100,000 iterations successfully. Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ben Hutchings
If the device is down then no packet buffers should be allocated. We also must not touch its registers, as it may be powered off. Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ben Hutchings
We must only ever stop TX queues when they are full or the net device is not 'ready', so far as the net core, and specifically the watchdog, is concerned. Otherwise, the watchdog may fire *immediately* if no packets have been added to the queue in the last 5 seconds. What's more, sh_eth_tx_timeout() will likely crash if called while we're resizing the TX ring. I could easily trigger this by running the loop:
while ethtool -G eth0 rx 128 && ethtool -G eth0 rx 64; do echo -n .; done
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Ben Hutchings
If an skb to be transmitted is shorter than the minimum Ethernet frame length, we currently set the DMA descriptor length to the minimum but do not add zero-padding. This could result in leaking sensitive data. We also pass different lengths to dma_map_single() and dma_unmap_single(). Use skb_padto() to pad properly, before calling dma_map_single(). Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
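A sketch of the padding step; everything except skb_padto(), ETH_ZLEN and the DMA helpers is invented, and descriptor handling is reduced to comments:

    static netdev_tx_t example_start_xmit(struct sk_buff *skb,
                                          struct net_device *ndev)
    {
        struct example_priv *priv = netdev_priv(ndev);
        unsigned int len;
        dma_addr_t addr;

        /* Zero-pad runt frames so stale memory never reaches the wire;
         * on failure the skb has already been freed by skb_padto(). */
        if (skb_padto(skb, ETH_ZLEN))
            return NETDEV_TX_OK;

        len = max_t(unsigned int, skb->len, ETH_ZLEN);
        addr = dma_map_single(&priv->pdev->dev, skb->data, len, DMA_TO_DEVICE);
        if (dma_mapping_error(&priv->pdev->dev, addr)) {
            dev_kfree_skb_any(skb);
            return NETDEV_TX_OK;
        }
        /* Queue the descriptor with this same 'len', and pass the same
         * length to dma_unmap_single() on completion. */
        return NETDEV_TX_OK;
    }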
-
Committed by Mugunthan V N
In dual EMAC mode, the default VLANs are used to segregate RX packets between the ports, so adding the same default VLAN to the switch will affect normal packet transfers. So return an error on addition of dual EMAC default VLANs. Even if the EMAC 0 default port VLAN is added to EMAC 1, it will break dual EMAC port separation. Fixes: d9ba8f9e (driver: net: ethernet: cpsw: dual emac interface implementation) Cc: <stable@vger.kernel.org> # v3.9+ Reported-by: Felipe Balbi <balbi@ti.com> Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Andrey Ryabinin
An array of platform_device_id elements should be terminated with an empty element. Fixes: 5bccae6e ("rtc: s5m-rtc: add real-time clock driver for s5m8767") Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com> Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
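The resulting table looks roughly like this (entry names and driver_data values are illustrative); the empty element at the end is the sentinel the fix adds:

    static const struct platform_device_id example_rtc_id[] = {
        { "s5m-rtc",     0 },
        { "s2mps14-rtc", 1 },
        { },    /* sentinel: the platform bus match loop stops here */
    };
    MODULE_DEVICE_TABLE(platform, example_rtc_id);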
-
- 26 Jan 2015, 2 commits
-
-
Committed by Thomas Richter
Do not wait for channel command buffers in IPA commands. The potential wait could be done while holding a spin lock and, in recent kernels, causes a bug like the following if kernel lock debugging is enabled:
kernel: BUG: sleeping function called from invalid context at drivers/s390/net/qeth_core_main.c: 794
kernel: in_atomic(): 1, irqs_disabled(): 0, pid: 2031, name: NetworkManager
kernel: 2 locks held by NetworkManager/2031:
kernel: #0: (rtnl_mutex){+.+.+.}, at: [<00000000006e0d7a>] rtnetlink_rcv+0x32/0x50
kernel: #1: (_xmit_ETHER){+.....}, at: [<00000000006cfe90>] dev_set_rx_mode+0x30/0x50
kernel: CPU: 0 PID: 2031 Comm: NetworkManager Not tainted 3.18.0-rc5-next-20141124 #1
kernel: 00000000275fb1f0 00000000275fb280 0000000000000002 0000000000000000 00000000275fb320 00000000275fb298 00000000275fb298 00000000007e326a 0000000000000000 000000000099ce2c 00000000009b4988 000000000000000b 00000000275fb2e0 00000000275fb280 0000000000000000 0000000000000000 0000000000000000 00000000001129c8 00000000275fb280 00000000275fb2e0
kernel: Call Trace:
kernel: ([<00000000001128b0>] show_trace+0xf8/0x158)
kernel: [<000000000011297a>] show_stack+0x6a/0xe8
kernel: [<00000000007e995a>] dump_stack+0x82/0xb0
kernel: [<000000000017d668>] ___might_sleep+0x170/0x228
kernel: [<000003ff80026f0e>] qeth_wait_for_buffer+0x36/0xd0 [qeth]
kernel: [<000003ff80026fe2>] qeth_get_ipacmd_buffer+0x3a/0xc0 [qeth]
kernel: [<000003ff80105078>] qeth_l3_send_setdelmc+0x58/0xf8 [qeth_l3]
kernel: [<000003ff8010b1fe>] qeth_l3_set_ip_addr_list+0x2c6/0x848 [qeth_l3]
kernel: [<000003ff8010bbb4>] qeth_l3_set_multicast_list+0x434/0xc48 [qeth_l3]
kernel: [<00000000006cfe9a>] dev_set_rx_mode+0x3a/0x50
kernel: [<00000000006cff90>] __dev_open+0xe0/0x140
kernel: [<00000000006d02a0>] __dev_change_flags+0xa0/0x178
kernel: [<00000000006d03a8>] dev_change_flags+0x30/0x70
kernel: [<00000000006e14ee>] do_setlink+0x346/0x9a0
...
The device driver has plenty of command buffers available per channel for channel command communication. In the extremely rare case when there is no command buffer available, return a NULL pointer and issue a warning in the kernel log. The caller handles the case when a NULL pointer is encountered and returns an error. In the case where the wait for a command buffer is possible (because no lock is held, as in the OSN case), still wait until a channel command buffer is available. Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com> Reviewed-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eugene Crosser
In the functions that register and unregister MAC addresses in the qeth-handled hardware, remove callback functions that are unnecessary, as only the return code is analyzed. Translate hardware response codes to semi-standard 'errno'-like codes for readability. Add a kernel-doc description to the internal API function qeth_send_control_data(). Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com> Signed-off-by: Ursula Braun <ursula.braun@de.ibm.com> Reviewed-by: Thomas-Mich Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 Jan 2015, 4 commits
-
-
Committed by Mahesh Bandewar
ip6_route_output() always returns a valid dst pointer, unlike in the IPv4 case, so the validation has to be different from the IPv4 path. This patch corrects that error. It was picked up by a static checker with the following warning:
drivers/net/ipvlan/ipvlan_core.c:380 ipvlan_process_v6_outbound() warn: 'dst' isn't an ERR_PTR
Signed-off-by: Mahesh Bandewar <maheshb@google.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
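Sketched against invented surrounding code (the function name and cleanup path are illustrative), the corrected check reads the error out of the dst itself rather than treating the pointer as an ERR_PTR:

    static int example_process_v6_outbound(struct sk_buff *skb, struct net *net,
                                           struct flowi6 *fl6)
    {
        /* ip6_route_output() never returns an ERR_PTR; a failed lookup
         * is reported through dst->error instead. */
        struct dst_entry *dst = ip6_route_output(net, NULL, fl6);
        int err = dst->error;

        if (err) {
            dst_release(dst);
            kfree_skb(skb);
            return err;
        }
        skb_dst_set(skb, dst);
        return ip6_local_out(skb);
    }
-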
Committed by Eric Dumazet
NAPI poll logic now enforces that a poller returns exactly the budget when it wants to be called again. If a driver limits TX completion, it has to return budget as well when the limit is hit, not the number of received packets. Reported-and-tested-by: Mike Galbraith <umgwanakikbuti@gmail.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Fixes: d75b1ade ("net: less interrupt masking in NAPI") Cc: Manish Chopra <manish.chopra@qlogic.com> Acked-by: Manish Chopra <manish.chopra@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
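A sketch of that rule with invented helpers and limit (example_*): when TX completion was cut short by its own limit, claim the whole budget so the core polls again:

    static int example_napi_poll(struct napi_struct *napi, int budget)
    {
        struct example_adapter *adapter =
                container_of(napi, struct example_adapter, napi);
        int tx_done, rx_done;

        tx_done = example_clean_tx(adapter, EXAMPLE_TX_CLEAN_LIMIT);
        rx_done = example_clean_rx(adapter, budget);

        if (tx_done >= EXAMPLE_TX_CLEAN_LIMIT)
            return budget;          /* TX work remains: force a repoll */

        if (rx_done < budget) {
            napi_complete(napi);
            example_enable_interrupts(adapter);
        }
        return rx_done;
    }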
-
With commit d75b1ade ("net: less interrupt masking in NAPI"), napi repoll is done only when work_done == budget. When we are in busy_poll we return 0 in napi_poll; we should return budget. Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Mikulas Patocka
Commit ffcc3936 ("dm: enhance internal suspend and resume interface") attempted to handle multiple internal suspends on the same device, but it did that incorrectly. When these functions are called in this order on the same device, the device is no longer suspended, but it should be:
dm_internal_suspend_noflush
dm_internal_suspend_noflush
dm_internal_resume
Fix this bug by maintaining an 'internal_suspend_count' and resuming the device when this count drops to zero. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
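The counting scheme, sketched with simplified internals (the locking and the __dm_internal_* helpers here are assumptions, not the exact dm.c code):

    static void dm_internal_suspend_noflush(struct mapped_device *md)
    {
        mutex_lock(&md->suspend_lock);
        if (md->internal_suspend_count++ == 0)      /* first nested suspend */
            __dm_internal_suspend(md);
        mutex_unlock(&md->suspend_lock);
    }

    static void dm_internal_resume(struct mapped_device *md)
    {
        mutex_lock(&md->suspend_lock);
        BUG_ON(!md->internal_suspend_count);
        if (--md->internal_suspend_count == 0)      /* last resume resumes */
            __dm_internal_resume(md);
        mutex_unlock(&md->suspend_lock);
    }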
-
- 24 Jan 2015, 3 commits
-
-
Committed by Axel Lin
Use the ATTRIBUTE_GROUPS macro to simplify the code a bit. Signed-off-by: Axel Lin <axel.lin@ingics.com> Signed-off-by: Jean Delvare <jdelvare@suse.de>
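For reference, the macro turns a NULL-terminated attribute array into the attribute_group/array-of-groups boilerplate; a minimal illustration with invented attribute names:

    static struct attribute *example_attrs[] = {
        &dev_attr_temp1_input.attr,
        &dev_attr_temp1_max.attr,
        NULL,
    };
    ATTRIBUTE_GROUPS(example);  /* generates example_group and example_groups[] */

    /* 'example_groups' can then be handed to e.g.
     * devm_hwmon_device_register_with_groups() instead of a hand-rolled
     * struct attribute_group. */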
-
Committed by Axel Lin
Use module_pci_driver to simplify the code a bit. Signed-off-by: Axel Lin <axel.lin@ingics.com> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Jean Delvare <jdelvare@suse.de>
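A minimal illustration with invented names; module_pci_driver() expands to the module_init()/module_exit() pair that only registers and unregisters the driver:

    static struct pci_driver example_pci_driver = {
        .name     = "example",
        .id_table = example_pci_ids,
        .probe    = example_probe,
        .remove   = example_remove,
    };

    /* Replaces explicit module_init()/module_exit() functions that merely
     * called pci_register_driver()/pci_unregister_driver(). */
    module_pci_driver(example_pci_driver);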
-
Committed by Jean Delvare
On many motherboards, for an unknown reason, the thermal sensor seems to be disabled and will return a constant temperature value of 36.5 degrees Celsius. Don't bind to the device in that case, so that we don't report this bogus value to userspace. Signed-off-by: Jean Delvare <jdelvare@suse.de> Cc: Romain Dolbeau <romain@dolbeau.org> Reviewed-by: Guenter Roeck <linux@roeck-us.net>
-