- 01 May, 2014 (12 commits)
-
-
Committed by Lars Ellenberg

If the receiver needs to serve a discard request on a queue that does not announce itself as discard capable, it falls back to synchronous blkdev_issue_zeroout(). We expect only "reasonably" large (up to one activity log extent?) discard requests. We do this to not block the receiver for too long in this fallback code path, and to not set/clear too many bits inside one spin_lock_irqsave() in drbd_set_in_sync/drbd_set_out_of_sync.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
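A minimal sketch of that fallback pattern, assuming the 3.15-era block layer API; the helper name is hypothetical and this is not the literal DRBD code:

```c
#include <linux/blkdev.h>

/* Hypothetical helper: discard if the queue supports it, otherwise zero
 * the range synchronously. Callers keep ranges small so the receiver is
 * not blocked for long in the zeroout path. */
static int discard_or_zeroout(struct block_device *bdev,
			      sector_t sector, sector_t nr_sects)
{
	struct request_queue *q = bdev_get_queue(bdev);

	if (blk_queue_discard(q))
		return blkdev_issue_discard(bdev, sector, nr_sects,
					    GFP_NOIO, 0);

	/* fallback: synchronous, so it throttles the receiver by design */
	return blkdev_issue_zeroout(bdev, sector, nr_sects, GFP_NOIO);
}
```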
-
Committed by Lars Ellenberg

We plan to use genl_family->parallel_ops = true in the future, but need to review all possible interactions first. For now, only selectively drop genl_lock() in drbd_set_role(), instead serializing on our own internal resource->conf_update mutex. We can now be promoted/demoted on many resources in parallel, which may significantly improve cluster failover times when fencing is required.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
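A sketch of the serialization change under the shape the text implies; the wrapper name and signature are hypothetical:

```c
/* Hypothetical admin-path wrapper: drop the global genetlink lock and
 * serialize on the per-resource mutex instead, so role changes on
 * different resources can run in parallel. */
static int adm_set_role(struct drbd_resource *resource, int new_role,
			bool force)
{
	int rv;

	genl_unlock();			/* release the global serialization */
	mutex_lock(&resource->conf_update);
	rv = drbd_set_role(resource, new_role, force);
	mutex_unlock(&resource->conf_update);
	genl_lock();			/* reacquire before returning to genetlink */
	return rv;
}
```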
-
Committed by Lars Ellenberg

Because all administrative requests via genetlink have been globally serialized via genl_lock(), we used to have one static struct drbd_config_context "admin context". Move this on-stack to the respective callback functions. This will allow us to selectively drop the genl_lock() (or use genl_family->parallel_ops) in the future.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
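The before/after, in sketch form; the callback name is hypothetical:

```c
/* before: one shared context, safe only because genl_lock() serialized
 * every administrative request:
 *
 *	static struct drbd_config_context adm_ctx;
 */

/* after: each genetlink callback owns its context on the stack */
static int drbd_adm_example(struct sk_buff *skb, struct genl_info *info)
{
	struct drbd_config_context adm_ctx;	/* per-call, reentrant */

	memset(&adm_ctx, 0, sizeof(adm_ctx));
	/* ... fill and use adm_ctx as before ... */
	return 0;
}
```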
-
Committed by Philipp Reisner

When a 'cluster wide' disconnect executes and the result comes back from the peer, but immediately after that the connection breaks, _conn_rq_cond() reported back SS_CW_SUCCESS. Therefore _conn_request_state() calls conn_set_state(), which has a BUG() in it. The BUG() is hit because conn_is_valid_transition() does not like the transition, which goes back to is_valid_soft_transition() returning SS_OUTDATE_WO_CONN. The fix is to consider an error reported by is_valid_soft_transition() even when the peer agreed to the state change.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Lars Ellenberg

Before, application IO could preempt resync activity for up to a hardcoded 20 seconds per resync request. A very busy server could throttle the effective resync bandwidth down to one request per 20 seconds. Now, we only let application IO preempt resync traffic while the current resync rate estimate is above c-min-rate. If you disable the c-min-rate throttle feature (set c-min-rate = 0), application IO will no longer preempt resync traffic at all.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
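The policy reduces to a small predicate; a sketch under assumed names and units:

```c
/* Illustrative decision helper: application I/O may preempt resync only
 * while resync is still faster than the configured floor; a floor of 0
 * disables preemption entirely. */
static bool app_io_may_preempt_resync(unsigned int c_min_rate_kb,
				      unsigned int cur_resync_rate_kb)
{
	if (c_min_rate_kb == 0)		/* throttle feature disabled */
		return false;
	return cur_resync_rate_kb > c_min_rate_kb;
}
```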
-
Committed by Lars Ellenberg

If max-buffers and the socket buffer sizes are "too small" for the chosen resync rate, this could potentially lead to a distributed deadlock, which may or may not resolve itself via the "ko-count" and request timeout mechanism, or could be resolved by a forced disconnect. One option to deal with this is proper configuration: use larger max-buffers and socket buffer settings, or reduce the resync rate. But even with bad configuration we should not deadlock, but "gracefully" recover. The issue is avoided by using only up to max-buffers/2 for resync requests, and by using max-buffers not as a hard limit for data buffer allocations, but as a throttle threshold only.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
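The two guards, sketched with assumed names:

```c
/* Illustrative thresholds: resync may consume at most half of
 * max-buffers, and max-buffers itself only throttles further
 * allocations instead of hard-failing them. */
static bool resync_may_request_more(int in_flight, int max_buffers)
{
	return in_flight < max_buffers / 2;	/* reserve half for app I/O */
}

static bool should_throttle_alloc(int allocated, int max_buffers)
{
	return allocated >= max_buffers;	/* slow down, do not refuse */
}
```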
-
Committed by Lars Ellenberg

While merging adjacent dirty blocks into resync requests, the resync rate throttle was disregarded. For very low resync rates, the effective rate may have exceeded the intended rate by a large margin.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Lars Ellenberg

If we don't make resync or verify progress for "too long", we want to flag it as "stalled". Since the 2010 commit "use rolling marks for resync speed calculation", this "too long" was wrong by a factor of HZ. With HZ=250, it would only have been flagged as stalled after 100 minutes. Hardcode 3 minutes instead.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Philipp Reisner

If a user forces the operation, he takes the blame in case the peer does not have enough space. No reason to deny this...

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Philipp Reisner

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Philipp Reisner

Actually, we are only clearing the susp_fen flag if we are not going to call a fencing handler. Setting the susp_fen flag needs to be edge-triggered, not level-triggered.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Philipp Reisner

When we need to outdate the peer while being promoted to primary, and the connection gets established at the same time, we deadlock in drbd_try_outdate_peer() when trying to clear the susp_fen bit. Fix this by setting the STATE_SENT bit while holding the mutex. Using drbd_change_state(..., CS_HARD, ...), which does not block until STATE_SENT is cleared, is only for clarity; it does not contribute anything to the fix.

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 23 April, 2014 (3 commits)
-
-
Committed by Asai Thambi S P

A hardware quirk in the P320h/P420m interferes with PCIe transactions on some AMD chipsets, making the P320h/P420m unusable. This workaround disables the ERO and NoSnoop bits in the parent and root complex so that these devices function normally. NOTE: This workaround is specific to AMD chipsets with a PCIe upstream device with device id 0x5aXX.

Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Asai Thambi S P

In module exit, dfs_parent and its subtree were removed before unregistering with PCI. When the driver then tried to remove each device's debugfs entry in the pci_remove() context, the entries no longer existed, as dfs_parent and its children had already been ripped apart. Modified to first unregister with PCI and then remove dfs_parent.

Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
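The corrected teardown order in sketch form; the names follow the mtip32xx driver as described, but treat the fragment as illustrative:

```c
static void __exit mtip_exit(void)
{
	/* per-device pci_remove() runs first, while each device's
	 * debugfs entries still exist under dfs_parent */
	pci_unregister_driver(&mtip_pci_driver);

	/* only then tear down the shared debugfs tree */
	debugfs_remove_recursive(dfs_parent);
}
```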
-
Committed by Asai Thambi S P

Increased the timeout for the STANDBY IMMEDIATE command to 2 minutes.

Signed-off-by: Selvan Mani <smani@micron.com>
Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 22 April, 2014 (2 commits)
-
-
Committed by Alexander Gordeev

As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() or pci_enable_msi_exact() and pci_enable_msix_range() or pci_enable_msix_exact() interfaces.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: iss_storagedev@hp.com
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-pci@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
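The conversion pattern, sketched for a hypothetical driver: pci_enable_msix_range() returns the number of vectors actually allocated within [minvec, maxvec], or a negative errno, so the retry loop the old API forced on callers disappears.

```c
#include <linux/pci.h>

/* Hypothetical setup helper showing the old-to-new conversion. */
static int example_setup_msix(struct pci_dev *pdev,
			      struct msix_entry *entries, int nvec)
{
	int rc;

	/* old style, deprecated:
	 *	rc = pci_enable_msix(pdev, entries, nvec);
	 *	if (rc > 0)
	 *		... retry with rc vectors ...
	 */
	rc = pci_enable_msix_range(pdev, entries, 1, nvec);
	if (rc < 0)
		return rc;	/* not even one vector available */

	return rc;		/* number of vectors actually enabled */
}
```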
-
Committed by Alexander Gordeev

The function pci_enable_msix_exact() is a variation of pci_enable_msix_range() that allows a device driver to request a particular number of MSI-X interrupts, rather than any number within a specified range.

Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: linux-pci@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
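In effect, the new helper collapses the range to a single value; a short usage sketch:

```c
#include <linux/pci.h>

/* All-or-nothing variant: equivalent to
 * pci_enable_msix_range(pdev, entries, nvec, nvec); returns 0 on
 * success or a negative errno, never a partial vector count. */
static int example_enable_exact(struct pci_dev *pdev,
				struct msix_entry *entries, int nvec)
{
	return pci_enable_msix_exact(pdev, entries, nvec);
}
```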
-
- 17 April, 2014 (3 commits)
-
-
Committed by Jens Axboe

Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Christoph Hellwig

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Committed by Jens Axboe

The friendly Intel kbuild test robot reported:

	drivers/cdrom/gdrom.c: In function 'gdrom_readdisk_dma':
	drivers/cdrom/gdrom.c:605:3: error: 'struct request' has no member named 'buffer'

Convert that from req->buffer to bio_data(rq->bio). Apparently my grep missed this one, and I don't build for the Sega Dreamcast enough.

Signed-off-by: Jens Axboe <axboe@fb.com>
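The one-line conversion, in sketch form (the helper is hypothetical; the access pattern is the point):

```c
#include <linux/blkdev.h>
#include <linux/bio.h>

/* rq->buffer is gone; the data pointer now comes from the first bio. */
static void *request_data_ptr(struct request *rq)
{
	/* was: return rq->buffer; */
	return bio_data(rq->bio);
}
```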
-
- 16 April, 2014 (4 commits)
-
-
Committed by Christoph Hellwig

Add a new blk_mq_tag_set structure that gets set up before we initialize the queue. A single blk_mq_tag_set structure can be shared by multiple queues.

Signed-off-by: Christoph Hellwig <hch@lst.de>

Modular export of blk_mq_{alloc,free}_tag_set added by me.

Signed-off-by: Jens Axboe <axboe@fb.com>
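A sketch of how a driver might set one up, with example values; example_mq_ops and struct example_cmd are assumed driver-side names, and the field set follows the 3.15-era blk-mq API:

```c
#include <linux/blk-mq.h>

static struct blk_mq_tag_set tag_set;

static struct request_queue *example_create_queue(void)
{
	struct request_queue *q;

	tag_set.ops		= &example_mq_ops;	/* driver's blk_mq_ops */
	tag_set.nr_hw_queues	= 1;
	tag_set.queue_depth	= 64;
	tag_set.numa_node	= NUMA_NO_NODE;
	tag_set.cmd_size	= sizeof(struct example_cmd);	/* per-request payload */

	if (blk_mq_alloc_tag_set(&tag_set))
		return NULL;

	/* may be called more than once: queues share the same set */
	q = blk_mq_init_queue(&tag_set);
	if (IS_ERR(q)) {
		blk_mq_free_tag_set(&tag_set);
		return NULL;
	}
	return q;
}
```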
-
Committed by Christoph Hellwig

The current blk_mq_init_commands/blk_mq_free_commands interface has two problems:

1) Because only the constructor is passed to blk_mq_init_commands, there is no easy way to clean up when a command initialization fails. The current code simply leaks the allocations done in the constructor.

2) There is no good place to call blk_mq_free_commands: before blk_cleanup_queue there is no guarantee that all outstanding commands have completed, so we can't free them yet. After blk_cleanup_queue the queue has usually been freed. This can be worked around by grabbing an unconditional reference before calling blk_cleanup_queue and dropping it after blk_mq_free_commands is done, although that's not exactly pretty and driver writers are guaranteed to get it wrong sooner or later.

Both issues are easily fixed by making the request constructor and destructor normal blk_mq_ops methods.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
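The resulting ops wiring might look like this sketch; the signatures follow the 3.15-era blk-mq API as described, and the example_* names are assumptions:

```c
/* One-time per-request construction; a failure here lets the core
 * unwind the requests initialized so far. */
static int example_init_request(void *data, struct request *rq,
				unsigned int hctx_idx,
				unsigned int request_idx,
				unsigned int numa_node)
{
	/* allocate per-command resources, e.g. DMA-able buffers */
	return 0;
}

static void example_exit_request(void *data, struct request *rq,
				 unsigned int hctx_idx,
				 unsigned int request_idx)
{
	/* release whatever example_init_request() set up */
}

static struct blk_mq_ops example_mq_ops = {
	.queue_rq	= example_queue_rq,	/* assumed to exist elsewhere */
	.init_request	= example_init_request,
	.exit_request	= example_exit_request,
};
```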
-
Committed by Christoph Hellwig

Drivers can reach their private data easily using the blk_mq_rq_to_pdu helper and don't need req->special. By not initializing it, the code can be simplified nicely, and we also shave off a few more instructions from the I/O path.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
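Usage sketch; struct example_cmd stands in for the per-driver command structure sized via cmd_size:

```c
#include <linux/blk-mq.h>

/* The per-request payload lives directly behind the request itself;
 * no pointer chase through rq->special. */
static struct example_cmd *example_cmd_from_rq(struct request *rq)
{
	return blk_mq_rq_to_pdu(rq);
}
```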
-
Committed by Jens Axboe

This was used in the olden days, back when onions were proper yellow. Basically it mapped to the current buffer to be transferred. With highmem having been added more than a decade ago, most drivers map pages out of a bio, and rq->buffer isn't pointing at anything valid. Convert old-style drivers to just use bio_data(). For the discard payload use case, just reference the page in the bio.

Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 13 April, 2014 (2 commits)
-
-
Committed by Mikulas Patocka

This patch fixes I/O errors with the sym53c8xx_2 driver when the disk returns QUEUE FULL status. When the controller encounters an error (including QUEUE FULL or BUSY status), it aborts all not-yet-submitted requests in the function sym_dequeue_from_squeue. This function aborts them with DID_SOFT_ERROR. If the disk has a full tag queue, the request that caused the overflow is aborted with QUEUE FULL status (and the SCSI midlayer properly retries it until it is accepted by the disk), but the sym53c8xx_2 driver aborts the following requests with DID_SOFT_ERROR --- for them, the midlayer does just a few retries and then signals the error up to sd. The result is that a disk returning QUEUE FULL causes request failures. The error was reproduced on a 53c895 with a COMPAQ BD03685A24 disk (rebranded ST336607LC) with a command queue of 48 or 64 tags. The disk has 64 tags, but under some access patterns it returns QUEUE FULL when there are fewer than 64 pending tags. The SCSI specification allows returning QUEUE FULL at any time, and it is up to the host to retry.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Matthew Wilcox <matthew@wil.cx>
Cc: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Vincenzo Maffione

This patch fixes the initialization of an array used in the TX datapath that was mistakenly initialized together with the RX datapath arrays. An out-of-range array access could happen when the RX and TX rings had different sizes.

Signed-off-by: Vincenzo Maffione <v.maffione@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 April, 2014 (14 commits)
-
-
Committed by hayeswang

When the device is unplugged, the driver would try to disable the device. Check the RTL8152_UNPLUG flag and skip touching the device when it is unplugged. This shortens the time needed to unload the driver.

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
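A sketch of the guard; the flag and structure names follow the r8152 driver as named in the commit text, while the function is hypothetical:

```c
/* Skip the power-down register sequence on a device that is already
 * gone; nothing there can be written to anyway. */
static void example_rtl_down(struct r8152 *tp)
{
	if (test_bit(RTL8152_UNPLUG, &tp->flags))
		return;

	/* ... normal disable/quiesce sequence ... */
}
```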
-
Committed by Marc Zyngier

The sun4i-emac driver is rather primitive and doesn't support promiscuous mode. This makes usage such as bridging impossible, which is a shame on virtualization-capable HW such as the Allwinner A20. The fix is fairly simple: move the RX setup code to the ndo_set_rx_mode vector, and add the required HW configuration when IFF_PROMISC is passed in by the core code. This has been tested on a generic A20 box running a few virtual machines hanging off a bridge with the EMAC chip as the link to the outside world.

Cc: Stefan Roese <sr@denx.de>
Cc: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Stefan Roese <sr@denx.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
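A sketch of such a handler; the register and bit names here are placeholders, not the real EMAC definitions:

```c
/* Hypothetical ndo_set_rx_mode implementation: reprogram the RX filter
 * whenever the core changes the interface flags. */
static void example_emac_set_rx_mode(struct net_device *ndev)
{
	struct emac_board_info *db = netdev_priv(ndev);
	unsigned int rcr = EMAC_RX_CTL_BASE;	/* placeholder base config */

	if (ndev->flags & IFF_PROMISC)
		rcr |= EMAC_RX_CTL_PASS_ALL;	/* placeholder: accept all frames */

	writel(rcr, db->membase + EMAC_RX_CTL_REG);
}
```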
-
Committed by Duan Jiong

This patch fixes a coccinelle error regarding the usage of IS_ERR and PTR_ERR instead of PTR_ERR_OR_ZERO.

Signed-off-by: Duan Jiong <duanj.fnst@cn.fujitsu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
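The rewrite in isolation, as a sketch:

```c
#include <linux/err.h>

static int example_check(void *ptr)
{
	/* before:
	 *	if (IS_ERR(ptr))
	 *		return PTR_ERR(ptr);
	 *	return 0;
	 */
	return PTR_ERR_OR_ZERO(ptr);
}
```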
-
Committed by Mike Marciniszyn

The code was incorrectly using sg_dma_address() and sg_dma_len() instead of ib_sg_dma_address() and ib_sg_dma_len(). This prevents srpt from functioning with the Intel HCA and indeed will corrupt memory badly.

Cc: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Tested-by: Vinod Kumar <vinod.kumar@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: stable@vger.kernel.org # 3.3+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
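The conversion pattern in sketch form; the ib_ wrappers let HCAs that do their own DMA mapping return usable addresses:

```c
#include <rdma/ib_verbs.h>

/* Fill an ib_sge from a mapped scatterlist entry through the ib_
 * wrappers instead of the raw sg_dma_* accessors. */
static void example_fill_sge(struct ib_device *ib_dev,
			     struct scatterlist *sg, struct ib_sge *sge)
{
	sge->addr   = ib_sg_dma_address(ib_dev, sg);	/* was sg_dma_address(sg) */
	sge->length = ib_sg_dma_len(ib_dev, sg);	/* was sg_dma_len(sg) */
}
```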
-
Committed by Andy Grover

Because it doesn't always create: if there's an existing one, it just returns it.

Signed-off-by: Andy Grover <agrover@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Committed by Andy Grover

These functions are not adding or deleting an lport. They are adding a wwn that may match an lport that is present on the system. Renaming ft_del_lport also means we won't have functions named both ft_del_lport and ft_lport_del any more.

Signed-off-by: Andy Grover <agrover@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Committed by Andy Grover

Rename struct ft_lport_acl to ft_lport_wwn. "acl" is associated with something different in LIO terms; really, ft_lport_wwn is the fabric-specific wrapper for the struct se_wwn. Rename "lacl" local variables to "ft_wwn" as well. Rename list_heads used as list members to make it clear they're nodes, not heads: lport_node becomes ft_wwn_node, and ft_lport_list becomes ft_wwn_list.

Signed-off-by: Andy Grover <agrover@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Committed by Andy Grover

tcm_fc doesn't support multiple TPGs per wwn; for proof, see ft_lport_find_tpg. Enforce this in the code by replacing ft_lport_wwn.tpg_list with a single pointer. We can't fold ft_tpg into ft_lport_wwn because they can have different lifetimes.

Signed-off-by: Andy Grover <agrover@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Committed by Andy Grover

Nobody outside tfc_conf.c uses it.

Signed-off-by: Andy Grover <agrover@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Committed by Andy Grover

ft_del_tpg checks that tpg->tport is set before unlinking the tpg from the tport when the tpg is being removed. Set this pointer in ft_tport_create, or the unlinking won't happen in ft_del_tpg and tport->tpg will reference a deleted object. This patch sets tpg->tport in ft_tport_create, because that's what ft_del_tpg checks, and it is the only way to get back to the tport to clear tport->tpg.

The bug was occurring when:

- lport created; tport (our per-lport, per-provider context) is allocated; tport->tpg = NULL
- tpg created
- a PRLI is received; ft_tport_create is called, the tpg is found, and tport->tpg is set
- tpg removed; ft_tpg is freed in ft_del_tpg; since tpg->tport was not set, tport->tpg is not cleared and points at freed memory
- future calls to ft_tport_create return the tport via the first conditional, instead of searching for a new tpg by calling ft_lport_find_tpg; tport->tpg is still invalid, and will access freed memory

See https://bugzilla.redhat.com/show_bug.cgi?id=1071340

Cc: stable@vger.kernel.org # 3.0+
Signed-off-by: Andy Grover <agrover@redhat.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
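The missing back-pointer, sketched as it would sit inside ft_tport_create (names from the commit text; the fragment is illustrative):

```c
/* inside ft_tport_create(), after locating the matching tpg */
tpg = ft_lport_find_tpg(lport);
if (tpg) {
	tport->tpg = tpg;
	tpg->tport = tport;	/* previously missing: lets ft_del_tpg
				 * find and clear tport->tpg */
}
```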
-
Committed by Alex Leung

This patch addresses an issue that occurs when an ABTS is received for an se_cmd that completes just before the sess_cmd_list is searched in core_tmr_abort_task(). When the sess_cmd_list is searched, since the ABTS and the FCP_CMND being aborted (which just completed) both have the same OXID, TFO->get_task_tag(TMR) returns a value that matches tmr->ref_task_tag (from TFO->get_task_tag(FCP_CMND)), and the Abort Task tries to abort itself. When this occurs, transport_wait_for_tasks() hangs forever, since the TMR is waiting for itself to finish. This patch adds a check to core_tmr_abort_task() to make sure the TMR does not attempt to abort itself.

Signed-off-by: Alex Leung <alex.leung@emulex.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
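One plausible shape of that check, sketched (illustrative, not the verbatim patch):

```c
/* inside core_tmr_abort_task(), walking the session command list */
list_for_each_entry(se_cmd, &se_sess->sess_cmd_list, se_cmd_list) {
	/* skip task-management commands, including the TMR itself,
	 * so an Abort Task can never match and wait on itself */
	if (se_cmd->se_tmr_req)
		continue;

	if (tmr->ref_task_tag != se_tpg->se_tpg_tfo->get_task_tag(se_cmd))
		continue;

	/* ... this is the command to abort ... */
}
```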
-
Committed by Mugunthan V N

When the Ethernet interface is put down and up under heavy Ethernet traffic, there is a possibility of an interrupt waiting in the irq controller to be processed. When the interface is brought up again, just after the interrupt is enabled, execution enters the ISR due to the previously unhandled interrupt; in the ISR, napi is not scheduled because napi has not yet been enabled in ndo_open. This results in disabled interrupts for CPSW and no packets being received. So this patch moves the enabling of interrupts to after napi_enable and the clearing of the CPDMA interrupts.

Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
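The ordering in sketch form; the helper names are assumptions standing in for the driver's own:

```c
/* ndo_open ordering sketch: napi must be able to run, and stale CPDMA
 * interrupt state must be cleared, before the interrupt is unmasked. */
napi_enable(&priv->napi);
example_cpdma_clear_pending_irqs(priv);	/* hypothetical helper */
example_cpsw_intr_enable(priv);		/* unmask only now */
```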
-
Committed by Mugunthan V N

When the Ethernet interface is brought down during high Ethernet traffic, cpsw produces the following warn dump. When cpdma has already processed the packet, the status will be greater than 0, so cpsw_rx_handler considers that the interface is up and tries to resubmit one more rx buffer to cpdma, which fails as the DMA is in the teardown process. This can be avoided by checking the interface state before processing the received packet; if the interface is down, just discard and free the skb and return.

[ 2823.104591] WARNING: CPU: 0 PID: 1823 at drivers/net/ethernet/ti/cpsw.c:711 cpsw_rx_handler+0x148/0x164()
[ 2823.114654] Modules linked in:
[ 2823.117872] CPU: 0 PID: 1823 Comm: ifconfig Tainted: G W 3.14.0-11992-gf34c4a35 #11
[ 2823.126860] [<c0014b5c>] (unwind_backtrace) from [<c00117e4>] (show_stack+0x10/0x14)
[ 2823.135030] [<c00117e4>] (show_stack) from [<c0533a9c>] (dump_stack+0x80/0x9c)
[ 2823.142619] [<c0533a9c>] (dump_stack) from [<c003f0e0>] (warn_slowpath_common+0x6c/0x90)
[ 2823.151141] [<c003f0e0>] (warn_slowpath_common) from [<c003f120>] (warn_slowpath_null+0x1c/0x24)
[ 2823.160336] [<c003f120>] (warn_slowpath_null) from [<c03caeb0>] (cpsw_rx_handler+0x148/0x164)
[ 2823.169314] [<c03caeb0>] (cpsw_rx_handler) from [<c03c730c>] (__cpdma_chan_free+0x90/0xa8)
[ 2823.178028] [<c03c730c>] (__cpdma_chan_free) from [<c03c7418>] (__cpdma_chan_process+0xf4/0x134)
[ 2823.187279] [<c03c7418>] (__cpdma_chan_process) from [<c03c7560>] (cpdma_chan_stop+0xb4/0x17c)
[ 2823.196349] [<c03c7560>] (cpdma_chan_stop) from [<c03c766c>] (cpdma_ctlr_stop+0x44/0x9c)
[ 2823.204872] [<c03c766c>] (cpdma_ctlr_stop) from [<c03cb708>] (cpsw_ndo_stop+0x154/0x188)
[ 2823.213321] [<c03cb708>] (cpsw_ndo_stop) from [<c046f0ec>] (__dev_close_many+0x84/0xc8)
[ 2823.221761] [<c046f0ec>] (__dev_close_many) from [<c046f158>] (__dev_close+0x28/0x3c)
[ 2823.230012] [<c046f158>] (__dev_close) from [<c0474ca8>] (__dev_change_flags+0x88/0x160)
[ 2823.238483] [<c0474ca8>] (__dev_change_flags) from [<c0474da0>] (dev_change_flags+0x18/0x48)
[ 2823.247316] [<c0474da0>] (dev_change_flags) from [<c04d12c4>] (devinet_ioctl+0x61c/0x6e0)
[ 2823.255884] [<c04d12c4>] (devinet_ioctl) from [<c045c660>] (sock_ioctl+0x68/0x2a4)
[ 2823.263789] [<c045c660>] (sock_ioctl) from [<c0125fe4>] (do_vfs_ioctl+0x78/0x61c)
[ 2823.271629] [<c0125fe4>] (do_vfs_ioctl) from [<c01265ec>] (SyS_ioctl+0x64/0x74)
[ 2823.279284] [<c01265ec>] (SyS_ioctl) from [<c000e580>] (ret_fast_syscall+0x0/0x48)

Signed-off-by: Mugunthan V N <mugunthanvnm@ti.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
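The guard described above, as a sketch of the handler's early-exit path:

```c
/* in cpsw_rx_handler(): if the interface is going down, drop the skb
 * instead of resubmitting an RX buffer to a DMA in teardown */
if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
	dev_kfree_skb_any(skb);
	return;
}
```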
-
Committed by David S. Miller

Several spots in the kernel perform a sequence like:

	skb_queue_tail(&sk->sk_receive_queue, skb);
	sk->sk_data_ready(sk, skb->len);

But at the moment we place the SKB onto the socket receive queue, it can be consumed and freed up. So this skb->len access is potentially to freed-up memory. Furthermore, the skb->len can be modified by the consumer, so it is possible that the value isn't accurate. And finally, no actual implementation of this callback uses the length argument. And since nobody actually cared about its value, lots of call sites pass arbitrary values in, such as '0' and even '1'. So just remove the length argument from the callback; that way there is no confusion whatsoever, and all of these use-after-free cases get fixed as a side effect. Based upon a patch by Eric Dumazet and his suggestion to audit this issue tree-wide.

Signed-off-by: David S. Miller <davem@davemloft.net>
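The interface change in sketch form; example_data_ready is an assumed implementation:

```c
/* after the change, the callback takes only the socket */
static void example_data_ready(struct sock *sk)
{
	/* readers take lengths from the queue itself, under its lock */
}

/* call sites shrink to: */
skb_queue_tail(&sk->sk_receive_queue, skb);
sk->sk_data_ready(sk);
```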
-