- 26 August 2021, 2 commits
-
-
By Mathias Nyman
Only TDs with status TD_CLEARING_CACHE will be given back after the cache is cleared with a Set TR Deq command. xhci_invalidate_cached_td() failed to set the TD_CLEARING_CACHE status for some cancelled TDs, as it assumed an endpoint only needs to clear the TD it stopped on. This isn't always true. For example, with streams enabled an endpoint may have several stream rings, each stopping on a different TD. Note that if an endpoint has several stream rings, the current code will still only clear the cache of the stream ring pointed to by the last cancelled TD in the cancel list. This patch only focuses on making sure all cancelled TDs are given back, avoiding a hung task after device removal. Another fix is needed to clear the caches of all stream rings with cancelled TDs, but it is not as urgent. This issue was simultaneously discovered and debugged by Tao Wang, with a slightly different fix proposal.

Fixes: 674f8438 ("xhci: split handling halted endpoints into two steps")
Cc: <stable@vger.kernel.org> # 5.12
Reported-by: Tao Wang <wat@codeaurora.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210820123503.2605901-4-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Remove static char buffer usage from the following decode functions:
xhci_decode_ctrl_ctx()
xhci_decode_slot_context()
xhci_decode_usbsts()
xhci_decode_doorbell()
xhci_decode_ep_context()
The caller must now provide a buffer to use. In tracing, use __get_str() as recommended to pass the buffer. Minor changes are needed in other xhci code as these functions are also used elsewhere.

Cc: <stable@vger.kernel.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210820123503.2605901-3-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 17 June 2021, 1 commit
-
-
By Mathias Nyman
Save a bit of power by not interrupting so often by default when the XHCI_AVOID_BEI quirk is set. Normally the xhci driver only generates an interrupt on the last isochronous TRB of an URB. In a common UVC webcam use case there are 32 TRBs per URB. If the XHCI_AVOID_BEI quirk is set, the xhci driver instead forces an interrupt on every 8th isoc TRB to make sure the event ring doesn't get too full. This is, however, far too frequent for common single-webcam use cases, causing 1000 interrupts/sec and thus poor power management. Instead, start by interrupting on every 32nd isoc TRB, and halve the interval if the event ring becomes half full. Stop halving when reaching a rate of every 8th TRB. This is a one-way adjustment: if the interrupt rate is increased it stays high until the driver is reloaded. The highest rate is the same as the old default rate.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210617150354.1512157-3-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
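A minimal sketch of the moderation scheme described above, against the driver's xhci_hcd structure; the isoc_bei_interval field and the interval constants are assumptions made here for illustration, not necessarily the names the driver uses.

```c
/* Assumed names, for illustration only. */
#define AVOID_BEI_INTERVAL_MAX	32	/* start: interrupt on every 32nd isoc TRB */
#define AVOID_BEI_INTERVAL_MIN	8	/* never interrupt more often than every 8th */

/* While queuing isoc TRBs: should this TRB raise an interrupt? */
static bool isoc_trb_should_interrupt(struct xhci_hcd *xhci, unsigned int trb_index)
{
	if (!(xhci->quirks & XHCI_AVOID_BEI))
		return false;	/* normal case: only the URB's last TRB interrupts */

	return (trb_index % xhci->isoc_bei_interval) == 0;
}

/*
 * From the event handler: if the event ring has filled past the halfway
 * mark, halve the interval (interrupt twice as often). The interval is
 * never raised again, and never drops below the old every-8th-TRB rate.
 */
static void isoc_bei_interval_update(struct xhci_hcd *xhci, bool ring_half_full)
{
	if (ring_half_full && xhci->isoc_bei_interval > AVOID_BEI_INTERVAL_MIN)
		xhci->isoc_bei_interval /= 2;
}
```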
-
- 25 May 2021, 2 commits
-
-
By Mathias Nyman
If an endpoint halts due to a stall, the dequeue pointer read from hardware may already be set ahead of the stalled TRB. After commit 674f8438 ("xhci: split handling halted endpoints into two steps") in 5.12, the xhci driver won't issue a Set TR Dequeue command if the hardware dequeue pointer is already in the right place. It turns out the Set TR Dequeue Pointer command is needed anyway, as in addition to moving the dequeue pointer it also clears the endpoint state and cache.

Fixes: 674f8438 ("xhci: split handling halted endpoints into two steps")
Cc: <stable@vger.kernel.org> # 5.12
Reported-by: Peter Ganzhorn <peter.ganzhorn@googlemail.com>
Tested-by: Peter Ganzhorn <peter.ganzhorn@googlemail.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210525074100.1154090-3-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
The 5.12 kernel changed how xhci handles cancelled URBs and halted endpoints. Among these changes, cancelled and stalled URBs are no longer given back before they are cleared from the xHC hardware cache. These changes unfortunately cleared the -EPIPE status of a stalled transfer in one case before giving back the URB, causing a USB card reader to stop working.

Fixes: 674f8438 ("xhci: split handling halted endpoints into two steps")
Cc: <stable@vger.kernel.org> # 5.12
Reported-by: Peter Ganzhorn <peter.ganzhorn@googlemail.com>
Tested-by: Peter Ganzhorn <peter.ganzhorn@googlemail.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210525074100.1154090-2-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 13 May 2021, 1 commit
-
-
By Mathias Nyman
Commit 9ebf3000 ("xhci: Fix halted endpoint at stop endpoint command completion") in 5.12 changed how cancelled URBs are given back. To cancel a URB, the xhci driver needs to stop the endpoint first. To clear a halted endpoint, the xhci driver needs to reset the endpoint. In rare cases when an endpoint halt (error) races with an endpoint stop, we need to clear the halt before removing and giving back the cancelled URB. The above change in 5.12 takes care of this, but it also relies on the reset endpoint completion handler to give back the cancelled URBs. There are cases where the driver refuses to queue reset endpoint commands, for example when a link suddenly goes to an inactive error state. In this case the cancelled URB is never given back. Fix this by giving back the URB in the stop endpoint handler if queuing a reset endpoint command fails.

Fixes: 9ebf3000 ("xhci: Fix halted endpoint at stop endpoint command completion")
Cc: <stable@vger.kernel.org> # 5.12
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210512080816.866037-3-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 6 April 2021, 1 commit
-
-
By Mathias Nyman
The same values are parsed several times from transfer and event TRBs by different functions in the same call path, all while processing one transfer event. As the TRBs are in DMA memory and can be accessed by the xHC host, we want to avoid this to prevent double-fetch issues. To resolve this, pass the already parsed values to the different functions in the path of parsing a transfer event.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210406070208.3406266-5-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
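As an illustration of the pattern, the sketch below reads each DMA-visible field of the event TRB exactly once into locals and hands the parsed values down the call path; process_td() stands in for the real callees and is not an actual driver function.

```c
/* Sketch only: GET_COMP_CODE()/EVENT_TRB_LEN() are the usual xhci field
 * extractors; process_td() is a placeholder for the downstream handlers. */
static int handle_tx_event_sketch(struct xhci_hcd *xhci,
				  struct xhci_transfer_event *event)
{
	u32 comp_code  = GET_COMP_CODE(le32_to_cpu(event->transfer_len));
	u32 remaining  = EVENT_TRB_LEN(le32_to_cpu(event->transfer_len));
	u64 ep_trb_dma = le64_to_cpu(event->buffer);

	/* Callees receive plain values, never the raw TRB pointer, so the
	 * xHC cannot change what the driver acts on between two reads. */
	return process_td(xhci, comp_code, remaining, ep_trb_dma);
}
```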
-
- 23 March 2021, 1 commit
-
-
By Johan Hovold
Force-threaded interrupt handlers used to run with interrupts enabled, which could lead to deadlocks if a threaded handler shared a lock with code running in hard interrupt context (e.g. timer callbacks) and did not explicitly disable interrupts. Since commit 81e2073c ("genirq: Disable interrupts for force threaded handlers"), interrupt handlers always run with interrupts disabled on non-RT, so drivers no longer need to handle forced threading ("threadirqs") themselves. Drop the now obsolete workaround added by commit 63aea0db ("USB: xhci: fix lock-inversion problem").

Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Link: https://lore.kernel.org/r/20210322111140.32056-1-johan@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 11 March 2021, 1 commit
-
-
By Stanislaw Gruszka
On some systems rt2800usb and mt7601u devices are unable to operate since commit f8f80be5 ("xhci: Use soft retry to recover faster from transaction errors"). It seems some xHCI controllers cannot perform Soft Retry correctly, affecting those devices. To avoid the problem, add an xhci->quirks flag that restores the pre-Soft-Retry xhci behaviour for affected xHCI controllers. Currently those are AMD_PROMONTORYA_4 and AMD_PROMONTORYA_2, since users confirmed that the issue happens on those xHCI hosts and is gone after disabling Soft Retry.

[minor commit message rewording for checkpatch -Mathias]

Fixes: f8f80be5 ("xhci: Use soft retry to recover faster from transaction errors")
Cc: <stable@vger.kernel.org> # 4.20+
Reported-by: Bernhard <bernhard.gebetsberger@gmx.at>
Tested-by: Bernhard <bernhard.gebetsberger@gmx.at>
Signed-off-by: Stanislaw Gruszka <stf_xl@wp.pl>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=202541
Link: https://lore.kernel.org/r/20210311115353.2137560-2-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
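A sketch of how such a quirk typically gates the recovery path. The flag name XHCI_NO_SOFT_RETRY matches this commit's intent, but the PCI device-ID macros and the helpers below are assumptions used here only to illustrate the shape of the change.

```c
/* PCI quirk setup (sketch): mark the affected AMD Promontory hosts. */
if (pdev->vendor == PCI_VENDOR_ID_AMD &&
    (pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_4 ||
     pdev->device == PCI_DEVICE_ID_AMD_PROMONTORYA_2))
	xhci->quirks |= XHCI_NO_SOFT_RETRY;

/* Transfer-error handling (sketch): only attempt Soft Retry when allowed. */
if (comp_code == COMP_USB_TRANSACTION_ERROR &&
    !(xhci->quirks & XHCI_NO_SOFT_RETRY)) {
	queue_soft_retry(xhci, ep);		/* placeholder helper */
} else {
	/* pre-Soft-Retry behaviour: recover with a full endpoint reset */
	handle_halted_endpoint(xhci, ep);	/* placeholder helper */
}
```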
-
- 10 March 2021, 1 commit
-
-
By Chunfeng Yun
Currently building xhci-hcd.ko depends on USB_XHCI_MTK, which is not flexible in some cases. For example, if USB_XHCI_HCD is y and USB_XHCI_MTK is m, we can't implement extended functions by only updating xhci-mtk.ko. This patch removes the dependency.

Acked-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Chunfeng Yun <chunfeng.yun@mediatek.com>
Link: https://lore.kernel.org/r/0b62e21ddfacc1c2874726dd27ccab80c993f303.1615170625.git.chunfeng.yun@mediatek.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 3 February 2021, 1 commit
-
-
By Mathias Nyman
The xhci driver may in some special cases need to copy small amounts of payload data to a bounce buffer in order to meet the boundary and alignment restrictions set by the xHCI specification. In the majority of these cases the data is in a sg list, and the driver incorrectly assumed the data is always in urb->sg when using the bounce buffer. If the data is instead contiguous, in urb->transfer_buffer, we may still need to bounce buffer a small part if the data starts very close (less than packet size) to a 64k boundary. Check whether a sg list is used before copying data to/from it.

Fixes: f9c589e1 ("xhci: TD-fragment, align the unsplittable case with a bounce buffer")
Cc: stable@vger.kernel.org
Reported-by: Andreas Hartmann <andihartmann@01019freenet.de>
Tested-by: Andreas Hartmann <andihartmann@01019freenet.de>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210203113702.436762-2-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
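The two checks involved can be sketched as below: whether the payload runs into a 64K TRB boundary with less than one packet to spare, and whether the data lives in urb->sg or in the contiguous urb->transfer_buffer before copying. The helper names are illustrative; sg_pcopy_to_buffer() and TRB_MAX_BUFF_SIZE are the kernel facilities such code is built on.

```c
/* Does a transfer starting at 'addr' have less than one max-packet left
 * before the next 64K TRB boundary? If so, the tail must be bounced. */
static bool td_needs_bounce(u64 addr, unsigned int maxpacket)
{
	unsigned int to_boundary = TRB_MAX_BUFF_SIZE -
				   (addr & (TRB_MAX_BUFF_SIZE - 1));

	return to_boundary < maxpacket;
}

/* The fix in a nutshell: pick the copy source based on where the data
 * actually is, instead of assuming it is always in urb->sg. */
static void bounce_copy_in(struct urb *urb, void *bounce, size_t len, size_t offs)
{
	if (urb->num_sgs)	/* payload is in a scatter-gather list */
		sg_pcopy_to_buffer(urb->sg, urb->num_sgs, bounce, len, offs);
	else			/* payload is contiguous in transfer_buffer */
		memcpy(bounce, urb->transfer_buffer + offs, len);
}
```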
-
- 29 January 2021, 27 commits
-
-
By Mathias Nyman
If we receive a transfer event indicating that an endpoint should be halted, but the current endpoint state doesn't match, then the halt might already have been resolved by the stop endpoint completion handler, which detects the halted endpoint due to a context state error. In this case the TD we halted on has already been moved to the cancelled TD list, and should no longer be completed successfully and given back. Let the stop endpoint completion handler reset the endpoint, and then let the reset endpoint handler give back the cancelled TD along with all the others on the cancelled TD list.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-28-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
A halted endpoint can be detected both when transfer events complete and in stop endpoint command completion. Both of these handlers will start clearing up the halted endpoint and queue a reset endpoint command. It's possible to get both events for the same halted endpoint if the endpoint stalls right after a URB cancel queues a stop endpoint command. Use the EP_HALTED flag to prevent resetting the endpoint twice.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-27-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
xhci_find_new_dequeue_state() and xhci_queue_new_dequeue_state() are no longer used after introducing the move_dequeue_past_td() function. Also remove struct xhci_dequeue_state as it's no longer used.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-26-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Replace the xhci_find_new_dequeue_state() and xhci_queue_new_dequeue_state() functions with one combined function. These functions were always called one after the other, and had a lot of extra code just to pass the newly found dequeue state from the first function to the second. The new function also returns an error if queuing the new dequeue state fails, so the caller can decide on recovery measures to handle it.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-25-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Handle the race where a stop endpoint command fails with a "context state error" because the hardware hasn't actually started the ring yet after a previous URB cancellation completed and restarted the endpoint. Flushing the doorbell write that restarts the endpoint reduced these cases, but didn't completely resolve them. Check whether the ring is running in the stop endpoint completion handler, and issue a new stop endpoint command in that case.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-24-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
xhci 4.6.9: "A busy endpoint may asynchronously transition from the Running to the Halted or Error state due to error conditions detected while processing TRBs. A possible race condition may occur if software, thinking an endpoint is in the running state, issues a Stop Endpoint Command, however at the same time the xHC asynchronously transitions the endpoint to the Halted or Error state. In this case, a Context State Error may be generated for the command completion. Software may verify that this case occurred by inspecting the EP State for Halted or Error when a Stop Endpoint Command results in a Context State Error."

Halted endpoints were not detected or handled at all in the stop endpoint completion handler. A Set TR Deq pointer command was bluntly queued instead of resetting the endpoint first, and that Set TR Deq command would fail with a context state error. Fix this case by resetting the halted endpoint first to get it to a stopped state instead of the halted (error) state. Handle cancelled TDs once the endpoint reset completes, invalidating cancelled TDs on the ring either by turning them into no-ops, or, if the ring stopped on a cancelled TD, by moving the hardware dequeue pointer past it, which clears the cancelled TD from the hardware cache and makes sure the hardware does not process it.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-23-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
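A high-level sketch of the two-step flow this and the following patches describe; the helper names here (queue_reset_ep, invalidate_cancelled_tds, ring_doorbell) are placeholders for the driver's internal functions, not its actual API.

```c
/* Stop endpoint completion (sketch): a Halted endpoint cannot take a
 * Set TR Deq command, so reset it first and postpone the TD cleanup. */
static void stop_ep_completion(struct xhci_hcd *xhci, struct xhci_virt_ep *ep,
			       u32 ep_ctx_state)
{
	if (ep_ctx_state == EP_STATE_HALTED) {
		queue_reset_ep(xhci, ep);	/* placeholder */
		return;		/* cancelled TDs handled on reset completion */
	}
	invalidate_cancelled_tds(xhci, ep);	/* ring is stopped: safe now */
}

/* Reset endpoint completion (sketch): the endpoint is now Stopped, so the
 * cancelled TDs can be invalidated - turned into no-ops, or, if the ring
 * stopped on a cancelled TD, skipped by moving the HW dequeue past it. */
static void reset_ep_completion(struct xhci_hcd *xhci, struct xhci_virt_ep *ep)
{
	invalidate_cancelled_tds(xhci, ep);	/* placeholder */
	ring_doorbell(xhci, ep);		/* restart the ring */
}
```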
-
By Mathias Nyman
Don't queue both a reset endpoint command and a Set TR Deq command at once when handling a halted endpoint. Split this into two steps: initially only queue a reset endpoint command, and then, if needed, queue a Set TR Deq command in the reset endpoint handler.

Note: This removes the RESET_EP_QUIRK handling added in commit ac9d8fe7 ("USB: xhci: Add quirk for Fresco Logic xHCI hardware."). That quirk was added in 2009 for prototype xHCI hardware meant for evaluation purposes only, which should never have reached consumers. This hardware could not handle two commands queued at once, and had bad data in the output context after a reset endpoint command. After this patch, two commands are no longer queued at once, so that part is solved by this rewrite, but the workaround for bad data in the output context, solved by issuing an extra configure endpoint command, is bluntly removed. Adding this workaround to the new rewrite would just add complexity, and I think it's time to let this quirk go. Print a debug message instead.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-22-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Halted endpoints can be discovered both when handling transfer events and at command completion. Move the code that handles halted endpoints before both of those event handlers. Rename the function to xhci_handle_halted_ep() to better describe what it does. Try to reserve the word "cleanup" in function names for last-stage cleanup activities. No functional changes.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-21-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Refactor the handler for stop endpoint command completion. Yank out the part that invalidates cancelled TDs and turn it into a separate function. Invalidating cancelled TDs should be done while the ring is stopped, but not exclusively in the stop endpoint command completion handler; we will need to invalidate TDs after resetting endpoints as well.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-20-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
In cases where the TD can't be given back in the current handler, we want to be able to store it until it's time to return it.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-19-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Use the existing xhci_td_cleanup() to give back cancelled TDs when a ring is stopped. A minor change is needed to make sure we don't try to remove an already removed TD from the list, as cancelled TDs are removed from the td_list immediately when they are cancelled.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-18-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
No functional changes.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-17-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Create a separate helper function to issue the reset endpoint commands that clear halted endpoints. This is useful for cases where a halted endpoint is discovered while completing another command, and the endpoint halt needs to be cleared with an endpoint reset first. No functional changes.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-16-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
A stop endpoint command fails with a "context state error" if the endpoint is already stopped. This case was observed when a previous URB cancel had just completed and rung the doorbell to restart the ring, when a new URB cancel queued a stop endpoint command. From the xHC hardware's point of view the endpoint had not yet started, so the stop endpoint command failed with a context state error. Right after this the doorbell ring took effect and the ring was restarted. The interrupt handler then saw a stop endpoint command completion event with a "context state error" and discovered that the ring was back up and running. Flushing the doorbell write reduces these cases in stress testing, but does not completely remove the issue.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-15-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
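The flush mentioned here is the usual posted-MMIO-write technique: read back from the doorbell register right after ringing it, so the write reaches the controller before the driver goes on to queue the stop endpoint command. A sketch, with db_addr standing in for the endpoint's doorbell register:

```c
/* Sketch: ring the endpoint doorbell and flush the posted MMIO write.
 * db_addr stands in for &xhci->dba->doorbell[slot_id]. */
static void ring_ep_doorbell_flushed(void __iomem *db_addr,
				     unsigned int ep_index,
				     unsigned int stream_id)
{
	writel(DB_VALUE(ep_index, stream_id), db_addr);
	/* Read back from the same register so the write is no longer
	 * sitting in a write buffer when we queue the next command. */
	readl(db_addr);
}
```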
-
By Mathias Nyman
The xhci driver relies on link TRBs existing in the correct places in the TRB ring buffers shared with the host controller. The controller should not modify these link TRBs, but in theory a faulty xHC could. Add some basic sanity checks to avoid infinite loops in the interrupt handler, or accesses to unallocated memory outside a ring segment, caused by missing or misplaced link TRBs.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-14-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
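One way to express such a sanity check: bound any walk across a segment by the segment's TRB count, so a missing or misplaced link TRB turns into an error instead of an endless loop. A simplified sketch, not the driver's exact check:

```c
/* Sketch: find the link TRB that must terminate this segment, giving up
 * after TRBS_PER_SEGMENT steps if the ring has been corrupted. */
static union xhci_trb *find_segment_link_trb(struct xhci_segment *seg)
{
	union xhci_trb *trb = seg->trbs;
	unsigned int i;

	for (i = 0; i < TRBS_PER_SEGMENT; i++, trb++) {
		if (trb_is_link(trb))
			return trb;	/* well-formed segment */
	}
	/* No link TRB where one must exist: refuse to keep walking. */
	return NULL;
}
```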
-
By Mathias Nyman
Instead of re-reading, masking, and endianness-correcting the same TRB several times to get the TRB type from an event, do it once and store it in a local variable. Also pass the trb_type directly to the vendor-specific event handler, avoiding one more similar read. In addition to the security benefit, this also cleans up the code and helps readability.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-13-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
When finishing a TD we walk the endpoint dequeue TRB pointer until it matches the last TRB of the TD. TDs can contain over 100 TRBs, meaning we call a function 100 times, do a few comparisons, and increase a couple of values for each of these calls, all in interrupt context. This can all be avoided by adding a pointer to the last TRB segment and the number of TRBs in the TD. So instead of walking through each TRB, just set the new dequeue segment, dequeue pointer, and number of free TRBs directly. Getting rid of the while loop also reduces the risk of getting stuck in an infinite loop in the interrupt handler, as the loop relied on valid matching dequeue and last_trb values to break.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-12-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
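A sketch of the bookkeeping this adds. The last_trb_seg and num_trbs fields are written here as the description names them, and the final advance past the TD's last TRB is simplified away; this is an illustration of the idea, not the driver's exact code.

```c
/* Assumed additions to struct xhci_td for this sketch:
 *   struct xhci_segment *last_trb_seg;  segment holding the TD's last TRB
 *   unsigned int num_trbs;              number of TRBs the TD occupies
 */

/* Finish a TD without walking the ring one TRB at a time. */
static void finish_td_fast(struct xhci_ring *ring, struct xhci_td *td)
{
	/* Jump the software dequeue state straight to the TD's last TRB
	 * instead of walking there one TRB at a time... */
	ring->deq_seg = td->last_trb_seg;
	ring->dequeue = td->last_trb;
	/* ...and account for the freed TRBs in one step. The real code
	 * then advances once more, past that final TRB. */
	ring->num_trbs_free += td->num_trbs;
}
```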
-
By Lalithambika Krishna Kumar

Check that the slot_id we dug out of a command completion event TRB is valid before using it to identify the slot associated with the command that generated the event.

Signed-off-by: Lalithambika Krishna Kumar <lalithambika.krishnakumar@intel.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-11-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
The xhci driver links segments in a ring buffer together by turning the last TRB of a segment into a link TRB pointing to the beginning of the next segment. If, for some unknown reason, the first TRB of every segment is a link TRB pointing to the next segment, then prepare_ring() loops indefinitely. This isn't something the xhci driver would do, and xHC hardware, which has access to these rings, shouldn't be writing link TRBs either, but with broken xHC hardware this could in theory happen.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-10-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
The one case that used this function can use the xhci_triad_to_transfer_ring() helper instead. This avoids having several functions that do basically the same thing.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-9-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Two existing ring helpers, xhci_triad_to_transfer_ring() and xhci_stream_id_to_ring(), have partially overlapping functionality. Both have limitations, especially with boundary checking. Add a new xhci_virt_ep_to_ring() helper with proper boundary checking that can replace parts of one helper, and will later completely replace the other.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-8-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
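A sketch of what a boundary-checked lookup of this kind looks like, simplified from what the actual helper may do:

```c
/* Sketch: map (virt_ep, stream_id) to the ring to use, rejecting
 * out-of-range or inconsistent stream IDs instead of trusting them. */
static struct xhci_ring *virt_ep_to_ring_sketch(struct xhci_virt_ep *ep,
						unsigned int stream_id)
{
	if (!(ep->ep_state & EP_HAS_STREAMS))
		return stream_id ? NULL : ep->ring;	/* no streams: id must be 0 */

	/* Streams in use: stream 0 is reserved, and the id must be within
	 * the number of streams allocated for this endpoint. */
	if (!stream_id || stream_id >= ep->stream_info->num_streams)
		return NULL;

	return ep->stream_info->stream_rings[stream_id];
}
```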
-
By Mathias Nyman
Check that the xhci_virt_dev structure we dug out based on a slot_id value from a command completion is valid before dereferencing it.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-7-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
In several event handlers we need to find the right endpoint structure from the slot_id and ep_index in the event. Add a helper for this that checks that slot_id and ep_index are valid.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-6-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
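The validation such a helper performs can be sketched as below; the exact bounds are assumptions here (slot IDs are 1-based, and a device has 31 endpoint contexts).

```c
/* Sketch: return the endpoint for (slot_id, ep_index) from an event, or
 * NULL if either value is out of range or the slot is not set up. */
static struct xhci_virt_ep *get_virt_ep_sketch(struct xhci_hcd *xhci,
					       unsigned int slot_id,
					       unsigned int ep_index)
{
	if (slot_id == 0 || slot_id >= MAX_HC_SLOTS)
		return NULL;		/* slot ids are 1-based */
	if (ep_index >= 31)
		return NULL;		/* 31 endpoint contexts per device */
	if (!xhci->devs[slot_id])
		return NULL;		/* slot not (or no longer) enabled */

	return &xhci->devs[slot_id]->eps[ep_index];
}
```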
-
By Mathias Nyman
Several command completion handlers are passed the event TRB as a parameter even though it's not used. Remove it.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-5-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Instead of passing the slot id and endpoint index to cleanup_halted_endpoint(), pass the endpoint structure pointer, as it's already known. This avoids once again digging out the endpoint structure based on slot id and endpoint index, and passing them along the call chain for this purpose only. Add slot_id to the virt_dev structure so that it can easily be found from a virt_dev, or from its child, the virt_ep endpoint structure.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-4-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
Isochronous endpoints do not support streams, meaning that there is only one ring per endpoint. Avoid double-fetching the transfer event DMA address to get the ring. This also makes passing the event to skip_isoc_td() unnecessary.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-3-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Mathias Nyman
When handling transfer events, the event is passed along the handling call path and parsed again on several occasions. The event contains the slot_id and endpoint index, from which the driver endpoint structure can be found. There was, however, no way to get the endpoint index or parent usb device from this endpoint structure. A lot of extra event parsing, and thus some DMA double-fetch cases, as well as excess variables and code, can be avoided by adding the endpoint index and a parent usb virt device pointer to the endpoint structure.

Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210129130044.206855-2-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 16 January 2021, 1 commit
-
-
By Mathias Nyman
Once the command ring doorbell is rung, the xHC controller will parse all command TRBs on the command ring that have their cycle bit set properly. If the driver has just started writing the next command TRB to the ring when the hardware finishes the previous TRB, then the hardware might fetch an incomplete TRB as long as its cycle bit is set correctly. A command TRB is 16 bytes (128 bits) long. The driver writes the command TRB in four 32-bit chunks, with the chunk containing the cycle bit last. This does not, however, guarantee that the chunks actually get written in that order. This was detected in stress testing when cancelling URBs with several connected USB devices. Two consecutive "Set TR Dequeue pointer" commands got queued right after each other, and the second one was only partially written when the controller parsed it, causing the dequeue pointer to be set to bogus values. This was seen as error messages: "Mismatch between completed Set TR Deq Ptr command & xHCI internal state". The solution is to add a write memory barrier before writing the cycle bit.

Cc: <stable@vger.kernel.org>
Tested-by: Ross Zwisler <zwisler@google.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20210115161907.2875631-2-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
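The core of the fix is an ordering barrier between the first three TRB words and the final word carrying the cycle bit. A sketch of a queue_trb()-style writer over the generic 4-word TRB layout:

```c
/* Sketch: write one TRB so the xHC can never see a valid cycle bit on an
 * otherwise half-written TRB. */
static void queue_trb_sketch(struct xhci_generic_trb *trb,
			     u32 field1, u32 field2, u32 field3, u32 field4)
{
	trb->field[0] = cpu_to_le32(field1);
	trb->field[1] = cpu_to_le32(field2);
	trb->field[2] = cpu_to_le32(field3);
	/* Make sure the first three words are visible in memory before the
	 * word that flips the TRB's ownership to the xHC. */
	wmb();
	trb->field[3] = cpu_to_le32(field4);	/* field4 carries TRB_CYCLE */
}
```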
-
- 9 December 2020, 1 commit
-
-
By Tejas Joglekar
The Synopsys xHC has an internal TRB cache of size TRB_CACHE_SIZE for each endpoint. The default value for TRB_CACHE_SIZE is 16 for SS and 8 for HS. The controller loads and updates the TRB cache from the transfer ring in system memory whenever the driver issues a start transfer or update transfer command. For chained TRBs, the Synopsys xHC requires that the total amount of bytes for all TRBs loaded in the TRB cache be greater than or equal to 1 MPS, or that the chain ends within the TRB cache (with a last TRB). If this requirement is not met, the controller will not be able to send or receive a packet and will hang, causing a driver timeout and error. This can be a problem if a class driver queues SG requests with many small-buffer entries, since the XHCI driver creates a chained TRB for each entry, which may trigger this issue. This patch adds logic to the XHCI driver to detect and prevent this from happening: for every (TRB_CACHE_SIZE - 2) entries, we check the total buffer size of the SG list, and if the last window of (TRB_CACHE_SIZE - 2) SG entries does not make up at least 1 MPS, we create a temporary buffer and consolidate the full SG list into it. We check a window of (TRB_CACHE_SIZE - 2) because a link and/or event data TRB may take up to 2 of the cache entries. We discovered this issue with devices on other platforms, but have not yet come across any device that triggers this on Linux. It could, however, be a real problem now or in the future; all it takes is N small chained TRBs. And other instances of the Synopsys IP may have smaller values for TRB_CACHE_SIZE, which would exacerbate the problem.

Signed-off-by: Tejas Joglekar <joglekar@synopsys.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Link: https://lore.kernel.org/r/20201208092912.1773650-3-mathias.nyman@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
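A simplified sketch of the detection step: slide a (TRB_CACHE_SIZE - 2)-entry window over the SG list and flag the URB for consolidation if any full window sums to less than one max packet. The constant value and helper name are assumptions; only the windowing idea is taken from the description above.

```c
#include <linux/scatterlist.h>

#define TRB_CACHE_SIZE_SS	16			/* Synopsys TRB cache entries (SS) */
#define CACHE_WINDOW		(TRB_CACHE_SIZE_SS - 2)	/* leave 2 for link/event TRBs */

/* Would any run of CACHE_WINDOW consecutive SG entries load a full TRB
 * cache with less than one max packet of data? If so, the list must be
 * consolidated into a single contiguous bounce buffer before queuing. */
static bool sg_needs_consolidation(struct scatterlist *sgl, int num_sgs,
				   unsigned int maxpacket)
{
	unsigned int win[CACHE_WINDOW] = { 0 };
	unsigned int sum = 0;
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, num_sgs, i) {
		sum += sg->length;
		sum -= win[i % CACHE_WINDOW];	/* entry falling out of the window */
		win[i % CACHE_WINDOW] = sg->length;

		if (i >= CACHE_WINDOW - 1 && sum < maxpacket)
			return true;
	}
	return false;
}
```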
-